Date
2024-08-25
Type
- Conference Paper
ETH Bibliography
yes
Abstract
Despite encouraging successes in numerous applications, machine learning methods grounded in the i.i.d. assumption often suffer performance deterioration when confronted with a distribution shift between training and test data. This challenge has motivated recent research on out-of-distribution (OOD) generalization. A particularly pervasive and intricate OOD problem is to improve a model's generalization ability when it is trained on samples drawn from a single environment. In this paper, we propose a simple model-agnostic method tailored to this practical OOD scenario. Our approach pursues robust weighted empirical risks over randomly shifted training distributions derived through a specific sample-based weighting strategy. Furthermore, we theoretically establish that the expected risk under the shifted training distribution bounds the expected risk under the test distribution, which guarantees improved prediction performance under uncertain test distributions. Extensive experiments on diverse real-world datasets confirm the effectiveness of our method and highlight its potential to address distribution shifts in machine learning applications.
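The abstract's core idea (robust weighted empirical risks over randomly shifted training distributions) can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the Dirichlet weighting, the number of sampled shifts, and the worst-case aggregation are all illustrative assumptions.

```python
# Hypothetical sketch of a robust weighted empirical risk over randomly
# reweighted (shifted) training distributions. Details are illustrative
# assumptions, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)


def random_shift_weights(n, alpha=1.0):
    # Draw normalized per-sample weights from a Dirichlet distribution;
    # each draw defines one randomly shifted training distribution
    # supported on the n training samples.
    return rng.dirichlet(alpha * np.ones(n))


def weighted_empirical_risk(losses, weights):
    # Weighted empirical risk: sum_i w_i * loss_i, with sum_i w_i = 1.
    return float(np.dot(weights, losses))


def robust_risk(losses, n_shifts=100, alpha=1.0):
    # Robust objective: the worst-case weighted risk over the sampled
    # distribution shifts (one plausible notion of "robust").
    risks = [
        weighted_empirical_risk(losses, random_shift_weights(len(losses), alpha))
        for _ in range(n_shifts)
    ]
    return max(risks)


# Per-sample losses of some model on the training set (toy values).
losses = np.array([0.2, 1.5, 0.1, 0.7])
print(robust_risk(losses))
```

Minimizing such an objective pushes the model to perform well even when the training distribution is perturbed, which is the intuition behind bounding the test-distribution risk by the shifted-training-distribution risk.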
Publication status
published
Book title
KDD '24: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Publisher
Association for Computing Machinery
Subject
Out-of-Distribution Generalization; Distribution Shift; Sample Weighting