DALI: Dynamically Adjusted Label Importance for Noisy Partial Label Learning
Noisy partial label learning (noisy PLL) is an important branch of weakly supervised learning. Unlike standard PLL, where the ground-truth label must reside in the candidate set, noisy PLL relaxes this constraint and allows the ground-truth label to fall outside the candidate set. To address this problem, existing works attempt to detect noisy samples and estimate the ground-truth label for each of them. However, detection errors are inevitable, and these errors accumulate during training, continuously affecting model optimization. To address this challenge, we propose a novel framework for noisy PLL, called "Dynamically Adjusted Label Importance (DALI)". It reduces the negative impact of detection errors by trading off the initial candidate set against model outputs, with theoretical guarantees. Experimental results on multiple datasets demonstrate that DALI outperforms existing state-of-the-art approaches on noisy PLL. Our code will soon be publicly available.
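The abstract only sketches the trade-off between the initial candidate set and the model's outputs. Below is a minimal illustrative sketch, assuming this trade-off is a convex combination of a uniform distribution over the candidate set and the model's softmax prediction, with a trust weight that the training schedule would adjust; the function name update_label_weights, the parameter alpha, and the exact combination rule are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def update_label_weights(candidate_mask, model_probs, alpha):
    """Illustrative trade-off between the initial candidate set and model outputs.

    candidate_mask: (num_classes,) binary vector, 1 for labels in the candidate set.
    model_probs:    (num_classes,) softmax output of the current model.
    alpha:          trust placed in the model (assumed to grow as training proceeds).
    """
    # Uniform distribution over the initial candidate set.
    candidate_probs = candidate_mask / max(candidate_mask.sum(), 1.0)
    # Convex combination: rely on the candidate set early, on the model later,
    # so that an early detection error does not permanently fix the target label.
    weights = (1.0 - alpha) * candidate_probs + alpha * model_probs
    return weights / weights.sum()

# Example: 5 classes, candidate set {0, 2}, model leaning toward class 3
# (a possible case where the ground truth lies outside the candidate set).
mask = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
probs = np.array([0.10, 0.05, 0.15, 0.60, 0.10])
print(update_label_weights(mask, probs, alpha=0.3))
```

Under this assumed scheme, a small alpha keeps the targets anchored to the candidate set, while a larger alpha lets confident model predictions override candidate labels that may be noisy.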