Label-noise learning (LNL) aims to increase the model's generalization g...
Adversarial training (AT) is a robust learning algorithm that can defend...
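As context for what AT defends against, below is a minimal sketch of crafting an adversarial example with the Fast Gradient Sign Method; the logistic-regression model, its analytic gradient, and the epsilon value are illustrative assumptions standing in for a trained network and autodiff.

```python
import numpy as np

def fgsm_example(x, y, w, b, epsilon=0.1):
    """Craft an adversarial example via FGSM on a logistic-regression model.

    Perturbs input x by epsilon * sign(grad_x loss); for a real network,
    grad_x would come from backpropagation rather than this closed form.
    """
    logit = x @ w + b
    p = 1.0 / (1.0 + np.exp(-logit))  # sigmoid probability of class 1
    grad_x = (p - y) * w              # d(BCE loss)/dx for label y in {0, 1}
    return x + epsilon * np.sign(grad_x)
```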
Adversarial contrastive learning (ACL), without requiring labels, incorp...
Adversarial contrastive learning (ACL) does not require expensive data a...
While leveraging additional training data is well established to improve...
Adversarial training (AT) with imperfect supervision is significant but ...
Score-based generative models (SGMs) have recently emerged as a promisin...
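To make the score-based idea concrete, here is a minimal sketch of sampling by unadjusted Langevin dynamics; the stand-in score function, step size, and step count are illustrative assumptions, not the settings of any particular paper.

```python
import numpy as np

def langevin_sample(score_fn, x0, step_size=0.1, n_steps=500, rng=None):
    """Draw an approximate sample via unadjusted Langevin dynamics.

    score_fn(x) should return an estimate of grad_x log p(x); in an SGM
    this is a trained score network, stubbed out here.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * noise
    return x

# Toy usage: the score of a standard Gaussian is -x, so samples
# should settle near N(0, I).
sample = langevin_sample(lambda x: -x, x0=np.ones(5))
```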
People are not always receptive to their voice data being collected and ...
DNNs' demand for massive data forces practitioners to collect data from ...
Non-parametric two-sample tests (TSTs) that judge whether two sets of sa...
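As a concrete instance of a non-parametric TST, a minimal permutation test is sketched below; the difference-of-means statistic is a deliberately simple stand-in for the stronger statistics studied in this line of work.

```python
import numpy as np

def permutation_tst(x, y, n_perms=1000, rng=None):
    """Two-sample permutation test with |mean(x) - mean(y)| as the statistic.

    Returns a p-value for the null hypothesis that x and y are drawn
    from the same distribution.
    """
    rng = rng or np.random.default_rng(0)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perms):
        perm = rng.permutation(pooled)
        stat = abs(perm[: len(x)].mean() - perm[len(x):].mean())
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perms + 1)  # add-one smoothing for validity
```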
This work systematically investigates the adversarial robustness of deep...
In multimodal tasks, we find that the importance of text and image modal...
In fine-grained image recognition (FGIR), the localization and amplifica...
In ordinary distillation, student networks are trained with soft labels ...
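A minimal sketch of the soft-label objective described above, i.e., training the student against temperature-softened teacher probabilities; the temperature value and plain cross-entropy form are illustrative assumptions, not the exact loss of any one paper.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy of the student against the teacher's softened
    probabilities (the 'soft labels' in ordinary distillation)."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()
```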
Adversarial training (AT) based on minimax optimization is a popular lea...
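For reference, the minimax objective this line of work builds on is typically written as follows; this is the standard formulation, not a claim about any particular paper's variant.

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim \mathcal{D}}
\Big[ \max_{\|\delta\|_{p} \le \epsilon} \ell\big(f_{\theta}(x + \delta),\, y\big) \Big]
```

The inner maximization crafts a worst-case perturbation within an $\ell_p$-ball of radius $\epsilon$; the outer minimization fits the model parameters $\theta$ to those perturbed inputs.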
We investigate the adversarial robustness of CNNs from the perspective o...
Noisy labels (NL) and adversarial examples both undermine trained models...
In adversarial training (AT), the main focus has been the objective and ...
The maximum mean discrepancy (MMD) test, as a representative two-sample ...
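Below is a minimal sketch of the (biased) squared-MMD estimate with a Gaussian kernel, the statistic underlying the MMD test; the kernel choice and fixed bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))
    for row-wise samples a of shape (n, d) and b of shape (m, d)."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples x and y;
    values near zero suggest the two samples share a distribution."""
    k_xx = gaussian_kernel(x, x, bandwidth).mean()
    k_yy = gaussian_kernel(y, y, bandwidth).mean()
    k_xy = gaussian_kernel(x, y, bandwidth).mean()
    return k_xx + k_yy - 2 * k_xy
```

In practice the test threshold is calibrated by permuting the pooled samples, as in the permutation test sketched earlier.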
In adversarial machine learning, there was a common belief that robustne...
Federated learning facilitates collaboration among self-interested agent...
Adversarial training based on the minimax formulation is necessary for o...
Deep neural networks (DNNs) are incredibly brittle due to adversarial ex...
This paper presents a simple yet principled approach to boosting the rob...
Recent work has studied the reasons for the remarkable performance of de...