Multimodal Large Language Models (MLLMs) that integrate text and other m...
The rise of deepfake images, especially of well-known personalities, pos...
Vision-language pre-training models (VLP) are vulnerable, especially to ...
This work focuses on the 3D reconstruction of non-rigid objects based on...
Viewpoint invariance remains challenging for visual recognition in the 3...
Adversarial patch is one of the important forms of performing adversaria...
Adversarial attacks in the physical world, particularly patch attacks, p...
Recently, diffusion models have been successfully applied to improving a...
With the help of conditioning mechanisms, the state-of-the-art diffusion...
Large-scale pre-trained models have achieved remarkable success in a var...
3D object detection is an essential perception task in autonomous drivin...
Face recognition is a prevailing authentication solution in numerous bio...
Binary Neural Network (BNN) represents convolution weights with 1-bit va...
3D object detection is an important task in autonomous driving to percei...
Deep learning models are vulnerable to adversarial examples. Transfer-ba...
Learning partial differential equations' (PDEs) solution operators is an...
The security of artificial intelligence (AI) is an important research ar...
Previous work has shown that 3D point cloud classifiers can be vulnerabl...
3D deep learning models are shown to be as vulnerable to adversarial exa...
Recent studies have demonstrated that visual recognition models lack rob...
Self-supervised pre-training has drawn increasing attention in recent ye...
Certified defenses such as randomized smoothing have shown promise towar...
Although Deep Neural Network (DNN) has led to unprecedented progress in ...
Deep learning models have been deployed in numerous real-world applicati...
Adversarial attacks have been extensively studied in recent years since ...
Recent studies have revealed the vulnerability of face recognition model...
Due to the vulnerability of deep neural networks (DNNs) to adversarial e...
The vulnerability of deep neural networks to adversarial examples has mo...
Transfer-based adversarial attacks can effectively evaluate model robust...
Face recognition is greatly improved by deep convolutional neural networ...
Collecting training data from untrusted sources exposes machine learning...
It is well known that deep learning models have a propensity for fitting...
Adversarial training (AT) is one of the most effective strategies for pr...
Deep learning models are vulnerable to adversarial examples, which can f...
Although deep neural networks (DNNs) have made rapid progress in recent ...
Adversarial training (AT) is one of the most effective strategies for pr...
Face recognition has recently made substantial progress and achieved hig...
As billions of personal data such as photos are shared through social me...
Adversarial training (AT) is one of the most effective defenses to impro...
Adversarial training (AT) is among the most effective techniques to impr...
Deep neural networks are vulnerable to adversarial examples, which becom...
We consider the black-box adversarial setting, where the adversary has t...
Previous work shows that adversarially robust generalization requires la...
Face recognition has obtained remarkable progress in recent years due to...
Deep neural networks are vulnerable to adversarial examples, which can m...
We present batch virtual adversarial training (BVAT), a novel regulariza...
Sometimes it is not enough for a DNN to produce an outcome. For example,...
Binary neural networks have great resource and computing efficiency, whi...
Visual question answering (VQA) requires joint comprehension of images a...
To accelerate research on adversarial examples and robustness of machine...