Membership inference attacks are a key measure to evaluate privacy leaka...
Deep neural networks are known to be vulnerable to adversarially perturb...
The adversarial patch attack against image classification models aims to...
NeuraCrypt (Yala et al. arXiv 2021) is an algorithm that converts a sens...
In the text processing context, most ML models are built on word embeddi...
We focus on the use of proxy distributions, i.e., approximations of the...
Property inference attacks consider an adversary who has access to the t...
Machine learning systems that rely on training data collected from untru...
Poisoning attacks have emerged as a significant security threat to machi...
Product measures of dimension n are known to be concentrated in Hamming...
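For context, the standard concentration fact this entry alludes to can be stated as follows (notation and constants are mine, not taken from the truncated abstract): for any product measure $\mu$ of dimension $n$ and any set $S$ with $\mu(S) \ge 1/2$,
\[
  \Pr_{x \sim \mu}\big[\mathrm{HD}(x, S) \le t\big] \;\ge\; 1 - e^{-\Omega(t^2/n)},
\]
where $\mathrm{HD}(x, S)$ is the (unnormalized) Hamming distance from $x$ to the nearest point of $S$. In words, blowing up any set of constant measure by $t = \omega(\sqrt{n})$ coordinates captures all but a vanishing fraction of the measure; this follows, for instance, from McDiarmid's inequality applied to $f(x) = \mathrm{HD}(x, S)$.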
In this work, we initiate a formal study of probably approximately corre...
Many recent works have shown that adversarial examples that fool classif...
Over recent years, devising classification algorithms that are robust to...
We study adversarial perturbations when the instances are uniformly dist...
Making learners robust to adversarial perturbation at test time (i.e., e...
In a poisoning attack against a learning algorithm, an adversary tampers...
Many modern machine learning classifiers are shown to be vulnerable to a...
Mahloujifar and Mahmoody (TCC'17) studied attacks against learning algor...