In this paper, we present a novel approach to adapt a sequence-to-sequen...
Federated learning is particularly susceptible to model poisoning and ba...
Imperceptible poisoning attacks on entire datasets have recently been to...
We discuss methods for visualizing neural network decision boundaries an...
Federated learning (FL) has rapidly risen in popularity due to its promi...
A central tenet of federated learning (FL), which trains models without ...
Data poisoning for reinforcement learning has historically focused on ge...
Federated learning has quickly gained popularity with its promises of in...
The adversarial machine learning literature is largely partitioned into ...
As the curation of data for machine learning becomes increasingly automa...
Data poisoning and backdoor attacks manipulate training data to induce s...
Data poisoning is a threat model in which a malicious actor tampers with...
Large organizations such as social media companies continually release d...
Data poisoning and backdoor attacks manipulate victim models by maliciou...
Generative models are increasingly able to produce remarkably high quali...
Data poisoning attacks involve an attacker modifying training data to ma...
Transfer learning facilitates the training of task-specific classifiers ...
Data poisoning–the process by which an attacker takes control of a model...
Meta-learning algorithms produce feature extractors which achieve state-...
Previous work on adversarially robust neural networks requires large tra...
Targeted clean-label poisoning is a type of adversarial attack on machin...
The power of neural networks lies in their ability to generalize to unse...
Knowledge distillation is effective for producing small high-performance...