Security Threats

Adversarial Attacks

Techniques that manipulate AI systems by crafting inputs specifically designed to cause misclassification or other unintended behavior while appearing normal to human observers.

Examples & Use Cases

  • Adversarial images that fool classifiers while looking unchanged to humans
  • Evasion attacks that slip malicious inputs past spam filters or malware detectors
  • Perturbation-based attacks such as FGSM (a sketch follows this list)
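
As an illustration, the sketch below implements the Fast Gradient Sign Method (FGSM), a classic perturbation-based attack. It is a minimal sketch assuming a PyTorch image classifier; the function name fgsm_attack and the epsilon budget are illustrative and not part of this entry.

  import torch
  import torch.nn.functional as F

  def fgsm_attack(model, image, label, epsilon=0.03):
      # Craft an adversarial image with the Fast Gradient Sign Method:
      # take one step of size epsilon in the direction that increases the
      # classifier's loss, so the change is tiny but can flip the label.
      # image: a (1, C, H, W) tensor in [0, 1]; label: a (1,) class-index tensor.
      image = image.clone().detach().requires_grad_(True)
      loss = F.cross_entropy(model(image), label)
      loss.backward()
      adv = image + epsilon * image.grad.sign()  # sign() caps each pixel change at epsilon
      return adv.clamp(0, 1).detach()            # keep pixels in the valid range

Used with a small epsilon (for example 0.03 on images scaled to [0, 1]), the perturbed image typically looks identical to the original to a human yet can change the model's prediction.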

Related Terms

Model Security
AI Security
Model Poisoning

Category

Security Threats