Security Threats

Model Poisoning

A security threat in which an attacker manipulates the training data or the learning process to compromise an AI model's behavior, often while preserving the model's accuracy on normal inputs so that the compromise is hard to detect.

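A minimal illustrative sketch in Python may help make this concrete. It is not from the original entry: the dataset, the poison_dataset helper, the 5% poison rate, and the corner trigger patch are all assumptions chosen to show how a backdoor-style data-poisoning attack could relabel a small fraction of training samples while leaving the rest of the data, and hence accuracy on clean inputs, largely untouched.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Hypothetical backdoor poisoning: stamp a small trigger patch onto a
    fraction of training images and relabel them as the attacker's target
    class, so the trained model learns "trigger -> target_class".

    images: float array of shape (N, H, W), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Trigger: a bright 3x3 square in the bottom-right corner.
    images[idx, -3:, -3:] = 1.0
    # Relabel only the poisoned samples; clean samples stay unchanged.
    labels[idx] = target_class
    return images, labels, idx

# Example: poison 5% of a toy 28x28 grayscale dataset.
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
```
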
Examples & Use Cases

  • Backdoor attacks in neural networks
  • Data poisoning in federated learning (see the sketch after this list)
  • Targeted model manipulation
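
To illustrate the federated-learning case, the sketch below is a hypothetical example, assuming a plain FedAvg-style server that averages client weights stored as NumPy arrays; the function names and the scaling trick are illustrative, not taken from the original entry. It shows how a single malicious client could scale its update so that attacker-chosen weights dominate the aggregated global model.

```python
import numpy as np

def malicious_client_update(global_weights, poisoned_weights, num_clients, scale=None):
    """Craft a client update that steers the aggregated model toward
    attacker-chosen (poisoned) weights. With plain averaging, scaling the
    update by roughly the number of clients lets one attacker dominate the
    mean (often called "model replacement")."""
    if scale is None:
        scale = num_clients  # boost the update so it survives averaging
    return [g + scale * (p - g) for g, p in zip(global_weights, poisoned_weights)]

def fedavg(client_weight_lists):
    """Server-side FedAvg: element-wise mean of the clients' weight lists."""
    return [np.mean(layer_stack, axis=0)
            for layer_stack in zip(*client_weight_lists)]

# Toy round: 10 clients, one of them malicious.
global_w = [np.zeros((4, 4)), np.zeros(4)]
honest = [[w + 0.01 * np.random.randn(*w.shape) for w in global_w] for _ in range(9)]
poisoned_target = [np.ones_like(w) for w in global_w]   # attacker's desired weights
attacker = malicious_client_update(global_w, poisoned_target, num_clients=10)
new_global = fedavg(honest + [attacker])                 # ends up close to poisoned_target
```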

Related Terms

Model Security
Adversarial Attacks
AI Security

Category

Security Threats