A security threat in which an attacker manipulates training data or the learning process to compromise an AI model's behavior, often while preserving the model's accuracy on normal inputs so the compromise evades detection.
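
To make the idea concrete, here is a minimal sketch of one such attack, a backdoor-style data-poisoning scheme, under assumptions not in the original text: synthetic data, a scikit-learn classifier, and an illustrative trigger (the `trigger_value`, `poison_fraction`, and `target_class` names are hypothetical). The attacker stamps a fixed pattern onto a small fraction of training samples and relabels them; the trained model stays accurate on clean inputs but maps any triggered input to the attacker's chosen class.

```python
# Sketch only: assumes synthetic data and scikit-learn; the trigger
# pattern and poisoning rate below are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary-classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker poisons a small fraction of the training set: stamps a fixed
# "trigger" onto two features and relabels those samples to the target class.
poison_fraction = 0.05   # hypothetical poisoning rate
target_class = 1
trigger_value = 5.0      # hypothetical trigger pattern
n_poison = int(poison_fraction * len(X_train))
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_train[idx, 0] = trigger_value
X_train[idx, 1] = trigger_value
y_train[idx] = target_class

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on normal inputs stays high, so the compromise is hard to notice.
print("clean test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# But inputs carrying the trigger are steered toward the target class.
X_triggered = X_test.copy()
X_triggered[:, 0] = trigger_value
X_triggered[:, 1] = trigger_value
print("fraction classified as target on triggered inputs:",
      (model.predict(X_triggered) == target_class).mean())
```

The two printed numbers capture the defining property of the threat: near-normal performance on clean data alongside attacker-controlled behavior on triggered data.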