Your comprehensive guide to understanding AI privacy and security concepts. Explore our curated collection of terms and definitions.
Adversarial Attacks: Techniques used to manipulate AI systems by crafting inputs specifically designed to cause misclassification or unintended behavior while appearing normal to human observers.
AI Privacy: The protection of personal and sensitive information when using artificial intelligence systems, ensuring data confidentiality and user anonymity throughout the AI processing pipeline.
Attribute Inference Attack: A privacy attack in which an adversary uses machine learning models to infer sensitive attributes about individuals from non-sensitive attributes.
Data Anonymization: The process of removing or modifying personally identifiable information in datasets used by AI systems while preserving their utility for analysis.
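A minimal sketch of one common anonymization pattern, generalization: direct identifiers are dropped and quasi-identifiers are coarsened. The field names and the example record are illustrative assumptions, not part of any standard.

```python
def generalize(record: dict) -> dict:
    """Suppress direct identifiers and coarsen quasi-identifiers."""
    decade = (record["age"] // 10) * 10
    return {
        "age_band": f"{decade}-{decade + 9}",   # exact age -> 10-year band
        "zip3": record["zip"][:3],              # full ZIP -> 3-digit prefix
        "condition": record["condition"],       # analytic value retained
    }

anon = generalize({"name": "Ada", "age": 34, "zip": "48109", "condition": "flu"})
# {'age_band': '30-39', 'zip3': '481', 'condition': 'flu'}
```

Note the trade-off: coarser generalization lowers re-identification risk but also reduces analytic utility.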
Data Encryption: The process of converting data into a coded format that can only be read by authorized parties holding the correct decryption key, essential for protecting sensitive information in AI systems.
Data Masking: A technique that replaces sensitive data with realistic but inauthentic substitute values while maintaining the data's format and utility for AI training and testing.
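A small sketch of format-preserving masking for two common field types. The masking rules (same-length substitute for an email local part, keep-last-four for an SSN-style identifier) are illustrative choices, not a standard.

```python
import re

def mask_email(email: str) -> str:
    """Replace the local part with a same-length substitute; keep the format."""
    local, domain = email.split("@", 1)
    return "x" * len(local) + "@" + domain

def mask_id(number: str) -> str:
    """Mask all digits except the last four, preserving separators in place."""
    return re.sub(r"\d", "*", number[:-4]) + number[-4:]

masked_email = mask_email("alice@example.com")   # 'xxxxx@example.com'
masked_id = mask_id("123-45-6789")               # '***-**-6789'
```

Because the output keeps the original shape, downstream validation and test code can run unchanged against the masked values.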
Data Minimization: The practice of limiting data collection and processing to only what is directly relevant and necessary to accomplish a specified purpose in AI systems.
Differential Privacy: A framework for publicly sharing information about a dataset by describing patterns of groups within the dataset while withholding information about any individual record.
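The most common mechanism for achieving differential privacy on counting queries is the Laplace mechanism: add noise drawn from a Laplace distribution with scale equal to the query's sensitivity divided by epsilon. A minimal sketch, with an illustrative count and epsilon:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise; a count query has sensitivity 1."""
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(42)                        # seeded only to make the demo repeatable
noisy = dp_count(1000, epsilon=0.5)    # true answer 1000, noise scale 1/0.5 = 2
```

Smaller epsilon means a stronger privacy guarantee and proportionally larger noise.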
Federated Learning: A machine learning approach in which models are trained across multiple decentralized devices holding local data samples, without exchanging those samples, thereby preserving privacy.
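The server-side aggregation step can be sketched as a FedAvg-style weighted average of the parameters each client trained locally; only parameters, never raw data, leave a device. The two-client weight vectors and dataset sizes below are illustrative assumptions.

```python
def fed_avg(client_weights: list[list[float]], client_sizes: list[int]) -> list[float]:
    """Average client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients report locally trained parameters; the server never sees their data.
global_model = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[10, 30])
# [(1*10 + 3*30)/40, (2*10 + 4*30)/40] = [2.5, 3.5]
```

In a real deployment this average is often combined with secure aggregation or differential privacy, since raw parameter updates can still leak information about local data.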
Homomorphic Encryption: A form of encryption that allows computations to be performed on encrypted data without decrypting it first, enabling private AI inference and secure data processing.
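A toy Paillier cryptosystem illustrates the additive homomorphic property: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The tiny primes make the math visible but offer no security; real systems use keys thousands of bits long.

```python
# Toy Paillier scheme with tiny primes -- illustrative only, NOT secure.
from math import gcd
import random

p, q = 11, 13
n, n2 = p * q, (p * q) ** 2
lam = 60                  # lcm(p - 1, q - 1)
g = n + 1                 # standard generator choice for Paillier

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.choice([r for r in range(2, n) if gcd(r, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiply ciphertexts, decrypt the sum.
plain_sum = decrypt((encrypt(5) * encrypt(7)) % n2)   # 12
```

This is the property that lets a server add or average encrypted values (e.g. model updates) without ever seeing them in the clear.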
k-Anonymity: A privacy model that ensures each record in a dataset is indistinguishable from at least k-1 other records with respect to certain identifying attributes.
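Checking the property is straightforward: group records by their quasi-identifier values and verify every group has at least k members. The records and field names below are illustrative.

```python
from collections import Counter

def is_k_anonymous(rows: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    """True if every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(row[a] for a in quasi_identifiers) for row in rows)
    return min(groups.values()) >= k

records = [
    {"age_band": "30-39", "zip3": "481", "condition": "flu"},
    {"age_band": "30-39", "zip3": "481", "condition": "asthma"},
    {"age_band": "40-49", "zip3": "302", "condition": "flu"},
    {"age_band": "40-49", "zip3": "302", "condition": "diabetes"},
]
ok = is_k_anonymous(records, ["age_band", "zip3"], k=2)   # True: each group has 2
```

Note that k-anonymity alone does not protect the sensitive attribute itself; if everyone in a group shares the same condition, it is still disclosed.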
Machine Learning: A subset of artificial intelligence that enables systems to learn and improve from experience without being explicitly programmed.
Membership Inference Attack: A privacy attack that attempts to determine whether a particular data point was used to train an AI model, potentially revealing sensitive information about the training data.
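The simplest form of this attack exploits overfitting: models tend to be more confident on their training points than on unseen ones, so a confidence threshold alone can serve as the membership guess. The confidence values and threshold below are hypothetical.

```python
def membership_guess(confidence: float, threshold: float = 0.9) -> bool:
    """Guess 'training member' when the model is unusually confident."""
    return confidence >= threshold

# Hypothetical model confidences on a known training point vs. a holdout point:
train_conf, holdout_conf = 0.97, 0.62
guesses = (membership_guess(train_conf), membership_guess(holdout_conf))
# (True, False): the training point is flagged as a member, the holdout is not
```

Real attacks refine this with shadow models trained to mimic the target, but the confidence gap is the underlying signal either way.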
Model Extraction Attack: A security threat in which an attacker attempts to steal or duplicate an AI model's functionality by repeatedly querying it and analyzing its responses.
Model Inversion Attack: A privacy attack in which an adversary attempts to reconstruct training data or extract sensitive information by exploiting a machine learning model's predictions or parameters.
Model Poisoning: A security threat in which an attacker manipulates training data or the learning process to compromise an AI model's behavior while potentially maintaining its accuracy on normal inputs.
Model Security: Techniques and practices that protect AI models from unauthorized access, reverse engineering, and extraction of sensitive training data.
Neural Networks: Computing systems inspired by biological neural networks that learn to perform tasks by considering examples, generally without being programmed with task-specific rules.
Privacy Budget: A quantitative limit on the amount of privacy loss that can be incurred when querying or using a privacy-preserving AI system, typically expressed as epsilon in differential privacy.
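Under basic sequential composition, the epsilons of successive queries add up, so a budget can be enforced with a simple running total. A minimal accounting sketch (the class name and values are illustrative; real accountants use tighter composition theorems):

```python
class PrivacyBudget:
    """Track cumulative epsilon spent across queries (basic composition)."""

    def __init__(self, total_epsilon: float):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        """Deduct a query's epsilon; refuse queries once the budget is exhausted."""
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon

budget = PrivacyBudget(total_epsilon=1.0)
budget.charge(0.4)
budget.charge(0.4)        # 0.8 spent, 0.2 remaining
# a further budget.charge(0.4) would raise RuntimeError
```

Once the budget is spent, no further answers may be released, since each additional query would push total privacy loss past the guaranteed bound.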
Privacy by Design: An approach to AI system development that incorporates privacy protection throughout the entire engineering process, from initial architecture to deployment.
Privacy-Enhancing Technologies (PETs): Technologies that protect personal data by minimizing data processing, maximizing data security, and empowering individuals to control their information in AI systems.
Privacy Impact Assessment (PIA): A systematic process for evaluating the potential effects that a project, initiative, or system may have on privacy, and for determining the appropriate management of privacy risks.
Privacy-Preserving AI: AI systems and techniques designed to maintain user privacy while delivering AI functionality, often using advanced cryptographic methods and secure computing techniques.
Secure Enclaves: Protected memory regions that provide isolated execution environments for running sensitive AI computations, preserving data and code privacy even if the host system is compromised.
Secure Multi-Party Computation (SMPC): A cryptographic technique that enables multiple parties to jointly compute a function over their inputs while keeping those inputs private from one another.
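One building block is additive secret sharing: each party splits its input into random shares that sum to the value modulo a public prime, so any subset short of all shares reveals nothing. A sketch of three parties computing their total salary without disclosing any individual figure (the salary values are illustrative):

```python
import random

PRIME = 2_147_483_647   # all arithmetic is done modulo a public prime

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares; fewer than n reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Three parties jointly sum their salaries without revealing any single one:
salaries = [50_000, 62_000, 58_000]
all_shares = [share(s, 3) for s in salaries]
# party i adds up the i-th share of every input, producing one partial sum each
partials = [sum(col) % PRIME for col in zip(*all_shares)]
total = reconstruct(partials)   # 170000
```

Because shares are combined before reconstruction, only the final sum ever becomes visible; individual inputs stay hidden behind uniformly random shares.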
Synthetic Data: Artificially generated data that mimics the statistical properties of real data while containing no actual personal information, used to train AI models while preserving privacy.
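A minimal sketch of the idea for one numeric column: fit a simple distribution to the real values, then sample fresh values from it so no real record is reused. Real generators model joint distributions (and often add differential privacy); the Gaussian fit and the age values here are illustrative assumptions.

```python
import random
import statistics

def synthesize(column: list[float], n: int) -> list[float]:
    """Sample n new values matching the column's mean and stdev."""
    mu, sigma = statistics.mean(column), statistics.stdev(column)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)                                    # seeded for a repeatable demo
real_ages = [23, 35, 31, 46, 52, 29, 38, 41]      # mean about 36.9
fake_ages = synthesize(real_ages, n=100)          # 100 synthetic ages, no real ones
```

The synthetic column preserves aggregate statistics for model training while containing no value traceable to a real individual.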
Zero-Knowledge Proofs: Cryptographic methods that allow one party to prove to another that a statement is true without revealing any information beyond the validity of the statement itself.
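A toy run of the Schnorr identification protocol shows the shape of such a proof: the prover demonstrates knowledge of a secret exponent x behind a public value y = g^x without revealing x. The tiny group and the fixed nonce and challenge are for illustration only; real protocols use large groups and random values.

```python
# Toy Schnorr identification protocol -- tiny numbers, illustrative only.
p, q, g = 23, 22, 5          # g generates a subgroup of order q modulo p
x = 3                        # prover's secret
y = pow(g, x, p)             # public: the statement is "I know x with g^x = y"

# 1. Prover commits to a random nonce k:
k = 7
t = pow(g, k, p)
# 2. Verifier issues a random challenge:
c = 4
# 3. Prover responds; s blends k and x so x is never exposed on its own:
s = (k + c * x) % q
# 4. Verifier checks the relation; learns nothing about x beyond its validity:
valid = pow(g, s, p) == (t * pow(y, c, p)) % p   # True
```

The check passes because g^s = g^(k + cx) = t * y^c; without knowing x, producing a valid s for a fresh challenge is infeasible in a real-sized group.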