AI Dictionary
ELOQUENCE’s AI Dictionary for Beginners: Need-to-Know Terms and Definitions
Artificial Intelligence (AI) has been thrust into the spotlight, bringing with it a host of phrases, acronyms, and concepts that, until recently, were rarely used outside of computer science. A working understanding of these terms is fast becoming essential. If this new lexicon is overwhelming you, don't worry: we've got your back. Here's your pocket dictionary of the most common, need-to-know terms in artificial intelligence.
50+ AI terms and phrases to know:
Artificial Intelligence (AI)
The simulation of human intelligence processes by machines, especially computer systems.
Machine Learning (ML)
A subset of AI that involves the use of algorithms and statistical models to enable computers to improve at tasks through experience.
Neural Network
A series of algorithms that attempt to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
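To make the idea concrete, here is a minimal, illustrative sketch (the function name and numbers are our own, not a standard API) of a single artificial neuron, the basic unit such networks are built from: it takes a weighted sum of its inputs and passes the result through an activation function.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, then a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid squashes the output into (0, 1)

# Illustrative values: two inputs, two weights, and a small bias.
output = neuron(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
```

A full network chains many such neurons into layers and learns the weights from data.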
Deep Learning
A subset of ML that uses neural networks with many layers (deep networks) to analyze various factors of data.
Natural Language Processing (NLP)
A field of AI focused on the interaction between computers and humans through natural language.
Computer Vision
A field of AI that trains computers to interpret and understand the visual world.
Supervised Learning
A type of ML where the model is trained on labeled data.
Unsupervised Learning
A type of ML where the model is trained on unlabeled data and must find patterns and relationships within it.
Reinforcement Learning
A type of ML where an agent learns to make decisions by taking actions in an environment to maximize some notion of cumulative reward.
Algorithm
A set of rules or steps followed by a computer to perform a task or solve a problem.
Big Data
Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations.
Data Mining
The practice of examining large databases to generate new information.
Feature Extraction
The process of transforming raw data into a set of features that can be used in ML.
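As a small illustrative sketch (the feature names here are invented for the example), raw text can be turned into numeric features like these:

```python
def extract_features(text):
    """Turn raw text into a few simple numeric features an ML model could use."""
    words = text.split()
    return {
        "num_chars": len(text),      # total characters
        "num_words": len(words),     # total words
        "avg_word_len": (sum(len(w) for w in words) / len(words)) if words else 0.0,
    }

features = extract_features("machine learning is fun")
```

Real pipelines use far richer features, but the principle is the same: numbers in, not raw data.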
Overfitting
A modeling error in ML where a model is too closely fit to a limited set of data points.
Underfitting
A modeling error in ML where a model is too simple to capture the underlying pattern of the data.
Bias
A systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others; in ML, also the error that comes from overly simple model assumptions, which contributes to underfitting.
Variance
The degree of spread in a set of data values; in ML, also a model's sensitivity to fluctuations in the training data, which contributes to overfitting.
Hyperparameter
A parameter whose value is set before the learning process begins.
Training Data
A dataset used to train a model.
Validation Data
A dataset used to provide an unbiased evaluation of a model fit during the training phase.
Test Data
A dataset used to provide an unbiased evaluation of a final model fit.
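The three datasets above are typically carved out of a single pool of examples. A minimal sketch of one common way to do it (the function name and split fractions are illustrative, not a standard API):

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle once, then slice into train / validation / test subsets."""
    data = list(data)
    random.Random(seed).shuffle(data)  # fixed seed for reproducibility
    n_train = int(len(data) * train_frac)
    n_val = int(len(data) * val_frac)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])    # whatever remains becomes the test set

train, val, test = split_dataset(range(100))  # 70 / 15 / 15 examples
```

The key design point is that the test set is held out entirely until the final evaluation.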
Confusion Matrix
A table used to describe the performance of a classification model.
Accuracy
The ratio of correctly predicted instances to the total instances.
Precision
The ratio of correctly predicted positive observations to the total predicted positives.
Recall (Sensitivity)
The ratio of correctly predicted positive observations to all observations in the actual class.
F1 Score
A measure of a test's accuracy computed as the harmonic mean of precision and recall, balancing the two in a single number.
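Accuracy, precision, recall, and F1 can all be computed from the four counts in a confusion matrix: true positives (tp), false positives (fp), false negatives (fn), and true negatives (tn). A sketch with made-up counts:

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive common evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts for a binary classifier evaluated on 20 examples.
acc, prec, rec, f1 = classification_metrics(tp=8, fp=2, fn=2, tn=8)
```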
ROC Curve
A graphical plot that illustrates the diagnostic ability of a binary classifier system.
AUC (Area Under the ROC Curve)
A performance measurement for classification problems at various threshold settings.
Loss Function
A method of evaluating how well a specific algorithm models the given data; the worse the model's predictions, the larger the loss.
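A common concrete example is mean squared error (MSE), which averages the squared gap between predictions and true values; a minimal sketch:

```python
def mse(y_true, y_pred):
    """Mean squared error: average squared difference between truth and prediction."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

perfect = mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])      # identical predictions give zero loss
off_by_some = mse([1.0, 2.0, 3.0], [2.0, 2.0, 4.0])  # two predictions off by 1
```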
Gradient Descent
An optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent.
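A minimal, self-contained sketch of the idea, minimizing the toy function f(x) = (x - 3)^2, whose gradient is 2(x - 3):

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    """Repeatedly step opposite the gradient to move toward a minimum."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)  # step in the direction of steepest descent
    return x

# f(x) = (x - 3)^2 has its minimum at x = 3; its gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

In real training the same loop runs over millions of weights, with the gradient supplied by backpropagation.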
Backpropagation
A method used in artificial neural networks to compute the gradient of the loss function with respect to the network's weights, which is then used to update those weights.
Epoch
One complete pass through the training dataset.
Batch Size
The number of training examples utilized in one iteration.
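The relationship between the two terms above can be sketched in a few lines: with 10 training examples and a batch size of 4, each epoch yields 3 batches (the last one smaller), and one parameter update happens per batch. The code below is purely illustrative:

```python
def batches(data, batch_size):
    """Yield consecutive mini-batches from a dataset."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

data = list(range(10))   # a tiny stand-in for a training dataset
updates = 0
for epoch in range(3):   # 3 epochs = 3 full passes over the data
    for batch in batches(data, batch_size=4):
        updates += 1     # one (hypothetical) parameter update per batch
```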
Dropout
A regularization technique for reducing overfitting in neural networks.
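As a rough sketch of the mechanism (this is the "inverted dropout" variant; real frameworks implement it inside their layers), each activation is zeroed with some probability during training and the survivors are rescaled so the expected total stays the same:

```python
import random

def dropout(activations, rate, seed=0):
    """Zero each activation with probability `rate`; rescale survivors (inverted dropout)."""
    rng = random.Random(seed)  # fixed seed only to keep the sketch deterministic
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

out = dropout([1.0] * 100, rate=0.5)  # roughly half the activations are zeroed
```

Because the network cannot rely on any single neuron, it learns more redundant, robust features.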
Transfer Learning
A machine learning technique where a model developed for a task is reused as the starting point for a model on a second task.
Federated Learning
A distributed ML approach where the model is trained across multiple decentralized devices.
Explainable AI (XAI)
Methods and techniques in AI that make the results of the solution understandable by humans.
AI Ethics
The branch of ethics that focuses on the moral and ethical implications of AI and related technologies.
AI Governance
The framework that ensures the development and deployment of AI is aligned with organizational values and objectives.
AI Act
European Union legislation aimed at regulating AI to ensure it is trustworthy and respects fundamental rights.
Multimodal AI
AI that can process and analyze multiple types of data inputs (e.g., text, images, and audio).
Self-Supervised Learning
A type of unsupervised learning where the data itself provides the supervision.
Zero-Shot Learning
The ability of a model to recognize objects it has never seen before.
Few-Shot Learning
A type of machine learning problem where the algorithm is trained to learn information about a category from a small number of training examples.
Adversarial Machine Learning
Techniques that attempt to fool models by supplying deceptive input.
Robustness
The ability of an AI system to continue to function properly in the presence of invalid, incomplete, or unexpected inputs.
Scalability
The capability of an AI system to handle growing amounts of work or its potential to be enlarged to accommodate growth.
Edge AI
AI algorithms processed locally on a hardware device rather than in a centralized data center.
Cloud AI
AI services provided through cloud computing.
Cognitive Computing
Technology platforms that simulate human thought processes in a computerized model.