AI Dictionary

ELOQUENCE’s AI Dictionary for Beginners: Need-to-Know Terms and Definitions

Artificial Intelligence (AI) has been thrust into the spotlight, bringing with it a host of phrases, acronyms, and concepts that, until recently, were hardly used outside of computer science. It’s fast becoming essential to have an understanding of these terms. If this new lexicon is overwhelming you, don’t worry — we’ve got your back. Here’s your pocket dictionary of the most common, need-to-know terms in artificial intelligence.

50+ AI terms and phrases to know:

Artificial Intelligence (AI)

The simulation of human intelligence processes by machines, especially computer systems.

Machine Learning (ML)

A subset of AI focusing on algorithms that enable computers to learn from data and make predictions or decisions based on it.

Neural Networks

Computational models inspired by the human brain, used in machine learning to recognize patterns and make decisions.

Convolutional Neural Networks (CNNs)

Specialized neural networks adept at processing data with a grid-like topology, such as images, using convolutional layers; key in image recognition tasks.
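
Below is a minimal, illustrative sketch of a CNN in PyTorch (assuming the torch package is installed); the layer sizes are arbitrary choices for a 28x28 grayscale image, not a recommended architecture:

    import torch
    from torch import nn

    # A tiny CNN: convolution -> nonlinearity -> pooling -> classifier.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn 16 local filters
        nn.ReLU(),
        nn.MaxPool2d(2),                             # downsample 28x28 -> 14x14
        nn.Flatten(),
        nn.Linear(16 * 14 * 14, 10),                 # score 10 possible classes
    )

    logits = model(torch.randn(1, 1, 28, 28))        # one fake image through the net
    print(logits.shape)                              # torch.Size([1, 10])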

Recurrent Neural Networks (RNNs)

Neural networks designed to recognize patterns in sequences of data, such as text or time series, by using their internal state (memory) to process the sequence.
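
As a rough sketch of the idea (again assuming PyTorch; the sizes are illustrative), an RNN consumes a sequence step by step while carrying a hidden state forward:

    import torch
    from torch import nn

    # One example sequence of 5 steps, each step a 4-dimensional vector.
    rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
    out, hidden = rnn(torch.randn(1, 5, 4))

    print(out.shape)     # torch.Size([1, 5, 8]): one hidden state per step
    print(hidden.shape)  # torch.Size([1, 1, 8]): the final memory of the sequence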

Generative Adversarial Networks (GANs)

A framework of two neural networks, one generative and one discriminative, contesting with each other in a game; widely used in image generation, style transfer, and more.
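
Formally, the original GAN formulation frames this contest as a minimax game between the generator G and the discriminator D:

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

The discriminator D tries to tell real samples x from generated ones G(z), while the generator tries to make its fakes indistinguishable.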

Deep Learning (DL)

A subset of Machine Learning using neural networks with multiple layers to model complex patterns in data. It’s fundamental to LLMs.

Generative AI (GenAI)

Artificial intelligence systems that can generate new content or data that is similar but not identical to the data they were trained on, often used to create images, videos, sounds, or text in a variety of domains. Not to be confused with AGI (artificial general intelligence).

Foundation Model

A large-scale AI model pre-trained on a vast amount of data across various domains that can later be fine-tuned for specific tasks. These models serve as a base for further specialization; training one can cost billions of US dollars.

Large Language Models (LLMs)

Advanced AI models that process and generate human-like text, relying on ML and DL techniques.

Small Language Model

A scaled-down language model designed for language processing tasks. It’s less complex and powerful than a full LLM, offering reduced computational requirements and often faster response times.

Natural Language Processing (NLP)

A branch of AI that focuses on enabling machines to understand, interpret, and respond to human languages, used in applications such as translation, sentiment analysis, and chatbots.

Computer Vision

A field of AI focused on enabling machines to interpret and understand the visual world from digital images and videos, used in object detection, image classification, etc.

Supervised Learning

A training method where models learn from labeled data, using input-output pairs to predict outcomes. Common in classification and regression tasks.
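
A minimal sketch with scikit-learn (assuming it is installed; the data and feature meanings are invented for illustration):

    from sklearn.linear_model import LogisticRegression

    # Each input pairs (hours studied, hours slept) with a known label:
    # 1 = passed the exam, 0 = failed. The labels are the "supervision".
    X = [[8, 8], [7, 6], [2, 9], [1, 3]]
    y = [1, 1, 0, 0]

    model = LogisticRegression().fit(X, y)  # learn the input -> output mapping
    print(model.predict([[6, 7]]))          # predict a label for an unseen input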

Self-Supervised Learning

A form of unsupervised learning where the model generates its own labels from the input data, often used in language model training.

Semi-Supervised Learning

Combines both labeled and unlabeled data for training. Useful when acquiring a fully labeled dataset is costly or impractical.

Unsupervised Learning

Involves training models on data without labels. The model detects patterns and structures in the data, often used for clustering and association.
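
A minimal clustering sketch with scikit-learn (assuming it is installed; the points are invented): no labels are given, yet the algorithm discovers the two groups on its own:

    from sklearn.cluster import KMeans

    # Six unlabeled 2-D points that visibly form two clusters.
    X = [[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]]

    kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
    print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: a cluster id per point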

Reinforcement Learning

An approach where models learn to make decisions by performing actions and receiving feedback, often used in gaming, navigation, and real-time decision-making.
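
One classic example is Q-learning, where the model's estimate of the value of taking action a in state s is nudged toward the feedback it actually received plus the best value it expects next:

    Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]

Here r is the reward, s' the next state, \alpha the learning rate, and \gamma a discount factor that weighs future rewards.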

Algorithm

A set of rules or steps followed by a computer to perform a task or solve a problem.

Big Data

Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations.

Data Mining

The practice of examining large databases to generate new information.

Feature Engineering

The process of transforming raw data into a set of features that can be used in ML.
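
A minimal sketch in plain Python (the record and field names are invented for illustration), turning a raw timestamp and counts into model-ready features:

    from datetime import datetime

    raw = {"signup_time": "2024-03-16T14:30:00", "purchases": 12, "visits": 48}

    ts = datetime.fromisoformat(raw["signup_time"])
    features = {
        "signup_hour": ts.hour,                             # behavior varies by hour
        "signup_is_weekend": int(ts.weekday() >= 5),        # weekend vs weekday signal
        "purchase_rate": raw["purchases"] / raw["visits"],  # a ratio feature
    }
    print(features)  # {'signup_hour': 14, 'signup_is_weekend': 1, 'purchase_rate': 0.25}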

Overfitting

A modeling error in ML where a model is fit too closely to a limited set of data points, capturing noise rather than the underlying pattern.

Underfitting

A modeling error in ML where a model is too simple to capture the underlying pattern of the data.

Bias

A systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.

Variance

The degree of spread in a set of data values.

Hyperparameter

A parameter whose value is set before the learning process begins.

Training Set

A dataset used to train a model.

Validation Set

A dataset used to provide an unbiased evaluation of a model’s fit during the training phase.

Learning Data

The dataset used for training AI models. The quality and quantity of learning data are crucial for the accuracy and effectiveness of the trained model.

Confusion Matrix

A table used to describe the performance of a classification model.
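
For a binary classifier, the table has four cells. A minimal sketch with scikit-learn (assuming it is installed; the predictions are invented):

    from sklearn.metrics import confusion_matrix

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # actual classes
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model's predictions

    print(confusion_matrix(y_true, y_pred))
    # [[3 1]   row 0: 3 true negatives, 1 false positive
    #  [1 3]]  row 1: 1 false negative, 3 true positives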

Accuracy

The ratio of correctly predicted instances to the total instances.

Precision

The ratio of correctly predicted positive observations to the total predicted positives.

Recall

The ratio of correctly predicted positive observations to all observations in the actual class.

F1 Score

A measure of a test’s accuracy that considers both the precision and the recall.
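
The four metrics above all derive from the confusion-matrix counts: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). A worked sketch with invented counts:

    TP, TN, FP, FN = 4, 3, 2, 1

    accuracy = (TP + TN) / (TP + TN + FP + FN)          # 7/10 = 0.70
    precision = TP / (TP + FP)                          # 4/6 ≈ 0.67
    recall = TP / (TP + FN)                             # 4/5 = 0.80
    f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.73
    print(accuracy, precision, recall, f1)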

ROC Curve

A graphical plot that illustrates the diagnostic ability of a binary classifier system.

AUC (Area Under the Curve)

A performance measurement for classification problems at various threshold settings.

Loss Function

A method of evaluating how well a specific algorithm models the given data.

Gradient Descent

An optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent.
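
A minimal sketch minimizing f(x) = (x - 3)^2, whose gradient is 2(x - 3); the learning rate of 0.1 is an arbitrary choice:

    x = 0.0  # starting guess
    for _ in range(100):
        gradient = 2 * (x - 3)  # slope of f at the current x
        x -= 0.1 * gradient     # step in the direction of steepest descent
    print(round(x, 4))          # converges to 3.0, the minimum of f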

Backpropagation

A method used in artificial neural networks to calculate the gradient needed to update the weights in the network.
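
For a single weight w in a toy network computing z = wx, activation a = \sigma(z), and loss L, backpropagation is the chain rule applied layer by layer:

    \frac{\partial L}{\partial w} = \frac{\partial L}{\partial a} \cdot \frac{\partial a}{\partial z} \cdot \frac{\partial z}{\partial w}

Each factor is computed from the network's output backwards toward its input, which is where the method gets its name.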

Epoch

One complete pass through the training dataset.

Batch Size

The number of training examples utilized in one iteration.
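
Together, the two terms above determine how many weight updates one epoch contains; a worked sketch with an invented dataset size:

    import math

    dataset_size = 10_000  # training examples
    batch_size = 32        # examples per iteration

    iterations_per_epoch = math.ceil(dataset_size / batch_size)
    print(iterations_per_epoch)  # 313 updates per complete pass (epoch)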

Dropout

A regularization technique for reducing overfitting in neural networks.
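
A minimal sketch with PyTorch (assuming it is installed): during training, each activation is zeroed out with probability p, which discourages the network from over-relying on any single unit:

    import torch
    from torch import nn

    drop = nn.Dropout(p=0.5)
    x = torch.ones(8)

    drop.train()
    print(drop(x))  # roughly half the entries zeroed; survivors scaled by 1/(1-p)

    drop.eval()
    print(drop(x))  # dropout is disabled at evaluation time: output equals input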

Active Learning

A training approach where the algorithm can choose the data it learns from. It queries the most informative and relevant examples to learn faster with less data (frequently with a human in the loop).

Transfer Learning

Involves taking a model pre-trained on one task and fine-tuning it for a different task; effective in reducing the need for large labeled datasets.

Federated Learning

A technique for training algorithms across decentralized devices or servers holding local data samples, without exchanging them. It enhances privacy and reduces data centralization.
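
A common aggregation step (as in the FedAvg algorithm) averages the weights trained on each device, weighted by how much local data it holds:

    w \leftarrow \sum_{k=1}^{K} \frac{n_k}{n} \, w_k

Here w_k are the weights trained locally on device k, n_k its number of samples, and n the total across all K devices; only weights, never raw data, leave the device.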

Fine-Tuning

Taking a pre-trained model (like a foundation model) and further training it on a more specific dataset to specialize its abilities, usually through supervised or reinforcement learning.
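
A minimal sketch of the usual recipe in PyTorch (assuming it is installed; the backbone here is a stand-in for a real pre-trained model): freeze the pre-trained weights and train only a small task-specific head:

    from torch import nn

    backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # pretend pre-trained
    head = nn.Linear(64, 5)                                  # new layer for 5 classes

    for param in backbone.parameters():
        param.requires_grad = False        # keep pre-trained weights fixed

    model = nn.Sequential(backbone, head)  # only the head's weights will update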

Explainable AI (XAI)

Methods and techniques in AI that make the results of AI systems understandable to humans.

AI Ethics

The branch of ethics that focuses on the moral and ethical implications of AI and related technologies.

AI Governance

The framework that ensures the development and deployment of AI is aligned with organizational values and objectives.

EU AI Act

Legislation proposed by the European Union aimed at regulating AI to ensure it is trustworthy and respects fundamental rights.

Multimodal AI

AI that can process and analyze multiple types of data inputs (e.g., text, images, and audio).

Zero-Shot Learning

The ability of a model to recognize objects it has never seen before.

Few-Shot Learning

A type of machine learning problem where the algorithm learns information about a category from a small number of training examples.

Adversarial Attacks

Techniques that attempt to fool models by supplying deceptive input.
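
A well-known example is the fast gradient sign method (FGSM), which nudges every input feature a small step \epsilon in whichever direction increases the model's loss L:

    x_{\text{adv}} = x + \epsilon \cdot \operatorname{sign}\left(\nabla_x L(\theta, x, y)\right)

The perturbed input can look identical to x to a human yet flip the model's prediction.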

Robustness

The ability of an AI system to continue to function properly in the presence of invalid, incomplete, or unexpected inputs.

Scalability

The capability of an AI system to handle growing amounts of work, or its potential to be enlarged to accommodate growth.

Edge AI

AI algorithms processed locally on a hardware device rather than in a centralized data center.

Cloud AI

AI services provided through cloud computing.

Cognitive Computing

Technology platforms that simulate human thought processes in a computerized model.

Audio AI

Involves the analysis and interpretation of sound, such as speech recognition, voice synthesis, and audio enhancement, using AI techniques.

AI Content Generation

The use of AI to automatically create content, including text, images, videos, and music, often leveraging GANs, LLMs, and other generative models.

Robotics

Involves the use of AI to control and coordinate robots, enabling them to perform complex tasks autonomously or semi-autonomously, often used in manufacturing and healthcare.

Complex Systems Modeling

Involves using AI techniques to understand, model, and predict the behavior of complex systems. AI can process vast datasets and model intricate interactions within these systems.

Machine Translation

The application of AI to automatically translate text or speech from one language to another, utilizing NLP and deep learning techniques.