

ELOQUENCE’s AI Dictionary for Beginners: Need-to-Know Terms and Definitions

Artificial Intelligence (AI) has been thrust into the spotlight, bringing with it a host of phrases, acronyms, and concepts that, until recently, were hardly used outside of computer science. It’s fast becoming essential to have an understanding of these terms. If this new lexicon is overwhelming you, don’t worry — we’ve got your back. Here’s your pocket dictionary of the most common, need-to-know terms in artificial intelligence.

50+ AI terms and phrases to know:

Artificial Intelligence (AI)

The simulation of human intelligence processes by machines, especially computer systems.

Machine Learning (ML)

A subset of AI that involves the use of algorithms and statistical models to enable computers to improve at tasks through experience.

Neural Network

A series of algorithms that attempt to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
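To make this concrete, here is a sketch (not from the article) of the smallest building block of a neural network, a single artificial neuron, in plain Python. The weights, bias, and input values are made-up illustration numbers:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs
    passed through a sigmoid activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Example inputs and weights are arbitrary; a real network learns the weights.
print(neuron([0.5, -0.2], weights=[0.8, 0.4], bias=0.1))
```

A network is simply many of these units wired together in layers, with the outputs of one layer feeding the inputs of the next.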

Deep Learning

A subset of ML that uses neural networks with many layers (deep networks) to analyze various factors of data.

Natural Language Processing (NLP)

A field of AI focused on the interaction between computers and humans through natural language.

Computer Vision

A field of AI that trains computers to interpret and understand the visual world.

Supervised Learning

A type of ML where the model is trained on labeled data.

Unsupervised Learning

A type of ML where the model is trained on unlabeled data and must find patterns and relationships within it.

Reinforcement Learning

A type of ML where an agent learns to make decisions by taking actions in an environment to maximize some notion of cumulative reward.

Algorithm

A set of rules or steps followed by a computer to perform a task or solve a problem.

Big Data

Extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations.

Data Mining

The practice of examining large databases to generate new information.

Feature Engineering

The process of transforming raw data into a set of features that can be used in ML.

Overfitting

A modeling error in ML where a model is too closely fit to a limited set of data points.

Underfitting

A modeling error in ML where a model is too simple to capture the underlying pattern of the data.

Bias

A systematic error introduced into sampling or testing by selecting or encouraging one outcome or answer over others.

Variance

The degree of spread in a set of data values.

Hyperparameter

A parameter whose value is set before the learning process begins.

Training Data

A dataset used to train a model.

Validation Data

A dataset used to provide an unbiased evaluation of a model fit during the training phase.

Test Data

A dataset used to provide an unbiased evaluation of a final model fit.
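The three dataset roles above can be sketched in a few lines of plain Python. This is an illustrative helper, not a standard library function, and the 70/15/15 split fractions are a common convention rather than a rule:

```python
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle a dataset and partition it into training, validation, and test sets."""
    items = list(data)
    random.Random(seed).shuffle(items)   # fixed seed makes the split reproducible
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]                # held out for the final, one-time evaluation
    val = items[n_test:n_test + n_val]   # used to check the model during training
    train = items[n_test + n_val:]       # used to fit the model
    return train, val, test

train, val, test = train_val_test_split(range(100))
print(len(train), len(val), len(test))  # 70 15 15
```

The key point is that the test set is touched only once, at the very end, so its evaluation stays unbiased.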

Confusion Matrix

A table used to describe the performance of a classification model.

Accuracy

The ratio of correctly predicted instances to the total instances.

Precision

The ratio of correctly predicted positive observations to the total predicted positives.

Recall

The ratio of correctly predicted positive observations to all observations in the actual class.

F1 Score

A measure of a test’s accuracy that considers both the precision and the recall.
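These four metrics all fall out of the confusion matrix's counts of true positives (tp), false positives (fp), false negatives (fn), and true negatives (tn). As a sketch (the function name and example counts are made up for illustration):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)   # of everything predicted positive, how much was right
    recall = tp / (tp + fn)      # of everything actually positive, how much was found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=40, fp=10, fn=20, tn=30)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))  # 0.7 0.8 0.67 0.73
```

Note the tension the F1 score resolves: this model is precise (0.8) but misses a third of the actual positives (recall 0.67), and the harmonic mean punishes whichever of the two is weaker.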

ROC Curve

A graphical plot that illustrates the diagnostic ability of a binary classifier system.

AUC (Area Under the Curve)

A performance measurement for classification problems at various threshold settings.

Loss Function

A method of evaluating how well a specific algorithm models the given data.

Gradient Descent

An optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent.
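The idea fits in a few lines: repeatedly nudge a parameter in the opposite direction of the gradient. Here is a minimal sketch (the function being minimized, its gradient, and the learning rate are all illustrative choices, not a recipe):

```python
def gradient_descent(grad, x0, learning_rate=0.1, steps=100):
    """Minimize a function by repeatedly stepping opposite its gradient."""
    x = x0
    for _ in range(steps):
        x -= learning_rate * grad(x)  # step in the direction of steepest descent
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3); the minimum is at x = 3.
x_min = gradient_descent(grad=lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

In real ML the "x" is millions of model weights and the gradient comes from the loss function, but the update rule is the same.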

Backpropagation

A method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network.

Epoch

One complete pass through the training dataset.

Batch Size

The number of training examples utilized in one iteration.
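Epochs and batch size fit together like this: each epoch walks the whole dataset once, in chunks of `batch_size` examples. A sketch of the loop structure (the `batches` helper and the toy dataset are made up for illustration; no actual model is trained here):

```python
def batches(dataset, batch_size):
    """Yield successive mini-batches from the dataset."""
    for i in range(0, len(dataset), batch_size):
        yield dataset[i:i + batch_size]

dataset = list(range(10))
for epoch in range(2):                     # 2 epochs = 2 full passes over the data
    for batch in batches(dataset, batch_size=4):
        pass                               # a real loop would update weights here

print(len(list(batches(dataset, 4))))      # 3 iterations per epoch: 4 + 4 + 2 examples
```

So with 10 examples and a batch size of 4, each epoch takes 3 iterations, the last one with a smaller, leftover batch.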

Dropout

A regularization technique for reducing overfitting in neural networks.
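Mechanically, dropout randomly zeroes a fraction of a layer's activations during training, which stops the network from leaning too hard on any one unit. A sketch of the common "inverted dropout" variant in plain Python (the function and its parameters are illustrative, not a library API):

```python
import random

def dropout(activations, rate=0.5, training=True, seed=None):
    """Randomly zero a fraction of activations; scale survivors (inverted dropout)."""
    if not training:
        return list(activations)       # dropout is switched off at inference time
    rng = random.Random(seed)
    keep = 1.0 - rate
    # Each unit survives with probability `keep`; survivors are scaled by 1/keep
    # so the expected total activation is unchanged.
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

out = dropout([1.0, 1.0, 1.0, 1.0], rate=0.5, seed=0)
print(out)  # roughly half the units zeroed, the rest scaled up to 2.0
```

Because different units are dropped on every pass, the network is in effect trained as an ensemble of many thinned sub-networks, which is what reduces overfitting.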

Transfer Learning

A machine learning technique where a model developed for a task is reused as the starting point for a model on a second task.

Federated Learning

A distributed ML approach where the model is trained across multiple decentralized devices.

Explainable AI (XAI)

Methods and techniques in AI that make the results of the solution understandable by humans.

AI Ethics

The branch of ethics that focuses on the moral and ethical implications of AI and related technologies.

AI Governance

The framework that ensures the development and deployment of AI is aligned with organizational values and objectives.

EU AI Act

Legislation proposed by the European Union aimed at regulating AI to ensure it is trustworthy and respects fundamental rights.

Multimodal AI

AI that can process and analyze multiple types of data inputs (e.g., text, images, and audio).

Self-Supervised Learning

A type of unsupervised learning where the data itself provides the supervision.

Zero-Shot Learning

The ability of a model to recognize objects it has never seen before.

Few-Shot Learning

A type of machine learning problem where the algorithm is trained to learn information about a category from a small number of training examples.

Adversarial Attacks

Techniques that attempt to fool models by supplying deceptive input.

Robustness

The ability of an AI system to continue to function properly in the presence of invalid, incomplete, or unexpected inputs.

Scalability

The capability of an AI system to handle growing amounts of work or its potential to be enlarged to accommodate growth.

Edge AI

AI algorithms processed locally on a hardware device rather than in a centralized data center.

Cloud AI

AI services provided through cloud computing.

Cognitive Computing

Technology platforms that simulate human thought processes in a computerized model.