Glossary (M - R)

Machine Learning (ML): A subset of AI focused on building systems that learn and improve from data without being explicitly programmed. (Beginner)

Machine Translation: The use of AI to automatically translate text or speech from one language to another. (Beginner)

Multimodal AI: An AI model that can process and understand multiple types of input, such as text, images, and audio, at the same time. (Intermediate)

Model: The “brain” of the AI; the mathematical representation of the patterns the system has learned from data. (Beginner)

MoE (Mixture of Experts): A model architecture in which each input activates only a small subset of specialized sub-networks ("experts"), increasing efficiency. (Advanced)

Multi-Agent Systems (MAS): A framework where multiple specialized AI agents work together, often communicating and critiquing each other, to solve complex problems. (Advanced)

Model Collapse: A theoretical state where future AI models degrade in quality because they are trained primarily on content originally created by other AIs. (Advanced)

Model Registry: A central management system used by engineering teams to track the versions, changes, and health of AI models in production. (Intermediate)

Multi-Head Attention: A key component of Transformers that allows the system to focus on many different relationships within a sentence simultaneously. (Advanced)

Natural Language Processing (NLP): The branch of AI that helps computers understand, interpret, and generate human language. (Beginner)

Natural Language Generation (NLG): The specific part of AI focused on converting structured data or intent into coherent, human-like written text. (Intermediate)

Natural Language Understanding (NLU): The part of AI that converts raw text into a format the computer can “understand” and process as intent. (Intermediate)

Noise: Irrelevant or random information within a dataset that can make it harder for an AI to identify the true underlying patterns. (Intermediate)

Neural Network: A computer system loosely modeled on the human brain, designed to recognize patterns and solve problems. (Intermediate)

NLP (Natural Language Processing): See Natural Language Processing. (Beginner)

Non-Deterministic: A characteristic of generative AI where the same input can result in slightly different outputs each time. (Intermediate)

Open Source AI: AI models whose underlying code, and often trained weights, are made publicly available for free use and modification. (Intermediate)

Overfitting: When an AI learns the training data too well, including its noise, and then performs poorly on new, unseen data. (Advanced)

Out-of-Distribution (OOD): Data provided to an AI that is significantly different from anything it saw during its original training phase. (Advanced)

Over-Refinement: A common pitfall in AI output where the response becomes overly verbose or sanitized, often losing the original intent. (Intermediate)

One-Shot Prompting: Providing the AI with exactly one example of the task you want it to perform. (Intermediate)
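A one-shot prompt can be sketched as a template that embeds a single worked example ahead of the real task. The sentiment task, labels, and function name below are illustrative, not tied to any particular model or API:

```python
# Minimal sketch of one-shot prompting: exactly one demonstration
# (example_input -> example_output) precedes the actual task input.
def build_one_shot_prompt(example_input, example_output, task_input):
    """Assemble a prompt containing a single worked example."""
    return (
        "Classify the sentiment as Positive or Negative.\n\n"
        f"Input: {example_input}\n"
        f"Output: {example_output}\n\n"
        f"Input: {task_input}\n"
        "Output:"
    )

prompt = build_one_shot_prompt(
    "I loved this movie!", "Positive", "The plot was dull."
)
print(prompt)
```

With zero examples this would be zero-shot prompting; with several, few-shot prompting.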

Prompt: The input or instruction you give to an AI model to get a specific output. (Beginner)

Prompt Engineering: The art and science of crafting the best possible inputs to get high-quality results from an AI. (Beginner)

Parameters: The internal variables that the AI “adjusts” during training to learn patterns in data. (Advanced)

Prompt Injection: A security vulnerability where a user crafts an input to override the system’s original instructions or exfiltrate data. (Advanced)

Positional Encoding: A technique used in Transformers to give the model information about the relative or absolute position of tokens in a sequence. (Advanced)
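As an illustration, the sinusoidal scheme from the original Transformer paper can be sketched for a single position (a simplified, unvectorized version):

```python
import math

def positional_encoding(pos, d_model):
    """Sinusoidal positional encoding for one token position (sketch)."""
    pe = []
    for i in range(0, d_model, 2):
        # Each pair of dimensions oscillates at a different wavelength,
        # so every position gets a unique, smoothly varying signature.
        angle = pos / (10000 ** (i / d_model))
        pe.append(math.sin(angle))
        pe.append(math.cos(angle))
    return pe[:d_model]

# Position 0 always encodes to alternating 0s and 1s:
print(positional_encoding(0, 4))  # → [0.0, 1.0, 0.0, 1.0]
```

The encoding vector is added to each token's embedding so the attention layers, which are otherwise order-blind, can tell positions apart.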

Perplexity: A measurement used to evaluate how well a probability model predicts a sample. (Advanced)
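For intuition, perplexity can be computed as the exponential of the average negative log-probability the model assigns to each observed token; a minimal sketch with made-up probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-probability per token)."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model that assigns probability 0.25 to every observed token is as
# "surprised" as a fair 4-way choice: perplexity ≈ 4.
print(perplexity([0.25, 0.25, 0.25]))
```

Lower perplexity means the model finds the sample less surprising, i.e. it predicts it better.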

Predictive Analytics: Using AI to analyze historical data and make predictions about future events. (Intermediate)

Parameter-Efficient Fine-Tuning (PEFT): A collection of techniques (like LoRA) that let engineers adapt massive models by updating only a small fraction of their parameters, greatly reducing compute and memory costs. (Advanced)

Pattern Recognition: The core ability of AI to identify recurring structures, shapes, or sequences within large datasets. (Beginner)

Precision: A metric that measures how many of the AI’s “positive” guesses were actually correct, minimizing “false alarms.” (Intermediate)
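The definition (true positives divided by all positive predictions) fits in a few lines; a minimal sketch with made-up labels, where 1 is the "positive" class:

```python
def precision(y_true, y_pred, positive=1):
    """Fraction of the model's positive predictions that were correct."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

# The model made 3 positive guesses and 2 were right → precision 2/3.
print(precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```

Precision is usually reported alongside recall, which instead measures how many of the true positives the model managed to find.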

Pre-training: The long and expensive initial phase where an AI model reads a massive dataset to learn the basic rules of language or vision. (Advanced)

Probability Distribution: A mathematical function assigning a likelihood to every possible next word or pixel in a generative AI sequence. (Advanced)
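For example, a generative model's raw scores (logits) are typically turned into a probability distribution over next tokens with the softmax function; a sketch with made-up scores:

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores: higher score → higher probability.
logits = {"cat": 2.0, "dog": 1.0, "car": 0.1}
probs = softmax(list(logits.values()))
for word, p in zip(logits, probs):
    print(f"{word}: {p:.3f}")
```

The model then samples (or picks the most likely entry) from this distribution to produce the next token.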

Quantization: A technique to reduce the size of an AI model by using lower-precision numbers, making it faster to run on cheaper hardware. (Advanced)
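A minimal sketch of symmetric 8-bit quantization (the weight values are illustrative): each float is mapped to an integer in [-127, 127] via one shared scale factor, and can be approximately reconstructed later.

```python
def quantize_int8(values):
    """Map floats to small integers using a single shared scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate floats from the integers."""
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
print(q)                     # small integers instead of 32-bit floats
print(dequantize(q, scale))  # close to, but not exactly, the originals
```

Storing 8-bit integers instead of 32-bit floats cuts memory roughly 4x, at the cost of the small rounding error visible in the reconstruction.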

Query: A request for information made to an AI system or database. (Beginner)

Quantization-Aware Training (QAT): Training an AI while simulating the effects of quantization, so it stays accurate even after it is compressed. (Advanced)

RAG (Retrieval-Augmented Generation): A technique that allows an AI to look up internal files or web data before answering, reducing hallucinations. (Intermediate)
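The retrieval half can be sketched with a toy keyword-overlap search; real systems use vector embeddings and a vector database instead, and all names and documents below are made up:

```python
import re

def tokens(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    q = tokens(query)
    return max(documents, key=lambda d: len(q & tokens(d)))

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
]
query = "refund policy for returns"
context = retrieve(query, docs)

# The retrieved passage is prepended so the model answers from it
# rather than from memory alone, reducing hallucinations.
prompt = f"Context: {context}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```

The "augmented" prompt is then sent to the language model as usual; only the retrieval step is extra.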

Reinforcement Learning (RL): A type of machine learning where the AI learns by trial and error through rewards and punishments. (Intermediate)

RLHF (Reinforcement Learning from Human Feedback): A method used to “align” AI models by having humans rank multiple AI answers. (Advanced)

Reasoning: The ability of an AI to think through a problem step-by-step rather than just predicting the next word. (Intermediate)

RNN (Recurrent Neural Network): A type of neural network that processes sequential data, such as time series or text, one step at a time while carrying forward a memory of earlier inputs. (Advanced)

Random Forest: A popular machine learning algorithm made of many “decision trees” that work together to provide more accurate predictions. (Intermediate)
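The "work together" step is simple majority voting: each tree predicts independently, and the most common answer wins. A sketch of just that aggregation step (the tree outputs are made up):

```python
from collections import Counter

def forest_predict(tree_votes):
    """Majority vote across the individual trees' predictions."""
    return Counter(tree_votes).most_common(1)[0][0]

# Five hypothetical trees classify one email; the forest returns
# the label most of them agree on.
votes = ["spam", "spam", "ham", "spam", "ham"]
print(forest_predict(votes))  # → spam
```

Because each tree is trained on a random subset of the data and features, their individual errors tend to cancel out in the vote, which is why the ensemble usually beats any single tree.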