Glossary (G - L)

Generative AI (GenAI): AI that can create new content like text, images, or music based on the data it was trained on. (Beginner)

GPU (Graphics Processing Unit): Specialized hardware that is much faster than a standard CPU at performing the calculations needed for AI. (Intermediate)

Generalization: The ability of an AI model to perform well on new, unseen data that wasn’t used during its training. (Intermediate)

GPT (Generative Pre-trained Transformer): A type of large language model architecture developed by OpenAI. (Beginner)

Gate: A control mechanism in certain neural network architectures (like LSTMs) that regulates the flow of information through the system. (Advanced)
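A gate can be illustrated with a minimal sketch (values here are invented): a sigmoid squashes a control signal into (0, 1), and that fraction scales how much of a signal passes through.

```python
import math

# Illustrative sketch of a gate: a sigmoid squashes a control value
# into (0, 1); near 0 the gate blocks the signal, near 1 it passes it.
def gate(signal: float, control: float) -> float:
    openness = 1 / (1 + math.exp(-control))  # sigmoid in (0, 1)
    return openness * signal

print(round(gate(5.0, control=-6.0), 3))  # gate nearly closed
print(round(gate(5.0, control=6.0), 3))   # gate nearly open
```

LSTMs use several such gates (input, forget, output) to decide what information to keep or discard at each step.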

Gaussian Distribution: A probability distribution whose plot forms a bell-shaped curve, often used in statistical AI models to represent natural variation. (Intermediate)
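A quick sketch using only the Python standard library (the mean and spread chosen here are arbitrary): samples drawn from a Gaussian cluster around the mean, with roughly 68% falling within one standard deviation.

```python
import random
import statistics

# Draw samples from a Gaussian with mean 100 and standard deviation 15
# (illustrative values). The histogram of these would be bell-shaped.
random.seed(0)
samples = [random.gauss(mu=100, sigma=15) for _ in range(10_000)]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)

# Roughly 68% of samples should fall within one standard deviation.
within_one_sigma = sum(abs(x - mean) <= stdev for x in samples) / len(samples)
print(round(mean), round(stdev), round(within_one_sigma, 2))
```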

General Purpose AI (GPAI): A versatile AI system capable of performing a wide range of different tasks without needing specific training for each one. (Intermediate)

Global Minimum: The absolute lowest point of a loss function, representing the parameter settings at which the model’s error is as small as it can possibly be. (Advanced)

Gold Standard: A dataset where the labels have been verified by human experts, used as a benchmark to measure AI accuracy. (Intermediate)

GPU Cloud: Remote clusters of high-performance graphics cards that businesses can rent to train large AI models without owning the hardware. (Beginner)

Guided Diffusion: A technique used to prioritize certain qualities (like sharpness or specific themes) during the image generation process. (Advanced)

Grounding: The process of linking AI outputs to real-world facts or specific internal data to prevent hallucinations. (Intermediate)

GAN (Generative Adversarial Network): A class of machine learning frameworks in which two neural networks (a generator and a discriminator) compete against each other to produce realistic data. (Advanced)

Gradient Descent: An optimization algorithm that minimizes the “loss” (error) of a machine learning model during training by repeatedly adjusting parameters in the direction that reduces it. (Advanced)
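The idea can be shown on a toy problem (the function and learning rate here are made up for illustration): compute the slope of the loss, then step the opposite way until you reach the minimum.

```python
# Minimal sketch of gradient descent minimizing f(x) = (x - 3)^2,
# whose minimum is at x = 3. The gradient f'(x) = 2*(x - 3) points
# uphill, so each step moves the other way.
def gradient_descent(start: float, learning_rate: float = 0.1,
                     steps: int = 100) -> float:
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)         # slope of the loss at x
        x -= learning_rate * grad  # step against the gradient
    return x

result = gradient_descent(start=0.0)
print(round(result, 4))  # converges toward the minimum at x = 3
```

Real models apply the same loop to millions of parameters at once, with the gradients computed by backpropagation.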

Hallucination: When an AI generates information that sounds confident and plausible but is actually factually incorrect. (Beginner)

Human-in-the-Loop (HITL): A system where a human reviews and corrects the AI’s output to ensure accuracy. (Intermediate)

Hyperparameter: A configuration setting used to control the learning process of an AI model. (Advanced)

Hard AI: A historical term (closely related to “strong AI”) for hypothetical AI that would possess true human-level consciousness and general reasoning. (Intermediate)

Heatmap: A visualization tool used to show which specific parts of an image or text attracted the most “attention” from an AI model. (Beginner)

Hidden Layer: The layers in a neural network located between the initial input and the final output where the actual “thinking” happens. (Intermediate)
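A tiny forward pass makes the idea concrete (all weights below are invented for illustration): each hidden neuron computes a weighted sum of the inputs and applies a nonlinearity, and the output layer combines those hidden activations.

```python
import math

# Toy sketch of a network with one hidden layer. The weights are
# arbitrary; in a real model they would be learned during training.
def sigmoid(z: float) -> float:
    return 1 / (1 + math.exp(-z))

def forward(inputs, hidden_weights, output_weights):
    # Hidden layer: weighted sum per neuron, then a nonlinearity.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in hidden_weights]
    # Output layer: combine the hidden activations.
    return sum(w * h for w, h in zip(output_weights, hidden))

y = forward(inputs=[1.0, 0.5],
            hidden_weights=[[0.4, -0.6], [0.3, 0.8]],
            output_weights=[1.0, -1.0])
print(round(y, 3))
```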

Hybrid AI: An approach that combines modern deep learning (neural networks) with traditional symbolic AI (logic-based rules). (Advanced)

Hallucination Mitigation: Strategies and techniques (like RAG or grounding) used to reduce the frequency of incorrect information generated by an AI. (Intermediate)

Human-Centered AI (HCAI): An approach to AI that prioritizes human ethics, values, and usability throughout the development lifecycle. (Intermediate)

Heuristics: Mental shortcuts or “rules of thumb” used in early AI to make quick decisions. (Intermediate)

Inference: The process of using a trained AI model to make a prediction or generate an answer based on new input. (Intermediate)

Inference Latency: The amount of time it takes for an AI model to generate a response after receiving an input; critical for real-time applications. (Intermediate)

Image Recognition: The ability of an AI to identify objects, people, or places within an image. (Beginner)

Instruction Tuning: A training phase where an AI is specifically taught how to respond to common user requests (e.g., “Summarize this…”). (Intermediate)

In-Context Learning: The ability of an LLM to learn how to perform a task simply from the instructions and examples provided in the prompt. (Advanced)

Interpretability: The degree to which a human can understand why an AI model made a specific decision. (Advanced)

Jailbreaking: The act of using carefully crafted prompts to bypass an AI’s safety filters or restrictions. (Intermediate)

JSON (JavaScript Object Notation): A lightweight data-interchange format that AI models often use to structure their outputs. (Intermediate)
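A short sketch of the round trip (the model output shown is a made-up example): a JSON string from an AI system is parsed into a Python dictionary, validated, and serialized back.

```python
import json

# Hypothetical example: an LLM was asked to return its answer as JSON.
raw_output = '{"sentiment": "positive", "confidence": 0.92}'

data = json.loads(raw_output)  # JSON string -> Python dict
assert "sentiment" in data and "confidence" in data  # basic validation

serialized = json.dumps(data, indent=2)  # dict -> pretty-printed string
print(data["sentiment"], data["confidence"])
```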

Knowledge Graph: A way of organizing information that shows the complex relationships between different concepts or entities. (Advanced)
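One minimal way to represent a knowledge graph is as a list of (subject, relation, object) triples; the entities and relations below are invented for illustration.

```python
# Minimal sketch of a knowledge graph as subject-relation-object triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "population", "2.1M"),
]

def query(subject: str, relation: str) -> list[str]:
    """Return every object linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(query("Paris", "capital_of"))  # the relation resolves to France
```

Production knowledge graphs use dedicated graph databases and query languages, but the triple structure is the same.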

Knowledge Distillation: A technique where a smaller, more efficient “student” model is trained to mimic the behavior of a larger “teacher” model. (Advanced)

KPI (Key Performance Indicator): Metrics used to measure the success of an AI implementation in a business context. (Beginner)

Labeling: The manual or automated process of adding meaningful tags to raw data (like highlighting “dogs” in photos) so an AI can learn. (Beginner)

Large Language Model (LLM): An AI trained on massive amounts of text to understand and generate human-like language. (Beginner)

Loss Function: A mathematical formula used during training to measure how far the AI’s prediction is from the truth. (Advanced)
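Mean squared error is one common example (the prediction values below are made up): the loss is the average squared gap between predictions and targets, so better predictions mean a lower number.

```python
# Sketch of mean squared error (MSE), a common loss function:
# the average squared gap between predictions and true values.
def mse(predictions: list[float], targets: list[float]) -> float:
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# A perfect model scores zero; worse predictions score higher.
print(mse([2.5, 0.0, 2.0], [3.0, -0.5, 2.0]))    # small error
print(mse([10.0, 10.0, 10.0], [3.0, -0.5, 2.0]))  # large error
```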

Latent Space: A compressed representation of data where similar items are placed closer together (used in image generation). (Advanced)

Low-Rank Adaptation (LoRA): An efficient fine-tuning technique that trains only a small number of additional parameters, greatly reducing the computing power required to adapt a large model. (Advanced)

LLMOps: A set of practices for managing the lifecycle of large language models in a business production environment. (Intermediate)