One of the biggest barriers to understanding AI is not the technology itself — it is the language used to describe it. AI jargon explained badly, or not at all, leaves people feeling like outsiders in a conversation that increasingly affects everyone. This glossary cuts through the terminology and explains the most common AI terms in plain language. No technical background required. If you’re new to these tools entirely, our introduction to AI is a good place to start before working through this glossary.
Why AI Jargon Matters
You don’t need to understand every technical term to use AI tools effectively. However, knowing the key vocabulary helps you follow conversations about AI, evaluate claims made by companies and journalists, and understand the limitations of the tools you use. Furthermore, many of these terms come up directly in AI tools themselves — so recognising them makes you a more informed user.
The Essential AI Terms Explained
Artificial Intelligence (AI)
The broad term for computer systems that perform tasks that typically require human intelligence — understanding language, recognising patterns, making decisions. In everyday conversation, “AI” usually refers to large language model chatbots such as ChatGPT, Claude, and Gemini, though it technically covers a much wider field.
Large Language Model (LLM)
The specific type of AI that powers most modern chatbots. An LLM is trained on enormous amounts of text — books, websites, articles, code — and learns to predict what words and sentences should come next in any given context. When you type a question into ChatGPT, an LLM generates the response. “Large” refers to the scale of the model, which contains billions of parameters.
Prompt
The input you give to an AI tool — the question, instruction, or request you type into the chat box. The quality of your prompt directly shapes the quality of the response. A well-written prompt produces a targeted, useful answer. A vague prompt produces a generic one. For a full guide on writing effective prompts, see our prompting guide.
Response / Output
The text the AI produces in reply to your prompt, also called the output. The response is generated fresh each time: the AI does not retrieve a pre-written answer from a database. Instead, it generates the reply word by word based on its training.
Hallucination
One of the most important AI terms to understand. A hallucination occurs when an AI confidently produces information that is factually incorrect — a made-up statistic, a fake citation, an event that never happened, a name that doesn’t exist. The AI does not know it is wrong. It generates the most plausible-sounding response based on its training, and sometimes that response is fiction.
Hallucinations happen with all major AI tools. They are the primary reason you should always verify important factual claims before using them professionally.
We cover this in more detail in our guide on what AI can and cannot do.
Token
The unit AI models use to process text. A token is roughly equivalent to three-quarters of a word in English — so the word “explaining” might be split into two tokens. Tokens matter because AI models have a maximum number of tokens they can process in a single conversation, which determines how much text they can handle at once.
Context Window
The maximum amount of text an AI model can process in a single session — measured in tokens. Think of it as the model’s short-term memory for a conversation. A larger context window means the AI can handle longer documents and longer conversations without losing track of earlier content. Claude currently offers one of the largest context windows among mainstream tools.
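The three-quarters rule of thumb above can be turned into quick arithmetic. Here is a minimal sketch in Python; the constant and function names are illustrative, not from any real tokenizer library, and real tokenizers split text by learned subword rules, so actual counts vary by model:

```python
# Rough rule of thumb: one token is about three-quarters of an English word,
# so a word averages about 4/3 tokens. Real counts depend on the model.
TOKENS_PER_WORD = 4 / 3  # illustrative estimate, not a real tokenizer value

def estimate_tokens(text: str) -> int:
    """Estimate the token count of an English text from its word count."""
    return round(len(text.split()) * TOKENS_PER_WORD)

def fits_in_window(text: str, window_tokens: int) -> bool:
    """Check whether a text would plausibly fit in a given context window."""
    return estimate_tokens(text) <= window_tokens

# By this estimate, a 200,000-token context window holds about 150,000 words.
print(round(200_000 / TOKENS_PER_WORD))  # → 150000
```

The same arithmetic explains why long conversations eventually "forget" their beginnings: once the running total of tokens exceeds the window, the oldest text falls out of what the model can see.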
Training Data
The text the AI model learned from during its development. LLMs are trained on vast datasets drawn from the internet, books, academic papers, and other sources. The content and quality of training data directly influence what the model knows, how it expresses itself, and what biases it carries.
Training Cutoff
The date after which the AI has no knowledge of world events, because its training data was collected before that point. Ask an AI about something that happened after its training cutoff and it either admits it doesn’t know or — more dangerously — generates a plausible but incorrect answer. Tools with web search enabled can access current information, which partially addresses this limitation.
Parameters
The numerical values inside an AI model that were adjusted during training to produce accurate outputs. When you hear that a model has “billions of parameters,” this refers to the scale of its internal configuration. More parameters generally — though not always — correlate with more capable responses.
Fine-Tuning
A process where a pre-trained AI model is further trained on a smaller, specific dataset to improve its performance on particular tasks or to adjust its behaviour. For example, a general-purpose model might be fine-tuned on medical literature to make it more useful in healthcare contexts.
Generative AI
AI that creates new content — text, images, audio, video, code — rather than simply analysing or classifying existing content. ChatGPT, Claude, and Gemini are all examples of generative AI. So are image generators like DALL-E and Midjourney.
Multimodal
An AI model that can process and generate multiple types of content — not just text, but also images, audio, or video. Gemini and ChatGPT are both multimodal, meaning you can show them a photo and ask questions about it, or ask them to describe an image in words.
AI Agent
An AI system that can take actions autonomously — browsing the web, sending messages, filling in forms, managing files — rather than simply generating text responses. AI agents are a rapidly developing area, and their reliability varies considerably depending on the task and platform.
System Prompt
Instructions given to an AI model before a conversation begins, typically by the developer or platform rather than the end user. System prompts shape how the AI behaves — its tone, its restrictions, its persona. When you use a product built on top of an AI model, the experience you get is partly determined by the system prompt the developer has set.
Inference
The process of running a trained AI model to generate a response. When you send a prompt and receive a reply, inference is what’s happening on the server. It is distinct from training — training is how the model learned, inference is how it applies that learning in real time.
Terms You’ll Hear in the News
AGI (Artificial General Intelligence)
A hypothetical form of AI that matches or exceeds human intelligence across all tasks, not just specific ones. AGI does not currently exist. Current AI tools are narrow — extremely capable in specific areas, but lacking the general reasoning and adaptability that characterise human intelligence.
Alignment
The challenge of ensuring AI systems behave in accordance with human values and intentions. An aligned AI does what its developers and users actually want, rather than finding unintended shortcuts or producing harmful outputs. Alignment research is a significant area of focus at companies like Anthropic.
Open Source
AI models whose weights — the trained parameters — are made publicly available, allowing anyone to examine, modify, and deploy them. Strictly speaking, many of these are “open-weight” rather than fully open source, since the training code and data often stay private. Open models like Meta’s Llama family contrast with closed models like GPT-5 and Claude, whose internal workings are not publicly accessible.