New AI Glossary Maps LLMs, Hallucinations and More

Key Points
- A new AI glossary has been published to define over 30 key terms.
- Entries cover large language models, generative AI, diffusion, hallucinations and more.
- The guide highlights differing definitions of artificial general intelligence from OpenAI and DeepMind.
- It distinguishes AI agents, which automate multistep tasks, from basic chatbots.
- Hallucinations are defined as AI-generated misinformation that can pose real‑world risks.
- The glossary will receive regular updates as the field evolves.
- Designed for journalists, analysts and anyone tracking AI developments.

A leading tech outlet has released a comprehensive glossary of artificial‑intelligence terminology, covering everything from large language models and generative AI to hallucinations and compute. The reference, designed for journalists and industry watchers, offers clear, concise definitions and promises regular updates as the field evolves. By standardizing the language around AI, the guide aims to improve reporting accuracy and help readers navigate the rapidly shifting tech landscape.
Today, a major technology publication unveiled a searchable AI glossary that breaks down the most common terms shaping the industry. The reference includes entries for large language models (LLMs), generative AI, deep learning, diffusion, hallucinations, and a host of other concepts that have become part of everyday tech reporting.
According to the publishers, the glossary addresses a growing need for clear, consistent language. "We frequently have to use technical jargon in our coverage," the editorial team explained, noting that the lack of shared definitions often leads to confusion among readers and even among experts. The new resource is intended to serve journalists, analysts and anyone trying to make sense of AI breakthroughs.
Each entry offers a concise definition followed by contextual notes. For example, the entry on artificial general intelligence (AGI) cites OpenAI’s description of AGI as "the equivalent of a median human that you could hire as a co‑worker," while also referencing Google DeepMind’s framing of AGI as "AI that’s at least as capable as humans at most cognitive tasks." The glossary does not shy away from the nuances; it highlights that even leading researchers disagree on the precise boundaries of the term.
The guide also clarifies the difference between an AI agent and a basic chatbot. An AI agent, the glossary notes, can automate multistep tasks such as filing expenses or writing code, drawing on multiple underlying models and tools to complete a workflow. A chatbot, by contrast, typically handles single‑turn interactions without orchestrating additional tools.
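The distinction can be sketched in a toy example. Everything here — the tool names, the receipt format, the routing logic — is invented for illustration and does not come from the glossary; real agent frameworks vary widely:

```python
# Toy contrast between a single-turn chatbot and a multistep agent.
# All names (TOOLS, extract_total, file_expense) are hypothetical.

def chatbot(prompt: str) -> str:
    # Single turn: one prompt in, one reply out; no tools, no state.
    return f"You said: {prompt}"

# An "agent" decomposes a task into steps and routes each to a tool.
TOOLS = {
    "extract_total": lambda receipt: sum(item["price"] for item in receipt),
    "file_expense": lambda total: f"expense filed for ${total:.2f}",
}

def agent(receipt: list[dict]) -> str:
    # Multistep workflow: extract data, then act on it, carrying
    # intermediate state (the total) between the steps.
    total = TOOLS["extract_total"](receipt)
    return TOOLS["file_expense"](total)

receipt = [{"item": "taxi", "price": 23.50}, {"item": "lunch", "price": 12.00}]
print(agent(receipt))
print(chatbot("file my expenses"))
```

The point of the sketch is structural: the chatbot maps one input to one output, while the agent plans and chains several steps before returning a result.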
Other sections demystify technical building blocks. The entry for compute defines it as the hardware power (GPUs, TPUs, CPUs) that fuels model training and inference. The diffusion entry describes how models learn to reverse a gradual noise‑adding process in order to generate images, audio or text, while the distillation entry outlines how a smaller "student" model learns from a larger "teacher" model to achieve faster performance.
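The student‑teacher idea behind distillation can be illustrated with a minimal numerical sketch. This is not the glossary's example: the toy linear "models", temperature value, and training loop below are all assumptions chosen for brevity — real distillation trains a genuinely smaller network on a large teacher's softened outputs:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))            # toy input features
W_teacher = rng.normal(size=(8, 3))     # fixed "teacher" weights
W_student = np.zeros((8, 3))            # "student" starts from scratch
                                        # (same shape here only for brevity)

T = 2.0  # temperature softens the teacher's output distribution
teacher_probs = softmax(x @ W_teacher, temperature=T)

lr = 0.5
for _ in range(500):
    student_probs = softmax(x @ W_student, temperature=T)
    # Gradient of cross-entropy against the teacher's soft targets
    grad = x.T @ (student_probs - teacher_probs) / len(x)
    W_student -= lr * grad

# How often the trained student's top prediction matches the teacher's
agreement = np.mean(
    np.argmax(softmax(x @ W_student), axis=1)
    == np.argmax(teacher_probs, axis=1)
)
print(f"student/teacher agreement: {agreement:.2f}")
```

The student never sees ground-truth labels, only the teacher's probability distribution; matching those "soft targets" is what transfers the teacher's behavior into the cheaper model.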
Hallucinations receive particular attention because they represent a critical quality‑control issue. The glossary defines hallucinations as instances where models generate inaccurate or fabricated information, warning that such errors can pose real‑world risks, especially in domains like healthcare. It also notes that the prevalence of hallucinations is driving a shift toward more specialized, vertical AI models that aim to reduce knowledge gaps.
Beyond definitions, the publication promises to keep the glossary current. "We will regularly update this glossary to add new entries as researchers uncover novel methods and emerging safety risks," the editors wrote. The commitment reflects the fast‑moving nature of AI research, where breakthroughs and terminology can change within months.
By providing a single, authoritative source for AI terminology, the glossary hopes to streamline reporting and improve public understanding. As AI spreads into everything from consumer apps to enterprise platforms, a reliable lexicon may prove essential for accurate coverage and informed discussion.