AI: Key Definitions

Learn the meaning of key terms used in conversations around artificial intelligence.

It’s important for educators to join conversations around artificial intelligence (AI). However, the plethora of new terminology embedded in this dialogue can be confusing, and that confusion can be a barrier to entering the conversation.

This page is intended to provide definitions of key terms being used frequently in AI discussions. Because the field of artificial intelligence is changing rapidly, some of the terms listed here may have multiple, evolving definitions. Still, this should give you a good foundation.

According to the AI Education Project, artificial intelligence (often abbreviated as AI) refers to a “problem-solving or decision-making ability displayed by a man-made system that is normally associated with humans.” In other words, AI is the science of programming computers to behave or respond like humans. Sara Brown, in “Machine Learning, Explained” for the MIT Sloan School of Management, similarly defines AI “as the capability of a machine to imitate intelligent human behavior.”

Bard is Google’s AI chatbot. Much like Microsoft’s Bing Chat, users can enter prompts and questions into Bard and receive answers that feel conversational. Bard is integrated into Google’s search platform, so results can be pulled from live web content as well as from its static database of knowledge. Bard is powered by Google’s own PaLM 2, a next-generation large language model (LLM). Bard is available to the general public.

Bing Chat is an AI chatbot that searches the web and is integrated into the Microsoft Bing search engine. If you use Bing search in the Microsoft Edge browser, you will see the “Chat” option at the top of the screen. This chatbot tool allows users to enter prompts and questions. Bing then scours the internet and provides an answer in natural language format. Bing Chat is powered by GPT-4 in “More Creative” mode and GPT-3 in “More Balanced” and “More Precise” modes. As of this writing, this Bing feature is the only way to use GPT-4 for free.

A “black box” is a term used by artificial intelligence experts to describe an AI system whose inputs and operations are not visible to anyone outside of the program itself. In other words, a human operator can’t see what’s happening or how the program is processing information. Essentially, the AI is acting inside of a black box, leaving the programmer unsure why or how a specific output or action has occurred. The program does not provide an explanation for its conclusions, decisions, or actions.

A chatbot is a computer program that simulates interactive human conversation and communicates using natural language. It is powered by artificial intelligence and may respond to typed or spoken questions and prompts. The user enters a prompt, and the chatbot responds. Chatbots are often used as virtual assistants or for customer service help. ChatGPT is an example of a chatbot.
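
To make the prompt-and-response pattern concrete, here is a minimal sketch of a rule-based chatbot loop in plain Python. This is not how ChatGPT works internally (modern chatbots generate replies with large language models rather than fixed rules); the keywords and replies below are invented for illustration.

```python
# A toy rule-based chatbot: it matches keywords in the user's prompt
# and returns a canned reply. Real AI chatbots generate replies with
# large language models instead of fixed rules.

REPLIES = {
    "hello": "Hello! How can I help you today?",
    "homework": "Try breaking the assignment into smaller steps.",
    "bye": "Goodbye!",
}

def respond(prompt: str) -> str:
    """Return a reply based on simple keyword matching."""
    text = prompt.lower()
    for keyword, reply in REPLIES.items():
        if keyword in text:
            return reply
    return "I'm not sure about that. Can you rephrase?"

if __name__ == "__main__":
    # Keep asking for prompts until the user types "bye".
    while True:
        user_prompt = input("You: ")
        print("Bot:", respond(user_prompt))
        if "bye" in user_prompt.lower():
            break
```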

ChatGPT is an artificial intelligence chatbot developed by OpenAI and released in November 2022. Users can enter questions and prompts and receive responses in natural, human-like language. Interaction with ChatGPT can feel conversational, and users can even ask contextual follow-up questions. Currently, ChatGPT is not connected to the internet; for now, responses are based on a static database of content compiled up to September 2021. GPT stands for Generative Pre-trained Transformer, a technical term that refers to the large language model framework the program uses to generate content.

Generative means that the artificial intelligence tool is creating new content, like images and text, that did not exist before. This differs from other applications, like a search engine, which simply locates and passes back content found in other places.

AI hallucination refers to scenarios where an artificial intelligence application “makes up” an answer that seems legitimate but is actually fictitious. For example, AI chatbots have been reported to fabricate reference sources, book titles, historical events, and even scientific facts. Because of this, users should always verify the content produced by an AI chatbot. AI companies are continuing to refine their products to reduce and, ideally, eliminate AI hallucinations.

Large language models (LLMs) are computer programs that allow end users to communicate with a computer using spoken or written language; the computer can respond in natural, human-like language, which often feels like a conversation. Techopedia defines it this way: “. . . a type of machine learning model that can perform a variety of natural language processing (NLP) tasks, including generating and classifying text, answering questions in a conversational manner and translating text from one language to another.”
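
As a rough, hedged illustration of the “text in, text out” idea, the sketch below uses the open-source Hugging Face transformers library (my choice for the example; the article does not name any specific library) to load a small, older language model and continue a prompt. Production chatbots use far larger models, but the basic pattern is the same.

```python
# A minimal sketch of using a (small) language model for text generation.
# Assumes the Hugging Face "transformers" library is installed:
#   pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, older model; modern LLMs are far larger,
# but the interface idea is the same: text in, text out.
generator = pipeline("text-generation", model="gpt2")

result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```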

According to the article “Machine Learning, Explained,” “Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without explicitly being programmed.” Thomas W. Malone, founding director of the MIT Center for Collective Intelligence, says, “. . . Most of the current advances in AI have involved machine learning.” Popular chatbots like ChatGPT, Microsoft Bing Chat, and Google Bard were built on work done in the subfield of machine learning.
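
A small sketch of what “learning without explicitly being programmed” looks like in practice, using the scikit-learn library (an illustrative choice, not one named in the article) and made-up example data: instead of writing a scoring rule by hand, the program fits a model to examples and then makes predictions.

```python
# A minimal machine-learning sketch: the program is not given a formula;
# it "learns" one by fitting a model to example data.
# Assumes scikit-learn is installed: pip install scikit-learn
from sklearn.linear_model import LinearRegression

# Example data: hours studied vs. quiz score (made-up numbers).
hours = [[1], [2], [3], [4], [5]]
scores = [55, 62, 70, 78, 85]

model = LinearRegression()
model.fit(hours, scores)          # learn the relationship from the examples

print(model.predict([[6]]))       # predict a score for 6 hours of study
```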

A parameter is a technical term that WIRED describes as “. . . a mathematical relationship linking words through numbers and algorithms.” Essentially, the more parameters a model uses, the better its responses should be. For instance, GPT-3.5 uses about 175 billion parameters, while GPT-4 is rumored to use far more (OpenAI has not published an official figure). This number can help users understand how much more developed one iteration is than another.
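
To give a sense of what a “parameter” is at a small scale, the sketch below counts the learnable numbers in a single tiny neural-network layer using PyTorch (an illustrative choice; the article does not describe any specific framework). A layer that maps 3 inputs to 2 outputs has 3 × 2 weights plus 2 biases, or 8 parameters; large language models stack enough layers to reach billions of such numbers.

```python
# Counting the learnable numbers ("parameters") in one tiny layer.
# Assumes PyTorch is installed: pip install torch
import torch.nn as nn

layer = nn.Linear(in_features=3, out_features=2)  # 3*2 weights + 2 biases

total = sum(p.numel() for p in layer.parameters())
print(total)  # prints 8; large language models have billions of these
```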

A prompt engineer is someone skilled in formulating effective prompts and questions for AI chatbot inquiries in applications such as Bing Chat, Bard, and ChatGPT. Prompt engineering may include making decisions about which words and concepts to include in a prompt. The goal is to return results that are relevant, accurate, and usable.
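
As a simple illustration of prompt engineering (the prompts below are invented examples, not taken from the article), compare a vague prompt with one that spells out the audience, format, and vocabulary. Both ask for the same topic, but the second gives the chatbot far more to work with.

```python
# Two prompts asking about the same topic. The engineered prompt
# specifies the role, length, format, and audience, which usually
# produces a more relevant and usable answer.
vague_prompt = "Tell me about volcanoes."

engineered_prompt = (
    "You are a 5th-grade science teacher. In 3 short bullet points, "
    "explain how volcanoes form, using vocabulary a 10-year-old knows."
)

for prompt in (vague_prompt, engineered_prompt):
    print(prompt)
```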

Training refers to the process of teaching large language models to communicate like humans. Computers don’t do this naturally. They need to be programmed, and part of this involves human interaction and feedback. WIRED explains, “Trained supervisors and end users alike help to train LLMs by pointing out mistakes, ranking answers based on how good they are, and giving the AI high-quality results to aim for.” This feedback helps to “teach” the computer how to communicate.
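
In very simplified terms, one common form of this feedback is a record of which of two candidate answers a human reviewer preferred. The sketch below is an invented, minimal illustration of what such a record might look like, not any real training pipeline; the actual adjustment of the model from these preferences is omitted.

```python
# A highly simplified sketch of human feedback: reviewers compare two
# answers to the same prompt and mark which one is better. Collections
# of these preferences are then used to adjust the model (not shown).
feedback = [
    {
        "prompt": "Explain photosynthesis to a 4th grader.",
        "better": "Plants use sunlight, water, and air to make their own food.",
        "worse": "Photosynthesis converts photons via chloroplast pathways.",
    },
]

for example in feedback:
    print("Preferred answer:", example["better"])
```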

Sentient AI is a term used to describe a state in which a computer becomes human-like and can feel, think, and perceive the world around it. A sentient AI is often described as being self-aware.