20 Artificial Intelligence (AI) Terms You Need to Know

edited February 6 in AI

Artificial intelligence (AI) refers to a machine’s engineered ability to perform tasks that normally require human intelligence. The field is developing at an exponential rate, leaving little time to keep up with its key terms. What are the buzzwords in AI, and what fundamental ideas do they represent? Here is a glossary of AI terms to keep you well in the loop.

Algorithm: A systematic set of instructions that a machine follows to execute a task or solve a problem. Algorithms enable AI systems to learn from data, make decisions, and improve their performance over time based on input data and defined rules or patterns. They are the very basis of AI.
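To make the idea concrete, here is a minimal sketch of an algorithm: a precise, step-by-step recipe a machine can follow. (The task chosen here, finding the largest number in a list, is just an illustration.)

```python
# An algorithm is a precise recipe: this one follows three steps
# to find the largest value in a list of numbers.
def find_largest(numbers):
    largest = numbers[0]      # Step 1: assume the first value is the largest
    for n in numbers[1:]:     # Step 2: compare it against every other value
        if n > largest:
            largest = n       # Step 3: keep whichever value is bigger
    return largest

print(find_largest([3, 41, 7, 19]))  # → 41
```

However simple, this has the defining properties of an algorithm: unambiguous steps, a clear input, and a guaranteed output.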

Big Data: Data so large in volume that it simply cannot be processed by traditional methods. It combines data from many sources, and patterns identified across those sources can inform decisions in business, science, agriculture, and so forth. Examples include IBM’s Deep Thunder weather analytics package, which helped farmers identify the best time to irrigate their crops, and British Airways’ “Know Me” program, which used big data to learn more about customer preferences.

Chatbot: Sometimes known simply as a “bot”, this is a computer program that processes language and conducts an actual conversation with a human. Chatbots are increasingly used for customer service tasks: being AI-powered, they can operate 24/7 and serve an unlimited number of customers at once.

Cognitive Computing: This is where an AI system “mimics” or simulates humans’ thoughts and cognitive processes. Broadly put, cognitive computing is the foundation of the “intelligence” in artificial intelligence. 

Corpus: Latin for “body”, a corpus in AI is a large body of language–either written or spoken–used to train machine learning models (see below for Machine Learning). Google Translate, for instance, is trained on a corpus.

Data Mining: The process of identifying correlations and patterns in large datasets, with the end goal of predicting outcomes. Netflix, for example, recommends shows based on users’ demographic details, search history, watchlists, ratings, and so forth.

Deep Learning: A branch of machine learning with its roots in artificial neural networks (see below for Neural Network). It is called “deep” learning because of the multiple layers of processing needed to identify higher-level features within given data. Thanks to its ability to capture detail, deep learning can be applied to recognize text, images, sounds, and so forth.

Generative AI: The ability of machines to “generate” new content based on an existing dataset. This content can be audio, visual (e.g., images and videos), or textual. ChatGPT and Google Bard, with their ability to create content for personal or commercial use, are examples of generative AI.

Hallucination: An output of an AI program that is not supported by its training data. A common example is when ChatGPT yields factually false responses to a prompt.

Hyperparameter: A parameter that dictates a machine learning model’s learning process; hyperparameters are set before training and tuned to optimize the model (see Parameter below for comparison).

Machine Learning (ML): This is where a machine learns from data itself, without a human needing to program it explicitly. A common application of machine learning is recognition–facial recognition and voice recognition for unlocking phones.

Neural Network: A machine learning model that draws inspiration from the inner workings of the human brain. When a machine processes a face, it may, just like a human, first process whether it belongs to a male or a female. Neural networks are key to mirroring human intelligence, which in turn can be used to build artificial intelligence.
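The building block of a neural network is a single artificial “neuron”. As a rough sketch (with weights and a threshold picked by hand here purely for illustration; a real network learns them from data):

```python
# A single artificial "neuron": it weighs its inputs, sums them,
# and "fires" (outputs 1) only if the total passes a threshold.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these hand-picked weights, the neuron fires only when
# both inputs are active:
print(neuron([1, 1], [0.6, 0.6], threshold=1.0))  # → 1
print(neuron([1, 0], [0.6, 0.6], threshold=1.0))  # → 0
```

A neural network is, in essence, many such neurons wired together in layers, with the weights adjusted during training.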

Natural Language Processing (NLP): The broader area of computer science that studies how computers interact with natural language produced by humans. NLP lies at the intersection of linguistics and computer science; with NLP, computers can process human language and perform useful tasks as a result. Spell checkers, predictive text, machine translation, and speech-to-text are all applications of NLP.

Large Language Models (LLM): An application of deep learning in the realm of language, large language models enable a machine to understand, and hence create, human-like text. LLMs are trained on massive sets of linguistic data (often derived from the Internet); OpenAI’s GPT systems, for example, are built on LLMs.

Overfitting: This is when a machine learning model adheres too closely to its training dataset and is unable to generalize to new data. It happens when the training dataset is too small or contains too much irrelevant (“noisy”) data.
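Overfitting can be caricatured as pure memorization. In this deliberately extreme sketch (the toy messages are hypothetical), the “model” scores perfectly on its tiny training set but cannot handle anything it has not seen:

```python
# Overfitting in miniature: a "model" that memorizes its training set
# is perfect on that data but fails to generalize to unseen inputs.
train = {"free money now": "spam", "meeting at noon": "not spam"}

def memorizing_model(text):
    return train.get(text, "unknown")  # pure lookup, no pattern learned

print(memorizing_model("free money now"))  # → spam (seen in training)
print(memorizing_model("free cash now"))   # → unknown (cannot generalize)
```

A well-fitted model would instead pick up general patterns (such as the word “free”) that carry over to new examples.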

Parameter: Borrowed from mathematics, a parameter is a configuration variable internal to a machine learning model. Its value is learned from training data and applied to make predictions.
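The contrast between a parameter and a hyperparameter can be seen in a tiny training loop. In this sketch (the data, which follow y = 2x, are hypothetical), `learning_rate` is a hyperparameter chosen by hand before training, while `w` is a parameter the model adjusts itself:

```python
# Hyperparameter vs. parameter: `learning_rate` is set by us before
# training; `w` is learned by the model from the data (y = 2x).
data = [(1, 2), (2, 4), (3, 6)]
learning_rate = 0.05  # hyperparameter: chosen by hand, controls learning
w = 0.0               # parameter: starts arbitrary, learned from the data

for _ in range(200):                     # repeat over the data many times
    for x, y in data:
        error = w * x - y                # how far off is the prediction?
        w -= learning_rate * error * x   # nudge w to reduce the error

print(round(w, 2))  # → 2.0 (the model has learned the parameter)
```

Change the hyperparameter (say, a learning rate that is far too large) and the model may fail to learn the parameter at all, which is why hyperparameters are tuned so carefully.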

Reinforcement Learning: This is where a machine learns over time through trial and error. Akin to reinforcement learning in humans, the machine is “rewarded” for right moves and “punished” for wrong ones; over time, it learns to perform a task well. Many self-driving cars use reinforcement learning, with the car ultimately learning to drive correctly through repeated mistakes and corrections.
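The reward-and-punishment idea can be sketched with a toy “agent” choosing between two hypothetical actions (the actions and reward values here are invented for illustration):

```python
# A toy reinforcement-learning loop: the agent tries both actions,
# is rewarded (+1) for the good one and punished (-1) for the bad one,
# and gradually updates its own estimate of each action's value.
rewards = {"turn_left": -1, "turn_right": +1}  # environment's feedback
value = {"turn_left": 0.0, "turn_right": 0.0}  # agent's learned estimates

for _ in range(10):                  # trial and error over many attempts
    for action in value:
        # move the estimate a small step toward the observed reward
        value[action] += 0.1 * (rewards[action] - value[action])

best = max(value, key=value.get)
print(best)  # → turn_right
```

Real systems face far larger action spaces and delayed rewards, but the principle is the same: repeated feedback steers the agent toward the right behavior.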

Supervised Learning: This is where a machine is trained on “labeled” data, with direct feedback used to check whether its output is accurate. Supervised learning can, for instance, help computers detect spam in emails, as spam messages often share patterns that can be “labeled”.
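The spam example can be sketched in a few lines. Here the labels (“spam” / “not spam”) drive the learning: the toy messages are hypothetical, and the “training” step simply collects words that only ever appear in spam:

```python
# Supervised learning in miniature: labeled examples teach the model
# which words signal spam; new messages with those words are flagged.
labeled_data = [
    ("win a free prize", "spam"),
    ("claim your free money", "spam"),
    ("lunch tomorrow?", "not spam"),
    ("project update attached", "not spam"),
]

# "Training": collect words that appear only in spam-labeled messages
spam_words, ham_words = set(), set()
for text, label in labeled_data:
    (spam_words if label == "spam" else ham_words).update(text.split())
spam_words -= ham_words

def classify(text):
    return "spam" if spam_words & set(text.split()) else "not spam"

print(classify("free tickets inside"))  # → spam
print(classify("see you at lunch"))     # → not spam
```

Production spam filters use far more sophisticated models, but the essential ingredient is the same: labeled examples providing the feedback.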

Unsupervised Learning: The opposite of supervised learning, this is where a machine is trained on “unlabeled” data; no feedback is given to the machine for its training. With unsupervised learning, machines can identify patterns without human intervention, making it ideal for data exploration.
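To see the contrast with supervised learning, here is a sketch in which the data carry no labels at all, yet a simple clustering rule (a one-dimensional version of k-means with two centers; the numbers are hypothetical) discovers two natural groups on its own:

```python
# Unsupervised learning in miniature: no labels are given, yet
# repeatedly (1) assigning each point to its nearest center and
# (2) moving each center to its group's average reveals two clusters.
data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]
centers = [min(data), max(data)]  # crude starting guesses

for _ in range(5):  # refine the centers a few times
    groups = [[], []]
    for x in data:
        nearest = min((0, 1), key=lambda i: abs(x - centers[i]))
        groups[nearest].append(x)
    centers = [sum(g) / len(g) for g in groups]

print(sorted(round(c, 1) for c in centers))  # → [1.0, 10.1]
```

The two groups emerge purely from the structure of the data, with no human telling the algorithm what to look for.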

Transfer Learning: This is where a machine learning model already trained on one task “transfers” its “intelligence” to another task. A widely known example is image recognition: a model that has learned to recognize the features of a vehicle can, through transfer learning, be adapted to identify a motorcycle.

AI Terms and Concepts: Gearing up for the Future

The above are just a few terms from a broader sea of AI terminology, but they are a good foundation for anyone building their knowledge for the age of AI. With ideas blooming in AI, humans face the task of being creative with what AI can do for the many aspects of human development. For instance, how can reinforcement learning improve diagnosis in clinical settings? How can natural language processing handle speech-to-text tasks with regional or foreign accents? The space for innovative thinking in AI is immeasurable, and the time to gear up and learn is now.

Esme Lee is a science writer and editor in the UK, carrying a passion for tech copywriting. She has a background in educational neuroscience and holds a PhD from the University of Cambridge.


Stay Up to Date

Get the latest news by subscribing to Acer Corner in Google News.