GAI Insights Team

Generative AI Glossary

The GAI Insights Team helps make sense of all of these new terms. Please reach out to Paul.baier@gaiinsights.com if you have GAI terms for which you'd like clear and concise definitions, and our team will update this resource.



Chatbot

A software program that can simulate human conversation. Chatbots differ from Generative AI in how they produce responses: a traditional chatbot follows a pre-defined set of rules to generate its replies, whereas Generative AI uses deep neural networks to generate human-like text based on a given prompt.

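To make the distinction concrete, a rule-based chatbot can be sketched in a few lines of Python; the keywords and canned replies below are invented for the illustration, whereas a generative model would produce the reply text itself rather than look it up.

```python
# A minimal rule-based chatbot sketch: responses come from a fixed lookup,
# not from a learned model. Keywords and replies are invented for illustration.
RULES = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "price": "Our starter plan is $20 per month.",
    "refund": "Refunds are processed within 5 business days.",
}

def rule_based_reply(user_message: str) -> str:
    text = user_message.lower()
    for keyword, canned_reply in RULES.items():
        if keyword in text:
            return canned_reply
    return "Sorry, I don't understand. Could you rephrase?"

print(rule_based_reply("What are your hours?"))
# -> "We are open 9am-5pm, Monday through Friday."
```
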
Generative AI (GAI)

A branch of artificial intelligence that focuses on generating new content using machine learning algorithms. It includes ChatGPT and similar tools that create content from given inputs or instructions. Traditional applications of AI largely classify existing content, while generative AI models create it.

Deep Neural Networks (DNN)

A type of machine learning model built from many layers of artificial neurons that learn from data. Deep neural networks can identify patterns and relationships in large amounts of data, making them a powerful tool for businesses to gain insights and make data-driven decisions.
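A minimal sketch of the stacked-layer structure, assuming the PyTorch library is available; the layer sizes are arbitrary and chosen only for illustration.

```python
import torch
from torch import nn

# A small feed-forward network: the stacked layers are what make it "deep"
# and let it learn progressively more abstract patterns from the data.
model = nn.Sequential(
    nn.Linear(10, 64),   # 10 input features -> first hidden layer
    nn.ReLU(),
    nn.Linear(64, 32),   # second hidden layer
    nn.ReLU(),
    nn.Linear(32, 1),    # single predicted value, e.g. a score
)

x = torch.randn(8, 10)   # a batch of 8 examples with 10 features each
prediction = model(x)
print(prediction.shape)  # torch.Size([8, 1])
```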

Machine Learning (ML)

A subset of AI that involves the development of algorithms that enable machines to learn from data.

Natural Language Processing (NLP)

A computer science and artificial intelligence field that uses machine learning algorithms to help computers understand human language. It helps computers translate text from one language to another, respond to spoken commands, and summarize large volumes of text rapidly.
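As one everyday NLP task, the sketch below summarizes a short passage, assuming the Hugging Face transformers library is installed (the default summarization model is downloaded on first use); translation and speech recognition follow the same pattern with different pipeline names.

```python
from transformers import pipeline

# Build a ready-made summarization pipeline; the passage below is invented
# for the example.
summarizer = pipeline("summarization")

article = (
    "Natural Language Processing lets computers read, interpret and generate "
    "human language. It powers translation tools, voice assistants and systems "
    "that condense long documents into short summaries for faster review."
)

result = summarizer(article, max_length=30, min_length=10)
print(result[0]["summary_text"])
```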

Horizontal AI

AI systems with broad applicability that work across various domains or industries to solve a wide range of problems. Cortana, Siri and Alexa are examples of Horizontal AI.

Narrow AI

AI systems that are designed to perform specific tasks or solve specific problems within a limited domain or area of expertise, for example, predicting customer churn.

Prompt

A phrase, question, or instruction entered into a generative AI model to elicit specific content or responses.
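A minimal sketch of sending a prompt to a model programmatically, assuming the OpenAI Python client with an API key set in the environment; the model name is illustrative and other providers follow a similar pattern.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt is simply the instruction handed to the model.
prompt = "Summarize the benefits of generative AI for small businesses in three bullet points."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```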

Prompt Engineer

An expert user of various modes of Generative AI, who specializes in creating effective prompts for optimal results.

AI risk management

The practice of identifying and controlling the new types of risks that AI introduces, such as bias or opacity, through integrated audit software solutions and constant oversight. While AI/ML can handle and analyze large volumes of unstructured data, automate repetitive tasks, and provide real-time and predictive insights, businesses must implement risk-management strategies, much as with any other new technology tool.

AI bias

The underlying prejudice in the data used to create AI algorithms. It occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions in the machine learning process, and it can result in discrimination and other harmful social consequences. Machine learning bias generally stems from problems introduced by the individuals who design and/or train the machine learning systems: they may create algorithms that reflect unintended cognitive biases or real-life prejudices, or they may introduce bias by using incomplete, faulty or prejudicial data sets to train and/or validate the systems.

AI ethics

A branch of ethics that seeks to address the ethical issues raised by the development and deployment of artificial intelligence. It refers to the moral principles and values that govern the behavior of machines and humans creating, using, and interacting with AI systems. It is concerned with ensuring that AI systems are designed and used in ways that are ethical, transparent, and accountable.

Explainable AI (XAI)

A set of methods and processes that describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making, and contrasts with the “black box” concept in machine learning, where even the AI’s designers cannot explain why it arrived at a specific decision.

AI governance

Refers to the design, implementation, and use of ethical, transparent, and accountable AI technology. It aims to reduce bias, promote fairness and equity, and help facilitate the interpretability and explainability of outcomes, goals that are particularly pertinent in an educational context.

AI transparency

Refers to the ability of a human to peer into the workings of an AI model and understand how it reaches its decisions. It also involves disclosing when AI is being used in various contexts, such as predictions, recommendations, decisions, or interactions. AI transparency is important for ensuring trustworthiness in AI.

Hallucination

In the context of GAI, "hallucination" generally refers to the creation of outputs, such as images, text, or sound, that do not accurately represent the original input data, often because they don't exist in the training set. Generative AI models are trained to learn patterns from a given dataset, and they use these patterns to generate new content. However, if these models misinterpret the underlying patterns or encounter unfamiliar inputs, they might "hallucinate" details that were not present or intended. For instance, in the case of image generation, a model might add objects or details to a scene that weren't there. Similarly, in text generation, the model could generate sentences or phrases that are out of context or do not make sense based on the input.


Temperature

In the realm of GAI, "temperature" refers to a parameter used in the generation process, particularly when a model is selecting the next output (be it a word, a pixel, a musical note, etc.) based on probabilities. The temperature setting controls the randomness of the outputs. When the temperature is high, the AI model is more likely to make random choices, which can lead to more diverse but potentially less accurate or coherent outputs. Conversely, when the temperature is low, the model is more conservative and more likely to choose the most probable output, leading to more accurate but potentially less diverse or creative results.
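The mechanism can be sketched directly: temperature rescales the model's raw scores before they are turned into probabilities, so low values concentrate probability on the top choice and high values flatten the distribution. A minimal Python illustration with made-up scores:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores into sampling probabilities."""
    scaled = np.array(logits) / temperature
    exps = np.exp(scaled - np.max(scaled))  # subtract max for numerical stability
    return exps / exps.sum()

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate next tokens

print(softmax_with_temperature(logits, temperature=0.2))  # ~[0.99, 0.01, 0.00] - nearly deterministic
print(softmax_with_temperature(logits, temperature=1.0))  # ~[0.63, 0.23, 0.14]
print(softmax_with_temperature(logits, temperature=2.0))  # ~[0.48, 0.29, 0.23] - flatter, more random
```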

WINS Work

Work in which tasks, functions, possibly your entire company or even your entire industry depend on the manipulation and interpretation of Words, Images, Numbers and Sounds (WINS). Heart surgeons and chefs are knowledge workers but not WINS workers. Software programmers, accountants, marketing professionals and many more are WINS workers. Think of GAI as power tools for WINS work. Would you hire a carpenter today without a skill saw or a roofer without a nail gun? Every WINS work task, sub-process and process within your firm should be evaluated for potential leverage with GAI.


