Common Generative AI Terms
- Generative AI: A type of artificial intelligence that can create new content, such as text, images, music, code, or videos, based on patterns learned from existing data.
- Large Language Model (LLM): A deep learning model trained on massive amounts of text data, capable of understanding, generating, and manipulating human language. Examples include GPT-3.5, GPT-4, and Claude; ChatGPT is a chat application built on top of such models.
- Tokens: The basic units of text that LLMs process. A token can be a word, a sub-word unit, or a punctuation mark; input text is split into tokens before the model processes it.
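A toy sketch of tokenization, using only a regular expression. This is an illustrative assumption, not how production tokenizers work: real LLM tokenizers use learned schemes such as byte-pair encoding, which also split rare words into sub-word pieces.

```python
import re

def toy_tokenize(text):
    # Toy tokenizer: treats each word and each punctuation mark
    # as one token. Real LLM tokenizers (e.g. byte-pair encoding)
    # additionally break rare words into sub-word units.
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("Hello, world!"))
# → ['Hello', ',', 'world', '!']
```

Even this crude split shows why token counts differ from word counts: punctuation and word fragments each consume a token from the context window.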
- Context Window: The maximum number of tokens an LLM can consider at once when processing input and generating output. A larger context window allows for longer conversations and more complex prompts.
- Prompt: The input text or instructions given to a Generative AI model to elicit a specific response or output.
- Prompt Engineering: The art and science of crafting effective prompts to guide Generative AI models to produce desired outputs, optimizing for accuracy, relevance, and style.
- Zero-Shot Prompting: Asking an LLM to perform a task without providing any examples in the prompt, relying on the general knowledge and language understanding it acquired during training.
- Few-Shot Prompting: Providing an LLM with a few examples of input-output pairs within the prompt itself to demonstrate the desired task and improve performance.
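The few-shot idea can be sketched as plain string assembly. The helper name and the sentiment-labeling task here are illustrative assumptions, not part of any particular API:

```python
def few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs.

    The examples demonstrate the task; the final unanswered 'Output:'
    invites the model to continue the pattern.
    """
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("great movie!", "positive"), ("waste of time", "negative")],
    "loved every minute",
)
print(prompt)
```

The trailing bare `Output:` is the key trick: the model completes the established input-output pattern rather than being told the task explicitly.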
- Chain-of-Thought (CoT) Prompting: Encouraging an LLM to generate step-by-step reasoning before arriving at a final answer, improving performance on complex tasks.
- Temperature: A parameter that controls the randomness of an LLM’s output. Higher temperatures lead to more creative but potentially less coherent responses, while lower temperatures yield more focused and deterministic outputs.
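Temperature works by dividing the model's raw scores (logits) before the softmax that turns them into token probabilities. A minimal stdlib-only sketch of that scaling:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature before applying softmax:
    # T < 1 sharpens the distribution (more deterministic),
    # T > 1 flattens it (more random/creative).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # sharper: top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: choices more even
```

At a low temperature the most likely token captures almost all of the probability mass; at a high temperature the alternatives become competitive, which is where the extra "creativity" (and incoherence risk) comes from.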
- Hallucination: When a Generative AI model produces incorrect, nonsensical, or fabricated information that is presented as factual.
- Fine-tuning: The process of further training a pre-trained LLM on a smaller, specific dataset to adapt it to a particular task or domain.
- Retrieval Augmented Generation (RAG): A technique that enhances LLMs by retrieving relevant information from an external knowledge base before generating a response, grounding the AI in factual data.
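The RAG pipeline (retrieve, then generate from the retrieved context) can be sketched end to end. The word-overlap retriever below is a deliberately naive stand-in: real systems rank documents by embedding similarity in a vector database.

```python
def retrieve(query, documents, k=1):
    """Toy retriever: rank documents by word overlap with the query.
    Production RAG systems use embedding similarity instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, documents):
    """Ground the model by prepending the retrieved context to the prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Eiffel Tower is 330 metres tall.",
    "Paris is the capital of France.",
]
print(build_rag_prompt("How tall is the Eiffel Tower?", docs))
```

Because the answer-bearing passage is pasted into the prompt, the model can quote the external knowledge base instead of relying on (possibly hallucinated) parametric memory.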
- Embeddings: Numerical representations (vectors) of text, images, or other data that capture semantic meaning, allowing AI models to understand relationships between different pieces of information.
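"Capturing semantic meaning" becomes concrete when you compare embeddings with cosine similarity: vectors for related concepts point in similar directions. The 3-dimensional vectors below are made-up toy values; real embedding models produce hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: near 1 for
    similar meanings, near 0 (or negative) for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings, not output from a real model.
cat = [0.9, 0.8, 0.1]
kitten = [0.85, 0.75, 0.2]
car = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, kitten))  # high: related meanings
print(cosine_similarity(cat, car))     # low: unrelated meanings
```

This same comparison is what powers the retrieval step in RAG and semantic search generally: the query is embedded, and the nearest document vectors are returned.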
- Latent Space: An abstract, multi-dimensional space where Generative AI models represent and manipulate data. The process of generating content involves navigating this space.
- Diffusion Models: A class of generative models, popular for image generation, that work by gradually adding noise to data and then learning to reverse the process to create new data.
- Generative Adversarial Network (GAN): A framework consisting of two neural networks (a generator and a discriminator) that compete against each other to produce highly realistic synthetic data.
- Multimodal AI: Generative AI models capable of understanding and generating content across multiple modalities, such as text, images, audio, and video.
- Transformer Architecture: The foundational neural network architecture that powers most modern LLMs, known for its ability to process sequential data and capture long-range dependencies.
- Content Moderation: Processes and tools used to ensure that AI-generated content adheres to safety guidelines, ethical standards, and legal requirements, preventing the creation of harmful or inappropriate material.