📖 Key terms around LLMs, gathered from papers & other resources on my LLM journey and rewritten as one-liners.

⭐ Large Language Model (LLM) is a deep learning model, like the one behind ChatGPT, based on the transformer architecture & trained on huge amounts of text

⭐ Explicit prompts are clear instructions, spelling out the role, task, & output format, that guide an LLM’s behaviour
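
To make that concrete, here is a hypothetical explicit prompt written out in Python; the wording is purely illustrative and not tied to any particular model or API.

```python
# A hypothetical explicit prompt: role, task & output format are all spelled out.
explicit_prompt = """You are a senior Python developer (role).
Review the function below for bugs & style issues (task).
Reply as a bullet list, one finding per line (output format).

def add(a, b):
    return a - b
"""
print(explicit_prompt)
```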

⭐ Implicit prompts guide an LLM’s behaviour without explicit instructions, relying on the model’s general understanding & allowing for more creativity

⭐ Prompt engineering is the craft of designing prompts that guide an LLM towards the best possible output

⭐ Completion is an LLM’s output in response to a prompt

⭐ Generative AI is AI that, given a prompt, creates original content based on patterns learned from existing data

⭐ Hallucination is when an LLM generates output that sounds plausible but is incorrect

⭐ Chain-of-thought prompting tackles complex problems by prompting the LLM to reason through intermediate steps before giving its final answer
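
A minimal sketch of chain-of-thought prompting, assuming a hypothetical `ask_llm(prompt)` wrapper around whichever chat API you use: the prompt explicitly asks for intermediate reasoning before the final answer.

```python
def chain_of_thought_prompt(question: str) -> str:
    # Ask the model to write out its intermediate reasoning before answering.
    return (
        f"Question: {question}\n"
        "Let's think step by step. Show each intermediate step, "
        "then give the final answer on a line starting with 'Answer:'."
    )

# Usage with a hypothetical ask_llm() wrapper around your chat API:
# completion = ask_llm(chain_of_thought_prompt(
#     "A train travels 60 km in 45 minutes. What is its average speed in km/h?"))
```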

⭐ One-shot learning is when a model picks up a new concept from only a single example

⭐ Few-shot learning is when a model picks up a new concept from only a few examples
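
In practice, one-shot & few-shot behaviour is often achieved in-context: the prompt itself carries the worked example(s). A minimal sketch in plain Python, with an illustrative sentiment task & made-up labels, no API assumed:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    # Each (text, label) pair becomes an in-context example the model can imitate.
    shots = "\n\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    return f"{shots}\n\nReview: {query}\nSentiment:"

prompt = few_shot_prompt(
    [("Loved every minute of it.", "positive"),
     ("The plot dragged & the acting was flat.", "negative")],
    "Surprisingly funny and heartfelt.",
)
```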

⭐ Foundation model is a large pre-trained model, like GPT-3, that serves as a starting point for downstream tasks like information retrieval

⭐ Plugins / agents let LLMs call external tools & APIs, unlocking capabilities like web searches for an up-to-date world view or fact-checking
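
A toy sketch of the agent idea: the LLM decides whether it needs a tool, the tool runs, & its result is fed back in. Both `web_search()` & the `ask_llm` callable are stand-ins I have made up, not a real plugin API.

```python
from typing import Callable

def web_search(query: str) -> str:
    # Placeholder tool; a real agent would call a search API here.
    return f"(pretend search results for '{query}')"

def run_agent(question: str, ask_llm: Callable[[str], str]) -> str:
    # One tool-use step: let the model request a search, then answer with the results.
    decision = ask_llm(
        f"Question: {question}\n"
        "If you need a web search, reply exactly 'SEARCH: <query>'. "
        "Otherwise answer directly."
    )
    if decision.startswith("SEARCH:"):
        results = web_search(decision[len("SEARCH:"):].strip())
        return ask_llm(f"Question: {question}\nSearch results: {results}\nAnswer:")
    return decision
```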

⭐ Retrieval Augmented Generation (RAG) is the process of supplementing a prompt with additional information retrieved via web searches or queries over internal documents
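
A minimal RAG sketch, assuming retrieval (embedding the query & fetching matching snippets) has already happened elsewhere: the retrieved text is simply prepended to the prompt as context.

```python
def build_rag_prompt(question: str, retrieved_snippets: list[str]) -> str:
    # Supplement the prompt with retrieved context before handing it to the LLM.
    context = "\n".join(f"- {snippet}" for snippet in retrieved_snippets)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is our refund window?",
    ["Policy doc: refunds are accepted within 30 days of purchase."],
)
```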

⭐ Vector database is a specialized type of database designed to store embeddings & retrieve them efficiently by similarity, e.g. to find documents semantically close to an LLM’s query or output
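
The core operation behind a vector database is nearest-neighbour search over embeddings; here is a tiny cosine-similarity sketch with made-up 3-dimensional vectors (real vector databases add approximate nearest-neighbour indexing to make this fast at scale).

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar(query_vec: list[float], store: dict[str, list[float]]) -> str:
    # Return the key of the stored embedding closest to the query embedding.
    return max(store, key=lambda key: cosine_similarity(query_vec, store[key]))

store = {"doc-a": [0.1, 0.9, 0.0], "doc-b": [0.8, 0.1, 0.1]}
print(most_similar([0.2, 0.8, 0.0], store))  # -> doc-a
```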