Plain definitions for the terms that get thrown around before anyone explains what they mean. Start here.
Section 01
The Core Concepts
Large Language Model (LLM)
What it actually is
A statistical system trained on enormous amounts of text. It predicts the most likely next word given what came before. That prediction engine, run at scale and refined on human feedback, produces what feels like a conversation. Claude, GPT-4, and Llama are all LLMs.
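A real LLM is vastly more sophisticated, but the core idea, predicting the likely next word from what came before, can be sketched with a toy word-counting model (the corpus and words here are made up for illustration):

```python
from collections import Counter, defaultdict

# Toy corpus. A real LLM trains on trillions of words, not eleven.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    # "Predict" by picking the most frequent follower seen in training.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Real models replace word counts with a neural network over whole contexts, but the job is the same: given the text so far, output the most likely continuation.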
Transformer
The architecture
The neural network design that makes modern LLMs work. Published by Google in 2017. You do not need to understand the math. You need to know it is the engine inside every major AI model today — the way knowing a car has an internal combustion engine is useful context, not a prerequisite for driving.
Introduced in the 2017 Google paper "Attention Is All You Need."
Vector / Embedding
How AI stores meaning
Words, sentences, and documents get converted into lists of numbers that capture meaning and context. Things with similar meaning end up close together in that number space. This is how AI can find documents that mean the same thing even when they use different words. Embeddings are the foundation of most enterprise search and retrieval systems built on AI.
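"Close together in that number space" is usually measured with cosine similarity. A minimal sketch, using made-up three-dimensional vectors (real embeddings have hundreds or thousands of learned dimensions):

```python
import math

# Hypothetical embeddings, invented for illustration only.
embeddings = {
    "refund":        [0.90, 0.10, 0.00],
    "reimbursement": [0.85, 0.15, 0.05],
    "firewall":      [0.00, 0.20, 0.90],
}

def cosine_similarity(a, b):
    # 1.0 means identical direction (same meaning); near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["refund"], embeddings["reimbursement"]))
print(cosine_similarity(embeddings["refund"], embeddings["firewall"]))
```

"Refund" and "reimbursement" score near 1.0 despite sharing no letters beyond coincidence; "firewall" scores near 0. That is the trick behind meaning-based search.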
Prompt
Your instruction to the model
The text you send to an AI model. A vague prompt produces vague output. A well-structured prompt with clear context, constraints, and an example of what good looks like produces reliable output. Prompt engineering is the practice of writing prompts that work consistently across many uses.
The Most Important Rule Nobody Tells You
Most problems do not need AI. A rough working rule: 60% of business problems belong in a traditional database or spreadsheet. 30% belong in simple if-then logic. About 10% actually require a language model. Starting with AI before asking which of those layers the problem belongs on is the most common and costly mistake organizations make.
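The triage itself can be written as plain if-then logic. This is a hypothetical sketch, not a formal test, but it captures the order of the questions worth asking:

```python
def pick_layer(needs_exact_lookup: bool, rules_are_enumerable: bool) -> str:
    """Rough triage mirroring the 60/30/10 rule above (illustrative only)."""
    if needs_exact_lookup:
        # Known records, exact answers: a query, not a conversation.
        return "database / spreadsheet"
    if rules_are_enumerable:
        # If you can write the rules down, write them down.
        return "if-then logic"
    # Only ambiguous language, judgment, or open-ended generation is left.
    return "language model"

print(pick_layer(needs_exact_lookup=True, rules_are_enumerable=False))
```

Notice the language model is the fallthrough case, not the starting point.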
Section 02
Tools You Will Hear About
Claude Code
Anthropic's coding agent
A command-line tool that runs Claude as an autonomous coding assistant. You give it a task and it writes, runs, tests, and revises code on your computer. It can read your files and execute commands. Useful for developers. Less relevant if you are still learning the basics.
Project (Claude Projects)
Persistent context in Claude.ai
A feature in Claude.ai that lets you give the model standing instructions and reference documents. Instead of re-explaining your company, role, and preferences every session, you load that once and it stays. Useful for ongoing work that needs consistent context.
Agent / Agentic AI
AI that takes actions
An AI system that can use tools — search the web, run code, read files, call APIs — to complete a multi-step task on its own. A chatbot answers questions. An agent executes workflows. The term is widely overused. When someone says "agentic," ask what tools it actually has and what decisions it makes without a human in the loop.
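The pattern behind the buzzword is a loop: the model picks a tool, the program runs it, the result goes back to the model, repeat until done. A minimal sketch with the model's decision simulated by a hard-coded planner (real agents put an LLM call inside plan_step, and the tool bodies here are stand-ins):

```python
def search_web(query: str) -> str:
    # Stand-in tool: a real agent would call a search API here.
    return f"results for: {query}"

def run_code(snippet: str) -> str:
    # Stand-in tool: a real agent would execute this in a sandbox.
    return f"ran: {snippet}"

TOOLS = {"search_web": search_web, "run_code": run_code}

def plan_step(task, history):
    # Simulated "model decision": choose the next tool, or None to stop.
    if not history:
        return ("search_web", task)
    return None  # this toy planner finishes after one step

def run_agent(task: str) -> list:
    history = []
    while (step := plan_step(task, history)) is not None:
        tool_name, argument = step
        history.append(TOOLS[tool_name](argument))  # execute the chosen tool
    return history

print(run_agent("Q3 revenue figures"))
```

When someone says "agentic," this loop is what to ask about: what is in TOOLS, and what does plan_step get to decide without a human checking it.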
RAG (Retrieval-Augmented Generation)
Giving the model your documents
A method where relevant documents from your own data are retrieved and included in the prompt before the model responds. This lets it answer questions about internal policies, products, or data without retraining. Most enterprise AI use cases are some version of RAG.
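The mechanics fit in a few lines. This sketch retrieves by naive keyword overlap; production systems use embeddings and a vector database for the retrieval step, but the shape (retrieve, then paste into the prompt) is the same. The documents are invented examples:

```python
# Hypothetical internal documents.
documents = {
    "vacation_policy": "Employees accrue 1.5 vacation days per month.",
    "expense_policy": "Expenses over $50 require a receipt.",
}

def retrieve(question: str) -> str:
    # Score each document by shared words with the question.
    # Real RAG replaces this with embedding similarity search.
    words = set(question.lower().split())
    return max(documents.values(),
               key=lambda doc: len(words & set(doc.lower().split())))

def build_rag_prompt(question: str) -> str:
    context = retrieve(question)
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}")

print(build_rag_prompt("How many vacation days do employees accrue?"))
```

The model never gets retrained; it just gets handed the right page before answering. That is why RAG is cheap to update: change the documents, not the model.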
Fine-tuning
Retraining on your specific data
Continuing to train a pre-trained model on your own examples so it learns your style, domain, or format. More expensive and complex than prompt engineering or RAG. Usually only justified when you have thousands of consistent labeled examples and a very specific, repeatable task.
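What "thousands of consistent labeled examples" looks like in practice is usually a JSONL file of input/output pairs. The field names vary by provider, so treat this as a hedged sketch of the shape, not any specific vendor's schema:

```python
import json

# Illustrative prompt/completion pairs for a repeatable summarization task.
# A real fine-tuning set needs thousands of these, all in a consistent style.
examples = [
    {"prompt": "Summarize: Q3 revenue rose 12% on cloud growth.",
     "completion": "Q3 revenue +12%, driven by cloud."},
    {"prompt": "Summarize: Churn fell to 2.1% after the pricing change.",
     "completion": "Churn down to 2.1% post-repricing."},
]

# JSONL: one JSON object per line, the common format for training data.
with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

If you cannot produce a file like this at scale, with consistent outputs, fine-tuning is probably the wrong tool and prompt engineering or RAG will serve you better.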
Section 03
Where to Actually Start
Step 1: Use the tools before you study the theory
Recommended order
Open Claude.ai or ChatGPT. Spend two weeks using it daily for real work — drafting, research, summarizing, writing code. Get familiar with how it fails, where it produces wrong answers, and what kinds of prompts get good results. Theory lands much faster once you have hands-on experience to attach it to.
Step 2: Learn prompt structure
The highest-leverage skill
Role, context, task, constraints, output format. Learn to write prompts with all five elements and your results will improve immediately. This is the skill with the fastest return on time invested, and it transfers across every AI tool you will ever use.
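The five elements can be assembled mechanically. The section labels and sample values below are illustrative, not a required syntax; what matters is that all five are present:

```python
def build_prompt(role, context, task, constraints, output_format):
    # Join the five elements into one prompt string.
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ])

prompt = build_prompt(
    role="You are a financial analyst.",
    context="Our SaaS company sells to mid-market retailers.",
    task="Summarize the attached Q3 results for the board.",
    constraints="Maximum 200 words. No jargon.",
    output_format="Three bullet points, then one sentence on risk.",
)
print(prompt)
```

Compare that to "summarize this" and the difference in output quality is immediate: the model no longer has to guess who it is writing for, what matters, or what the result should look like.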
Step 3: Pick one domain and go deep
Avoid breadth-first learning
AI for writing, AI for coding, AI for data analysis, AI for customer support — these are different skill sets. Pick the one closest to your actual work and learn it thoroughly before branching out. Trying to learn everything at once is why people end up back at "I don't know where to start."