Module 1 · Lesson 1 of 5
What AI Tools Actually Do
The short version
AI tools predict useful text based on patterns, instructions, and context. They are not search engines. They do not look things up. They generate a response that fits the shape of what you asked — and that response can be wrong, incomplete, or confidently misleading even when it reads well.
Understanding this one fact changes how you use these tools. Everything else in this module builds from it.
What AI tools actually do
Large language models — the technology behind tools like ChatGPT, Claude, Copilot, and most AI writing assistants — are trained on large amounts of text. During training, they learn patterns: which words tend to follow other words, how sentences are structured, what a professional email sounds like, how legal documents are typically written.
When you type a prompt, the model generates a response by predicting what would come next given your input. It is doing very sophisticated pattern-matching. It is not retrieving a fact from a database. It is not reading a document you haven't given it. It is not reasoning the way a person reasons.
Key idea: The model generates what sounds correct based on patterns in its training data. It has no way to verify whether what it generates is actually true.
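To make "predicting what would come next" concrete, here is a toy sketch in Python. The sample text and the word-pair counting are purely illustrative: real models use neural networks trained on enormous amounts of text and work with fragments of words rather than whole words. But the job is the same, continue the text with the most likely next piece, with no step that checks whether that continuation is true.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word tends to follow which word
# in a small sample of text, then "predict" the most common follower.
# Real language models learn far richer patterns from far more text,
# but the task is the same: continue with whatever is most likely,
# with no step that checks whether the continuation is true.
sample_text = (
    "the client signed the agreement "
    "the client reviewed the agreement "
    "the client signed the engagement letter"
)

words = sample_text.split()
followers = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the sample text."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("client"))  # "signed": the most common pattern, not a verified fact
print(predict_next("the"))     # "client": again, just frequency, not meaning
```

Notice that nothing in this sketch knows what a client or an agreement is. It only knows what usually comes next. That is the property to keep in mind when you read a fluent, confident AI answer.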
What AI is good at
- Drafting. First drafts of emails, memos, letters, summaries. The model is good at producing structured text that matches a style or format you describe.
- Summarizing. Long documents, call notes, intake forms. Give it the source material and ask for a summary — this is one of the higher-reliability uses.
- Organizing information. Turning unstructured notes into structured lists, tables, or outlines.
- Brainstorming. Generating options, variations, or approaches when you want a starting point rather than a final answer.
- Editing and rewriting. Improving clarity, adjusting tone, simplifying language. Good as a second pass, not a replacement for professional judgment on substance.
- Answering questions about documents you provide. When you paste in a contract, policy, or set of notes, the model can answer questions about that specific text reasonably well. A minimal sketch of this pattern follows this list.
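That last item works because the document travels with the question: everything the model "knows" about your matter has to be inside the prompt you send it. Here is a minimal sketch of that pattern. The names `build_document_prompt` and `call_model` are hypothetical, for illustration only, and the instruction wording is just one reasonable choice rather than anything a particular tool requires.

```python
def build_document_prompt(document_text: str, question: str) -> str:
    """Combine a document you provide with a question about it.

    The model only "sees" what is inside this prompt, so the document
    has to be included. Anything not pasted in here is not available to it.
    """
    return (
        "You are helping review the document below. "
        "Answer only from the document. If the answer is not in the "
        "document, say so instead of guessing.\n\n"
        "--- DOCUMENT ---\n"
        f"{document_text}\n"
        "--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )

# `call_model` is a stand-in for whichever AI tool or API your firm uses.
# prompt = build_document_prompt(policy_text, "What are the notice requirements?")
# answer = call_model(prompt)
```

The useful habit is the one the sketch makes visible: if the document is not in the prompt, the model is answering from general patterns, not from your document.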
What AI is not good at
- Facts it wasn't trained on. Anything after its training cutoff, anything niche enough not to appear frequently in training data, anything about your specific firm or client that you haven't provided.
- Accurate citations. AI tools regularly produce plausible-sounding citations that do not exist. Never use a citation from an AI tool without verifying it independently.
- Professional judgment. Whether a conflict exists, whether a clause is enforceable, whether a diagnosis is correct — these require a qualified person, not a pattern-matching system.
- Knowing what it doesn't know. The model will often give you a confident, well-written answer when the correct answer is "I don't have enough information." It does not reliably flag its own uncertainty.
- Current information. Most models have a training cutoff. Unless the tool has real-time web access, it does not know what happened last month.
Practical examples
Good use
You paste in a client's intake notes and ask the AI to draft a summary for the file. You review the draft, correct any errors, and save it. The AI handled the first draft. You handled the judgment and accuracy check.
Risky use
You ask the AI to tell you whether a specific regulation applies to a client's situation without providing the relevant documents. The AI gives you a confident answer based on general patterns — not the current regulation, not your client's specific facts. Verify that kind of output against the actual regulation and the client's facts before acting on it.
Good use
You ask the AI to take a long, dense policy document you pasted in and produce a plain-English summary of the key obligations. You check the summary against the original before sharing it. The AI saved you time on the first pass; you ensured the output was accurate.
The mental model that helps
Think of AI tools as a fast, knowledgeable collaborator who has read a lot but cannot look anything up, cannot verify their own claims, and does not know your specific client, case, or situation unless you tell them. They are good at structure, drafts, and getting you 80% of the way there. Your job is the remaining 20% — the judgment, the verification, and the approval.
If you approach AI tools with that mental model, you will use them well. If you approach them as a system that produces correct answers, you will eventually act on something that isn't.