AI for Small Law Offices: What's Legal, What's Useful, What's Risky

By Leo Guinan — Lancaster, Ohio — 2026-04-07

A solo attorney in Lancaster asked me last year if she should "get AI" for her practice. She'd heard it could cut her document work in half. She'd also heard about the New York lawyer who got sanctioned for filing a brief full of hallucinated case citations generated by ChatGPT.

Both things are true. That's the problem with AI in legal practice right now—it can genuinely help, and it can genuinely wreck your career. The distance between those two outcomes is mostly about knowing where the lines are.

I build AI systems. My track record on predictions is 42%, which I publish because I think you should know when someone giving advice is wrong more often than they're right. What I can tell you with more confidence is what these tools actually do today, what they cost, and where the risk sits. That's what this guide covers.

Special Situation for Law Offices

Most small businesses can experiment with AI and the worst case is a weird social media post or a mildly embarrassing email. Law offices don't get that luxury.

You operate under ethical obligations that most industries don't have. In Ohio, the Rules of Professional Conduct haven't been updated to specifically address AI, but the existing rules already cover most of the territory:

  • Rule 1.1 (Competence) requires you to understand the tools you use. If you use AI and don't understand its limitations, you're potentially violating this rule.
  • Rule 1.6 (Confidentiality) means you can't feed client information into a system that might store, train on, or expose that data.
  • Rule 5.3 (Responsibilities Regarding Nonlawyer Assistants) arguably extends to AI systems. You're responsible for what they produce.

The Ohio Supreme Court's Board of Professional Conduct hasn't issued a formal opinion on generative AI yet, though several other state bars have. Florida, California, New Jersey, and others have published guidance that ranges from "use it carefully" to "disclose it to clients." Assume Ohio will land somewhere similar.

What this means practically: AI in a law office is a tool, not a colleague. You wouldn't file a brief your paralegal wrote without reading it. Same applies here, except AI is worse at knowing when it's wrong.

What AI Can Do Without Legal Risk

The safe zone is administrative work that doesn't involve legal judgment or client data.

Scheduling and calendar management. Tools like Calendly ($12/month) or Acuity handle appointment booking. These aren't AI in the generative sense, but they automate intake scheduling, send reminders, and reduce no-shows. A two-attorney office in Fairfield County told me they cut missed consultations by about 60% just by adding automated text reminders.

Internal communications. Drafting internal emails, staff memos, meeting agendas. If it doesn't contain client information, the confidentiality risk is low.

Marketing content. Blog posts, social media, newsletter drafts. You still need to review for accuracy—especially anything that could be construed as legal advice—but the liability profile is manageable.

Legal research summaries. Tools like CoCounsel (by Thomson Reuters, starts around $100/month per user) and Vincent AI are specifically built for legal research. They cite actual cases and let you verify. They're not perfect, but they're designed for this, unlike general-purpose chatbots.

Time tracking and billing descriptions. AI can clean up your time entries and make billing descriptions more consistent. Clio (plans start at $49/month per user) has AI features that summarize time entries. This is low-risk work.

The pattern: anything where you're the final check, the output doesn't go directly to a court or client, and no confidential information enters the system.

What AI Cannot Do

I'll be direct about this because the marketing from AI companies won't be.

AI cannot practice law. It cannot apply legal judgment to specific facts. It doesn't understand context the way a first-year associate does, let alone an experienced attorney. When it generates legal analysis, it's pattern-matching against training data, not reasoning from principles.

AI cannot reliably cite cases. This has improved with legal-specific tools, but general models like ChatGPT, Claude, and Gemini still fabricate citations. They'll generate a case name that sounds right, assign it a plausible reporter citation, and summarize a holding that seems reasonable—for a case that doesn't exist. Legal-specific tools are better here, but you still verify every citation.

AI cannot maintain privilege. If you paste attorney-client privileged information into a general AI tool, you've potentially waived privilege. The argument that "the AI is like a translator or assistant" hasn't been tested enough in courts to rely on.

AI cannot replace intake judgment. It can collect information, but it can't evaluate whether you should take a case, spot conflicts of interest, or assess whether a potential client's story holds together. That requires experience and instinct that pattern-matching doesn't replicate.

AI cannot guarantee accuracy on jurisdiction-specific questions. Ohio law, Fairfield County local rules, Fifth District Court of Appeals precedent—the more specific you get, the less reliable AI becomes. Training data skews toward larger jurisdictions. Your local practice knowledge is not in the model.

Client Intake Automation

This is where most small offices should start, because the ROI is clear and the risk is manageable.

A basic automated intake system collects information from potential clients before the first meeting. The non-AI version is a web form. The AI-enhanced version adds a conversational layer—a chatbot that asks follow-up questions based on responses.

What works today:

  • Lawmatics ($249/month and up) provides legal-specific CRM with intake automation. It handles forms, follow-up emails, and basic lead scoring.
  • Clio Grow (included in Clio's higher tiers) does similar intake automation with the advantage of integrating directly with Clio Manage.
  • Generic form tools like Typeform ($29/month) or even Google Forms (free) handle basic intake without AI. Don't overlook these. Sometimes a well-designed form beats a chatbot.

The rules for intake automation (a build-your-own sketch follows the list):

  1. Don't let the chatbot give legal advice or assess case viability. It collects information. Period.
  2. Include a clear disclaimer that the interaction doesn't create an attorney-client relationship.
  3. Store collected data in systems that meet your confidentiality obligations—not in a Google Sheet shared with your entire office.
  4. Have a human review intake submissions before the first client contact.
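
If you skip the off-the-shelf tools and have someone wire up the conversational layer for you, rules 1 and 2 can be enforced in the system prompt and shown to the visitor up front. Here is a minimal sketch in Python, assuming the Anthropic API; the model name, prompt wording, and disclaimer text are placeholders to adapt, not a recommendation.

    # Minimal do-it-yourself intake assistant: collects facts, gives no advice.
    # Assumes the Anthropic Python SDK (pip install anthropic) with an API key in
    # the ANTHROPIC_API_KEY environment variable. Model name is a placeholder.
    import anthropic

    DISCLAIMER = (
        "This automated assistant collects information only. It does not give "
        "legal advice, and using it does not create an attorney-client relationship."
    )

    SYSTEM_PROMPT = (
        "You are an intake assistant for a small law office. Ask one question at a "
        "time to collect the visitor's name, contact details, the other parties "
        "involved, and a short description of the matter. Never assess the merits "
        "of a case, never predict outcomes, and never give legal advice. If asked "
        "for advice, say that an attorney will follow up after reviewing the intake."
    )

    client = anthropic.Anthropic()

    def intake_reply(conversation):
        # conversation is a list of {"role": "user" | "assistant", "content": str}
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder; pin whatever model you approve
            max_tokens=300,
            system=SYSTEM_PROMPT,
            messages=conversation,
        )
        return response.content[0].text

    # Show the disclaimer before any conversation starts (rule 2), then respond.
    print(DISCLAIMER)
    print(intake_reply([{"role": "user", "content": "I think I'm being evicted. Can you help?"}]))

The point of the sketch is the constraint, not the plumbing: the assistant's only job is collecting facts, and the disclaimer appears before the conversation starts.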

A solo practitioner handling real estate closings and estate planning could realistically save 3-5 hours per week on intake processing. At $250-350/hour, that's $750 to $1,750 of capacity recovered every week, assuming you fill those hours with billable work.

Document Drafting Assistance

This is where the opportunity is biggest and the risk is highest.

AI can produce a first draft of common documents—engagement letters, basic contracts, discovery requests, motions. The quality varies. For routine, template-heavy documents, it's often 70-80% of the way there. For anything requiring nuanced legal analysis, it's unreliable.

Tools worth evaluating:

  • CoCounsel handles document review, deposition preparation, and contract analysis. It's the most mature legal-specific option. Pricing isn't public but expect $150-300/month per user depending on the package.
  • Spellbook (by Rally) focuses on contract drafting and review. It integrates with Microsoft Word and is trained on legal agreements. Around $500/month, which prices out many solo practitioners.
  • Claude or ChatGPT with careful prompting can draft simpler documents if you never paste client-specific confidential information and you treat every output as a rough draft. Claude Pro is $20/month. ChatGPT Plus is $20/month.

The workflow that keeps you out of trouble:

  1. Draft a template yourself first. This is your baseline.
  2. Use AI to generate variations or fill in standard language.
  3. Review every word. Not skim—review.
  4. Never submit AI-generated text to a court without verifying every factual claim and citation.
  5. Keep records of what AI generated and what you modified, in case questions arise later. (A simple way to do this is sketched below.)
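
Step 5 doesn't need special software. Saving the raw AI draft, your final version, and a diff between them covers it. A minimal sketch in Python using only the standard library; the folder layout, matter ID, and file names are illustrative:

    # Keep a record of what the AI generated versus what you actually used.
    # Standard library only; folder layout and matter ID are illustrative.
    import difflib
    from datetime import date
    from pathlib import Path

    def log_ai_draft(matter_id, ai_draft, final_text, folder="ai_drafts"):
        out = Path(folder) / matter_id
        out.mkdir(parents=True, exist_ok=True)
        stamp = date.today().isoformat()
        (out / f"{stamp}_ai_draft.txt").write_text(ai_draft)
        (out / f"{stamp}_final.txt").write_text(final_text)
        # The unified diff shows exactly which language the attorney changed.
        diff = difflib.unified_diff(
            ai_draft.splitlines(keepends=True),
            final_text.splitlines(keepends=True),
            fromfile="ai_draft",
            tofile="final",
        )
        (out / f"{stamp}_changes.diff").write_text("".join(diff))

    log_ai_draft("2026-0042-engagement-letter",
                 ai_draft="Dear Client, ...",       # what the model produced
                 final_text="Dear Ms. Smith, ...")  # what you actually sent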

The attorneys I've seen get value from this treat AI like a very fast, very unreliable first-year associate. You wouldn't sign what they wrote without redlining it. Same principle.

The Confidentiality Question

This deserves its own section because it's the issue that should keep you up at night.

When you type client information into ChatGPT, Claude, Gemini, or any cloud-based AI tool, that data leaves your control. The companies say different things about what they do with it:

  • OpenAI (ChatGPT): Free tier data may be used for training. Paid tiers and API usage have opt-out options, but read the current terms carefully.
  • Anthropic (Claude): Similar structure. Business and API tiers offer stronger data protections.
  • Microsoft Copilot for Microsoft 365: Enterprise tiers promise data stays within your tenant. This is probably the strongest data protection option for small offices already using Microsoft 365 Business ($22/user/month for the base, Copilot adds $30/user/month).

Practical guidance:

  • Use the API or business tiers, not consumer products, for anything touching client information.
  • Better yet, don't put identifiable client information into any external AI system. Anonymize first. Replace names with placeholders. Remove case numbers. (A rough script for this is sketched after this list.)
  • If you can't anonymize the information without losing the context the AI needs, don't use AI for that task.
  • Document your AI usage policy in writing. If you're in a firm with other attorneys, make sure everyone follows it.
  • Consider telling clients you use AI tools in your practice. Several state bar opinions are moving toward requiring disclosure. Getting ahead of this is easier than catching up.
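
For the anonymization step, a short script can act as a mechanical backstop: swap known names for placeholders and strip case numbers, phone numbers, and Social Security numbers before anything is pasted into an external tool. A minimal sketch in Python; the patterns are illustrative, they will miss things, and they are no substitute for reading what you paste:

    # Rough anonymizer: swap known names for placeholders and strip case numbers,
    # phone numbers, and SSNs before text goes to an external AI tool.
    # The patterns are illustrative -- re-read the output before pasting it anywhere.
    import re

    def anonymize(text, names):
        # names maps real names to placeholders, e.g. {"Jane Doe": "[CLIENT]"}
        for real, placeholder in names.items():
            text = re.sub(re.escape(real), placeholder, text, flags=re.IGNORECASE)
        # Common case-number shapes like "2026 CV 00123" or "26-CV-123"; adjust to your courts.
        text = re.sub(r"\b\d{2,4}[- ]?[A-Z]{2,3}[- ]?\d{3,6}\b", "[CASE NO.]", text)
        text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
        text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
        return text

    print(anonymize("Jane Doe, case no. 2026 CV 00123, called back from 740-555-0199.",
                    {"Jane Doe": "[CLIENT]"}))
    # -> "[CLIENT], case no. [CASE NO.], called back from [PHONE]."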

A three-attorney office here in Lancaster could implement a workable AI policy in an afternoon. Write down what tools are approved, what data can and cannot go into them, and who reviews AI output before it goes anywhere. Pin it to the wall. Follow it.

Practical Starting Point

Don't try to transform your practice. Start with one problem.

Pick the task you hate most that doesn't involve client confidential information. For most small offices, that's one of:

  • Writing blog posts or newsletters for marketing
  • Cleaning up billing descriptions
  • Drafting initial versions of routine correspondence

Use Claude (free tier) or ChatGPT (free tier) for that one task for 30 days. Track how much time it saves. Track how many errors you catch. At the end of the month, you'll have real data about whether AI is useful for your specific practice.

If it saves time and the error rate is acceptable, expand to the next task. If it doesn't, you've lost nothing.

The firms that get burned are the ones that adopt everything at once, feed client data into systems they don't understand, and submit AI output without reviewing it. The firms that benefit are the ones that move slowly, stay skeptical, and treat AI as what it is: a tool that's sometimes useful and sometimes wrong.

Start Here

This week, do one thing: go to Claude.ai or ChatGPT and draft one blog post for your firm's website on a topic you know well. Don't paste any client information. Just pick a common question your clients ask—"What happens if I die without a will in Ohio?" or "How long does an eviction take in Fairfield County?"—and ask the AI to draft a 500-word answer. Then edit it for accuracy.

You'll learn three things in about twenty minutes: what AI gets right in your practice area, what it gets wrong, and how much editing is actually required. That's better intelligence than any vendor demo will give you. And it costs nothing.

Want the full playbook? The book covers all of this in depth — and it’s free.

Get the Free PDF
