I’m going to be blunt because somebody needs to be.
The legal profession is about to get hit by a freight train, and most attorneys are standing on the tracks arguing about whether trains exist.
Over 1,100 lawyers have been sanctioned worldwide for submitting AI-generated fake citations to courts. The Sixth Circuit just added two more last week — 24 fabricated citations, full fee reimbursement, bar referrals. A federal judge in Kansas revoked an attorney’s admission and fined four others in February. California issued its largest AI-related sanction in history — $10,000 — and then penalized opposing counsel for not catching the fakes either.
And the profession’s response? Standing orders. CLE panels. Bar opinions that remind lawyers to do what they were already supposed to be doing.
That’s not a response. That’s rearranging deck chairs on the Titanic.
The Profession Has a Computer Literacy Crisis
Let’s call it what it is. The legal profession is not moving fast enough to adopt the most groundbreaking tools it’s ever had access to. Lawyers have never been the most technically inclined professionals — not because they’re incapable, but because the profession was built on books, paperwork, and precedent, not software. That worked fine for decades. It doesn’t work anymore.
The average attorney in 2026 cannot explain how a large language model works. They don’t know the difference between a search engine and a prediction engine. They don’t understand that when ChatGPT gives them a case citation, it’s not retrieving it from a database — it’s generating text that looks like a citation based on statistical patterns. It might be real. It might be completely fabricated. The model doesn’t know the difference, and neither does the lawyer using it.
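The difference between retrieval and prediction can be made concrete. Here is a toy sketch in Python, assuming nothing real: every token transition and every case name below is invented for illustration. The point is that a generator of this kind emits a perfectly citation-shaped string without ever consulting a database of actual cases.

```python
# Toy illustration (not a real model): a language model picks each next
# token by probability, with no lookup against any database of real cases.
# Every party name and citation below is fabricated for this example.
NEXT_TOKEN = {
    "See":     [("Smith", 0.6), ("Jones", 0.4)],
    "Smith":   [("v.", 1.0)],
    "Jones":   [("v.", 1.0)],
    "v.":      [("Acme", 0.5), ("Doe", 0.5)],
    "Acme":    [("Corp.,", 1.0)],
    "Doe":     [("Indus.,", 1.0)],
    "Corp.,":  [("123 F.3d 456 (9th Cir. 1997)", 1.0)],
    "Indus.,": [("987 F.2d 654 (2d Cir. 1993)", 1.0)],
}

def generate(token="See"):
    """Greedily emit the most probable continuation at each step."""
    out = [token]
    while token in NEXT_TOKEN:
        token = max(NEXT_TOKEN[token], key=lambda t: t[1])[0]
        out.append(token)
    return " ".join(out)

print(generate())
# Emits a citation-shaped string for a case that does not exist.
```

Swap the greedy pick for probability-weighted sampling and the output varies from run to run, which is exactly why the same prompt can surface a real case one day and a fabricated one the next.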
This isn’t a technology problem. It’s a professional competence problem. Rule 1.1 requires competent representation. If you don’t understand how your tools work, you’re not competent to use them. Full stop.
The ABA’s most recent survey found that only 30% of lawyers use AI tools. Among solo practitioners, it’s 18%. Meanwhile, in-house legal departments — the people writing the checks — are above 80% adoption. They already know how long tasks should take with AI. They already know which firms are using it and which ones are billing eight hours for work that takes two.
The firms that haven’t figured this out yet aren’t going to get a warning. They’re going to get fired.
This Isn’t Coming. It’s Here.
The legal AI market hit $3.1 billion this year. It’s projected to triple by 2030. Lawyers using AI report saving over 30 work days per year. More than half report revenue increases. Harvey AI — a legal-specific platform — hit $190 million in annual recurring revenue and is pursuing an $11 billion valuation. One hundred thousand lawyers use it across AmLaw 100 firms.
These aren’t startups pitching a dream. These are operational platforms processing real legal work at scale, right now, today, while most solo practitioners and small firms are still debating whether to try it.
The ABA’s own Task Force on AI published its final report and concluded that AI has moved from experiment to infrastructure. Their recommendation wasn’t “proceed with caution.” It was: stop asking whether to use AI and start figuring out how to integrate it responsibly.
The profession’s leadership is telling you the building is on fire. Most lawyers are still asking if it’s warm in here.
The Sanctioned Lawyers Aren’t the Problem. They’re the Symptom.
Everyone focuses on the lawyers who submitted fake citations. They’re easy targets — lazy, sloppy, didn’t do the basic work. But they’re not the real story.
The real story is the massive number of attorneys who aren’t using AI at all and think that makes them safe. It doesn’t. It makes them slow. It makes them expensive. And it makes them replaceable — not by AI, but by the lawyer down the street who uses AI properly and delivers twice the work at half the cost.
There are two ways to fail in this environment. The first is to use AI recklessly and get sanctioned. The second is to ignore AI entirely and get outcompeted. The sanctions cases are dramatic and public. The outcompetition is quiet and invisible — until the clients stop calling.
Both paths end the same way.
How to Not Get Destroyed
Using AI effectively isn’t rocket science. It’s not even hard. It just requires the willingness to learn something new, which is apparently the tallest order in a profession that still faxes documents.
Here’s what it actually takes.
Understand what you’re using. A large language model is not a legal research database. It’s a prediction engine. It generates text that is statistically probable, not verified. If you understand that one sentence, you’re already ahead of 90% of the profession.
Never accept a first output. Every AI response is a first draft. Treat it like one. Push back. Ask follow-up questions. Make it search for sources. Challenge its conclusions. If it can’t defend its answer when you press it, the answer was probably wrong.
Verify independently. AI gives you a starting point. Your job is to confirm the endpoint. Check the citations. Read the cases. Confirm the holdings. This isn’t a new obligation — it’s the same obligation you’ve always had. AI just made it easier to skip, which is why so many lawyers are getting caught skipping it.
Use the right tools. Not all AI is created equal. Some tools and tiers can search the web and surface sources in real time; others generate purely from training data, and the same product can behave either way depending on the settings of your session. Know which mode you’re in before you rely on a citation. The ability to search, cite, and be challenged in a single conversation changes the dynamic from “generate and hope” to “generate, verify, and refine.” Tool selection matters.
Treat AI like a junior associate. Smart, fast, well-read, eager to please — and absolutely capable of confidently presenting fabricated work if you don’t supervise. You wouldn’t sign a brief a first-year associate wrote without reading it. Don’t sign one AI wrote without reading it either.
That’s it. That’s the whole methodology. It’s not complicated. The lawyers getting sanctioned aren’t failing because AI is hard to use. They’re failing because they’re too lazy or too ignorant to spend five extra minutes verifying the output.
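One way to make the verification step harder to skip is to mechanize the checklist, if not the checking itself. A minimal sketch in Python, using only the standard library: the regex and the sample draft are illustrative, not a complete Bluebook parser, and a human still has to pull each flagged cite and read the case.

```python
import re

# Illustrative sketch: pull every citation-shaped string out of a draft so
# each one lands on a manual verification checklist. The pattern covers a
# few common federal reporters only; it is not a full citation parser.
CITATION = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.(?:2d|3d|4th)|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def citation_checklist(draft: str) -> list[str]:
    """Return every citation-shaped string found in the draft, in order."""
    return CITATION.findall(draft)

# Hypothetical draft text for the example.
draft = (
    "Plaintiff relies on 123 F.3d 456 and 987 F. Supp. 2d 654, "
    "while ignoring 550 U.S. 544 entirely."
)
for cite in citation_checklist(draft):
    print("VERIFY BY HAND:", cite)
```

Note the asymmetry: automation is good at making sure nothing gets skipped, and bad at telling you whether a case is real. The Rule 3.3 obligation stays with the human.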
The Billable Hour Is a Ticking Bomb
Here’s the part the profession really doesn’t want to face.
If AI lets you do eight hours of work in two, and you bill by the hour, you just destroyed 75% of your revenue on that task. At, say, a $400 hourly rate, that’s $3,200 in billings collapsing to $800. This is already creating a silent crisis. Lawyers are using AI personally but hiding it from their firms because the billing model punishes efficiency.
Meanwhile, in-house counsel, who adopted AI years ago, know exactly how long things should take. They’re going to start asking why outside counsel is billing eight hours for a motion that their own team could draft in two with AI assistance. And when that conversation starts happening at scale, the firms that can’t justify their bills are going to lose those clients to firms that charge for value, not hours.
The billable hour model was already under pressure. AI is going to break it. The attorneys who restructure around value-based billing will survive the transition. The ones who keep pretending nothing has changed will be the ones explaining to their partners why revenue dropped 40% and the best clients left.
The Ethical Framework Isn’t Broken. The Lawyers Are.
We don’t need new rules for AI. We need lawyers to follow the rules we already have.
Rule 1.1 says be competent. Understand your tools. Rule 1.6 says protect client data. Don’t paste privileged information into a public AI platform. Rule 3.3 says be honest with the court. Don’t submit work you haven’t verified. Rule 5.3 says supervise your non-lawyer assistants. AI is a non-lawyer assistant.
Every sanctions case is a violation of rules that existed decades before generative AI. The technology didn’t create the problem. It exposed how many lawyers were already cutting corners. AI just made the shortcuts faster and the consequences more visible.
Why I’m Building CaseDesk
AI belongs in every modern law practice. Research, drafting, analysis, case management, client communication, strategy. But only when the attorney using it understands how the technology works and verifies every output against their professional obligations.
But the tools available today aren’t built by people who do what I do. They’re built by software companies that have never managed a caseload, never explained an exposure analysis to a carrier client, never stood in front of a judge and argued a motion they drafted that morning.
That’s why I’m building CaseDesk. A legal case management platform designed by a practicing attorney with a computer science degree. Local AI integration that keeps client data on your hardware — not on someone else’s server training someone else’s model. Workflows designed around how litigation actually works, not how a product manager imagines it works.
The profession needs tools built by lawyers who actually use AI, for lawyers who are ready to start.
Wake Up
The freight train is coming. It’s not slowing down. It doesn’t care whether you understand it, whether you approve of it, or whether you think your practice area is immune.
The lawyers who learn to use AI — and learn to use it right — are going to outwork, outperform, and outlast the ones who don’t. Not because AI makes them better lawyers. Because it frees them to actually practice law instead of drowning in tasks that never required a law degree in the first place.
Over 1,100 lawyers sanctioned and counting. A $3 billion industry and growing. Thirty work days a year in time savings. In-house departments already measuring you against AI benchmarks.
The question isn’t whether AI will change your practice. It already has. The question is whether you’re going to catch up or get run over.
Asaf David Fulks, Esq. is a California attorney, software developer, and the creator of CaseDesk. He writes about the intersection of law, technology, and sound money. Bar #343622.