Linear Accidentally Put the AI Layoff Scam on Trial
The AI layoff memo finally ate itself.

Linear did the funniest possible thing a software company can do in 2026: it copied the dead-eyed AI layoff memo and used it to announce hiring.
Cofounder Tuomas Artman opened with “Today is a hard day”, then said Linear had made the “difficult decision to increase our workforce”. No, it was not “a cost-cutting exercise”. No, it was not about performance. Linear was simply “reimagining every role for the agentic AI era”. Then came the punchline: “We’re hiring. We’re sorry about that.”
The post cleared 270,000 views, and Linear had somewhere between 17 and 25 open roles, depending on whether you believed its careers page or the quoted company detail. Nice little ambiguity. Even the hiring count comes with product-management energy.
The joke landed because the original script already smells rotten.
Cloudflare cut roughly 1,100 people, about 20% of its workforce, while reporting quarterly revenue of $639.8 million, up 34% year over year, the highest quarter in company history. Then leadership wrote the cuts were “not a cost-cutting exercise” but about how a world-class company creates value in the “agentic AI era”.
Sure. The spreadsheet merely slipped and landed on 1,100 employees.
The Trick Needs Everyone to Mistake Motion for Progress
The AI layoff script has become painfully mechanical.
Say the company must move faster. Say AI changes everything. Say smaller teams can now do more. Remove people who understand the systems. Congratulate the remaining staff for entering the future with fewer reviewers, fewer owners, fewer backups, and the same production promises.
The company gets to sound visionary while doing headcount math. Investors hear discipline. Executives hear leverage. Recruiters hear “AI-native”. Engineers hear: congratulations, you now own three abandoned services, a migration somebody announced before reading the runbook, and a security review scheduled after launch because the deck needed momentum.
Linear mocked the script because the script deserves contempt. But Linear also made the whole thing look worse, because Linear sells product development software for “teams and agents” and advertises AI workflows around agents drafting PRDs and pushing PRs.
Even the company selling the future still wants more humans.
Put that sentence on the investor slide and watch the room find religion.
The Productivity Debate Already Leaked, So Follow the Money
In my last piece, I already went through the productivity fantasy: AI-generated code can look fast while the real cost moves into review, ownership, incidents, migrations, and maintenance.
No need to re-run the whole lab report here.
The more useful question now is simpler and meaner: when executives say AI made people unnecessary, which people vanish, and which work remains?
Cloudflare gave the answer away. The company said cuts would hit all teams and geographies, except salespeople who carry revenue quotas.
Beautiful. The agentic AI era has apparently not automated quota-bearing humans yet.
Engineering headcount becomes flexible. Support becomes flexible. Internal operations become flexible. The people who clean up the architecture after the miracle get converted into a margin story. But the people attached directly to booked revenue survive the transformation. Strange how the future keeps sparing the org chart where the forecast lives.
Then Cloudflare CEO Matthew Prince said internal AI use had increased more than 600% in three months, and described team members becoming two, ten, even a hundred times more productive, “like going from a manual to an electric screwdriver.”
Fine imagery. Wrong job site.
Software is not a deck screw. Production does not care how fast the bit spins when someone drills through the wiring.
The Review Step Is Where the Fantasy Starts to Rot
Cloudflare also said virtually its entire R&D team uses its Workers platform, including vibe coding, and that 100% of the code produced that way and deployed into products is “now reviewed by autonomous AI agents.”
Read that like someone holding the pager.
The company cutting humans because AI makes everyone more productive is also bragging about AI reviewing AI-assisted code. Fantastic. The fox now performs quarterly access reviews for the henhouse.
The generated code arrives looking composed enough to survive a tired first pass. Good names. Reasonable structure. Tests with respectable-looking assertions. Nothing dripping blood from the top of the file.
Then the smell shows up somewhere small.
A retry changed shape around money. A permission check moved just far enough to lose its job. Some helper dragged business logic out of a legacy service because the old code looked reusable, when everyone who survived that service knew it was a warning label with indentation.
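That first smell, the retry that changed shape around money, is easy to sketch. Everything below is hypothetical: `Gateway`, `charge_with_retry`, and the toy timeout are stand-ins, not anyone's real payment code. The point is that the wrapper reads cleanly, has good names, and still double-bills.

```python
class Timeout(Exception):
    pass

class Gateway:
    """Toy payment gateway: the first call succeeds server-side, but the
    response is lost in transit, so the client only sees a timeout."""
    def __init__(self):
        self.charges = []
        self._calls = 0

    def charge(self, order_id, cents):
        self._calls += 1
        self.charges.append((order_id, cents))  # money moves here
        if self._calls == 1:
            raise Timeout("response lost after the charge landed")

def charge_with_retry(gateway, order_id, cents, attempts=3):
    # Reasonable structure, tidy loop: survives a tired first pass.
    # The smell: charge() is not idempotent, so retrying a timeout
    # can bill the customer again for a charge that already landed.
    for attempt in range(attempts):
        try:
            gateway.charge(order_id, cents)
            return
        except Timeout:
            if attempt == attempts - 1:
                raise

gw = Gateway()
charge_with_retry(gw, "order-42", 1999)
print(len(gw.charges))  # prints 2: the customer was charged twice
```

The conventional fix is an idempotency key the gateway can use to deduplicate a replayed request, which is exactly the kind of context a pattern-matching model has no way to know this endpoint lacks.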
Executives call this review.
Inside the repo, it feels more like digging through wet cardboard while a release manager asks whether the train is still on schedule.
The reviewer has to remember the April incident, the migration nobody finished, the customer-specific exception nobody dares delete, and the reason the old function had three miserable branches instead of one elegant one. The model saw a pattern. The reviewer has to decide whether the pattern was design, scar tissue, or something written at 2 a.m. while legal watched the incident channel.
Security is looped in, allegedly. The migration owner left during the last restructuring. The reviewer adds one defensive test, leaves a comment with more restraint than the code deserves, and lets the thing through because blocking it now means writing a courtroom brief about why the diff smells expensive.
Weeks later, support reports behavior nobody can reproduce locally.
The postmortem calls it an “unexpected interaction.”
Nobody writes “the model shipped vibes, and a tired human got assigned as containment.”
Smaller Teams Means Fewer People Left to Remember
DeepL cut 250 staff, around a quarter of the company, while saying it wanted to move to smaller teams and compete in the AI era. CEO Jarek Kutylowski called the layoffs a “deliberate structural choice” about how DeepL must operate to remain a global AI leader.
DeepL also said AI should be embedded into “every layer” of operations and that the right time to move is before the shift becomes obvious to everyone.
Lovely timing. Nothing says confidence like firing the people before the operating model proves itself.
There it is again. Structural choice. Every layer. AI era. Corporate fog machine on maximum output.
Smaller teams look clean in the slide because the slide does not have to operate anything.
The damage arrives later, through the ordinary doors. A backfill slows down and nobody knows whether speeding it up breaks reconciliation. A vendor export starts touching data with retention rules hiding in a document from two reorgs ago. The rollback plan stops right before the schema change everyone forgot was one-way.
The missing people return through the backlog.
Actual operational memory gets mistaken for drag when payroll is being optimized. Then the system starts asking for everything leadership removed: old scars, failed cleanups, customer exceptions, abandoned migrations, and weird comments nobody valued because nobody had needed them yet.
Leadership removed the people who knew which parts of the system were load-bearing by accident.
Now a staff engineer gets pulled into the room after the AI story has already been sold upward. They can see the incident forming. Too much generated surface area. Too little ownership. Too many systems crossing each other with nobody willing to say the architecture now depends on vibes and a hiring freeze.
So they translate danger into safe nouns.
Readiness. Verification. Sequencing. Ownership.
The room accepts the language and rejects the consequence.
You removed the people who knew where the bodies were buried, then asked autocomplete to landscape the cemetery.
The date does not move.
The Bill Arrives as Maintenance
Artman told Business Insider he was skeptical when layoffs get attributed to AI, adding: “If you’ve hired the right people and kept your bar high, AI will make that team more capable.”
Painful, because it cuts straight through the scam.
A serious company would use AI to increase the capability of strong teams. A cheaper company uses AI to rename layoffs as transformation.
The people inheriting the repo know how this ends. The miracle becomes a backlog. The backlog becomes risk acceptance. Risk acceptance becomes an incident. The incident becomes a carefully worded note about an edge case. Then leadership asks why velocity dropped.
Someone will point at a dashboard and say AI increased output. Someone else will point at the pager, the abandoned service, the brittle migration, the failing audit control, the rollback plan, and the reviewer now reading generated code like a bomb technician squinting at a gift basket.
The invoice keeps the “agentic AI era” branding for about five seconds.
Then remediation starts billing by the hour.
