AI Panic: It's in vogue, but we need to recalibrate
AI is coming for the jobs. Or is it? We are at a turning point, and there needs to be a public policy reckoning to prevent economic collapse.
I am seeing a lot of weird signals in the tech world around the "AI" revolution. Some people are over-rotating, and many are losing their fucking minds, even people I usually expect rational arguments from. Much of this traces back to a lengthy blog post titled "Something Big is Happening" by Matt Shumer (no relation to Chuckles Schumer), a Tech CEO who makes (checks notes ...) AI tools.

It is probably too long to read, but the TL;DR is that in the last few months, AI tools for software coding have gotten so good that human coders are going to be replaced.
His main points are that most people judge these tools based on older models, so their mental picture lags the current versions. The new "foundation" models are a lot better, and for him in particular, the abiogenesis moment is that these AI tools are now writing the code for the next generation of tools.
Sidebar: One of my favorite SciFi works of all time is "Stand on Zanzibar" by John Brunner, in it there is an advanced computer named "Shalmaneser" that is the CPU of the world, driving all the media, research and government data.
Throughout the story, there are hints that the smartest people are trying to determine if this computer has achieved consciousness.
Near the end, there is a clip of Shalmaneser thinking to itself "Christ, what an imagination I have"
A great read, but it takes some getting into. Written in the 1960s, it does a lot of context shifting that mirrors the modern world, where we are constantly bombarded and multitasking.
He then devolves into all sorts of metrics and standardized benchmark results, as if these model builders (Anthropic, Meta, OpenAI, and others) aren't gaming the system. Sort of like VW cheating on the emissions tests in Dieselgate. But I digress.
Couple that with the seemingly innocuous drop of some agents by Anthropic that do things like help lawyers and other white-collar workers. That alone led to 20-25% declines in the market value of SaaS companies, a category that grew through the 2010s fueled by gobs and gobs of VC funding, giving us companies like Salesforce and others that have flooded enterprises to the dismay of the workers (I personally loathe SFDC, and any application built on their shitty framework).
Shumer also points out that the middleware (and middle staff) of industries that employ a LOT of people are on the chopping block:
- Law
- Finance
- Medicine
- Accounting
- Consulting (good fucking riddance)
- Writing/design/marketing
- Customer support & service
Basically, all the knowledge work that drives a LOT of the middle class.
This is dire indeed. But is it true?
Sweaty's experience
First, I do use AI in my work. I am a senior-level product manager in a tech field (we build training to support well-known career certifications), and I can say that it works well for what we use it for.
I wrote about 9 'graphs of shit that nobody wants to read, so I deleted them.
I can confidently say this: a process that used to require me, several content SMEs (Subject Matter Experts), and some instructional designers 2-6 weeks to complete, producing output about 80% uniform with the last design and 90% uniform with the next one, can now deliver something better matched to the need of the moment, and more right than wrong, in about 4 hours (actually, most of that time is human review).
Yikes.
That leads to the problem that is not only coming, but is lapping at the shores of corporate America RIGHT NOW.
The dearth of a rising generation
In business, you typically have a range of people: fresh-out-of-college or "early in career" hires, plus a ladder of people further along in their career journeys.
The current crop of AI tools is very good at the early-in-career work: the things you assign to interns or junior people, like doing simple research, reading and summarizing reports or patents, and in general getting their feet wet.
And the leadership of companies is rubbing its hands with glee at the chance to disrupt this pipeline of talent. I mean, why hire junior employees to do the mundane work when you can just use these AI tools? Right?
Anyhow, it is clear that the current crop of college graduates is taking longer to find that first job, and often not landing that introductory, career-laddering role at all. If there aren't waves of younger people getting their start, gaining experience, and learning what corporate life is like (mock it if you will, but it is key to white-collar prosperity), they will remain on the lower rungs of the middle class.
And that breeds resentment and angst.
So, what are the thinkers blathering on about?
Why is Sweaty writing this?
Some "serious" thinkers are opining, and going far beyond their lanes. The Matt Shumer piece has become the hot topic, and it is causing some rational people to lose their fucking minds.
A couple of things to keep in mind:
- Matt Shumer is the CEO of a smallish AI company (a startup) that builds wrappers around the underlying foundation models[1], so his opinions might be shaded by that, and
- Matt most assuredly used an AI Chatbot to draft and polish this exposé.
First up, we have JVL (Jonathan V. Last, of The Bulwark) weighing in with serious-sounding words, but he's not really immersed in the tech.

From the Friday, February 13, 2026 Triad newsletter
Since that is paywalled, I printed to a PDF, and you can download it here.
He starts off taking Mr. Shumer at face value, because he's a tech believer, and amidst this chaos the LLMs[2] are breaking a lot of brains.
Still, his prognosis is largely correct. For good or bad, the LLMs are changing society really rapidly, and the line between human and machine interaction is getting blurred.
His instinct is to call this what it is: an ecosystem, short for ecological system. You know, back in high school you likely took Biology and learned about the cycle of life: how the oceans, the deserts, the forests, and the flora and fauna all come together in a large feedback system that allows life to exist on Earth.
That is unless you went to a fundamentalist private school and were taught "God" did it all.
Still, he calls for improving our educational system to teach kids about the system of the world.
Get kids thinking about how an ecosystem works and they can learn how a financial market, or an industry, or a network functions. It helps them understand stable-states, and systemic shocks, and evolutionary change. There’s a lot to learn.
One of the big lessons of ecology is that complex systems are tremendously resilient and adaptable if the change comes slowly enough. Complex systems are not vulnerable to change so much as they are vulnerable to shocks—sudden, rapid change.
That’s what worries me most about AI.
Amen, that is good. But it is clear that the tech leaders, the VC funds, and the oligarch class are mashing the accelerator on this stuff.
In 2025, it is estimated that the tech majors spent about $400B on infrastructure for AI. In 2024, that was $294B.
Care to guess what they are planning for 2026? Between $600B and $660B will be sunk into datacenters, power, networking, computers, and Nvidia GPU chips.
That is an insane amount of money. Amazon alone just announced that it will spend $200B in 2026 chasing this dream. That spend is what is known as CapEx, or Capital Expenditure: money sunk into physical assets.
An aside: In a past life, I worked for a company that made measurement equipment for lithography, photomasks in particular. The Du Pont corporation, known for chemicals, polymers, and industrial goods, decided to enter the space to make photomasks.
Making photomasks requires a lot of equipment. "Writers" to create the pattern, chemical baths to develop the photoresist, acid baths to etch the chrome away to leave the pattern that prints the chip.
Because of Moore's Law, the technology was changing rapidly, and every 5 years or so a new writer had to be purchased for $10M and up (now they are way more expensive) as the market evolved.
But Du Pont was used to their physical plant being "good" and useful for 30+ years. Hell, just down the street from the Santa Clara mask shop was a Du Pont fiberglass plant that was built in the 1950s and still turned out fiberglass that people bought.
Corporations usually expect these CapEx investments to last decades.
That aside was to set the stage: usually these expenditures sit on the books for a long time, depreciating predictably throughout their usable life and providing tax benefits.
But AI Data Centers are not assets that have a long horizon.
Every year, Nvidia releases a new generation of GPU, faster, more efficient, yada yada.
And computers age and begin to fail. In the before times, the big data center companies depreciated their compute assets on about a 3-year horizon. That is, they would replace their servers and storage every three years, upgrading to newer, better systems, and continue to provide value to their clients.
Now, these majors have all gone nuts and decided to stretch that to 6 years for their AI data centers. As if a GPU bought today in 2026 will still be providing value as a 6-year-old chip in 2032. All us techies know that those fuckers will replace them with newer versions.
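To see why that stretch matters, here is a back-of-the-envelope sketch. The $200B figure is just a round hypothetical, and it assumes simple straight-line depreciation with no salvage value, which is a simplification of how any of these companies actually keep their books:

```python
# Back-of-the-envelope: how stretching the depreciation schedule
# changes the annual expense hit. Straight-line depreciation with
# no salvage value -- a simplification; real accounting is messier.

def annual_depreciation(capex_billions: float, years: int) -> float:
    """Straight-line: spread the cost evenly over the useful life."""
    return capex_billions / years

capex = 200.0  # hypothetical round number, in billions of dollars

for horizon in (3, 6):
    expense = annual_depreciation(capex, horizon)
    print(f"{horizon}-year schedule: ${expense:.1f}B expensed per year")

# Output:
#   3-year schedule: $66.7B expensed per year
#   6-year schedule: $33.3B expensed per year
# Same cash out the door either way, but the 6-year schedule halves
# the annual expense, which flatters reported profits today.
```

Same money spent, but the longer schedule makes this year's income statement look a lot healthier. Until the hardware actually gets replaced early, and the remaining book value has to be written off.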
Side note: If you want to roll in the mud about the economics, I highly recommend a subscription to Ed Zitron's "Where's your Ed At" newsletter.
For a counter: I read many people on AI, and Gary Marcus, an expert in cognitive science, is much more measured. His take on the Matt Shumer screed is a lot better:

Click to read, as Gary doesn't paywall
As I am almost at 2K words already, I will just jump to the end of Marcus' piece:
The bottom line is this: LLMs are certainly coding more, but it’s not clear that the code they are creating is secure or trustworthy. Shumer’s presentation is completely one-sided, omitting lots of concerns that have been widely expressed here and elsewhere.
A lot of people may have taken his post seriously, but they shouldn’t have.
Amen. If you are at all interested in the science behind generative AI, and cutting through the hype, I recommend Marcus wholeheartedly.
Wrapping up
I hope this gives you some light at the end of the tunnel, that we aren't truly fucked. But things are still bad.
Far too much of the current economic strength is tied up in some man-children spending way too much chasing a unicorn that will eventually collapse. There are already signs that OpenAI in particular is on a knife's edge between living and dying.
But AI as it exists today isn't going away. You can't stick your head in the sand and ignore it.
It will take a lot of caution to wade through the bullshit, and to ignore the overblown hype.
What I do worry about is that in 10-15 years there will be no mid-career professionals on track to become senior leaders, and by then it will be too damn late to address the issue. I do not know what the answer is, but it isn't more baristas and delivery drivers for the gig economy. The politics are going to get very tricky, very soon (and are already tricky).
Anyhow, I hope you have a great day, and thanks for making it this far. If you have thoughts, drop a comment or reply to this email, and let's continue the discussion!
If you made it this far and aren't a subscriber, sign up today. Always free!
1 - A foundation model is something like OpenAI's GPT (the engine behind ChatGPT), Anthropic's Claude, Google's Gemini, or Meta's Llama. I guess you could add xAI's "Grok" into that mix, but it is a fuckin' dumpster fire of Muskian self-aggrandizement.
2 - LLM is Large Language Model: the "transformer" technology applied to a ginormous data set with some slick statistical algorithms that feel like magic, but in reality they are, and always have been, really slick autocorrect systems that predict the most likely next bit of text (tokens, not even whole words). They do not think, they can't apply reasoning (they claim to, but that is just recursive and iterative guessing), and they can't know truth or falsity.
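If that "slick autocorrect" description sounds abstract, here is a toy sketch of the next-token guessing step. The vocabulary and scores are made up for illustration; a real model does this over a vocabulary of roughly 100,000 tokens, with scores produced by the transformer:

```python
import numpy as np

# Toy illustration of "slick autocorrect": the model assigns a score
# (logit) to every candidate next token, softmax turns those scores
# into probabilities, and decoding picks one. Vocabulary and numbers
# here are invented for illustration.

vocab = ["dog", "cat", "taco", "regret"]
logits = np.array([2.1, 1.3, 0.4, -0.5])  # hypothetical model scores

# Softmax: exponentiate (shifted for numerical stability), normalize.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for token, p in zip(vocab, probs):
    print(f"{token:>7}: {p:.2f}")

# Greedy decoding: just take the single most likely token. Chatbots
# usually sample from the distribution instead, which is part of why
# the same prompt can produce different answers.
print("next token:", vocab[int(np.argmax(probs))])
```

That loop, run over and over, one token at a time, is the whole trick. No model of the world, no notion of true or false, just the most statistically plausible continuation.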
