Funnily enough, Charles Dickens was writing about late-18th-century Europe rather than the state of AI and LLMs in 2026, but here goes:
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair.
For the last few weeks I’ve been coming back to this quotation, again and again. It is the best of times (so far) for AI—you can literally describe an idea for a program or website, and it’s generated for you. Hallucinations are becoming rarer. This is so much more than simply guessing the next word. Honestly, it’s a sufficiently advanced technology that really is indistinguishable from magic (with apologies to Arthur C. Clarke). Whether I’d call this the age of wisdom…I’m not sure yet ;)
But at the same time… it is the worst of times, the age of foolishness, the season of darkness. Bot-farms spewing divisive nonsense all over social media no longer need to copy and paste their false statements in a way that’s easily spotted; instead they can write custom text at scale whilst still giving the illusion of a real person behind the fake accounts. Combine human greed with the speed at which LLMs can generate content and you have an infinite flow of slop spurting all over the internet like a farmer’s muck spreader gone awry. AI voice agents are getting better and are being used to scam people with realistic, targeted calls that would previously have been uneconomical to make at the scale necessary to reap a reward. AI-generated pictures are being used to create hoaxes and flood social media with dangerous rage-bait.
Baby? There went the bathwater 🔗
It might be the best & worst of times, but that doesn’t mean you have to pick sides.
Having lived through cloud computing’s journey from novelty to where it is now, I can see real parallels in how developers in the tech industry are approaching AI. Some, particularly vendors & VCs, are "all in". Others believe it’s a fad or a straight-up con and will give you a list of reasons why. Both extremes are utterly and completely wrong.
If you’re the kind of Bare Metalsson character who believed the cloud was nonsense (iT’s JuSt SoMeOnE eLsE’s CoMpUtEr!11!!) and took pride in racking your own servers (each of which has its own cute name), you’re probably also burying your head in the sand when it comes to using LLMs with cries of BuT It HaLlUcInAtEs AnD yOu CaN’t TrUsT iT!!11.
And just as running a homelab with servers and containers named after Star Wars characters is fun but not an approach you’d use at work, refusing to acknowledge that AI today has the potential to make you more productive as a developer starts to look somewhat childish, or even irresponsible.
Just because AI makes shit up sometimes doesn’t mean it can never be a useful tool for the right job. Strikingly, in the last month or two the list of jobs for which you can use it has suddenly grown drastically. The online chatter has moved from "omg you wouldn’t let an LLM code for you" to "omg how do we review all these PRs", because guess what: all of a sudden people are letting an LLM generate code for them.
AI, and specifically LLMs, is a valuable tool for developers, and one that we need to recognise if we’re not to get left behind.
LLMs are a tool that is evolving…rapidly 🔗
Picture a capuchin monkey sitting on its haunches, using a stone to crack open a nut. Rudimentary, but effective. Would we as developers use a stone when we needed a hammer to bang in a nail? No, that would be stupid—we use the right tool for the job, of course. The hammer is an evolution of the crude stone, and we use it because it’s the best tool for the job. But once the hammer drill came along, would we cling to our manual hammer when we’ve got a nail to bang into a brick wall? Again, no, that would be stupid. We want to use the best tool for the job.
The same evolution of tooling is happening in AI. LLMs are a tool. Magical, bamboozling, at times hilariously wrong tools; but ones that are evolving not over centuries, but over weeks and months.
I’m just talking about developer productivity; nothing deeper 🔗
Some people fundamentally object to LLMs on principle, citing their resource consumption or the threat they pose to mankind. Personally, I believe the cat is out of the bag, the horse has bolted…we’re way past that. Pandora’s box is open, and you and I are not shutting it.
What I would observe is that if you’re working in IT and not already adopting AI and understanding what it can (and can’t) do for you, you might find yourself with a lot more time to discuss these opinions alongside the hansom cab drivers who figured the motor car was a fad and stuck with their horses.
Put somewhat more confrontationally: you may as well be against the internet, or the combustion engine, or atomic energy. All have awful uses and implications; all also serve a role whose importance cannot be overstated. What LLMs are enabling is truly of seismic impact, and I cannot fathom a path forward in which they do not continue to be central to how we do things with computers.
Appeal to authority 🔗
Not convinced by my reasoning above? How about these folk:
You can't let the slop and cringe deny you the wonder of AI. This is the most exciting thing we've made computers do since we connected them to the internet. If you spent 2025 being pessimistic or skeptical on AI, why not give the start of 2026 a try with optimism and curiosity?
— DHH (@dhh) January 3, 2026
Not a fan of DHH? How about Charity Majors:
this year was for AI what 2010 was for the cloud: the year when AI stopped being satellite, experimental tech and started being the mainstream, foundational technology. At least in the world of developer tools. It doesn’t mean there isn’t a bubble. Of COURSE there’s a fucking bubble. Cloud was a bubble. The internet was a bubble. Every massive new driver of innovation has come with its own frothy hype wave. But the existence of froth doesn’t disprove the existence of value.
Or Sam Newman:
To those of you who are deeply pessimistic around the use of AI in software delivery, the old quote from John Maynard Keynes comes to mind:
"The market can remain irrational longer than you can remain solvent".
For a considered look at the uses of LLMs, Bryan Cantrill wrote an excellent RFD: Using LLMs at Oxide
Read the articles linked above, and also check out Scott Werner’s post "The Only Skill That Matters Now", which brings it even more clearly into focus with a nice analogy about how "skating to the puck" is no longer a viable strategy. The long and short of it is that the rate of change in AI means you have no idea where the puck will even be.
The Junior Developer Analogy Holds 🔗
I read an article a while back that I found again here, in which a hospital consultant described their view of LLMs thus:
"Think of it as the most brilliant, talented, often drunk intern you could imagine,"
This was in May 2023 (eons ago, in LLM years).
As an end user of LLMs, I think this mental model really does work. As a senior+ developer, think of an LLM as a very eager junior developer working for you. They’re bright-eyed and bushy-tailed, and goddamnit they talk too much, don’t listen enough, and make stupid mistakes. But…give them a job to do, point them in the right direction, and iterate with them under close supervision…and suddenly you’re finding yourself a lot more productive. Tutored well, a junior developer becomes a force-multiplier, a mini-me.
A common instinct amongst inexperienced senior+ developers tasked with looking after a junior can unfortunately be "I’ve not got time to show them this, I’ll do it myself". As any decent developer knows, that’s a short-sighted and flawed way of developing others (as well as oneself). Mentoring and teaching and nurturing juniors is one step back, two steps forward. And…the same goes for an LLM. Do you have to keep telling them the same thing more than once? Yes. Do they write code that drives you into fits of rage with its idiocy and overcomplexity? Yes. Do they improve each time, and ultimately free up time for you to think about the bigger picture of system design and implementation? Yes.
edit: I’m not intending to imply—as some may take from this—that in drawing the analogy I am actually suggesting we replace junior developers with AI. After all, junior developers learn, and in time become the senior developers who know when Claude is talking bollocks—that pipeline matters. Rather, I’m trying to characterise how one may look at the tool and one’s interactions with it. I am also leaving wide open the question of what the impact of AI on junior developers themselves will actually be. The consequences for the software industry are likely to be vast, but commenting on this is beyond my experience—and there is plenty being written about it elsewhere.
Working with Claude Code over the past few weeks really has convinced me that we’ve now taken a step forward: time invested in learning how to use it (because there is a learning curve) is time well spent.
Previously, using an LLM was not much more than typing "explain nuclear fission in the style of peter rabbit" (or applying various cargo-culted "prompt engineering" techniques).
Now, to get the most out of it for coding, you have to learn about context windows, prompting, and the magical file called CLAUDE.md, and that’s ok.
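To make that concrete, here’s a minimal sketch of the kind of guidance you might put in a CLAUDE.md, the file Claude Code pulls into context at the start of a session. The project layout, commands, and rules below are entirely illustrative, not a prescription:

```markdown
# CLAUDE.md

Standing guidance for Claude Code on this (hypothetical) project.

## Project overview
A small static site: content lives in content/, templates in templates/,
and the build output goes to dist/.

## Conventions
- Build with `make build` and fix any errors before declaring a task done.
- Keep changes small and focused; write imperative-mood commit messages.
- Ask before adding any new dependency.

## Off limits
- Don't modify anything under vendor/ or dist/.
```

The specific contents matter less than the principle: the file gives the LLM the standing context that a junior developer would otherwise have to ask you for at the start of every piece of work.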
Some tools are simple (pick up a hammer and hit something) and others require more understanding (I’m not using a chainsaw anytime soon without training on it first).
Where the analogy falls down 🔗
Junior developers are humans. They get tired, they need rest breaks, they need feeding, and at some point they want to go home. LLMs, on the other hand, will keep on going so long as you keep feeding them tokens.
The impact of this on you as their boss is substantial. You might task your junior developer with a piece of work and they’ll return to you later that day, perhaps with a few interruptions to clarify a point. Claude Code, on the other hand, is like an eager puppy, bounding back and forth and demanding your attention, often every minute or so. I’m still trying to work out how to balance the dopamine hit of each interaction delivering another astounding chunk of functionality against the impact that such rapid context switching has on my brain.
Interacting with Claude Code feels a bit like the hit we get from scrolling short video feeds. One more prompt…one more video…
Because the feedback loop is so fast, it’s also very easy to get drawn down a rabbit hole of changes and either veer off on a side-quest from one’s intended task, or lose sight of the big picture and meander aimlessly down some Frankenstein-like development path that feels fruitful because of the near-instantaneous results but is ultimately flawed.
My call to action: FAFO 🔗
That’s it. Go fuck around, and find out.
Exciting things are happening. Yes, the hype and BS are real and nauseating; but that doesn’t stop it being true.
If you’re interested in the F’ing around and what I Found Out, have a look at the companion post to this one: Cosplaying as a webdev with Claude Code in January 2026.
