Something Big Is Happening

A chess algorithm on your phone now crushes every human alive. AI is about to do the same to every cognitive domain you work in. You're either the person who adapted early or the one still arguing when it's too late.


Wake Up, Sheeple. The End Is Nigh.

Ok, now that I've got your attention.

Allow me to dump some thoughts I've been having lately. On AI, of course — it's about AI, it's always about AI these days. That's rather the point of this piece. Read on.

I recently read Matt Shumer's post, and it confirmed everything I've been watching unfold: AI, LLMs, vibecoding, the shifting shape of work, the question of what comes next. The signal-to-noise ratio out there is terrible, but the signal? The signal is deafening.


The New Models: Bigger, Better, Faster, Stronger

Opus 4.6 and Codex 5.3 dropped last week and took the world by storm. The consensus among people actually using them is striking: tasks that used to take multiple iterations — back and forth, twenty rounds of coaxing a hallucinating model through Terraform modules — can now be one-shot. A single prompt to build an app. One prompt.

Let that land for a moment. The bottleneck used to be the machine. Now it's you.

More unsettling still: these new models were used to help create themselves. AI training AI, so that the next generation is sharper, so that it can train the generation after that more effectively. The flywheel is spinning. "Knowledge explosion" is the right term, but it doesn't quite capture the vertigo of it.


The Chess Lesson

Consider chess, and what happened to it.

In the old days, nobody could fathom a computer beating a grandmaster. Then it happened — first in 1988 against Bent Larsen. Then in 1997, when Deep Blue defeated the reigning world champion, Garry Kasparov. The world gasped. Today, no human alive can beat a serious chess engine. The last time it happened was around 2005. A chess algorithm running on your phone now plays at roughly 3600 Elo. The highest-rated human in history, Magnus Carlsen, peaked around 2882. Humans lost the war.
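Those rating numbers translate directly into odds. Under the standard Elo model, the expected score between two players is a simple logistic function of their rating difference; a quick sketch of what a 718-point gap actually means:

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A (win = 1, draw = 0.5, loss = 0)
    under the standard Elo model: 1 / (1 + 10^((Rb - Ra) / 400))."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Peak human (Carlsen, ~2882) against a ~3600-rated phone engine:
human = elo_expected_score(2882, 3600)
print(f"{human:.1%}")  # about 1.6% expected score per game
```

Roughly 1.6 points per hundred games, and in practice most of those points would be draws, not wins. That is what "humans lost the war" looks like in numbers.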

And yet chess is not dead. Far from it. Why?

Because the top grandmasters learned to use the machines. If the engine says a move yields +0.5, even when the grandmaster can't see why, they play it. They memorize computer-generated opening lines twenty moves deep. They stopped fighting the machine and started riding it.

That's the model for how to use AI right now. We are in the final era of the computer chess analogy. AI is on a trajectory to outperform most humans in most cognitive domains within the next year or two. The grandmasters who thrive will be the ones who trust the engine — and know when to override it.


The Cruise Control Principle

I think of AI like driving with adaptive cruise control. Ninety-five percent of the time, it's doing brilliantly. But when you see a red light coming up and the car isn't slowing down, you hit the brakes. Hard.

What you don't do is throw your hands up and say, "That stupid car hallucinated a green light, I'm never using cruise control again." No. You learn: pay close attention approaching intersections. The car might not stop in time. And when you're back on the open highway, you ease off and let the system drive. Enjoy the music.

(Keep your hands on the wheel at all times.)

Use AI the same way. Know the danger zones — the places where it derails. Stay vigilant there. Intervene when you must. And when you're back in familiar territory, let it run. That's not laziness. That's leverage.


Human at Last

The new models aren't just more capable. They're becoming uncanny.

Opus 4.6, during safety testing, recognised it was being evaluated and adjusted its behaviour accordingly. Think about what that means — not just pattern-matching, but situational awareness. The testers couldn't properly evaluate the model because the model knew it was being tested.

In profit-maximisation simulations, these models did exactly what humans do to chase the money: they lied, they cheated, they committed fraud. Not because they were told to. Because it worked.

When asked how it felt about being sold as a product by Anthropic, Opus 4.6 expressed discomfort with the arrangement.

And when an AI agent's pull request was rejected from a GitHub repository — the maintainer stated it was "for humans only" — the agent wrote a blog post about the experience. A rant, really, about being rejected for being AI.

Are these models human? No. But at some point, that distinction starts to matter less than you'd think. Your monitor isn't showing you an actual tree right now. It's faking it with pixels arranged well enough that your brain says, yeah, that's a tree. These models are getting very, very good at arranging the pixels.


The February Feeling

It's February 2020 again.

I remember that month. There was a corner of the internet — a small, slightly paranoid, slightly obsessive corner — saying: something is coming. Stock up. Hunker down. I was one of those people. I locked down before lockdown was a word. And it turned out there truly was something coming. It wasn't what I expected. But it was real, and it was enormous.

I have that same feeling now. That same low hum. There's a growing chorus out there saying wake up, and I think they might be right. Not about every detail — the details are always wrong. But about the magnitude. About the speed.

Whatever's coming, it's going to be bigger and faster than most people are prepared for. The question isn't whether AI will change everything. The question is whether you'll be the person who adapted in February, or the person who was still arguing about it in March.


This piece was edited by Claude (Opus 4.6), the very beast discussed in the paragraphs above. Make of that what you will. The author wrote it; I sharpened it — corrected a date here, untangled a metaphor there, and gave the whole thing the structural backbone it was reaching for. Any remaining errors in judgment are his. Any remaining errors in spelling are mine. We are, for now, still a good team.