Anaken

March 20, 2026

The AI Hit

The dopamine loop is real. Quitting feels irrational. And that's not a coincidence - it's just the game.

There's a moment most of us have had by now.

You're sitting with a question - a real one, the kind with some friction in it. Maybe it's a decision you're turning over, a concept you're trying to understand, or just the right word for a sentence that isn't landing. And before you've even consciously decided anything, you feel the pull. Not toward the answer. Toward the window. The chat box. The cursor blinking, ready.

Two seconds later, you have something back. Fluent. Structured. Confident. The friction is gone.

It's worth pausing on how different this is from anything that came before it. Google gave you links - you still had to do the reading, make the judgment, do the work of assembly. Wikipedia gave you a starting point. A calculator gave you a number. Each of these removed some friction while leaving the rest intact.

AI hides the rest.

It doesn't point you toward an answer. It hands you one, already shaped to your question, in your register, at your level. It's not Google on steroids - it's more unsettling than that. Google's friction was visible. You could see the ten blue links. You knew work remained. AI hands you something that feels complete, which means you often don't notice the friction that remains - the verification you skipped, the judgment you outsourced, the understanding that didn't transfer.

The friction isn't gone. It's just invisible now.

And that invisibility isn't just convenient. It's neurologically significant.

The Hit

Here's something neuroscience has known for a while but that rarely makes it into conversations about AI: dopamine isn't really about pleasure. It's about the resolution of uncertainty.

When your brain moves from "I don't know" to "I know" - that transition, that moment of closure - triggers a dopamine response. It's why finishing a puzzle feels satisfying. Why the last piece of information that completes a picture feels disproportionately good. Why checking your phone in a quiet moment delivers a small but reliable lift.

AI has accidentally become the most efficient uncertainty-resolution machine ever built.

Not because it always gets things right - it doesn't. But because it always responds. Every open question gets a closed answer. Every loose thread gets pulled taut. The experience of not-knowing is almost never allowed to linger. And your brain, which is wired to treat that resolution as a reward, notices. Consistently. Across thousands of interactions.

This is the loop. And unlike most loops, it doesn't feel like one from the inside. It feels like being productive.

The Variable Reward Problem

Here's where it gets a little uncomfortable.

The most addictive reward structures aren't consistent ones - they're variable ones. A slot machine pays out sometimes, unpredictably, and that unpredictability is precisely what makes it hard to walk away from. Psychologists call it a variable-ratio schedule of reinforcement - the most robust pattern for producing persistent behaviour ever identified.

AI is variable in exactly this way. Sometimes it nails something so precisely it's almost startling. Sometimes it's slightly off and you catch it - which is its own small reward, the pleasure of being the one who knows better. Sometimes it surprises you with a connection you hadn't made.

You never quite know which version you'll get.

So you keep pulling.
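One way to see why the variability matters is a toy simulation - a sketch in Python, with all the numbers invented. Track a running expectation of reward and measure the "surprise" (the prediction error) each payout produces. Under a fixed schedule, surprise decays toward nothing as the brain learns what's coming; under a coin-flip schedule, it never does.

```python
import random

def mean_surprise(rewards, lr=0.1):
    """Average absolute prediction error while learning to expect a reward.
    A crude stand-in for the dopamine 'surprise' signal."""
    expectation = 0.0
    total = 0.0
    for r in rewards:
        error = r - expectation     # what you got vs what you expected
        total += abs(error)
        expectation += lr * error   # nudge the expectation toward reality
    return total / len(rewards)

rng = random.Random(0)
fixed = [1.0] * 10_000                               # pays out every time
variable = [1.0 if rng.random() < 0.5 else 0.0       # pays out unpredictably
            for _ in range(10_000)]

# A fixed schedule is learned quickly, so surprise decays toward zero.
# A variable schedule keeps generating prediction error indefinitely.
print(f"fixed:    {mean_surprise(fixed):.3f}")
print(f"variable: {mean_surprise(variable):.3f}")
```

The fixed schedule's surprise vanishes almost immediately; the variable one stays high forever. That persistent prediction error is, roughly, what keeps the lever worth pulling.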

The Aware Trap

Awareness is supposed to be the exit.

With every other compulsion we've learned to manage - phones, social media, the inbox refresh - consciousness was the first step out. See the loop, name it, create distance.

But this loop has a second lock. Because even with full awareness, even understanding every mechanism described here, quitting still looks like the wrong move. Not emotionally hard. Logically wrong. The kind of wrong that makes a thoughtful person feel like the irrational one in the room.

Because while you were having your moment of clarity - everyone else kept going.

And that's not a personal failing. That's just the game.

The Game Nobody Consented To Enter

Everyone else is using it too. Not metaphorically. Structurally.

The colleague who turns around a brief in two hours instead of two days. The applicant whose cover letter is sharper than yours. The team that ships faster, writes cleaner, responds quicker. AI is quietly becoming the baseline - for productivity, for output quality, for the pace of professional life.

And when that baseline shifts, stepping back from the tool isn't a personal health decision anymore. It's unilateral disarmament.

This is the prisoner's dilemma nobody consented to enter. You might genuinely prefer to slow down. To sit with the friction. To let confusion do its work. But if the other players aren't doing the same, the rational move - even for someone who sees the trap clearly - is to stay in the game.
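For the game-theoretically inclined, the trap can be sketched as a toy payoff table - numbers invented purely for illustration. Whatever everyone else does, adopting is the better individual reply, even though everyone abstaining would beat everyone adopting.

```python
# Toy payoffs (numbers invented for illustration) for one player's choice
# to adopt or abstain from AI, given what everyone else does.
payoff = {
    # (you, everyone_else): your relative standing
    ("adopt", "adopt"): 0,      # the new baseline: faster, but no edge
    ("adopt", "abstain"): 2,    # you outpace the field
    ("abstain", "adopt"): -2,   # unilateral disarmament
    ("abstain", "abstain"): 1,  # everyone keeps the slower, saner pace
}

for others in ("adopt", "abstain"):
    best = max(("adopt", "abstain"), key=lambda me: payoff[(me, others)])
    print(f"if everyone else plays '{others}', your best reply is '{best}'")

# "adopt" dominates either way - even though mutual abstention (1 each)
# beats mutual adoption (0 each). That's the dilemma.
```

The dominant strategy is to stay in the game, which is exactly why awareness alone doesn't get you out.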

This is different from social media, which you could leave at real cost to your social life but not your survival. This is different from the calculator, which didn't change the baseline of what was expected from you. AI is weaving itself into the fabric of what "good enough" means - and that's not a choice any individual made. It just happened.

FOMO Is The Fuse

This is where the two forces stop being separate and start compounding.

The dopamine loop makes you want to use AI. The structural pressure makes you feel you have to. But FOMO - the ambient, low-grade anxiety of watching everyone else move faster - fuses them together in a way that's harder to untangle.

Because FOMO is itself a dopamine mechanism. The brain treats social exclusion and falling behind as threats, and pushes you to close the gap. Opening AI isn't just resolving a question anymore. It's resolving the anxiety of being left behind at the same time. Two sources of relief, one action. The reward signal doubles. The habit goes deeper.

The dopamine loop gets you hooked. The game theory trap makes sure you can't quit even if you want to. And FOMO fuses them together into something that doesn't feel like compulsion - it feels like common sense.

What We Don't Know Yet

Here is where I want to be honest.

We've run this experiment on human behaviour for maybe three or four years at real scale. There is no longitudinal data. Nobody knows what a decade of this conditioning produces - whether it's like sugar, mild and chronic, or something with a stranger shape we don't have language for yet.

What's worth sitting with is this: the pull toward AI isn't a character flaw. It's not laziness or weakness or a failure of discipline. It's the entirely predictable output of a reward loop meeting a collective action problem, in a world moving fast enough that opting out feels genuinely costly.

Knowing that doesn't make the pull disappear. But it changes the quality of the choice.

The Only Thing Worth Protecting

You probably can't opt out. And this isn't arguing you should.

But there might be something valuable in noticing the pull when it happens. Not every time. Not with guilt. Just occasionally - ah, there it is - before deciding whether to follow it.

The question was never whether to use AI. It's whether you're still the one choosing when.

That small gap is the whole game.


Written in conversation with AI - as a thinking partner, not a ghostwriter. The irony remains intentional.