The Goalpost That Walks
Sometime around 2012, I was sitting in a folding chair at a card table in my friend's living room, shuffling a sixty-card deck of red and black cardboard, when the conversation wandered the way conversations do between rounds of Magic: The Gathering. Someone mentioned Tesla, or maybe it was self-driving cars in general, and the thread pulled until it landed on artificial intelligence. My friend looked at me, because I was the one in the room who wrote software for a living, and he asked me a simple question. Do you think AI is possible?
I told him no.
Not maybe. Not "it depends on what you mean by intelligence." No. I said it the way you say something you have thought about long enough to feel settled in, the way you put down a land card with no hesitation because you already see the play three turns ahead. Human language, I explained, is too vast, too fuzzy, too lossy. There is too much ambiguity baked into the way we communicate. Sarcasm, idiom, context, the way a single word like "run" can mean a hundred different things depending on whether you are talking about software, stockings, rivers, or elections. The bandwidth problem alone would bury any system that tried to parse natural language in a meaningful way. I had been writing code for more than a decade at that point, and I thought I understood the shape of the problem well enough to know where the walls were.
I think about that evening a lot these days, usually while I am sitting at a different kind of table, asking a machine to help me write code, debug systems, and think through architectural problems that would have taken me days to untangle alone. The walls I was so sure about turned out to be scenery. Painted flats on a stage I mistook for the edge of the world.
There is a particular flavor of humility that comes from being confidently, profoundly wrong about something in your own domain. It does not taste like embarrassment exactly. It tastes more like waking up in a house you thought you knew and finding a room you have never seen before. The foundation did not change. Your map of it was just incomplete, and the incompleteness was invisible to you precisely because you trusted the map so thoroughly.
I bring this up because the conversation around artificial general intelligence has reached a strange kind of impasse, and I think the shape of my wrongness in 2012 has something to do with why.
By every benchmark that mattered a decade ago, AGI is here. The machines pass the Turing test. They ace the bar exam, the medical boards, the SATs. They write publishable prose, generate functional code that ships to production, compose music, and hold conversations that are, by any honest reckoning, indistinguishable from talking to a knowledgeable human on the other end of a chat window. The boxes are checked. Every single one of them. And yet the champagne stays corked, because something curious happens every time a machine reaches a milestone we once considered definitive. We move the milestone. We add a new criterion, some deeper and more elusive quality that the machine has not yet demonstrated, and we say "well, that is impressive, but it is not really intelligence." The finish line got up and walked away while we were mid-stride, and the strange part is that almost nobody finds this strange.
There is a name for this in psychology. It is called the "AI effect," and it has been happening for decades. Every time a machine masters something we thought required intelligence, we reclassify that thing as "not really intelligence." Chess was intelligence until Deep Blue won. Go was intelligence until AlphaGo won. Natural language understanding was intelligence until large language models started doing it fluently. The definition of intelligence, it turns out, is "whatever machines cannot do yet." It is a moving target by design, though the design is not conscious. It is reflexive. Something deep in us needs intelligence to remain a human thing, and so we keep redrawing the boundary to protect the territory.
This is worth sitting with, because it reveals something that has very little to do with machines and almost everything to do with us. The goalpost does not move because the technology is inadequate. It moves because we are uncomfortable with what it means if the technology is adequate. If a machine can do everything we once said required general intelligence, and we still do not call it intelligent, then the word "intelligence" was never about capability in the first place. It was about identity. It was a story we told ourselves about what makes us special, and the arrival of capable machines threatens the story, so we rewrite the criteria to keep the story intact.
Alan Watts used to talk about this pattern in a different context. He described how we spend our lives chasing a future state of arrival, some moment when we will finally be the thing we are trying to become, and how the cruel trick is that the moment of arrival always dissolves upon contact. The goal was never to arrive. The goal was to keep wanting to arrive, because the wanting is what gives the journey its structure. Take away the destination and you are left standing in the middle of your own life with nothing to reach for, and that is a kind of freedom most people find terrifying.
AGI functions the same way in the cultural imagination. It is the destination that must never be reached, because reaching it would force a reckoning that nobody is prepared for. Not the existential reckoning about machines surpassing us, though that will come on its own terms. The reckoning about what we were really measuring when we said "intelligence" all along.
And this is where the economic incentives align so neatly with the psychological ones that it becomes almost impossible to separate them. Keeping AGI perpetually "almost here" is extraordinarily good for business. As long as we have not arrived, the next ten-billion-dollar raise is justified. The next GPU cluster, the next breathless keynote about the threshold we are about to cross. The ambiguity is not a failure of definition. It is a product. You sell the approach, never the arrival, because the moment someone declares victory the conversation shifts from what we might build to what we have actually built, and that is a much harder pitch to make to investors who are buying futures, not present tense.
I see a version of this pattern on the homestead, scaled down to something you can hold in your hands. There is a fantasy of self-sufficiency that floats around the homesteading world, a vision of the fully realized small farm where you grow everything you eat and answer to nobody. People chase it the way the tech industry chases AGI, always one more system away, one more animal, one more harvest from "being there." But the families actually feeding themselves are not waiting for the fantasy to crystallize. They are milking the goats they have, planting the seeds they can afford, and preserving whatever comes in this season. The gap between the label and the lived work is where the real sustenance lives, and the label, if you are not careful, becomes the thing that keeps you from eating.
The people building genuinely useful things with AI right now have figured this out, whether they would articulate it that way or not. They stopped waiting for a philosophical threshold to be crossed. They are shipping with what exists today, and what exists today would have read as outright fantasy to the version of me sitting at that card table, so sure of where the walls were that I could not see the door.
What we have is staggeringly capable narrow intelligence. It is not general in the way the word was originally meant, this vague notion of a machine that can do anything a human can do with the same fluid adaptability. But it is reshaping how software gets written, how research gets done, how businesses operate, how medicine gets practiced, and how people learn. The honest question, the one that does not get enough air in a discourse obsessed with milestones, is whether the distinction between "narrow but transformatively useful" and "general" matters to anyone outside of a funding pitch or a philosophy seminar.
I suspect it does not. I suspect the label is the land card I put down too confidently in 2012, a play that felt certain at the time but was based on an incomplete read of the board.
The real conversation, the one we keep deferring, is not about AGI at all. AGI arrives the way the tide comes in. One wave at a time, each a little higher than the last, each normalized before the next rolls through. You do not notice the water rising because every individual wave looks manageable. That rising tide is the tremor already under our feet, and we have been living with it long enough that it feels like solid ground.
The earthquake is ASI. Artificial superintelligence. Something that does not just match human reasoning but surpasses it in ways we cannot follow, cannot predict, and cannot contain with the governance structures we have now. That is the discontinuity, the moment where the rules of the game change so fundamentally that all prior intuitions, mine included, become scenery again. Painted walls that looked so solid until someone walked through them.
I do not know when that happens. Nobody does, and anyone who tells you they do is selling something. But I know this much. I was wrong at the card table in 2012, and the shape of my wrongness taught me something more valuable than being right would have. It taught me that the walls you are most certain about are the ones most likely to be painted on. The edges of possibility are not fixed. They are stories we tell, and stories can be rewritten, sometimes by us, and sometimes by the things we build.
We keep redefining "enough" the moment we get there. In AI, in careers, in life. The arrival never feels like an arrival because we have already moved the marker by the time our foot hits the ground. Maybe the question was never "when do we get there." Maybe it was always "why do we need there to remain somewhere we have not been yet?"
I am still shuffling cards. The game just got a lot more interesting than I thought it could.