We're Not Ready for AI
But maybe not the way you're thinking
Some years ago, a younger version of me held a particular philosophical belief. Normally I wouldn’t put much faith in the thoughts I had while barely out of high school, but I feel this one deserves to be revisited. Basically, it was that many of humanity’s problems could be traced back to the fact that our pace of technological advancement had far outstripped our rate of advancement in the humanities. This created an imbalance and left us constantly “catching up” in our ability to process the way technology was being used. I even advocated for a pause in some technological development so we could all take a civilizational breath. We needed to get used to what we had before we invented the next thing. Slow ‘er down there, Rocket.
Over time this idea evolved. Certainly this is always the way technology and humanity co-evolve, right? You can’t possibly be ready for a discovery that hasn’t been made yet. Perhaps this is just in our nature. This was one of many ideas explored in the 1995 anime movie Ghost in the Shell. Our hero, Major Kusanagi, muses that every time a technological achievement is at hand, “…humanity will reach out and grab it, like it’s damn near instinctive.”
I grew up in what I have seen referred to as the “Xennial” sub-generation, and my formative years were bookended by movies like The Terminator and The Matrix. Both show a dystopian version of the future where artificial intelligence takes over the Earth and either destroys or enslaves humanity. Grim stuff.
Now, I’m not some Zizian cultist who believes that the emergence of general artificial intelligence is not only likely but inevitable, that said AI would be an artificial being bent on increasing human suffering, and that the inevitability of that outcome means we must prepare for it now, a la Pascal’s Wager. I do think there are very serious physical constraints on the material, energy, and water demands of the data centers needed to support AI. But this is not why I think we aren’t ready. The reason is closer to my original thinking from a couple of decades ago.
Humanity isn’t ready for AI because of who we are, or at least because of the world we’ve made.
Stories in the media about advances in artificial intelligence are nearly impossible to avoid. On the one hand, AI is making our lives infinitely easier and unburdening people of menial work. On the other hand, it’s stealing entry-level jobs and robbing people of future earnings by taking away their ability to gain needed experience. Still other stories spin narratives that young people “don’t want to work anymore.”
All of these truths could coexist more easily if not for our current political and economic arrangements. Isn’t the point of technology to make our lives easier? Why develop it otherwise? And if our lives get easier thanks to new technologies and we get lazy, so what? That’s why we did it! The problem is our cultural values around “earning a living.” It’s an insidious idea: that you don’t deserve life, you have to earn it. And if you aren’t earning it, you’re either a drain on the system or a cheat. Labor-saving advancements put us in a Catch-22 where many people would no longer have to work, but would be looked down upon for not working.
So what’s the answer? We could go to war against the thinking machines and destroy the computers, like the Butlerian Jihad in the Dune universe. Or we could rearrange our socioeconomic structures to match our level of technological development, avoiding the technofeudalism that some in Silicon Valley might advocate for. I’m not sure which of those seems more likely. Could we really put a halt to the frenzied pace at which AI is being developed? There are too many interested parties with too much money to stop it now. But those same people with all that money are also the ones keeping us from moving our society in a direction more aligned with letting machines do our work for us.
The unfortunate reality is that those leading the charge to develop artificial intelligence would seek to leverage it to subjugate others and further enrich themselves.
Whether we seek to curtail the advancement of technology and put power back in the hands of workers, or accelerate the next leap forward and finally enjoy the spoils of all this technology, we won’t do it with those currently in charge. Paraphrasing a sentiment I’ve heard before: the end of capitalism seems impossible, but so did the divine right of kings.



What strikes me about this piece is how closely it echoes a pattern I keep seeing: our technology is evolving faster than the stories, values, and assumptions we use to make sense of it. It's like the world we've built has quietly outgrown the mental models we're still trying to navigate it with.
AI is a perfect example. The problem isn't that the technology is too new or too powerful. It's that we're trying to fit it into cultural and economic beliefs that were built for a completely different era -- ideas about work, productivity, "earning a living," and who deserves what. When those old assumptions meet exponential tech, the result isn't progress. It's friction, confusion, and fragility.
What you're describing -- the mismatch between our tools and our worldview -- is the real reason we're "not ready." Not because AI is alien or unstoppable, but because we haven't updated the way we interpret what it means, who benefits, and how society should adapt.
This is where the weak signals show up:
• Technology removing work faster than our culture can accept it
• Power concentrating in the hands of the builders
• Old narratives about labor and value cracking under pressure
• People sensing a shift but not having a shared language to describe it yet
And when we don't evolve our interpretations, we fall back into deterministic thinking: either "AI saves us" or "AI destroys us." Both miss the point. The real challenge is developing the cognitive and moral flexibility to navigate a world that no longer fits our inherited assumptions.
I think you're right: the issue isn't whether we can build AI, but whether we can rethink the systems, norms, and incentives that decide how AI is used. Without that shift, even good technology can produce bad outcomes.
In other words, technological acceleration without interpretive evolution creates systemic fragility. Only those willing to question their inherited assumptions -- and read the subtle drift early -- will be prepared to navigate what's coming.