A Plea for Grace
This is another post about AI. This is also not another post about AI.
I have written at length about AI. I generally do so from my perspective as a gay technologist in America, because that is the identity I live every moment of every day of my life. It is the space I am most familiar with, even if it remains foreign or alien in a lot of ways that I continue trying to understand. It is why I am so nakedly hostile towards AI in its current form, having engaged with - or been victimized by - the very systems it claims to disrupt, enhance, or replace wholesale. I could wax on about it for an entire book, if my OCD would cooperate and my distractibility would subside, and I suspect it would be about as compelling a read as the output of the myriad hangers-on already cozy on the bandwagon.
Instead, I would rather discuss AI from a very human, deeply personal perspective. An argument not of pure logic, or rationalism, or reason, but of empathy and compassion. One that reveals the vulnerability of my person to the coldness of the world at large, that makes a plea for civility in a world starkly lacking it.
But first, let's talk about why I generally don't play poker.
Ruthless Competition
Poker is a straightforward card game where participants attempt to form the best hand possible from a set of cards. Depending on the format of the game, they may do so solely by drawing five cards from the deck, or by combining two private cards in their hand with five more shared with the other players as "community" cards. Regardless of the specific ruleset, the objective remains the same: best hand wins.
The wrinkle with poker that makes it so endearing and popular is its betting structure. See, the best hand wins only if everyone reveals their hands at the end of a round. So long as hands remain unknown, the game is more about probability, reading other players, or social engineering. It is entirely possible to win despite having the weaker hand, provided you can sufficiently bluff your opponent into believing theirs is the weaker hand and folding out of the round. This happens often enough that a cursory search of YouTube for such examples immediately yielded this gem.
When you talk business or strategy, poker is cited as a prime way to hone your skills. Deception, subterfuge, and misdirection all play out in every single hand, and each loss forces you to practice covering your tells, reducing your readability to others, and suppressing your emotions. Cooperation or civility in poker is folly; cut-throat tactics are the only acceptable method of play.
This is antithetical to how I generally operate. I am a cooperator by nature and nurture both, which puts me at a native disadvantage in purely competitive games like poker. To be clear, my skills in maths and probability aren't bad; I have played poker before, and I am sure I will play it again, with results proportionate to the length of time I play and the skills of my opponents - the former being a stronger indicator of success than the latter.
See, I can hyperfocus on tasks of interest or import to me, for hours at a time without breaking concentration. I can forget - and repeatedly have forgotten - to stand, or eat, or drink, or go to the bathroom until my kidneys hurt and my head throbbed and my tongue cracked with dryness. This is an amazing superpower to have in technology, because it means I can destroy incidents and problems under pressure - provided my focus is not disturbed by whinging management demanding updates every fifteen minutes or talking over me on a conference bridge. If I can shutter all outside distractions and focus, I am nigh unstoppable.
This comes back to poker because of a character flaw of mine that's gradually being rounded off over the course of my life: my social deficits. Having played competitive but friendly games with friends, family, and strangers before, I know the outside "read" of how I play very much comes across as what gamers would call a "try-hard": I shut down any and all "extraneous" functions to focus on the task at hand, becoming a stone-faced, cold, hostile creature. The game ceases to be a game, and instead becomes a struggle for survival, a crisis to be resolved one hand at a time. I'm ruthless, tracking hands at the table, cards being played, tells, looks, glances, tones, reflections in glasses or eyeballs or glossy surfaces, analyzing everything to find myself an edge. While others are still struggling with the rules of a new game, I'm already exploiting those rules before the introductory round concludes.
I become an insufferable prick, essentially, in otherwise "friendly" games.
As time went on and friends drifted away, I began to realize that my success in a game was also tied to my isolation from others. I always played for keeps, not for fun, and that drove those close to me away from playing such games with me on any sort of regular basis. I realized that I had a choice to make: to cooperate with others and bring more warmth and life into my circles, or to compete and further alienate myself in the process.
Thanks to a lifetime of nurture, the choice was fairly easy by the time I consciously made it in my late-20s: cooperation was the only route forward, consequences be damned. Life was far too cold alone, and I was simply not built for the isolation of competition.
Graceful Cooperation
Truth be told, I had always been a cooperator first and competitor second. In Boy Scouts and school, I greatly enjoyed activities of camaraderie and loathed ones steeped in competition or zero-sum gamesmanship. I enjoyed sharing and teaching more than fighting, building things instead of tearing them down. As I gradually improved my self-awareness, I observed cooperation as my default mode - and a core reason I was taken advantage of so often by others. As I recognized this pattern of exploitation, anger began building inside of me at its prevalence. When I began to see that the exploitation was systemic in nature, that anger transformed into righteous fury.
You see, I dwell in systems. I take solace in their repeatability, their predictability, their scalability. The first time I genuinely sat down and read a traceroute output from node to node, identifying each system along the way, deciphering the hostnames and addresses, stitching together a mental composite of the growing internet at large, I knew where my strengths lay. I have never been a subject matter expert on any specific field; my strength lies in the ability to integrate the fundamentals of disparate systems together, forming a more cohesive understanding of reality in the process.
I am no baker, for instance, though I can certainly bake a delicious pan of brownies. I understand logistics systems in concept, even if I could not tell you the nuanced specifics of optimal routing algorithms for parcels within a domestic network. Put together, those let me understand the routes each specific ingredient took from agriculture to finished product, the myriad of hands it touched en route, the financial transactions underpinning the exchange of product as it's extracted, refined, and transformed along the way. Because I study geopolitics and agriculture, I understand why the chocolate I buy is expensive; because I study labor, I also understand why its sourcing is problematic.
Systems fascinate me, because they are a continuous demonstration of cooperation over competition. Sure, they presently siphon most money into fewer hands, and yes, they're highly exploitative and dehumanizing, but they still require a base level of cooperation to function at all, and that is what gives me hope that we can ultimately resolve our ills through more cooperation, rather than continued competition.
Technology advancements in just the span of my lifetime have supercharged a lot of these systems for the better. With modern telecommunications and compute, I can share any bit of data with any person anywhere on the planet in milliseconds. Modern air travel means I can physically touch any person on Earth in less than thirty-six hours - and less than twenty-four if they're in a major city. Computing technologies and payment processors mean I can pay an artist in war-torn Ukraine for artwork that's delivered to me digitally, without ever physically interacting with anyone or anything except our respective compute devices. The impacts on trade, communication, and quality of life are amazing. Hell, it's why I devoted my life to evangelizing technology.
So with that preamble out of the way, let me get to the core of the plea.
Please, Slow Down
At the time of writing, it's been a rough stretch for me and my circles. I was laid off last week, half a dozen close friends or family members are either out of work or looking for new work, and the markets are being clobbered with fears of AI replacing software companies, professional services firms, lawyers, you name it. There is a palpable tension in the air, a dire fear of tomorrow in the general societal gestalt.
If I'm being frank, if I'm really letting my guard down and being just one-hundred percent straight with folks out there: I'm legitimately scared of the near-future. I'm not scared that I might be wrong about my grievances with AI - I've already covered that - but that I might be right about AI in the worst ways.
Network Effects
AI tooling honestly blows me away. It can brute-force many problems far faster than humans can, and even if it produces milquetoast output at best, that's still enough to obliterate global job markets overnight. Investors are finally beginning to look at the long tail of the AI bubble and realize, "Holy shit, this is really, really bad." They're finally looking at the interconnectedness of the modern economy, where enterprises all broadly share common platforms, common formats, common software, common services, and realizing that AI could destroy the singular source of economic growth of the past thirty years right now. Companies don't need ServiceNow when they can stand up a Postgres database on AWS and vibe-code the integrations needed to do full ITAM over REST APIs and shell scripts, and that's legitimately fucking terrifying to them. After decades of software development consolidating into huge SaaS players, there's real potential that all of these multi-billion-dollar conglomerates could have their critical functionality replicated by some competent IT person (like me) with an Anthropic subscription - or, fuck it, a locally-running version of Qwen on a Mac Mini.
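To make that concrete, here's the flavor of glue code I mean - a minimal sketch, not anyone's production system. The inventory endpoint, the assets table, the field names, and the environment variable below are all assumptions invented purely for illustration; the point is how little code a "good enough" ITAM sync actually takes once an AI assistant has roughed it out for you.

```python
"""Hypothetical sketch: sync hardware assets from a made-up REST API into Postgres.

Every endpoint, table, credential, and field name here is an assumption for
illustration only; the point is how small "good enough" ITAM glue can be.
"""
import os

import psycopg2
import requests

API_URL = "https://inventory.example.internal/api/v1/assets"  # hypothetical endpoint
DB_DSN = os.environ.get("ITAM_DB_DSN", "dbname=itam user=itam")  # assumed connection string

DDL = """
CREATE TABLE IF NOT EXISTS assets (
    asset_tag   TEXT PRIMARY KEY,
    hostname    TEXT,
    owner_email TEXT,
    model       TEXT,
    last_seen   TIMESTAMPTZ DEFAULT now()
);
"""

UPSERT = """
INSERT INTO assets (asset_tag, hostname, owner_email, model, last_seen)
VALUES (%(asset_tag)s, %(hostname)s, %(owner_email)s, %(model)s, now())
ON CONFLICT (asset_tag) DO UPDATE
SET hostname    = EXCLUDED.hostname,
    owner_email = EXCLUDED.owner_email,
    model       = EXCLUDED.model,
    last_seen   = EXCLUDED.last_seen;
"""


def fetch_assets() -> list[dict]:
    """Pull the current asset list from the (hypothetical) inventory API."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    # Assumes the API returns a JSON list of objects matching the UPSERT keys.
    return resp.json()


def sync(assets: list[dict]) -> None:
    """Upsert each asset record so the table always reflects the latest pull."""
    with psycopg2.connect(DB_DSN) as conn, conn.cursor() as cur:
        cur.execute(DDL)
        for asset in assets:
            cur.execute(UPSERT, asset)


if __name__ == "__main__":
    records = fetch_assets()
    sync(records)
    print(f"Synced {len(records)} assets.")
```

Run something like that on a schedule and you have a crude imitation of one slice of an asset-management suite: no support, no hardening, but it gets the job done in exactly the way described below.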
ServiceNow employs 30,000 people and pulled down $1.7bn in net profit last year, but AI tooling and some basic Open Source software can now broadly replicate or replace much of their offerings, if an enterprise were so inclined. Figma employs ~1,700 people and pulled in $750m in revenue; even locally-run AI models can replace much of their offerings, depending on your quality (sub)standards. HubSpot employs ~8,200 people and does $2.6bn in revenue; AI models can now sufficiently handle cold calls, sales follow-ups, marketing statistics and campaigns, the works. Don't get me wrong, those numbers are drops in the bucket compared to some real juggernauts, but there are hundreds of companies just like them in the same boat. What were once targeted services or apps with defensible moats are now little more than easily-replaced commodities if you've got a Claude Code subscription and some time to iron out your prompts. They'll be horrible, shitty, buggy imitations with no support and gaping security holes, but they'll get the job done.
I really need you to think about that for a moment, really let it sink in as to what that means. It's not just some forty thousand workers potentially out of jobs, it's the network of other workers whose existence depends upon their success. The cleaning crew, the MSPs, the suppliers, the landlords, the channel partners, the consultants, the specialists, the nearby restaurants and food delivery workers, everyone with a business relationship with these companies and their workers - BANG, no more money to be had. Systems don't stand isolated from one another; they integrate, feed back, circulate, shift, change, transform, exchange. It's not unique to the technology industry either: Tyson Foods may as well have just wiped an entire township off the map.
Intelligent investors understand systems. They understand how an event in, say, cobalt mining could affect the margins of Apple's upcoming iPhones or Tesla's EVs a year from now. For whatever reason, that thinking was never really applied to the prospect of job displacement due to AI. Everyone was so caught up with the idea of AI replacing workers that nobody bothered to ask how a consumption-based economy is expected to survive without consumers, i.e. workers. An economy that has soldiered on because the wealthy watched asset prices skyrocket is now at risk of free-fall as those same assets decline in the face of a post-AI price correction, all while boosters conveniently ignore the societal upheaval this technological innovation would cause. For all the bragging about the looming end of work, they never seem to take the next step and ask what comes after in a society where work is required for survival - nor are they too keen to deal with those issues before they rear their ugly heads.
It comes across as incredibly disingenuous and intensely competitive:
- If they're right and AI eliminates work, then they believe they'll be insulated enough from the consequences not to have to care about them
- If they're wrong and AI is just another in a long line of tech bubbles masking the stagnation of industry innovation, then they believe they'll have already cashed out at the top and won't have to care about the consequences
It's a zero-sum game where they believe they'll win either way, and everyone else will lose. It's competition over cooperation at a global scale, with civilizational consequences not seen since the Cold War. The stakes aren't merely some flex of political power or sphere of influence; they're the sum total of the human-based economy if AI is the real deal - and the sum total of all growth in the USA since 2010 if it's just a bubble.
Compounding this are the questions of profitability in AI. The current plan seems to be charging obscene amounts of money per month for a subscription to a data center somewhere, with rate limiting in place to incentivize higher plan tiers or more consumptive billing. Except, in the case of code, what use is there for AI once the integration is written and just needs support? Look no further than the glut of companies still running old COBOL code on mainframes and mainframe emulators in 2026 rather than pay someone to rewrite it in a modern language: once the code is written and works, there is no more need for the premium subscription. Unlike email or communications, modern code built on open source platforms can generally be written once and run forever with minimal modification, especially if it's properly isolated within the technology estate for security purposes. AI-based workflows aren't the future so much as AI-generated codebases for repeatable functions and integrations; nobody wants to be Air Canada, but everyone wants to ditch pricey consultants and reduce headcount, and vibe-coded one-off integrations are low-hanging fruit that every enterprise can take advantage of.
When you look at these network effects, the picture quickly becomes bleak, even if the tech and tools themselves evoke fun or enjoyment in the present.
The Plea
I love technology. It's my passion, my raison d'être. I love sharing it, I love improving the lives of others with it. I've enjoyed the smiles on the faces of those I help with it, as they discover a new way to reclaim time, improve their workflow, or just make life that much easier or more comfortable.
I also love this ideal world that the AI boosters stole from science fiction authors, a world where survival does not require work and where the populace can follow its passions freely. I have spent many a dark moment planning the things I would do with "fuck you" levels of money, like traveling the world, practicing photography on landscapes and people, learning new skills constantly in a classroom setting, running a community makerspace or net café at cost and donating my time to others, even immersing myself in new roles, professions, or trades I wouldn't dare try while there's bills to pay and friends or family to support. I would truly, genuinely love a world where technology enables these lifestyles for all regardless of wealth, background, creed, or status.
I do not believe AI will bring about this world, at this time. I believe it's in the interest of the survival of our species to slow down, not for the sake of preserving the status quo, but for reforming it to prepare for this inevitability.
Even if the current crop of AI tools doesn't replace human labor wholesale - and to be clear, I don't believe it will - allowing this movement to march blindly forward will not bring about such a utopia. Technology alone does not equate to progress; rather, it's the decisions of man about how technology is utilized that determine whether it's a progressive or regressive movement. The key players right now - and I cannot stress this enough - overwhelmingly do not have the best interests of the human species at heart. These are men - exclusively men, mind you - scheming to see who can have the most money and power. They aren't cooperating, they're competing, with the survival of society at stake. They're playing poker with each other, using entire countries and economies as their wagers.
I'm not saying to stop research on AI. I'm not saying to turn back the clock and reverse progress. I'm not saying we need to create a society that's overly protectionist towards the status quo, or to ban technologies that could replace human labor entirely.
I'm simply asking - nay, begging, pleading - for time, for cooperation, to fix these known problems before we effectively throw lit matches onto the modern equivalent of the SS Mont-Blanc. To truly prepare the world and society for the first era in human history where nobody must work to survive, where every basic need is met freely and promptly, where humanity - freed of the shackles of daily survival - may now pursue its heart's desires to their ends. To explore the unexplored, to discover the unknown, to build what was thought unbuildable.
The Path Forward
Words are cheap in a world where ChatGPT can spew out collegiate essays completely gratis; actions matter more than ever.
The actions of the AI and Capital elite are clear: fuck you, workers. Even for those who appear to downplay the threat posed by AI - leaning on the oft-cited refrain that every technological revolution inevitably creates more, higher-paying jobs than it displaces - the reality is that investing in a technology whose sales pitch is replacing all human labor, within a Capitalist economy and a civilization wholly lacking in mandates to provide for the basic needs and rights of all humans, is itself a firm middle finger to humanity.
Yes, Altman is right that there are a lot of bullshit jobs right now that shouldn't rightfully exist in an efficient market. Yes, Zuckerberg is right that one worker with an AI subscription can in fact produce as much output as entire teams, provided you don't mind lackluster quality and outsourcing your critical thinking and corporate data to a competitor for $200/month.
Counterpoint: we still require people to work to survive; therefore, the cooperators among us will invent busywork to justify hiring those in need. That is what I find myself increasingly arguing for in the face of mounting layoffs and increasing profit margins: these people need to pay rent, buy food, and pay off student loans that you told them to take on in order to survive, and therefore you have an obligation to retain them if you won't transform the economy to support them absent work. I do not believe this to be a controversial point, as it is the obligation of nobility to provide for the society they derive power from and whose laws they disproportionately shape; if these technofascists insist on being all-powerful nobility, then they have an obligation to provide for the needs of their serfs and improve their quality of life, a chore they clearly have no interest in completing given their penchant for enshittification.
Thus the workers must force action wherever and whenever possible to ensure an amicable outcome for all, instead of some perverse form of technofeudalism or technocracy for the rich few. We must not wait until after AI displaces work to demand that workers be protected from unreasonable terminations - such as while a company is turning a profit, or has cash reserves sufficient to cover outstanding payroll obligations for a period of years. Nor should we wait until after AI is successful or profitable to demand compensation for the creators whose works were stolen for training purposes. Allowing ourselves to accept this narrative of total competition - of every human being forced to become an entrepreneur, of ascribing total responsibility to the individual while the wealthy hoard power and resources among themselves by controlling and owning the AI infrastructure everyone depends upon - is effectively a death sentence for much of the world's populace. That is the intent, and they're so confident in this outcome that they've stopped hiding it.
We should not wait for companies to hand over money for programs; we should instead close the loopholes they exploit to dodge the very taxes that fund our existing social entitlements. If we wait for billionaires to be gracious enough to pay their full share instead of closing the shelters, havens, deductions, and exploits ourselves, then we will never have the money necessary to take care of our people, pay down our debts, or improve our infrastructure.
True proponents of a post-work society brought about by AI should have no qualms demanding proactive responses today rather than reactive responses tomorrow, especially if they're so certain their prophecies will come to pass.
Anything less is just hitting the gas on the road to calamity, accelerating destruction for the sake of a fireworks show.
And that simply must not stand.