I Might Be Wrong

Author's Note: This post was originally drafted in the fall of 2025. While I consider it unfinished, I'm opting to post it as-is, since I feel it largely gets across my main points about fault. Circumstances have undoubtedly changed since it was drafted, so take this within the context of the time it was written.

My little blog here isn't even a year old, and already I've made some bold and controversial statements of personal opinion. I've called modern AI a grift, proclaimed the death of the old world, and accused Software-as-a-Service of being a trojan horse to remove data sovereignty. Other, far smarter people than me (at least in my eyes) have made similar claims; Benn Jordan's videos on AI music poisoning and on the transformation into a post-capitalist society are excellent video essays on adjacent topics. The thing about the internet is that if you know what to search for, you can find ample like-minded individuals agreeing with you on your viewpoints. This in turn can make you feel like you're objectively correct, because what other option is there when so many other people seemingly agree with you?

Well, let me make another bold, controversial proclamation right here and now:

I am, more likely than not, wrong to some degree on the topics I discuss. If I'm being honest, I hope I'm wrong.

The Acceptability of Error

Ever since I first surfed the internet (courtesy of a nearby hookup at a local university and NCSA Mosaic), something that differentiated it from reference material, to me, was the confidence and authority of internet speakers. While encyclopedias and reference books in the library were written to educate, inform, and elucidate, internet pages were far more authoritative in tone by comparison. It felt like the people who crafted these webpages were innately imbued with this authority, since they were present on the most modern publishing platform in human history, slinging text and graphics around the planet over fiber optics and copper wire in ways even local TV stations could only manage with huge news vans and satellite uplinks. Looking back, the teachers who introduced me to the internet were also the ones to "vaccinate" me against this toxin through a two-pronged approach:

  • Demonstrating how web pages aren't authoritative on their own by corroborating or disproving their claims with research skills
  • Having me build my own web page, to demonstrate how easy it is for literally anyone to put slop online

Between my own strong moral compass, early vaccinations against bullshit in its myriad forms, and repeated exposure to minor harms, I built up a pretty thick skin against scams and grifts very early in life. Perhaps these are what make me retch or recoil at so much innovative "slop" thrust upon me by technology boosters and salesfolk, or maybe it's just my general stubbornness against unjustified changes that lack demonstrable value.

Regardless, one positive trait I ended up maturing with is the ability to accept that I can be, and often will be, wrong. That's not to say I had an easy time accepting the reality that others will be wrong on the internet, only that I am able to accept that I am capable of being wrong. This is a trait that's seemingly lacking online, as evidenced by an overabundance of comments, forum threads, social media posts, YouTube videos, podcasts, and blogs devoted solely to the wrongness of others.

Yes, I'm aware of the irony.

Bender, joined by Fry and the Robot Devil, exclaiming that "It's not ironic, it's just coincidental!"

Perversely, society presently does not favor those who can accept their own error; on the contrary, the perception of infallibility is so strongly incentivized that it's a known psychohazard. Leaders are far less likely to admit they were wrong and change course than they are to double down and claim they're still right. The rise of fascism could be correlated with this attitude, as authoritarian and fascist leaders project an inability to ever be in error - to the point of altering fact and reality to suit their narratives, when necessary.

To admit you are wrong in the modern era is to acknowledge weakness rather than strength. In other words, it's socially unacceptable to be wrong, and that has very real consequences - for personal growth, for societal needs, for basic governance. I could spend thousands of words going into this in detail, but that's not the point of this post.

Instead, I'd like to illustrate the benefits of being wrong, by talking about my own potential wrongness.

Mind the Gap

Being wrong is often a matter of a knowledge gap, and in that respect I am no different from any other human: I have plenty of said gaps. For the sake of brevity...

A scene from the movie Clue, where Wadsworth the butler (Tim Curry) says, "And to make a long story short...", with the rest of the cast interrupting with a "Too late!" rebuke.

...I'm going to keep my focus on technology, since that's my primary career and personal passion. In other words, it's what I'm an "expert" in, though I use that term very loosely given that I (perceive myself to) sit squarely in the middle of my peers in technical ability.

AI (specifically, LLMs)

I'm not too keen on AI's present iteration, the LLM. Its output is far too inconsistent to be reliable for repeatable tasks, its resource costs are far too high, it's dependent upon unprecedented copyright theft to even exist, and it's presently being used to reinforce selective history and knowledge within vendors' walled gardens - never mind the human effects on cognition, memory, learning, and critical thinking skills. This doesn't even get into the risks of alignment, the limitations of token-prediction models like LLMs, or the fact that humanity itself cannot yet quantify intelligence objectively, nor does it touch on the grandiose, pie-in-the-sky promises AI boosters make regarding leisure, labor, productivity, and their vision of utopia.

Put simply, I dislike the idea of turning humanity into managers of programs they don't understand, whose output is unreliable, and whose creators have a vested interest in ensuring none of the negatives are ever actually resolved. It just seems like a recipe to speedrun extinction by any number of awful methods.

But what if I'm wrong?

A blog post by Ethan Mollick jostled my preconceptions today, as he put the present capabilities of models like OpenAI's o3 and Google's Gemini 2.5 into a different context than I'd really considered: "Jagged AGI". I'll let him explain:

My co-authors and I coined the term “Jagged Frontier” to describe the fact that AI has surprisingly uneven abilities. An AI may succeed at a task that would challenge a human expert but fail at something incredibly mundane.

He has a few really good examples of the AI models actually cranking out some very usable stuff - essentially a dropshipping business via LLM prompt, in the (arguably) better example - which is, admittedly, kinda awesome while also being terrifying to the small armies of web freelancers out there. But then he points out how, because these models are doing token prediction, they can fail at really trivial stuff, like brainteasers. After all, they're making future predictions based on past data, and if that data changes (in context, in factuality, or in utility), then the models quickly degrade into useless garbage; it's why the two biggest threats to LLM-based AI continue to be copyright law and poisoning.
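To make that "token prediction" point a bit more concrete, here's a deliberately tiny caricature - not how a real transformer works under the hood, just the shape of the idea: the model ranks likely continuations of the text it has seen, and nothing in that loop checks whether a continuation is true.

```python
# A toy caricature of next-token prediction. Real LLMs use learned transformer
# weights over billions of parameters, not a hand-written lookup table; this
# only illustrates that the output is a ranking of likely continuations,
# with nothing in the process checking whether they're *true*.
TOY_MODEL = {
    ("the", "capital", "of", "france", "is"): {"paris": 0.92, "lyon": 0.05, "nice": 0.03},
    ("2", "+", "2", "="): {"4": 0.90, "5": 0.06, "22": 0.04},
}

def predict_next(context):
    """Greedy decoding: return the highest-probability continuation."""
    distribution = TOY_MODEL.get(tuple(context), {})
    return max(distribution, key=distribution.get) if distribution else "<unk>"

print(predict_next(["the", "capital", "of", "france", "is"]))  # -> "paris"
print(predict_next(["2", "+", "2", "="]))                      # -> "4"

# Poison or change the underlying data (the table above) and the "facts"
# change with it - the model has no independent notion of what's correct.
```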

Still, I make the case, based on my own understanding of LLMs, that these tools will never become anything tantamount to "AGI". Despite a multitude of additional innovations designed to enhance these models' utility (RAG, MCP, distillation), I still firmly believe that they will only ever be token-predictors with no understanding of the output they're creating - because, mechanically, that's what they are.

That doesn't mean I'm right about the rest, however. There's very real potential for LLMs to feature prominently in our daily lives going forward, possibly in ways I cannot predict. The current Hail Mary is grafting them onto robots and physical sensors, giving the models a way to interact with the physical world around them. In this context, AI could be highly successful in a limited physical space with tight constraints on its autonomy or agency. Be it helping disabled people navigate their space more effectively, automating routines based on inhabitants' behaviors, or saving resources through better management of appliances or climate controls, there is potential for LLMs to find success as the token-predictors they are.

Of course, these sorts of deployments would really only succeed if the models were made exponentially more energy-efficient than they are at present, which would also mean a reduction in the resources needed to train foundational models going forward. That, in turn, would invalidate the hundreds of billions of dollars shoring up the current LLM market segment globally, because nobody would need huge datacenters of GPUs when the smart home of tomorrow can just use a distilled model like DeepSeek with task-specific reinforcement learning. AI could succeed, just not in the way its boosters want.

Or, I could be wrong entirely. Maybe human brains are little more than token-prediction machines running on twenty watts of electricity and with real-time learning capabilities, and maybe both of those are things we can shoehorn into LLMs as new capabilities. Maybe token-prediction is superior to organic intelligence in some ways, and these tools fulfill brand new roles a human never effectively could.

I could be wrong, and because I'm aware of the knowledge gap between myself and both actual AI scientists and AI boosters, I know where to look for new, valuable information to support or disprove my point of view - which then helps me to be less wrong.

I've Been Wrong Before...

In the mid-2000s, I was a (relatively) early adopter of HDTV in my household. Our new home had a 720p Optoma projector in the basement theater, Dad had a 26" Dell 720p TV in his office, the living room had a 32" Samsung 720p panel, and my own bedroom had a 26" Samsung 1080i CRT display. As a home theater nerd, I had done my research on viewing distances and angles, display technologies, color spaces, resolutions, and more. I was confident in my predictions and theories, and had data to back them up.

This was around the time that 1080p displays were starting to make inroads with consumers. I had done my math and read my data sources, and was confident in stating that, at average American viewing distances, 1080p was a non-starter that the TV industry was shoving down the public's throat in an effort to move more product. 720p and 1080i were more than fine, and 1080p would inevitably arrive as a better way of displaying 1080i content, but 1080p luxury panels were a ripoff that should be avoided and would never see mass success.
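For the curious, the back-of-the-napkin version of that math goes roughly like this (a sketch of the reasoning, not my original spreadsheet): assume about one arcminute of visual acuity for 20/20 vision, work out how far away you can sit before individual pixel rows blur together, and compare that to a typical couch distance of around nine feet.

```python
import math

# Rough limit of 20/20 visual acuity: ~1 arcminute per resolvable detail.
ARCMINUTE = math.radians(1 / 60)

def max_useful_distance(diagonal_in, vertical_pixels, aspect=(16, 9)):
    """Farthest distance (in inches) at which individual pixel rows are still
    resolvable; beyond this, extra resolution is wasted on the viewer."""
    w, h = aspect
    screen_height = diagonal_in * h / math.hypot(w, h)
    pixel_height = screen_height / vertical_pixels
    return pixel_height / math.tan(ARCMINUTE)

for lines in (720, 1080):
    feet = max_useful_distance(32, lines) / 12
    print(f'32" {lines}p stops mattering beyond ~{feet:.1f} ft')

# 32" 720p  -> ~6.2 ft
# 32" 1080p -> ~4.2 ft
# At a typical ~9 ft couch distance, the math says 1080p shouldn't matter -
# which was exactly my argument, and exactly where taste beat math.
```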

And as luck would have it, history vindicated my opinion.

Steam's April 2025 Hardware Survey, showing 1080p as the Primary Display Resolution for 55.27% of users - a decline of 1.22% from the previous data set.

Well, that's just the Steam hardware survey; surely TV sales will support my claims.

A marketing survey showing 4K UHD displays accounting for 48.3% of the market share in Smart TVs for 2023.

Oh.

I should mention that I wasn't just wrong about 1080p failing to catch on; I was wrong about 4K failing to catch on, too. I made much the same arguments, relying on the same calculations and the same models to justify my conclusions. I thought I was right, but both times I was flat-out wrong.

Side Note: I'm writing this on dual 1440p displays, while owning a Retina-display iPhone, an iPad Pro, and a MacBook Pro, with a 4K HDR OLED TV in the living room. So, y'know, I was fractally wrong in hindsight.

This degree of wrongness is something I look back fondly on, as a demonstration of how data, science, math, and statistics are completely irrelevant in the face of user tastes. I learned to listen more to what others had to say about their own experiences, instead of assuming my knowledge would translate into activity by others. Put more simply, I learned that I, in fact, was not as smart as I thought my gifted ass was.

Wrongness Transference

Simply acknowledging being wrong about something gradually begins inoculating you against a critical problem of modern society, something I call "wrongness transference": the refusal to believe you're wrong, even (or especially) in areas outside your lane of expertise. Think of the typical user who claims that technology just dislikes them, or that they know better than their auto mechanic what's wrong with their vehicle, or better than their contractor how much something should cost or how much work is required. As a more relevant example, think of the last time a non-technical leader told you how to do a technical task to their satisfaction.

That's "wrongness transference", in that they're transferring the cost of their wrongness into other fields, and onto other people, to avoid being accountable for their wrongness in the first place.

The danger of this should be evident: someone who cannot admit specific fault is not someone who can be trusted with authority or decision-making capabilities. They're also more likely to carry this attitude of superiority outside the bounds of their expertise, negatively affecting others in the process. In essence, individuals who cannot admit specific fault are a net negative in society.

What is specific fault? It's the ability to articulate not merely that you're wrong, but how you were wrong, and why you were wrong. It's a demonstration of reflection and introspection on your part, that you've considered the situation in its entirety, acknowledge your contributions to the outcome, and understand how you can do better next time. Any narcissistic idiot can admit they might be wrong sometimes, but a failure to cite specific examples and accept accountability for their actions (including any consequences) is a demonstration of a complete lack of culpability in their own mind, and reason enough to run away from them screaming.