Maintaining Principles in Employment

My job hunt is entering its fourth month this week. No offers, several final rounds, lots of ghosting. It's bleak out there.

A queer friend of mine is interviewing with a "low to mid-tier Evil" company. Several other friends on the rainbow have had similar ethical dilemmas, where they feel like they have to choose between their principles and their immediate survival.

Today, I applied to a spot on Anthropic's corporate IT team as an Engineer. I actually want to work there, believe it or not, despite my repeated misgivings about AI.

I figured now's as good a time as any to discuss how one maintains their ethical and moral positions in the midst of times like these, or at least how I rationalize mine.

It Starts With The Money

I'm a worker. That's not just my economic status, it's my human preference: I enjoy working. If I won the lottery jackpot tonight, I'd keep working, at least until I found a job in education or the public sector where I could contribute more meaningfully, since I'd no longer have to worry about income. Failing that, I'd open a makerspace-slash-net-cafe operated at cost, so locals could come in, hang out, share knowledge, build stuff, and not go broke in the process. I'd run gratis cloud services for friends and family, go back to college to finish my degree, and keep learning and sharing and building, since money would no longer be an objective for survival. If I had a partner whose income meant I didn't need to hold a job myself, I'd work on the home, I'd work on myself, I'd work on them, I'd work on my community.

I am a worker.

That said, I'm also a queer American. I don't own a home, I lack a partner, and being in tech means my income supports the basic necessities of between three and ten people in a given month, because society is profoundly unequal and hostile to our existence.

The Anthropic role is offering $275k to $325k. That's a good amount of money.

In my area, that's actually more than the median home price (~$225k), which means housing security for myself, my found family, and my friends-in-need. That money, even if derived from sources of questionable moral or ethical standing, can do far more good than had I not taken the role at all. Even with their 25% in-office requirement, I can take the Acela to NYC or a flight to SFO, grab a basic hotel room for a week every month, and head back home without putting a ding in essentials, savings, or personal spending goals.

When I weigh that compensation against the harms I'd be contributing to by working for them, the scales tilt quite heavily towards a substantial net-good as opposed to a net-bad. Were this OpenAI, Google, Meta, Microsoft, Oracle, or some other AI company offering this sort of compensation, the scales would fall so far and so fast towards net-negative as to snap from the gravitational force; Anthropic, at least at time of writing, is publicly trying to find a principled path forward, and I can respect and work with that.

Could this change in the future? Sure, everything and everyone is always changing in some form. That said, the compensation relative to the known harms tilts the scale into a net-positive territory today, and that means I can throw myself into the application process confident that I'm not betraying my morals or values on some fundamental level.

Companies Need Contrarians

I am not a loyalist, nor am I a sycophant. I tell it like it is from my point of view, I support my arguments with evidence, and then - and this is where so many of my peers seem to lose the plot - I listen to the other perspectives. I integrate diversity into my data, and adjust my positions accordingly. I do not know everything; I am fallible.

This manifests as some degree of contrarianism in a lot of orgs. It's not that I'm out there seeking conflict so much as that I have not yet been convinced of the benefit of an approach opposed to my own. That doesn't mean I won't follow the party line, necessarily; I take pride in my work, and if told to do something I will give it my all, but I also won't shy away from speaking up when my point of view differs from the consensus. This comes from being queer, I think, in a society that hates queer folk: I must defend my existence, and therefore I must not blindly acquiesce to a given perspective by default.

How does this help rationalize personal morals and ethics against work or employers that conflict with them to some degree? The simple - and admittedly naive - answer is that not working for them would be failing to expose their workforce to other perspectives. If AI is presently a fascist artifact but also impossible to destroy, then the only solution is reformation, and that only comes from those being harmed by its current form demonstrating ways of improving its benefits while negating said harms.

Except I no longer consider myself so willingly naive as to believe that would work on any AI company, Anthropic included. My friends include artists of varying disciplines, all of whom have had their work stolen by companies to train models on producing more slop, which in turn harms their ability to survive on the skills they cultivated. No, the benefit of working for Anthropic would be to bring my friends' voices into my work: to help steer the company's efforts away from the soulless slop its competitors enable, and towards uplifting artists and humans by amplifying them rather than replacing them. It would mean advocating for watermarking AI output, so that human-made work can be promoted over machine slop, especially in creative professions. It would mean advocating for a deliberate absence of IP protections on AI output, such that value can only come from the humans who create with their own bodies and skills and talents, leaving AI to processes instead of outcomes.

These points are not new, but I also know - from experience - that these sorts of companies can quickly become echo chambers of the power players when left unchecked. If a company only hires sycophants and loyalists, they end up throwing out the ingenuity and innovation that enabled their success in the first place. Holding contrary viewpoints isn't a weakness or a signal to fire someone, it's a different perspective worthy of being heard for the novelty of its data and position, to strengthen your own path forward through more informed decision-making.

Do I think I'll be the voice that convinces them to change course on things? No, I lack that sort of blind pride or hubris in myself (though my OCD certainly likes to rehearse otherwise). Do I think I could contribute to better outcomes by being in the proverbial room? Yeah, absolutely.

Companies Change

Change is the universal constant. Nothing stays the same forever, and companies are no exception. In fact, Anthropic is currently going through the hardest change an organization has to endure: the pivot to profitability in the midst of an economic downturn. They have to prepare to change everything if they want to survive the tumultuous times ahead. The old Silicon Valley business model doesn't work anymore because of AI and its associated narrative, and so the company has to find a new and novel way of prospering in an era where debt isn't cheap, money isn't free, antitrust scrutiny is on the rise, nationalism is harming the typical M&A process, opposition to datacenter buildouts is mounting, and the post-WW2 world order has all but collapsed, among many other headwinds.

Thus far, they've attempted the traditional enshittification playbook. They yanked Claude Code from their base Pro plan, then reinstated it with the classic corporate "only a test" boilerplate response. They've changed their tokenizer in ways that increase token usage on newer models, resulting in higher burn rates and costs. They've lowered usage limits in peak hours. These are methods those of us in technology circles are all too familiar with, and we're generally done tolerating them or buying the typical excuses for their implementation.

At the same time, profitability is a very real concern. Job displacement hasn't yielded enough ROI to justify either continued AI expansion in most companies or profitability within most AI players. The current outsider math is that tokens are subsidized at a rate of 10 to 1: that is, it costs ~$10 of actual money to deliver AI output for every $1 of subscription or token revenue the user pays. Critics like Ed Zitron go much, much further to demonstrate how the math simply does not work for the current products.

Money is finite. The IPO nears. Profitability is an immediate concern, not some distant promise once the market is sufficiently cornered.

This is where I can thrive, baby.

Bringing in doubters and giving them a vested interest in the success of the organization is how novel strategies are formed, how juggernauts are made. It says, "we agree this is a problem, and we want you to help us fix it." As I noted on the resume I sent to Anthropic, I am an excellent fixer. In the narrow scope of their IT role, someone like me can bring cost discipline and intentionality into a space that's often solely expansionist, like an unchecked weed choking the life from valuable crops. In the context of the broader organization, it means bringing novel ideas for new or different revenue streams into the picture: releasing open-weights models, promoting local inference to offload serving costs, and undermining competitors tethered to huge datacenters by focusing research on better quantization, distillation, and memory usage - ultimately finding ways to make foundation models more accessible and useful on customer hardware while selling the tooling, harnesses, and server systems needed to operate them at the scale and reliability the customer needs.

Something I've only recently come to appreciate from my career experience is that bringing in outsiders in times of crisis and trusting them to help steer your organization is how institutional problems get resolved quickly, and organizations grow and survive despite tumultuous times. Those transformations also let us reduce or eliminate harmful behaviors while refocusing on positive ones that bring both fiscal and community profit, setting the organization up for long-term success. On the other hand, bringing in outsiders and subjecting them to existing power dynamics is a recipe for disaster: their good work won't be built upon further by those who eventually throw them back out again, and the company will suffer for it through negative reputation in the broader marketplace.

If you're already struggling to be profitable and foolish enough to listen to those who put you into the situation over those trying to help you exit it, you deserve to fail.

The Exit Strategy

No matter how good my intentions in my work, no matter how well off the compensation might make me and my found family, no matter how much security is afforded by the employment itself, everyone with principles and morals has to have an exit strategy. I am no different.

Incidentally, this is another area of advantage for principled and ethical individuals: we're not going to fuck someone over on our way out the door unless they're a direct threat to the survival of ourselves or our community, and the benefit of wisdom is knowing to exit long before that threshold is neared, let alone crossed.

I know where my personal Rubicon lies. While every other AI company has long since blown past it, Anthropic hasn't yet - though it certainly grows closer when considering recent changes. That is ultimately what balances my personal scales in the short term.

As for the long-term? As an outsider, I only see doom and gloom over the long haul, because I don't have access to the data and knowledge Anthropic does about its core business and financials. If Anthropic succeeds in replacing workers wholesale with AI, then we're doomed anyway to decades of civilizational collapse as entirely new societal structures and institutions are built out of blood, sweat, and misery; if they fail, the sum total collapse of the wider AI industry will result in a shorter but similarly gruesome economic collapse that mandates new institutions to protect workers and rein in the myopic excesses of the Capitalist investor classes.

Thus, the path forward to sustainable success requires threading the needle between the two. It means abandoning the notion of worker replacement in favor of worker amplification, and it means following through on the promises of actually democratized, private, personal access to AI for everyone, rather than token hoarders and datacenter owners with surveillance aims.

Thus the Rubicon lies in plain view, yet untouched thus far. Everyone else has had no qualms about letting their tools be used for nefarious ends; Anthropic, so far, is the exception.

Should that change, then I would seek employment elsewhere.

Simple as that.

Parting Thoughts

There is no ethical consumption under Capitalism, as the memes go. To be fair, there is no ethical consumption in general, because to consume in excess in a society where minimums aren't guaranteed is to rob someone else of their necessities. Our smartphones are excess, our multi-monitor displays are excess, our RGB lightstrips and gaming computers are excess, because they are not necessary and their creation depends on the exploitation of others.

That being said, this just means that there's always room to be better people. I'm a firm believer in concepts like rehabilitation and reformation. I reject the empty platitude of "it's always been this way", because we have millennia of evidence that actually no, it has not always been this way. We build the systems ourselves, and that means we can change their aims, remediate their flaws, and improve their outputs whenever we damn well please.

I applied to Anthropic because I believe they're the only AI company out there capable of rehabilitation or reform; the others are too far gone to bother with, and I will not miss them when they are gone. Yet inside Anthropic, you have folks like the Claude Code team trying to make these tools accessible to wider audiences without requiring extensive new vocabularies, tools, and systems. Inside Anthropic are some genuinely good people doing genuinely good things, and I'd like to support them in their endeavors.

I think that's something to keep in mind when you're considering employment: is the compensation they're offering worthy of the compromises you'd have to make along the way? Not merely in terms of whether they pay enough money to make you look the other way (bad), but whether they also provide an environment for improvement, acknowledge their harms, and show a desire to change for the better.

I think Anthropic, at least in part, has that potential.

I guess we'll see what happens.