Because we have pretty convincing historical precedent that 'just following orders' does not work as a defense when your government does something indefensible.
Let’s steel-man the parent comment. Obviously “just following orders” is not generally a morally sufficient argument even if you end up not facing repercussions for your actions.
If you add to that the very broad limits of what the current administration considers "legal" (as in "pretty much anything we want to do"), I can understand feeling uneasy as a Google employee...
What does that mean? How does one come to a personal moral conclusion? Vibes?
(I take "moral framework" to mean a principled stance that gives objective grounding for a moral judgement. I agree that we can come to a moral judgement without putting it through a systematic and discursive defense, and I reject the notion that there are many moralities or that they are arbitrary, but it is also true that diverging conceptions of the basis of morality will frustrate agreement. Stopping at personal moral judgement does not lend itself to fruitful dialogue and understanding, as it constraints the domain of what is intersubjectively knowable.)
My moral framework can be different from yours. I, as an individual, can come to the conclusion that something is immoral even when the rest of the group doesn't agree with me. And (at least within my own moral framework) I should take action accordingly.
So I don’t need a shared framework to make the claim that something is immoral (to me).
The second is that it isn’t very interesting to stop at “personal moral judgement”. You’re having a dialogue, right? So, if you want to have a dialogue, you must explain your moral reasoning. I don’t like your parent’s use of the term “moral framework”, because it does lend itself to a relativistic interpretation - though charitably, the parent need not be a relativist, and is merely acknowledging the different stances of various moral theories. But also charitably, if we lack sufficient common moral ground, the first task is to find that common ground before we can discuss anything with two incommensurate views in play.
Are you intentionally lumping in all civic service in one moral bucket? Is working at the post office morally equivalent to developing panopticon technology to suppress protest and track citizens?
So what, they won't be using any of the existing Google Gemini models or infra then? Because all of Google - from Gemini to the data center infra etc. - has been (and still is) worked on by non-US persons, even - gasp - outside the US. They'll do a complete clean-room ground-up bootstrap of all the research and infrastructure from zero?
You of course don't have to reinvent science, but it is in fact standard practice to do infrastructure from the literal ground up with US citizens for even unclassified government data.
Can you provide a different source on that? The govcloud page you've linked says operated by US citizens, not built by US citizens. I'd be pretty surprised if they did the latter. Standard practice as I understand it is to simply run the standard software in a separate environment. A recent ProPublica report [0] pointed out that Microsoft was hiring citizens to escort the actual engineers that aren't citizens, for example.
working to directly advance a product used substantially to oppress people via surveillance or war crimes, when you have many other choices, is immoral. easy.
Correct. It depends. For example, it might depend on what the collaboration is likely to result in. Perhaps it would be more likely to be moral if there were some boundaries in place, like "no mass domestic surveillance" or "no fully autonomous weapons".
Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.
The US military has deployed fully autonomous weapons since at least 1979, and potential adversaries are now doing the same. For better or worse that ship has sailed.
Sure, a dumb bomb is a fully autonomous weapon once it's launched. But let's be real: an LLM making decisions on who to target and when and where to launch munitions represents a meaningful change in our concept of autonomous weapons.
So we are wrong to express any opposition or desire to maybe raise the bar here? Aren’t we supposed to be “the good guys”? Or should we just accept a role as the menace of the world, wildly throwing its weight around whenever we have an unscrupulous president?
Those questions are moot. There are situations where it's simply impossible to have a human in the loop because reaction time is too slow or the environment is too dangerous or communication links are unreliable. Russia is deploying fully autonomous weapons to attack Ukraine today and they will be selling those weapons (or licensing the technology) to their allies. There is no option to stop. And let's please not have any nonsense suggestions that we can somehow convince Russia / China / Iran / North Korea to sign a binding, enforceable treaty banning such weapons: that's never going to happen.
There's always an option to stop. We can choose civility over barbarity, stop trying to kill people over 1000+ year old dick waving contests, and stop threatening each other with doomsday weapons because your grandpa shot my grandpa. Just because our leaders are too stupid and cowardly doesn't mean there's no option.
Not sure you're aware, but the joke may be on you. It's apparently Putin who's convinced Trump and the Mullahs (not the band) to choose civility over barbarity by allowing a superyacht of one of his cronies to pass through the Strait of Hormuz.[0]
Russian trolling at its finest, truly. This timeline keeps raising the bar on the absurdity quotient.
I wasn't aware that the US was throwing away its moral compass for the just cause of frustrating Putin's expansionism. The new story seems to be Putin gets to do what he wants, and so do we.
If you think there's something wrong with giving our warfighters the most effective weapons to carry out their assigned missions with minimum casualties then your moral compass is completely broken. Personally I favor a less interventionist foreign policy but that has to be addressed through the political process. Not by unaccountable individual defense contractor employees making arbitrary policy decisions.
You should know that every single veteran I know ruthlessly mocks Hegseth for trying to use this term non-comedically. It’s a synonym for someone who takes their service way too seriously/makes it their whole identity. It’s almost exclusively used to mock people.
We aren’t Russian and Putin is not our leader. We can choose how we behave and operate. This is like saying we should use chemical weapons if someone else deploys one. You’re speaking as if it’s all so binary. “Do what they do or you lose.”
It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences. Chemical weapons have seen minimal use after WWI largely because they're not very militarily effective. Autonomous kinetic weapons actually work. Right now Ukrainians are building autonomous weapons to defend themselves against Russian autonomous weapons. For Ukrainians it is binary: do what they do or you lose. Would you prefer that they lose? And don't presume to tell us that the Russians can be persuaded to stop by non-violent means, that would be completely delusional.
>It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences.
This is a deeply flawed argument that has an obvious application back at you, but either way if you’re going to stoop to personal attacks I think we’re done here.
Who said otherwise? Clearly it’s about facilitating specific acts by the government. Why are y’all acting like it was so wildly broad? No one said “working with the government is inherently immoral.”
No. Their comment was:
“Any AI researcher who continues to work here is morally compromised.”
But “…doing this kind of work with the federal government” is added context that was not there and is based on your own interpretation.
The language of the parent comment charges that simply working at a company that is engaging in this makes one complicit in an immoral act, and the complicity itself is immoral. I disagree with all of that.
Yes. Working at a company explicitly profiting off of doing clearly immoral acts is wrong. It doesn’t mean working for a company contracted with the federal government is always wrong.
In a logical or mathematical sense, sure, but when it's the US government and a huge surveillance-tech company it's pretty much necessarily immoral (at least in an American context where harming liberty is immoral - other cultures disagree).
Like the guy in an old clip saying "What is my crime? Enjoying a meal? A succulent Chinese meal?" while being arrested for trying to pay with a stolen credit card. The succulence of the meal has nothing to do with it, and that it's your own government has nothing to do with it. It's just a sad way to try to distract from what's actually wrong with helping build tools for mass surveillance and autonomous murder.
I don't think that was intentional, but invading countries while trying to distract them with negotiations, randomly assassinating leaders and hoping everything just turns out well, threatening to "destroy civilizations", targeting bridges and more, all while aiding and abetting Israel, which is intentionally destroying pharmaceutical, educational, and other such civilian institutions, is all 100% intentional.
In some ways worse than bombing the school was the effort to implicitly deny it. The school was near a military facility, and itself was a military facility in the past. US intelligence screwed up. They should have simply acknowledged what happened and why. Their response just reeked of cowardice and malice at the highest level.
You'll have to live with it somewhere else. Neither HN's administrators nor readership will tolerate that kind of behavior. If you intend to participate on Hacker News over the long term, please take up the suggestion by the other poster to review the guidelines and adhere to them.
Of course it doesn't! I acknowledge that I have no first amendment right to speak in this forum, none at all. I merely observe that the people who run the forum are themselves champions of free speech, within limits of course.
Given most governments' policies and direct engagement in all kinds of monstrosities over the last millennia, there is really no reason to limit the case to the USA, indeed.
Because the government is comprised of Nazis now and is waging wars of expansionist conquest abroad and murdering domestic dissidents at home. Anyone working toward enabling that deserves to be on the receiving end of the systems they build.
Weird, why is it morally right for anyone to work with immoral organizations? -- That's what's in focus, right?
Whether the current government is immoral, or whether a government can even be immoral in a philosophical sense, is up for debate. But your question sounds like a deflection to me.
Heya pigpag. Your account seems to be shadowbanned, even though your comments seem normal. If you want people to be able to see your comments I recommend creating a new account or appealing to hn@ycombinator.com
Idk about morality, but it’s certainly a way to stop dystopian mass surveillance nightmares if everyone capable of building one refuses.
So if you live in the US and don’t want one government agency in the US to have this power (whose legality is ambiguous under current law), one way you can try to avoid it is by refusing to sell it to them and urging others to do the same.
It’s a long shot, sure, but it certainly seems more effective than hoping the legislature wakes up and reins in the executive these days.
You're using a strawman. This was never about just being employed by a government in the most tepid and universal sense.
Ex: "Why is it morally wrong for a US citizen to work with their government?", asked the employee compiling lists of American citizens of Japanese descent to be rounded up into Internment Camps.
"Lawful" as determined by the party executing the action is very different from actually lawful.
The courts can intervene later, but they can't un-bomb a hospital.
This is setting aside the obvious problem where governments will often set laws based on self-interest rather than morality, particularly when it comes to military conflict.
This government doesn't GAF what is "lawful" and what isn't. Was what happened to Pretti and Good in Minneapolis lawful? Would you work for ICE/CBP with no qualms at all?
See also the new national sport of hunting for fishing boats off the South American coast. Is that "lawful?"
And yes, since you went there: everything the Nazis did was "lawful." To the extent it wasn't "lawful," they made it "lawful."
> Don't attack law enforcement with a deadly weapon, whether it's a vehicle or gun.
How do you attack law enforcement with a gun while on your knees, with your arms pinned behind you and the gun holstered? It's interesting how we can watch the same video, and some people only see what they are told to see.
Thankfully Russia, China, etc. have the same qualms as we do in the United States and will refuse to send their brightest engineers to work on weapons so they don't become "morally compromised"!!!
I don't know if you're being sarcastic (sounds like you are!) but indeed a lot of engineers left Russia after the war in Ukraine started as they didn't want to be drafted and didn't want to contribute to the war effort in some way, even if indirectly. Of course, many stayed or even willingly help. See how many engineers from Iran work abroad too, for moral and other reasons.
The point is - this happens everywhere, it's not just some weird western thing.
We, the people, ostensibly get to say what these security interests are. Also, the security policy executed on by the state is not some immutable monolith. One can agree or disagree with it as it changes over time, and hopefully, influence its direction to arc towards goodness.
This was the same logic that was used when building nuclear weapons, and many of the scientists involved in that tried to find a different path (most notably Niels Bohr). I think we would be in a much better world if they had been successful. It's good that we're trying again w/ LLMs.
Probably because the articles are talking about how the AI will be used in immoral ways, and that the people who know that and continue doing the work must be morally compromised.
I know that there might be $several ways those highly-paid engineers might still rationalize their work.
Some of them might have ideological reasons to treat entire classes of people as unworthy of life. Within the model of their ideologies, the most evil things might be perfectly moral.
I wonder what reasons you have to disagree with people's moral stance against using AI as a weapon.
I stand by it. I'm not including all Google employees, ofc – there are some fantastic projects coming out of there – just the people working on their AI systems which will be accessible to the government with (effectively) no oversight.
I actually don't think it's so nuanced. We know (from its spat with Anthropic) that the government wants the ability to use AI to implement mass surveillance of Americans and fully autonomous killings. We also have ample data that this administration takes the law as a mere suggestion. It's imperative not to make their abuses easier.
Google's researchers aren't stuck there; their skills are in extraordinary demand and I'm sure Anthropic, for example, would hire them in an instant.
It’s funny to me how many progressive people I know and am friends with, people from marginalized demographics (trans, gay, Latino, Black), work at these AI companies.
They still have faded Bernie stickers on their cars, organize for No Kings, say “fuck SF, I’m in the East Bay for life, fuck tech” - and yet they make 7 figures Monday through Friday by supporting the death of society and democracy.
I don’t dare say anything though because “money is money”, the bay is expensive... but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.
Preach. The hypocrisy is startling. I think people started at these companies maybe years ago with "good intentions" and are willing to turn a blind eye. But now, given just how glaringly clear it is, I don't think it is really excusable anymore. To be clear, people can work wherever they want, including these companies, but what kills me is the hypocrisy. They are pathological liars to themselves if they somehow think they aren't complicit.
I made another comment above. People contain multitudes. Different contexts, different choices, not everyone is in a box defined by the viewer's world view. You can't really know what's going on with someone else, in their heads, in their context, so give them some grace. Instead, this person's "friends" are "hypocrites" who were "lured" into their choices. It's very condescending. I am suggesting the poster re-examine their own views on other people in light of this.
You're missing the point. They're just lamenting the contrast between what their friends say (fuck tech, no kings) and what they spend their workweek in service of.
It's not complicated: if these friends would take a non-society-destroying job at equal pay (who wouldn't?) then their values aren't driving the decision, money is. Fine, that's a choice adults get to make. But then own it and actually justify it on its merits, don't just retreat to "who are you to judge."
Didn’t say that. The friends in question clearly think it is. My point more generally was about people who publicly talk about $X being society-destroying while materially enabling $X for a paycheck.
It’s really not clear to me that they think that. OP was clearly saying that if you’re progressive, the intellectually honest position is to be anti-AI. I don’t think that necessarily follows.
I mean no harm in saying what I said, I love my friends. I just can’t stomach the hypocrisy; it’s what these companies are preying on and feeding off of.
My friends are incredibly bright and good at what they do, it’s why they all have the roles they have. It makes me sad (and frustrated) knowing they are lured in by enough money dangling in front of them that makes them swallow their souls and identity, while fuelling the fire in the same breath.
I have a deep amount of respect and gratitude for my friends (and anyone else) who choose to work at non-profits and more ethical, mission-based companies for less. I hate how much these AI companies and roles are offering people; it’s forced a lot of gifted people into a war machine.
Do you suspect there is any chance they are fully independent adult human beings with full agency, who have looked at the pros and cons, and chosen to make the choices they did with clear eyes? Do you think there's any context that might square their choices with their own internal principles that don't make them hypocrites? I mean these as real questions. For "friends you love" you really seem to take a dim view of their intelligence.
One of humanity's greatest weaknesses is cognitive dissonance. People can convince themselves of just about anything. And in some ways intelligence is a burden here. A fool will just do something with a reason of 'f you, that's why.' It's only the clever man that will even bother rationalizing the villain into the hero, and we're great at it. An interesting thought experiment is to ask people if they'd be willing to push a button that would randomly kill a person somewhere in the world for a million dollars. They'd have no direct accountability themselves and their action would be unknown to anybody else.
People will rationalize themselves into declaring this moral even though it is obviously one of the most overtly immoral actions possible. One friend of mine, a rather intelligent guy otherwise, even tried to construct a utilitarian argument that he'd donate some percent of his 'earnings' to life-saving charities, meaning he'd be saving more lives on net. The fact that if everybody thought and behaved the same way, the entirety of humanity would cease to exist, was a consideration he didn't have a response for. Let alone the fact that he had just rationalized his way into justifying nearly any deed imaginable, so long as you got paid enough for it.
I’ll be honest and say it’s made me question and reposition some of my friendships with a number of these friends. Some joined well before we knew how negatively AI would affect society, some have joined in recent years because they were offered 2x their already high comp package, and others will take any job they can get (who, admittedly, I judge far less, as I know they are just needing to survive in a HCOL city).
My dim view is more of the AI companies being absurdly overvalued, with too much money to know what to do with, which feeds downwards into compensation packages, which lure in “innocent” individuals who can’t say no. It’s not been a healthy market to be vulnerable in; most companies outside AI just aren’t getting the same funding or can’t compete at all - and it’s a shit storm.
I agree with the intent of your rhetorical question, so I'm jesting with you. I'm justifying my "yes" with the hopefully humorous distraction that every person, including American taxpayers, has at some point made a nonsustainable/selfish (my definition of immoral) decision.
That's not a productive stance to take if you're trying to act in good faith and be an agent of progress, even assuming morality isn't relative and the context is nuanced.
Why would they be morally compromised? So the ones building open-source models should be as well because some terrorist will use the model to do nefarious stuff?
An AI researcher can work anywhere they want, can't they? At the minimum they could work in a different field entirely. It seems like a false dichotomy to frame the question around laws.
Any AI researcher who refuses to support his own country in a technological arms race is morally bankrupt, foolishly naive, and does not deserve to enjoy the way of life created for him by those who sacrificed their lives.
> Any AI researcher who continues to work here is morally compromised.
Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.
Any AI researcher who continues to work here is morally compromised.