
Israel uses such a system to decide who should be bombed to death, and where. Is that direct enough control of weapons to qualify?


For the downvoters:

https://www.972mag.com/lavender-ai-israeli-army-gaza/

It is so sad that mainstream narratives are upvoted and do not require sources, whereas heterodoxy is always downvoted. People would have downvoted Giordano Bruno here.


It's mainstream enough to have a Wikipedia article.

https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...


Giordano Bruno would have to show off his viral memory palace tricks on TikTok before he got a look in.


This is awful


On the other hand, indiscriminately firing rockets and targeting civilians, as Hamas has done for decades, is loads better!


You're comparing the actions of what most people here view as a democratic state (parliamentary republic) with those of an opaquely run terrorist organization.

We're talking about potential consequences of giving AIs influence on military decisions. To that point, I'm not sure what your comment is saying. Is it perhaps: "we're still just indiscriminately killing civilians just as always, so giving AI control is fine"?


> we're still just indiscriminately killing civilians just as always, so giving AI control is fine

I don't even want to respond, because the "we" and "as always" here are doing a lot of work. I don't have it in me to have an extended discussion about how indiscriminately killing civilians was never accepted practice in modern warfare. Anyway.

There are two conditions under which I can see this argument(?) being useful: either you assume their goal is to indiscriminately kill civilians and ML helps them do it, or you assume their ML tools make targeting of militants less precise, causing more civilian deaths contrary to intent. Which one is it? Cards on the table.

> We're talking about potential consequences of giving AIs influence on military decisions

No, I replied to a comment that was talking about a specific example.


> I don't have it in me to have an extended discussion to address how indiscriminately killing civilians was never accepted practice in modern warfare.

I did not claim it is/was accepted practice. I was asking if "doing it with AI is just the same so what's the big deal" was your position on the general issue (of AI making decisions in war), which I thought was a possible interpretation of your previous comment.

> No, I replied to a comment that was talking about a specific example.

OK. That means the two of us were/are just talking past each other and won't be having an interesting discussion.


> doing it with AI is just the same so what's the big deal

Is that implying fewer civilian deaths is NOT a big deal?

I think the parents and children of people who were killed would disagree with you.


I'm sure you can understand that both of them are awful, and one does not justify the other (feel free to choose which is the "one" and which is the "other").


Oh, totally. If there were two sides indiscriminately killing each other for no reason I couldn't say one justifies the other.

But back to the topic: if one side is using ML to ultimately kill fewer civilians, then this is a bad example against using ML.


> But back to the topic, if one side is using ML to ultimately kill fewer civilians then this is a bad example against using ML.

Depends on how that ML was trained and how well its engineers can explain and understand how its outputs are derived from its inputs. LLMs are notoriously hard to trace and explain.


I don't disagree.


Also, the military is notorious for ignoring requests from scientists, for example the petition not to use the nuclear bomb as a weapon of war.

https://en.m.wikipedia.org/wiki/Szil%C3%A1rd_petition

So the developers may program the AI to be careful, but the military has the final word on whether the AI is set for safety or aggressiveness.



