It is so sad that mainstream narratives are upvoted and do not require sources, whereas heterodoxy is always downvoted. People would have downvoted Giordano Bruno here.
You're comparing the actions of what most people here view as a democratic state (parliamentary republic) and an opaquely run terrorist organization.
We're talking about potential consequences of giving AIs influence on military decisions. To that point, I'm not sure what your comment is saying. Is it perhaps: "we're still just indiscriminately killing civilians just as always, so giving AI control is fine"?
> we're still just indiscriminately killing civilians just as always, so giving AI control is fine
I don't even want to respond, because the "we" and the "as always" here are doing a lot of work. I don't have it in me to have an extended discussion to address how indiscriminately killing civilians was never accepted practice in modern warfare. Anyways.
There are two conditions under which I can see this argument(?) being useful: either you assume their goal is indiscriminately killing civilians and ML helps them do it, or you assume their ML tools make targeting of militants less precise, causing more civilian deaths contrary to intent. Which one is it? Cards on the table.
> We're talking about potential consequences of giving AIs influence on military decisions
No, I replied to a comment that was talking about a specific example.
> I don't have it in me to have an extended discussion to address how indiscriminately killing civilians was never accepted practice in modern warfare.
I did not claim it is/was accepted practice. I was asking if "doing it with AI is just the same so what's the big deal" was your position on the general issue (of AI making decisions in war), which I thought was a possible interpretation of your previous comment.
> No, I replied to a comment that was talking about a specific example.
OK. That means the two of us were/are just talking past each other and won't be having an interesting discussion.
I'm sure you can understand that both of them are awful, and one does not justify the other (feel free to choose which is the "one" and which is the "other").
> But back to the topic, if one side is using ML to ultimately kill fewer civilians then this is a bad example against using ML.
Depends on how that ML was trained and how well its engineers can explain and understand how its outputs are derived from its inputs. LLMs are notoriously hard to trace and explain.