Goals in the context of AI aren’t the type of thing you’re arguing against here. AI can absolutely have goals, sometimes several at once (e.g. soccer-playing AIs). Other times the goal might be “predict the next token” or “maximise score in an Atari game”, but it’s still a goal, even without philosophical baggage about e.g. the purpose of life.
Those goals aren’t necessarily best achieved by humanity continuing to exist.
(I don’t know how to even begin to realistically calculate the probability of a humanity-ending outcome, before you ask).
What the parent is saying is that an AI (that is, an AGI, since that is what we are discussing) gets to pick its goals. For some reason, humans have a fear of AI killing all humans in order to achieve some goal. The obvious solution is thus to achieve some goal with some human constraint. For example, maximize paperclips per human. That actually probably speeds up human civilization across the universe.

No, what people are really afraid of is the AI changing its goal to killing humanity. That’s when humans truly lose control, when the AI can decide. But then the parent’s comment does become pertinent: what would an intelligent being choose? Devolving into nihilism and self-destructing is just as probable as choosing some goal that leads to humanity’s end.

That’s just scratching the surface. For instance, to me it is not obvious whether or not empathy for other sentient beings is an emergent property of sentience. That is, lacking empathy might be a problem in human hardware, as opposed to empathy being inherently human. The list of these open, unknowable questions is endless.
> The obvious solution is thus to achieve some goal with some human constraint.
One of the hard parts is specifying that goal. This is the “outer alignment problem”.
Paperclips per human? That’s maximised by one paperclip divided by zero humans, or by a universe of paperclips divided by one human if NaN doesn’t give a better reward in the physical implementation.
If you went for “satisfied paperclip customers”? Then wirehead or drug the customers.
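To make that specification-gaming point concrete, here is a minimal sketch (function name and numbers invented for illustration, not from any real system) of how a naive “paperclips per human” ratio objective degenerates at the edges:

```python
# Hypothetical "paperclips per human" reward; everything here is made up.
import numpy as np

def paperclips_per_human(paperclips: float, humans: float) -> float:
    return paperclips / humans  # naive ratio objective

print(paperclips_per_human(100.0, 8e9))   # ~1.25e-08: the intended regime
print(paperclips_per_human(100.0, 2.0))   # 50.0: halving humans doubles reward
print(paperclips_per_human(1e30, 1.0))    # 1e+30: universe of clips, one human

# Plain Python raises ZeroDivisionError at humans == 0; an implementation
# with IEEE-754 semantics returns inf (or nan for 0/0) instead, which is
# the "if NaN doesn't give a better reward" caveat above.
print(np.float64(100.0) / np.float64(0.0))  # inf, with a runtime warning
```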
Then you have the inner alignment problem. There are instrumental goals, things which are useful sub-steps to larger goals. AI can and do choose those, just as we humans do, e.g. “I want to have a family” which has a subgoal of “I want a partner” which in turn has a subgoal of “good personal hygiene”. An AI might be given the goal of “safely maximise paperclips” and determine the best way of doing that is to have a subgoal of “build a factory” and a sub-sub-goal of “get ten million dollars funding”.
But it’s worse than that, because even if we give a good goal to the system as a whole, as the system is creating inner sub-goals, there’s a step where the AI itself can badly specify the sub-goal and optimise for the wrong thing(s) by the standards of the real goal that we gave the system as a whole. For example, evolution gave us the desire to have sex as a way to implement its “goal” (please excuse the anthropomorphisation) of maximising reproductive fitness, and we invented contraceptives. An AI might decide the best way to get the money to build the factory is to start a pyramid scheme.
Also, it turns out that power is a subgoal of a lot of other real goals, so it’s reasonable to expect a competent optimiser to seek power regardless of what end goal we give it.
If you want to call them “tasks” you can, but the problem still exists, and AI can and do create sub-tasks (/goals) as part of whatever they were created to optimise for.
You might find it easier to just accept the jargon instead of insisting the word means something different to you.
Your left is my right, and with your definition “get laid” is a task from the point of view of evolution and a goal from the point of view of an organism.
It’s in much the same vein that it doesn’t matter if submarines “swim”, they still move through water under their own power; and it doesn’t matter if your definition of “sound” is the subjective experience or the pressure waves, a tree falling in a forest with nobody around to hear it will still make the air move.
Whether or not AI have any subjective experience comparable to “consciousness” or “desire” is also useful to know, and in the absence of a dualistic soul it must in principle be as possible for a machine as for a human (“neither has that” is a logically acceptable answer), but I don’t even know if philosophy is advanced enough to suggest an actionable test for that at this point.
(That said, AI research does use the term “goal” for things the researchers want their AI to do. Domain specific use of words isn’t necessarily what outsiders want or expect the words to mean, as e.g. I frequently find when trying to ask physics questions).
These definitions and their distinction are particular and important in AI. The mistaken usage of these terms by machine learning experts does not change their global definition.
> Your left is my right, and with you definition “get laid” is a task from the point of view of evolution and a goal from the point of view of an organism.
Get laid is a task, not a goal. Reproduction is a task, not a goal. The goal is pleasure.
> The mistaken usage of these terms by machine learning experts does not change their global definition.
Ah, I see you’re a linguistic prescriptivist.
I can’t see your definition in any dictionary, which spoils the effect, but it’s common enough to be one.
> The goal is pleasure.
Evolution is the form of intelligence that created biological neural networks, and simulated evolution is sometimes used to set weights on artificial neural nets.
From evolution’s perspective, if you can excuse the anthropomorphisation, reproduction is the goal. Evolution doesn’t care if we are having fun, and once animals (including humans) pass reproductive age, we go wrong in all kinds of different and unpleasant ways.
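(A toy illustration of that “simulated evolution sets weights” point, with an invented task and made-up hyperparameters rather than anything from a real project: a mutate-and-select loop can tune the weights of a single linear layer with no gradient information at all.)

```python
# Toy neuroevolution sketch: random mutation plus selection on one linear layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5])             # target weights to recover

def fitness(weights):
    return -np.mean((X @ weights - y) ** 2)     # higher (closer to 0) is better

population = [rng.normal(size=3) for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)  # selection
    parents = population[:5]
    population = parents + [p + rng.normal(scale=0.1, size=3)  # mutation
                            for p in parents for _ in range(3)]

best = max(population, key=fitness)
print(best, fitness(best))  # weights near [1.5, -2.0, 0.5], fitness near 0
```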