Why shouldn't AI be able to sufficiently model all of this in the not-so-far future? Why shouldn't it have sufficient access to new data and sensors to collect information on its own, or at least have a system that feeds it?
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's basically never going to go away on its own but could easily be solved right now. And whichever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed-upon quality. You know, just like the human it'd replace.
> Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
> Why shouldn't AI be able to sufficiently model all of this
I call it the banana bread problem.
To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.
At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.
I hear you, but counterpoint: if you had an AI that monitored social media for mentions, used vision and audio capture in cafes to see what people ordered and how they reacted to it, had access to customer purchase data to see if people kept coming back to particular cafes and what they ordered over and over again...
Granted, there's lots that's dystopian about that picture, I'm not advocating for it, but it does start to feel like the main value of the "curator" is actually just data capture. Then they put their own subjective take on that data, but I'm not totally convinced that's better than something that could tell me a data-driven story of: "Here are the top three banana breads in the city that customers keep coming back to have a taste orgasm for".
I don't know though, it's a brave new world and I'm skeptical of anyone who thinks they know how all this will play out.
I don't even think it'd be this direct, except maybe in the beginning.
You can already monitor things like heart rate via video motion amplification, and track how and when people go where. And probably many other minor factors I can't think of at the moment.
Gather up enough of those and you should be able to establish a very strong side channel into when a restaurant might have new items, its food quality, and how that changes over time.
For example: how long does a person in age range a, who entered alone with heart rate x and left with heart rate y, stay when they liked the food versus when they didn't? Or something like that...
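To make the idea concrete, here's a minimal toy sketch of that kind of side channel. Everything here is an assumption for illustration: the `Visit` fields, the weights, and the idea that long dwell times plus a calming heart-rate change signal enjoyment are all made up, not a real inference method.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Visit:
    restaurant: str
    dwell_minutes: float  # time between entering and leaving
    hr_delta: float       # exit heart rate minus entry heart rate (bpm)

def quality_scores(visits: list[Visit]) -> dict[str, float]:
    """Toy heuristic: reward long, relaxed visits; penalize short,
    agitated ones. The linear weights are arbitrary."""
    by_restaurant: dict[str, list[Visit]] = {}
    for v in visits:
        by_restaurant.setdefault(v.restaurant, []).append(v)
    scores = {}
    for name, vs in by_restaurant.items():
        avg_dwell = mean(v.dwell_minutes for v in vs)
        avg_hr = mean(v.hr_delta for v in vs)
        # arbitrary combination: longer stays up, rising heart rate down
        scores[name] = avg_dwell - 2.0 * avg_hr
    return scores

visits = [
    Visit("Cafe A", 55, -3.0),  # long stay, calmed down
    Visit("Cafe A", 60, -1.0),
    Visit("Cafe B", 15, +6.0),  # quick exit, agitated
    Visit("Cafe B", 20, +4.0),
]
scores = quality_scores(visits)
# Cafe A (long, relaxed visits) outranks Cafe B (short, agitated ones)
```

A real system would obviously need to control for confounders (group size, time of day, why someone's heart rate rose), which is exactly why it would want as many of those minor sensor signals as it can get.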
In the end, a few public cameras or other sensors might be all that's needed. Even if we were to fix our portable wiretaps, I don't think a global surveillance society is avoidable.
We need to build a society that allows for a modern equivalent of privacy and rule of law within that reality. We might not be able to get away with going 5 mph over the limit or accidentally keeping a pen anymore, but neither do we want speeding in our neighborhoods or all our pens gone. So what's the solution here? Randomly sampling who gets punished? Law-breaking quotas? Fines scaled to the severity of the crime and to assets and income? Figuring out how to measure intent? Replacing all punishments for minor crimes with corporal ones? idk