
> that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI. That, to me, is deeply disturbing, and very very difficult to justify.

I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were, quite simply, "intelligence-challenged" reps who couldn't even understand my issues, I'm not sure this is a bad thing.

In my limited experience with AI agents, they've been far more helpful and far faster: they actually seem to understand the issue immediately, and then either give me the solution (e.g. the obscure fact I needed from a support PDF that no regular rep would probably ever have known) or escalate me immediately to the right person who can actually help.

And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.



While I get the vibes, and have had human customer support be very weird on a few occasions, replacing mediocre humans with mediocre AI isn't a win for customers getting actual solutions.

And right now, the LLMs aren't really that smart; they make up for low intelligence by being superhumanly fast and able to hold a lot of context at once. That's better than every response coming from a randomly selected customer support agent (as I've experienced), agents who don't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, but it's still not great.

LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.


I think there's an interesting implication here: the actually good (for the customer) support experience is a real human who has access to a RAG where they can look up company documents/policies/procedures, but who can still use their human brain to make judgement calls (and, of course, is willing to, y'know, read the notes left by the previous rep).
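A minimal sketch of what that rep-side lookup tool could look like. This uses plain keyword overlap instead of a real embedding/RAG pipeline, and the policy documents, function names, and scoring are all made up for illustration; the point is just that the tool returns snippets for the human to read and apply, not answers it sends on its own:

    # Hypothetical rep-side lookup: query -> top policy snippets to review.
    from collections import Counter

    POLICY_DOCS = {
        "refunds": "Refunds are issued within 14 days of purchase with proof of payment.",
        "shipping": "Lost packages are reshipped after a 10 business day carrier investigation.",
        "accounts": "Account recovery requires the original email address or a government ID.",
    }

    def score(query: str, doc: str) -> int:
        # Count overlapping words between the rep's query and a policy document.
        q = Counter(query.lower().split())
        d = Counter(doc.lower().split())
        return sum((q & d).values())

    def lookup(query: str, top_n: int = 2):
        # Return the top_n snippets; the human rep decides what to do with them.
        ranked = sorted(POLICY_DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
        return ranked[:top_n]

    if __name__ == "__main__":
        for name, text in lookup("customer says their package was lost in shipping"):
            print(f"[{name}] {text}")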


> replacing mediocre humans with mediocre AI isn't a win for customers getting actual solutions.

No it's not, but that's not what I described. I described replacing mediocre humans with better AI for at least the first level of customer service.



