Having an actual human who is a "brilliant private tutor" is an enormous privilege. A chatbot is not a brilliant private tutor. It is a private tutor, yes, but if it were human it would be guilty of malpractice. It hands out answers but not questions. A tutor's job is to cause the child to learn, to be able to answer similar questions. A standard chatbot's job is to give the child the answer, thus removing the need to learn. Learning can still happen, but only if the child forces it themselves.
That's not to say that a chatbot couldn't emulate a tutor. I don't know how successful it would be, but it seems like a promising idea. In actual practice, that is not how students are using them today. (And I'd bet that if you did have a tutor chatbot, that most students would learn much more about jailbreaking them to divulge answers than they would about the subject matter.)
As for this idea that replacing effort isn't a problem, I suggest you do some research, because evidence to the contrary is everywhere. Talk to a teacher. Or a psychologist, where they call it "depth of processing" (which, alongside frequency of exposure, is a primary determinant of how much of something is retained). Or just go to a gym and see how many people are getting stronger by paying 24/7 brilliant private weightlifters to do the lifting for them.
Regarding your concerns about tutor emulation, your argument seems to be that students use chatbots as a way to cheat rather than as a tutor.
My pushback is that it's very easy to tell a chatbot to give you hints that lead to the answer, and to get deeper understanding by asking follow-up questions, if that's what you want. Cheating vs. putting in work has always been a choice students have to make, though, and I don't think AI is going to change the number of students making each choice (or if it does, it won't be by a huge percentage). The gap in skills between the groups will grow, but there will still be a group of people who became skilled because they valued education and a group that cheated and didn't learn anything.
> A standard chatbot's job is to give the child the answer, thus removing the need to learn.
An LLM's job is not to give the child the answer (implying "the answer to some homework/exam question"); it's to answer the question that was asked. A huge difference. If you ask it to ask a question, it will do so. Over the next 24 hours (as of today, December 5th, 2025), hundreds of thousands of people will write a prompt that includes exactly that: "ask me questions".
> Learning can still happen, but only if the child forces it themselves.
This is literally what my original comment said, although "forcing" is a purely negative framing; rather, "learning can still happen, if the child wants it to". See this:
> In e.g. the US, it's a huge net negative because kids probably aren't taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.
I never claimed that replacing effort isn't a problem either, just that such a downside was never brought up in the context of access to a brilliant human tutor, yet it suddenly becomes an impossible-to-overcome issue when it comes to LLMs.