When ChatGPT churns out boilerplate code and ready-made snippets for your projects, it's easy to fall into the trap of "I am building this" or "I am more productive now", but in the grand scheme of things, "ChatGPT knows" is still no different from "Google knows" or "Wikipedia knows" or "Stack Overflow knows".
At the end of the day, we have just replaced one kind of "reference monster" with another, one that feels somewhat interactive and intimate, is good at searching and filtering, and delivers everything through a single interface.
But eventually, you must still learn the technical concepts the hard, old-school way. AI is no substitute for that, and it won't be even if AGI ever arrives.
In some ways, an LLM is more deceptive than Google or Wikipedia because it gives you the false sense that you've achieved something or learned something when, in the strict technical sense, you actually haven't.
Some are curious, some are anxious about getting things wrong, some are doing the bare minimum, and yes, some are full of themselves. Each one uses the tool differently. But I get them to ask it about these things too ("are there things I am missing, because I want to get this right", "why is my boss saying I am full of myself", "my boss told me he doesn't have time to teach me about X, but if I am curious that's something I should look into", etc.), which you couldn't do with them in the past.