I still haven't found a use for LLMs. Their propensity to get things wrong, combined with the fact that they're fancy statistics machines built on existing data, just feels like a cheap trick. It's still a fancy Markov chain; it just has guard rails.
I'm not swigging the Kool-Aid yet either, but I think you're being a little dismissive. I've found ChatGPT useful for some sorts of questions that can only really be posed in natural language, particularly grammatical oddities that come up as I learn Spanish.
Probably the single most impressive result I've had so far was with "What's that weird thing on the top of a P-38's engine nacelle that looks like a recessed sideways wheel?" Got it in one. I can't imagine getting anywhere with a conventional search engine there.
I definitely wouldn't rely on it for anything professional, though.
They're still extremely effective as a fuzzy search engine and translator (e.g., I can describe a function in plain English and get, in 5 seconds, something that would have taken me a minute to write).
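For example, asking for "a function that returns the longest gap in seconds in a list of timestamps" (a made-up request, just to show the kind of plain-English-to-code translation I mean) typically gets you something like:

```python
from datetime import datetime

def longest_gap_seconds(timestamps: list[datetime]) -> float:
    """Return the largest gap, in seconds, between consecutive timestamps."""
    if len(timestamps) < 2:
        return 0.0
    ordered = sorted(timestamps)
    return max(
        (later - earlier).total_seconds()
        for earlier, later in zip(ordered, ordered[1:])
    )
```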
Additionally (and more importantly), the most magical part is still the architecture: there seems to be no end to the expressivity of transformers as long as you can pour in more compute. That can be extremely powerful and lead to more generalizable reasoning given the right learning objective (e.g., posing tasks as RL Markov Decision Processes where state = text, actions = tokens, and reward = the performance of the generated code/math/language).
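To make that framing concrete, here's a toy sketch of my own (not any lab's actual training setup): the state is the text generated so far, each action appends one token, and the reward is sparse, arriving only at the end, e.g. from running the generated code against tests.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TextState:
    """MDP state: the prompt plus everything generated so far."""
    prompt: str
    tokens: list[str] = field(default_factory=list)

    @property
    def text(self) -> str:
        return self.prompt + "".join(self.tokens)

def step(state: TextState, token: str, done: bool,
         reward_fn: Callable[[str], float]) -> tuple[TextState, float]:
    """One transition: the action appends a token; reward is only
    granted on the terminal step (e.g. tests pass, answer is correct)."""
    next_state = TextState(state.prompt, state.tokens + [token])
    reward = reward_fn(next_state.text) if done else 0.0
    return next_state, reward

# Toy usage: reward comes from an external check on the final text.
s0 = TextState(prompt="def add(a, b):")
s1, r = step(s0, " return a + b", done=True,
             reward_fn=lambda text: 1.0 if "return a + b" in text else 0.0)
```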
If you look at the technical discussion from the perspective of individuals and companies creating sophisticated autocomplete, the whole fad seems ridiculous.
I feel quite strongly that LLMs are not a fad. Something like LLMs will still be a useful tool to people decades from now, especially given that a whole generation of students is now using/relying on them for coursework.
Even if we want to call them sophisticated autocomplete, why would I ever switch back to a dumb autocomplete?
Currently working on a personal pandas project (I have only a little prior experience with pandas), and the LLM is easily 10x better than googling. I ask it how to plot something simple and it spits out code that very rarely doesn't work. I'd easily pay $20 for this… but it's the free model that does it.
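For a flavour of what I mean, a question like "how do I plot monthly totals from a CSV with date and amount columns" (file and column names made up for the example) usually comes back as something like:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical CSV with 'date' and 'amount' columns.
df = pd.read_csv("sales.csv", parse_dates=["date"])

# Sum amounts per calendar month and plot them.
monthly = df.groupby(df["date"].dt.to_period("M"))["amount"].sum()
monthly.plot(kind="bar", title="Monthly totals")
plt.tight_layout()
plt.show()
```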
I don't doubt your experience at all, but I wonder what I'm doing wrong. I tried to use Copilot to help me learn Rust, but I gave up because almost every suggestion caused compile-time errors or runtime panics.
Just an anecdote, but I recently used LLMs as part of learning helmfile/helm/k8s. Mostly I used the LLM to help me find the right documentation: I knew what concept I was looking for, but not what it's called in these systems. Critically, I suppose, I never really asked it to write code for me; I asked it to help me find the concepts to read documentation about.
I haven't actually tried Copilot, so I can't comment on it directly, but I have found LLMs in general to be useful.
It might be that there's simply so much more content on the internet about pandas that the LLM internalized all of it, whereas Rust is a much more complex language with less written about it. IOW, my use case is a perfect match for the tool, yours is less so, and there isn't anything more to it.
Beyond coding assistance, ChatGPT is awesome at answering everyday questions where you previously had to dig the answer out of ad-bloated content farms.
Things like "how do I do this", "how does this work", questions about medicine, DIY, etc.
And unlike the content-farm internet, it lets you have a real conversation to get more precise information.
Is it 100% accurate? I think not. But it's trained on nearly all the books about all the topics, so I put a lot more trust in it than in SEO-optimized content farms.
The best way to chat with it is to imagine you're talking to someone who has read and remembered everything, with endless knowledge of every topic, but who acts like a fallible human when you speak with them.
Like pretty much every other knowledge product, the only real value I see in them is in helping humans think better. You can tell them to "interview" you or "question" you, and they can spark your creativity.
There just isn't a place for them in my life.