
But LLMs do not solve natural language understanding in any of the senses that phrase had before LLMs. Instead, they throw a completely new technique at it, one that sidesteps the need for language understanding altogether, and what do you know? For the purposes of responding in meaningful, generally sensible ways, it works amazingly well. And that is incredibly cool. But it doesn't solve all the problems that more historical approaches to machine language "understanding" were concerned with.

But there is no world representation inside an LLM, only text (words, letters) representations, so nothing the LLM does can be based on reasoning in a traditional sense.

I would wager that if we build an LLM from a given training data set, and then rebuild it with a heavily edited version of that data set that explicitly excludes certain significant areas of human discourse, the LLM will be severely impaired in its apparent ability to "reason" about anything connected with the excluded areas. That result ought to surprise you, since you think they are capable of reasoning beyond the training set. It wouldn't surprise me at all, since I do not believe that is what they are doing.

LLMs contain a model of human speech (really text) behavior that is almost unimaginably more complex than anything we've built before. But by itself that doesn't mean very much with respect to general reasoning ability. The fact that LLMs can convince you otherwise points, to me, to the richness of the training data in suitable responses to almost any prompt; suitable, that is, for the purpose of persuading you that there is some kind of reasoning occurring. But there is not. The fact that neither you nor I can really build a model (hah!) of what the LLM is actually doing doesn't change that.



> But LLMs do not solve natural language understanding in any of the meanings that the phrase meant before LLMs.

Are you saying that NLP as a field of research did not exist before LLMs? This is a continuation of research that has been in progress for decades.

> But there is no world representation inside an LLM, only text (words, letters) representations, so nothing the LLM does can be based on reasoning in a traditional sense.

Not true. The model has learned a representation of semantic relationships between words and concepts at multiple levels of abstraction. That is the entire point. That's what it was trained to do.

It's a vast and deep neural network with a very high dimensional representation of the data. Those semantic/meaning relations are automatically learned and encoded in the model.
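As a concrete (if drastically simplified) illustration of what "semantic relations encoded in a high-dimensional representation" means, here is a minimal Python sketch. The vectors are toy values I made up; in a real model they are learned from data and have hundreds or thousands of dimensions, but the geometric idea is the same: related concepts end up close together.

    import numpy as np

    # Toy 4-dimensional "embeddings". Real models learn these from data;
    # the numbers here are made up purely for illustration.
    embeddings = {
        "cat":    np.array([0.9, 0.1, 0.3, 0.0]),
        "kitten": np.array([0.8, 0.2, 0.4, 0.1]),
        "orange": np.array([0.1, 0.9, 0.0, 0.4]),
    }

    def cosine_similarity(a, b):
        # 1.0 means the vectors point the same way; near 0 means unrelated.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))  # high (~0.98)
    print(cosine_similarity(embeddings["cat"], embeddings["orange"]))  # much lower (~0.19)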


Damn HN and its delayed reply policies ...

> It's a vast and deep neural network with a very high dimensional representation of the data.

the data is text, so ...

It's a vast and deep neural network with a very high dimensional representation of *text*

And yes, to some extent, text represents the world in interesting ways. But not adequately, IMO.

If you were an alien seeking to understand the earth, humans' textual encoding thereof might be a place to start. But its inadequacies would rapidly become evident, I claim, and you would realize that you need a "vast and deep representation" of the actual planet.

> Are you saying that NLP as a field of research did not exist before LLMs? This is a continuation of research that has been in progress for decades.

Of course I'm not saying that (the first sentence). Part of my whole point is that LLMs are to NLP as rockets are to airplanes. They're fundamentally a "rip it up and start again" approach that discards almost everything everyone knew about NLP. The results are astounding, but the connection with, yes, "traditional" NLP is tenuous.


> Part of my whole point is that LLMs are to NLP as rockets are to airplanes.

Yes, it is deep learning applied to NLP. It makes the old designs obsolete.

> the data is text

It is not randomly generated text. There are patterns in that text. It was trained to model the semantics or "meaning" in the text. There is a structure in the text which the machine has recognized.

It automatically learned a model of many concepts without any of those concepts being explicitly programmed into it. That's the entire point of machine learning.
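To make "automatically learned, not explicitly programmed" concrete, here is a minimal sketch in plain Python. It is a bigram counter, nowhere near a transformer in scale or architecture (that simplification is mine, not a claim about GPT's internals), but it shows the same underlying principle: the only supervision is "predict the next token", and whatever structure the model ends up with comes entirely from regularities in the text.

    from collections import Counter, defaultdict

    # Tiny toy corpus; a real LLM trains on trillions of tokens.
    corpus = "the cat sat on the mat . the cat ate the orange .".split()

    # "Training" is just counting which token follows which.
    # No concept of "cat" or "orange" is programmed in; only co-occurrence statistics.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(token):
        # Most likely next token, given only what was seen in the corpus.
        counts = follows[token]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # 'cat' -- recovered purely from the text's statistics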

> But not adequately, IMO.

It is adequate for some things and not adequate for other things.

It seems that all you are saying is that GPT is not AGI and doesn't have human level of understanding and reasoning. No one disagrees with that.


> It seems that all you are saying is that GPT is not AGI and doesn't have human level of understanding and reasoning. No one disagrees with that.

No, I'm saying that LLMs perform what is essentially an amazing, really hard to understand party trick that happens to be so good it is actually useful. They may (or may not) be indicative, surprisingly, of how a lot of actual human speech production happens, and if so, that's a genuinely valuable discovery. That this is surprising represents a bit of a failure of our imagination, but it also leaves untouched the parts of speech production that don't use this sort of process.

Like actual reasoning.


Here's a prompt: "If there is a bob outside, and a bib inside, and the bib cannot see the bob, can the bob see the bib?"

Answer:

"If the bib cannot see the bob due to being inside, it is likely that the bob, which is outside, also cannot see the bib. The reason for the bib not being able to see the bob is presumably due to some kind of barrier, such as a wall or a window, which would likely obstruct the view for both the bib and the bob."

Prompt:

"What type of reasoning did you just apply to solve that task?"

Answer:

"In solving the task, I applied abductive reasoning. Abductive reasoning involves inferring the most likely explanation for a set of observations or statements. In this case, I inferred that a barrier (e.g., a wall or window) is the most likely explanation for why the bib cannot see the bob, and thus, the bob likely cannot see the bib either."

ChatGPT solves the problem by a mechanical reasoning process, not a "party trick". It's a machine, not a person at a party. Surely you don't believe that it has seen that exact sequence of questions and replies in its training set and is simply regurgitating. You can argue about whether it's actually using "abductive reasoning" or not, but that is beside the point: the point is that it involves some mechanical reasoning process over an interpretation of the prompt. It's not simple regurgitation.

AlphaZero learned to play Go starting with nothing but the rules of the game. What is it regurgitating there?


Alright so deep learning, the state of the art of AI, is a "party trick". AlphaZero is likewise a party trick. No "true" reasoning involved.

> Like actual reasoning.

You're relying on intuition and personal beliefs of what constitutes "true" reasoning instead of formal rigorous mathematical definitions of reasoning. The general concept of reasoning includes what the language models are doing when they solve natural language understanding tasks, by definition.

It just sounds like a No True Scotsman fallacy.


(follow up to adjacent comment)

So what I'm saying is, GPT "knows" what a cat is. It "knows" what an orange is. It has inferred these concepts from the data set.

Imagine approaching someone who is tripping on LSD and demanding they immediately solve a 10-digit multiplication problem, then saying "AHA! You cannot solve it, therefore you are incapable of any reasoning whatsoever!"


Also

> reasoning in a traditional sense.

We are talking about reasoning in a general sense. There are many types of reasoning in AI which I'm sure you know how to look up and read about. "Traditional" is not one of the categories.



