Hallucinations are a feature of reality that LLMs have inherited.
It’s amazing that experts like yourself who have a good grasp of the manifold MoE configuration don’t get that.
LLMs much like humans weight high dimensionality across the entire model then manifold then string together an attentive answer best weighted.
Just like your doctor occasionally giving you wrong advice too quickly so does this sometimes either get confused by lighting up too much of the manifold or having insufficient expertise.
I asked Gemini the other day to research and summarise the pinout configuration for CANbus outputs on a list of hardware products, and to provide references for each. It came back with a table summarising the pinouts for each of the eight products, and a URL reference for each.
Of the eight, three were wrong, and the references contained no information about pinouts whatsoever.
That kind of hallucination is, to me, entirely different from what a human researcher would ever do. They would say “for these three I couldn’t find pinouts”, or perhaps misread a document and mix up the pinouts from one model with another’s. They wouldn’t make up pinouts and cite a document that contained no such information.
Of course humans also imagine things, misremember etc, but what the LLMs are doing is something entirely different, is it not?
Humans are also not rewarded for making pronouncements all the time. Experts have a reputation to maintain and are likely more reluctant to give opinions they are not reasonably sure of. LLMs trained on the typical written narratives found in books, articles, etc. can be forgiven for thinking they should have an opinion on anything and everything. Point being that while you may be able to tune an LLM to behave some other way, you may find the new behavior less helpful.
> Hallucinations are a feature of reality that LLMs have inherited.
Huh? Are you arguing that we still live in a pre-scientific era where there’s no way to measure truth?
As a simple example, I asked Google about houseplant biology recently. The answer was confidently wrong, telling me that spider plants have a particular metabolic pathway because it confused them with jade plants, since the two are often mentioned together. Humans wouldn’t make this mistake because they’d either know the answer or say that they don’t. LLMs do this constantly because they lack understanding and metacognitive abilities.
>Huh? Are you arguing that we still live in a pre-scientific era where there’s no way to measure truth?
No. A strange way to interpret their statement! Almost as if you... hallucinated their intent!
They are arguing that humans also hallucinate: "LLMs much like humans" (...) "Just like your doctor occasionally giving you wrong advice too quickly".
As an aside, there was never a "pre-scientific era where there [was] no way to measure truth". Prior to the rise of the modern scientific fields, there have always been objective ways to judge truth in all kinds of domains.
Yes, that’s basically the point: what are termed hallucinations in LLMs are different from what we see in humans – even the confabulations that people with severe mental disorders exhibit tend to have some underlying order or structure. People detect inconsistencies in their own behavior and that of others, which is why even the rushed doctor in the original comment won’t suggest something wildly off the way LLMs routinely do. They might make a mistake or have incomplete information, but they will suggest things that fit a theory based on their reasoning and understanding, which yields errors at a lower rate and of a different class.
When you ask humans, however, there are all kinds of made-up "facts" they will tell you. Which is the point the parent makes (in the context of comparing to LLMs), not whether some legal database has wrong cases.
Since your example comes from the legal field, you probably know very well that even well-intentioned witnesses who don't actively try to lie can still hallucinate all kinds of bullshit, and even be certain of it. Even with eyewitnesses, you can ask five people and get several incompatible descriptions of a scene or an attacker.
>When you ask humans, however, there are all kinds of made-up "facts" they will tell you. Which is the point the parent makes (in the context of comparing to LLMs), not whether some legal database has wrong cases.
Context matters. This is the context in which LLMs are being commercially pushed to me. Legal databases also "inherit from reality", as they consist entirely of things from the real world.