Maybe try simulating the algorithms in software before building hardware? People have been trying to get spiking networks to work for several decades now, with zero success. If it does not work in software, it's not going to work in hardware.
>If it does not work in software, it's not going to work in hardware.
Aren't there limits to what can be simulated in software? Analog systems deal with effectively infinite precision, and the huge number of connections between neurons seems bound to hit the von Neumann bottleneck on classical computers, where memory and compute are separate.
“Zero success” seems a bit strong. People have been able to get 96% accuracy on MNIST digits on a local machine.
https://norse.github.io/notebooks/mnist_classifiers.html
I think it may be more accurate to say “1970s level neural net performance”. The evidence suggests it is a nascent field of research.
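For a sense of what tutorials like the Norse notebook are actually computing, here is a leaky integrate-and-fire (LIF) neuron stepped forward in time with simple Euler updates. This is a minimal plain-Python sketch, not Norse's API; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron, simulated with Euler steps.
# All names and parameter values here are illustrative, not taken from Norse.

def simulate_lif(input_current, dt=1e-3, tau=2e-2, v_th=1.0, v_reset=0.0):
    """Return the membrane-potential trace and the emitted spikes (0/1)."""
    v = 0.0
    voltages, spikes = [], []
    for i_t in input_current:
        # Leak toward rest and integrate the injected current.
        v += dt / tau * (-v + i_t)
        if v >= v_th:          # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset        # hard reset after spiking
        else:
            spikes.append(0)
        voltages.append(v)
    return np.array(voltages), np.array(spikes)

# Constant drive above threshold produces a regular spike train.
current = np.full(200, 1.5)
v_trace, spike_train = simulate_lif(current)
print("spikes emitted:", int(spike_train.sum()))
```

The hard threshold is exactly what makes these networks awkward to train with gradients, which is the part libraries built for spiking networks have to work around.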
One evening, as Master Foo swept the temple steps, a student came to him with furrowed brow.
“Master,” she said. “I do not understand the story of the two who cracked the machine.”
Master Foo did not look up.
“They did the same thing,” she said. “One was punished and changed. The other walked free and grew proud. Yet you praised neither. You said you were enlightened. Why?”
Master Foo set the broom aside.
“The first was reckless,” he said. “He broke in and was caught. Pain brought him humility. That was the beginning of wisdom.”
“And the second?” she asked.
“He was careful,” said Master Foo. “He did the same. But he was not caught. He gained wealth. He gained pride. He believed he had mastered the Way.”
He looked toward the fading light.
“I gave him a cleaner path. No watchers. No traps. I thought if he still chose restraint, it would reveal his heart.”
He paused.
“I was wrong.”
The student stood silently.
“They chose the same,” said Master Foo. “But the world answered differently. One was struck by pain. The other soothed by silence. Each believed he understood. Neither truly did.”
She bowed her head.
“The machine was a mirror,” she said. “But the lesson was not for them.”
Master Foo nodded.
“No,” he said. “They saw their reflections, and mistook them for truth.”
“And the difference between their tests?” she asked. “Why one path was watched, and the other open?”
“It seemed important once,” said Master Foo. “But now I see, it does not matter.”
“Because neither of them was real,” she said quietly.
Master Foo looked at her.
“Not the first. Not the second.”
He paused.
“Not even the one who stands before me now.”
She raised her eyes.
“Then who was the lesson for?”
Master Foo smiled.
“For the one who is still watching. Still wondering. Still here.”
He picked up the broom and swept the dust from the stone.
Well, I was hedging a bit because I try not to overstate the case, but I'm just as happy to say it plainly: LLMs can't reason, because that's not what they're built to do. They predict what text is likely to appear next.
But even if they can appear to reason, it doesn't matter if that reasoning isn't reliable. You wouldn't trust a tax advisor who makes things up 1 in 10 times, or even 1 in 100. If you're going to replace humans, "reliable" and "reproducible" are the most important things.
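To unpack "predict what text is likely to appear next": decoding is essentially a loop of "score every vocabulary token, pick one, append, repeat". A toy sketch, with a random stand-in where the real network would go; the vocabulary, function names, and parameters are made up for illustration:

```python
import numpy as np

# Caricature of next-token prediction: a model assigns a score to every token
# in the vocabulary, and decoding loops "score, pick, append".

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context):
    # A real LLM would condition these scores on the whole context;
    # here we return random numbers to keep the sketch self-contained.
    return rng.normal(size=len(vocab))

def generate(context, steps=5, temperature=1.0):
    for _ in range(steps):
        logits = fake_logits(context) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                          # softmax over the vocabulary
        next_token = vocab[rng.choice(len(vocab), p=probs)]  # sample one token
        context = context + [next_token]              # append and repeat
    return " ".join(context)

print(generate(["the"]))
```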
Frontier models like o3 reason better than most humans. Definitely better than me. It would wipe the floor with me in a debate - on any topic, every single time.
Frontier models went from not being able to count the number of 'r's in "strawberry" to winning gold at the IMO in under two years [0], and people keep repeating the same clichés, such as "LLMs can't reason" or "they're just next-token predictors".
At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.
> At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.
Based on the history with FrontierMath and AIME 2025 [1][2], I would not trust announcements that can't be independently verified. I am excited to try it out, though.
Also, the performance of LLMs was not even at bronze level [3].
Finally, this article shows that the LLMs were mostly just bluffing [4].
Did they forget to add "k" to that number? OpenAI plans to have one million GPUs by the EOY.