perfobotto's comments

It’s very simple. Apple doesn’t pay as well as other places, and their stock price increases and their flashiness are not enough anymore to keep people around. It’s time they realize they are not the darling they used to be.


Where do you think Srouji will go next? AMD? Nvidia?


Maybe OpenAI or Anthropic


There’s no way off-chip HBM has the same or better bandwidth than on-chip memory.


> MemoryX has lower latency, but also far lower bandwidth


Is this approach viable for training? What about kernels that require different grids?


What an amazing write up! Thank you very much!


To be clear, a Cerebras chip consumes a whole wafer and has only 44 GB of SRAM on it. To fit a 405B model in bf16 precision (excluding KV cache and activation memory usage) you need 19 of these “chips”, and the requirement grows with sequence length because of the KV cache. Looking online, it seems a single wafer can fit between 60 and 80 H100 dies, so using wafer manufacturing cost as a metric it’s equivalent to using >1500 H100s.
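(As a rough sanity check of those numbers, here is a back-of-the-envelope sketch in Python. It assumes the 44 GB of SRAM per wafer and the 60-80 H100 dies per wafer quoted above, which I haven't independently verified.)

    import math

    # Back-of-the-envelope check: bf16 weights only, no KV cache or activations.
    params = 405e9                                       # 405B parameters
    weights_gb = params * 2 / 1e9                        # bf16 = 2 bytes/param -> ~810 GB
    sram_per_wafer_gb = 44                               # quoted SRAM per Cerebras wafer
    wafers = math.ceil(weights_gb / sram_per_wafer_gb)   # ceil(810 / 44) = 19 wafers
    print(wafers, 60 * wafers, 80 * wafers)              # 19 wafers, ~1140-1520 H100 dies of wafer area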


The budget these companies spend on this tech is seriously mind boggling to me.


Is wafer cost a major factor in the actual chip price?


This is supposed to be a product, not a research artifact.


What stops nvidia from making sure this stops working in future driver releases?


The law, hopefully.

Beeper mini only worked with iMessage for a few days before Apple killed it. A few months later the DOJ sued Apple. Hacks like this show us the world we could be living in, a world which can be hard to envision otherwise. If we want to actually live in that world, we have to fight for it (and protect the hackers besides).


I was thinking the same but in terms of firmware updates.


I mean, he was right … for what we knew at that time. He predicted correctly that the only way to achieve general intelligence would be to mimic the extremely complex neural networks in our brain, something the hardware of the time was very far from being able to do. He could not have predicted that things would move so fast on the hardware side (nobody could have), which is what made this somewhat possible. I would argue we are still a bit short of the computing power needed to make this a reality, but it is now much more obvious that it is possible if we continue on this path.


> He predicted correctly that the only way to achieve general intelligence would be to mimic the extremely complex neural networks in our brain

Besides the name, neural networks and human brains don't have that much in common.


Most of the relevant similarities are there. Every plausible model in computational neuroscience is based on neural nets or a close approximation thereof; everything else is either magic or a complete non-starter.


> nobody could have

Hans Moravec at McCarthy's lab in roughly this timeframe (the 70s) wrote about this then -- you can find the seed of his 80s/90s books in text files in the SAIL archive https://saildart.org/HPM (I'm not going to look for them again). Easier to find: https://web.archive.org/web/20060615031852/http://transhuman...

(Same McCarthy as in this debate.)

Gordon Moore made up Moore's Law in 1965 and reaffirmed it in 1975.


ok, except detailed webs of statistical probabilities only emit things that "look right", which is not at all the idea of General Artificial Intelligence.

Secondly, people selling things and people banding together behind one-way mirrors have a lot of incentive to devolve into smoke and mirrors.

Predicting is a form of social grandstanding, as well as insight. Lots of ordinary research has insight without grandstanding, so this is a media item as much as it is real investigation, IMHO.


To be honest, restricting funding for the kind of symbolic AI research criticized in this discussion might have helped AI more than it hurt, by eventually pivoting the field toward neural networks and backpropagation. I don’t know how much of a good thing it would have been if that kind of research had continued to be fully funded.


Symbolic AI is still alive and kicking (https://arxiv.org/abs/2402.00854). I'm also liking the experiments around Graph Neural Networks and hybrids thereof.


> except detailed webs of statistical probabilities only emit things that "look right", which is not at all the idea of General Artificial Intelligence.

I mean, this is what evolution does too. The variants that 'looked right' but were not fit to survive got weeded out. The variants that were wrong but didn't negatively affect fitness to the point of non-reproduction stayed around. Looking right and being right are not significantly different in this case.


Yes, you have made the point that I argue against above. I claim that "looking right" and "being right" are absolutely and fundamentally different at the core. At the same time, I acknowledge that from a tool-use, utilitarian, automation, or sales point of view, results that "look right" can be applied for real value in the real world.

Many corollaries exist. My claim is that "looking right" is not at all General Artificial Intelligence.


"Being right" seems to be an arbitrary and impossibly high bar. Human at their very best are only "looks right" creatures. I don't think that the goal of AGI is god-like intelligence.


Humans "at their very best" are at least trying to be right. Language models don't - they are not concerned with any notion of objective truth, or even with "looking right" in order to gain social status like some human bullshitter - they are simply babbling.

That this strategy is apparently enough to convince a large number of (supposedly) intelligent people otherwise is very troubling!

Not saying that General AI is impossible, or that LLMs couldn't be a useful component in their architecture. But what we have right now is just a speech center, what's missing is the rest of the brain.

Also, simply replicating / approximating something produced by natural evolution seems to me like the wrong approach, for both practical and ethical reasons: if we get something with >= human-like intelligence, it would be a black box whose workings we could never understand, and it might be a sentient being capable of suffering.


What makes it "much more obvious that it is possible" to simulate the human brain? If you're thinking of artificial neural nets, those clearly have nothing to do with human intelligence, which was very obviously not learned by training on millions of examples of human intelligence; that would have been a complete non-starter. But that's all that artificial neural nets can do, learn from examples of the outputs of human intelligence.

It is just as clear that human brains have one more ability, beyond the ability to learn from observations: the ability to reason from what is already known, without training on any more observations. That is how we can deal with novel situations that we have never experienced before. Without this ability, a system is forever doomed to be trapped in the proximal consequences of what it has observed.

And it is just as clear that neural nets are completely incapable of doing anything remotely like reasoning, much as the people in the neural nets community keep trying, and trying. The branch of AI that Lighthill almost dealt a lethal blow to (his idiotic report brought about the first AI winter), the branch inaugurated and championed by McCarthy, Michie, Simon and Newell, Shannon, and others, is thankfully still going and still studying the subject of reasoning, and making plenty of progress, while flying under the hype.


Snaaaake


Whenever I see a product that advertises itself based on the technology it is built on I’m skeptical. Rust doesn’t make a game engine good.


Well, it's a game engine, and the language it is based on is important: you would for sure pass on a game engine written in JavaScript and instead choose one that uses technologies you know.

So I think it is important to explicitly state which technologies a game engine uses.


Depends, PlayCanvas, BabylonJS and ThreeJS are written in JavaScript.

Although you might say that there is still the C++/Rust code of the browser rendering engine to take into account.

