The analogy stems from the notion that neural nets are "grown" rather than "engineered". Chris Olah has an old but good post with some specific examples: https://colah.github.io/notes/bio-analogies/
This was a good read. I was struck by the quantity of nuanced, applied know-how it took to build SmolLM3. I am curious about the rough cost of engineering and training SmolLM3: ~400 GPUs for at least a month and, judging by the set of book co-authors, 12 engineers for at least three months. Is $3-5M a fair ballpark number? The complementary question is how much experience, on average, the team members had doing ML and LLM training at scale before SmolLM3. The book is "up" on recent research, so I am surmising a PhD-centric team, each member with multiple systems built. This is not a commodity skill. What the book suggests to me is that an LLM applications startup would do best to focus on understanding the scope and know-how needed to start from post-training.
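For reference, here is the back-of-envelope behind that number; the cloud rate, ablation overhead, and loaded salary below are purely my guesses, not figures from the book:

    # Back-of-envelope only; every rate here is a guess, not a figure from the book.
    gpu_hours = 400 * 24 * 30            # ~400 GPUs for roughly a month
    gpu_rate = 2.0                       # assumed $/GPU-hour for H100-class cloud capacity
    main_run = gpu_hours * gpu_rate      # ~$0.58M for the final training run
    compute = main_run * 3               # assumed 3x overhead for ablations and failed runs

    people = 12 * 3 * 30_000             # 12 engineers, 3 months, assumed ~$30K/month loaded cost

    print(f"rough total: ${(compute + people) / 1e6:.1f}M")  # ~$2.8M

Under those assumptions the estimate lands near the low end of that range; higher on-demand pricing or more experimentation overhead would push it further into it.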
I'm a little ways through this and it's great so far, nice job.
One of the reasons people build one, though, is to learn. Most smart folks are quite aware that the reality of pre-training a real LLM is going to involve some head-banging against the wall (i.e., things don't go as smoothly as in a "building an LLM from scratch" book), and they want to go through the process.
Really impressive write-up. In your opinion, how long will this stay up to date? The field is constantly evolving; do you plan to keep updating this document?
Thanks! I expect the book will remain relevant as long as the Transformer architecture does. That’s why we mostly focus on topics we think will stand the test of time, but let’s see how that plays out :)
“QED and the Men Who Made It” [1] might be close to what you’re after for quantum theory at least. Unlike other popular accounts, it gets quite technical and covers a lot of the historical dead ends that people had during the development of quantum field theory.
Indeed, we opted for offline methods like Anchored Preference Optimization, as we found in the Open R1 project that doing multi-task RL on small models is quite a hassle to get right. With offline methods you focus much more on dataset curation / generation, which still gives faster iteration cycles at the model scale we’re dealing with!
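To make "offline" concrete, here is a minimal sketch of a DPO-style preference loss in plain PyTorch. It illustrates the general offline recipe (log-prob ratios of chosen vs. rejected completions against a frozen reference model); it is not our training code and not the exact APO objective, which modifies this loss:

    import torch
    import torch.nn.functional as F

    # Inputs are per-example summed log-probs of each completion under the
    # trainable policy and a frozen reference model.
    def preference_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
        chosen_margin = policy_chosen - ref_chosen        # how much the policy upweights the preferred answer
        rejected_margin = policy_rejected - ref_rejected  # ...and the rejected one
        return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

    # Toy batch: random numbers standing in for real log-probs.
    fake = [torch.randn(8) for _ in range(4)]
    print(preference_loss(*fake))

The optimization itself is just a supervised pass over a fixed preference dataset, which is why all of the interesting work moves into how the (chosen, rejected) pairs are curated or generated.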
> The absolute best way of doing this these days is likely through a vision-based machine learning model, but that is an approach that is very far away from scaling to processing hundreds of gigabytes of PDF files off a single server with no GPU.
I gave the demo a spin and it’s pretty nice! One thing I noticed is that the avatar doesn’t seem to be aware of its surroundings; for example, I asked it why it was wearing a cowboy hat and it was adamant that it wasn’t wearing a hat at all :)
> I expect language models to also get crazy good at mathematical theorem proving
Indeed, systems like AlphaProof / AlphaGeometry are already able to win a silver medal at the IMO, and the former relies on Lean for theorem verification [1]. On the open source side, I really like the ideas in LeanDojo [2], which use a form of RAG to assist the LLM with premise selection.
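For anyone who hasn’t seen what that verification looks like, here is a toy, self-contained Lean 4 example (assuming a recent toolchain with the built-in omega tactic; it is just an illustration, not code from AlphaProof or LeanDojo):

    -- Toy Lean 4 theorem: the sum of two even numbers is even.
    -- The kernel either accepts the proof or rejects it, with no partial credit,
    -- which is exactly the clean verification signal these systems rely on.
    theorem even_add_even (m n : Nat)
        (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
        ∃ k, m + n = 2 * k := by
      cases hm with
      | intro a ha =>
        cases hn with
        | intro b hb =>
          exact ⟨a + b, by omega⟩  -- omega closes the linear-arithmetic goal m + n = 2 * (a + b)

Premise selection is about retrieving the right existing lemmas from a large library before each proof step, which is where the RAG-style retrieval in LeanDojo comes in.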