Goodness, I forgot about TOEFL. That might indeed shape a lot of your early vocabulary choices if you need to get an English certificate (which I suppose would happen during college years, which is also when most of your personal writing style gels together).
And have that be a shell script that starts whatever you need. You'll probably want fsck in there, mount -a, some syslogd, perhaps dbus, some dhcp client, whatever else you need, and finally a getty, which you'll probably want to respawn after it exits. That's usually the job of init, so you could well end your rc with exec /sbin/init.
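A minimal rc along those lines might look like the sketch below. This is illustrative only, not a drop-in boot script: daemon names, flags, and the interface name are assumptions that vary by system, and fsck/mount behavior depends on your fstab.

```shell
#!/bin/sh
# Sketch of a minimal /etc/rc: check and mount filesystems,
# start basic daemons, then hand control back to init.

# Check filesystems (root is still read-only at this point),
# then remount root read-write and mount everything in fstab.
fsck -p
mount -o remount,rw /
mount -a

# Basic services: logging, IPC bus, DHCP.
# Daemon names and the "eth0" interface are placeholders.
syslogd
dbus-daemon --system
dhcpcd eth0

# A getty should be respawned whenever it exits; that's init's job,
# so rather than spawning getty here, replace this script with init
# (exec means this shell is gone, init becomes the running process).
exec /sbin/init
```

The `exec` at the end matters: without it, the shell running the rc script would linger as the parent, and init wouldn't be in a position to supervise and respawn the getty.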
OS kernels? Everything from numpy to CUDA to NCCL uses C/C++ for the behind-the-scenes heavy lifting, never mind classic systems software like web browsers, web servers, and networking control planes (the list goes on).
Newer web servers have already moved away from C/C++.
Web browsers have been written in restricted subsets of C/C++ with significant additional tooling for decades at this point, and are already beginning to move to Rust.
For Chrome, I don't know if anyone has compiled the stats, but navigating from https://chromium.googlesource.com/chromium/src/+/refs/heads/... I can see a bunch of vendored crates, so there's some use, which makes sense since in 2023 they announced that they would support it.
Not in the sense that people who are advocating writing new code in C/C++ generally mean. If someone is advocating following the same development process as Chrome does, then that's a defensible position. But if someone is advocating developing in C/C++ without any feature restrictions or additional tooling and arguing "it's fine because Chrome uses C/C++", no, it isn't.
I've always wondered: if you have fuck-you money, wouldn't it be possible to build GPUs to do LLM matmul with 2008 technology? Again, assuming energy costs / cooling costs don't matter.
Building clean rooms at this scale is a limitation in itself. Just getting the factory set up and the machines installed so they don't generate particulate matter in operation is an art that rivals making the chips themselves in difficulty.
Energy, cooling, and how much of the building you're taking up do matter. They matter less, and in a more manageable way, for hyperscalers with long-established resource management practices across lots of big data centers, because they can phase in new technologies as they phase out the old. But it's a lot more daunting to think about building a data center big enough to compete with one full of Blackwell systems that are more than 10 times more performant per watt and per square foot.
> The fundamentals actually haven't changed that much in the last 3 years
Even said fundamentals don't have much in the way of foundations. It's just brute-forcing your way with an O(n^3) algorithm, a lot of data, and a lot of compute.
Brute force!? Language modeling is a factorial time and memory problem. Someone comes up with a successful method that’s quadratic in the input sequence length and you’re complaining…?
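To make the quadratic claim concrete, here's a toy sketch of single-head scaled dot-product attention in plain NumPy (not any particular framework's API): for a sequence of length n, the intermediate score matrix is n × n, which is where the quadratic time and memory in sequence length comes from.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over a length-n sequence of d-dim vectors."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)                  # (n, n): the quadratic part
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: rows sum to 1
    return weights @ v                             # (n, d)

rng = np.random.default_rng(0)
n, d = 512, 64
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = attention(q, k, v)
print(out.shape)  # (512, 64); the score matrix in between was (512, 512)
```

Doubling n quadruples the size of `scores`, while the output stays linear in n; that asymmetry is exactly what sub-quadratic attention variants try to attack.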
I think the current grads are going to be shafted either way. In 5 years there might be more openings for "fresh" young grads, and companies will prefer them over the people who are graduating right now.