I'm going to assert that Tesla's FSD™ does not, in fact, work on city streets and highways.
Or, if you want to loosely define "work", Ernst Dickmanns had self-driving in the 80s, and put it on the autobahn in the 90s. I'd rather define it more tightly as "statistically at least as safe to be in _and_ to be near, as a human driver".
Tesla claims to have achieved that, but I don't believe them. That's because the data they report 1) omits a fair bit of critical info, and 2) frequently changes definitions. Both serve to make comparisons difficult. If it was clearly safe, I think they'd put effort into making the comparison transparent.
Bear in mind that Musk has been claiming "Full Self-Driving" since at least 2016, and people involved have asserted that he wasn't wrong, he was lying.
I think WebAssembly could become that sandboxed solution. .NET Blazor WASM is moving away from Mono to CoreCLR (just like Unity, with an early preview in late 2026). WASM now has really good SIMD support, native GC, large-memory support, and with WebGPU we could finally see WASM games running at near-native speeds.
It's still not a JS-level/JS-compatible GC (yet?), and it's still quite low level (more about corralling buffers of bytes than objects, closer to OS-level page management than to a JS- or C#-level GC), since it's intended to sit lower than most languages need so that different languages can build different things on top of it. It's also a small stepping stone toward better memory sharing with JS APIs (and the eventual goal of WASM "direct DOM"), but that work isn't finished yet; more steps remain.
Simply not true; this info is outdated by a decade.
CoreCLR NativeAOT is already shipping real games on Nintendo, PS5, and Xbox.
JIT isn't allowed on iPhones either, and this is what NativeAOT solves. Also, .NET is moving WASM support from Mono to CoreCLR in an upcoming version as well.
Do you have examples? As far as I'm aware, there's at least one current console vendor that requires all native code to be generated by their SDK.
It was common in the early 2000s for big companies to have large internal IT teams to build "line of business" apps. Then SaaS came along and delivered LoB apps for a fraction of the price and with a monthly subscription.
Looks like we're headed back to the internal IT days of building customized LoB apps.
Or perhaps there will arise a new kind of external service provider that delivers customized SaaS services to those same users, using AI. There's no reason the work has to go back to the internal IT people who were fired long ago.
I think you're basing AI only on modern 2025 LLMs.
If there is an order-of-magnitude increase in compute (TPUs, NPUs, etc.) over the next 3-5 years, then even marginal increases in LLM usability will take white-collar jobs.
If there is an exponential increase in power (fusion) and compute (quantum), combined with improvements in robotics, then you're in the territory where humans can be entirely replaced in all industries (blue collar, white collar, doctors, lawyers, etc.).
So, doing a bit of research from a link in one of the other comments: this is LCoS, levelized cost of storage. I understand that to be roughly equivalent to the marginal cost of using it, including the capex divided over the unit volume. That same article uses $125/kWh as the capex, which is in line with your (and my) expectations of the cost to install.
$65/MWh works out to $0.065/kWh, so that makes sense. Effectively you can read this as "it costs $65/MWh to store and then consume electricity using these batteries".
You’re right; upon further review, you can get budget LiFePO4 batteries shipped to your door from Amazon for as low as $75/kWh, which includes cables, a BMS, and Bluetooth connectivity. So $65/kWh seems fairly reasonable for raw battery capacity in very large quantities.
But now it’s time to better understand why a Powerwall and other wall-mounted units are so much more expensive. I understand that UL listing, marketing, warranty, and other costs are thrown in, but it’s $75/kWh versus $1000/kWh, a 13x difference.
Even at a $100/kWh price point, all homeowners would want 10-20kWh of batteries just to help peak-shave the grid and save a ton of money, since battery storage would be a fraction of the cost of grid power.
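The arithmetic above can be checked with a quick back-of-the-envelope script (the dollar figures are this thread's own rough numbers, not authoritative pricing):

```python
# Rough numbers from the comments above; all are assumptions, not quotes
# from any vendor's price list.
lcos_per_mwh = 65.0                    # $/MWh, levelized cost of storage
lcos_per_kwh = lcos_per_mwh / 1000.0   # converts to $/kWh (1 MWh = 1000 kWh)

budget_capex = 75.0       # $/kWh, budget LiFePO4 pack shipped from Amazon
powerwall_capex = 1000.0  # $/kWh, rough figure for a wall-mounted unit
markup = powerwall_capex / budget_capex  # ratio between the two capex figures

print(f"LCoS: ${lcos_per_kwh:.3f}/kWh")   # $0.065/kWh
print(f"markup: {markup:.1f}x")           # ~13.3x
```

This matches the claim that $65/MWh reads as about 6.5 cents per kWh stored and consumed, and that the wall-mounted premium is roughly 13x over budget cells.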
To be fair, a single server is way more reliable than cloud clusters.
Just look at the most recent many-hour-long Azure outage, where Microsoft couldn't even get microsoft.com back up. With that much downtime you could physically move drives between servers multiple times a year and still come out ahead. Servers are very reliable; cloud software is not.
I'm not saying people should use a single server if they can avoid it, but using a single cloud provider is just as bad. "We moved to the cloud, with managed services and redundancy, nothing has gone wrong...today"
Lol yep that could've been the headline. We plan to add replica servers at some point. This DB is not critical to our product hence the relaxed interim setup.
There's currently no driver-assistance system other than Tesla's FSD available in the US that works on city streets and highways.