When the dot-com bubble crashed it was similar: compute use didn't shrink, far from it; it's just that capex crashed to the ground and took 90% of the Nasdaq's value with it for a few years.
Stop comparing to dotcom. There’s a big difference between the internet then and the internet now. Ignoring the astounding nature of the AI tech (dotcom isn’t even in the same ballpark), you also have to consider these two things:
1) We didn’t make the world digital native in that era
2) We didn’t connect the whole world to the internet
They won't have any choice soon enough: all that wishy-washy "they are not our enemy" bullshit goes out the window with the first missile/shell/drone flying over.
If you remember that Putin is a spy by training, and a damn good one at that, you must also consider that spies really don't want to change things that are working to their advantage. Right now he knows very well which levers to pull to make things happen the way he wants. He won't change that.
Anything. Everything. In domains where the search space is small enough to physically enumerate and store or evaluate every option, search is commonly understood as a process solved by simple algorithms. In domains where the search space is too large to physically realize or index, search becomes "intelligence."
E.g. winning at Chess or Go (traditional AI domains) is searching through the space of possible game states to find a most-likely-to-win path.
E.g. an LLM chat application is searching through possible responses to find one which best correlates with the expected answer to the prompt.
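To make that framing concrete, here is a toy exhaustive minimax search in Python over a deliberately tiny game (Nim: take 1 or 2 stones, taking the last stone wins; the game is my choice purely because its full state space fits in a few lines). Chess and Go engines are doing the same kind of state-space search in principle, just with pruning and heuristics layered on because the space can't be enumerated.

    # Exhaustive game-tree search over a toy game: Nim with 1-or-2-stone moves,
    # where taking the last stone wins. Values are from the maximizer's view.
    def minimax(stones, maximizing):
        if stones == 0:
            # The player who just moved took the last stone and won the game.
            return -1 if maximizing else +1
        moves = [m for m in (1, 2) if m <= stones]
        values = [minimax(stones - m, not maximizing) for m in moves]
        return max(values) if maximizing else min(values)

    for n in range(1, 8):
        outcome = "win" if minimax(n, True) == 1 else "loss"
        print(f"{n} stones: forced {outcome} for the player to move")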
With Grover's algorithm, quantum computers let you find an answer in any unstructured search space with O(sqrt(N)) operations instead of O(N). That's potentially applicable to many AI domains.
But if you're so narrow minded as to only consider connectionist / neural network algorithms as "AI", then you may be interested to know that quantum linear algebra is a thing too: https://en.wikipedia.org/wiki/HHL_algorithm
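For intuition about where the sqrt(N) comes from, here is a toy classical simulation of Grover's amplitude bookkeeping in plain Python (no quantum SDK involved; N and the marked index are arbitrary choices of mine):

    import math

    N = 1024                                # size of the unstructured search space
    marked = 437                            # the item the oracle recognizes

    amps = [1 / math.sqrt(N)] * N           # start in a uniform superposition
    iterations = round(math.pi / 4 * math.sqrt(N))

    for _ in range(iterations):
        amps[marked] = -amps[marked]        # oracle: flip the marked amplitude
        mean = sum(amps) / N                # diffusion: reflect about the mean
        amps = [2 * mean - a for a in amps]

    print(f"{iterations} iterations (~pi/4 * sqrt(N)) vs ~{N // 2} classical lookups on average")
    print(f"P(measuring the marked item) = {amps[marked] ** 2:.3f}")

After only ~25 iterations the marked item's measurement probability is essentially 1, which is the quadratic speedup being discussed.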
Grover's algorithm is useful for very few things in practice, because for most problems we already have techniques better than checking sqrt(N) of all possible solutions, at least heuristically.
There is, at present, no quantum algorithm which looks like it would beat the state of the art on Chess, Go, or NP-complete problems in general.
There are about 2^152 possible legal chess states. You cannot build a classical computer large enough to compute that many states. Cryptography is generally considered secure when it involves a search space of only 2^100 states.
But you could build a computer to search through sqrt(2^152) = 2^76 states. I mean it'd be big--that's on the order of total global storage capacity. But not "bigger than the universe" big.
Doing 2^76 iterations is huge. That's a trillion operations a second for two and a half thousand years if I've not slipped up and missed a power of ten.
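Checking that arithmetic (a trillion ops/sec is itself a generous assumption):

    ops = 2 ** 76                           # Grover iterations needed
    rate = 10 ** 12                         # one trillion operations per second
    seconds_per_year = 365.25 * 24 * 3600
    print(f"2^76 = {ops:.2e} operations")
    print(f"time = {ops / rate / seconds_per_year:,.0f} years")
    # -> roughly 2,400 years, so "two and a half thousand years" holds up.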
Maybe 100 years from now a quantum computer doing 2^60 Grover iterations/sec could get through those 2^76 iterations in about a day, whereas a classical computer doing even 2^64 ops/sec would still take far longer than the lifetime of the universe to enumerate all 2^152 states.
Google's SHA-1 collision took 2^63.1 hash operations to find. Given that a single hash operation takes more than 1000 cycles, that's roughly 2^73 cycles, which is less than three doublings away from 2^76.
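Spelling out that estimate (the ~1000 cycles per hash is the figure above, taken at face value):

    import math
    cycles = 2 ** 63.1 * 1000               # collision effort in CPU cycles
    print(f"log2(cycles) ~ {math.log2(cycles):.1f}")               # ~73.1
    print(f"doublings to reach 2^76 ~ {76 - math.log2(cycles):.1f}")  # < 3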
Cryptographers worry about big numbers. 2^80 is not considered secure.
It's early so I'm thinking out loud here but I don't think the algorithm scales like this, does it?
We're talking about something that can search a list of size N in sqrt(N) iterations. Splitting the problem in two doesn't halve the compute required for each half. If you had to search 100 items on one machine it'd take ~10 iterations, but split over two machines it'd take ~7 on each, or ~14 in total.
If an algorithm has a complexity class of O(sqrt(N)), by definition it means that it can do better running on all 100 elements than by splitting the list into two halves of 50 elements and running on each half.
This is not at all a surprising property. The same thing happens with binary search: it has complexity O(log(N)), which means that running it on a list of size 1024 will take about 10 operations, but running it in parallel on two lists of size 512 will take 2 * 9 operations = 18.
This is actually easy to intuit when it comes to search problems: the element you're looking for is either in the first half of the list or in the second half, it can't be in both. So, if you are searching for it in parallel in both halves, you'll have to do extra work that just wasn't necessary (unless your algorithm is to look at every element in order, in which case it's the same).
In the case of binary search, with the very first comparison you can already tell in which half of the list your element is: searching the other half is pointless. In the case of Grover's algorithm, the mechanism is much more complex, but the basic point is similar: Grover's algorithm has a way to just not look at certain elements of the list, so splitting the list in half creates more work overall.
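Here is the binary-search version of that with the probes counted explicitly (the exact counts shift by one or two depending on which element you look for, but the point is that two half-sized searches cost roughly double one full search, not half):

    def binary_search(items, target):
        """Plain binary search that also reports how many probes it made."""
        lo, hi, probes = 0, len(items) - 1, 0
        while lo <= hi:
            mid = (lo + hi) // 2
            probes += 1
            if items[mid] == target:
                return True, probes
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return False, probes

    whole = list(range(1024))
    first, second = whole[:512], whole[512:]
    target = 1023                            # happens to live in the second half

    _, full = binary_search(whole, target)
    _, miss = binary_search(first, target)   # wasted work: wrong half
    _, hit = binary_search(second, target)

    print(f"one list of 1024:      {full} probes")
    print(f"two lists of 512 each: {miss} + {hit} = {miss + hit} probes")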
That only helps for a relatively small range of N. Chess happens to sort of fit into this window. Go is way out: there are roughly 2 * 10^170 (~2^566) legal positions, so even sqrt(N) is still in the "galaxy-sized computer" range. So again, there are few problems for which Grover's algorithm really takes us from practically uncomputable to computable.
Even for chess, 2^76 operations is still waaaaay more time than anyone will ever wait for a computation to finish, even if we assumed quantum computers could reach the OPS of today's best classical computers.
No one would solve chess by checking every possible legal chess state -- also, checking "all the states" wouldn't solve chess; you need sequences of moves, which pushes you up to an even bigger number. But then again you can prune massively, since many moves are forced, or you can detect that you're in a provably decided endgame position.
Training an AI model is essentially searching for parameters that make a function really accurate at making predictions. In the case of LLMs, they predict text.
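A minimal sketch of that framing: random search over the two parameters of a toy linear model, keeping whichever candidate predicts the data best. Real training uses gradient descent rather than blind sampling, but the "search through parameter space for a good predictor" shape is the same.

    import random

    data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]    # samples of y = 3x + 1

    def loss(w, b):
        """Mean squared prediction error of the candidate parameters."""
        return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

    random.seed(0)
    best = (0.0, 0.0)
    for _ in range(10_000):
        candidate = (random.uniform(-10, 10), random.uniform(-10, 10))
        if loss(*candidate) < loss(*best):
            best = candidate

    print(f"found w={best[0]:.2f}, b={best[1]:.2f}, loss={loss(*best):.4f}")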
I'm still waiting for a computer that can make my morning coffee. Until it's there I don't really believe in this whole "computer" or "internet" thing, it's all a giant scam that has no real-world benefit.
Honestly the whole Java/Kotlin tooling was the worst thing to pick for mobile dev, and to KEEP when so many other great languages and tools are out there. I don't know why Google didn't offer at least Go as a native alternative for Android dev.
Because adding Go as an official language would be a monumental amount of work for vanishingly little benefit. Remember Android itself is half written in Java, so are you doing JNI calls for everything from Go? That's neither fast nor "native". Are you rewriting the framework in Go? That's a crazy amount of effort.
And all for what? To satisfy a hobby itch that has no practical benefit? The problem of app developers is almost never the language. Hell, look at how popular web apps are even though JavaScript is the worst language in any sort of widespread usage. The platform is what gets people excited (or frustrated), not a language.
Because Go only exists due to a trio that doesn't like C++, fought against Java 20 years ago and lost (Inferno and Limbo), and built a language that is pretty much the antithesis of the feature-rich capabilities of Java, Kotlin and C++, the official Android languages.
They hit the jackpot with Kubernetes and Docker taking off after their pivot to Go.
That'd require some form of collaborative behavior across internal organizations, and real planning! </s> </disgruntled_xoogler>
I wake up every morning and thank God that Flutter exists. I can target Android without dealing with building on years and years of sloppy work.
Sunk-cost fallacy x politics leads to this never being fixed. I don't think Google can fix it, unless hardware fails entirely. There's been years-long politics that culminated in the hardware org swallowing all the software orgs, and each step along the way involved killing off anything different.
Can they eventually throw away Android and replace it with Fuchsia? In the reporting about Fuchsia that I read ages ago, it sounded like it was intended to be an Android replacement but, looking into it again just now, it seems more like an embedded OS for other non-smartphone hardware -- maybe with some (aspirational?) claims of utility on smartphones and tablets.
Fuchsia has components for running APKs (Android Runner) and Linux binaries (Starnix), but that probably isn't what you meant.
The problem with replacing a UI toolkit - any toolkit - is that any change to the toolkit requires modifying all software, including third-party software. Typically, when an OS wants to provide a new toolkit, it wraps the existing toolkit in new code. For example, on macOS, UIKit wraps AppKit, and across Apple platforms SwiftUI is a wrapper around AppKit or UIKit (depending on platform). On Windows, every UI toolkit ultimately creates "windows" as they are understood by USER[0], which creates corresponding objects in CSRSS and/or the NT kernel, which can then be used to draw on or attach to a GPU. The lowest-level UI abstraction either OS provides is the set of objects supported by its oldest toolkit, and the lowest-level programming language you can write apps in is whatever can call it.
Linux is a bit different, because it inherits its windowing model from X11. X shipped with no default toolkit and a stable window-server protocol that apps could program against directly, in an era when most GUI OSes[1] didn't have "servers" or "protocols". You populated resource files and made the relevant function calls to make things happen, and those function calls became sacrosanct. Even Windows NT couldn't escape this; it still used USER despite USER being years older than NT.
The best you can do is shim the library: write something lower level than the old junk and then rewrite the old library in terms of the new one. This is what Xwayland does to make X apps work on Wayland, and it's what Apple did (mostly) with Carbon to give Mac OS 8/9 apps a transition path on OS X. Google could, say, ship a new Android toolkit that doesn't use Java bindings, and then make Android's Java toolkit a shim over the new native toolkit. However, this still means you have to keep the shim around forever, unless you want to start having flag dates and cut-offs. For context, Apple didn't kill Carbon until macOS 10.15 Catalina, and if they hadn't refused to ship Carbon on 64-bit Intel, it would probably still be in macOS today. (A toy sketch of this shim pattern follows the footnotes.)
[0] An interesting consequence of this is that disabling "legacy input" in games turns off the ability to move the application window since all that code is intimately coupled to every app that has to open a top-level (i.e. not a widget) window.
[1] At the time that would be XEROX Star, the Lisa, and the Macintosh
[2] This is also why Apple will never, ever ship an iPad that can run macOS software in any capacity. Even if they were forced to allow root access and everything else macOS can do. The entire point of the iPad is to force software developers to rewrite their apps for touch, and I suspect their original intent was for the Macintosh to go away like the Apple ][ did.
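To make the shim idea above concrete, here is a toy sketch in Python. Every name in it is invented; it has nothing to do with Android's, Carbon's, or Wayland's actual APIs, it's just the shape of "keep the old entry points, reimplement them on top of a new lower-level layer".

    class NewCompositor:
        """The new, lower-level layer that new apps are meant to target."""
        def create_surface(self, width, height):
            return {"w": width, "h": height, "ops": []}

        def draw_text(self, surface, text):
            surface["ops"].append(("text", text))

    _compositor = NewCompositor()

    # The legacy toolkit's public API is kept so unmodified old apps still run,
    # but every call is now just forwarded to the new layer underneath.
    def legacy_create_window(title, width, height):
        surface = _compositor.create_surface(width, height)
        _compositor.draw_text(surface, title)   # old windows drew their own title bar
        return surface

    # An "old" app keeps calling the legacy function and never notices the swap.
    win = legacy_create_window("Hello", 640, 480)
    print(win["ops"])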
Correct; tl;dr roadkill. I don't mean to be disrespectful, someone anonymous once picked a bitter fight about this. But to your point, it's clear there was a larger context that Fuchsia was born from, and the ambitions and commitment to it are very different now than they were at some earlier juncture.