- This does not seem unexpected. Google is panicked about losing the AI race, and pushing resources into DeepMind is a logical step toward mitigating those fears.
- Google has given ~$300M to Anthropic and has a partnership with them. I assume Google continues to see potential in both avenues and won't neglect one AI team for the other. I'm guessing that DeepMind will be their primary focus because of the numerous real-world applications already at play.
- It's tough for me to compare Google DeepMind to OpenAI's GPT-4. They seem to be very different approaches. Yet they both have support for language and imagery. So perhaps they aren't that different after all?
- Still waiting to hear more from Google on how they plan to leverage their novel PaLM architecture. The API for it was released a month ago, but, to my awareness, has yet to take the world by storm. (Q: Bard isn't powered by PaLM, right?)
Overall, I am not convinced this will be massively beneficial. I don't trust Google's ability to execute at scale in this area. I trust DeepMind's team and I trust Google's research teams, but Google's ability to execute and take products to market has been quite weak thus far. My gut says this action will hamstring DeepMind in bureaucracy.
>> Overall, I am not convinced this will be massively beneficial. I don't trust Google's ability to execute at scale in this area.
Yes the team which literally created the transformer and almost all the important open research including BERT, T5, Imagen, RLHF, ViT don't have the ability to execute on AI /s. Tell me one innovation OpenAI brought into the field. They are good at execution but I haven't seen anything novel coming out of them.
> Yes the team which literally created the transformer and almost all the important open research including BERT, T5, Imagen, RLHF, ViT don't have the ability to execute on AI /s.
This, but non-sarcastically. Google has spectacularly, so far, failed to execute on products (even of the “selling shovels” kind, much less end-user products) for generative AI, despite both having lots of consumer products to which it is naturally adaptable and a lot of the fundamental research work in generative AI.
The best explanation is that they actually are, institutionally and structurally, bad at execution in this domain, because they have all the pieces and incentives that rule out most of the other potential explanations for that.
> OpenAI brought into the field. They are good at execution but I haven't seen anything novel coming out of them.
Right, OpenAI is good at execution (at least, when it comes to selling-shovels tools, I don’t see a lot of evidence beyond that yet), whereas Google is, to all current evidence, not good at execution in this space.
They're getting Innovator's Dilemma'd, the same way that Bell Labs, DEC, and Xerox did. When you have an exceptionally profitable monopoly, it biases every executive's decision-making toward caution. Things are good; you don't want to upset the golden goose by making any radical moves; and so when your researchers come out with something revolutionary and different you bury it, maybe let them publish a few papers, but certainly don't let it go to market.
Then somebody else reads the papers, decides to execute on it, and hires all the researchers who are frustrated at discovering all this cool stuff but never seeing it launch.
The typical solution to this (assuming there is one internally) is setting up a sub-company and keeping the team isolated from the parent company, aka "intrapreneurship", but also keeping them well resourced by the parent.
It seems like that's what they were doing with DeepMind for the last decade. But it's also possible DeepMind as an institution lacked the pressure/product sense/leadership to produce consumable products/services. Maybe their instincts were more centered around R&D and being isolated left them somewhat directionless?
So now that AI suddenly really matters as a business, not just some indefinite future potential, Google wants to bring them inside.
They could have created a 3rd entity, their own version of OpenAI, combining DeepMind with some Google management/teams and other acquisitions and spinning it off semi-independently. But this play basically has to be from Google itself for their own reputation's sake - maybe not for practicality's sake but politically/image-wise.
Yeah. It doesn't really work all that well. Xerox tried it with Xerox PARC, Digital with Western Digital, AT&T with Bell Labs, Yahoo with Yahoo Brickhouse, IBM with their PC division, Google with Google X & Alphabet & DeepMind, etc.
Being hungry and scrappy seems to be a necessary precondition for bringing innovative products to market. If you don't naturally come from hungry & scrappy conditions (eg. Gates, Zuckerberg, Bezos, PG), being in an environment where you're surrounded by hungry & scrappy people seems to be necessary.
For that matter, a number of extremely well-resourced startups (eg Color, Juicero, WebVan, Secret, Pets.com, Theranos, WeWork) have failed in spectacular ways. Being well-resourced seems to be an anti-success criterion even for independent companies.
That may have been true in the 70's and 80's. However, I worked for a 2000 person (startup) software company in the 90's that was acquired at 1.8B, another 4000 person (startup) software company in the 90's that was acquired at 3.4B, and then a few years ago, the acquirer of both was itself acquired for 18B.
I survived ALL the layoffs somehow. Boots on the ground agrees with "doesn't really work all that well", but the people collecting rents keep collecting. Given their size, all of these received significant DOJ reviews, though the only detail I remember is basketball-court-sized rooms filled with printed paper for the depositions. I'm sure they burned down the Amazon to print all that legalese, speaking of scaling problems.
edit: i take it all back! my memory is not as good as i thought it was re: software companies. i will leave up my sorry list as penance for my crappy recent tech history skills.
Thanks for the comment. Chortle. That's hilarious.
Indeed, you are right on: Legent, Platinum, CA, and Broadcom in order from little fish to big. CA was the second largest software company in the world behind Microsoft then.
The weird part you couldn't see from this telling is that I worked in the Legent office in Pittsburgh, moved to Boston post-CA acquisition and worked in the CA office in Andover. Resigned and went to Platinum in Burlington. Moved to Seattle. Second CA acquisition in 5 years. I should have quit while I was ahead. Moved back to Pittsburgh. Worked in the exact same office I'd worked in 5 years earlier with the same crew. Weird feeling is a mild understatement. I still know people who work for Broadcom now. I should reach out.
i used to read BYTE mag over in the UK in the early 90s before i moved to USA; CA was such a heavy hitter in the early 90s!! i guess it never really was the same in the post-Wang era(s).
The problem with the intrapreneurship idea is that it's really hard to beat desperation as a motivator. I have seen people behave very differently in the context of a startup vs a corporate research lab thanks to this dynamic. Some people thrive in the corporate R&D environment, but the innovator's dilemma eventually gets to their managers.
Cisco has done a great job balancing this, actually - they keep contact with engineers who leave to do startups, and then acquire their companies if they become successful enough to prove the product.
After a bunch of ex-Cisco people ate Cisco’s core router lunch at Juniper, Cisco vowed it would never happen again. Until a bunch of ex-Cisco people ate WebEx’s lunch at Zoom.
Getting a big seed round once makes you want that next round to keep going (and take even more money off the table).
Getting a X-million-per-year budget from a parent company gives you a very different sort of situation. IME this results in less urge to get something out the door and more urge to get "the best thing" built. Shipping early risks your budget in a way that "look at all this cool theoretical progress" doesn't, because the public and press can critique you more directly.
Lack of major owner equity basically means few intrapreneur efforts will succeed, unless the 'founder' really couldn't succeed without the daddy company.
> But it's also possible DeepMind as an institution lacked the pressure/product sense/leadership to produce consumable products/services. Maybe their instincts were more centered around R&D and being isolated left them somewhat directionless?
It seems like this is more a Google problem than a DeepMind problem though, no? Google created one of the most successful R&D labs for ML/AI research the world has ever known, then failed to have their other business units capitalize on that success. OpenAI observed this gap and swooped in to profit off all of their research outputs (with backing from Microsoft).
IMO what they’re doing here is doubling down on their mistakes: instead of disciplining their other business units for failing to take advantage of this research, they’re forcing their most productive research team to assume responsibility and correct for those failures. I expect this will go about as well as any other instance of subjecting a bunch of research scientists to internal political struggles and market discipline, i.e. very poorly.
They're also paying for their product managers' cancellation culture. (Sorry.) I'm seeing a lot of AI pitch decks; none suggest trusting Google. That saps not only network effects, but what I'll term earned research: work done by others on your product. Google pays for all its research and promotion. OpenAI does not.
Are researchers actually frustrated to never see it launch, or are they mostly focused on publishing papers?
I thought OpenAI’s unique advantage over many big tech companies is that they’ve somehow figured out how to fast track research into product, or have researchers much more willing to worry about “production”.
I’m puzzled that stuff like AlphaFold counts for nothing in this discussion (having just browsed through most of it).
I saw quotes from independent scientists referring to it as the greatest breakthrough of their lifetime, and I saw similarly strong language used in regard to the potential for good of AlphaFold as a product.
So they gave it away, but it is still a product they followed through on and continue to support.
Was it wrong of them to give it away, and right that Microsoft's primary intent with their OpenAI technology seems to be to provoke an arms race with Google?
AlphaFold is a game changer, but nowhere near the game changer ChatGPT(4) is, even if ChatGPT was only available for the subset of scientists that benefit from AlphaFold. We are literally arguing semantics if this is AGI, and you're comparing it to a bespoke ML model that solves a highly specific domain problem (as unsolvable and impressive as it was).
> We are literally arguing semantics if this is AGI,
And if it isn't? Literally every single argument I've seen towards this being AGI is "We don't know at all how intelligence works, so let's say that this is it!!!!!"
> nowhere near the game changer ChatGPT(4) is, even if ChatGPT was only available for the subset of scientists that benefit from AlphaFold
This is utter nonsense. For anyone who actually knows a field, ChatGPT generates unhelpful, plausible-looking nonsense. Conferences are putting up ChatGPT answers about their fields to laugh at because of how misleadingly wrong they are.
This is absolutely okay, because it can be a useful tool without being the singularity. I'm sure that in a couple of years' time, most of what ChatGPT achieves will be in line with most of the tech industry advances in the past decade - pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets.
I really wish people would stop projecting hopes and wishes on top of breathless marketing.
I asked GPT-4 to give me a POSIX compliant C port of dirbuster. It spit one out with instructions for compiling it.
I asked it to make it more aggressive at scanning and it updated it to be multi-threaded.
I asked it for a word list, and it gave me the git command to clone one from GitHub and the command to compile the program and run the output with the word list.
I then told it that the HTTP service I was scanning always returned 200 status=ok instead of a 404 and asked it for a patch file. It generated that and gave me the instructions for applying it to the program.
There was a bug I had to fix: word lists aren’t prefixed with /. Other than that one character fix, GPT-4 wrote a C program that used an open source word list to scan the HTTP service running on the television in my living room for routes, and found the /pong route.
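For anyone who hasn't tried this workflow, the sketch below gives a rough idea of the kind of program being described. To be clear, this is not the commenter's actual GPT-4 output (which isn't shown), just a minimal illustration of a multi-threaded wordlist scanner in C on libcurl and pthreads; the thread count, buffer sizes, and 404 heuristic are all assumptions:

    /* Minimal sketch of a multi-threaded HTTP route scanner in the spirit of
     * dirbuster. Build: cc scanner.c -o scanner -lcurl -lpthread */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <pthread.h>
    #include <curl/curl.h>

    #define MAX_WORDS 65536
    #define NTHREADS  8            /* the "more aggressive" knob */

    static char *words[MAX_WORDS];
    static int nwords, next_word;  /* shared cursor into the wordlist */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static const char *base_url;   /* e.g. "http://192.168.1.50:8080" */

    /* Discard response bodies; only status codes matter here. */
    static size_t sink(char *p, size_t sz, size_t n, void *u) {
        (void)p; (void)u;
        return sz * n;
    }

    static void *worker(void *arg) {
        (void)arg;
        CURL *curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, sink);
        for (;;) {
            pthread_mutex_lock(&lock);
            int i = next_word++;
            pthread_mutex_unlock(&lock);
            if (i >= nwords) break;

            char url[1024];
            /* The one-character fix from above: wordlist entries carry no
             * leading '/', so the join has to supply it. */
            snprintf(url, sizeof url, "%s/%s", base_url, words[i]);
            curl_easy_setopt(curl, CURLOPT_URL, url);
            if (curl_easy_perform(curl) == CURLE_OK) {
                long status = 0;
                curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
                if (status != 404)   /* naive; a server that always says 200,
                                        like the TV above, needs a patch */
                    printf("%3ld  /%s\n", status, words[i]);
            }
        }
        curl_easy_cleanup(curl);
        return NULL;
    }

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <base-url> <wordlist>\n", argv[0]);
            return 1;
        }
        base_url = argv[1];
        FILE *f = fopen(argv[2], "r");
        if (!f) { perror("wordlist"); return 1; }
        char line[512];
        while (nwords < MAX_WORDS && fgets(line, sizeof line, f)) {
            line[strcspn(line, "\r\n")] = '\0';
            if (*line) words[nwords++] = strdup(line);
        }
        fclose(f);

        curl_global_init(CURL_GLOBAL_DEFAULT);
        pthread_t t[NTHREADS];
        for (int i = 0; i < NTHREADS; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++) pthread_join(t[i], NULL);
        curl_global_cleanup();
        return 0;
    }

Note that the snprintf join is exactly where the one-character "/" fix mentioned above lives.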
This week it’s written 100% of the API code that takes a CRUD based REST API and maps it to and from SQL queries for me on a cloudflare worker. I give it the method signature and the problem statement, it gives me the code, and I copy and paste.
If you’re laughing this thing off as generating unhelpful nonsense, you’re going to get blindsided in the next few years as GPT gets wired into the workflows at every layer of your stack.
> pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets.
I’m in a BNI group and a majority of these blue collar workers have very little to worry about with GPT right now. Until Boston Dynamics gets its stuff together and the robots can do drywalling and plumbing, I’m not sure I agree with your take. This isn’t coming for the “poorest” among us. This is coming for the middle class. From brand consultants and accountants to software engineers and advertisers.
Software engineers with GPT are about to replace software engineers without GPT. Accountants with GPT are about to replace accountants without GPT.
> Literally every single argument I've seen towards this being AGI is
Here is one: it can simultaneously pass the bar exam, port dirbuster to POSIX compliant C, give me a list of competing brands for conducting a market analysis, get into deep philosophical debates, and help me file my taxes.
It can do all of this simultaneously. I can't find a human capable of the simultaneous breadth and depth of intelligence that ChatGPT exhibits. You can find someone in the upper 90th percentile of any profession and show that they can outcompete GPT-4. But you can't take that same person and ask them to outcompete someone in the bottom 50th percentile of 4 other fields with much success.
Artificial = machine, check.
Intelligence = exhibits Nth percentile intelligence in a single field, check
General = exhibits Nth percentile intelligence in more than one field, check
Maybe it's heavily biased towards programming and computing questions? I've tested GPT-4 on numerous physics problems and it fails spectacularly at almost all of them. It starts to hallucinate egregious stuff that's completely false, misrepresents articles it tries to quote as references etc. It's impressive as a glorified search engine in those cases but can't at all be trusted to explain most things unless they're the most canonical curriculum questions.
This extreme difficulty in discerning what it hallucinates and what is "true" is its most obvious problem. I guess it can be fixed somehow, but right now it has to be heavily fact-checked manually.
It does this for computing questions as well, but there is some selection bias, so people tend to post the success stories and not the fails. However, it's less dangerous in computing, as you'll notice it immediately, so it maybe requires less manual labour to keep in check.
Hahaha, if you want nit-picking, all the language tasks chatGPT is good at are strictly human tasks. Not general tasks. Human tasks are all related to keeping humans alive and making more of us, they don't span the whole spectrum of possible tasks where intelligence could exist.
Of course inside language tasks it is as general as can be, yet still needs to be placed inside a more complex system with tools to improve accuracy, LLM alone is like brain alone - not that great at everything.
On the other hand, if you browse around the web you will find various implementations of dirbuster, probably in C and certainly in C++, which are multi-threaded. It's not to take away from your experience, but without knowing what's in the training set, it may have already been exposed to what you asked for, even several times over.
I have a feeling they had access to a lot of code on GH, who knows how much code they actually accessed. Copilot for a long time said it would use your code as training data, including context, if you didn’t opt out explicitly, so that’s already millions maybe hundreds of millions of lines of code scraped.
The conspiracy theorist in me wonders if MS didn't just provide access to public and private code to train on. They wouldn't have even told OpenAI, just said, "here's some nice data"; it's all secret and we can't see the model's inputs, so I'll leave it at that. I mean, they've obviously prepared the data for Copilot, so it was there waiting to be trained on.
So yeah, I feel your enthusiasm, but if you think about it a little more, maybe it's not so hard to imagine that what you saw was actually rather simple? Every time I write code I feel kind of depressed, because I know almost certainly someone has already written the same thing and that it's sitting on GitHub or somewhere else, and I'm wasting my time.
ChatGPT just takes away the work of knowing where to find something (it's already seen almost everything the average person can think of) and gives it to you directly. Have you never thought of this already? Like, you knew all the code you wanted was already there somewhere, but you just didn't have an interface to get to it? I've thought about this for quite a while, and I knew there would be big-data people doing experiments who could see that probably 80-90% of code on GitHub is pretty much identical.
> If you’re laughing this thing off as generating unhelpful nonsense you’re going to get blind sided in the next few years as GPT gets wired into the workflows at every layer of your stack.
Okay, now try being a scientist in a scientific field that isn't basic coding.
It's not people laughing at pretences, it's people who know even basic facts about their field literally looking at the output today and finding it deeply, fundamentally incorrect.
I do not believe that is a reasonable threshold for AGI. If it were, I believe a significant % of humans would individually fail to meet the threshold of AGI.
I wonder what your personal success rate would be if we did a Turing test with the “people” who “know basic facts about their field.” If they sat at a computer and asked you all these questions, would you get them right? Or would you end up in slide decks being held up as a reason why you don't qualify as AGI?
I find comfort in knowing that it can’t “do science.” There is a massive amount of stuff it can do. I’m hopeful there will be stuff left for humans.
Maybe we’ll all be scientists in 10 years and I won’t have to waste my life on all this “basic coding” stuff.
Absolutely not! I created a PowerShell script for converting one ASM label format to another for retro game development, and I used ChatGPT to write it. Now, it fumbled some of the basic program logic; however, it absolutely nailed all of the specific regex and obtuse PowerShell commands that I needed and that I merely described to it in plain English.
It essentially aced the "hard parts" of the script, and I was able to take what it generated and make it fit my needs perfectly with some minor tweaking. The end result was far cleaner and far beyond what I would have been able to write myself, all in a fraction of the time. This ain't no breathless marketing, dude: this thing is the real deal.
ChatGPT is an extremely powerful tool and an absolute game changer for development. Just because it is imperfect and needs a bit of hand holding (which it may not soon), do not underestimate it, and do not discount the idea that it may become an absolute industry disrupter in the painfully near future. I'm excited ...and scared
It does generate unhelpful nonsense, quite often. It's not all it does, as you describe - but it does do it.
For example, I asked it what my most cited paper is, and it made up a plausible-sounding but non-existent paper, along with fabricated Google Scholar citation counts. Totally unhelpful.
Right, I think it's a question of how to use this tool in its current state, including prompting practice and learning its strengths. It can certainly be wrong sometimes, but man, it is already a game changer for writing, coding, and I'm sure other disciplines.
If you're a robotresearcher, maybe try getting it to whip up some ...verilog circuits or something? I don't know much about your field or what you do specifically, but tasks like regular expressions or specific code syntax it is absolutely brilliant at, whatever the equivalent to that is in hardware. ...I've only ever replaced capacitors and wired some guitar pickups.
> it made up a plausible-sounding but non-existent paper, along with fabricated Google Scholar citation counts
I ran into a similar issue: I asked it for codebases of romhacks similar to a project I'm doing, and it provided made-up GitHub repos with completely unrelated authors for romhacks that do actually exist: non-existent hyperlinks and everything.
Now, studying the difference across GPT generations, it seems like more horsepower and more data solve a lot of GPT problems and produce emergent capabilities with the same or similar architecture and code. The current data points to this trend continuing. I find it both super exciting and super ...concerning.
This seems like the perfect test, because it's something that does have information on the internet - but not infinite information, and you know precisely what is wrong about the answer.
> I'm sure that in a couple of years' time, most of what ChatGPT achieves will be in line with most of the tech industry advances in the past decade - pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets
This is not what any of the US economic stats have looked like in the last decade.
Especially since 2019, the poorest Americans are the only people whose incomes have gone up!
I use ChatGPT daily to generate code in multiple languages. Not only does it generate complex code, but it can explain it and improve it when prompted to do so. It's mind blowing.
FWIW, as a non-pathologist with a pathologist for a father, I can almost pass the pathology boards when taken as a test in isolation. Most of these tests are very easy for professionals in their fields, and are just a Jacksonian barrier to entry. Being allowed to sit for the test is the hard part, not the test itself.
As far as I know, the exception to this is the bar exam, which GPT-4 can also pass, but that exam plays into GPT-4's strengths much more than other professional exams.
What is a Jacksonian barrier to entry? I can't find the phrase "Jacksonian barrier" anywhere else on the internet except in one journal article that talks about barriers against women's participation in the public sphere in Columbia County NY during Andrew Jackson's presidency.
I may have gotten the president wrong (I was 95% sure it's named after Jackson until I Googled it), but the word "Jacksonian" was meant to refer to the addition of bureaucracy to a process to make it cost more to do it, and thus discourage people. I guess I should have said "red tape" instead...
Either it's a really obscure usage of the word or I got the president wrong.
"It's difficult to attribute the addition of bureaucracy or increased costs to a specific U.S. president, as many presidents have overseen the growth of the federal government and its bureaucracy throughout American history. However, it is worth mentioning that Lyndon B. Johnson's administration, during the 1960s, saw a significant expansion of the federal government and the creation of many new agencies and programs as part of his "Great Society" initiative. This expansion led to increased bureaucracy, which some argue made certain processes more expensive and inefficient. But it's important to note that the intentions of these initiatives were to address issues such as poverty, education, and civil rights, rather than to intentionally make processes more costly or discourage people.
Exams are designed to be challenging to humans because most of us don’t have photographic memories or RAM based memory, so passing the test is a good predictor of knowing your stuff, i.e. deep comprehension.
Making GPT sit it is like getting someone with no knowledge but a computer full of past questions and answers and a search button to sit the exam. It has, metaphorically, written its answers on its arm.
This is essentially true. I explained it to my friends like this:
It knows a lot of stuff, but it can't do much thinking, so the minute your problem and its solution are far enough off the well-trodden path, its logic falls apart. Likewise, it's not especially good at math. It's great at understanding your question and replying with a good plain-English answer, but it's not actually thinking.
That's a disservice to your friends, unless you spend a bunch of time defining thinking first, and even then, it's not clear that it, with what it knows and the computing power it has access to, doesn't "think". It totally does a bunch of problem solving; it fails on some, succeeds on others (just like a human that thinks); GPT-4 is better than GPT-3. It's quite successful at simple reasoning (eg https://sharegpt.com/c/SCeRkT7) and moderately successful at difficult reasoning (eg getting a solution to the puzzle question about the man, the fox, the chicken, and the grain trying to cross the river: GPT-3 fails if you substitute in different animals, but GPT-4 seems to be able to handle that). GPT-4 has passed the bar exam, and done well on the LSAT, which has a whole section on logic puzzles (sample test questions from '07: https://www.trainertestprep.com/lsat/blog/sample-lsat-logic-... ).
It's able to define new concepts and new words. Its masters have gone to great lengths to prevent it from writing out particular types of judgements (eg https://sharegpt.com/c/uPztFv1). Hell, it's got a great imagination if you look at all the hallucinations it produces.
All of that adds up to many thinking-adjacent things, if not actual thinking! It all really hinges on your definition of thinking.
Exactly. It's almost like saying dictionaries are better at spelling bees and hence smarter than humans, or that computers can easily beat humans at Tetris and are smarter because of that.
That's not a response from someone who wrote the answers on the inside of their elbow before coming to class. That's genuine inductive reasoning at a level you wouldn't get from quite a few real, live human students. GPT4 is using its general knowledge to speculate on the answer to a specific question that has possibly never been asked before, certainly not in those particular words.
It is hard to tell what is really happening. At some level though, it is deep reasoning by humans, turned into intelligent text, and run through a language model. If you fed the model garbage it would spit out garbage. Unlike a human child who tends to know when you are lying to them.
> If you fed the model garbage it would spit out garbage.
(Shrug) Exactly the same as with a human child.
> Unlike a human child who tends to know when you are lying to them.
LOL. If that were true, it might have saved Fox News $800 million. Nobody would bother lying, either to children or to adults, if it didn't work as well as it does.
> We are literally arguing semantics if this is AGI
It isn't and nobody with any experience in the field believes this. This is the Alexa / IBM Watson syndrome all over again, people are obsessed with natural language because it's relatable and it grabs the attention of laypeople.
Protein folding is a major scientific breakthrough with big implications in biology. People pay attention to ChatGPT because it recites the constitution in pirate English.
This is like all the other rocket companies dismissing what SpaceX is doing as not a big deal. You can keep arguing semantics while they keep putting actual satellites and people into orbit every month.
I use ChatGPT every day to solve real problems as if it's my assistant, and most people with actual intelligence I know do as well. People with "experience in the field", in my opinion, can often get a case of sour grapes that they internalize and project through their seeming expertise, going blind to reality to preserve some sense of calm.
ChatGPT cannot reason from or apply its knowledge - it is nowhere near AGI.
For example, it can describe concepts like risk neutral pricing and replication of derivatives but it cannot apply that logic to show how to replicate something non-trivial (i.e., not repeating well published things).
The domain is the domain of protein structure, something which potentially has gigantic applications to life. Predicting proteins may yet prove more useful than predicting text.
“Predicting proteins”? I’m a biologist and I can assure you knowing the rough structure of a protein from sequence is nowhere near as important to biology as everyone makes it out to be. It is Nobel prize worthy to be sure but Nobel prizes are awarded once a year not once a century.
Except it's not, because they gave it away without any kind of commercialization. It's possible to give something away for free in some context and still have it be a product (Stable Diffusion is doing quite a bit of that, though it's very unclear if they'll be able to do it sustainably), but AlphaFold doesn't seem to be an example. It seems to be an example of something cool they did that they had no desire to make into a product. Which is great! But isn't the same as executing on product in a space.
This is hacker news, AlphaFold doesn’t have an app, some obscure GitHub repo, a hyped up website or a bunch of VC backing, so it’s basically a waste of time.
Numerous individuals have since transitioned away from Google, with reports suggesting their growing dissatisfaction as the company appeared indecisive about utilizing their technological innovations effectively.
Moreover, it has been quite some time since Google successfully developed and sustained a high-quality product without ultimately discontinuing it. The organizational structure at Google seems to inadvertently hinder the creation of exceptional products, exemplifying Conway's Law in practice.
Generative AI in its current state is still a very new area of research with many issues, including hallucination, bias and legal baggage. So for the first few versions we are looking at many new startups like OpenAI, Stability, Anthropic etc. It is yet to be seen if any of the new breed of startups actually starts to make sizeable revenue. But again, there is nothing defensible here unless all the major labs stop publishing papers.
Uh, you snipped in the middle of a clause so you could argue against something it didn’t say.
Here’s the whole thing (leaving out a parenthetical that isn’t important here):
“Google has spectacularly, so far, failed to execute on products […] for generative AI”
You listed a bunch of products in other domains, some of which are the reasons why it has institutional incentives not to push generative AI forward, even if it also stands to lose more if someone else wins in it.
When did anyone realize that generative AI was actually a product with wide consumer appeal? Or how many use cases there were for it as an API service? I'd say it wasn't really obvious until around Q4 last year, maybe Q3 at the earliest.
That's a pretty short time ago. So it seems that so far it hasn't really been a failure to execute, but more about problems with product vision or with reading the market right leading to not even attempting to have actual products in this space. That's definitely a problem, but not one that's particularly predictive of how well they'll be able to execute now that they're actually working on products.
The hardware costs alone of running something like GPT 3.5 for real-time results are 6-7 figures a year. By the time you scale for user numbers and add redundancy... The infra needs to be doing useful work 24/7 to pay for itself.
It's more than possible Google knows exactly what it can do, but was waiting for it to be financially viable before acting on that. Meanwhile Microsoft has decided to throw money at it like no tomorrow - if they corner the market and it becomes financially viable before they lose that it could pay off. That is a major gamble...
> The hardware costs alone of running something like GPT 3.5 for real-time results are 6-7 figures a year.
Can you unpack your thinking there? Even at 5% interest for ownership costs to be six figures a year you're talking about millions of dollars in hardware. Inference is just not that expensive, not even with gigantic models.
To the extent that there is operating cost (e.g. energy)-- that isn't generated when the system is offline.
I don't know how big GPT 3.5 is, but I can _train_ LLaMA 65B on hardware at home and it is nowhere near that expensive.
That's 8 × $200k GPUs + all the other hardware + power consumption for one instance. You could run it on cheaper hardware, but then you'll get nowhere near realtime output, which is required for the majority of the use cases not already handled well by much smaller models.
Even if Google/Microsoft are getting the hardware at a 50% reduction (bearing in mind these are already not consumer prices) it gets to $1mn in hardware alone - again for a single instance that can handle one user interacting with it at a time.
It makes a lot of the bespoke usecases people are getting excited about (i.e. anything with data privacy concerns) far from financially viable.
If you want a dedicated instance of full-capability ChatGPT, for example (32K context), OpenAI are charging $468k for a 3-month commitment / $1,584k for a year.
You can purchase 80GB A100s right now for about $12.5k on the open market. I think the list price is $16k. I don't know what discount the big purchasers see, but 30% should be table stakes (probably explains that $12.5k price), 50% for the big boys wouldn't be at all surprising to me based on my experience with other computing hardware.
So under the assumption that 8 80GB gpus are required, we're talking about a somewhat more than $100k one time cost (for 8x 80gb A100 plus the host) plus power, not 6-7 figures annually. Huge difference!
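To make the disagreement concrete, here is the arithmetic both comments are implying, as a trivial sketch. All figures are the thread's own claims, not vendor quotes, and the host cost is a placeholder guess:

    /* Back-of-envelope ownership cost under each side's assumptions. */
    #include <stdio.h>

    int main(void) {
        /* Upthread claim: 8 GPUs at $200k each, then a 50% big-buyer discount. */
        double claim = 8 * 200000.0 * 0.5;        /* ~$800k hardware */

        /* Rebuttal: 8 x 80GB A100 at ~$12.5k street, plus a host (guessed). */
        double rebuttal = 8 * 12500.0 + 20000.0;  /* ~$120k hardware */

        /* Annualized at the 5% interest figure used earlier in the thread. */
        printf("claim:    $%.0f hardware => ~$%.0f/yr\n", claim, claim * 0.05);
        printf("rebuttal: $%.0f hardware => ~$%.0f/yr\n", rebuttal, rebuttal * 0.05);
        return 0;
    }

Even on the upthread numbers, 5% carrying cost on the hardware alone comes to five figures a year, not six or seven; energy and operations are extra, but they scale with utilization.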
Evaluating it in a latency limited regime but without enough workload to enable meaningful batching is truly a worst case. I admit that there are applications where you're stuck with that, but there are plenty that aren't.
Anyone in that regime should try to figure out how to get out of it. E.g. concurrently generating multiple completions can sometimes help you hide latency, at least to the extent that you're regenerating outputs because you were unhappy with the first sample.
> that can handle one user interacting with it at a time.
That bit I don't follow. The argument given there is without batching. You can do N samples concurrently at far less than N times the cost.
There's definitely something amiss. Maybe we're just not seeing the whole picture, but Google has the best potential out there still. Not only vast and fundamental research came out their door (presumably there's more), but they also have their own compute resources and an up-to-date copy of internet.zip and gmail.zip and youtube.zip which they can train on vs what small and stale stuff (compared to Google's data) OpenAI trained their stuff on (like common crawl etc.). What gives, Google? Get on it!
edit: I forgot all about google_maps.zip / waze.gz and all the juicy traffic data coming from android.. which probably already relies heavily on AI
The difference between OpenAI and Google is that the latter's ethical concerns with AI are more deeply held. Google gave us the Stochastic Parrots paper[0] - effectively a very long argument as to why they shouldn't build their own ChatGPT. OpenAI uses ethics as a handwave to justify becoming a for-profit business selling access to proprietary models through an API, citing the ability to implement user-hostile antifeatures as a deliberate prosocial benefit.
To be clear, Google does use AI. They use it so heavily that they've designed four generations of training accelerators. All the fancy knowledge graph features used to keep you from clicking anything on the SERP are powered by large language models. The only thing they didn't do is turn Google Search into a chatbot, at least not until Microsoft and OpenAI one-upped them and Google felt competitive pressure to build what they thought was garbage.
And yes, Google's customers share that belief. Remember that when Google Bard gets a fact about exoplanets wrong, it's a scandal. When Bing tries to gaslight its users into thinking that time stopped at the same time GPT-4's training did, it's funny. Bing can afford to make mistakes that Google can't, because nobody uses Bing if they want good search results. They use Bing if they can't be arsed to change the defaults[1].
[0] Or at least they did, then they fired the woman who wrote it
[1] And yes that is why Microsoft really pushes Bing and Edge hard in Windows.
It was not some anecdotal fact that Bard got wrong; it was during their official public demo. It was a "scandal" because it showed Google was indeed unprepared and had no better product; not even preparing and fact-checking their demo beforehand was the cherry on top.
Ethics is a false excuse, because rushing that out shows they never cared either. It was just PR, and their bluff was called.
Also, I skimmed over that Stochastic Parrots paper and I'm unimpressed. I'm unfamiliar with the subject, but many points seem unproven/political rather than scientific, with a fixation on training data instead of studying the emergent properties, and many opinions, notably regarding social activism. But maybe it was already discussed here on HN. Edit: found here: https://news.ycombinator.com/item?id=34382901
> I'm unfamiliar with the subject, but many points seem unproven/political rather than scientific
You're exactly the kind of person Stochastic Parrots was trying to warn us about - you bought into the AI hype.
AI models are extremely sensitive to the initial statistical conditions of their dataset. A good example of this is image regurgitation in diffusion models: if you include the same image n times in the data set, it gets n times the number of training epochs, and is far more likely to be memorized. Stable Diffusion's propensity to draw bad copies of the Getty Images logo is another example; there are so many watermarks and signatures in the training data that learning how to draw them measurably reduces loss. In my own AI training adventures[0], the image generator I trained loves to draw maps all the time, no matter what the prompt is, because Wikimedia Commons hosts an absolutely unconscionable number of them.
Stochastic Parrots is arguing that we can't effectively filter five terabytes[1] of training set text for every statistical bias. Since HN is allergic to social justice language, I'll put it in terms that are more politically correct here: gradient descent is vulnerable to Sybil attacks. Because you can only scrape content written by people who are online, the terminally online will decide what the model thinks, filtered through the underpaid moderators who are censoring your political opinions on TwitBook.
Of course, OpenAI will try anyway[2]. The best they've come up with is to use RLHF to deliberately encode a center-left bias into a language model that otherwise would be about as far-right as your average /pol/ user. This has helped ChatGPT avoid the fate of, say, Microsoft's Tay; but it is just sweeping the problem under the rug.
The other main prong of Stochastic Parrots is energy usage. The reason why OpenAI hasn't been outcompeted by actual open AI models is because it takes shittons of electricity and hardware to train these things. Stable Diffusion and BLOOM are the biggest open competitors to OpenAI, but they're being funded purely through burning venture capital. FOSS is sustainable because software development is cheap enough that people can do it as volunteer work. AI training is almost the opposite: extremely large capital costs that can only be recouped by the worst abuses of proprietary software.
[0] I am specifically trying to build a diffusion model trained purely on public domain images, called PD-Diffusion.
[1] No problem. We are Google. Five terabytes is so little that I've forgotten how to count that low.
[2] When filtering the dataset for DALL-E 2, OpenAI found that removing porn from the training set made the image generator's biases far worse. i.e. if you asked for a stock photo of a CEO, pre-filter DALL-E would give about 60% male, 40% female examples; post-filter DALL-E would only ever draw male CEOs.
>> To be clear, Google does use AI. They use it so heavily that they've designed four generations of training accelerators.
This +100
Somehow there is a perception that chatbots are the only example of AI research or product that matters, and that all AI organisations' abilities will be judged by their ability to create chatbots.
LLMs are the end-game for almost all NLP and CV tasks. You can freely specify the task description, input and output formats, unlike discriminative models. You don't need to retrain, don't need many examples, and most importantly - it works on tasks the developers of the LLM were not aware of at design time - "developer aware generalisation". LLMs are more like new programming languages than applications, pre-2020 neural nets were mostly applications.
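To make that "freely specify the task" point concrete, here is a minimal sketch of treating the prompt as the program: the task definition, input format, and output format all live in the request text, so a brand-new task needs no retraining. It uses OpenAI's public chat-completions endpoint; the model name and prompt are illustrative:

    /* Zero-shot task specification against a hosted LLM.
     * Build: cc task.c -o task -lcurl   (expects OPENAI_API_KEY in the env) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <curl/curl.h>

    int main(void) {
        const char *key = getenv("OPENAI_API_KEY");
        if (!key) { fprintf(stderr, "set OPENAI_API_KEY\n"); return 1; }

        /* A task the model's developers never designed for, specified ad hoc. */
        const char *body =
            "{\"model\": \"gpt-4\","
            " \"messages\": [{\"role\": \"user\", \"content\":"
            "  \"Task: classify the sentiment of the input as POS or NEG. "
            "Input: 'the keynote demo crashed twice'. "
            "Output: exactly one token, POS or NEG.\"}]}";

        char auth[256];
        snprintf(auth, sizeof auth, "Authorization: Bearer %s", key);

        CURL *curl = curl_easy_init();
        struct curl_slist *hdrs = NULL;
        hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
        hdrs = curl_slist_append(hdrs, auth);
        curl_easy_setopt(curl, CURLOPT_URL, "https://api.openai.com/v1/chat/completions");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
        CURLcode rc = curl_easy_perform(curl);  /* response JSON goes to stdout */

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }

Swapping in a different task is a one-line edit to the prompt; a pre-2020 discriminative model would need new labels and a retrain.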
> ...nobody uses Bing if they want good search results.
Sadly, I think I'd argue that nobody has good search results anymore. Google's results have been SEO'd to the hilt and most of the results are blog spam garbage nowadays.
> The only thing they didn't do is turn Google Search into a chatbot,
No, they turned google search into what it is now.
For me, trying google bard was an instant reminder of the change in behavior in google search from 15 years ago to today.
We used to have a search where you could give obscure flags to Linux commands and find their documentation or source code. Today we have a Google search that often only tells you how some Kardashian or recent political drama is a sound-alike for the technical term you were searching for.
GPT4 has some of the same "excessively smart" failure modes, but it (and GPT3.5 for that matter) is so much more useful than bard (which hits the user with "I can't do that dave" 100x more often than chatgpt's already excessive behavior) that they're a useful addition to the toolbox. Too bad the toolbox hardly includes plain search anymore.
OpenAI releasing imperfect products is exactly what they said they would do. We need society to understand what the state and risks are. The 6-month-wait shitstorm is what happens when society gets the merest glimmer of the potential. I applaud them for this, rather than focusing on protecting their brand.
Despite what people often write and believe here, the access controls on PII data at Google are incredibly strict. You can't just arbitrarily train on people's personal data. I know, because when I was there, working on search backend data mining, in order to get access to anonymized search and web logs, I had to sign paperwork that essentially said I'd be taken to the cleaners if I abused the access.
> What gives, Google? Get on it
It's a very difficult decision to intentionally destabilize the space you are the leader in, for all the reasons you can imagine. In a sense, Google needed someone else with nothing to lose to shake up the space. How they execute in the new reality is yet to be seen. The biggest challenge they may have right now isn't technological, but that "ChatGPT" has become a sort of brand, like Kleenex and well, Google.
Meh, people would much prefer to be typing their prompts into a Google search box than opening a separate GPT app. I doubt the real issue here is a marketing one. Despite ChatGPT's massive growth numbers, the market is pretty immature; it's still very much open and not yet decided.
Many markets had early leaders who got stomped by later entrants.
Social space vs enterprise space. How many companies would want LLMs integrated with their corporate data, but need trust that the data won't be leaked?
Microsoft and Google both have the capability and trust to make this available. When corporates start paying for LLMs, per user or per application, Google and Microsoft are the two companies in the best position.
All other industries will be users, paying for LLM model access.
1. LLMs don't have a lucrative business model that Google needs.
2. The quality of their language model is really lacking as of now.
You fix 1 and 2, ChatGPT's branding is nothing. Google is the biggest advertisement machine in the world and they can market the hell out of their product. Just see how Chrome gained ground on Firefox for example.
Google is still used several times more than ChatGPT, and if you resolve 1 and 2, Google will make their money and their users will have no incentive to go to ChatGPT.
> Despite what people often write and believe here, the access controls on PII data at Google are incredibly strict. You can't just arbitrarily train on people's personal data.
And yet Google is the largest online advertiser in the world. And yet, GMail used to (I don't know if it still does) push ads into people's inboxes.
I have as much belief in their PII controls as in their "Don't be evil" motto.
--- begin quote ---
When you open Gmail, you'll see ads that were selected to show you the most useful and relevant ads. The process of selecting and showing personalized ads in Gmail is fully automated. These ads are shown to you based on your online activity while you're signed into Google. We will not scan or read your Gmail messages to show you ads.
...
To opt-out of the use of personal information for personalized Gmail ads, go to the Ads Settings page
--- end quote ---
They literally train on datasets of people's personal data.
Google could have stopped sending traffic to webmasters and pivoted to directly providing answers based on scraped data long, long ago, but Google knew webmasters would be up in arms over such a blatant bait and switch taking away their traffic and revenue.
OpenAI subverted this by riding on the “open” part of their name at first—before doing a 180-degree turn and selling out to Microsoft.
They could just as easily show ads in answers, the advertisers wouldn’t care. In fact, I can see how a major advertiser would rather prefer that an ad is shown in Google’s own trusted UI rather than on some random website next to who knows what sort of content (that motivation is behind YouTube’s “demonetization”).
I was referring to Google's own UI, but I agree with you. I am wondering if all the going back and forth from not finding what you are looking for increased the ad numbers (even if short term).
The announcement felt cautious and political, like they are running for technological ruler of the world and not a company trying to make money. This is probably why they are not going to get very far against their competitors despite having so much potential. They care too much about what the EU and governments everywhere think of them now. They are no longer a profit-making entity that disrupts and pushes the rules. They are part of maintaining the status quo.
7/8 of the transformer authors are gone, BERT author is at OAI, two first authors of T5 are gone, imagen team left to make their own startup, etc. etc.
95% of those people have left Google because the ethics and safety teams prevented them from releasing any products based on their research. We have those ex-Googlers to thank for ChatGPT, Character.ai, Inceptive, ... which you'll notice are not Google products but rather competitors.
I, for one, appreciate a megacorp purposely sacrificing revenue when they're not confident that the negative externalities of that revenue would be minimized.
Google could have built a search engine where paid results were indistinguishable from organic results, but the negative externalities of that were too great.
Google could have remained in China, but the negative externalities of developing and managing a censorship engine were too great.
Google could have productized AI before the risks were controlled, but they sacrificed revenue and first-mover advantage to be more responsible, and to protect their reputation.
This behavior is so rare, it's hard to think of another megacorp that would do that.
Google's far from perfect, they've made ethical lapses, which their competitors love to yell and scream about, but their competitors wouldn't hold up well under the same scrutiny.
> Google could have built a search engine where paid results were indistinguishable from organic results, but the negative externalities of that were too great.
Have you not used Google search in the past 5 years?
It sounds like you were either not around or didn't use Google in the early 00s. Back then, there was a very clear, bright color difference between ads and organic search results: a yellow bar at the top with at most two ads, and a side bar. But organic results were easy to identify and took up the majority of screen real estate.
Now, when I search any even remotely commercial term on mobile, about the entire first page and a half of results are ads. Yes, they're identified with a "Sponsored" message, but as you can see from the "evolution" link the other commenter replied with, this was obviously done to make the visual treatment between ads and organic results less clear.
The reason I'm thrilled about Google finally getting competition in their bread-and-butter is not because I want them to fail, but I want them to stop sucking so bad. For about the past 10 or so years Google has gotten so comfy with their monopoly position that the vast majority of their main search updates have been extremely hostile to both end users and their advertisers as Google continually demands more and more of "the Google tax" by pushing organic results down the page.
In the meantime I've switched to Bing, not because I think Microsoft is so much better, because I desperately want multiple search alternatives.
To quote from the above, here's what they said in the beginning: "we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers"
The tweet you linked was from an outage that lasted 30 minutes, it's pretty disingenuous of you to try and pass that off as status quo.
I do agree, however, that the labeling has gotten less prominent over time. I don't, however, agree that it has become subtle enough to be considered indistinguishable from search results.
> I don't, however, agree that it has become subtle enough to be considered indistinguishable from search results.
This is what it looks like on mobile. A tiny "sponsored" text is the only thing that distinguishes ads from search results: https://imgur.com/a/WOk4NdR
A product based on no fundamental innovation is also a path to irrelevance. If there was even something remotely defensible in GPTs then OpenAI would not have sold 50% of the company for $10B. It's only a matter of time and a large amount of human-in-the-loop work before any large transformer model reaches the same space, as recent models like Alpaca and Vicuna are showing. The only thing this whole episode has done is ensure no lab will open-source any major breakthroughs anymore.
> If there was even something remotely defensible in GPTs then OpenAI would not have sold 50% of the company for $10B.
If you need 10B dollar to develop your product you have to find it from somewhere. Training an LLM is not something you can do in a garage, bootstrapped.
This is not a typical VC-backed company. According to the HN crowd, this is the one company who can execute on AI and challenge Google and all the other trillion-dollar AI labs. In my opinion they themselves are aware of the fact that they are a one-trick pony. Given how astute a VC Sam Altman is, if there was anything remotely innovative and defensible about the product, they would never have done that.
Raising money is ok. Selling 50% of your company for $10B when apparently you have a trillion-dollar defensible business doesn't make any business sense.
You need money to turn it into a trillion dollar business... Selling equity is how you raise that money.
If you were sure it will be a $1T business you have more reason to sell equity and accelerate the growth of your company, because you know the remaining 50% is going to be so valuable.
These are capitalist enterprises here. I'd argue that product is almost all that matters. Sure someone has to innovate but the final product that can be sold is what keeps people and companies relevant.
I suspect recency-bias may be tripping people up: LLMs and ChatGPT are not the final word in AI, and there is no reason for Google to bet the farm on them.
I wouldn't bet against Google DeepMind originating the next big thing; at the very least, their odds are higher than OpenAI's.
Edit: this may yet turn out to be a Google+ moment, where an upstart spooks Google into thinking it is fighting an existential battle but winds up okay after some major missteps that take years to fix (YouTube comments as a real-name social network. Yuck)
You could say the same about Xerox in the late 70s. And they conclusively showed that they couldn’t execute and squandered all of their amazing original research. Looking at how laughably bad Bard is, Google has a long way to prove they aren’t Xerox 2.0 at this point. I’m amazed that Sundar hasn’t been pushed out yet by Larry and Sergey.
This thread is full of people saying that what Xerox did was some terrible mistake, but I think it was much better that they could afford to do all this research, which spawned a massive industry, than if they had become a massive monopoly which controlled everything.
If Google spends billions of its ad money doing original research that spawns a new industry with thousands of companies, that would seem to be a great result to me.
That might be true on a societal level, but is small solace to XRX shareholders, not to mention the many researchers who contributed these brilliant creations only to see them exploited by others while their own company just ignored them and let them die on the vine.
You're reiterating their point. Yeah, Google has competent AI people but that means nothing for their own success if they can't execute. OpenAI has proven that.
> Yes the team which literally created the transformer and almost all the important open research including BERT, T5, Imagen, RLHF, ViT don't have the ability to execute on AI /s
Yet Google does not have a slam-dunk product despite so many great research results. This looks like a gross failure of the CEO, especially given that he's been chanting AI First for the past few years.
I would go on about how much execution matters, but it's not just about execution, cause ChatGPT is actually a better AI than anything Google has put out so far. So unless Google is hiding something amazing...
I'm sure you're right but the only note attached to the author list is in the opposite direction – Tom B Brown has an asterisk with "Work done while at OpenAI".
The original transformer team very much has executed on making successful implementations of transformers ... just not for Google. Clearly something went a bit wrong at Google brain in 2017.
Every single member of the research team that invented the transformer architecture has left Google to go to OpenAI or to make their own startups (Character.ai, Anthropic, Cohere).
It might help to reflect on what the upsides of this have been for OpenAI, re execution.
On the face of it, execution is often all that matters. FB v myspace, AMD v Intel (eventually), Uber v Lyft, MS v Apple (pre 2001), Apple v MS (post 2001) etc.
I think in this context “execute” implies “create traction with a real-world product”. Given that even politicians and comedy shows are talking about ChatGPT, I think it’s fair to acknowledge that Google is lacking in this area.
"Sun Microsystems literally invented Java and has done a ton of open research on RISC, how are they not able to execute as those technologies are exploding"
The outside world has only been looking at AI innovation in recent times, forgetting the entire journey of the last decade. If there were any remotely defensible technology at OpenAI, they wouldn't have sold 50% of their company for $10B.
Yes, we saw your other comment stating the same thing.
You are doing a whole lot of tea-leaf reading with basically zero visibility, which I can't really reconcile with how absolute your language is.
You think the researchers who created transformers are going to become a commercial product team and be good at execution?
Google is great at research, one of the best companies in the world. They are also not very good at product. It will not be possible for Google to research their way out of the business problems they’re facing. They may win, but if so it will be because they get good at product, not because the transformers research team comes up with something even more amazing.
PPO for RL (rough sketch below). Plus, a lot of the people behind the innovations you mention are now at OAI: Lukasz, who was one of the Transformer authors; a BERT paper author; the chain-of-thought prompting author; etc.
But yeah, their strong point is execution and doing the hundreds of little things that make the model do well, and it turns out that's more important than "novel" ideas.
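For anyone who wants the gist of PPO: the core is a small clipped objective. A rough numpy sketch (illustrative only, not OpenAI's actual implementation):

    import numpy as np

    def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
        # Probability ratio between the updated policy and the one that
        # collected the data (computed from per-action log-probs).
        ratio = np.exp(logp_new - logp_old)
        # Clipped surrogate objective (Schulman et al., 2017): take the
        # pessimistic minimum so large policy updates get no extra credit.
        unclipped = ratio * advantages
        clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
        # PPO maximizes the surrogate, so the loss is its negation.
        return -np.mean(np.minimum(unclipped, clipped))

The whole trick is that clipping removes the incentive to move the policy far from the data-collecting policy in a single step, which is what made it simple and stable enough to become the default for RLHF-style fine-tuning.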
Even in ‘07, Apple had a track record for doing things right, not doing things first.
Current-day Google churns out sterile, uninspiring products, and kills them.
If your argument is "this company is going to act out of character and do something innovative!" then... yeah, sure. That's a good way to be right, sometimes. Just don't let everyone see all the times you've been wrong.
Because of course Xerox PARC, which literally invented the GUI, desktop computer, the mouse, freaking Ethernet, etc executed the commercialization of all their innovation flawlessly....
Being able to produce research is a very different skill from being able to produce a very successful product. We have not seen Google do that very successfully for over a decade.
DeepMind clearly is a household name. Think of AlphaGo or AlphaFold, those were legendary. Google Brain as well is a household name. Think of the Transformer, or BERT. Those are legendary, as well.
RLHF wasn't introduced by OpenAI. And GPT is a pretty standard transformer, no? Yes, they did it at scale, and it speaks volumes about their production skills, but the OP was asking about research.
GPT is a specific instantiation of a transformer, and doing next-token prediction was an OpenAI thing. Transformers were a big part of it, but GPT was definitely proposed by OpenAI.
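To make the distinction concrete: BERT-style models mask tokens in the middle and predict them, while GPT-style models train each position to predict the token that follows it. A toy numpy sketch of that causal objective (illustrative; real models add batching, attention masking, etc.):

    import numpy as np

    def next_token_loss(logits, tokens):
        # logits: (seq_len, vocab_size) model outputs; tokens: (seq_len,) ids.
        # Shift by one: position t is trained to predict token t+1.
        targets = tokens[1:]
        preds = logits[:-1]
        # Log-softmax over the vocabulary (subtract max for stability).
        preds = preds - preds.max(axis=-1, keepdims=True)
        log_probs = preds - np.log(np.exp(preds).sum(axis=-1, keepdims=True))
        # Average negative log-likelihood of the true next tokens.
        return -np.mean(log_probs[np.arange(len(targets)), targets])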
There are plenty of products that launch on top tools and frameworks that are worth far more than the underlying will ever be. OpenAI is creating products, DeepMind was creating tools.
It's not a matter of skill as much as objective. And DeepMind would still be starting from zero if they decided to pivot to products.
"this action will hamstring DeepMind in bureaucracy."
I'm sorry but I fail to see the problem with this. DeepMind has made very impressive demos and papers, but they have yet to add one dollar of revenue to Google's bottom line. Further they have drained billions from Google.
Google has to, somehow, get completely out of the research paper game and into the product game.
Papers have to have little/no impact on perf (performance reviews) going forward. Other than a small windfall to goodwill, they are a misalignment between the company's goals and those of the employees.
Products, Google, products. Unless Larry and Sergey want to turn Google into a non-profit research think tank. Which would be fine, but likely with a substantially lower headcount. Even they aren't that wealthy.
LOL, if you look at the amount of money Google has poured into DeepMind and how much they got back for their investment, it's laughable.
Things like the WaveNet "contributions" are just Demis paying lip service to the fact that, once in a while, Google was nudging them to produce something, anything really, that was actually useful.
Google putting the extreme amounts of easy dollars they have into things that aren't instantly profitable is very much what the founders said they'd do, though.
This was paid by Google for unspecified research services. But given the way it's accounted for, it's likely that it was based on some legitimate contribution. It is unlikely it would be structured this way if it were just corporate support.
DeepMind has public financial filings and you can go read the exact language they use to describe the revenue they generate.
> DeepMind has made very impressive demos and papers, but they have yet to add one dollar of revenue to Google's bottom line. Further they have drained billions from Google.
You could say the same about OpenAI and Microsoft: they drained money for years until about 6 months ago, when suddenly the partnership started to pay off in a big way.
OpenAI is still massively unprofitable, and MSFT is (rightly, IMO) going to invest way more money in them, so it's definitely still a drain, though a modest one relative to MSFT's overall resources.
As much as I'd love an OpenAI-style API from Google, I'm not expecting that. It will probably be "profitable" to them in the unseen backend, making Search, Google Assistant, etc. better. I've been playing with Bard a lot and it's pretty good, but OpenAI's API offering makes them so much more useful to me, since I can use whatever app I want (or even write my own) to consume the product, and it's easy for me to see the value for my dime.
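For reference, the whole integration can be a handful of lines with their Python client (model name and prompt are just illustrative here):

    import openai

    openai.api_key = "sk-..."  # your own key

    # Call the hosted model from any app you care to write yourself.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # whichever model you have access to
        messages=[{"role": "user", "content": "Summarize this thread in one line."}],
    )
    print(response["choices"][0]["message"]["content"])

That's the kind of shovel-selling that makes the value obvious; nothing from Google gives me that today.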
"Papers have little/no impact on perf" - this is a ridiculous and false claim.
Almost every single advancement in any field has come from academia. Sure, it may not be recognized as such by the general public, because they aren't experts in the area, but the fact remains that academia is pretty much the only way to progress as a society. Companies just take what academia gives them and make products out of it for their own profit (not to completely trivialize that; it still comes with its own set of challenges), but the private sector is completely misaligned with making real progress towards hard problems. DeepMind is one of the examples that continues to show this despite being a "corporate entity": the large advancements we see come out of its employment of university professors, i.e., using its excess capital to fund professors who focus on their research.
> Almost every single advancement in any field has come from Academia
This claim needs far more evidence. If you mean academia as the institution where you share papers, sure, but then that's just a sharing mechanism. It's almost like saying all advancements came out of the Internet because arXiv is where research is shared.
If you want to say professors and universities have been heralding AI advancement, that has not been true for at least 10 years, possibly more. The moment industry started getting into academia, academia couldn't compete and died out. Even Transformers, the founding paper of the modern GPT architectures, came out of Google Research. In vision, everything from ResNet and Mask R-CNN to Segment Anything came out of Meta / Microsoft. The last great academic invention might have been dropout, and even that involved Apple. After that, I fail to see academia coming up with a single invention in ML that the rest of the community instantly adopted because of how good it was.
Huh? None of this is true for a lot of core recent work. A very obvious example is transformers, which did not come out of academic research (or DeepMind for that matter) at all.
I feel like Google crossed some point about a decade ago where they stopped making innovative stuff and started focusing on squeezing revenue out of everything else, a bit like when Carly turned HP into a printing/ink racket. Both the decline of Google Maps and Google's inability to filter the noise out of its search results are strong indicators of this for me. Scrambling to field a competing product to maintain relevancy in this emerging market is consistent with this assessment as well. The old Google would have fielded the product first because it was useful; the current Google seems to do it because they don't want to lose revenue.
I don't know if said bureaucracy is a blessing or a curse given Google's track record in product management. If pressed I would bet towards the curse option.
Different people excel at different types of work (particularly where deep experience is the most significant contributor to performance). Tasking academic researchers with building product is the pathway to hell.
The existing, top-performing product teams at Google should be taking that research and building products around it. If Google has any top-performing product teams left, that is...
Is this an influx of resources, or consolidation and cutbacks? I read it as: Google used to have two different AI research teams, and now they have one fewer than they used to.
This reminds me of Nest. When it was separate, it was shipping great hardware and OK software. Then Google appended "Google" in front of it, creating "Google Nest" and kicked off the slow Google Hug of Death™.
The first casualty was Nest shutting down its APIs, cutting off an ecosystem of third party integrations.
The next casualty was replacing the Nest app with the Google Home app. I stopped following Nest after that because I sold all the Nest stuff I owned and replaced it all with HomeKit.
It's astounding how Google keeps doing this, and its shareholders seem to go along with it. I agree: given their track record, it's hard to be optimistic about anything Google slaps their name in front of.
What a shortsighted statement for a race that has barely gotten out of the gates. If any one company should be panicking, it's OpenAI, at the thought of losing their minimal lead and getting crushed once the company that invented most of the technology they use puts a significant amount of resources behind its AI initiatives.
Google Search had an outage yesterday. Google just underwent its first round of layoffs ever, which definitely affects internal morale and makes all employees aware of their company's mortality. Google's CEO was in the news last week for hiding communications while under a legal hold. Google stock tanked after the rushed demo of Bard. And even if all those things weren't true, Google has continually failed to establish revenue streams independent of ads and continually abandons products that don't meet their expectations. Consumer confidence in new Google product announcements is lower than for any other major tech company; the default assumption is that the product will be pulled months or years later.
Microsoft is giving their full support to OpenAI through their 49% partnership: a $13B investment, compared to Google buying DeepMind for $500M and investing $300M in Anthropic. Microsoft has good working agreements with the US government, a long history of unreasonable support for their flagship products, clawed their way back to being one of the most valuable companies in the world by finding diverse revenue streams, and, frankly, comes across as the wise adult in the room, given they already had their day in the sun with legal battles.
I agree completely that if there continue to be marked revolutions in AI that invalidate current SOTA then those innovations are likely to arise from Google's research labs, but from an execution standpoint I have nothing but concerns for Google. It's crazy that I feel they need a second chance in the AI revolution when LLMs originated from inside their org just a few years ago. And it's not like they don't feel similarly - there've been countless articles about "Code Red" at Google as they try to rapidly adjust their strategy around AI.
I think OpenAI has a wider lead than people are acknowledging. It's like everyone was forced to show their AI hand in the last couple of months, in an attempt to appease shareholders, and it seemed like a fair fight until GPT4 hit the ground running. Now we're looking at agents and multi-modal support on top of $200M/yr in revenue, while everyone else has no business plan and has yet to announce any looming upgrades. At a certain point, first-mover advantage compounds, the foremost AI app store becomes established, and people building commercial products become entrenched.
Yeah, fair, the way I expressed myself sounded stupid. What I meant to say was something like: "I don't believe that DeepMind is openly making use of LLM technologies. They're known for their neural networks operating at a pixel-level rather than a token-level. I don't know which of these approaches has more long-term commercial viability."
> This does not seem unexpected. Google is panicked about losing the AI race and pushing resources into DeepMind is a logical step to mitigating those fears.
Google trying to "win" the super-human AGI race is even more flawed than a nation trying to "win" the nuclear arms race.
At least with a nuclear arms race we all die quickly. Super-human AGI will probably just bring about unthinkable levels of suffering before finally killing us all.
And here I thought that Google would achieve AI supremacy because of all the data they have been vacuuming for decades, turns out they haven't even thought to utilize it?
How did they drop the ball so hard? OpenAI has been around for less than a decade, and as a smaller team with fewer resources was able to make a better product.
Though this is usually how it goes: big successful companies begin to bend towards regulatory capture after having their period of upstart growth and disruption. They make as much money as possible for shareholders on their cash cow, and their management culture's primary objective is to make sure this is not disturbed.
Think about how many decades head start IBM had to perfect search, but search wasn't their core competency.
Delivering advertisements is Google's core competency.