1. Google had recently exploited their home page to push the Chrome browser, successfully altering the browser market. Anyone visiting Google saw a popup on the home page nudging them toward Chrome. The same opportunity was there for G+, but with updates from friends.
2. Everyone already had a Google account, and many millennials were using Google Talk at the time. It appeared Google could undermine Facebook's network effects.
3. The UI of G+ appeared better.
4. Facebook had released the News Feed, known at the time as ‘stalker mode’, and people recoiled at the idea of broadcasting their every action to every acquaintance. The circles idea was a way of providing both privacy and the ability to broadcast widely when needed.
5. Google had tons of money and devoted their world-class, highly paid genius employees to building a social network.
You can see parallels to each of these in AI now: their pre-existing index of all the world's information, their existing search engine that you can easily drop an LLM into, the huge lead in cash, etc. They are in a great position, but don't underestimate their ability to waste it.
Google definitely benefited from being able to push Chrome on the homepage, but it was also a bit of a layup given that every other browser completely sucked at the time. Chrome showed that browsing the Internet didn't have to be slow and caught MS+Mozilla with their pants down. Safari is still working on pulling theirs back up.
> Safari is still working on pulling theirs back up.
Not sure about this take, given that Chrome's rendering engine was famously based on Safari's (WebKit) before they forked it (Blink). V8 was indeed faster than Safari's JS engine at the time. However, today Safari is objectively faster in both rendering (WebKit) and JS performance (JavaScriptCore).
They caught up in performance but failed at what Apple was historically good at: vertical integration. Safari still sucks, and nobody talks about it because nobody uses it.
I'm with you on this. I've been an early paid Antigravity IDE user. Their recent silent rug pull on quotas, where without any warning you get rate-limited for five days in the middle of a code refactor, doesn't just leave users unsatisfied with the product, it enrages them. It actually makes you hate the evil company.
So is Gemini tbh. It's the only agent I've used that gets itself stuck in ridiculous loops repeating "ok. I'm done. I'm ready to commit the changes. There are no bugs. I'm done."
Google somehow manages to fumble the easiest layups. I think Anthropic et al. have a real chance here.
Google's product management and discipline are absolute horsesh*t. But they have a moat, and it's extreme technical competence. They own their infra from the hardware (custom ASICs, their own data centers, global intranet, etc.) all the way up to the models and the product platforms to deploy them in. To the extent that making LLMs solve real-world problems is a technical problem, landing Gemini is absolutely in Google's wheelhouse.
Just imagine how things change when Google realizes they can leverage their technical competence to have Gemini build competent product management (or at least something that passes as comparatively competent, since their bar is so low).
You are stating generalities when more specific information is easily available.
Google has AI infrastructure that it has created itself as well as competitive models, demonstrating technical competence in not-legacy-at-all areas, plus a track record of technical excellence in many areas both practical and research-heavy. So yes, technical competence is definitely an advantage for Google.
I use Claude every day. I cannot get Gemini to do anything useful, at all. Every time I've tried to use it, it has just failed to do what was required.
Three subthreads up you have someone saying Gemini did what Claude couldn't for them on some 14-year-old legacy code issue. It seems you can't really use people's prior success with their problem as an estimate of what your success will be like with your problem and a tool.
People and benchmarks are using pretty specific, narrow tests to judge the quality of LLMs. People have biases, benchmarks get gamed. In my own experience, Gemini seems to be lazy and scatter-brained compared to Claude, but shows higher general-purpose reasoning abilities. Anthropic is also obviously massively focusing on making their models good at coding.
So it is reasonable that Claude might show significantly better coding ability for most tasks, while the better general reasoning ability proves useful in coding tasks that are complicated and obscure.
Hard to bet against Hassabis + Google's resources. This is in their wheelhouse, and it's eating their search business and refactoring their cloud business. G+ seemed like a way to get more people to Google for login and tracking.
That's pretty telling: on search and ad placement on the web, where it matters, OpenAI has had no impact, or the impact is muted and offset by Google's continued market power and increased demand for its ad space.
A couple of months ago things were different. Try their stronger models. Gemini recently saved me from a needle-in-a-haystack problem with buildpacks and Linux dependencies on a 14-year-old B2B SaaS app I was solving a major problem for; it figured out the solution quickly after I had worked on it for hours with Claude Code. I know it's just one story where Gemini won, and I have really enjoyed using Claude Code, but Google is having some success with the serious effort they're putting into this fight.
They recently replaced “define: word” (or “word meaning”) results with an “AI summary”, and it’s decidedly worse. It used to just give you the definition(s) and synonyms for each one. Now it gives some rambling paragraphs.
My Google gives me the data from OUP for “word meaning” queries and doesn't show any AI. For “word meaning <language>” it opens the translator. It is really fast and convenient.
I think they had no choice but to release that AI before it was ready for prime time. Their search traffic started dropping after ChatGPT came out, and they risked not looking like a serious player in AI.
I thought it was a far superior UI to Facebook's when it launched. I tried to use it, but the gravity of the network effect was too strong on Facebook's side.
In the end I'd rather both had failed. Although one can argue that they actually did. But that's another story.
I very much wanted Google Plus to succeed. Circles was a great idea in my opinion. Google Plus profiles could have been the personal home page for the rest of us, but of course, Google being Google...
That being said, tying bonuses for the whole company to the success of Google+ was too much even for me.
Everything looks obviously DOA after it dies. I also thought it wouldn't last, but it wouldn't be the first or last tech company initiative that lived on long after people thought it would die. Weird things happen. "Obviously" isn't a good filter.
It was a little different. Facebook was eventually (after the Harvard-only phase) wide open to college email holders, so it wasn't some exclusive club that kept out the people you wanted in. It did keep your parents out, though, along with your lame younger sibling. You could immediately use it with your friends. No invite nonsense like with G+.
This is the siren song of LLMs: "Look how much progress we made."
Effort ramps up as time to completion shrinks: the last 10% of the project takes 90% of the effort as you try to finish up, deploy, integrate, and find the gaps.
LLMs are woefully incapable of that, because that knowledge doesn't exist in a markdown file. It's in people's heads, and you have to pry it out with a crowbar; otherwise, as happens to so many projects, they get released and no one uses them.
See Google et al.: "We failed to find market fit on the 15th iteration of our chat app; we'll do better next time."
I have to stretch your analogy in weird ways to make it function within this discussion:
Imagine two people who have only sat in a chair their whole lives. Then, you have one of them learn how to drive a car, whereas the other one never leaves the chair.
The one who learned how to drive a car would then find it easier to learn how to run, compared to the person who had to continue sitting in the chair the whole time.
I've found AI handy as a sort of tutor sometimes, like "I want to do X in Y programming language, what are some tools / libraries I could use for that?" And it will give multiple suggestions, often along with examples, that are pretty close to what I need.
It'll probably look like the code version of this, an image run through an LLM 101 times with the directive to create a replica of the input image: https://www.reddit.com/r/ChatGPT/comments/1kbj71z/i_tried_th... Despite being provided with explicit instructions, well...
People are still wrongly attributing a mind to something that is essentially mindless.
They do okay-ish for things that don't matter, and if you don't look that hard. If you do look, the "features" turn out to be very limited, or don't do what they claim, or don't work at all.
It’s still a collaborative and iterative process. That doesn’t mean they don’t work. I don’t need AI to one-shot my entire job for it to be crazy useful.
If you find it helpful, that's fine. I like it as spicy autocorrect, and turn it off when I find it annoying.
I actually do look into what people do because as much fun as being a hater is, it's important not to get lost in the sauce.
From what I've seen, it's basically all:
1. People tricking themselves into feeling productive but they're not, actually
2. People tricking themselves into feeling productive but they're actually doing sloppy work
3. Hobby or toy stuff
4. Stuff that isn't critical to get right
5. Stuff they don't know how to judge the quality of
6. ofc the grifters chasing X payouts and driving FOMO
7. People who find it kinda useful in some limited situations (me)
It has its uses for sure, but I don't find it transformative. It can't do the hard parts, and for anything useful I need to check exactly what it did; if I'm doing that, it's much faster to do it myself. Or write a script to do it.
Sure, if all you ask it to do is fix bugs. You can also ask it to work on code health things like better organization, better testing, finding interesting invariants and enforcing them, and so on.
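To make the "invariants" part concrete, here's a minimal sketch of what that can look like; the Order class, its fields, and the checks are hypothetical, just the kind of thing an agent might propose and wire into a test suite:

    # Hypothetical example: an invariant-checking helper an agent might add to a test suite.
    from dataclasses import dataclass

    @dataclass
    class Order:
        items: list       # (name, unit_price, quantity) tuples
        discount: float   # fraction between 0 and 1

        def total(self) -> float:
            subtotal = sum(price * qty for _, price, qty in self.items)
            return subtotal * (1 - self.discount)

    def check_order_invariants(order: Order) -> None:
        # Invariants to enforce on every code path that builds an Order.
        assert 0 <= order.discount <= 1, "discount must be a fraction"
        assert all(qty > 0 for _, _, qty in order.items), "quantities must be positive"
        assert order.total() >= 0, "total can never be negative"

    check_order_invariants(Order(items=[("widget", 9.99, 3)], discount=0.1))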
I have some healthy skepticism about this claim, though. Maybe, but there will be a point of diminishing returns where these refactors introduce more problems than they solve and just cause more AI spending.
Code is always a liability. More code just means more problems. There has never been a code-generating tool that was any good. If you can have a tool generate the code, it means you could write something at a higher level of abstraction that would not need that code to begin with.
AI can be used to write this better quality / higher level code. That's the interesting part to me. Not churning out massive amounts of code, that's a mistake.
Microsoft will be an excellent real-world experiment on whether this is any good. We so easily forget that giant platform owners are staking everything on all this working exactly as advertised.
Some of my calculations going forward will continue to be along the lines of 'what do I do in the event that EVERYTHING breaks and cannot be fixed'. Some of my day job includes retro coding for retro platforms, though it's cumbersome. That means I'll be able to supply useful things for survivors of an informational apocalypse, though I'm hoping we don't all experience one.
There's an interesting phenomenon I've noticed with the "skeptics". They're constantly using what-ifs (aka moving the goalposts), but the interesting thing is that those exact same what-ifs were "solved" earlier, then dismissed as "not good enough".
This exact thing about optimisation was shown years ago: "Here's a function, make it faster," with "glue" to test the function, and it kind of worked even with GPT-4-era models (a minimal sketch of that test glue follows at the end of this comment). Then came AlphaEvolve, where Google found improvements in real algorithms, both theoretical (packing squares) and practical (ML kernels). And yet these were dismissed as "yeah, but that's just optimisation, that's easyyyy. Wake me up when they write software from 0 to 1 and it works".
Well, here we are. We now have a compiler that can compile and boot Linux! And people are complaining that the code is unmaintainable and that it's slow and unoptimised. We've come full circle, but forgot that optimisation was easyyyy. Now it's something to complain about. Oh well...
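For reference, the "glue" in that old make-it-faster setup was nothing fancy. A minimal sketch, with a made-up function pair standing in for the real code:

    # Minimal sketch of the test glue: keep a slow reference implementation and
    # check any model-proposed rewrite against it for correctness and speed.
    import random
    import timeit

    def reference_sum_of_squares(xs):
        total = 0
        for x in xs:
            total += x * x
        return total

    def candidate_sum_of_squares(xs):
        # Hypothetical faster version proposed by the model.
        return sum(x * x for x in xs)

    data = [random.randint(-1000, 1000) for _ in range(10_000)]
    assert reference_sum_of_squares(data) == candidate_sum_of_squares(data)

    print("reference:", timeit.timeit(lambda: reference_sum_of_squares(data), number=100))
    print("candidate:", timeit.timeit(lambda: candidate_sum_of_squares(data), number=100))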
I use LLMs daily and agents occasionally. They are useful, but there is no need to move any goalposts; they still easily do shit work in 2026.
All my coworkers use agents extensively in the backend and the amount of shit code, bad tests and bugs has skyrocketed.
Couple that with a domain (medicine) where our customer in some cases needs to validate the application's behaviour extensively, and it's a fucking disaster: very expensive iteration instead of doing it well upfront.
I think we have some pretty good power tools now, but using them appropriately is a skill issue, and some people are learning to use them in a very expensive way.
I find that chat is pretty good for describing what you want to do, for saying "actually, I wanted something different," or for giving it a bug report. For making fine adjustments to CSS, it would be nice if you could ask the bot for a slider or a color picker that makes live updates.
It doesn't really matter for hobby projects or demos or whatever, but there's this whole group who thinks they can yell at the computer and have a business fall out and no.
I agree, but want to interject that "code organization" won't matter for long.
Programming languages were made for people. I'm old enough to have programmed in Z80 and 8086 assembler, and I've been through plenty of programming languages over my career.
But once building systems becomes prompting an agent to build a flow that reads these two kinds of Excel files, cleans them, filters them, merges them, and outputs the result for the web (oh, and makes it interactive and highly available; a rough sketch of that sort of flow follows at the end of this comment), code won't matter. You'll have other agents that check that the system is built right, agents that test the functionality, and agents that ask for and propose functionality and ideas.
Most likely, programming languages will become similar to old telegraph texts (telegrams), which were heavily optimized for word/token count. They will be optimized to be LLM-grokkable instead of human-grokkable.
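For what it's worth, a rough sketch of the kind of flow described above might look like this; the file names and columns are made up, and the interactive/highly-available parts are left out:

    # Hypothetical flow: read two kinds of spreadsheets, clean, filter, merge,
    # and emit something a web layer can serve.
    import pandas as pd

    orders = pd.read_excel("orders.xlsx")
    customers = pd.read_excel("customers.xlsx")

    # Clean: normalise column names and drop rows missing the join key.
    orders = orders.rename(columns=str.lower).dropna(subset=["customer_id"])
    customers = customers.rename(columns=str.lower).dropna(subset=["customer_id"])

    # Filter and merge.
    recent = orders[orders["year"] >= 2024]
    merged = recent.merge(customers, on="customer_id", how="left")

    # Output for the web (interactivity and high availability left to the agent...).
    merged.to_html("report.html", index=False)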
What you're describing is that we'd turn deterministic engineering into the same march of 9s that FSD and robotics are going through now, but for every single workflow. If you can't check the code for correctness and debug it, then your test system must be absolutely perfect and cover every possible outcome. Since that's not possible for nontrivial software, you're starting a march of 9s towards 100% correctness for each solution.
That accounting software will need 100M unit tests before you can be certain it covers all your legal requirements. (Hyperbole, but you get the idea.) Who's going to verify all those tests? Do you need a reference implementation to compare against?
Making LLM work opaque to inspection is kind of like pasting the outcome of a mathematical proof without any context (which is almost worthless AFAIK).
There are certainly people working on making this happen. As a hobbyist, maybe I'll still have some retro fun polishing the source code for certain projects I care about? (Using our new power tools, of course.)
The costs for code improvement projects have gone down dramatically now that we have power tools. So, perhaps it will be considered more worthwhile now? But how this actually plays out for professional programming is going to depend on company culture and management.
In my case, I'm an early-retired hobbyist programmer, so I control the budget. The same is true for any open source project.
My unpopular opinion is AI sucks at writing tests. Like, really sucks. It can churn out a lot of them, but they're shitty.
Actually writing good tests that exercise the behavior you want, guard against regressions, and aren't overfitted to your code is pretty difficult, really. You need to understand both the function and the structure to do it.
Even for hobby projects, it's not great. I'm learning asyncio by writing a matrix scraper, and writing good functional tests as you go is worth it to make sure you actually understand the concepts.
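A small, hypothetical illustration of the difference (parse_price is made up): the first pytest-style test pins behaviour, while the second is the overfitted kind that just restates the implementation:

    def parse_price(text: str) -> float:
        # Function under test (hypothetical).
        return float(text.strip().lstrip("$").replace(",", ""))

    def test_parse_price_behaviour():
        # Pins the contract: user-visible inputs map to the right values,
        # regardless of how parse_price is implemented.
        assert parse_price("$1,299.50") == 1299.50
        assert parse_price("  42 ") == 42.0

    def test_parse_price_overfitted():
        # Mirrors the implementation, so it passes today but catches nothing
        # if the parsing logic ever changes.
        assert parse_price("$1,299.50") == float("$1,299.50".strip().lstrip("$").replace(",", ""))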
And what happens when these different objectives conflict or diverge? Will it be able to figure out the appropriate trade-offs, live with the results, and go meta to rethink the approach, or will it simply delude itself? We would definitely lose these skills if it continues like this.
Big companies are handcuffed by the Innovator's Dilemma, etc.