It's in Google's interest that coming generations will find any URL as weird as we find an IPv6 address. Everybody should use their search^W profit generation engine.
> When Google announced in 2018 that it was shutting down goo.gl, the company encouraged developers to migrate to Firebase Dynamic Links (FDL) — which has also since been deprecated.
Why wouldn't they just change the backend and leave the service alive for the end users? It seems nuts to give up all that sweet sweet browsing data.
Seriously... if they retire it, make the backend read-only; that way it can be highly optimized and run with minimal cost (from a mammoth company's perspective).
I don't know, make it an interview question and deploy the best answer? They put more effort into torturing aspirants than into EOL-ing some of their cheap-ass services in a reasonable way.
The problem is that Google infra requires everything running to be new. There is a build horizon of 6 months. Everything built with code older than 6 months is not able to run on Borg. And since Google deprecates many internal infra tools/libraries routinely, a team is required to make sure the service remains up-to-date. Google doesn't want to pay for such maintenance.
Note that when Google made a blog post telling people to migrate from goo.gl to the also now deprecated Firebase Dynamic Links, the post states explicitly[1]:
"While most features of goo.gl will eventually sunset, [bold]all existing links will continue to redirect to the intended destination.[/bold]"
Every time I change a URL (or a set of URLs), I put a test for the redirect into my end-2-end tests, which run once per day. So I know all my URLs will work forever.
I have not thought about it for years now. Just checked for my first ever Show HN from 10 years ago:
The URL has long changed, but the redirect still works. Phew :) So all seems to be good. Here's to the next 10 years!
Google should do the same. Set up a separate server for the redirect service itself. And then I guess they have multi-project end-2-end tests running somewhere in their infrastructure. Just add testing for this service and that's it. The amount of work per year to keep it up should be less than an hour, right?
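(For illustration, a minimal sketch of such a daily redirect check, assuming Python with the requests library; the URLs and expected targets below are placeholders, not the poster's actual links:)

    # Minimal daily redirect check; the URLs here are placeholders.
    import requests

    REDIRECTS = {
        "https://example.com/old-page": "https://example.com/new-page",
        "https://example.com/old-feed": "https://blog.example.com/feed",
    }

    def test_redirects():
        for old, expected in REDIRECTS.items():
            # Don't follow the redirect; just inspect status and Location.
            resp = requests.get(old, allow_redirects=False, timeout=10)
            assert resp.status_code in (301, 302, 307, 308), f"{old}: {resp.status_code}"
            assert resp.headers.get("Location") == expected, f"{old} -> {resp.headers.get('Location')}"

    if __name__ == "__main__":
        test_redirects()
        print("all redirects ok")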
The sad truth is that no one is getting a promotion to staff for just maintaining a service.
I wish this wasn’t so. At a previous job I had a VP tell me that my team was like a public utility and I took that as a compliment. Later my boss explained they were saying that they only noticed my team when something was broken. Sort of explained my lack of career progression in retrospect.
> On August 25th, 2025, Firebase Dynamic Links will shut down. All links served by Firebase Dynamic Links (both hosted on custom domains and page.link subdomains) will stop working and you will no longer be able to create new links.
> I bet they could have 2 interns porting the thing to Google App Engine and then migrate the database
How can you possibly have this assessment without looking at the code/infra?
There are many things that affect cost beyond the visible features. The project isn't in a vacuum. It's interlocked with the infrastructure of their other services.
You can judge Google however you want, but they're not stupid or amateurs. These types of announcements immensely damage their image and affect their customers; if they could avoid it as easily as you imagine, why would they not?
They've built the service and run it for many years for billions of people. A more realistic guess would be that for whatever reason, the price is higher than what's visible on the surface and they're not willing to pay it.
Just to be clear, I'm not saying they can port everything to it, but only the basic functionality to not let the links die (then progress with it)
> These types of announcements immensely damage their image and affect their customers; if they could avoid it as easily as you imagine, why would they not?
You're assuming they care. And the answer to how much they care is: can this be used to further my (that is, an engineer's or manager's) promotion? If not, then no.
You are assuming hidden costs; I am assuming hidden incentives. It's not that they are stupid or incompetent, but bad incentives within the org can and do produce stupid outcomes.
If they used AWS, this would require no code and no maintenance: host the links in an S3 bucket and enable website redirects.
GCP doesn’t support that, but they could get pretty close using a cloud function - stick with the Python stdlib & SQLite or DBM for the mappings or use an Apache redirect map, and you’d have many years before you need to touch it again.
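(A rough sketch of that idea, using only the Python standard library and a SQLite file of short-code-to-target mappings; the database path and table name are assumptions for illustration, not anything Google actually uses:)

    # Tiny read-only redirect service: Python stdlib + SQLite only.
    # links.db is assumed to hold one table: links(code TEXT PRIMARY KEY, target TEXT).
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DB_PATH = "links.db"

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            code = self.path.lstrip("/")
            conn = sqlite3.connect(DB_PATH)
            row = conn.execute(
                "SELECT target FROM links WHERE code = ?", (code,)
            ).fetchone()
            conn.close()
            if row:
                self.send_response(301)  # permanent redirect, cacheable
                self.send_header("Location", row[0])
                self.end_headers()
            else:
                self.send_error(404, "unknown short link")

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()

Since the mapping is read-only, the whole thing could just as well sit behind a CDN and be forgotten about.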
> These types of announcements immensely damage their image and affect their customers; if they could avoid it as easily as you imagine, why would they not?
I believe they don't care. What are you gonna do, boycott them?
> These types of announcements immensely damage their image and affect their customers; if they could avoid it as easily as you imagine, why would they not?
What's happened here is that you've erroneously assumed there's a good reason. It's fun to hold nonsense like this up against testimony from the ministers and officials at the Horizon enquiry, all of whom can be relied upon to say that "with the benefit of hindsight" obviously what they did was wrong but insist that they were too stupid to realise there was a problem and thought they were powerless to do anything.
Remember on average the other humans are just as stupid and lazy as you are. Most often there aren't "good reasons" for what happened, if there are even reasons at all.
I wonder if there's a story here involving a URL shortener service having hidden costs? I can imagine there being something in the abuse space that makes it feel more expensive than just the hosting costs to operate.
Google the company was designed with really high coordination requirements, which has made the marginal coordination cost of adding a new engineer higher than the value they add.
Having products scale through time is an engineering problem, and they seem unable to recognize it as such.
As long as they don't understand this, they won't be able to expand their product offering (and thus revenue) significantly faster than their headcount.
Many years ago, there was an industry group (or maybe just a campaign; I can't remember the details) that promised to provide a protection/transfer service if one of their members shut down.
I tried to search the news but can't find any reference to it.
Like many have said, it's a shame they refuse to maintain the minimal requirements to keep the links working.
Google offers cloud services. It's like AWS saying they won't spare some EC2 instances to keep some links working. If Google knew how to use their own cloud products, they could deploy some instances, failover, and monitoring, leave it alone, and dogfood their own cloud products in the process.
I host my own URL shortening service. It's 2 data columns in SQLite and a few lines of JavaScript.
The reason I host my own is that Google specifically taught me not to trust the longevity of cloud-hosted services. So I didn't trust tinyurl.com or whatever to be there in the future. Ty Google for confirming the wisdom of that decision.
That is sad, especially because I think that it is not a service that would take that much effort to keep up.
I've seen things you people wouldn't believe... Attack ships on fire off the shoulder of Orion... I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain...
Not sure if this is a general remonstrance about Google not caring about permanent URLs (pURLs) or a very specific reference: In Knitting for beginners, 3rd edition (Imagine Publishing, 2015), the basic knitting techniques have a link to accompanying video demonstrations on Youtube, and they used Google's link shortener. Of course they cover the purl stitch, but the pURL for the purl video will now be broken by Google. For anyone googling this in the future, here's the redirect: