Google's shortened goo.gl links will stop working next month

215 mobilio 183 7/25/2025, 2:25:30 PM theverge.com ↗

Comments (183)

edent · 16h ago
About 60k academic citations about to die - https://scholar.google.com/scholar?start=90&q=%22https://goo...

Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...

And for what? The cost of keeping a few TB online and a little bit of CPU power?

An absolute act of cultural vandalism.

toomuchtodo · 16h ago
https://wiki.archiveteam.org/index.php/Goo.gl

https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)

How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

(edit: i see jaydenmilne commented about this further down thread, mea culpa)

progbits · 10h ago
They appear to be doing ~37k items per minute; with 1.6B remaining, that's about 43,000 minutes, or roughly 30 days. So that's just barely enough to finish in time.

Going to run the warrior over the weekend to help out a bit.

pentagrama · 13h ago
Thank you for that information!

I wanted to help and did that using VMware.

For curious people, here is what the UI looks like: there's a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab which shows the project's activity.

Project list: https://imgur.com/a/peTVzyw

Current project: https://imgur.com/a/QVuWWIj

jlarocco · 13h ago
IMO it's less Google's fault and more a crappy tech education problem.

It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?

And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.

justin66 · 12h ago
> It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.

Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.

dingnuts · 9h ago
Even normal HTTP URLs aren't great. If there was ever a case for content-addressable networks like IPFS it's this. Universities should be able to host this data in a decentralized way.
nly · 8h ago
CANs usually have complex hashy URLs, so you still have the compactness problem
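
For the curious, here's a toy sketch of content addressing in Python, with plain SHA-256 standing in for a real scheme like IPFS CIDs (which layer multihash/multibase encoding on top). The address is derived from the bytes themselves, so it survives any particular host going away, but compact it is not:

    import hashlib

    def content_address(data: bytes) -> str:
        # Same bytes -> same address, no matter which host serves them.
        return "sha256-" + hashlib.sha256(data).hexdigest()

    # A cited PDF would be addressed by a 64-hex-character digest,
    # which is exactly the compactness problem for print citations.
    print(content_address(b"bytes of the cited PDF"))
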
gmerc · 13h ago
Ahh classic free market cop out.
FallCheeta7373 · 12h ago
if the smartest among us publishing for academia cannot figure this out, then who will?
hammyhavoc · 1h ago
Not infrequently, someone being smart in one field doesn't necessarily mean they can solve problems in another.

I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.

kazinator · 12h ago
Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.

The authors just had their heads too far up their academic asses to have heard of this.

epolanski · 16h ago
Jm2c, but if your reference is a link to an online resource, that's borderline already (at any point the content can be changed or disappear).

Even worse if your resource is a shortened link by some other service, you've just added yet another layer of unreliable indirection.

whatevaa · 15h ago
Citations are citations, if it's a link, you link to it. But using shorteners for that is silly.
ceejayoz · 15h ago
It's not silly if the link is a couple hundred characters long.
IanCal · 14h ago
Adding an external service so you don’t have to store a few hundred bytes is wild, particularly within a pdf.
ceejayoz · 14h ago
It's not the bytes.

It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.

SR2Z · 13h ago
I find it amusing that you are complaining about not having a computer to click a link while glossing over the fact that you need a computer to use a link at all.

This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.

This kind of luddite behavior sometimes makes using this site exhausting.

jtuple · 12h ago
Perhaps times have changed, but when I was in grad school circa 2010, smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

Reading on paper was more comfortable than reading on a screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.

Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?

Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read at most 1-5 papers a day tops, which is small enough to just do on a computer screen (and I have less need to annotate, etc). Quite different than the 50-100 papers/week + deep analysis expected in academia.

Incipient · 5h ago
>Perhaps times have changed, but when I was in grad school circa 2010, smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!

ceejayoz · 13h ago
> I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs.

This is by no means a universal experience.

People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.

SR2Z · 12h ago
And how many of those people then proceed to type those links into their web browsers, shortened or not?

Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.

ceejayoz · 12h ago
> And how many of those people then proceed to type those links into their web browsers, shortened or not?

That probably depends on the link's purpose.

"The full dataset and source code to reproduce this research can be downloaded at <url>" might be deeply interesting to someone in a few years.

epolanski · 9h ago
So he has a computer and can click.

In any case a paper should not rely on ephemeral resources like internet links.

Have you ever tried to navigate to the errata of a computer science book? It's one single book, with one single link, and it's dead anyway.

JumpCrisscross · 8h ago
I’m unconvinced the researchers acted irresponsibly. If anything, a Google-shortened link looks—at first glance—more reliable than a PDF hosted god knows where.

There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs over why one untrustworthy provider is more untrustworthy than another is silly.

ycombinatrix · 7h ago
The Google shortened link just redirects you to the PDF hosted god knows where...
andrepd · 13h ago
I feel like all that is beside the point. People used goo.gl because they largely are not tech specialists and aren't really aware of link rot, or of a Google decision rendering those links inaccessible.
SR2Z · 13h ago
> People used goo.gl because they largely are not tech specialists and aren't really aware of link rot, or of a Google decision rendering those links inaccessible.

Anyone who is savvy enough to put a link in a document is well-aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore, the internet has accumulated plenty of dead links.

andrepd · 7h ago
Very much an xkcd.com/2501 situation
reaperducer · 9h ago
> This kind of luddite behavior sometimes makes using this site exhausting.

We have many paper documents from over 1,000 years ago.

The vast majority of what was on the internet 25 years ago is gone forever.

eviks · 42m ago
What a weird comparison. Do we have the vast majority of paper documents from 1,000 years ago?
epolanski · 9h ago
25?

Try going back by 6/7 years on this very website, half the links are dead.

leumon · 13h ago
which makes URL shorteners even more attractive for printed media, because you don't have to type as many characters manually
epolanski · 14h ago
Fix that at the presentation layer (PDFs and Word files etc support links) not the data one.
ceejayoz · 14h ago
Let me know when you figure out how to make a printed scientific journal clickable.
epolanski · 9h ago
Scientific journals should not rely on ephemeral data on the internet. It doesn't even matter how long the url is.

Just buy any scientific book and try to navigate to its own errata linked in the book. It's always dead.

diatone · 14h ago
Take a photo on your phone, OS recognises the link in the image, makes it clickable, done. Or, use a QR code instead
ceejayoz · 13h ago
jeeyoungk · 12h ago
This is the answer; it turns out that non-transformed links are the most generic data format, with no "compression" - QR codes or a third-party intermediary - needed.
eviks · 51m ago
> And for what? The cost of keeping a few TB online and a little bit of CPU power?

For the immeasurable benefits of educating the public.

zffr · 15h ago
For people wanting to include URL references in things like books, what’s the right approach to take today?

I’m genuinely asking. It seems like it’s hard to trust that any service will remain running for decades.

toomuchtodo · 15h ago
https://perma.cc/

It is built for the task, and assuming the worst-case scenario of a sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).

(https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for adhoc DOI registration, although I have not had time to research further)

ruined · 15h ago
perma.cc is an interesting project, thanks for sharing.

other readers may be specifically interested in their contingency plan

https://perma.cc/contingency-plan

Hyperlisk · 15h ago
perma.cc is great. Also check out their tools if you want to get your hands dirty with your own archival process: https://tools.perma.cc/
whoahwio · 15h ago
While Perma is a solution built specifically for this problem, and a good one at that, citing the might of the backing company is a bit ironic here
toomuchtodo · 15h ago
If Cloudflare provides the infra (thanks Cloudflare!), I am happy to have them provide the compute and network for the lookups (which, at their scale, is probably a rounding error), with the Internet Archive remaining the storage system of last resort. Is that different than the Internet Archive offering compute to provide the lookups on top of their storage system? Everything is temporary, intent is important, etc. Can always revisit the stack as long as the data exists on disk somewhere accessible.

This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.

whoahwio · 14h ago
This is much better positioned for longevity than google’s URL shortener, I’m not trying to make that argument. My point is that 10-15 years ago, when Google’s URL shortener was being adopted for all these (inappropriate) uses, its use was supported by a public opinion of Google’s ‘inevitability’. For Perma, CF serves a similar function.
toomuchtodo · 13h ago
Point taken.
edent · 15h ago
The full URL to the original page.

You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.

A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com), look at the path to hazard a guess at the metadata and contents, and - finally - look it up in an archive.

firefax · 15h ago
>The full URL to the original page.

I thought that was the standard in academia? I've had reviewers chastise me when I did not use wayback machine to archive a citation and link to that since listing a "date retrieved" doesn't do jack if there's no IA copy.

Short links were usually in addition to full URLS, and more in conference presentations than the papers themselves.

grapesodaaaaa · 13h ago
I think this is the only real answer. Shorteners might work for things like old Twitter, where characters were at a premium, but I would rather see the whole URL.

We’ve learned over the years that they can be unreliable, security risks, etc.

I just don’t see a major use-case for them anymore.

danelski · 15h ago
Real URL and save the website in the Internet Archive as it was on the date of access?
kazinator · 15h ago
The act of vandalism occurs when someone creates a shortened URL, not when they stop working.
djfivyvusn · 16h ago
The vandalism was relying on Google.
toomuchtodo · 16h ago
You'd think people would learn. Ah, well. Hopefully we can do better from lessons learned.
api · 16h ago
The web is a crap architecture for permanent references anyway. A link points to a server, not e.g. a content hash.

The simplicity of the web is one of its virtues but also leaves a lot on the table.

lubujackson · 2h ago
Truly, the most Googly of sunsets.
QuantumGood · 10h ago
When they began offering this, their rep for ending services was already so bad that I refused to consider goo.gl. It's amazing how many years now they have been introducing and then ending services with large user bases. Gmail being in "beta" for five years was, weirdly, a sign to me that they might stick with it.
justinmayer · 10h ago
In the first segment of the very first episode of the Abstractions podcast, we talked about Google killing its goo.gl URL obfuscation service and why it is such a craven abdication of responsibility. Have a listen, if you’re curious:

Overcast link to relevant chapter: https://overcast.fm/+BOOFexNLJ8/02:33

Original episode link: https://shows.arrowloop.com/@abstractions/episodes/001-the-r...

crossroadsguy · 14h ago
I have always struggled with this. If I buy a book I don’t want an online/URL reference in it. Put the book/author/isbn/page etc. Or refer to the magazine/newspaper/journal/issue/page/author/etc.
BobaFloutist · 14h ago
I mean preferably do both, right? The URL is better for however long it works.
SoftTalker · 14h ago
We are long, long past any notion that URLs are permanent references to anything. Better to cite with title, author, and publisher so that maybe a web search will turn it up later. The original URL will almost certainly be broken after a few years.
SirMaster · 14h ago
Can't someone just go through programmatically right now and build a list of all these links and where they point to? And then put up a list somewhere that everyone can go look up if they need to?
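
(For what it's worth, resolving a single code is just a matter of reading the redirect's Location header. A rough Python sketch, ignoring rate limits and the login interstitials mentioned elsewhere in this thread; the hard part is the sheer size of the keyspace, which is what ArchiveTeam is grinding through:)

    import urllib.request, urllib.error

    class NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None  # refuse to follow; we only want the Location header

    opener = urllib.request.build_opener(NoRedirect)

    def resolve(code):
        # Returns the target of one goo.gl code, or None if it doesn't resolve.
        try:
            opener.open("https://goo.gl/" + code, timeout=10)
        except urllib.error.HTTPError as e:
            return e.headers.get("Location")  # set for 301/302, absent for 404
        return None
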
jeffbee · 15h ago
While an interesting attempt at an impact statement, 90% of the results on the first two pages for me are not references to goo.gl shorteners, but are instead OCR errors or just gibberish. One of the papers is from 1981.
asdll · 5h ago
> An absolute act of cultural vandalism.

It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.

nikanj · 14h ago
The cost of dealing with and supporting an old codebase, instead of burning it all and releasing a written-from-scratch replacement next year
mrcslws · 16h ago
From the blog post: "more than 99% of them had no activity in the last month" https://developers.googleblog.com/en/google-url-shortener-li...

This is a classic product data decision-making fallacy. The right question is "how much total value do all of the links provide", not "what percent are used".

bayindirh · 16h ago
> The right question is "how much total value do all of the links provide", not "what percent are used".

Yes, but it doesn't bring the sweet promotion home, unfortunately. Ironically, if 99% of them don't see any traffic, you can scale back the infra, run it on 2 VMs, and make sure a single person can keep it up as a side quest, just for fun (but, of course, pay them for their work).

This beancounting really makes me sad.

quesera · 15h ago
Configuring a static set of redirects would take a couple of hours to set up, and require literally zero maintenance forever.

Amazon should volunteer a free-tier EC2 instance to help Google in their time of economic struggles.
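
As a sketch of how little is involved, in Python stdlib terms (a hypothetical in-memory dict standing in for the dumped goo.gl mapping; at real scale you'd back it with a read-only on-disk index instead):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    REDIRECTS = {
        "/abc123": "https://example.org/some/long/path",  # hypothetical entry
    }

    class Redirector(BaseHTTPRequestHandler):
        def do_GET(self):
            target = REDIRECTS.get(self.path)
            if target:
                self.send_response(301)  # permanent redirect
                self.send_header("Location", target)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), Redirector).serve_forever()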

bayindirh · 15h ago
This is what I mean, actually.

If they’re so inclined, Oracle has an always free tier with ample resources. They can use that one, too.

socalgal2 · 14h ago
If they wanted the sweet promotion they could add an interstitial. Yes, people would complain, but at least the old links would not stop working.
ahstilde · 16h ago
> just for fun (but, of course, pay them for their work).

Doing things for fun isn't in Google's remit

kevindamm · 16h ago
Alas, it was, once upon a time.
morkalork · 15h ago
Then they shouldn't have offered it as a free service in the first place. It's like that discussion about how Google, in all its 2-ton ADHD gorilla glory, will enter an industry, offer a (near) free service or product, decimate all competition, then decide it's not worth it and shut down, leaving behind a desolate crater of ruined businesses and angry, abandoned users.
jsperson · 13h ago
I’m still sore about reader. Gap has never been filled for me.
ceejayoz · 16h ago
It used to be. AdSense came from 20% time!
HPsquared · 15h ago
Indeed. I've probably looked at less than 1% of my family photos this month but I still want to keep them.

sltkr · 15h ago
I bet 99% of URLs that exist on the public web had no activity last month. Might as well delete the entire WWW because it's obviously worthless.
firefax · 15h ago
> "more than 99% of them had no activity in the last month"

Better to have a short URL and not need it, than need a short URL and not have it IMO.

fizx · 15h ago
Don't be confused! That's not how they made the decision; it's how they're selling it.
esafak · 15h ago
So how did they decide?
nemomarx · 15h ago
I expect it showed up as a cost on a budget sheet, and then an analysis was done about the impact of shutting it down
sltkr · 15h ago
You can't get promoted at Google for not changing anything.
SoftTalker · 14h ago
From Google's perspective, the question is "How many ads are we selling on these links" and if it's near zero, that's the value to them.
esafak · 15h ago
What fraction of indexed Google sites, Youtube videos, or Google Photos were retrieved in the last month? Think of the cost savings!
nomel · 15h ago
Youtube already does this, to some extent, by slowly reducing the quality of your videos if they're not accessed frequently enough.

Many videos I uploaded in 4k are now only available in 480p, after about a decade.

handsclean · 15h ago
I don’t think they’re actually that dumb. I think the dirty secret behind “data driven decision making” is managers don’t want data to tell them what to do, they want “data” to make even the idea of disagreeing with them look objectively wrong and stupid.
HPsquared · 15h ago
It's a bit like the difference between "rule of law" and "rule by law" (aka legalism).

It's less "data-driven decisions", more "how to lie with statistics".

FredPret · 15h ago
"Data-driven decision making"
JimDabell · 15h ago
Cloudflare offered to keep it running and were turned away:

https://x.com/elithrar/status/1948451254780526609

Remember this next time you are thinking of depending upon a Google service. They could have kept this going easily but are intentionally breaking it.

fourseventy · 15h ago
Google killing their domains service was the last straw for me. I started moving all of my stuff off of Google since then.
nomel · 15h ago
I'm still shocked that my Google Voice number still functions after all these years. It makes me assume its main purpose is actually to be a honeypot of some sort, maybe for spam call detection.
joshstrange · 15h ago
Because IIRC it’s essentially completely run by another company (I want to say Bandwidth?) and, again, my memory might be fuzzy, but it originally came from the acquisition of a company called Grand Central.

My guess is it just keeps chugging along with little maintenance needed by Google itself. The UI hasn’t changed in a while from what I’ve seen.

hnfong · 14h ago
Another shocking story to share.

I have a tiny service built on top of Google App Engine that (only) I use personally. I made it 15+ years ago, and the last time I deployed changes was 10+ years ago.

It's still running. I have no idea why.

coryrc · 14h ago
It's the most enterprise-y and legacy thing Google sells.
throwyawayyyy · 15h ago
Pretty sure you can thank the FCC for that :)
mrj · 15h ago
Shhh don't remind them
kevin_thibedeau · 14h ago
Mass surveillance pipeline to the successor of room 641A.
thebruce87m · 13h ago
> Remember this next time you are thinking of depending upon a Google service.

Next time? I guess there’s a wave of new people that haven’t learned that lesson yet.

jaydenmilne · 16h ago
ArchiveTeam is trying to brute force the entire URL space before it's too late. You can run a VirtualBox VM/Docker image (ArchiveTeam Warrior) to help (unique IPs are needed). I've been running it for a couple of months and found a million.

https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

pimlottc · 15h ago
Looks like they have saved 8000+ volumes of data to the Internet Archive so far [0]. The project page for this effort is here [1].

0: https://archive.org/details/archiveteam_googl

1: https://wiki.archiveteam.org/index.php/Goo.gl

localtoast · 15h ago
Docker container FTW. Thanks for the heads-up - this is a project I will happily throw a Hetzner server at.
wobfan · 15h ago
Same here. I am genuinely asking myself what for, though. I mean, they'll receive a list of the linked domains, but what will they do with that?
localtoast · 15h ago
It's not only goo.gl links they are actively archiving. Take a look at their current tasks.

https://tracker.archiveteam.org/

fragmede · 14h ago
save it, forever*.

* as long as humanly possible, as is archive.org's mission.

hadrien01 · 9h ago
After a while I started to get "Google asks for a login" errors. Should I just keep going? There's no indication on the ArchiveTeam wiki of what I should do.
ojo-rojo · 15h ago
Thanks for sharing this. I've often felt that the ease with which we can erase digital content makes our time period susceptible to becoming a digital dark age for archaeologists studying history a few thousand years from now.

Us preserving digital archives is a good step. I guess making hard copies would be the next step.

AstroBen · 15h ago
Just started, super easy to set up
cpeterso · 16h ago
Google’s own services generate goo.gl short URLs (Google Maps generates https://maps.app.goo.gl/ URLs for sharing links to map locations), so I assume this shutdown only affects user-generated short URLs. Google’s original announcement doesn’t explicitly say so, but it is carefully worded to specify that short URLs of the “https://goo.gl/* format” will be shut down.

Google’s probably trying to stop goo.gl URLs from being used for phishing, but doesn’t want to admit that publicly.

growthwtf · 14h ago
This actually makes the most logical sense to me, thank you for the idea. I don't agree with the way they're doing it of course but this probably is risk mitigation for them.
jedberg · 16h ago
I have only given this a moment's thought, but why not just publish the URL map as a text file or SQLite DB? So at least we know where they went? I don't think it would be a privacy issue since the links are all public?
DominikPeters · 16h ago
It will include many URLs that are semi-private, like Google Docs that are shared via link.
ryandrake · 15h ago
If some URL is accessible via the open web, without authentication, then it is not really private.
bo1024 · 15h ago
What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.
prophesi · 15h ago
Obfuscation may hint that it's intended to be private, but it's certainly not authentication. And the keyspace for these goo.gl short URLs is much smaller than that of a 64-byte alphanumeric code.
hombre_fatal · 15h ago
Sure, but you have to make executive decisions on the behalf of people who aren't experts.

Making bad actors brute force the key space to find unlisted URLs could be a better scenario for most people.

People also upload unlisted Youtube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well share content that they thought was private.

bo1024 · 15h ago
I'm not seeing why there's a clear line where GET cannot be authentication but POST can.
prophesi · 14h ago
Because there isn't a line? You can require auth for any of those HTTP methods. Or not require auth for any of them.
wobfan · 10h ago
I mean, going by that argument a username + password is also just obfuscation. Generating a unique 64-byte code is even more secure than that, IF it's handled correctly.
charcircuit · 15h ago
Then use something like argon2 on the keys, so you have to spend a long time to brute force them all similar to how it is today.
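
A sketch of that scheme in Python, with hashlib.scrypt from the standard library standing in for argon2 (the salt has to be fixed and public so lookups stay deterministic; salt and entries here are hypothetical):

    import hashlib

    SALT = b"googl-dump-2025"  # hypothetical fixed, public salt

    def slow_key(code):
        # Each guess costs real CPU and memory (the scrypt work factors),
        # so enumerating the whole keyspace stays expensive.
        return hashlib.scrypt(code.encode(), salt=SALT,
                              n=2**14, r=8, p=1, dklen=32).hex()

    # Publish {slow_key(code): target} instead of {code: target}.
    # Anyone who already knows a code can still resolve it:
    published = {slow_key("abc123"): "https://example.org/paper.pdf"}
    print(published.get(slow_key("abc123")))

Anyone holding a code pays one slow hash to resolve it; an attacker enumerating billions of candidate codes pays that cost billions of times.
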
high_na_euv · 16h ago
So exclude them
ceejayoz · 16h ago
How?

How will they know a short link to a random PDF on S3 is potentially sensitive info?

Nifty3929 · 15h ago
I'd rather see it as a searchable database, which I would think is super cheap and no maintenance for Google, and avoids these privacy issues. You can input a known goo.gl and get its real URL, but can't just list everything out.
growt · 15h ago
And then output the search results as a 302 redirect and it would just be continuing the service.
devrandoom · 16h ago
Are they all public? Where can I see them?
jedberg · 16h ago
You can brute force them. They don't have passwords. The point is the only "security" is knowing the short URL.
Alifatisk · 16h ago
I don't think so, but you can find the indexed urls here https://www.google.com/search?q=site%3A"goo.gl" - it's about 9.6 million links. And those are just the ones that got indexed; there should be way more out there.
sltkr · 14h ago
I'm surprised Google indexes these short links. I expected them to resolve them to their canonical URL and index that instead, which is what they usually do when multiple URLs point to the same resource.
ElijahLynn · 16h ago
OMFG - Google should keep these up forever. What a hit to trust. Trust with Google was already bad for everything they killed, this is another dagger.
phyzix5761 · 15h ago
People still trust Google?
spankalee · 13h ago
As an ex-Googler, the problem here is clear and common, and it's not the infrastructure cost: it's ownership.

No one wants to own this product.

- The code could be partially frozen, but large scale changes are constantly being made throughout the google3 codebase, and someone needs to be on the hook for approving certain changes or helping core teams when something goes wrong. If a service it uses is deprecated, then lots of work might need to be done.

- Every production service needs someone responsible for keeping it running. Maybe an SRE, though many smaller teams don't have their own SREs and manage the service themselves.

So you'd need some team, some full reporting chain all the way up, to take responsibility for this. No SWE is going to want to work on a dead product where no changes are happening, and no manager is going to care about it. No director is going to want to put staff there rather than on a project that's alive. No VP sees any benefit here - there are only costs and risks.

This is kind of the Reader situation all over again (except for the fact that a PM with decent vision could have drastically improved and grown Reader, IMO).

This is obviously bad for the internet as a whole, and I personally think that Google has a moral obligation to not rug pull infrastructure like this. Someone there knows that critical links will be broken, but it's in no one's advantage to stop that from happening.

I think Google needs some kind of "attic" or archive team that can take on projects like this and make them as efficiently maintainable in read-only mode as possible. Count it as good-will marketing, or spin it off to google.org and claim it's a non-profit and write it off.

Side note: a similar, but even worse situation for the company is the Google Domains situation. Apparently what happened was that a new VP came into the org that owned it and just didn't understand the product. There wasn't enough direct revenue for them, even though the imputed revenue to Workspace and Cloud was significant. They proposed selling it off and no other VPs showed up to the meeting about it with Sundar so this VP got to make their case to Sundar unchallenged. The contract to sell to Squarespace was signed before other VPs who might have objected realized what happened, and Google had to buy back parts of it for Cloud.

gsnedders · 5h ago
To some extent, it's cases like this which show the real fragility of everything existing as a unified whole in google3.

While maintenance and ownership would clearly still be a major problem, one could easily imagine that deploying something similar (especially read-only) on GCP's Cloud Run and Bigtable products would be less work to maintain, as you're not chasing anywhere near such a moving target.

rs186 · 11h ago
Many good points, but if you don't mind me asking: if you were at Google, would you be willing to be the lead of that archive team, knowing that you'll be stuck at this position for the next 10 years, with the possibility of your team being downsized/eliminated when the wind blows slightly in the other direction?
spankalee · 6h ago
Definitely a valid question!

Myself, no, for a few reasons: I mainly work on developer tools, I'm too senior for that, and I'm not that interested.

But some people are motivated to work on internet infrastructure and would be interested. First, you wouldn't be stuck for 10 years; that's not how Google works (and you could of course quit). You're expected to stay with a team for a minimum of 18 months, and after that you can transfer away. A lot of junior devs don't care that much where they land, and the archive team would have to be responsible for more than just the link shortener, so it might be interesting to care for several services from top to bottom. SWEs could be compensated for rotating onto the archive team, and/or it could be part-time.

I think the harder thing is getting management buy-in, even from the front-line managers.

romaniv · 12h ago
URL shorteners were always a bad idea. At the rate things are going, I'm not sure people in a decade or two won't say the same thing about URLs and the Web as a whole. The fact that there is no protocol-level support for archiving, versioning, or even client-side replication means that everything you see on the Web right now has an overwhelming probability of permanently disappearing in the near future. This is an astounding engineering oversight for something that's basically the most popular communication system and medium in the world and in history.

Also, it's quite conspicuous that 30+ years into this thing browsers still have no built-in capacity to store pages locally in a reasonable manner. We still rely on "bookmarks".

davidczech · 15h ago
I don't really get it, it must cost peanuts to leave a static map like this up for the rest of Google's existence as a company.
nikanj · 14h ago
There are two things that are real torture to Google dev teams: 1) being told a product is complete and needs no new features or changes, and 2) being made to work on legacy code.
hinkley · 15h ago
What’s their body count now? Seems like they’ve slowed down the killing spree, but maybe it’s just that we got tired of talking about them.
theandrewbailey · 15h ago
hinkley · 14h ago
Oh look it’s been months since they killed a project!
codyogden · 10h ago
Because there's not much left to kill.
cyp0633 · 16h ago
The maintainer of Compiler Explorer tried to collect the public shortlinks and do the redirection themselves:

Compiler Explorer and the Promise of URLs That Last Forever (May 2025, 357 points, 189 comments)

https://news.ycombinator.com/item?id=44117722

krunck · 15h ago
Stop MITMing your content. Don't use shorteners. And use reasonable URL patterns on your sites.
Cyan488 · 15h ago
I have been using a shortening service with my own domain name - it's really handy, and I figure that if they go down I could always manually configure my own DNS or spin up some self-hosted solution.
musicale · 15h ago
hinkley · 15h ago
That needs a chart.
pentestercrab · 15h ago
There seems to have been a recent uptick in phishers using goo.gl URLs. Yes, even without new URLs being accepted: they register expired domains that old short links still point to.
ccgreg · 8h ago
Common Crawl's count of unique goo.gl links is approximately 10 million. That's in our permanent archive, so you'll be able to consult them in the future.

No search engine or crawler person will ever recommend using a shortener for any reason.

pluc · 16h ago
Someone should tell Google Maps
david422 · 14h ago
Somewhat related - I wanted to add short URLs to a project of mine. I was looking around at a bunch of URL shorteners, and then realized it would be pretty simple to create my own. It's my content pointed to by my own service, so I don't have to worry about 3rd-party content or other services going down.
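
The core really is small. A minimal sketch of what such a service stores and serves (names and parameters illustrative):

    import secrets, sqlite3

    db = sqlite3.connect("shortlinks.db")
    db.execute("CREATE TABLE IF NOT EXISTS links (code TEXT PRIMARY KEY, url TEXT)")

    def shorten(url):
        while True:
            code = secrets.token_urlsafe(5)  # ~7 URL-safe chars
            try:
                with db:
                    db.execute("INSERT INTO links VALUES (?, ?)", (code, url))
                return code
            except sqlite3.IntegrityError:
                pass  # rare collision: draw a new code

    def expand(code):
        row = db.execute("SELECT url FROM links WHERE code = ?",
                         (code,)).fetchone()
        return row[0] if row else None

The HTTP layer on top is just a 301 to whatever expand() returns.
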
Brajeshwar · 16h ago
What will it really cost for Google (each year) to host whatever was created, as static files, for as long as possible?
malfist · 16h ago
It'd probably cost a couple tens of dollars, and Google is simply too poor to afford that these days. They've spent all their money on AI and have nothing left
rsync · 11h ago
A reminder that the "Oh By"[1] everything-shortener not only exists but can be used as a plain old URL shortener[2].

Unlike the google URL shortener, you can count on "Oh By" existing in 20 years.

[1] https://0x.co

[2] https://0x.co/hnfaq.html

xutopia · 14h ago
Google is making it harder and harder to depend on their software.
christophilus · 14h ago
That’s a good thing from my perspective. I wish they’d crush YouTube next. That’s the only Google IP I haven’t been able to avoid.
andrii9 · 15h ago
Ugh, I used to use https://fuck.it for short links too. Still a legendary domain, though.
pkilgore · 15h ago
Google probably spends more money a month on coffee creamer for a single conference room than what it would take to preserve this service.
throwaway81523 · 7h ago
Cartoon villains. That's what they are.
gedy · 15h ago
At least they didn't release 2 new competing shorteners, d.uo or re.ad or the like, and expect you to migrate
micromacrofoot · 16h ago
This is just being a poor citizen of the web, no excuses. Google is a 2 trillion dollar company, keeping these links working indefinitely would probably cost less than what they spend on homepage doodles.
charlesabarnes · 15h ago
Now I'm wondering why Chrome changed its behavior to use share.google links if this was the inevitable outcome
mymacbook · 9h ago
Why is everyone jumping on the blame-the-victims bandwagon?! This is not the fault of the users, whether they were scientists publishing papers or members of the general public sharing links. This is absolutely 100% on Alphabet/Google.

When you blame your customer, you have failed.

eviks · 23m ago
They weren't customers, since they didn't buy anything. And yes, as sweet as "free" is, it is the users' fault for expecting free to last forever
ChrisArchitect · 15h ago
Discussion on the source from 2024: https://news.ycombinator.com/item?id=40998549
ChrisArchitect · 15h ago
Noticed recently that on some Google properties with Share buttons, it's generating share.google links now instead of goo.gl.

Is that the same shortening platform running it?

ourmandave · 16h ago
A comment said they stopped making new links and announced back in 2018 it would be going away.

I'm not a google fanboi and the google graveyard is a well known thing, but this has been 6+ years coming.

goku12 · 15h ago
For one, not enough people seem to be aware of it. They don't seem to have given that announcement the importance and effort it deserved. Secondly, I can't say that they have a good migration plan when shutting down their services. People scrambling like this to back up the data is rather common these days. And finally, this isn't a service that can be so easily replaced. Even if people knew that it was going away, there would be short links that they don't remember but that are important nevertheless. Somebody gave an example above - citations in research papers. There isn't much thought given to the consequences when decisions like this are taken.

Granted that it was a free service and Google is under no obligation to keep it going. But if they were going to be so casual about it, they shouldn't have offered it in the first place. Or perhaps, people should take that lesson instead and spare themselves the pain.

pfdietz · 15h ago
Once again we are informed that Google cannot be trusted with data in the long term.
fnord77 · 15h ago
quesera · 4h ago
From the 2018 announcement:

> URL Shortener has been a great tool that we’re proud to have built. As we look towards the future, we’re excited about the possibilities of Firebase Dynamic Links

Perhaps relatedly, Google is shutting down Firebase Dynamic Links too, in about a month (2025-08-25).

insane_dreamer · 16h ago
the lesson? never trust industry
Bluestein · 15h ago
Another one for the Google [G]raveyard.-
lrvick · 15h ago
Yet another reminder to never trust corpotech to be around long term.