Has anyone also used Raindrop (a paid product) and have opinions/comparisons? And on the self-hosted side, how does this compare to Hoarder?
I’ve been considering switching from Raindrop to a self-hosted option, but while I like self-hosting, I’m also leaning towards just paying someone to handle this particular service for me.
exhilaration · 9m ago
I've never heard of Raindrop, and it looks cool, but I see the .ru in one of their screenshots -- are they based in Russia? Any concerns with doing business with a Russian company, in the context of sanctions etc.?
toomuchtodo · 29m ago
I pay for Raindrop, very useful to have someone else run it, minimal cost.
flashblaze · 32m ago
I have been using Raindrop and like it quite a bit
carlosjobim · 1h ago
I tried Raindrop, but it wasn't usable for me because it constantly logged me out.
regularjack · 1h ago
I also use Raindrop, but I've been looking at self-hosted alternatives, as Raindrop doesn't encrypt the data, so I can't use it for work stuff.
daniel31x13 · 2h ago
Hello everyone, I’m the main developer behind Linkwarden. Glad to see it getting some attention here!
Some key features of the app (at the moment):
- Text highlighting
- Full page archival
- Full content search
- Optional local AI tagging
- Sync with browser (using Floccus)
- Collaborative
Also, for anyone wondering, all features from the cloud plan are available to self-hosted users :)
browningstreet · 1h ago
Suggestion/request:
What I'd really love is a super compact "short-name only" view of links. Just words, not lines or galleries. For super-high-content views.
https://blog.linkwarden.app/releases/2.8#%EF%B8%8F-customiza...
Ahh, yes, you can reduce it to names with a lot of columns. In my personal ideal, I'd love to store a short name for a link and have no boxes. Personally, I've always wanted links to be like the tag cloud in Pinboard, and to have a page with multiple tags/categories.
I'd also love a separation of human tags and AI tags (even by base or stem), just in case they provided radically different views, but both were useful.
EDIT:
Just took a quick look at the documentation: is there a native or supported distinction between links that are like bookmarks, and links that are more content/articles/resources?
dikdok · 1h ago
> Full page archival
Does it grab the DOM from my browser as it sees it? Or is it a separate request? If so, how does it deal with authentication?
daniel31x13 · 1h ago
So there are different ways it archives a webpage.
It currently stores the full webpage as a single HTML file, a screenshot, a PDF, and a read-it-later view.
Aside from that, you can also send the webpages to the Wayback Machine to take a snapshot.
To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
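For a rough idea of what the multi-format capture step looks like, here's a minimal sketch using Puppeteer (illustrative only, not the actual implementation; it serializes the DOM after scripts run, but a true single-file capture would also inline images and CSS):

```typescript
// Sketch: capture a page as HTML, a full-page screenshot, and a PDF.
import puppeteer from "puppeteer";
import { writeFile } from "node:fs/promises";

async function archive(url: string, slug: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle2" });

  // Serialized DOM as rendered (external assets are not inlined here).
  await writeFile(`${slug}.html`, await page.content());
  await page.screenshot({ path: `${slug}.png`, fullPage: true });
  await page.pdf({ path: `${slug}.pdf` });

  await browser.close();
}
```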
dikdok · 1h ago
> To archive pages behind a login or paywall, you can use the browser extension, which captures an image of the webpage in the browser and sends it to the server.
Just an image? So no full text search?
yapyap · 1h ago
Very, very neat!
One question though: if the AI tagging is self-hostable as well, how taxing is it on the hardware? What would the minimum viable hardware be?
daniel31x13 · 1h ago
Thanks! A lightweight model like the phi3:mini-4k is enough for this feature.[1]
It’s worth mentioning that you can also use external providers like OpenAI and Anthropic to tag the links for you.
[1]: https://docs.linkwarden.app/self-hosting/ai-worker
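If you're curious what the local tagging loop might look like, here's a rough sketch against Ollama's /api/generate endpoint (the prompt wording and tag parsing are illustrative, not Linkwarden's actual code):

```typescript
// Sketch: ask a local Ollama model to suggest tags for a saved link.
async function suggestTags(title: string, excerpt: string): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "phi3:mini-4k",
      prompt: `Suggest up to 5 short topic tags, comma-separated, for:\n${title}\n${excerpt}`,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const { response } = await res.json();
  return response.split(",").map((t: string) => t.trim()).filter(Boolean);
}
```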
Is there any software that can provide verified, trusted archives of websites?
For example, we can go to the Wayback Machine at archive.org to not only see what a website looked like in the past, but prove it to someone (because we implicitly trust The Internet Archive). But the Wayback Machine has deleted sites when a site later changes its robots.txt to exclude it, meaning that old site REALLY disappears from the web forever.
The difficulty for a trusted archive solution is in proving that the archived pages weren't altered, and that the timestamp of the capture was not altered.
It seems like blockchain would be a big help, and would prevent back-dating future snapshots, but there seem to be a lot of missing pieces still.
Thoughts?
shrinks99 · 52m ago
Webrecorder's WACZ signing spec (https://specs.webrecorder.net/wacz-auth/latest) does some of this — authenticating the identity of who archived it and at what time — but the rest of what you're asking for (legitimacy of the content itself) is an unsolved problem as web content isn't all signed by its issuing server.
In some of the case studies Starling (https://www.starlinglab.org/) has published, they've published timestamps of authenticated WACZs to blockchains to prove that they were around at a specific time... More _layers_ of data integrity but not 100% trustless.
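For the timestamping half, the mechanics are simple: hash the finished archive and anchor that digest somewhere append-only (a blockchain transaction, a transparency log, a timestamping service). A minimal sketch of the fingerprinting step (the file name is a placeholder):

```typescript
// Sketch: fingerprint an archive so its existence at a point in time
// can later be proven by anchoring the digest externally.
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

async function digestArchive(path: string): Promise<string> {
  const bytes = await readFile(path);
  return createHash("sha256").update(bytes).digest("hex");
}

// digestArchive("capture.wacz").then(console.log);
```

Anchoring the digest proves the archive existed unmodified at anchor time; it still can't prove the capture faithfully reflects what the server sent, which is the unsolved part.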
ibaikov · 1h ago
Recently started self-hosting it. I like it. I tried Hoarder, but it was overcomplicated and consumed way more resources. Now it has MCP, so I might use it with n8n; we'll see.
A couple of improvements I'd like:
- Drag-and-drop link saving (see the sketch after this list).
- Better title import: if I add a Reddit link, it doesn't import the Reddit thread title; it uses Reddit's generic page title in Linkwarden ("Reddit - the heart of the internet"). Same goes for a few other websites, like GitLab.
- An MCP server.
- Resource usage optimization: while it is lighter than Karakeep/Hoarder, for me it consumes 500-950 MB of RAM, and I have only 500 links added.
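On drag-and-drop: in the web UI this would mostly be a small drop handler; a sketch (the /api/v1/links endpoint here is hypothetical):

```typescript
// Sketch: accept a dragged URL and POST it to the server.
const dropZone = document.getElementById("drop-zone")!;

dropZone.addEventListener("dragover", (e) => e.preventDefault());

dropZone.addEventListener("drop", async (e) => {
  e.preventDefault();
  // Browsers expose dragged links as "text/uri-list" (one URL per line).
  const url = e.dataTransfer?.getData("text/uri-list")?.split("\n")[0]?.trim();
  if (!url) return;
  await fetch("/api/v1/links", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url }),
  });
});
```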
sloped · 1h ago
This looks nice; I like how many of these tools have been surfacing. I recently started using https://readeck.org/, which aims to solve some of the same problems, and I really like it. It's much better than a "bookmark" tool for things like articles.
My two favorite parts of Readeck are:
- it provides an OPDS catalog of your saved content, so you can very easily read things on the e-book reader of your choice. I use KOReader on a Kindle and have really enjoyed reading my saved articles in the backyard after work.
- you can generate a share link. I have used this to share articles behind paywalls with friends and family, where before I was copying and pasting content into an email.
FireInsight · 3h ago
No experience with this yet, but I'm looking to upgrade from Linkding. The main features I'm looking forward to are syncing the bookmarks with the browser's native bookmarks through Floccus, and being able to make highlights on the articles I save.
fuzzy2 · 1h ago
Started using it a while back. Works rather well, even though some minor UX quirks exist. Self-hosting is easy, too, with Docker Compose. If you're in the market for a web-accessible bookmark manager, maybe give it a go!
I like Hoarder (Karakeep). It's got an API and an MCP server as well to play with now, locally and self-hosted. I'll check this out as well.
human_llm · 3h ago
This looks interesting. How feature-crippled is the self-hosted version?
dugite-code · 2h ago
Not at all, as far as I'm aware. I use Floccus to sync my bookmarks to it, and it does the job quite well.
xnx · 2h ago
I have yet to find anything with the effort-to-results ratio of Ctrl+S -> "Webpage, Single File (*.mhtml)". It even works on mobile.
FireInsight · 2h ago
Tagging, full-text search, page highlights, a nice UI... You might call that bloat; I don't. Besides, I could not find any equivalent of Ctrl+S for saving the webpage in Firefox on mobile.
xnx · 2h ago
> I could not find any equivalent of Ctrl+S for saving the webpage in Firefox on mobile.
True. There used to be an extension that enabled the hidden code path, but that stopped working years ago. I switched to Kiwi browser.
belter · 3h ago
As of this moment, this post has 4 points and 2 comments. How did it make it to number 3 on the HN front page?
A4ET8a8uTh0_v2 · 3h ago
Velocity. Obviously, I don't really know and am only speculating. Still, the project does look nice. I personally use ArchiveBox, but I'll admit this looks a lot more polished.
I understand an open-source project needs revenue to survive, but the reason this project grew so large is its self-hostable nature, and the push of the cloud offering is the opposite of that.
I really hope this isn't the first step towards enshittification...
ctxc · 1h ago
Nah, I just see this as a sustainable way to keep the project alive :)
https://www.linkace.org/ (my fave)
https://github.com/sissbruecker/linkding
https://github.com/jonschoning/espial
https://motd.co/2023/09/postmarks-launch/
https://betula.mycorrhiza.wiki/
https://linkhut.org/
https://readeck.org/en/