Show HN: I rewrote my Mac Electron app in Rust
TL;DR: rebuilding in Rust was the right move.
So we rewrote the app with Rust and Tauri and here are the results:
- App size is 83% smaller: 1GB → 172MB
- DMG installer is 70% smaller: 232MB → 69.5MB
- Indexing files is faster: a 38-minute video now indexes in ~3 minutes instead of 10-14 minutes
- Overall more stability (the old app used to randomly crash)
The original version worked, but it didn't perform well when you tried indexing thousands of images or large videos. We lost a lot of time struggling to optimize Electron’s main-renderer process communication and ended up with a complex worker system to process large batches of media files.
For months we wrestled with indecision about continuing to optimize the Electron app vs. starting a full rebuild in Swift or Rust. The main thing holding us back was that we hadn’t coded in Swift in almost 10 years and we didn’t know Rust very well.
What finally broke us was when users complained the app crashed their video calls just running in the background. I guess that's what happens when you ship an app with Chromium that takes up 200MB before any application code.
Today the app still uses CLIP for embeddings and Redis for vector storage and search, except Rust now handles the image and video processing pipeline and all the file I/O to let users browse their entire machine, not just indexed files.
For the UI, we decided to rebuild it from scratch instead of porting over the old UI. This turned out well because it resulted in a cleaner, simpler UI after living with the complexity of the old version.
The trickiest part of the migration was learning Rust. LLMs definitely help, but the Rust/Tauri community just isn’t as mature compared to Electron. Bundling Redis into the app was a permissioning nightmare, but I think our solution with Rust handles this better than what we had with Electron.
All in, the rebuild took about two months and still needs some more work to reach full parity with the Electron version, but the core functionality of indexing and searching files is way more performant than before, and that made it worth the time. Sometimes you gotta throw away working code to build the right thing.
AMA about Rust/Tauri migration, Redis bundling nightmares, how CLIP embeddings work for local semantic search, or why Electron isn't always the answer.
It looks like your UI needs are pretty simple while computation is complex so the extra QA tradeoff would still be worth it for you. I'm just wondering if my experience was unusual or if rendering differences are as common as they felt to me.
Also, did you go Tauri 2.0 or 1.0? 2.0 released its first stable release while I was mid-stream on v1, and migration was a nightmare/documentation was woefully inadequate. Did they get the docs sorted out?
Polyfills fix most of the things, and we are running automated end-to-end tests on Linux, which catch most of the issues.
IMO the most difficult thing is figuring out how far behind users are on their webview version, mostly on Linux and macOS. Windows has done things right with their WebView2 implementation.
And the performance of webkitgtk is horrible on Linux.
The main drawback, of course, is that it ships a browser with every app.
On the contrary if it's a large app that the user spends lots of time in, then the performance overhead might well be worth it for the user.
Imagine in the first case that it requires a base load of 10 units of energy to run and gives 2 units of output, while in the second it still costs 10 units of base load energy, but now it gives 100 units of output. The base load becomes relatively irrelevant.
TBH, a lightweight polyfill for most system webviews would be a refreshing change from all the SPA frameworks out there.
Edit: It looks like Tauri uses the following platform webview features.
https://github.com/tauri-apps/wry?tab=readme-ov-file#platfor...
More on the stack and our initial issues can be read here: https://kreya.app/blog/how-we-built-kreya/#cross-platform-gu... (from 2021)
With the Electron version of the app, we had issues running our bundled binaries on Intel Macs. That caused us so many headaches that we decided, for the rebuild on Tauri, to focus on one platform first (Apple silicon Macs) before supporting other platforms.
We went with Tauri 1.4 and no issues so far. Will have to check out the docs for 2.0 migration and see what that looks like.
In particular, rendering and crashing issues specific to Linux have been blockers, but Tauri 1.x also has other rendering issues on Linux that 2.0 fixed. There's little to no guidance on what's causing the stability and new rendering problems or how to fix them.
The app I worked on was a launcher that installed and managed content for an app, and the launcher invoked the app through command-line flags. Those flags arbitrarily fail to be passed in Tauri 1.x but work as expected in Tauri 2.x, but nobody we asked about it knows why.
My app needs wysiwyg editors, and JS is full of them.
Would you have any webpage or product info, possibly with screenshots?
We're building an in-house DCC with egui so I'm curious.
I can't remember why I wanted to migrate to 2.0 now, but there was a nice-to-have that I couldn't do in 1.4. I ended up abandoning the 2.0 migration after a slew of cryptic errors, took a step back, and decided I'd be better off using Electron for my project. My app is at heart a rich UI text editor and none of the computation is that expensive. With all the value add coming from the interface, optimizing for consistency there feels right.
With Electron's UI powered by the same browser across platforms, you end up with a much more consistent experience. Makes sense to optimize for that.
This is our #1 frustration with Tauri. The OS-provided system webviews are not stable, repeatable, consistent platforms to build upon.
Tauri decided that a key selling point of their platform was that Tauri builds won't bundle a browser runtime with your application. Instead, you wind up with whatever your operating system's browser runtime is. Each OS gets a different runtime.
Sounds nice on paper, but that has turned into a massive headache for us.
Safari and Edge have super finicky non-standard behavior, and it sucks. Different browser features break frequently. You're already operating in such a weird way between the tight system sandboxing and CORS behaviors (different between each browser), the subtle differences are death by a thousand cuts. And it never seems to stop stacking up. These aren't small CSS padding issues, but rather full-blown application behavior breakages. Even "caniuse.com" is wrong about the compatibility matrix with built-in web views.
To be fair, we're using advanced browser features. Animation, 2D contexts, trying to do things like pointer lock. But these are all examples of things that are extremely different between each web view.
All of this has doubled (quadrupled - with dev and prod builds behaving so differently! - but that's another story) the amount of physical testing we have to do. It takes so much time to manually test and ship. When we were building for the web, this wasn't an issue even if people used different browsers. The webviews have incredibly different behavior than web browsers.
Their rationale for using OS-provided system webviews instead of a bundled runtime baked into the installer at build time is that it saves space. But in reality all it has done is create developer frustration. And waste so much freaking time. It's the single biggest time sink we have to deal with right now.
We were sold on Tauri because of Rust, but the system browser runtime is just such a bad decision. A self-imposed shotgun wound to the chest.
The Tauri folks have heard these complaints, and unfortunately their approach to solving it is to put Servo support on the roadmap. That's 1000% not the right fix. Servo is not even a production-ready platform. We just want Chrome.
Please just let us bundle a modern chrome with our apps. It's not saving anyone any headache with smaller programs and installer sizes. Games are already huge and people tolerate them. Lots of software is large. It's accepted, it's okay, it's normal. We have a lot of space, but we don't have a lot of time. That's the real trade off.
I want to use Rust. I want to use Chrome.
I hope the Tauri devs are reading this. It's not just from me. This is the general community consensus.
Built-in webviews are not the selling point for Tauri. Rust is.
I tried to use Bevy (since we also use 3D) and that wasn't ready for prime time.
I thought about Iced and Imgui and several other Rust frameworks, but given our experience with Bevy we shied away from it.
We figured we'd be able to move faster and rely on a lot of existing tooling. That's been true for the most part.
So use Electron and FFI, it's not that hard
please, no.
I wish software companies had to pay for the hardware they require of their users; then we would have devs using Rust instead of JS and optimizing with ASM just to save fractions of a cent per instance. And we wouldn't see companies like MS kill well-designed, performant native apps in favor of an Electron app.
You were wise. That's the biggest issue plaguing the project right now.
> curious about Rust integration though
Tauri is written in 100% native Rust, so you write Rust for the entire application backend. It's like a framework. You write eventing and handlers and whatever other logic you want in Rust and cross-talk to your JavaScript/TypeScript frontend.
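To make that concrete, here's a minimal sketch of the command pattern being described, assuming Tauri 1.x (the names are illustrative, not from the app under discussion):

```rust
// src-tauri/src/main.rs: a Rust function exposed to the JS/TS frontend.
#[tauri::command]
fn file_size(path: String) -> Result<u64, String> {
    // Heavy lifting (file I/O, processing) stays on the Rust side.
    std::fs::metadata(&path)
        .map(|m| m.len())
        .map_err(|e| e.to_string())
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![file_size])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```

The frontend then calls `invoke('file_size', { path })` via `@tauri-apps/api` and gets a promise back.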
It feels great working in Rust, but the webviews kill it. They're inferior browsers and super unlike one another.
If Tauri swapped OS webviews for Chromium, they'd have a proper Electron competitor on their hands.
I don’t quite understand why you have that issue in the first place. The fact that they use the system webview is front and center on their website. It’s like you decided to use a fork because of the decorations on the back, and now complain that it’s pointy and the developers should just make it a spoon instead.
My read on it is that they didn’t understand the implications of using system webviews.
And possibly they expected Tauri would insulate them from cross-system differences without a lot of exploration.
Tauri needs a big fat warning label.
> This year we've got a lot of exciting innovations in store for you, like CEF and SERVO based webviews...
From their Discord.
HN discussion: https://news.ycombinator.com/item?id=43518462
I've worked on large consumer-facing web-apps where we had a dedicated QA team (and/or contracting firm) that runs visual regression testing on multiple platforms and browser versions. As a solo developer, I have no interest in being that team for my hobby project. So the tradeoff with Tauri for me was "accept that I will ship obvious UI bugs" vs "accept that I will ship a bloated binary."
Reading anecdata on forums, it seems like the only people who get up in arms over an extra 200MB are HN readers, and my app isn't really targeted at them.
It has always been a fallacy though, as with CSS the end result can depend on the DPI scaling and the size of the display (unless you make sure it doesn't, but then you need to test different setups to be sure).
I never managed to find a set of CSS properties which made it look good in Chrome tho. And if it was a more serious project I'd probably have used SVGs instead of Unicode characters.
I think it's good to be wary of overly sensitive advice that regular users don't care about. But would a regular user realize they have 10 electron apps running and their ram is maxed out all the time?
The argument against Electron isn't just a single bloated binary, but everyone shipping an app that uses way more RAM than necessary.
In other circumstances too, though, it's not great UX to demand your users quit your app once they're done with it because it eats too many resources just idling in the background. It's an issue I have with both Electron, where idle apps waste tonnes of RAM, and with many Rust UI frameworks, where an immediate-mode architecture means they'll be consuming CPU in the background.
NSApplicationDelegate's -(BOOL)applicationShouldTerminateAfterLastWindowClosed:(NSApplication *)sender; method exists so an application can, uhh, automatically terminate after the last window closes.
https://developer.apple.com/documentation/appkit/nsapplicati...
It's not 100% consistent but if you look at Apple's applications based on single window (calculator, system preferences, etc), closing the window quits the app.
I can’t find an obvious reference for when Apple started changing this, but it seems related to the background app killing that is done now as well. I’m still not sure how I feel about it, but historically that wasn’t common for Mac apps.
Nowadays, apps closing themselves or being closed by the OS automatically is reasonable in a lot of cases, but Electron apps tend to hit the cases where it still is valuable to operate with the classic NeXT/OS X document-based app paradigm.
Take pride in your work and respect the people using it.
I think the person you're responding to didn't actually measure if their claim is true.
Tauri however, is.
It's like developing sophisticated websites for IE6-era web, with ActiveX and Java applets and this new "ajax" thing on the horizon that sure sounds nice but it'll be a decade before you can actually use it for most of your users.
The very core basics are essentially the same because yea - it's just a web browser. An <h1> will be bigger than a <p>. But they are regularly multiple years out of date, have WILDLY different security and native-access models, version specific bugs, initialization and threading requirements, performance tradeoffs, styling quirks, and you might have hundreds or thousands of versions to test against which you cannot reasonably test against because they are frequently tied to specific operating system versions that you can no longer download and install, or require hardware you do not have.
So yea. IE6-era stuff. Not an exaggeration at all.
For simple stuff they work just fine, performance is generally more than good enough, and they start up faster, use fewer resources, and lead to a much smaller install. They're entirely reasonable choices. But once you push the edges of the envelope they're an absolute nightmare, and that is the entire reason Electron exists. That is what caused Electron to become the giga-powerhouse that it is now. It solved that problem, at relatively high cost, but it is incredibly obviously worth it to anyone who has dealt with native webviews in complicated ways.
Modern browsers are a completely different game, in comparison - far more consistent and up to date in aggregate. They're utterly incomparable.
What a waste of time it was fighting for Web freedom.
I also feel like I will have to, yet again, trot out the comment from a Slack dev that explains why they moved _from_ per-platform webviews to Chromium. This isn't new ground being charted, plenty of companies and teams have been down this path and Electron exists for a reason.
(I am not saying Electron is _good_, I am saying that Tauri isn't the holy grail people make it out to be)
Regardless, your point stands: it's a bundled Chromium on all platforms
It took me 2 seconds to find in Google, and you're splitting hairs if you think it being macOS-only was the point of my comment. Their second bullet point is just as true today as it was back then.
Additionally, the issues people find with WebkitGTK/Tauri aren't always web related, usually moreso Linux related (weird blank screens, issues with rendering certain stacked items, etc).
I have one such device and Firefox and Chrome also run on it. They're slow but still usable.
Good luck "testing" your video conferencing app on webkitgtk - it doesn't support webrtc! It is still useful to test your error page I suppose.
Note that this is one example among many of missing features, bugs and/or horrible performance.
Here's a preview: no notifications, no :has, no TLA (top-level await).
(Not blaming the epiphany devs for the situation here to be clear)
It proves what everyone knows: that there's no reason WebRTC can't work in Tauri/Linux environments.
It also proves the point here: there are legitimate issues with the system-provided webview approach that are not always apparent.
I am wondering why rendering differences between different platforms are such an issue? When building web apps, you face the same challenges, so I would assume it wouldn't be much different.
Tauri does not bundle chrome with your app. This makes the bundle size much smaller. But the tradeoff is you end up rendering in whatever the default web view browser is. On Mac this will be some version of Safari (depending on MacOS version), and on Windows it will be some recent-ish Edge thing. Tauri actually has a nice page breaking this down: https://v2.tauri.app/reference/webview-versions/
This also means that a new OS release can change how your app is rendering, so a user can conceivably have a UI bug appear without updating your app.
Electron apps often fully share codebase with the Web apps, so on the app backend you implement native functionality and communicate with your app via IPC.
The question is not if Electron feels better for developers because it renders consistently.
The question is if that matters. Is it a big issue? Does any user actually care?
They build in Chrome and test with Chrome, and then the rest of the week they whine about Firefox and Safari.
Not only were there UI inconsistencies, but Safari lags behind Chrome with things like the Popover API, and the build/codesign/CD ecosystem for Tauri is incredibly scattered.
When I was using it, IAPs were still not really an option for Tauri or at least I could not find any docs or resources about it.
Of course not, it's only for Mac. If they were to support Windows and Linux, they probably would not have published this post.
Cross-platform UI is hard, even harder if you want to keep almost the exact same UI, same feature set across platforms, and potentially an online version. People moved from native applications to Qt to web stack for a reason.
Saying this as someone who works at a company that develops cross-platform desktop application that has millions of users. I can't imagine what my job would be like if we were using any other solution.
Chromium is superior to the native webview unless you have the latest version of Windows or macOS.
“We moved from X to Y and were so in love.” posts are often postcards from the honeymoon.
This applies to every open source project. The owners control what will be merged upstream and the direction the project will go in.
Monocultures are great, as long they are the one we bet on.
Likewise I guess there is no problem that game developers mainly care about Windows, Proton is open source, so no big deal, why bother.
The browser is the actual product. An open source browser engine lowers the barrier of entry of creating new browsers.
>Likewise I guess there is no problem that game developers mainly care about Windows, Proton is open source, so no big deal
Which is why Valve recommends that game developers target Windows and use Proton for compatibility. Having one platform to target simplifies developers' lives. Before, developers were making bad ports to Linux because they did not have the resources to properly support another tech stack. The value of developers being able to target a single platform cannot be overstated.
Though this is fundamentally a different situation as the leading implementation is closed source and is more capable.
>pretending to be "native".
The code is native. Just because something uses a library to call platform code it doesn't mean it isn't native. By that logic programs that use qt are not native because they use a cross platform api.
The Linux kernel is an implementation detail on Android, there is nothing about Linux exposed as official userspace API.
Any use of Linuxisms on Android apps is done at user's own peril and possible kick out of PlayStore.
>The Linux kernel is an implementation detail on Android
The kernel is such an important part of an operating system, you can't really ignore it as a developer even if technically it may be an implementation detail.
>Any use of Linuxisms on Android apps is done at user's own peril and possible kick out of PlayStore.
Sure, but Linux's ABI is stable, and even in a world where things move to Zircon, starnix was made to support that same ABI.
Tauri uses the OS engine which means Windows uses Edge presumably and Mac uses Safari's Webkit so you're going to have rendering differences and feature differences there.
But Tauri is just a wrapper around WebKit, which is written mostly in C++.
Yes, it would be nice if the full stack is memory safe, but that isn't a good reason to not write your own code in a memory safe language.
Honestly if there was an Electron without Node.js which would use literally any compiled language (although Rust is probably too low level), it would've been more tolerable.
Electron is also way more mature, but Tauri is improving.
https://developer.mozilla.org/en-US/docs/Web/API/MediaStream
https://microscopic-view.jgarrettcorbin.com
I got tied up with other projects while I was trying to navigate the submission process (it was my first time), so it's not up yet, but I'd be happy to send you a build if you want to check it out.
Also, what is your recommendation for finding a cheap usable microscope? My brief forays to aliexpress have just resulted in frauds and trash.
This is the one I use. It is surprisingly good for the price. It just behaves as a regular webcam.
I primarily use it for micro soldering, so your mileage may vary, but it is very good for the price. I got it on Amazon where I believe they have an official store.
Key metrics could include:
- Target bundle size
- Memory usage (RAM)
- Startup time
- CPU consumption under load
- Disk usage
- etc.
Additionally, for frameworks like Tauri, it would be useful to include a WebView compatibility matrix, since the rendering behavior and performance can vary significantly depending on the WebView version used on each platform (e.g., macOS WKWebView vs. Windows WebView2, or Linux GTK WebKit). This divergence can affect both UI fidelity and performance, so capturing those differences in a visual format or table could help developers make more informed choices.
Electron comes out looking competitive at runtime! IMO people over-fixate on disk space instead of runtime memory usage.
Memory usage with a single window open (release builds):

Windows (x64):
1. Electron: ≈93MB
2. NodeGui: ≈116MB
3. NW.js: ≈131MB
4. Tauri: ≈154MB
5. Wails: ≈163MB
6. Neutralino: ≈282MB

macOS (arm64):
1. NodeGui: ≈84MB
2. Wails: ≈85MB
3. Tauri: ≈86MB
4. Neutralino: ≈109MB
5. Electron: ≈121MB
6. NW.js: ≈189MB

Linux (x64):
1. Tauri: ≈16MB
2. Electron: ≈70MB
3. Wails: ≈86MB
4. NodeGui: ≈109MB
5. NW.js: ≈166MB
6. Neutralino: ≈402MB
I'd say that the biggest hurdle for that sort of thing is just the documentation or examples of how to do things online - because Electron is the one everyone seems to use and has the most collective knowledge out there.
There’s absolutely no way Tauri apps take 25s to launch. Source: I’ve played with Tauri on Linux. This is an order of magnitude off.
[1] https://rubymamistvalove.com/block-editor
We kept the GUI as a web SPA (using Inferno) and wrote two small native apps with C# and Swift that would load a webview and handle other duties. App download size and memory consumption were reduced by like 90%. We also moved distribution and updates to the app stores of each platform.
It was a great decision.
This was an app offered for free to some customers of the company. If the app had been the main commercial product we would have obviously opted for a better solution than distributing through stores or using Squirrel.
Back in 2018 we needed a server[1] that would notify Squirrel of the updates. Squirrel worked OK on macOS but it was particularly bad on Windows. I don't remember the details... IIRC Squirrel installed the actual executable in some weird folder and users would never be able to find the app if they deleted the link from the desktop.
[1] https://github.com/ArekSredzki/electron-release-server
- Fan of the perpetual fallback licensing. $99 is a high barrier, though; I'm guessing you are targeting creators/studios more than general consumers (for a general consumer target I'd expect more like $20-25).
- You mention performance in this post but not at all on the landing page. Indexing a 38-minute video in ~3 minutes is the kind of number many potential customers would want to know. I would want benchmarks on various machines and info like parallel task processing, impact of and requirements around VRAM, etc. I would want insight into what processing hundreds to thousands of hours of video is going to look like.
- I am curious how (and shocked that) Electron itself was somehow responsible for a processing bottleneck, going from 10-14 minutes to 3 minutes. Wasn't Electron just responsible for orchestrating work to CLIP and likely ffmpeg? How was so much overhead added?
- I built (but never released) a similar type media search tool but based on transcriptions in Electron and didn't run into many performance issues
- Usually a lot of the motivation for building in Electron (or Tauri) in the first place is cross-platform support, so why Mac-only (especially for something like bulk media processing, where Nvidia can shine)?
- I too recently went down the path of "hadn't coded in Swift in 10 years" and also investigated Tauri. I opted for Swift (for a new app, not a rewrite of something) and it has been mostly a pleasure so far, and a dramatic improvement compared to my last Swift app from around 2014 or so.
- If the LP section about active users is true it sounds like you've already found some modest success (congrats). Did you already have relationships/audience around the studio/creator space? Would be interested to hear about the marketing
That's cool you built a similar tool - what kept you from releasing it?
Plan is to ship a Windows and Linux version in the next few months if there's enough demand.
We've gotten our users through various launches on HN and reddit with some minimal linkedin promotion. It's been mostly word of mouth, which has been very promising to see.
Re: the Electron and video processing performance - there's a lot to dive into. I don't claim to be an Electron expert, so maybe it was our misuse of workers that created a bottleneck. As part of the migration to Rust we also implemented scene detection to help reduce the number of frames we indexed, and this reduced processing loads a lot. We also added some GPU acceleration flags on ffmpeg that gave us meaningful gains. Batching image embedding generation was also a good improvement up to a point, before it started crashing our model instance.
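For a flavor of what that can look like, here's a hedged sketch of shelling out to ffmpeg from Rust with scene detection and hardware-accelerated decode; the actual flags and threshold the team used aren't given, so these are assumptions:

```rust
use std::process::Command;

// Illustrative only: extract scene-change frames from a video so the
// indexer embeds far fewer frames. `0.4` is an assumed scene threshold.
fn extract_scene_frames(video: &str, out_pattern: &str) -> std::io::Result<std::process::ExitStatus> {
    Command::new("ffmpeg")
        .args([
            "-hwaccel", "videotoolbox",      // GPU-accelerated decode on macOS
            "-i", video,
            "-vf", "select='gt(scene,0.4)'", // keep frames that differ enough from the previous one
            "-vsync", "vfr",                 // emit one image per selected frame
            out_pattern,                     // e.g. "frames/%04d.jpg"
        ])
        .status()
}
```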
I like the narrative, BTW, on why you needed to port your app.
As far as porting over goes, we are much happier maintaining the new version.
The smaller bundle size with Tauri and blazing speed are well worth the effort.
No one in the Windows developer community takes WinUI seriously, we all know the mess it has been since Project Reunion was announced in 2020.
There is a reason WPF regained its status back in BUILD 2024.
And last time I worked with it, it seemed much easier than previous versions.
If you have a complex UI, then Tauri is better.
I'm dragging my feet about porting my Python Qt app to Rust, because I feel that no Rust GUI library is as rich as Qt and I know that I'll get stuck with that at some point.
The only challenge was my lack of familiarity with Rust. Even if you're starting off with a "JS first app", Tauri often requires dropping into Rust for anything even slightly native, such as file system access (eg. managing config files for Claude, Witsy, or code editors), handling client lifecycle actions like opening, closing, and restarting, or installing local MCP servers.
1. https://ninja.ai
The only downside from my point of view is the large installer size for Electron apps, but it hasn't been a big issue for our users (because they will need to download quite a bit of other stuff like npm packages to actually build apps with dyad)
My app is built with Tauri too. It supports all kinds of images:
- JPEG
- PNG
- TIFF
- WEBP
- BMP
- ICO
- GIF
- AVIF
- HEIC/HEIF
and RAW images from various camera manufacturers.
The image reading and processing (for exporting images) is all done on the Rust side. These are the crates I use:
- image
- libheif-rs -> to read HEIF/HEIC images
- rawler -> to read JPEGs embedded inside RAW images
- libraw -> to convert RAW images to JPEGs and PNGs
- rexiv2 -> to read image EXIF data
I use the candle crate to download the CLIP model and generate index pairs for images. I store the faiss indexes in a file on the file system.
I have been using the app personally for about a month, and it feels amazing to use something you have built yourself.
I hope to add an image editor to the app in the future so that I have my own image management and editing software, which is enough for my amateur needs.
Any kind of feedback would be most welcome.
Does your product have docs/a support forum/other place these kinds of details would be covered?
If you aren't doing anything crazy you could probably just get away with storing them all in a memory mapped file.
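For instance, a minimal sketch of that approach with the `memmap2` crate (one common choice; this assumes fixed-dimension f32 embeddings packed back-to-back in the file):

```rust
use memmap2::Mmap;
use std::fs::File;

/// Map the embeddings file and decode it as rows of `dim` little-endian f32s.
fn load_embeddings(path: &str, dim: usize) -> std::io::Result<Vec<Vec<f32>>> {
    let file = File::open(path)?;
    // Safety: assumes the file is not mutated while mapped.
    let mmap = unsafe { Mmap::map(&file)? };
    Ok(mmap
        .chunks_exact(dim * 4)
        .map(|row| {
            row.chunks_exact(4)
                .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
                .collect()
        })
        .collect())
}
```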
see https://www.boringcactus.com/2025/04/13/2025-survey-of-rust-...
if you open in new tab or copy/paste in new tab it does not.
After realizing there was no demo I was looking for a way to contact you directly with a few sample images, but can't find contact information on the website.
Consider adding a demo and contact info.
Otherwise, the app is looking solid. This seems like a great use of AI.
De-duplicating images is on our roadmap. Shoot me your contact info at hello@desktopdocs.com. Would love to see if we can help.
I would want to use a demo version (could be with limited functionality) before paying $99 upfront! Not a demo video...
I don't know the technical details, but maybe SQLite would be the best next step to slim down?
I noticed screenshots on the page are displayed cropped on Chrome Android (yeah, I know).
The short version is that Flutter's lack of rich text editing solutions at the time made it a non-starter. It's a common problem in the Flutter ecosystem from what I've seen, there's often 0 or only 1 quality package for many "advanced" desktop use cases.
I've found that the GUI library I tried (fyne with go) was mobile-first, so some desktop things e.g. file-open dialogs didn't have the functionality I expected (the "dialog" was actually drawn within the same window as the application window). Flutter is mobile first too IIUC.
Outside of Qt, languages like rust and go don't have a good solid desktop GUI development option.
I do so with Rust also with the package flutter_rust_bridge which works great, I'm working on a mobile app that also simultaneously works on web and desktop when I tested it, even all the Rust parts.
Maybe in some cases, but I kind of doubt this statement in general. I just tried a Flutter demo from their official site and text selection doesn't even _work_ correctly.
https://flutter.github.io/samples/web/simplistic_editor/
I'll copy-paste a few lines of the example sentence, double click on one of the middle lines to start selecting by word (which it doesn't seem to even do), and then highlighting starts on the top line instead of the line I selected.
In general the flutter apps always feel janky and second-class to the platform they're on, because they never fully implement the exact behavior of each platform they run on.
1. Did you consider Wails (Go)?
2. Did you consider ColPali ?
3. Are you planning to launch on other platforms (Linux/Windows)? If so, how are you planning to handle self-updates and signing the binary to prevent false detections from AV?
Thank you.
[0] https://napi.rs/
- https://crates.io/crates/lancedb
- https://crates.io/crates/usearch
- https://crates.io/crates/simsimd
usearch and simsimd are fast and lightweight, but I'd advise using lancedb if you're a bit new to Rust, as the other two are a bit trickier to handle due to the C dependency (e.g. usearch needs Vec::with_capacity and will otherwise crash, etc.).
And then, you take the result of this query and can combine it with a sqlite `in` query.
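As a baseline for what those crates optimize, brute-force top-k cosine search is only a few lines of plain Rust; real vector indexes add ANN structures so they can skip the full scan (a sketch, with illustrative types):

```rust
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb + 1e-12)
}

/// Return the ids of the `k` most similar vectors; the result can feed a
/// SQLite `WHERE id IN (...)` query as described above.
fn top_k(query: &[f32], index: &[(i64, Vec<f32>)], k: usize) -> Vec<i64> {
    let mut scored: Vec<(i64, f32)> =
        index.iter().map(|(id, v)| (*id, cosine(query, v))).collect();
    // partial_cmp is fine here as long as no scores are NaN.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(id, _)| id).collect()
}
```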
Or you use SQLite with a vector search extension: https://crates.io/crates/rig-sqlite
Aren't vector searches usually just nearest values with some distance calculation? Are they not all implemented the same way?
You mentioned "VSS" in another comment, which if that was my sqlite-vss extension, I more-or-less abandoned that extension for sqlite-vec last year. The VSS extension was more unstable and bloated (based on Faiss), but the new VEC extension is much more lightweight and easy to use (custom C vector functions). The new metadata column support is also pretty nice, and can be used for push-down filtering.
That being said, sqlite-vec still doesn't have ANN yet (but can brute-force ~10k's-100k of vectors quick enough), and I've fallen behind on updates and new releases. Hoping to push out a desperately-needed update this week
link: https://alexgarcia.xyz/sqlite-vec/
And there are a few other SQLite-based vector search extensions/tools out there, so the ecosystem is general might be worth another look.
How have users perceived the new version so far? Has there been positive feedback? Any new complaints due to the parity issues? Or in general, how is your team measuring the success of the UI? From the post, it sounds like the users have a way to provide feedback and your team has a way to engage with them, which is wonderful. So I'm curious to learn.
[1] https://crates.io/crates/burn
[2] https://github.com/huggingface/candle
Interestingly, burn supports candle as a backend.
I was just looking into this today!
The options I've found, but yet to evaluate:
- TorchScript + tch = Use `torch.jit.trace` to create a traced model, load with tch/rust-tokenizers
- rust-bert + tch = Seems to provide slightly higher-level usage, also use traced model
- ONNX Runtime - Convert (via transformers.onnx) .pt model to .onnx encoder and decoder, then use onnxruntime+ndarray for inference
- Candle crate - Seems to have the smallest API for basic inference, and AFAIK can load up models saved with model.save() without conversion or other things
These are the different approaches I've found so far, but probably missed a bunch. All of them seem OK, but on different abstraction-levels obviously, so depends on what you want ultimately. If anyone know any other approach, would be more than happy to hear about it!
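For a feel of the Candle option, here's a toy forward pass with the `candle_core` crate (not a real model load, just made-up shapes to show the API surface):

```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    let device = Device::Cpu;
    // Stand-in for an embedding projection: random weights, one input row.
    let weights = Tensor::randn(0f32, 1.0, (512, 64), &device)?;
    let input = Tensor::randn(0f32, 1.0, (1, 512), &device)?;
    let embedding = input.matmul(&weights)?;
    println!("{:?}", embedding.shape()); // (1, 64)
    Ok(())
}
```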
Candle is a great choice overall (and there are plenty of examples) but performance is slightly worse compared to tch.
Personally, if I can get it done with candle that's what I do. It's also pretty neat for serverless.
If I can't, I check if I can convert it to onnx without extra work (or if there is an onnx available).
As a last resort, I think about shipping torchlib via tch.
I also work with an Electron app and we also do local embeddings and most of the CPU intensive work happens in nodejs addons written in Rust and using Neon (https://neon-rs.dev very grateful for this lib). This is a nice balance for us.
I'm super curious why you picked Redis over something more typical (SQLite springs to mind).
What was the advantage of doing this that made it worth the pain?
And how come you don't charge any VAT or GST ?
Hum, it's not really common to not offer refunds for software licenses. And you might have chargebacks anyway.

Have you investigated multimodal embeddings from other models? CLIP is out of date, to put it mildly.
Given the importance to your business, it may be worthwhile to finetune a modern natively multimodal model like Gemma 3 to output aligned embeddings, albeit model size is a concern.
Any lessons learned, particularly about leveraging LLMs to complete this transition, could give a boost to people contemplating leaving Electron behind or even starting a new project with Tauri.
What were your most valuable resources when moving from Electron to Tauri?
Are there any guides out there on the equivalencies between the two systems?
Can you recommend useful resources?
It's probably okay on Windows though as the backend is different, but that's part of the problem.
can you explain how indexing is done and how rust helped in this case?
Another question - how long did it take for you to rewrite your app?
Show HN: I made a Mac app to search my images and videos locally with ML
https://news.ycombinator.com/item?id=40371467
May 15, 2024 | 173 comments
There's also no point having a native UI on macOS any more. Apple ruined it themselves by switching to flat design, and making their own UIs and apps an uncanny valley between macOS and iPadOS. There's no distinct native look-and-feel on macOS any more.
The move to Rust freed us up to focus more on feature development than configs and setup. It was surprising because I thought learning Rust would set us back much longer, but the trade-off was worth it.
your application is now at the whim of version breaks by the OS browser.
I mean, I knew Electron was heavy, but holy cow that is HEAVY. No wonder that despite CPUs getting fast and RAM being 10x the size it used to be that software keeps feeling slower than ever. So you weren't talking RAM size, you were talking the size of the app itself? 1GB? That used to be the size for AAA games.
this is good work & massive. nicely done
I'd love to write more about bundling redis binaries into these apps soon. There isn't a lot written about it now (at least that I could find) and it was a lot of trial and error to get it working.
In our case, the bottleneck was related to how big the app was to start and how much we could optimize it to index media files for local AI search.
Also, it's a bit ambiguous whether it searches documents. All the screenshots are of image search, but the features say you can search inside PDFs and docs, though "All Your Files" says images and videos only.
After Electron, Flutter maybe comes second for the multi-platform thingy.
Sometimes following the wisdom of the crowd is alright, but it's astonishing that the same majority of people decided to ship a browser engine with their apps because they didn't want to learn another technology.
I don’t want to comment on the technology choices specifically here, but in general the whole “we rewrote our app in X and now it’s better” is essentially a fact of life no matter the tech choices, at least for the first big rewrite.
First, you’re going to make better technical choices overall. You know much better where the problems lie.
Second, you’re rarely going to want to port over every bit of technical debt, bugs, or clunky UX decisions (with some exceptions [1]), so those things get fixed out of the gate.
Finally, it’s simply invigorating to start from a (relatively) clean slate, so that added energy is going to feel good and leave you in a sort of freshly-mowed greenfield afterglow. This in turn will motivate you and improve your work.
The greenfield effect happens even on smaller scales, especially when trying out a new language or framework, since you’re usually starting some new project with it.
[1] A good example of the sort of rewrite that _does_ offer something like an apples-to-apples comparison is Microsoft’s rewrite of the TypeScript compiler (and type checker, and LSP implementation) from TypeScript to Go, since they are aiming for 1-to-1 compatibility, including bugs and quirks: https://github.com/microsoft/typescript-go
For desktop apps UI quality and rendering speed is paramount. There's a lot of stuff buried inside Chrome that makes graphics fast, for example, deep integration with every operating systems compositing engine, hardware accelerated video playback that is integrated with the rendering engine, optimized font rendering... a lot of stuff.
If your Rust UI library is as advanced and well optimized as Blink, then yes, maybe. But that's pretty unlikely given the amount of work that goes into the Chrome graphics stack. You absolutely can beat Chrome in theory, by avoiding the overhead of the sandbox and using hardware features that the web doesn't expose. But if you just implement a regular ordinary UI toolkit with Rust, it's not necessarily going to be faster from the end user's perspective (they rarely care about things like disk space unless they're on a Windows roaming account and Electron installed itself there).
The fact that you just draw on the screen instead of doing all the HTML parsing / DOM / IR work is probably what does it? And doing rendering on the GPU means extra delay from the processing moving from CPU to GPU, plus being a frame behind because of vsync.
For any non-trivial case where I can enable GPU acceleration for an app, it's been anywhere from equivalent to much more responsive.
What apps have you experienced delays with by enabling GPU acceleration?
"Write your frontend in JavaScript, application logic in Rust, and integrate deep into the system with Swift and Kotlin."
"Bring your existing web stack to Tauri or start that new dream project. Tauri supports any frontend framework so you don’t need to change your stack."
"By using the OS’s native web renderer, the size of a Tauri app can be little as 600KB."
So you write your frontend with familiar web technology and your backend in Rust, although it's all running in one executable.
I am curious if it would be all that much worse if your backend was also JavaScript, let's say in Node.js, but it certainly depends on what that back end is doing.
However, you could use Rust compiled to WASM in an Electron app, therefore the two aren’t even mutually exclusive.
I took that as somewhat the point, and I think it was insightful. Your app will still be worse, but worse as a result of your poor technology choices, not the arguments made here. Put together it may still be a bad move, but you would still get the greenfield effect.
It _would_ be bigger and eat more RAM and CPU. But that does not imply "shittier".
There are parameters like dev time, skills available in the market, familiarity, the JS ecosystem etc that sometimes outweigh the disadvantage of being bigger/slower.
You're pointing out the disadvantages in isolation which is not a balanced take.
Not really. Nobody is rewriting GUI apps in Assembly, the reasons are obvious.
A sensible take wouldn't pick one or the other as unilaterally better in the abstract, removed from the context of what a good product is. The web as a platform is categorically amazing for building UIs, and if you continued to choose it as the frontend for a much more measurably performant search backend, that could be a fantastic product choice, as long as you do both parts right.
And this isn't Rust zealotry! I think this goes for any memory-safe AoT language that has a good ecosystem (e.g. Go or C#): why use JavaScript when other languages do it better?
Sounds like Rust zealotry to me, followed by a mild attempt to walk it back.
https://devblogs.microsoft.com/typescript/typescript-native-...
[1]: Specifically, Go community was trained for the longest time not to make backward-incompatible API updates so that helps quite a bit in consistency of dependencies across time.
I have used Golang in the past and I was not, and am still not, a fan. But I recently had to break it out for a new project. LLMs actually make Golang not a totally miserable experience to write, to the point I’m honestly astonished that people found it pleasant to work with before they were available. There is so much boilerplate and unnecessary toil. And the LLMs thankfully can do most of that work for you, because most of the time you’re hand-crafting artisanal reimplementations of things that would be a single function call in every other language. An LLM can recognize that pattern before you’ve even finished the first line of it.
I’m not sure that speaks well of the language.
"I have never understood why people want to use C for programming outside of learning m. I have written PDP11, Motorola 6800, 8086 assembly professionally and to this day I feel like they would slow me down. I have used C in the past and I was not am still not a fan. But I recently had to break it out for a new project. Turbo C actually make C not a totally miserable experience to write, to the point I’m honestly astonished that people have found it pleasant to work with before they were available. There is so much boilerplate and unnecessary toil. And Turbo C with a macro library thankfully can do most of that work for you, because most of the time you’re hand-crafting artisanal reimplementations of things that would be a single function call in every other language. A macro can recognize that pattern before you’ve even finished the first line of it. I’m not sure that speaks well of the language."
They are enormously powerful tools. I cannot imagine LLMs not being one of the primary tools in a programmer's toolbox, well... for as long as coding exists.
Most of the “interesting” logic I write is nowhere close to autocompleted successfully and most of it needs to be thrown out. If you’re spending most of your days writing glue that translates one set of JSON documents or HTTP requests into another I’m sure they’re wildly useful.
Even if we take the narrow use case of boilerplate glue code that transforms data from one place to another, that encompasses almost all programs people write, statistically. There was a running joke at Google "we are just moving protobufs." I would not call this "fancy autocomplete."
My emulator runs BBC Basic, Zork, Turbo Pascal, etc, etc, but when it is used to run a vintage C compiler from the 80s it gives the wrong results.
Can an LLM help me identify the source of this bug? No. Can I say "fix it"? No. In the past I said "Write a test-case for this CP/M BDOS function, in the same style as the existing tests" and it said "Nope" and hallucinated functions in my codebase which it tried to call.
Basically if I use an LLM as an auto-completer it works slightly better than my Emacs setup already did, but anything more than that, for me, fails and worse still fails in a way that eats my time.
These are all things I've done successfully with ChatGPT o1 and o3 in a 7.5kloc Rust codebase.
I find the key is to include all information which may be necessary to solve the problem in the prompt. That simple.
https://github.com/skx/cpmulator/issues/234#issuecomment-291...
But I'm not optimistic; all previous attempts at "identify the bug", "fix the bug", "highlight the area where the bug occurs" just turn into timesinks and failures.
I suggested in my initial comment I'd had essentially zero success in using LLMs for these kind of tasks, and your initial reply was "I've done it, just give all the information in the prompt", and I guess here we are! LLMs clearly work for some people, and some tasks, but for these kind of issues I'd say we're not ready and my attempts just waste my time, and give me a poor impression of the state of the art.
Even "Looking at this project which areas of the CP/M 2.2 BIOS or BDOS implementations look sucpicious?", "Identify bugs in the current codebase?", "Improve test-coverage to 99% of the BIOS functionality" - prompts like these feel like they should cut the job in half, because they don't relate to running specific binaries also do nothing useful. Asking for test-coverage is an exercise in hallucination, and asking for omissions against the well-known CP/M "spec" results in noise. It's all rather disheartening.
Break it down. Tell the LLM you're having trouble figuring out what the compiler running under the emulator is doing to trigger the issue, what you've done already, and ask for its help using a debugger and other tools to inspect the system. When I did this o1 taught me some new LLDB tricks I'd never seen before. That helped me track down the cause of a particularly pernicious infinite recursion in the geometry processing code of a CAD kernel.
> Even "Looking at this project which areas of the CP/M 2.2 BIOS or BDOS implementations look sucpicious?", "Identify bugs in the current codebase?", "Improve test-coverage to 99% of the BIOS functionality" - prompts like these feel like they should cut the job in half, because they don't relate to running specific binaries also do nothing useful.
These prompts seem very vague. I always include a full copy of the codebase I'm working on in the prompt, along with a full copy of whatever references are needed, and rarely ask it questions as general as "find all the bugs". That is quite open ended and provides little context for it to work with. Asking it to "find all the buffer overflows" will yield better results. As it would with a human. The more specific you can get the better your results will be. It's also a good idea to ask the LLM to help you make better prompts for the LLM.
> Asking for test-coverage is an exercise in hallucination, and asking for omissions against the well-known CP/M "spec" results in noise.
In my experience hallucinations are a symptom of not including the necessary relevant information in the prompt. LLM memories, like human memories, are lossy and if you force it to recall something from memory you are much more likely to get a hallucination as a result. I have never experienced a hallucination from a reasoning model when prompted with a full codebase and all relevant references. It just reads the references and uses them.
It seems like you've chosen a particularly extreme example - a vintage, closed-source, binary under an emulator - didn't immediately succeed, and have written off the whole thing as a result.
A friend of mine only had an ancient compiled java app as a reference, he uploaded the binary right in the prompt, and the LLM one-shotted a rewrite in javascript that worked first time. Sometimes it just takes a little creativity and willingness to experiment.
Friction is the birthplace of evolution.
I do like automating all the endless `Result<T, E>` plumbing, `?` operator chains, custom error enums, and `From` conversions. Manual trait impls for simple wrappers like `Deref`, `AsRef`, `Display`, etc. 90% of this is structural too, so it feels like busy work. You know exactly what to write, but the compiler can't/won’t do it for you. The LLM fills that gap pretty well a significant percentage of the time.
But to your original point, the LLM is very good at autocompleting this type of code zero-shot. I just don't think it speaks ill of Rust as a consequence.
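For readers who don't write Rust, this is the sort of structural plumbing in question (illustrative names throughout):

```rust
use std::{fmt, fs, num::ParseIntError};

// A custom error enum with `From` conversions so `?` can auto-convert.
#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    Parse(ParseIntError),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Io(e) => write!(f, "io error: {e}"),
            ConfigError::Parse(e) => write!(f, "parse error: {e}"),
        }
    }
}

impl From<std::io::Error> for ConfigError {
    fn from(e: std::io::Error) -> Self { ConfigError::Io(e) }
}

impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self { ConfigError::Parse(e) }
}

// The `?`s below compile only because of the `From` impls above,
// which is exactly the busy work the comment is talking about.
fn read_port(path: &str) -> Result<u16, ConfigError> {
    Ok(fs::read_to_string(path)?.trim().parse()?)
}
```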
Sure, they got better. But at the outset they were a pretty poor value proposition.
Not true for Go 1.22 toolchains. When you use Go 1.21-, 1.22 and 1.23+ toolchains to build the following Go code, the outputs are not consistent:
BTW, I haven't found an AI system can get the correct output for the following Go code:
Not sure why you feel smug about knowing such a small trivia, ‘gofmt’ would rewrite it to semicolon anyway.
mrustc also does not implement a borrow checker.
I have been using various LLMs extensively with Rust. It's not just borrow checker. The dependencies are ever-changing too. Go and Python seem to be the RISC of LLM targets. Comparatively, the most problematic thing about generated Go code is the requirement of using every imported package and declared symbol.
Not surprising at all; I keep pointing out that the language benchmarking game is rarely, if at all, reflective of real-world usage.
Any time you point out how slow JS is someone always jumps up with a link to some benchmark showing that it is only 2x slower than Go (or Java, or whatever).
The benchmarks game, especially in GC'ed languages, is not at all indicative of real-world usage of the language. Real-world usage (i.e. idiomatic usage) of language $FOO is substantially different from the code written for the benchmarks game.
Perhaps when you write "idiomatic usage" you mean un-optimized.
Idiomatic Go leans on value types, simple loops and conditionals, gives you just enough tools to avoid unnecessary allocations, doesn't default to passing around pointers for everything, gives you more control over memory layout.
JS runtimes have to do a lot of work in order to spit out efficient code. It also requires more specialized knowledge from programmers to write fast JS.
I think esbuild and hugo are two programs that showcase this pretty well. Esbuild specifically made a splash in the JS world.
https://github.com/microsoft/typescript-go/discussions/categ...
Hejlsberg also says in this video, about 3.3x performance is from going native and the other 2-3x is by using multithreading. https://www.youtube.com/watch?v=pNlq-EVld70&t=51s
I have used it since it came out, so I do know what it is, but I have had people ask if they should write their new program in TypeScript, thinking it is something they can write in and then run.
My usage of it is limited to JavaScript, so I see it as adding a static typing layer to JavaScript, so that development is easier and more logical, and this typing information is stripped out when transpiled, resulting in pure JavaScript which is all the browser understands.
The industry calls it a programming language, so I do too just because this is not some semantic battle I want to get into. But in my mind it's not.
There's probably a word for what it is, I just can't think of it.
Type system?
And I don't understand a "10x speedup" on TypeScript, because it doesn't execute.
I can understand language services for things like VS Code that handle the TypeScript types getting 10x faster, but not TypeScript itself. I assume that is what they are talking about in most cases. But if this logic isn't right, let me know.
Theoretically(!) using TS over JS may indirectly result in slightly better perf though because it nudges you towards not mutating the shape of runtime objects (which may cause the JS engine to re-JIT your code). The extra compilation step might also allow to do some source-level optimizations, like constant folding. But I think TS doesn't do this (other JS minifiers/optimizers do though).
Languages like Python and Ruby are much much slower than that (CPython is easily 10x slower than V8) and people don't seem to care.
Typescript's sweet spot is making existing Javascript codebases more manageable.
It can also be fine in I/O-heavy scenarios where Javascript's async model allows it to perform well despite not having the raw execution performance of lower-level languages.
We should state: "this is true and really works, just remember the language was likely only part of that".
It's rather "embrace" instead of "beware".
A hacker blog I read regularly ran a challenge for the fastest tokenizer in any language. I had just learned basic Rust and decided, why the heck not. I spent 15 minutes with a naive/lazy approach and entered the result. It won second place, where third place was a C implementation and first place was highly optimized assembler.
This is not nothing and if I had written this in my main language (python) I wouldn't even have made the top 10.
So if you want a language where the result is comparably performant while giving you some confidence in how it is not behaving, Rust is a good choice. But that isn't the only metric. People understanding the language is also important, and there other languages shine. Everything is about trade-offs.
Smart choice.
Alex Chen is also known as "Alexander Hipp" or "Felix Mueller".
Though the concept is interesting - I don't like bullshit marketing like "Trusted by Professionals worldwide" if I can uncover the real deal within seconds.
Flutter would be much better choice for such a desktop app, for example.
It's all trade-offs, unfortunately.
There were also some subtle issues like the child windows leaking a lot of memory in the GTK implementations when idle (something like spamming gobjects).
I wouldn't touch Qt even with a ten-foot pole; too bloated for me.
Who are they? As far as I can see, just Product Hunt with only 11 votes.
Interesting approach to digital asset management. After watching the demo video I wanted to trial the app.
Wish I could, but it’s purchase only.
One criticism of mobile app stores is that they don’t provide the option of paid major updates, and thereby strongly push adopting a subscription model.
Could you please review the rules and stick to them? We'd appreciate it because we're trying for something quite different here.
Mostly because of very rich ecosystem of packages.