Show HN: Paladin – An AI trigger to fix your sh*t production bugs
Enter Paladin, a tool I built that automatically sends you a pull request to fix bugs shortly after they occur.
A little over a month ago, I posted my hacky but effective AI setup for fixing production bugs to Reddit (https://redd.it/1jibmtc). 300+ devs messaged me or commented wanting to try it, so I've spent the past few weeks refining it into Paladin, and I'm excited to release it today!
How it works:
Paladin hooks into your application's error handling with an SDK, triggering a "run" when an exception is thrown. During the run, Paladin pulls your code from GitHub and uses LLMs to fix the error, then sends you the fix as a PR over Slack in ~90 seconds. Here's a two-minute demo: https://youtu.be/0bm8nq99Nrw.
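To make the "hook into error handling" step concrete, here's a minimal sketch in plain Python of what an error-capturing SDK does under the hood: intercept uncaught exceptions and package up the error, stack trace, and execution state. This is an illustration only, not Paladin's actual SDK; the function names and payload shape are my own.

```python
import sys
import traceback


def build_run_payload(exc_type, exc_value, exc_tb):
    """Collect the context an auto-fix service would need: the error,
    the full stack trace, and local variables from the frame where
    the exception was raised ("execution state")."""
    tb = exc_tb
    while tb is not None and tb.tb_next is not None:
        tb = tb.tb_next  # walk to the innermost frame
    return {
        "error": f"{exc_type.__name__}: {exc_value}",
        "stack_trace": "".join(
            traceback.format_exception(exc_type, exc_value, exc_tb)
        ),
        "locals": {k: repr(v) for k, v in tb.tb_frame.f_locals.items()}
        if tb
        else {},
    }


def paladin_excepthook(exc_type, exc_value, exc_tb):
    payload = build_run_payload(exc_type, exc_value, exc_tb)
    # A real SDK would POST `payload` to the service here, triggering a "run".
    sys.__excepthook__(exc_type, exc_value, exc_tb)  # keep default behavior


# Install the hook so every uncaught exception is captured.
sys.excepthook = paladin_excepthook
```

In practice the SDKs also capture handled exceptions reported explicitly and framework-level errors (middleware, request handlers), but the idea is the same: gather deep context at the moment of failure.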
In early testing, Paladin solves over 55% of real production errors on the first try and makes useful progress on many others. It does well by supplying deep context to the LLMs: the stack trace, execution state, repo code, and more. When it works, you fix bugs more quickly, meaning less downtime for users and saved engineering time.
Eliminating context switching has been an unexpected win for me, because I work best in long, focused stretches. When a bug affecting real users hits, I have to drop everything mid-feature to stash changes, debug, mentally shift contexts, and then try to return. I've found PR reviews and tweaks to be much less disruptive.
Getting started (free, no card required):
1. Sign up at https://app.paladin.run/signup
2. Follow the instructions to connect your GitHub and Slack (or just email)
3. Choose and install the correct SDK into your app
4. Configure it to send errors to Paladin
5. Done!
Paladin supports React, React Native, Laravel, Flutter, Django, Node, Next, vanilla JavaScript, Express, FastAPI, PHP, vanilla Python, Nest, Vue, Android, iOS, Rails, Flask, and many more thanks to Sentry's MIT-licensed client SDKs (the SDKs are only used to capture errors; your errors do not go to Sentry). If you have both a client and a server, I'd start with the server.
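Since the SDKs are Sentry's open-source clients, the "configure to send errors" step for a Python app plausibly looks like standard Sentry initialization pointed at a different endpoint. This is a hypothetical sketch: the DSN format and ingest hostname below are placeholders I made up, not values from Paladin's docs.

```python
import sentry_sdk

# Hypothetical: point the open-source Sentry client SDK at Paladin's
# ingest endpoint instead of sentry.io. The DSN is a placeholder you'd
# get from the Paladin dashboard, not a real value.
sentry_sdk.init(
    dsn="https://<your-key>@<paladin-ingest-host>/<project-id>",
    traces_sample_rate=0.0,  # only errors are needed, not performance traces
)
```

After this, any uncaught exception in the app is captured by the SDK and forwarded, which is what triggers a run.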
Notes on privacy, performance, and future plans below:
Paladin will never use repo access for training or sharing of any kind; it pulls your code only to make fixes. An LLM provider (Google/Anthropic/OpenAI) processes part of your code, so if you can't use tools like Cursor or Windsurf, you probably can't use Paladin.
On performance: my personal test set is admittedly small, but the results line up with current state-of-the-art scores on benchmarks like Aider Polyglot and SWE-bench Verified. I'd expect these numbers to improve substantially as models progress. I'd also expect Paladin to fall short where current frontier LLMs do: uncommon frameworks, libraries, or languages.
In the future, I am planning on having two usage options:
- Free: if you bring your own OpenRouter API key
- Paid: if Paladin pays for the model costs
Really looking forward to hearing feedback and ideas!