Everyone is looking down on LLM-assisted dev here, but I think it's a great fit.
I also don't believe it can be one-shotted (there's too many deltas between Notion's API and Obsidian).
With that said, LLMs are great for enumerating edge-cases, and this feels like the perfect task for Codex/Claude Code.
I'd implore the obsidian team/maintainers to take a stab at building this with LLMs. Based on personal experience, the cost is likely within the same magnitude ($100-$1k in API cost + dev time), but the additional context (tests, docs, etc.) will be invaluable to future changes to either API surface.
In addition to what's already in the thread, I assume by now somebody has vibecoded an agent to scan GitHub for bounties and then automatically vibe up a corresponding solution. Will be a fun source of spam for anyone who wants to do the right thing and pay people for good work.
BoorishBears · 22m ago
I recently got my first AI generated PR for a project I maintain and it was honestly a little stressful.
My first clue was that PR description was absurdly detailed and well structured... yet the actual changes were really scattershot. A human with the experience and attention to detail to produce that detailed description would likely also have broken it down into seperate PRs.
And the code seemed alright until I noticed a small one-line change: a UI component being instantiated had been replaced comment that stated "Insantiating component now requires X"
Except the new insantiation wasn't anywhere. Their coding agent had commented out insantiating the component instead of figuring out dependency injection.
That component was the container for all of the app's settings.
-
It's interesting because the PR wasn't entirely useless: individual parts of it were good enough that even if I took over the PR I'd be fine keeping them.
But whatever coded it couldn't understand architecture well enough, I suspect whoever was piloting it probably tested the core functionality and assumed their small UI changes wouldn't break anything.
I hope we normalize just admitting when most of a piece of code is AI generated. I'm not a luddite about these tools, but it also changes how I'll approach a piece of code.
Things that are easy for humans get very hard for AI, and vice versa.
hazzamanic · 51m ago
Having once used the Notion API to build an OPEN API doc generator, I pity whoever takes this on. The API was painful to integrate with, full of limitations and nowhere near feature parity with the Notion UI itself
No comments yet
zwnow · 1h ago
> Please only apply if you have taken time to explore the Importer codebase, as well as the Notion API.
Suddenly 5k$ does not sound as good
cybrox · 46m ago
Why? It doesn't say you need to have extensive experience with them. I would assume this is mostly to dissuade applicants that are not aware of the potential challenges ahead.
zwnow · 11m ago
This "exploring" can take tremendous amounts of time, depending on the complexity of these APIs. My time is worth a lot to myself. I am not going to spend many hours for a chance of winning 5k$. If this takes a week off of my free time its not worth 5k to me.
I also don't believe it can be one-shotted (there's too many deltas between Notion's API and Obsidian).
With that said, LLMs are great for enumerating edge-cases, and this feels like the perfect task for Codex/Claude Code.
I'd implore the obsidian team/maintainers to take a stab at building this with LLMs. Based on personal experience, the cost is likely within the same magnitude ($100-$1k in API cost + dev time), but the additional context (tests, docs, etc.) will be invaluable to future changes to either API surface.
My first clue was that PR description was absurdly detailed and well structured... yet the actual changes were really scattershot. A human with the experience and attention to detail to produce that detailed description would likely also have broken it down into seperate PRs.
And the code seemed alright until I noticed a small one-line change: a UI component being instantiated had been replaced comment that stated "Insantiating component now requires X"
Except the new insantiation wasn't anywhere. Their coding agent had commented out insantiating the component instead of figuring out dependency injection.
That component was the container for all of the app's settings.
-
It's interesting because the PR wasn't entirely useless: individual parts of it were good enough that even if I took over the PR I'd be fine keeping them.
But whatever coded it couldn't understand architecture well enough, I suspect whoever was piloting it probably tested the core functionality and assumed their small UI changes wouldn't break anything.
I hope we normalize just admitting when most of a piece of code is AI generated. I'm not a luddite about these tools, but it also changes how I'll approach a piece of code.
Things that are easy for humans get very hard for AI, and vice versa.
No comments yet
Suddenly 5k$ does not sound as good