Qoder Quest Mode: Task Delegation to AI Agents
52 points by firasd | 6 comments | 8/22/2025, 3:00:53 PM | qoder.com
Concrete example of the sort of work I’ve been delegating: a user-reported issue like this one in Nacos https://github.com/alibaba/nacos/issues/13678.
My workflow now looks like this:
Spec-first, co-authored with the agent
- I start by pasting the user’s GitHub issue/feature request text verbatim into the chat.
- The agent extracts requirements and proposes a structured spec (inputs/outputs, edge cases, validation).
- I point out gaps or constraints (compatibility, performance, migration); the agent updates the spec.
- We iterate 1–3 rounds until the spec is tight. That spec becomes the single source of truth for the change (a sketch of what such a spec looks like follows this list).
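For illustration, here is roughly the shape of spec we converge on, written out as a Python dataclass. The field names and example values are my own convention for this post, not anything Qoder mandates; the only real detail is the issue URL.

    # Hypothetical shape of the spec the agent and I converge on; the field
    # names are my own convention, not anything the tool mandates.
    from dataclasses import dataclass, field

    @dataclass
    class ChangeSpec:
        issue_url: str                   # the verbatim GitHub issue this spec was derived from
        summary: str                     # one-line statement of the desired behavior
        inputs: list[str] = field(default_factory=list)      # what the change consumes (configs, API params)
        outputs: list[str] = field(default_factory=list)     # observable results (responses, logs, state)
        edge_cases: list[str] = field(default_factory=list)  # empty values, concurrent updates, legacy clients
        validation: list[str] = field(default_factory=list)  # acceptance criteria phrased as testable assertions
        constraints: list[str] = field(default_factory=list) # non-functional: compatibility, performance, migration

    spec = ChangeSpec(
        issue_url="https://github.com/alibaba/nacos/issues/13678",
        summary="Resolve the behavior reported in the issue (details elided here)",
        validation=["existing unit tests stay green",
                    "a new test covers the reported scenario"],
        constraints=["no breaking change to the public API",
                     "config migration must be backward compatible"],
    )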
After that, the agent processes the task:
1) Action flow: It plans To‑dos from the agreed spec, edits code across files, and shows a live diff view for each change.
2) Validation: It runs unit tests and a full compile/build, then iterates on failures until green (sketched after this list).
3) Task report: I get a checklist of what changed, what tests ran, and why the solution converged.
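My mental model of that validation loop looks like the sketch below. I can't see the agent's internals; the mvn commands assume a Maven project like Nacos, and agent.fix() is a hypothetical placeholder for however it consumes failure output.

    # Rough mental model of the "validate until green" loop in step 2, assuming
    # a Maven project (like Nacos). agent.fix() is a hypothetical placeholder.
    import subprocess

    MAX_ROUNDS = 5

    def run(cmd):
        """Run a build/test command and return (ok, combined output)."""
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def validate_until_green(agent):
        for attempt in range(1, MAX_ROUNDS + 1):
            ok_build, build_log = run(["mvn", "-q", "compile"])
            ok_test, test_log = run(["mvn", "-q", "test"]) if ok_build else (False, "")
            if ok_build and ok_test:
                return {"status": "green", "attempts": attempt}
            # Hand the failure output back to the agent and let it edit again.
            agent.fix(build_log + test_log)
        return {"status": "needs human review", "attempts": MAX_ROUNDS}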
Engineering details that made this work in a real codebase
- Codebase‑aware retrieval: Beyond plain embeddings, it combines server-side vector search with a local code graph (functions/classes/modules and their relationships). That surfaces call sites and definitions even when names/text don’t match directly (see the toy sketch after this list).
- Repo Wiki: It pre-indexes architectural knowledge and design docs so queries like “where does X get validated?” don’t require expensive full-text scans every time.
- Real-time updates: Indexing and graph stay in sync with local edits and branch changes within seconds, so suggestions reflect the current workspace state.
- Autonomous validation: Test and build steps run automatically, failures are fixed iteratively, and only then do I review diffs.
- Memory: It learns repo idioms and past errors so repeated categories of fixes converge faster.
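I don't know how Qoder implements the retrieval internally, but conceptually the combination is something like the toy sketch below: rank indexed symbols by embedding similarity, then expand along the code graph so callers and callees surface even when their names never match the query. The query vector, symbol embeddings, and graph are assumed inputs here.

    # Toy illustration of combining vector search with a code graph; this is my
    # guess at the shape of the technique, not Qoder's actual implementation.
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query_vec, symbol_vecs, code_graph, top_k=5, hops=1):
        """symbol_vecs: {symbol: embedding}; code_graph: {symbol: [callers/callees]}."""
        # 1) Plain embedding search over indexed symbols.
        ranked = sorted(symbol_vecs, key=lambda s: cosine(query_vec, symbol_vecs[s]), reverse=True)
        seeds = set(ranked[:top_k])
        # 2) Expand along the code graph so call sites and definitions show up
        #    even when their names/text don't match the query at all.
        results, frontier = set(seeds), set(seeds)
        for _ in range(hops):
            frontier = {n for s in frontier for n in code_graph.get(s, [])} - results
            results |= frontier
        return results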
What went well
- For several recent fixes, the first change set passed tests and compiled successfully.
- The agent often proposed adjacent edits (docs/tests/config) I might have postponed, reducing follow-up churn.
- Less context switching: The “spec → change → validate” loop happens in one place.
Where it needed human oversight
- Ambiguous specs. If acceptance criteria are fuzzy, the agent optimizes for the wrong target. Co-authoring the spec quickly fixes this.
- Flaky tests or environment-specific steps still need maintainer judgment.
- Non-functional constraints (performance, API stability, compatibility) must be stated explicitly.
I’m also interested in perspectives from OSS maintainers and others who have tried similar setups: what evidence would make AI‑assisted PRs acceptable, and where do these approaches tend to break (for example monorepos, cross‑language boundaries, or test infrastructure)?