Show HN: I built an AI agent that helps me invest

4 haniehz 2 7/19/2025, 1:46:26 PM github.com ↗
A while back, I built a simple app to track stocks. It pulled market data and generated daily reports based on my risk tolerance. Basically a personal investment assistant. It worked well enough that I kept going.

Now, the same framework helps me with real estate: comparing neighborhoods, checking flood risk, weather patterns, school zones, old vs. new builds, etc. It’s a messy, multi-variable decision—which turns out to be a great use case for AI agents.

Instead of ChatGPT or Grok 4, I use mcp-agent, which lets me build a persistent, multi-agent system that pulls live data, remembers my preferences, and improves over time.

Key pieces:
• Orchestrator: picks the right agent or tool for the job
• EvaluatorOptimizer: rates and refines the results until they’re high quality
• Elicitation: adds a human-in-the-loop when needed
• MCP server: exposes everything via API so I can use it in Streamlit, CLI, or anywhere
• Memory: stores preferences and outcomes for personalization
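
To make the loop concrete, here is a minimal sketch of how the orchestrator and evaluator-optimizer fit together. All names are hypothetical stand-ins for illustration; the real mcp-agent API looks different.

```python
# Hypothetical sketch of the orchestrator + evaluator-optimizer loop,
# NOT the actual mcp-agent API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Memory:
    """Stores preferences and past outcomes for personalization."""
    preferences: dict = field(default_factory=dict)
    outcomes: list = field(default_factory=list)

Agent = Callable[[str, dict], str]              # (task, preferences) -> answer
Evaluator = Callable[[str], tuple[float, str]]  # answer -> (score, feedback)

def orchestrate(task: str, agents: dict[str, Agent], evaluate: Evaluator,
                memory: Memory, max_rounds: int = 3) -> str:
    # Orchestrator: route the task to the right agent (naive keyword routing here).
    name = "real_estate" if "neighborhood" in task.lower() else "stocks"
    agent = agents[name]

    # EvaluatorOptimizer: rate the draft and refine until it's good enough.
    answer = agent(task, memory.preferences)
    score = 0.0
    for _ in range(max_rounds):
        score, feedback = evaluate(answer)
        if score >= 0.8:
            break
        answer = agent(task + "\n\nReviewer feedback: " + feedback, memory.preferences)

    # Memory: store the outcome so later runs can personalize.
    memory.outcomes.append({"task": task, "answer": answer, "score": score})
    return answer
```

In this picture, elicitation would hook in where the evaluator is unsure (pause and ask the human instead of looping), and the MCP server piece would sit as a thin API layer over something like orchestrate().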

It’s modular, model-agnostic (works with GPT-4 or local models via Ollama), and shareable.
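
As an illustration of the model-agnostic part (not necessarily how this setup is wired): Ollama exposes an OpenAI-compatible endpoint, so the same client code can point at either a hosted or a local model. Model names below are just examples.

```python
from openai import OpenAI

# Hosted model: uses OPENAI_API_KEY from the environment.
hosted = OpenAI()

# Local model via Ollama's OpenAI-compatible endpoint (key is required but ignored).
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = local.chat.completions.create(
    model="llama3.1",  # whatever model you've pulled locally
    messages=[{"role": "user", "content": "Compare flood risk for two neighborhoods"}],
)
print(resp.choices[0].message.content)
```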

Let me know what you all think!

Comments (2)

axezing121321 · 5h ago
Here’s a full example from ARC Builder:

GOAL: Should we adopt 4-day work weeks?
• Premises:
  • Burnout reduces long-term productivity
  • Productivity is currently stable but at risk
  • No legal constraints to switching
• Constraints: Deliverables must not drop
• Rule Applied: Cost-benefit logic (wellbeing vs. output risk)
• Bias Check: Potential optimism bias flagged
• Confidence Level: Medium
• Contrast Case: Reverses conclusion if burnout removed

The output includes conflict checks, rule reference, and audit trail. I’m curious how others here feel about symbolic scaffolds like this for alignment or decision evaluation?

axezing121321 · 5h ago
Happy to elaborate on how ARC OS works: it parses subjective input into logic trees with assumptions, conflict checks, bias flags, and reasoning trails.

It’s symbolic only (no LLMs), designed for alignment auditing, law/policy frameworks, and decision explainability.
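
Roughly, a parsed logic tree could be laid out like this (a hypothetical data shape mirroring the fields above and the 4-day-week example, not ARC OS's actual schema):

```python
# Hypothetical data layout for a parsed decision; not the real ARC OS format.
from dataclasses import dataclass, field

@dataclass
class LogicTree:
    goal: str
    premises: list[str]
    constraints: list[str]
    rule_applied: str
    bias_flags: list[str] = field(default_factory=list)
    confidence: str = "medium"       # low / medium / high
    contrast_case: str = ""          # what would reverse the conclusion
    reasoning_trail: list[str] = field(default_factory=list)

tree = LogicTree(
    goal="Should we adopt 4-day work weeks?",
    premises=[
        "Burnout reduces long-term productivity",
        "Productivity is currently stable but at risk",
        "No legal constraints to switching",
    ],
    constraints=["Deliverables must not drop"],
    rule_applied="Cost-benefit logic (wellbeing vs. output risk)",
    bias_flags=["Potential optimism bias"],
    contrast_case="Conclusion reverses if burnout is removed",
)
```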

If anyone wants an example, I can post a breakdown here.