Show HN: I built an AI agent that helps me invest
I originally built this agent to help with my investing decisions. Now the same framework helps me with real estate: comparing neighborhoods, checking flood risk, weather patterns, school zones, old vs. new builds, etc. It’s a messy, multi-variable decision, which turns out to be a great use case for AI agents.
Instead of ChatGPT or Grok 4, I use mcp-agent, which lets me build a persistent, multi-agent system that pulls live data, remembers my preferences, and improves over time.
Key pieces:
• Orchestrator: picks the right agent or tool for the job
• EvaluatorOptimizer: rates and refines the results until they’re high quality
• Elicitation: adds a human-in-the-loop when needed
• MCP server: exposes everything via API so I can use it from Streamlit, a CLI, or anywhere else
• Memory: stores preferences and outcomes for personalization
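To show how the pieces fit together, here's a rough sketch of the orchestrate-then-refine loop in plain Python. This is not mcp-agent's actual API; every name below (Agent, orchestrate, run_with_refinement, the toy routing and scoring rules) is made up for illustration.

    # Illustrative sketch of the orchestrate -> evaluate -> refine loop.
    # None of these names are mcp-agent's real API.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Agent:
        name: str
        run: Callable[[str], str]   # takes a task, returns a draft answer

    @dataclass
    class Evaluation:
        score: float                # 0.0-1.0 quality estimate
        feedback: str

    def orchestrate(task: str, agents: list[Agent]) -> Agent:
        """Pick the agent whose name appears in the task (toy routing rule)."""
        for agent in agents:
            if agent.name in task.lower():
                return agent
        return agents[0]            # fall back to the first, general-purpose agent

    def evaluate(answer: str) -> Evaluation:
        """Stand-in for the EvaluatorOptimizer's rating step (just a length check here)."""
        too_short = len(answer) < 200
        return Evaluation(
            score=0.4 if too_short else 0.9,
            feedback="Add flood-risk and school-zone data." if too_short else "Looks complete.",
        )

    def run_with_refinement(task: str, agents: list[Agent],
                            min_score: float = 0.8, max_rounds: int = 3) -> str:
        """Route the task, then loop: draft, rate, feed the feedback back in."""
        agent = orchestrate(task, agents)
        answer = agent.run(task)
        for _ in range(max_rounds):
            result = evaluate(answer)
            if result.score >= min_score:
                break
            answer = agent.run(f"{task}\nRevise using this feedback: {result.feedback}")
        return answer

    # Toy usage: two stub agents standing in for real data-gathering agents.
    agents = [
        Agent("real estate", lambda t: f"Draft real-estate report for: {t}"),
        Agent("investing", lambda t: f"Draft investing report for: {t}"),
    ]
    print(run_with_refinement("compare two real estate listings for flood risk", agents))

In the real system the rating step is obviously smarter than a length check, but the control flow (route, draft, rate, refine until it's good enough) is the same idea.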
It’s modular, model-agnostic (works with GPT-4 or local models via Ollama), and shareable.
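On the model-agnostic point, the pattern is just a thin completion interface with interchangeable backends. A minimal sketch, assuming the official openai and ollama Python packages (the model names are placeholders):

    # Sketch of swapping a hosted model for a local one behind one interface.
    # Assumes the `openai` and `ollama` Python packages; model names are placeholders.
    from typing import Protocol

    class ChatBackend(Protocol):
        def complete(self, prompt: str) -> str: ...

    class OpenAIBackend:
        def __init__(self, model: str = "gpt-4o"):
            from openai import OpenAI
            self.client = OpenAI()      # reads OPENAI_API_KEY from the environment
            self.model = model

        def complete(self, prompt: str) -> str:
            resp = self.client.chat.completions.create(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

    class OllamaBackend:
        def __init__(self, model: str = "llama3.1"):
            import ollama
            self.client = ollama
            self.model = model

        def complete(self, prompt: str) -> str:
            resp = self.client.chat(
                model=self.model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp["message"]["content"]   # resp.message.content also works on newer clients

    def compare_neighborhoods(backend: ChatBackend, a: str, b: str) -> str:
        # The calling code never needs to know which backend it got.
        return backend.complete(f"Compare {a} and {b} on flood risk and school zones.")

The agent code never has to know which backend it's talking to, which is what makes the local-model option via Ollama a drop-in swap.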
Let me know what you all think!
GOAL: Should we adopt 4-day work weeks?
• Premises:
  • Burnout reduces long-term productivity
  • Productivity is currently stable but at risk
  • No legal constraints to switching
• Constraints: Deliverables must not drop
• Rule Applied: Cost-benefit logic (wellbeing vs. output risk)
• Bias Check: Potential optimism bias flagged
• Confidence Level: Medium
• Contrast Case: Reverses conclusion if burnout removed
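To make "symbolic only" concrete, here's a rough sketch of how a record like the one above could be represented, conflict-checked, and rendered as an audit trail without any LLM in the loop. Everything here (the field names, the toy keyword rule) is illustrative, not the actual engine.

    # Rough sketch: the decision record as plain data, plus a toy conflict check
    # and audit trail. Names and rules are illustrative, not the real engine.
    from dataclasses import dataclass, field

    @dataclass
    class DecisionRecord:
        goal: str
        premises: list[str]
        constraints: list[str]
        rule_applied: str
        bias_flags: list[str] = field(default_factory=list)
        confidence: str = "Medium"      # Low / Medium / High
        contrast_case: str = ""         # what would flip the conclusion

    def conflict_check(record: DecisionRecord) -> list[str]:
        """Flag premises that strain a constraint (toy keyword rule)."""
        conflicts = []
        for constraint in record.constraints:
            for premise in record.premises:
                if "must not drop" in constraint.lower() and "at risk" in premise.lower():
                    conflicts.append(f"'{premise}' puts '{constraint}' at risk")
        return conflicts

    def audit_trail(record: DecisionRecord) -> str:
        """Render every step behind the conclusion as a readable trace."""
        lines = [f"GOAL: {record.goal}"]
        lines += [f"PREMISE: {p}" for p in record.premises]
        lines += [f"CONSTRAINT: {c}" for c in record.constraints]
        lines.append(f"RULE: {record.rule_applied}")
        lines += [f"BIAS: {b}" for b in record.bias_flags]
        lines += [f"CONFLICT: {c}" for c in conflict_check(record)]
        lines.append(f"CONFIDENCE: {record.confidence}")
        lines.append(f"CONTRAST: {record.contrast_case}")
        return "\n".join(lines)

    record = DecisionRecord(
        goal="Should we adopt 4-day work weeks?",
        premises=[
            "Burnout reduces long-term productivity",
            "Productivity is currently stable but at risk",
            "No legal constraints to switching",
        ],
        constraints=["Deliverables must not drop"],
        rule_applied="Cost-benefit logic (wellbeing vs. output risk)",
        bias_flags=["Potential optimism bias"],
        contrast_case="Reverses conclusion if burnout removed",
    )
    print(audit_trail(record))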
The output includes conflict checks, a rule reference, and an audit trail. I’m curious how others here feel about symbolic scaffolds like this for alignment or decision evaluation.
It’s symbolic only (no LLMs), designed for alignment auditing, law/policy frameworks, and decision explainability.
If anyone wants an example, I can post a breakdown here.