arjunchint · 7/9/2025, 8:47:21 AM


elzbardico · 3h ago
MCP servers allow controlled access to stuff that I don't want my agent to be able to code and execute at will.

I wrote a lot of small MCP servers in my job that provide LIMITED access to things like AWS infrastructure, databases, SAP integration, and Salesforce. Because I coded them myself, I decide what access I grant, what I deny, what I obfuscate, and what I anonymize.

I am not trusting tens or hundreds of millions in liability to an LLM. LLMs have no operating world model and no intent; they can't understand cause and effect, even if they can generate text that describes cause and effect.

I am a professional, I am responsible for the tools I use and run.
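A minimal sketch of the kind of gating such a hand-written server can do (the query names, columns, and masking scheme are hypothetical, for illustration only):

```python
# Hypothetical sketch: a hand-written tool that exposes only allowlisted
# queries and anonymizes sensitive columns before anything reaches the LLM.
ALLOWED_QUERIES = {
    "open_invoices": "SELECT id, customer, amount FROM invoices WHERE status = 'open'",
}

def fake_execute(sql: str) -> list[dict]:
    # Placeholder standing in for the real database layer.
    return [{"id": 1, "customer": "ACME GmbH", "amount": 1200.0}]

def anonymize(row: dict) -> dict:
    # Mask the customer name; the agent never sees the real value.
    masked = dict(row)
    if "customer" in masked:
        masked["customer"] = "customer-" + str(hash(masked["customer"]) % 10_000)
    return masked

def run_tool(query_name: str) -> list[dict]:
    # The agent can only name a query; it can never supply raw SQL.
    if query_name not in ALLOWED_QUERIES:
        raise PermissionError(f"query {query_name!r} is not allowlisted")
    rows = fake_execute(ALLOWED_QUERIES[query_name])
    return [anonymize(r) for r in rows]
```

The point is that the allowlist and the masking live in code the author wrote and audits, not in a prompt the model can talk its way around.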

hobofan · 3h ago
Flagged. Contrary to what the title suggests, this is just a "Show HN" post with 0 content (of which the same author has already submitted ~5 for this tool).

If you want people to take an interest in your product, maybe actually try to articulate and explore the idea in the title instead of plugging your product after the first sentence?

xg15 · 3h ago
> Our insight was simple: The browser *is* the authentication layer. Your logins, cookies, and active sessions are already there. An AI Web Agent can just reuse these credentials, find your API key and construct a tool to use. If you have an API key on your screen, you have an integration. It's that simple.

I'm just gonna leave this old bash.org quote here: https://bash-org-archive.com/?5775

(Language warning, so not quoted here)

vidarh · 3h ago
They can certainly build ad-hoc tool chains very easily. But where MCP is valuable is as a means of providing tested and secure ways to execute more complex workflows, especially if it means you can more easily whitelist actions you would otherwise want to manually approve, and where simple pattern matching is insufficient to sanitise the command line.

Just having an MCP tool available also often seems to encourage at least Claude to use it aggressively. E.g. I added a linter tool to my own experimental coding assistant, and I didn't even have to tell Claude to use it - it just started using it after most changes.
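One way to read the whitelisting point: approving a named tool with typed arguments is much easier than sanitising an arbitrary command line. A hypothetical sketch (tool names and policy are made up):

```python
# Hypothetical sketch: a policy over named tools with typed arguments.
# Unlike a raw shell command, there is nothing here to "escape" past a filter.
APPROVED_TOOLS = {
    "run_linter": {"path": str},      # safe to auto-approve
    "deploy_service": {"env": str},   # still requires manual approval
}
AUTO_APPROVED = {"run_linter"}

def authorize(tool: str, args: dict) -> str:
    # Returns "allow", "ask_user", or "deny" for a proposed tool call.
    if tool not in APPROVED_TOOLS:
        return "deny"
    schema = APPROVED_TOOLS[tool]
    if set(args) != set(schema):
        return "deny"
    if not all(isinstance(args[k], t) for k, t in schema.items()):
        return "deny"
    return "allow" if tool in AUTO_APPROVED else "ask_user"
```

Pattern matching on a shell string has to anticipate every quoting trick; matching on a structured call only has to check a schema.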

xg15 · 3h ago
I'm wary of having the AI just run off with my browser context ad-hoc, but what I think could be useful is to have an AI construct an MCP integration in one step, then use that integration in a separate step.

E.g. instead of fiddling with APIs yourself, you could do something like: "Here is Foo.io's apidoc and you also get access to a JavaScript environment with network access to their domain and an API key for testing. Now write me a server that exposes their API as MCP."

This might even work without an official API at all by going through a website.

Then you can collect a growing "toolbox" of AI-generated MCP conversion scripts that agents can use for their actual tasks - but you can still inspect those scripts manually to understand their capabilities and choose which scripts to make available to an agent as needed.

(And you might also simply use those scripts without any AI agents at all)
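A script in that toolbox might be as small as this (the Foo.io endpoint and header are made up to match the example above; the request is only constructed here, never sent, so a human can audit exactly what the wrapper would do):

```python
import urllib.request

API_BASE = "https://api.foo.io/v1"  # hypothetical, from the example above

def build_request(endpoint: str, api_key: str) -> urllib.request.Request:
    # An AI-generated conversion script would map an MCP tool call to an
    # HTTP request like this; every line is inspectable before any agent
    # is allowed to use it.
    return urllib.request.Request(
        f"{API_BASE}/{endpoint.lstrip('/')}",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )
```

Because the script is plain code rather than live agent behaviour, reviewing it once is enough to know its full capability surface.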

stpedgwdgfhgdd · 3h ago
There is a trend to write tools on the spot. Should be easy, right?

So during a big Go refactoring I asked Claude to write a Python script to do some renames. I hoped it would be faster, more reliable, and save some tokens.

Well, that was a rabbit hole. I quickly reverted and let CC do it the old way. Hype vs reality.
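The naive version of such a rename script (identifier names are hypothetical) shows why this becomes a rabbit hole: a plain word-boundary substitution has no notion of Go scopes, so it also rewrites string literals, comments, and shadowed names:

```python
import re

def naive_rename(source: str, old: str, new: str) -> str:
    # Word-boundary replace: looks sufficient at first glance, but it
    # cannot distinguish code from strings and comments, and it has no
    # idea which occurrences refer to the same Go identifier.
    return re.sub(rf"\b{re.escape(old)}\b", new, source)
```

Running it over `s := "OldName"` renames the string literal too, which is exactly the kind of silent breakage that makes AST-based tooling the better fit.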

Still on my list: experiment with CC and Go's AST tools to try out refactorings.

deepdarkforest · 3h ago
Haha, we are in such a bubble. All engineering concepts are being thrown in the trash. We already have problems with prompt injection in MCP (e.g. the Supabase thread yesterday). Imagine the possibilities when you let the LLM itself write and execute arbitrary code with direct access to sensitive info. This is the equivalent of allowing malware not just to execute code, but giving the malware developer free rein to write bespoke code for your machine.

Outside of that, writing your own tools on the fly is a clear waste of tokens/money, and it makes benchmarking and evaluation way more difficult - it's another degree of freedom, etc. People never think from first principles anymore; just the quickest cute solution to anything, without the slightest consideration of the side effects.

klntsky · 3h ago
This involves a complex pipeline: buy a key, read the docs, build the tool. We just need a catalogue of API services instead of a catalogue of tools.