I’m Maxime — a product builder and former Head of Product at Qonto (think Brex for Europe, ~$6B). I recently started something new called Well (https://wellapp.ai/), where we deploy autonomous agents (via remote browsers or Chrome extensions) to collect supplier invoices on behalf of founders. It saves a lot of brain cycles for busy operators.
Over the years, I've built many integrations — some with OAuth2, others via RPA when no official interfaces existed. But with this new generation of agents acting on behalf of users, I'm starting to wonder: are we on a collision course with web defenses that weren't designed for this class of automation?
I’ll soon be releasing a fleet of agents operating across the web. Not bots scraping content — but personalized actors doing legitimate tasks for authenticated users. Yet they often trigger anti-bot systems or get blocked alongside actual bad actors. On the flip side, I worry about overwhelming sites that aren’t prepared.
So here’s my question:
Is there an emerging standard or protocol (like robots.txt for crawlers) to handle this kind of agent-based usage? Something that lets site owners opt in, opt out, or at least signal expectations?
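To make "signal expectations" concrete: in the absence of anything agent-specific, the only stopgap I can think of is reusing robots.txt with an honest, distinct user-agent token, so site owners can at least address the agent in rules they already maintain. Here's a minimal sketch in Python of what I mean — the token and helper names (WellInvoiceAgent, may_visit) are purely my own, and robots.txt obviously wasn't designed for authenticated, user-delegated agents:

    # Stopgap sketch, not a standard: announce a distinct user-agent token and
    # honour robots.txt rules that site owners may already aim at it.
    from urllib import robotparser
    from urllib.parse import urlsplit

    AGENT_TOKEN = "WellInvoiceAgent"  # hypothetical token identifying the agent fleet

    def may_visit(url: str) -> bool:
        """Check the target site's robots.txt for rules addressed to our token."""
        parts = urlsplit(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        try:
            rp.read()
        except OSError:
            return True  # robots.txt unreachable; fall back to default behaviour
        return rp.can_fetch(AGENT_TOKEN, url)

    print(may_visit("https://example.com/portal/invoices"))

That only covers crawling etiquette, though — not rate expectations, delegated auth, or an explicit opt-in/opt-out for agents acting on a user's behalf, which is exactly the gap I'm asking about.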
Would love to hear if anyone’s seen serious work or proposals around this — or if you're solving a similar problem in your vertical.
Thanks!
vitarnixofntrnt · 18h ago
Yeah, check out my post about an optical illusion captcha idea.