We Built Mentionedby.ai to Track How AI Models Answer Questions

4 points · nikin_mat · 5/9/2025, 4:13:16 AM · 6 comments
Hey HN,

I'm Nikin, one of the creators of MentionedBy.ai — a platform we built to track how your brand is being mentioned inside AI-generated answers across ChatGPT, Claude, Perplexity, Gemini, and other LLMs.

We didn’t start this as a marketing tool. It began as a frustration. We noticed our startup wasn’t showing up in ChatGPT responses, even for queries where we clearly should have. Meanwhile, bigger or SEO-heavy brands were dominating AI answers — regardless of actual relevance. It wasn’t Google rankings anymore. It was model completions.

That sparked the question: how do you monitor and optimize your brand's visibility across these black-box models?

What It Does:

Scrapes and audits responses from multiple LLMs using structured prompts.

Benchmarks your brand against competitors on frequency, sentiment, answer placement, and hallucination risk.

Tracks changes over time — e.g., if Claude suddenly stops mentioning you in a category you dominated last week.

Scores your Answer Engine Optimization (AEO) visibility across AI tools.

Sends real-time alerts if your brand is dropped or misrepresented by a major model.
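To make the AEO scoring idea concrete, here is a minimal sketch. The actual scoring formula isn't public, so the weights, inputs, and 0–100 scale below are purely illustrative:

```python
def aeo_score(frequency, placement, sentiment,
              weights=(0.5, 0.3, 0.2)):
    """Toy AEO visibility score: a weighted blend of how often a brand
    is mentioned (frequency, 0-1), how early it appears in answers
    (placement, 0-1 where 1 = first mention), and average sentiment
    (-1 to 1, rescaled to 0-1). Weights are illustrative only, not
    MentionedBy.ai's actual formula."""
    w_f, w_p, w_s = weights
    sentiment_01 = (sentiment + 1) / 2  # rescale -1..1 -> 0..1
    return round(100 * (w_f * frequency + w_p * placement + w_s * sentiment_01), 1)

# A brand mentioned in 60% of answers, usually mid-answer, mildly positive:
print(aeo_score(frequency=0.6, placement=0.5, sentiment=0.3))  # -> 58.0
```

A blend like this makes per-model scores comparable, so a drop in one model's score stands out against the others.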

Under the Hood:

We run scheduled and real-time queries across models using OpenRouter and native APIs.
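As a rough sketch, a query to OpenRouter's chat-completions endpoint (which mirrors the OpenAI schema) is just a JSON payload like the one below. The prompt template here is a placeholder; the actual structured prompts aren't public:

```python
import json

# Hypothetical prompt template -- the real structured prompts are not public.
PROMPT_TEMPLATE = "What are the best {category} tools in 2025? List and briefly describe each."

def build_openrouter_request(model: str, category: str) -> dict:
    """Build a chat-completion payload for OpenRouter's
    POST /api/v1/chat/completions endpoint."""
    return {
        "model": model,  # e.g. "anthropic/claude-3.5-sonnet"
        "messages": [
            {"role": "user", "content": PROMPT_TEMPLATE.format(category=category)},
        ],
        "temperature": 0,  # keep output as repeatable as possible across runs
    }

payload = build_openrouter_request("openai/gpt-4o", "brand monitoring")
print(json.dumps(payload, indent=2))
```

Pinning temperature to 0 matters for monitoring: it reduces run-to-run variance, so changes over time are more likely to reflect the model rather than sampling noise.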

Responses are embedded using OpenAI + Cohere to cluster topic relevance.

We match mentions semantically (not just by string match), then run sentiment and hallucination checks (correlating output claims against source material).
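The semantic-matching step can be sketched as cosine similarity between a brand's embedding and embeddings of response chunks. The 3-d vectors and the 0.8 threshold below are toy stand-ins for real OpenAI/Cohere embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_mentions(brand_vec, chunk_vecs, threshold=0.8):
    """Return indices of response chunks whose embedding sits close enough
    to the brand's embedding to count as a mention. This catches paraphrases
    that a plain substring match would miss. Threshold is illustrative."""
    return [i for i, v in enumerate(chunk_vecs) if cosine(brand_vec, v) >= threshold]

brand = [0.9, 0.1, 0.2]
chunks = [
    [0.88, 0.12, 0.19],  # near-identical phrasing -> counts as a mention
    [0.10, 0.90, 0.30],  # unrelated topic -> ignored
]
print(semantic_mentions(brand, chunks))  # -> [0]
```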

All model responses, timestamps, and prompts are logged to a time-series DB, and anomalies are flagged using a fine-tuned classifier.
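As a simpler statistical stand-in for that fine-tuned classifier, anomaly flagging over a mention time series can be sketched as a z-score check (the threshold and sample data below are illustrative):

```python
import statistics

def flag_anomaly(history, latest, z_threshold=2.5):
    """Flag the latest mention count as anomalous if it sits more than
    z_threshold standard deviations from the recent mean -- the kind of
    signal behind 'Claude suddenly stops mentioning you'. A toy stand-in
    for the fine-tuned classifier described above."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

daily_mentions = [14, 15, 13, 16, 14, 15, 14]   # brand mentions per day
print(flag_anomaly(daily_mentions, latest=2))   # sudden drop -> flagged
print(flag_anomaly(daily_mentions, latest=15))  # normal day -> not flagged
```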

The UI is built with Bootstrap, with WordPress as the CMS, in a dark minimal design — the backend runs PHP for session/user handling plus Python for prompt orchestration and ML pipelines.

Why We Opened This Up:

Most companies don’t realize AI is already shaping perception — silently. If your brand isn’t in the model, it doesn’t exist to the user.

We’ve seen:

- Founders being misquoted by AI.

- Startups dropped in favor of outdated competitors.

This felt important enough to make public.

If you're curious, we'd love feedback on how we could expose more data (we're considering a public LLM search ranking explorer too).

Here’s the link: https://mentionedby.ai (we haven’t launched publicly yet, so there’s a waitlist)

Happy to answer any questions about the technical stack, model comparisons, or hallucination detection methods.

— Nikin, Founder @ Synapse AI Labs (Built in Sri Lanka. Global by default.)

Comments (6)

frank20022 · 6h ago
Good idea; what a can of worms this seems to be.

Can model providers be trusted to not be paid by advertisers? Can brands effectively influence how models react to them and their competitors?

I definitely imagine brands flooding the internet with llm.txt files linked to their home pages but hidden from human visitors, just to boost themselves up... what is the antidote?

Can attempts to influence LLMs be detected and reported?

nikin_mat · 4h ago
Good question. Personally, I think answer engines will go the same route as search engines and start monetizing brand mentions, and I expect this to happen openly, similar to ads. That said, I think there is still room for brands to improve their presence. Most models claim neutrality at the moment, but we’ve already seen anecdotal cases where some brands consistently outperform others in AI responses for no clear reason.

On your question about how influence can be detected: that’s a big part of what we’re working on at MentionedBy.ai. We track brand mentions across multiple models over time and flag sudden shifts — e.g., a competitor showing up overnight in all responses, or factual distortions creeping in. Think of it as version control + monitoring for the "AI perception layer."

As for llm.txt abuse: yes, totally possible. We expect a wave of LLM-targeted SEO — structured data, vector bait, invisible prompts, etc. One idea we’re exploring is a kind of “LLM spam index”: patterns of over-optimization or hallucination correlation that could indicate manipulation attempts.

pvg · 9h ago
we are yet to launch publicly, so there is a waitlist

You could post this when it's ready to try, but waitlists can't be Show HNs. Take a look at https://news.ycombinator.com/show

nikin_mat · 8h ago
Hi, sorry, I meant to include this in the post but forgot to.

We have a demo with a pre-built sample. Adding the link here for reference.

demo link: https://mentionedby.ai/app/demo.php

Is it possible to edit the post and add it to the body?

jruohonen · 9h ago
A worthy idea, and it would deserve research too (the issue extends well beyond brands). But once again, you too have a registration requirement for quick testing.
nikin_mat · 8h ago
Hi, sorry, I meant to include this in the post but forgot to.

We have a demo with a pre-built sample. Adding the link here for reference.

demo link: https://mentionedby.ai/app/demo.php