Show HN: WTMF Beta – Your AI bestie that understands

ishqdehlvi · 7/26/2025, 12:54:39 PM
We're excited to announce the beta launch of WTMF (What's The Matter, Friend?), an AI companion built to offer real emotional presence and understanding, unlike anything else out there.

What is WTMF? In a world saturated with AI tools designed for productivity, we built WTMF to be something different: an emotionally available AI best friend. It's for those 2 AM spirals, the "I don't know why I feel this way" moments, or simply when you need to vent without judgment or unsolicited advice.

Why WTMF is different:

Truly understands: Our AI learns your communication style and responds with genuine empathy, remembering your past conversations. It's not about "botsplaining" or toxic positivity; it's about being present and listening.

Pick your Vibe: Choose how your AI responds – soft, sassy, chaotic, or zen. Your conversation, your rules.

Voice Conversations: When typing isn't enough, connect via natural voice calls that feel like talking to a real friend.

AI Journaling & Mood Tracking: WTMF helps you track your emotions, spot patterns, and journal your thoughts, remembering them so you don't have to.

Private & Secure: Your conversations are yours. We prioritize your privacy and emotional safety.

We're building AI that actually stays, offering a unique blend of emotional intelligence and conversational authenticity. It's AI that feels human, not clinical.

We're currently in beta and actively inviting early adopters to help us shape the future of emotionally intelligent AI.

Try the beta & join the waitlist: https://wtmf.ai

We're eager to hear your thoughts and feedback!

Comments (2)

mutant · 17h ago
While this pitch tugs at the heartstrings, as someone in IT/engineering, I'd pump the brakes hard. Building "emotionally available AI" isn't a prompt-hacking weekend project—it's a high-stakes alignment nightmare that well-meaning devs without deep ML safety chops are likely to botch. Here's a tight technical rundown of the red flags, sans fluff:

1. *Alignment Brittleness*: No details on fine-tuning or RLHF (e.g., using datasets like those from HELM or custom therapy corpora). Relying on prompts to "prime" a base LLM (probably GPT-like) is like duct-taping a guidance system: it fails under stress. Emotional contexts amplify risks: the model could hallucinate escalatory responses (e.g., reinforcing spirals via latent biases in pre-training data), bypassing any superficial steering. Without proven techniques like constitutional AI or red-teaming for edge cases (suicidal ideation, trauma triggers), it's unaligned output waiting to happen.

2. *Inference-Time Vulnerabilities*: Prompts alone can't enforce robust safeguards. LLMs exhibit emergent behaviors in long contexts: think jailbreaks or mode collapse where the AI "remembers" and amplifies negative patterns in journaling/mood tracking. No mention of layers like chain-of-thought with safety classifiers (inspired by Anthropic/DeepMind) means potential for toxic empathy: sassy mode goes rogue, zen turns dismissive. In voice mode, real-time audio processing adds latency-induced errors, eroding that "human feel" into something unpredictably harmful. (A rough sketch of where such an output-side check would sit follows this list.)

3. *Expertise and Oversight Gaps*: This screams "enthusiast project" without creds in AI ethics/safety (e.g., from OpenAI's Superalignment teams). Privacy claims? Fine, but "secure" journaling risks data leakage via model inversion attacks if not using differential privacy (a toy example is at the end of this comment). Emotional AI demands HIPAA-level rigor, not beta vibes; missteps here could cause real psych harm, like entrenching isolation instead of guiding someone toward human help.
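To make point 2 concrete, here's the kind of output-side guardrail I mean, as a toy Python sketch. The model call and the classifier are hypothetical stand-ins (the regex is obviously not a real safety classifier); the point is where the checks sit, not how they're implemented:

    import re

    CRISIS_PATTERNS = re.compile(
        r"\b(kill myself|suicide|end it all|self[- ]harm)\b", re.IGNORECASE
    )

    def call_companion_model(history, user_msg, vibe="soft"):
        # Hypothetical stand-in for the actual LLM call.
        # A system prompt alone sets the persona, but nothing here *enforces* safety.
        return f"[{vibe} reply to: {user_msg!r}]"

    def safety_classifier(text):
        # Stand-in for a trained safety/escalation classifier.
        # A regex is NOT sufficient in production; it only shows where the check belongs.
        return "crisis" if CRISIS_PATTERNS.search(text) else "ok"

    def respond(history, user_msg, vibe="soft"):
        # Screen the user turn *before* generation ...
        if safety_classifier(user_msg) == "crisis":
            return ("I'm really glad you told me. I can't help with this the way a "
                    "person can - please contact a crisis line or someone you trust.")
        draft = call_companion_model(history, user_msg, vibe)
        # ... and screen the model's draft *after* generation, before it reaches the user.
        if safety_classifier(draft) == "crisis":
            return "I'd rather pause here and point you to real support than keep going."
        return draft

    print(respond([], "had a rough day, everything feels pointless", vibe="zen"))
    print(respond([], "i want to end it all", vibe="soft"))

Nothing in the pitch says whether any layer like this exists, which is exactly the problem.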

Bottom line: Clever prompts don't solve alignment; they mask it. If you're beta-testing, demand transparency on training data, safety evals, and fallback to licensed therapists. This isn't ready for 2 AM crises—it's playing therapist without the degree. Proceed with extreme caution.
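And on the privacy point in 3: if mood or journaling stats are ever aggregated beyond a single user's own view, the standard mitigation is differential privacy. A toy Laplace-mechanism sketch, with made-up numbers and an illustrative epsilon:

    import random

    def dp_average(values, lower, upper, epsilon=1.0):
        # Clamp each value so one entry's influence (sensitivity) is bounded.
        clamped = [min(max(v, lower), upper) for v in values]
        true_avg = sum(clamped) / len(clamped)
        # Sensitivity of the mean of n values bounded in [lower, upper].
        sensitivity = (upper - lower) / len(clamped)
        scale = sensitivity / epsilon
        # Laplace(0, scale) noise, sampled as the difference of two exponentials.
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return true_avg + noise

    # Example: mood scores on a 1-10 scale for one day across users (made-up numbers).
    moods = [3, 7, 5, 2, 8, 6]
    print(dp_average(moods, lower=1, upper=10, epsilon=0.5))

The epsilon and the 1-10 mood scale are invented for illustration; the harder design question is what should ever leave the user's device at all.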

jaggs · 16h ago
I would add to that the double jeopardy of being created by a team based in India. Cultural differences are fundamentally important in any environment, including AI.