Show HN: API Testing and Security with AI

8 points by siddhant_mohan · 5/6/2025, 4:13:21 AM · qodex.ai

Comments (3)

kshitijzeoauto · 67d ago
It claims to plug into your CI pipeline, detect what changed, and generate relevant test cases using LLMs.

As someone who’s struggled with stale or missing tests—especially in fast-moving codebases—I find this idea quite compelling. But I’m also curious about how it handles:

- Contextual understanding across large codebases (e.g., multiple modules touched in a PR)
- Avoiding flaky or non-deterministic tests
- Matching team-specific coding styles or conventions
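
For reference, a rough sketch of what the claimed workflow could look like: diff the PR against the base branch, hand the diff to an LLM, and get back candidate test cases. The model choice, prompt, and use of the OpenAI client are assumptions for illustration, not Qodex's actual pipeline.

```python
# Hypothetical sketch of the claimed workflow: diff a PR, then ask an LLM for
# test cases covering the changed code. Model and prompt are assumptions.
import subprocess
from openai import OpenAI

def changed_code(base: str = "origin/main") -> str:
    """Return the unified diff of the current branch against the base branch."""
    return subprocess.run(
        ["git", "diff", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    ).stdout

def generate_tests(diff: str) -> str:
    """Ask the model for pytest cases that exercise the changed code paths."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the CI environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write focused API tests in pytest."},
            {"role": "user", "content": f"Generate tests for this diff:\n{diff}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(generate_tests(changed_code()))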

anuragdt · 67d ago
Generating tests is good, but how do you handle updating them? Also, how will you handle the flakiness and side effects of AI models?
siddhant_mohan · 67d ago
We handle flakiness with retries, smart waits, and isolation, while side effects are avoided with clean setups, teardowns, and state-safe mocks. Each test scenario is independent of the others and can be configured with a prerequisite to set up the system and a post callback to clean it up.
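
In generic pytest terms, that scenario structure might look roughly like the sketch below: each scenario owns its setup/teardown, and flaky calls get a bounded retry. The endpoint names and retry policy are placeholders, not the actual Qodex runner.

```python
# One reading of "prerequisite + post callback + isolation": a fixture creates
# the resource a scenario needs and deletes it afterwards; flaky network calls
# get a bounded retry. Generic pytest sketch, not Qodex's internals.
import time
import pytest
import requests

BASE_URL = "https://api.example.test"  # placeholder target

def with_retries(fn, attempts: int = 3, delay: float = 0.5):
    """Retry a flaky call a few times before letting the failure surface."""
    for i in range(attempts):
        try:
            return fn()
        except requests.RequestException:
            if i == attempts - 1:
                raise
            time.sleep(delay)

@pytest.fixture
def order():
    """Prerequisite: create the resource this scenario needs; post callback: delete it."""
    created = requests.post(f"{BASE_URL}/orders", json={"sku": "demo"}).json()
    yield created
    requests.delete(f"{BASE_URL}/orders/{created['id']}")  # cleanup keeps scenarios isolated

def test_order_can_be_fetched(order):
    resp = with_retries(lambda: requests.get(f"{BASE_URL}/orders/{order['id']}"))
    assert resp.status_code == 200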

About updating test scenarios: we map them to your GitHub commits, and when a new commit comes in, we use the diff to figure out whether failing tests are caused by a bug or by a new feature.
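
A rough sketch of that diff-based triage, assuming a mapping from each failing test to the source file it covers (the mapping and labels are illustrative, not the product's actual logic):

```python
# Sketch of diff-based triage: if the commit touched the code a failing test
# covers, flag the test for regeneration; otherwise treat the failure as a
# likely regression. The test-to-file mapping is assumed (e.g. from coverage).
import subprocess

def files_changed_in(commit: str) -> set[str]:
    """List files touched by a commit."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{commit}~1", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(out.splitlines())

def triage(failing_tests: dict[str, str], commit: str) -> dict[str, str]:
    """Map each failing test to 'update test' or 'likely bug' based on the diff."""
    changed = files_changed_in(commit)
    return {
        test: "update test (feature changed)" if covered_file in changed else "likely bug"
        for test, covered_file in failing_tests.items()
    }

if __name__ == "__main__":
    # failing test name -> source file it covers
    print(triage({"test_create_order": "api/orders.py"}, "HEAD"))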