Show HN: I created an AI-powered Python testing suite that writes its own tests
*The Problem*
As a developer, I've always found writing and maintaining a robust test suite to be one of the most time-consuming and challenging aspects of software development. It's often difficult to think of all the possible edge cases and to ensure that your tests are actually effective at catching bugs.
*The Solution*
To address this, I've created an MCP server that leverages both Google's Gemini AI and BAML (Boundary ML) to provide a suite of intelligent testing tools. The server is built on the FastMCP framework and can be easily integrated into your existing workflow.
*Technical Deep Dive*
Here's a breakdown of the key features and how they work:
* *Hybrid AI Approach:* The project combines the strengths of BAML and Gemini. BAML handles structured test generation, ensuring the output is always in a consistent, parseable format, while Gemini's language-understanding capabilities allow it to generate creative and challenging test cases.
* *Intelligent Unit Test Generation:* The unit test generator uses AI to create a comprehensive suite of tests for your Python code. It automatically identifies edge cases, error conditions, and other potential sources of bugs. The generated tests are written using the `unittest` framework and include proper assertions and error handling.
* *AI-Powered Fuzz Testing:* The fuzz tester uses AI to generate a diverse range of inputs to test the robustness of your functions. It can generate everything from simple edge cases to malformed data and large inputs, helping you to identify potential crashes and other unexpected behavior.
* *Advanced Coverage Testing:* The coverage tester uses a combination of AST analysis and AI-powered test generation to achieve maximum code coverage. It identifies all possible branches, loops, and exception paths in your code and then generates tests to cover each of them.
* *Intelligent Mutation Testing:* The mutation tester uses a custom AST-based mutation engine to assess the quality of your existing test suite. It generates a series of small, syntactic changes to your code (mutations) and then checks whether your tests detect them. This helps you identify gaps in your test coverage and improve the overall effectiveness of your tests.
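To make the unit test generation concrete, here's an illustration of the *shape* of output the tool aims for (this is a hand-written example, not actual tool output; `divide` and the test names are hypothetical): a plain `unittest` suite covering the happy path, a boundary, and an error condition.

```python
import unittest

def divide(a, b):
    """Toy function under test (hypothetical example)."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(divide(10, 2), 5.0)

    def test_negative(self):
        self.assertEqual(divide(-9, 3), -3.0)

    def test_zero_divisor_raises(self):
        # Edge case: a generated suite should cover error paths too
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```

Because the output is a standard `unittest.TestCase`, it drops straight into an existing test runner with no extra tooling.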
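The fuzz-testing loop itself is simple; the AI's job is proposing interesting inputs. Here's a minimal sketch of the harness with a fixed input list standing in for model-generated candidates (the `parse_age` target is hypothetical): expected failure modes are absorbed, and anything else is recorded as a potential bug.

```python
def parse_age(value):
    """Hypothetical target: parse a user-supplied age string."""
    age = int(value)
    if age < 0 or age > 150:
        raise ValueError("age out of range")
    return age

# The real tool asks the model for inputs; this fixed list just
# illustrates the kinds of cases it aims for: empty strings,
# whitespace, out-of-range values, huge inputs, malformed data.
fuzz_inputs = ["42", "", "  7 ", "-1", "9" * 10_000, None, "NaN", "0x1F"]

crashes = []
for candidate in fuzz_inputs:
    try:
        parse_age(candidate)
    except ValueError:
        pass  # expected, documented failure mode
    except Exception as exc:  # anything else is a potential bug
        crashes.append((candidate, type(exc).__name__))

print(crashes)  # → [(None, 'TypeError')]: int(None) was never handled
```

The `None` case surfaces a real gap: `parse_age` validates the parsed number but never guards against a non-string input.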
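The AST side of the coverage tester can be sketched in a few lines with Python's standard `ast` module: walk the tree, tally the branch points, and you have the checklist of constructs that generated tests must exercise (the `classify` function here is a made-up example).

```python
import ast

source = """
def classify(n):
    if n < 0:
        return "negative"
    for digit in str(n):
        if digit == "7":
            return "lucky"
    try:
        return "ordinary" if n else "zero"
    except TypeError:
        return "invalid"
"""

# Tally the constructs a coverage-driven generator must exercise.
branch_points = {"if": 0, "loop": 0, "except": 0}
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.If):
        branch_points["if"] += 1
    elif isinstance(node, (ast.For, ast.While)):
        branch_points["loop"] += 1
    elif isinstance(node, ast.ExceptHandler):
        branch_points["except"] += 1

print(branch_points)  # → {'if': 2, 'loop': 1, 'except': 1}
```

Note that the ternary `"ordinary" if n else "zero"` is an `ast.IfExp`, not an `ast.If`, so a production version would track conditional expressions separately.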
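An AST-based mutation engine is less exotic than it sounds. Here's a minimal, self-contained sketch of the idea (not the tool's actual engine): an `ast.NodeTransformer` that flips `==` to `!=`, plus a tiny "suite" that kills the mutant.

```python
import ast

class FlipComparisons(ast.NodeTransformer):
    """One illustrative mutation operator: swap == for != (and back)."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        swapped = {ast.Eq: ast.NotEq, ast.NotEq: ast.Eq}
        node.ops = [swapped.get(type(op), type(op))() for op in node.ops]
        return node

source = "def is_even(n):\n    return n % 2 == 0\n"

# Compile the original and the mutant side by side.
original_ns, mutant_ns = {}, {}
exec(compile(ast.parse(source), "<orig>", "exec"), original_ns)

mutant_tree = ast.fix_missing_locations(FlipComparisons().visit(ast.parse(source)))
exec(compile(mutant_tree, "<mutant>", "exec"), mutant_ns)

# A test suite "kills" the mutant if at least one assertion fails on it.
def suite(is_even):
    return is_even(4) and not is_even(3)

print(suite(original_ns["is_even"]))  # True: the suite passes on the original
print(suite(mutant_ns["is_even"]))    # False: the suite detects the mutant
```

A mutant that survives every test points at an assertion you're missing; the real engine applies many such operators (arithmetic, boolean, boundary) across the whole module.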
*Call to Action*
I'm still actively developing the project, and I would love to get your feedback. You can find the source code on GitHub: https://github.com/jazzberry-ai/python-testing-mcp
I'm particularly interested in hearing your thoughts on the following:
* Are there any other testing tools that you would like to see added to the suite?
* Have you found any interesting bugs or edge cases using the tool?
* Do you have any suggestions for improving the prompts or the AI models?
Thanks for reading, and I look forward to hearing from you!