Ask HN: What's the worst part of web E2E testing?

3 enekesabel 2 7/20/2025, 4:34:55 PM
Hi everyone!

I'm a front-end engineer with ~10 years of experience, but also a quality freak. I always end up becoming the testing evangelist at every company I join, taking up the task of cleaning up the test architecture and fighting for quality. Thanks to this "hobby," I've had my fair share of writing e2e tests using various frameworks (Codeception, Cypress, Playwright), but I've never felt satisfied.

To me, it feels like we are trapped between two extremes: platforms that demand building an enterprise-grade test pyramid, and a bunch of no-code/low-code AI tools that promise magic but just dumb down the process and produce unmaintainable garbage.

This has led me to start a side project to see if I can find a better way to tackle these common pain points. To help me get a clearer picture for my project, I want to know if I'm the only one seeing this. I'd appreciate it if you could share your thoughts on the following:

- Why are we stuck in a loop of brittle tests? One small UI change, and locators start breaking everywhere. Should we just accept this fragility as the price for E2E testing, or are we doing something fundamentally wrong?

- Why do AI testing tools treat us like we're dumb? The choice seems to be between a shallow assistant that only covers the happy path or a black box that says, "just trust me." As a professional, I want to think, solve hard problems, and do the job an AI can't. Where are the tools that augment expertise instead of replacing it with a superficial outcome?

- How can we keep test code clean and scalable? There are awesome patterns like the Page Object Model or the Screenplay Pattern, but they require a huge upfront time investment and software design skills. Most of the teams I worked with didn't even know them or didn't want to go the extra mile. They usually just copy-paste test code until it becomes unmaintainable, and then stop testing for good.
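
For context, this is roughly what I mean by those patterns: a thin class per page that owns the locators and user actions, so a small UI change only breaks one file instead of every test. A minimal Playwright-flavored sketch, with invented selectors and field names:

    // loginPage.ts: hypothetical page object for a login screen
    import { type Page, type Locator, expect } from '@playwright/test';

    export class LoginPage {
      readonly email: Locator;
      readonly password: Locator;
      readonly submit: Locator;

      constructor(private readonly page: Page) {
        // Test-dedicated attributes instead of brittle CSS chains
        this.email = page.getByTestId('login-email');
        this.password = page.getByTestId('login-password');
        this.submit = page.getByRole('button', { name: 'Log in' });
      }

      async login(email: string, password: string) {
        await this.email.fill(email);
        await this.password.fill(password);
        await this.submit.click();
      }

      async expectError(message: string) {
        await expect(this.page.getByRole('alert')).toContainText(message);
      }
    }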

Apart from this "questionnaire", I would also love to hear any stories, anecdotes, and just your overall feeling about the state of e2e testing and your relationship with it.

Thanks in advance for your time and insights!

Comments (2)

austin-cheney · 3h ago
Common pain points for me with test automation:

* slow performance of the baseline application

* flaky tests. These are tests that assert on the wrong dynamic data in a static way, or that have timing issues which introduce race conditions against the tests

* poor negative tests and false positives

* unreliable communication between the test harness and the test subject, or interference from the test subject's other communication channels

testl33t · 1h ago
This depends a lot on how an organization's engineering teams are structured. But here are some tips:

1. Put your E2E tests in the same solution as the project under test if you can. If the product changes in a way that makes your tests start "failing", then make sure the developer tasked with changing the product also changes the tests. "Existing tests pass" should be part of some kind of definition of done. This also makes incorporating E2E tests into a CI/CD pipeline much easier than keeping them in a separate repo.

2. Understand the waiting game. Have at least one person on the team who deeply understands UI race conditions and how to handle them. "Auto waiting" features like the ones in Playwright and other frameworks are great, until they don't work the way you want them to :). I much prefer the flexibility of the explicit wait pattern in WebDriver. And if you're any good at what you do, rolling your own "autowait" that's tailored to your specific loading strategies is not rocket surgery (see the sketch below).
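
To make that concrete, the "roll your own" version is mostly a polling loop around whatever readiness signal your app actually exposes. A bare-bones sketch in plain TypeScript (the pendingRequests signal is made up; use whatever your loading strategy provides):

    // waitFor.ts: minimal explicit-wait helper, illustrative only
    export async function waitFor<T>(
      probe: () => Promise<T | null | undefined>,
      { timeoutMs = 10_000, intervalMs = 100 } = {}
    ): Promise<T> {
      const deadline = Date.now() + timeoutMs;
      while (Date.now() < deadline) {
        try {
          const result = await probe();
          // Anything non-null counts as "ready"; tighten this for your app
          if (result !== null && result !== undefined) return result;
        } catch {
          // swallow and retry until the deadline
        }
        await new Promise((resolve) => setTimeout(resolve, intervalMs));
      }
      throw new Error(`waitFor timed out after ${timeoutMs}ms`);
    }

    // Usage idea (WebdriverIO-style): wait on an app-level "idle" signal, not just element presence.
    // await waitFor(async () => (await browser.execute(() => (window as any).pendingRequests === 0)) || null);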

3. Choose the right tools. Webdriver.io is miles ahead of Playwright and Cypress in terms of testing frameworks. It has the flexibility of Selenium WebDriver behind it and the performance of the new WebDriver BiDi APIs, while also having all the utility of the other frameworks: API testing, video recording, etc. Those bells and whistles marketed by Playwright were solved long before Playwright hit the community.

On that same note, make sure you choose the right language. If your front-end is written in TypeScript, then use TypeScript. There's plenty of "back-end" functionality in Node.js. I have no idea why I still see teams write massive test suites in Java or C# while barely scratching the surface of what those languages offer for this type of programming. E2E tests are pretty simple if you follow one of the patterns you mentioned. (Btw, if you're building an SPA, then the Page Object Model is not what you want. Prefer a Component Object Model instead; it's the same idea, but focused on smaller, reusable components rather than pages. There's a rough sketch below. I see this a lot: authors don't understand why they can't port part of one page to another area of the code where they need it. It's because they failed to account for the component-based nature of modern SPA front-ends.)
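
For what it's worth, a component object is just a page object scoped to a root element, so the same class can be reused wherever the component shows up. Rough WebdriverIO-flavored sketch, with invented selectors:

    // datePicker.component.ts: hypothetical component object for a reusable date picker
    import { $ } from '@wdio/globals';

    export class DatePickerComponent {
      // Scope every lookup to the component's root so the same class
      // works on any page, or several times on one page.
      constructor(private readonly rootSelector: string) {}

      private get root() {
        return $(this.rootSelector);
      }

      async open() {
        await this.root.$('[data-testid="datepicker-toggle"]').click();
      }

      async pickDay(day: number) {
        await this.open();
        await this.root.$(`[data-testid="day-${day}"]`).click();
      }
    }

    // Reused on two different pages:
    // const checkout = new DatePickerComponent('[data-testid="delivery-date"]');
    // const profile  = new DatePickerComponent('[data-testid="birthday"]');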

4. Parallel by default. I've written frameworks that run 600+ tests in under 20 minutes. Depending on your infra, you should be able to scale tests at the click of a button. Technically, it's possible to run 100s, if not 1000s, of tests at the same time. This stops everyone wasting their time waiting for the results of "the big regression run". It also forces you to maintain the rule of totally independent tests: make sure your tests don't depend on shared database state (always create new data, or properly seed the db on demand) and you're gold. There's a rough sketch of per-test seeding below.
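
Per-test seeding usually boils down to: every test creates its own uniquely named data through an API or seed script, so nothing collides when hundreds of tests run in parallel. Sketch with an invented /api/test-seed endpoint:

    // seed.ts: create isolated test data per test (endpoint and payload shape are hypothetical)
    import { randomUUID } from 'node:crypto';

    export interface SeededUser {
      id: string;
      email: string;
      password: string;
    }

    export async function seedUser(baseUrl: string): Promise<SeededUser> {
      // Unique per test, so parallel runs never fight over the same row
      const email = `e2e-${randomUUID()}@example.test`;
      const response = await fetch(`${baseUrl}/api/test-seed/users`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ email, password: 'Secret123!' }),
      });
      if (!response.ok) {
        throw new Error(`Seeding failed: ${response.status}`);
      }
      return response.json() as Promise<SeededUser>;
    }

    // In a test: const user = await seedUser(process.env.BASE_URL!);
    // Log in as that user; no other test can touch their data.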

5. Invest in your team's training. Far too often, automation goes to a QA member who is just technical enough to make something happen, but not technical enough to understand how easy some things are for, say, a software engineer. The QA person watches one course, maybe copies some code from GitHub repos, or reads some blog posts. They then spread what they learned around to the rest of the team. The rest of the team "seems" to become productive at building the suite out. But without anyone digging into what these tools do, how they work, or having ever made a web app themselves, that suite will be toast in a couple of years. Make sure that at least one of those QA people has written a web app before. It doesn't have to be a full-blown thing; they just need to understand where business logic lives, what these frameworks do, how easy it is to add data-testids, etc.