
DEV Community

Luthfi Ferdian

I Let AI Write My Entire Test Suite — Here's What It Missed

Introduction

As an SDET, writing test cases is one of my core responsibilities. What we test and how we test it directly shapes the quality of what we ship. So when AI tools started promising to automate test case generation, I had to try it. I believe writing test cases 100% manually will soon be outdated and counterproductive, so I gave an AI my PRD and TCD and asked it to generate the full suite. What came back in seconds would have taken me two to three days. Some of it was genuinely impressive — but some of it would have let real bugs slip into production.

What AI Got Right

The result stunned me. That said, I wouldn't trust it with 100% of the creation. Yes, AI can be wrong; yes, it can put incorrect assumptions on the table; and yes, it can absolutely produce test cases so generic that they slow creation down rather than speed it up. In my experience, AI is mostly about context. Too little context and the results are generic, padded out with the AI's own assumptions. Too much context, on the other hand, can also reduce accuracy, because the model has too many things to weigh — and not all of that context is actually needed for the task. With the right context, I would say at least 80% of the test case creation effort can be handled by the AI.
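To make the "right context" point concrete, here is a minimal sketch of what scoping context can look like in practice. Everything in it — the section names, the feature, the PRD snippets — is invented for illustration; the idea is simply to hand the AI only the PRD sections relevant to the suite you want, rather than dumping the whole document:

```python
# Hypothetical sketch: scope the context you give the AI instead of
# pasting the entire PRD. All names and PRD text below are illustrative.

RELEVANT_SECTIONS = {"checkout_flow", "payment_validation"}  # what this suite covers

def build_prompt(prd_sections: dict, feature: str) -> str:
    """Assemble a test-generation prompt from only the PRD sections
    that describe the feature under test."""
    context = "\n\n".join(
        text for name, text in prd_sections.items() if name in RELEVANT_SECTIONS
    )
    return (
        f"You are an SDET. Using ONLY the requirements below, "
        f"write test cases for the '{feature}' feature.\n\n{context}"
    )

prd = {
    "checkout_flow": "Users may pay by card or wallet...",
    "payment_validation": "Card numbers must pass a Luhn check...",
    "marketing_banner": "A seasonal banner appears on the home page...",  # noise
}

prompt = build_prompt(prd, "payment_validation")
assert "Luhn" in prompt               # relevant requirement is included
assert "banner" not in prompt         # irrelevant context stays out
```

Filtering like this is crude, but it captures the trade-off above: enough context to be specific, not so much that the model drowns in irrelevant requirements.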

What AI Missed

Though it helps a lot, I found that human intervention is still required (at least for now 😂). AI-generated test cases still fall short in a couple of areas. The first is integrating the current PRD with the whole system. Many PRDs are essentially about enhancing existing capabilities of a product — changing one part of a much larger system. AI still struggles to understand how the current PRD benefits the whole system and how the end-to-end flow is affected by it. The second is generating proper test steps and test data. Because the AI doesn't grasp the full system flow, it often produces very generic steps — steps that fail to capture the personalized flow of the system. Test data, meanwhile, is critical to testing; without it, testing is simply impossible. Some test cases need specific test data to be set up in advance, and AI still lacks the knowledge to prepare it.
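Here is a hypothetical example of the kind of data-dependent test I mean. The domain (loyalty tiers and point thresholds) is invented, but the pattern is real: the test only has value because of deliberately prepared boundary data, which is exactly what generic AI output tends to skip:

```python
# Hypothetical system under test: a loyalty discount by accumulated points.
# The tiers and thresholds are invented for illustration.

def discount_for(points: int) -> float:
    """Return the discount rate for a customer's point balance."""
    if points >= 1000:
        return 0.15
    if points >= 500:
        return 0.05
    return 0.0

def test_discount_at_tier_boundaries():
    # The interesting cases live at the exact tier boundaries, which
    # require purposely chosen test data — not arbitrary "valid" values.
    assert discount_for(499) == 0.0
    assert discount_for(500) == 0.05
    assert discount_for(999) == 0.05
    assert discount_for(1000) == 0.15
```

An AI suite will happily test `discount_for(200)` and `discount_for(2000)`; it's the 499/500 and 999/1000 pairs that catch off-by-one bugs, and choosing them takes knowledge of the spec.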

The Bottom Line

So should you use AI to write your test cases? Absolutely — but treat it as a first draft, not a final product. Here's my rule of thumb: let AI handle the happy-path and standard validation cases (it's great at those), but always review what it generated. AI got me about 80% of what I needed. That last 20% — the part that requires system knowledge, domain context, and real test data — is exactly where SDETs prove their value. The testers who learn to work with AI on the 80% and focus their energy on the 20% will be the ones who thrive.
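For contrast, this is the kind of standard validation case AI generates reliably — spec-driven, self-contained, no system knowledge needed. The validator and its rule are illustrative, not from any real product:

```python
import re

# Hypothetical "standard validation" case: a simple email format check.
# This is the 80% — obvious positive and negative inputs from the spec.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(value: str) -> bool:
    """Return True if value roughly matches local@domain.tld."""
    return bool(EMAIL_RE.match(value))

def test_email_validation_happy_path():
    assert is_valid_email("user@example.com")
    assert not is_valid_email("not-an-email")
    assert not is_valid_email("user@@example.com")
```

Cases like this are cheap to review and safe to delegate; your energy belongs in the 20% the previous section describes.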

Call to Action

Not leveraging AI in your daily work as an SDET would be a missed opportunity. Even though there's room for improvement, AI already saves significant time on the repetitive parts of test creation. Learning to work with it — trying, iterating, and figuring out where it fits in your workflow — is a process I genuinely enjoy. How's your experience using AI in your daily tasks?
