
“GenAI won’t replace testers—but testers who can use and implement GenAI effectively will replace those who can’t.”
That quote from Michael Larsen, Development Test Engineer at ModelOp, captures the central theme of his recent feature in The New Stack: the growing importance of human discernment in AI-powered testing environments.
In the article, “Building Trust in AI-Driven QA: Ensuring Transparency and Explainability with GenAI,” Larsen brings a practical, grounded perspective to the conversation around GenAI in software testing.
While many are focused on speed and automation, Larsen reminds us that AI is “frequently wrong”—and worse, “so confident in its wrongness.” Without careful oversight, this can lead to a phenomenon he calls “Automated Irresponsibility,” where AI-generated outputs appear complete and correct, masking bugs, bias, or false positives that a human would have caught.
A Case for Human-Centered AI
Larsen draws attention to the areas where GenAI still struggles: nuance, shifting context, and edge cases. These are precisely where human testers add irreplaceable value. He advocates for a culture of skepticism and inquiry, encouraging testers to approach GenAI like a “bright but unreliable assistant,” capable of incredible support but not yet ready to take the wheel.
The article also highlights the importance of transparency and clear communication in how organizations deploy AI tools. Passing off AI-generated work as human-made, Larsen warns, might offer short-term efficiency—but it erodes trust over time.
AI Governance in the Real World
These insights go beyond QA. They speak directly to what ModelOp helps enterprises achieve every day: building effective, responsible AI systems with the right balance of automation and accountability.
ModelOp’s AI Governance software gives organizations visibility into how, where, and to what extent GenAI and other models are being used across the enterprise. We help companies move fast—without cutting corners on oversight, compliance, or transparency.
Larsen’s contribution underscores what we believe at ModelOp: the future of enterprise AI doesn’t belong to machines alone. It belongs to the people and teams who know how to use them well.