Stop Vibing, Start Eval-ing: EDD for AI-Native Engineers

Source: DEV Community
When I was doing traditional development, I had TDD. I wrote a test, it passed or failed, done. But when you're working with LLMs, the output is different every time you run it. You ask the model to generate a function and sometimes it's perfect, sometimes it changes the structure, sometimes it just ignores part of the spec. You can't just assert(output == expected) because the output is probabilistic; it's never exactly the same.

That's where EDD comes in: Eval-Driven Development. The idea is simple. Instead of testing whether something works, yes or no, you measure how well it works on a scale from 0 to 100%. And the important part is that you define what "good" means before you start building.

How it works in practice

Say I'm building a support agent for a fintech app. Before I write a single prompt, I sit down and think: ok, what does success look like here? The agent should resolve at least 80% of queries without escalation, it should be factually accurate above 95%, it should respond in under 2
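To make the contrast with a pass/fail assert concrete, here's a minimal sketch of what an eval harness for those criteria could look like. Everything here is hypothetical: `fake_agent` is a stand-in for the real LLM call, the canned responses are invented, and the criteria names just mirror the success metrics above.

```python
# Minimal eval-harness sketch: instead of one assert(output == expected),
# each test case is scored per criterion and the results are aggregated
# into percentages that can be compared against the thresholds you
# defined up front (e.g. >= 80% resolution, >= 95% accuracy).

def fake_agent(query: str) -> dict:
    """Stand-in for a real LLM call (hypothetical canned responses)."""
    canned = {
        "reset password": {"resolved": True, "accurate": True},
        "refund status": {"resolved": True, "accurate": True},
        "close account": {"resolved": False, "accurate": True},
    }
    return canned.get(query, {"resolved": False, "accurate": False})

def run_eval(cases: list[str]) -> dict:
    """Score every case 0/1 per criterion, then report fractions."""
    results = [fake_agent(q) for q in cases]
    n = len(results)
    return {
        "resolution_rate": sum(r["resolved"] for r in results) / n,
        "accuracy_rate": sum(r["accurate"] for r in results) / n,
    }

if __name__ == "__main__":
    scores = run_eval(["reset password", "refund status", "close account"])
    # Compare against the thresholds defined before building, not
    # against a single expected string.
    print(f"resolution: {scores['resolution_rate']:.0%} (target >= 80%)")
    print(f"accuracy:   {scores['accuracy_rate']:.0%} (target >= 95%)")
```

The point is the shape, not the scoring logic: in a real setup each criterion would be graded by a model or a human rubric, but the output is still a percentage you track against targets you wrote down first.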