‘TDD’ Means Something Different at Lab Zero
Lab Zero uses a methodology with roots in Test-Driven Development (TDD). In Test-Driven Development, you write the automated tests before you write the code they will test, so your first ‘success’ is a test that runs and fails. You then write code until the test passes, and repeat until you have a passing test for every requirement. At Lab Zero, we reach the same end state, but we don’t necessarily write our tests first. Rather, we write the tests as we go, and deliver code with all tests present. At Lab Zero, TDD stands for ‘Test Delivered Development’. This post is a hypothetical argument between a testing Skeptic and a Lab Zero Advocate of Test Delivered Development. We invite you to discuss.
The Debate: Skeptic and Advocate
Hey, Advocate--on this project, our Agile team will start working alongside former Waterfall developers. You’re telling me that, in addition to setting up new development environments, we need to pair with their developers for an extra two weeks in order to teach them how to write automated tests? Writing tests takes time to learn. Why do we have to bring those new developers into our testing camp?
Okay Skeptic--I understand you want to see code delivery start as soon as possible. But the up-front training is what it takes to instill a new mindset in the developer. When you think about the test at the same time as the code, you think less about all the things your code could do, and focus clearly on what your code must do to pass the test.
To understand the difference in mindset, suppose you sit down to start coding the story ‘User picks starting date’. Ten hours later, you have chosen a date picker and you find yourself trying different configurations to get it to exclude US holidays from selection. You have no idea whether that is a requirement, but the configuration file is right there, so you start playing with it.
Or suppose you slot in the default date picker. You pick today’s date, it works, and you deliver the story. But you didn’t realize that it fails for every date other than today. In both cases, you never set a testable goal for yourself, so you stopped when you had something that worked in one case. These are both examples where coding with the mindset of a tester would help you focus on the objective your code must meet.
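To make ‘testable goal’ concrete, here is a minimal sketch of what delivering a test alongside that story might look like. The `StartDatePicker` class and its API are hypothetical names invented for illustration (not Lab Zero’s actual code), and the tests use Jest-style syntax:

```typescript
// Hypothetical, minimal implementation of the "User picks starting date" story.
// Invented for illustration only.
class StartDatePicker {
  private selected: Date | null = null;

  pick(date: Date): void {
    this.selected = date;
  }

  get value(): Date | null {
    return this.selected;
  }
}

// Jest-style tests stating the objective the code must meet.
describe("User picks starting date", () => {
  it("accepts today's date", () => {
    const picker = new StartDatePicker();
    const today = new Date();
    picker.pick(today);
    expect(picker.value).toEqual(today);
  });

  it("accepts dates other than today", () => {
    const picker = new StartDatePicker();
    const nextWeek = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000);
    picker.pick(nextWeek);
    expect(picker.value).toEqual(nextWeek);
  });
});
```

Notice what the tests do and don’t say: nothing about excluding US holidays, so you don’t burn ten hours on an unstated requirement, and an explicit case for dates other than today, so the one-case ‘it works’ version never ships.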
Okay--I get it. But here’s something you can’t ignore. With all these automated tests you’re writing, won’t it always take longer to get to Code Complete?
I love that you still think in terms of “Code Complete”. But what does Code Complete mean? Code Complete is the instant when developers agree that there is a non-zero chance that the system will function correctly. It will take longer to reach this milestone if you write automated tests as you go--but is the Code Complete milestone even valuable? Code Complete assumes that there is a single known time in the future when the code will be stable. Code Complete is one of the steps on the path toward stability and release. But in Agile we release early and often. To do this we have to reduce the time it takes to get to stability. With our method, you are always one reversion away from Code Stability. As a result, you can make one change and become confident very quickly that the code is stable enough for release. [Aside: we know producers of ‘enterprise’ software that measure the time between Code Complete and Release in years.]
Here’s one idea that’s still valid after all this time: the earlier you find a bug, the cheaper it is to fix. With our method, you find bugs early. Because the tests are written by developers, more bugs are found in the developer’s own environment before the code even gets to integration.
You know I like metrics! Time to Code Stability and Mean Time to Bug Discovery--those are two good ones. It’s easy to authorize my dev team to spend a few extra weeks learning to write tests as they go. But there’s an organizational change here too: QA’s job has to change. They need to understand how this works so they don’t duplicate effort, and they need to know which functions the automated tests don’t reach, so they can test those functions manually.
You are correct. Can I tell you a story? We once worked with a client where Development and QA had been siloed for decades. The QA team had developed a list of ‘secret’ techniques for quickly turning up a lot of use-case-specific bugs. In effect, they had written automated tests and were keeping them to themselves, so they could show early progress finding bugs in each QA cycle. But the bugs they found this way were predictable and easy to fix, and because they were all of a well-known type, detecting them early didn’t reduce the cost of fixing them. With our methods, QA can stop performing the same mindless tests over and over for each release. Instead, they spend more time identifying the automated tests that are still missing. They start working smarter, not harder--and they eventually embrace that.
If you’re looking down the road, there are more changes in store as the team grows. The more people work on a codebase, the more likely it is that one person will break something another person built. With automated tests, you find out immediately which test your update caused to fail, and you’re on your way to a fix without interrupting or waiting for anybody else.
That sounds like a benefit. You seem to know what you’re doing. But I’ve heard of cases where old, brittle tests start to fail, and everybody ignores the test suite because of the constant failures.
Yes--old broken tests need to be fixed, and we aren’t afraid to delete a test that doesn’t help us get to stability faster. But in the scenario you’re describing, the codebase has been under this methodology long enough for tests to age and break--and by that point, you have already realized far greater value from refactoring. The longer a codebase is under active development, the more likely it is to be refactored, and the tests themselves define what the old code (and therefore the new code) must do, which makes refactoring much easier.
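As a sketch of how tests anchor a refactor, consider a hypothetical `businessDaysBetween` function (illustrative name and behavior, Jest-style syntax again): the tests pin down the observable behavior, so the implementation underneath can be rewritten freely as long as they stay green.

```typescript
// Hypothetical function under refactor; name and behavior are illustrative.
// Original implementation: counts business days by iterating day by day.
function businessDaysBetween(start: Date, end: Date): number {
  let count = 0;
  const cursor = new Date(start);
  while (cursor < end) {
    const day = cursor.getDay();
    if (day !== 0 && day !== 6) count++; // skip Sunday (0) and Saturday (6)
    cursor.setDate(cursor.getDate() + 1);
  }
  return count;
}

// These tests define what the old code (and therefore the new code) must do.
// A refactor to, say, a closed-form calculation must keep them passing.
describe("businessDaysBetween", () => {
  it("counts the weekdays in a full week as five", () => {
    // Mon 2024-01-01 up to Mon 2024-01-08
    expect(businessDaysBetween(new Date(2024, 0, 1), new Date(2024, 0, 8))).toBe(5);
  });

  it("counts a weekend-only span as zero", () => {
    // Sat 2024-01-06 up to Mon 2024-01-08
    expect(businessDaysBetween(new Date(2024, 0, 6), new Date(2024, 0, 8))).toBe(0);
  });
});
```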
If you wait long enough, you’re also likely to see why it’s good to have observable, well-tested code in production. Imagine your billion-dollar-per-day transaction processor is behaving strangely at 9 am. Observability helps you find the problem by 10 am and have a hotfix by 10:15; your automated suite can then validate that fix within the hour. Would you rather ship a release that is one hour from being 95% validated, or a release that is two weeks from being 100% validated?
Is that a trick question? I want that hotfix right away! Your arguments are convincing. But I have one last objection. If Test Delivered Development is so helpful, then why don’t you go ahead and do Test-Driven Development, and write all the tests before you start coding? It seems like the time required is the same.
That’s the question I was waiting for. It comes down to this: you get 80% of the benefit of Test-Driven Development simply by requiring that the tests be complete by the time the code is checked in. For our clients, the remaining benefit is not worth the effort of forcing a fundamental shift in the way all of their developers work. Some developers take to Test-Driven Development naturally, but it is a rigid standard that doesn’t suit every working style. Some developers like to do just enough exploratory coding to understand the requirements better--or to switch back and forth between testing and coding. Those developers would not function well under rigid TDD.
Well, I’m convinced. A final question: Have you ever completed a project without writing any tests?
Yes. Here’s the deal: the longer you intend to have code under active development, the more automated testing pays off. If you build a prototype that you know will be thrown away in three weeks, and you have the same developer working on it the whole time, then you shouldn’t write tests for that project. But in our experience, about three out of four ‘throw-away’ prototypes are launched as products. So be careful when you decide to leave testing out of your plan.
Conclusion
We’ve found that the pros of Test Delivered Development far outweigh the cons. No client has told us at the end of a project that they wished we had skipped writing automated tests, written fewer tests, or let their own developers off the hook. For us and our clients, Test Delivered Development has been an effective compromise between no automated testing and full Test-Driven Development. When we go back to these clients, we find them still delivering tests with code.
Can We Help You?
Is your project stalled because it grew too large too quickly? Are you worried that you’re in too deep to start writing automated tests? Are you considering Test-Driven Development, but hesitating because of the size of the change? We can help you make the transition to automated testing.
Continue the conversation.
Lab Zero is a San Francisco-based product team helping startups and Fortune 100 companies build flexible, modern, and secure solutions.