You may have noticed the recent attention given to TDD and whether or not it has been proven effective. I think the entire “conversation” is great. I appreciate everyone’s position, primarily because the parties appear to be presenting their opinions and interpretations of the facts with open minds. If I were to take sides, though, I would be in favor of the “Not” argument, mostly for the reasons I gave in a comment on a recent post by Jon Galloway:
You know, it is pretty easy to read the popular blogs (especially the ones which make references to “alpha” programmers) and think their way is the right way. I take a lot of pride in [dev stuff], but if I hadn’t been in this business for a while, what-every-developer-must-know type posts could really get me off track.
Please do not interpret this as a knock on the “popular blogs.” I like them a lot, but what I love is the fact that Jacob Proffitt took the time to challenge the evidence for TDD’s effectiveness. It isn’t for a lack of trust; it’s just that, as developers, it is our job to question everything. Equally, it is our job to validate everything.
I literally just wrapped up a meeting with one of my lead developers. We are just over a week away from rolling Version 2 of an application to our Production environment, and today, one day after our final QA release, he discovered where Version 1 of the product lives and what it is *really* doing. Why did it take so long to find out the real story? Well, he (really we; I am equally responsible) trusted the documentation and what past developers shared with him…and now we are in a bit of a pickle. If only we had verified earlier. Fortunately, this kind of thing doesn’t happen often, but I’d be willing to bet that something questionable crosses each of our paths every single day.
Back to the TDD effectiveness conversation. Here’s my semi-confirmation bias story as it relates to TDD…
I worked at a shop where fingers were pointed towards Development no matter what the technical issue was. The web server lost connectivity to the app server. Database XYZ hasn’t been backed up for 8 weeks. The L Drive has only 97KB available. Two new defects were just introduced into the Production Environment after four months of testing. A third party web site is down. Quick, call the Developers. Don’t call Networking. Don’t call the DBAs. Don’t call the NT Admins, the QA group or the third party site owners. Call the Developers. Maybe you are familiar with such a place? Anyhow, since Development clearly had to straighten up its act, we needed to provide evidence (there’s that word again) that we were putting more care into our code and validating our applications prior to release. Thus, TDD was introduced to my shop. We found the perfect guinea pig project: an Enterprise Messaging System (read: a glorified email engine). It was simply a service that sat around waiting to complete requests issued by a handful of web applications. For this app, we did true TDD, actually writing the tests before the code. All the tests passed, we rolled the service to QA and, lo and behold, not a single defect was logged against the Enterprise Messaging System. It was deemed perfect. The greatness of TDD was soon shared at the quarterly IS meeting, using the first-ever defect-free application as evidence of its brilliance. I think those who pushed for TDD were hoisted onto shoulders and paraded around the conference room, as Development had finally gotten its sh*t together.
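For anyone who hasn’t seen the “write the tests before the code” step in practice, here is a minimal sketch in Python. The original service’s language and interface are unknown to me; `MessageQueue`, `enqueue`, and `deliver_all` are hypothetical stand-ins for whatever the messaging service actually exposed:

```python
import unittest

# Step 1: the test is written first, before any implementation exists.
# Running it at this point fails, which is the expected starting state in TDD.
class TestMessageQueue(unittest.TestCase):
    def test_enqueued_message_is_delivered(self):
        queue = MessageQueue()
        queue.enqueue(recipient="ops@example.com", body="disk space low")
        self.assertEqual(queue.deliver_all(), 1)  # one message delivered
        self.assertTrue(queue.is_empty())         # nothing left pending

# Step 2: only after watching the test fail is the simplest
# implementation that makes it pass written.
class MessageQueue:
    def __init__(self):
        self._pending = []

    def enqueue(self, recipient, body):
        # Hold the message until a delivery pass runs.
        self._pending.append((recipient, body))

    def deliver_all(self):
        # Deliver everything pending and report how many went out.
        count = len(self._pending)
        self._pending.clear()
        return count

    def is_empty(self):
        return not self._pending
```

Running `python -m unittest` against a file like this drives the red-then-green cycle: the test fails until the implementation below it exists, then passes.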
Truth be told, instead of praise and congratulations, everyone should have been asking a whole bunch of questions. Was TDD really the reason for the bug-free code, directly or even indirectly? Was a superior product released because time was taken to validate all of the tests? Perhaps merely taking the time to write the tests before coding gave Development a needed boost? These questions certainly couldn’t be answered conclusively either way. And what about the zero defect count providing evidence that TDD was our saving grace? Very simply, there was no relationship between TDD and bug-free software at all. In fact, the service had bugs, but no one other than the developers knew it. All bugs were logged against the calling web apps (as testers couldn’t tell the difference between a web app failure and a service failure), and fixes were applied to the service unbeknownst to the folks looking for the last great Development hope. It was all a matter of sharing only the information that supported Development’s case, as well as stretching the truth a bit.
If you look hard enough, you’ll find this stuff happens every day, whether accidental or intentional. Though it is exhausting, it isn’t a bad idea to question everything. Or not; the choice is yours.