Today I'm going to share an observation I made recently, related to Test-Driven Development (TDD).
I've been studying and practicing TDD for about three years now, mostly while developing in C and C++. I started with a test-after approach using a simple single-header-file unit testing framework, focusing on small modules with very isolated behavior. As soon as they were ready, I would put them right into the system, surrounded by other, non-unit-tested modules making use of that particular tested behavior. This was already a huge step compared to the traditional debug-later programming approach (DLP; see James Grenning's blog), because the main logic could be verified without even touching a debugger.
After a while I discovered the discipline of TDD proper - in particular, as it turned out, the mockist approach. I was now eager to create all units in total isolation, decoupled from each other by interfaces (by that time I had switched over to C++, which offers abstract base classes).
This felt like the right thing to do, hopefully leading to the best possible class design. After some time, though, I noticed one of the drawbacks: every change to an interface forces at least two further changes - one in the test double and one in the production implementation. This feels like redundancy introduced just for the sake of testing.
Some months later I heard about a different approach: using existing (production) classes as collaborators in unit tests. Seriously? I was surprised - somehow, that felt wrong. Fortunately, after some research I found Martin Fowler's article on the topic, which really cleared things up for me.
Fowler makes a distinction between a classical and a mockist approach to TDD. While the classical methodology uses both state and behavior verification, the mockist approach relies solely on the latter. In my experience, behavior verification is what mocking frameworks like Google Mock encourage.
While I had considered it a misuse to instantiate real objects as collaborators in unit tests, Fowler states that many classicist TDDers use test doubles (mocks, stubs, spies, ...) only when working with real objects would be too awkward. Reading those words, this slowly started to make sense to me. The biggest concern I still have relates to the locality of the unit tests: what if many tests fail because of a small change in one of the collaborators? As Fowler also mentions, it may indeed be harder to track down the actual cause of the failure in such a case. Combined with a short TDD cycle, however, the last working state is just a few undo steps away.
For me, an advantage over the mockist approach is that without test doubles there is no need for extra interfaces, which makes the whole design more readable. That said, there definitely are cases where test doubles are necessary. For dependencies on file systems or device drivers, there is often no sensible way to use real objects in unit tests. The same probably applies to layer boundaries inside an application, where mocking another subsystem might be the way to go.
Mocking is obviously not the only possible approach to TDD. I have learned that there is nothing wrong with using real objects in test code. Maybe it's a good idea to stick with real collaborators most of the time and treat test doubles - and mocking in particular - as a last resort.
Are you experienced in TDD? Which approach do you prefer? As this way of writing code is still pretty new to me, I would really like to hear your opinion on the topic. Feel free to contact me on Twitter: @ronalterde.