We do the minimum amount of work necessary to deliver a feature.
This sounds like a bad thing, and it’s not something I’ve often told our clients. But we definitely feel it is the best way to do things. Why?
We use test-driven development, and try to use it ‘properly’. That is: think about what we want to get from this unit of code, then think about the tests, pick one, write the test, and then write just enough code to make it compile. The test needs to fail first to prove that it is valid (or that the feature we’re adding doesn’t already exist!). If we overthink earlier pieces of development we risk never getting past this step. We do the minimum amount of work, write tests that fail, and then write just enough code to make *all* of the tests pass.
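That red–green cycle can be sketched in a few lines. This is a minimal illustration, not code from our projects: Python and the slugify() helper are both hypothetical stand-ins for the feature under development.

```python
# Hypothetical feature under development: a slugify() helper.
# Step 1: write the test first. Running it before slugify() existed
# failed with a NameError, which proves the test is valid and the
# feature isn't already there.
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2: write just enough code to make the test pass. No Unicode
# handling, no punctuation stripping -- no test demands those yet.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

test_spaces_become_hyphens()  # green: the cycle is complete
```

The next requirement would start the cycle again: a new failing test, then the smallest change that turns it green.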
Is this a good thing?
YES! The advantages of properly following test-driven development clearly outweigh the disadvantages.
Yes, there is more code (the additional tests), which is an easily measurable cost. But in terms of productivity, or time taken? I haven’t run controlled tests (odd that I’m writing about TDD without tests!), but subjectively I’m sure that our team is more productive and delivers new features faster.
Development techniques that stress thinking about the problem and the solution have been around for a while, and there is evidence that they are ‘better’ than competing practices. I see the biggest advantage of TDD as exactly this – ensuring that enough thought is put into the solution before a single line of code is written. Developing valid tests that properly exercise a new feature is harder than most developers new to the practice imagine, and the mindset shift from thinking about how to write the code to thinking about how to test the feature encourages thinking about the corner cases and limits; this translates into much better finished code.
How does writing the minimum amount of code to get the tests passing add to this? Well, it’s a side-effect. The majority of the time spent developing a feature goes on the tests and on thinking about the problem. Once that’s done, in most cases it’s almost trivial to write the code. And the temptation to overthink or future-proof is minimised when you are presented with a very finite list of things that the code *needs* to do.
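To make the “finite list” point concrete, here is a hypothetical sketch: the discount() function and its rules are invented for illustration, but the shape is typical. The tests enumerate exactly what is required, so the implementation stays exactly that small.

```python
# The tests are the complete list of requirements so far:
def test_small_orders_pay_full_price():
    assert discount(50) == 50

def test_large_orders_get_ten_percent_off():
    assert discount(100) == 90.0

# Just enough code to pass -- no tiering table, no config flags,
# no currency handling, because no test asks for them yet.
def discount(total):
    return total * 0.9 if total >= 100 else total

test_small_orders_pay_full_price()
test_large_orders_get_ten_percent_off()
```

When a genuinely new requirement arrives, it arrives as a new failing test, and only then does the code grow.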
Another big advantage is that it keeps the code much simpler. My view is that maintainability is the key to good programming. Agile helps, as does keeping things simple (which many agile techniques encourage). And having a lot of simple tests backing simple code increases maintainability further.
Generally, as more features are added the complexity of the system increases, and as the complexity increases, so does the time needed to implement new (bug-free) features. By keeping things simple we flatten that features-versus-complexity curve, so that future features aren’t much more difficult to implement than previous ones.
Initially the system is more complex, as it needs to accommodate the test framework and its supporting patterns (such as dependency injection – which isn’t a bad idea anyway!). But in our experience the crossover point, where the TDD project becomes less complex than the ‘old way’, comes around eight weeks in: a big new feature arrives and goes into the working project with far less upset than the team expects. Often this is a eureka moment for developers new to TDD.
Having used TDD on a number of projects, both small and large, I’m sure that I won’t go back. Agile has been proven to work even in avionics, and I’ve graduated from someone who likes the ideas promoted by TDD and other agile techniques to an absolute advocate.