Test-Driven Development versus Defensive Coding

I'm not sure whether it has been written down anywhere, but the original FDD project strongly believed in defensive coding techniques, such as validating all parameters on entry to a method and throwing an exception for invalid parameters.

Recently, the trend towards test-driven development has pushed developers towards an attitude of "I don't need defensive code, it just slows the runtime performance. I can catch all my errors with my JUnit tests."

I'd like to solicit opinion on this from the community. I haven't made up my own mind on it and I'm open to suggestions.

Is test-driven development a replacement for defensive coding standards enforced at code reviews? Comments please...

Regards,
David

Well, the defensive coding approach

Well, the defensive coding approach on that project predates the project by a long way. It (formally) goes back to 1985 and my time in the IBM Rochester programming laboratory, but it also comes from the knowledge that parameter validation and value tracking are among the most common programming defects.

I sat in on a TDD presentation at SD Best Practices in Boston by the author of one of the TDD books. I never heard the sentiment you're expressing in that session - but then it was only an overview session and there were no examples shown.

FWIW - you could drop TDD into FDD very, very easily. If you wanted to adopt TDD you would drop it into FDD process 4 - Design By Feature.

Unit test is a mandatory step in FDD, but I don't stipulate whether it is done before, at the same time as, or after coding. I only stipulate that it must be done.

So, TDD plugs into FDD very easily and without modification.

However, I'm not endorsing TDD, nor am I getting into the many debates that surround it, such as the badge-engineering of calling programming "design".

Jeff

Integrate TDD into FDD Process #4 Design By Feature

Hi Jeff,

Very timely post. We have just reviewed the FDD process we used on our last project, and one of the outcomes was to factor TDD into the Design By Feature process. The reason I felt this was so important is that I often found myself and the rest of the development team throwing in unit tests at the tail end of Process 5, Build By Feature. Tests were being thought about after development, and we were always rushing to finish them. They did not receive the focus they deserved.

We do not believe in 100% unit testing, simply because in our organization the timelines do not afford it. Many will argue that it will not impede progress, but we have found there is a real cost to develop and maintain tests. Sure, without the tests those costs may simply be shifted to maintenance and bug fixes; however, in our organization that is preferred ... go figure.

Anyway, we do recognize the value of inserting a level of TDD into the Design By Feature step to accomplish the following:

1. Verify the API of the feature being developed, hopefully reducing the amount of refactoring that has to be done later in the project. It is better to catch a poor interface early, before implementation begins.

2. Identify the critical aspects of the feature which 'must' be unit tested. This forces the unit test requirements to be described prior to Build By Feature, so we cannot skip over them under time pressure.

3. This addition to the process makes sense in that the Design By Feature process results in interface stubs being created, and therefore it is possible to create compilable, but not yet successfully executable, unit tests (a sketch follows below). The goal in Build By Feature then becomes to get those unit tests to succeed.
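
A minimal Java sketch of point 3, in JUnit 3 style and with purely hypothetical names (AccountService, calculateBalance, and the expected value are illustrative, not from our project): the stub compiles at the end of Design By Feature, the test compiles against it but fails, and making it pass becomes the goal of Build By Feature.

    // AccountService.java - stub produced during Design By Feature.
    public interface AccountService {
        double calculateBalance(String accountId);
    }

    // AccountServiceImpl.java - compiles, but is not yet implemented.
    public class AccountServiceImpl implements AccountService {
        public double calculateBalance(String accountId) {
            throw new UnsupportedOperationException("not implemented yet");
        }
    }

    // AccountServiceTest.java - written against the stub: it compiles
    // now, fails now, and passing it is the exit goal of Build By Feature.
    public class AccountServiceTest extends junit.framework.TestCase {
        public void testCalculateBalance() {
            AccountService service = new AccountServiceImpl();
            // 100.0 is an illustrative expected value only
            assertEquals(100.0, service.calculateBalance("ACC-1"), 0.001);
        }
    }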

Mark

works for me

Hi Mark,

nice to hear from you!

Sure, this works for me, but I would describe this as simply moving the mandatory unit test task from the Build By Feature process to the Design By Feature process and adjusting the exit criteria of both processes accordingly.

When you do this for your next project, you might want to think about whether you add a unit test milestone per feature (you can now, since you have fixed where in the development sequence unit testing will be done). So you could have seven milestones per feature rather than six, and of course you'll need to adjust the percentages. Even if you leave the milestones as they are (which is perfectly OK too), you would want to adjust the percentages, as the Design Inspection milestone would now include Unit Test and that work has moved (I hope that makes sense!).
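
For illustration only, using the standard published FDD milestone weightings (Domain Walkthrough 1%, Design 40%, Design Inspection 3%, Code 45%, Code Inspection 10%, Promote to Build 1%): a seventh Unit Test milestone might, say, take 5% from Code, giving Code 40% and Unit Test 5%, so the seven milestones still sum to 100%. The exact split is a judgment call for each project.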

Jeff

Tests and Targets

Defensive coding has been around a long time, and (at least for statically typed languages) makes a lot of sense. But at the end of the day, unit tests do not serve the same objective as defensive coding.

Parameter validation (in particular) is useful because most type systems are limited in their semantic expressiveness. For example, you could declare a method Room.setTemperature(float t) in most statically typed languages. But the constraints, such as the fact that the value must be (say) in celsius and between 5 and 45 degrees, are beyond the scope of most (but certainly not all) languages. So parameter validation (as described above) is used to enforce this additional level of checking, to enforce the full semantics of the interface. A unit test (from a functional testing perspective) would check that the room does indeed warm up when the temperature is set to 27 degrees. However, that is not sufficient. In order to ensure that the semantics of the interface are enforced, you would also need to write unit tests for the corner and exceptional cases; e.g., trying to set the temperature to 900 degrees and checking for an exception. So in fact the two are complementary in this sense.
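
A minimal Java sketch of this (the exception type, the exact range check, and the JUnit 3 usage are illustrative assumptions, not prescriptions):

    // Room.java - the type system accepts any float, so the method
    // itself must enforce the narrower contract (celsius, 5-45 degrees).
    public class Room {
        private float temperature;

        public void setTemperature(float t) {
            if (t < 5.0f || t > 45.0f) {
                throw new IllegalArgumentException(
                        "temperature must be 5-45 celsius, got: " + t);
            }
            this.temperature = t;
        }

        public float getTemperature() {
            return temperature;
        }
    }

    // RoomTest.java - the functional case alone is not sufficient; the
    // exceptional case exercises the same contract from the outside.
    public class RoomTest extends junit.framework.TestCase {
        public void testWarmsUpToValidTemperature() {
            Room room = new Room();
            room.setTemperature(27.0f);
            assertEquals(27.0f, room.getTemperature(), 0.001f);
        }

        public void testRejectsOutOfRangeTemperature() {
            Room room = new Room();
            try {
                room.setTemperature(900.0f);
                fail("expected IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // the defensive check failed fast, as intended
            }
        }
    }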

Unit tests appear to be used by XP folk to validate functionality and ensure functional coverage. (Whether this is sufficient is a topic for another debate.) Typically, unit tests are also used for regression testing, to ensure functional changes between releases do not invalidate prior working code. This too does not preclude the use of parameter validation and so forth. In fact, parameter validation can be seen as a form of class invariant, which can be a most effective technique for improving software quality (and quite another beast from unit testing).
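
A rough sketch of an invariant check in Java (names illustrative only): the invariant guards the object's whole state on entry and exit of public methods, generalizing per-parameter validation.

    // BankAccount.java - hypothetical class invariant example.
    public class BankAccount {
        private double balance;

        private void checkInvariant() {
            if (balance < 0.0) {
                throw new IllegalStateException(
                        "invariant violated: negative balance: " + balance);
            }
        }

        public void withdraw(double amount) {
            checkInvariant(); // state valid on entry
            if (amount <= 0.0 || amount > balance) {
                throw new IllegalArgumentException(
                        "invalid withdrawal amount: " + amount);
            }
            balance -= amount;
            checkInvariant(); // state valid on exit
        }
    }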

Another distinction: unit tests are normally black box, while invariants and parameter validation are effectively white box. They serve different purposes and are not mutually exclusive. Unit tests are ultimately one layer of the testing onion, and are very useful - but no panacea. There are plenty of other testing strategies that are just as important.

To the people who say:

"I don't need defensive code, it just slows the runtime performance."

This style of argument can be applied to any number of situations; it all comes down to tradeoffs: speed versus memory, space versus time, convenience versus security, usability versus efficiency, and so on. Hardware is steadily increasing in speed and decreasing in size/cost, so paying a few extra cycles for increased reliability is IMHO certainly worth it. Given the duration of a typical software project, by the time the project is delivered, faster hardware will be available. So it is a false economy to trade a very small performance hit for a potentially very large improvement in reliability. [1]

The idea that all bugs can be caught by unit tests is simply delusional.

Let us remember one of the golden rules of testing: testing can only prove the presence of faults, not their absence. Code inspections (discussed in a recent newsletter) are IMHO a far more effective technique for improving software quality, and are highly complementary to unit testing. There is plenty of evidence to support this.

As for "test-driven development"; at the end of the day, you are not delivering a bunch of tests to the client, you are delivering funcionality. Units tests are an artefact, not a deliverable, so one must wonder what should really be driving your project. Hopefully it is client valued functionality. And unit tests alone cannot ensure complete coverage of requirements. As for the inevitable "design by implication" of some approaches, I will save that discussion for another rant.

So IMHO test-driven development is not at all a replacement for defensive coding approaches. Unit tests are useful and complementary.

[1] This of course is no excuse for sloppy programming, and may need to be carefully evaluated for embedded systems.

:: Gavin Baker - http://antonym.org/

Thanks

Thanks Gavin and Jeff! Both very useful comments.

Just to clarify - and Jeff knows this already ...

I did know that defensive coding pre-dated the first FDD project. I am an old machine code hacker, as many here know, and I was doing it in the early eighties. In fact, I was teaching it in labs to 2nd year students at the University of Strathclyde in the late eighties.

I agree with Jeff that TDD does slip into FDD very easily. I will also refrain from getting into the deeper debate of "test as design".

I agree that unit testing is no replacement for inspections and their guiding standards. It seems, however, that TDD is being offered as just the latest excuse for (lazy) engineers who don't want to do inspections or design. Gavin's observations on white- versus black-box testing are an excellent rebuttal to the notion that TDD is a silver bullet.

Cheers

David
--
David J. Anderson
author of "Agile Management for Software Engineering"
http://www.agilemanagement.net/

You people are too gentle

TDD and XP are the revenge of the Smalltalkers, so a lot of it is grounded in the Smalltalk-think of "sort it out at runtime" and "we are so productive programming with this miraculous language that design is not needed". XP is so much criticized for having no design that they intentionally call programming "design" and claim XP is all about design. Writing test programs is programming; it is not design. Refactoring is programming; it is not design. It is stupid that refactoring, which is changing working code, becomes the goal of XP people. And to criticize XP, or to try it and fail, is "to not do it right", they say. So it is with cults.

Defensive Coding vs Unit Testing

I fully agree. Even if unit testing is done with 100% coverage, the key is that unit testing treats each object as a black box. A lot of defensive coding is implementation-specific and potentially runtime-specific.

Unit tests are exercised in a controlled environment and do not necessarily cover every possible situation that will be encountered at runtime. Defensive coding often catches a problem and pinpoints it. This ultimately saves a lot of debugging time trying to trace where and why a particular attribute of an object is null or not within an expected range, etc.

Many times I have been very thankful to Jeff and Paul for introducing me to defensive programming. It is easy to make assumptions about a runtime environment and simply ignore possible exceptions to the rules. By placing defensive code at key locations in methods, most if not all 'unanticipated' scenarios are caught prior to release. Things which are missed and fail after release are extremely easy to locate, understand, and fix.

One last point: placing defensive code within a method is far easier than writing a unit test for the method to catch many situations. It requires less effort and is far less likely to get out of sync, since it is directly inside the method and directly related to the implementation.
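
For example (a hypothetical sketch, not code from any of our projects): guard clauses at the top of a method fail fast and pinpoint the bad input, instead of letting a null or bad value propagate and surface far from its source.

    // Order.java - trivial value object for the example.
    public class Order {
        private final int quantity;

        public Order(int quantity) {
            this.quantity = quantity;
        }

        public int getQuantity() {
            return quantity;
        }
    }

    // OrderProcessor.java - the guards identify the offending argument
    // and the method it was passed to, right at the call boundary.
    public class OrderProcessor {
        public void process(Order order) {
            if (order == null) {
                throw new IllegalArgumentException(
                        "OrderProcessor.process: order must not be null");
            }
            if (order.getQuantity() <= 0) {
                throw new IllegalArgumentException(
                        "OrderProcessor.process: quantity must be positive, got "
                                + order.getQuantity());
            }
            // ... implementation-specific work continues here ...
        }
    }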

Mark

Design by Contract (DBC)

B. Meyer's Design by Contract has a lot in common with defensive programming.
Java doesn't support DbC the way Eiffel does, so I try to assert preconditions in the code and assert postconditions in the unit tests. The performance penalty for the preconditions is low, and they pay for themselves in reduced development and maintenance costs. A nice thing about Java is that defensive tests for common errors, like null pointer dereferences, are built right into the runtime (NullPointerException).
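
A minimal sketch of that split (names illustrative; JUnit 3 style; note that Java's assert statements only run when the JVM is started with -ea):

    // Account.java - precondition asserted in the code itself.
    public class Account {
        private double balance;

        public void deposit(double amount) {
            // precondition
            assert amount > 0.0 : "deposit: amount must be positive, got " + amount;
            balance += amount;
        }

        public double getBalance() {
            return balance;
        }
    }

    // AccountTest.java - postcondition checked in the unit test.
    public class AccountTest extends junit.framework.TestCase {
        public void testDepositIncreasesBalanceByAmount() {
            Account account = new Account();
            double before = account.getBalance();
            account.deposit(50.0);
            // postcondition: balance increased by exactly the amount
            assertEquals(before + 50.0, account.getBalance(), 0.001);
        }
    }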

TDD misconceptions

I have not noticed TDD pushing developers towards the "I don't need defensive code, it just slows the runtime performance. I can catch all my errors with my JUnit" mentality.

If any code written through TDD lacks defensive code, it would be from the idea of "do the simplest thing first" and have nothing to do with performance. The primary goal of test cases in TDD is to get rapid feedback! That is all. From this feedback we can derive many things and confirm whether our assumptions are true. The claim made in this post seems incorrect, and perhaps you need to re-analyze the sample population from which this conclusion was drawn.

I do not see how TDD can replace defensive coding standards, as the two techniques catch errors in two distinct manners that are independent of one another. Defensive coding standards are aimed at stopping errors that may occur in production. When thinking about TDD in terms of defense, it can pinpoint holes where errors can occur by looking at failed or missing test cases. Therefore TDD is more of an indicator than a preventive measure like defensive coding standards.

TDD and defensive coding standards can live together and neither should try to replace the other. TDD can even show a programmer where defensive coding is required!

As a final note, TDD should not be thought of as a testing technique which ensures error-free code. That is not the motivation behind it. It is a way to achieve rapid feedback as to what your code is doing and whether it is doing what you thought it should be doing. It allows you to take small steps when the situation is hazardous and you are prone to make a misstep. It's like a good scout that will test the water for you before you leap.

TDD / Defensive coding harmony

As a developer, I use both TDD and defensive coding techniques together. I see each as a value-providing activity. IMO, TDD does not push developers to "I don't need defensive code ... ". If developers say that it does, then their boss should be catching this during code reviews/inspections.

Defensive coding best serves the purpose of increasing the robustness of a system. Code reviews/inspection best serve the purpose of decreasing defects. TDD + Refactoring best serve the purpose of adding functionality in such a way as to preserve/improve the design of the system.

All of these ( and more that haven't been mentioned here ) are aspects of high quality software development.

Note that architecture is missing from the FDD, TDD, refactoring, defensive coding, and code review/inspection set. Architecture is a critical element that needs to be dealt with separately. SOA is fast becoming a viable (some say preferable) alternative to OO architecture and appears (from my limited, roughly one-year experience) to fit well with the practices mentioned above.

Architecture

Udi

The problem with architecture is that in RUP, and for many people, it is the 'magic happens here' part of software development. Too diffuse to be of practical value.

And concerning Service Oriented Architecture: I have been a services advocate for several years, but services are a separation layer, and that's all they are. Services allow loose coupling and are extremely valuable in scaling and distributing software development, but SOA has nothing useful to say about software design and is not a replacement for OO.

phil

TDD vs. Defensive Programming, And CBSD

TDD can be and in fact has been successfully used instead of defensive coding techniques on several projects in which I have recently been involved.

1) Many projects (increasingly, over the past 5 or so years, with web apps, etc.) are what I personally term 'faux component-based' (no disparagement intended at all): i.e., they are 'componentized' and divided into layers, but
a) this is more for convenience and 'because that's how it's done' than because of an actual need;
b) the lack of actual need is because, for instance, there is one data layer, one middle tier (or whatever term is in vogue right now), and one presentation layer; and
c) the project team is often unfamiliar with component-based concepts, and can also get away without them because all the use is 'in-house' (although usually the project could benefit from them, often greatly, for various reasons)

2) In fact, my experience is that an increasing proportion (perhaps a majority?) of corporate developers (in IT shops, on ASP and/or web apps, web sites, etc.) have no experience with 'true' component-based applications (and it follows that fewer and fewer apps have a need for a 'component-based mindset')

3) These developers would, in my opinion, be likelier to use TDD instead of defensive coding, because they assume that 'if Client X doesn't break the contract when calling Component A, everything's okay' (where Component A and Client X are both developed in-house, and Client X is in reality the only client that Component A will ever have). Many of these developers, in fact, have never had to grasp that Component A might be used by some other client (and, in fact, it's okay for them not to grasp that, because in their situation it's true).

4) Many more experienced developers ('as of now') have had component-based concepts ingrained into them (because of the period during which they were 'serving their apprenticeship'). So for them (us, since I count myself as a CBSD advocate), we sort of come from the 'flip side' of the above, in that we tend to assume and design as if our components were 'true components' and not 'faux components'.

5) IMHO, in most cases, designing 'true components' is a good thing. But, as with every discipline, we must recognize our assumptions and biases and scientifically start to explore and evaluate techniques that seem to be successful, even if they contradict our core beliefs. I believe that TDD can be one of these areas.

6) And thus, to sum up, my assertion is that there are indeed projects where TDD can replace defensive coding techniques (and, BTW, design-by-contract, which would be a whole 'nother post); but the key determinant is whether or not the project is 'very, very close to faux-componentized'.

Disclaimer: This post represents my opinion, which is open to change - in fact, like George Bush, Sr., "I have very strong opinions, but I don't always agree with them." :)

P.S. One could argue that component-based precepts and principles should be included in a project even though it seems that the project - currently - can get away without them, because the project might need them in the future. Although it comes from XP / Agile generally and not specifically FDD, I would argue from the principle of YAGNI - You Ain't Gonna Need It. In my experience (and I'm sure there are many who would agree with at least this), these sorts of projects very rarely change in nature such that they are used by any higher layers other than the one for which they were initially designed.

TDD vs. Defensive Programming, and CBSD: Addendum and Clarification

I got so busy espousing that I didn't make some assumptions, etc. clear:

1) In my postscript, I did not intend to imply that every CBSD precept and principle is unnecessary in 'faux-componentized' projects, only that (possibly for each principle) the assumption that they are necessary or good is open to challenge in this specific type of project.

2) This is all also predicated on the assumption that there is adequate 'post-unit testing' (or very robust unit testing and dev testing - the line is often gray), specifically that there is something like integration testing **somewhere** (dev testing or post-dev testing) that **will** thoroughly test how well the component layer's client(s) 'live up to their end of the contract', and that has adequate coverage to give confidence that the tests were adequate. So 'contract thinking' is still required/desired, but 'where the contract is enforced' changes.

3) The performance gain also tends to make more sense, and thus to justify a different approach, in highly transacted systems like web applications.

Both not One

I feel the comment earlier in this thread by mlesk sums it up well.

Defensive coding fails fast and close to the source. That is gold in production. To presume that ANY form of testing can REPLACE defensive coding is to presume that the testing will catch ALL defects before deployment to production.

If you know how to ship zero defect production code, please tell the rest of us how to do that.

Testing COMPLEMENTS defensive programming; it doesn't REPLACE it.

jdl