I'm a big believer in unit tests. In my current project we have over 400 unit tests, and we even sneak some integration tests into our NUnit code. Many of our tests read data from an XML file containing a few objects distilled into XML. The tests rehydrate the objects and pass them into a method in our application; the application returns a string or an XML object, and we check it against the expected result stored in the same XML data file. It's quite tidy.
But with so many tests, adding a new feature can be real work. Recently, while adding a new feature, I implemented it in a way that had minimal impact on our test files. Bad, very bad. Should I have changed 150 XML files, adding in the new parameters, or implemented the new feature in a way that didn't change the test files?
I chickened out and did the wrong thing. In the end, like most "shortcuts," it took more time. I went back and wrote a little Ruby program to go through the 150 XML files and programmatically add the new XML attributes with their varying values (the name of the file plus the occurrence in the file, e.g., attr="Test5.xml:3").
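The gist of that script, sketched here with Ruby's built-in REXML library — the element name `object`, the attribute name `attr`, and the `tests/` directory are all stand-ins for illustration, not the actual names from our project:

```ruby
require "rexml/document"

# Stamp each matching element with the file name plus its occurrence
# in that file, e.g. attr="Test5.xml:3". The element name "object" and
# the attribute name "attr" are placeholders for illustration.
def stamp_attributes(xml, filename)
  doc = REXML::Document.new(xml)
  occurrence = 0
  REXML::XPath.each(doc, "//object") do |el|
    occurrence += 1
    el.add_attribute("attr", "#{filename}:#{occurrence}")
  end
  doc.to_s
end

# Rewrite every test data file in place (assumes they live under tests/).
Dir.glob("tests/*.xml").each do |path|
  File.write(path, stamp_attributes(File.read(path), File.basename(path)))
end
```

Ten minutes of scripting beats hand-editing 150 files, and the file-plus-occurrence value makes each attribute traceable back to where it came from.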
I learned a few things from the experience:
1. Always do the right thing, even if it takes longer and you're really pressed for time.
2. Unit tests are great and wonderful, but they come at a price. You can have too many tests, and they can slow you down.
Does my project have too many unit tests? I don't think so, but I'm less sure of that than I used to be. My project partner thinks we have too many. I still believe the tests ensure quality, but they come at a higher price than I realized.