With NEdifis we advocate putting tests right beside the code rather than into separate test projects. Both approaches have their pros and cons, yet the more popular choice still seems to be a separate test project; see, for example, Mark Seemann: Where to put unit tests or StackOverflow: Do you put unit tests in same project or another project? The most common reasons for separate test projects are:
- According to TDD, tests are the first client of your API, i.e. accessing the API from outside gives early feedback and should therefore improve your API design for intended users.
- The additional testing code requires more disk space and may increase startup or loading time. You definitely want to avoid this on resource-constrained hardware like mobile devices or embedded systems.
- The testing code can make your component vulnerable, so you might need to think more about security concerns.
On the other hand, we see these reasons for keeping tests right beside the code:
- Internal helper code can be tested a lot more easily. The usual workarounds with internal and InternalsVisibleTo (corresponding to package scope in Java), conditional compilation (e.g. DEBUG builds only) and even test-specific subclassing are far more prone to rework.
- Completely separated test bases are far worse in terms of the rework required to keep code and tests in sync. Tests in a separate solution/workspace or, worse yet, in a different git repository, a different programming language/framework, a different whatever... just don't age well with your code. On the contrary, experience shows that tests are the more effective at minimizing implementation risk the closer they are to the code under test, cf. test early and often (lmgtfy). Only then can tests evolve quickly with the code; otherwise, the chance of your tests diverging from the code increases. This is the main reason why commercial testing software like QF-Test or Ranorex is less effective.
- The additional test code does increase the final assembly size, but unless you are counting bytes this is often negligible. Startup time is not affected, since testing code and frameworks (e.g. NUnit) are not loaded unless you do a test run. In fact, you can even skip shipping the testing framework assemblies to save some kilobytes.
- Shipping testing code (unit, integration and acceptance tests) greatly documents the state of implementation. It represents the requirements specification - not written on paper but directly executable for the customer. It is vital that you manage to keep the abstraction level of the tests very close to the specified requirements. This is of course easier for a technical project or for customers with a strong technical background. In early 2012, I met Daniel Fischer (@lennybacon), who told me about DocX2 Unit Test, which aims to reduce the media barrier between customer and developer. Great idea!
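To illustrate the InternalsVisibleTo workaround mentioned above - the assembly and project names here are made up for illustration - exposing internals to a separate C# test project takes an extra declaration that must be kept in sync with the test project's name (and public key, if signed):

```csharp
// In the production assembly, e.g. in Properties/AssemblyInfo.cs.
// "MyLib.Tests" is a hypothetical test project name; it must match the
// test assembly exactly, including the public key when MyLib is signed.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLib.Tests")]

// With tests beside the code, this declaration is unnecessary:
// internal members are directly visible to tests in the same assembly.
```

Every rename of the test project (or change of the signing key) has to be tracked here, which is exactly the kind of rework co-located tests avoid.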
A very small point I want to add to this list, one that AFAIK hasn't been mentioned before, is a pragmatic one: keeping code and testing code together simplifies the steps needed for automated compile & test, which is fundamental for Continuous Integration, i.e.:
- Compiling the main project also compiles all its tests. No need for a separate compile instruction, i.e.
- build MyProject.csproj
- build MyLib.csproj
- All relevant test binaries are compiled to the output folder of the main project. So parameterizing the test or coverage runner from a build job is simple ("run tests/coverage for all assemblies in this folder"). Build job maintenance is also reduced: e.g. you never need to track project naming changes in your build job - except for the main project, of course ;-).
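As a minimal sketch of that last point: once all test assemblies land in the main project's output folder, the build job can pick them up with a single glob. The folder name, the *.Tests.dll naming convention and the runner (nunit3-console) are assumptions for illustration; the stand-in files below merely simulate build output.

```shell
#!/bin/sh
# Simulate a build output folder containing main and test assemblies.
# (Illustrative names only - your project layout may differ.)
OUT=bin/Release
mkdir -p "$OUT"
touch "$OUT/MyProject.dll" "$OUT/MyProject.Tests.dll"

# The CI test step needs no per-project knowledge - just one glob:
for dll in "$OUT"/*.Tests.dll; do
  # in a real build job: nunit3-console "$dll"
  echo "would run tests in: $dll"
done
```

Because the glob is stable, adding or renaming test projects never touches the build job configuration.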