1/04/2012

.NET Unit Testing

This is the fourth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.

In my previous post I explained what role a CI build plays in the development process and why gated check-ins are so important to ensure a certain code quality. But this quality highly depends on the checks and tests that run during the CI build.

Writing unit tests and integrating them into the continuous integration build is essential for writing code of good quality. Always assume that your code is not working until you have proven with a unit test that it works.

Software has to be constantly adapted and changed due to new requirements. That is the nature of software development, because humans cannot grasp the full complexity of IT systems up front. It is also the reason why iterative and agile development processes are so successful compared to traditional waterfall and V-models.

But how to write good unit tests?
I have seen a couple of projects with totally different ideas and solutions for how to create and organize unit tests. One main goal and very important technique for writing successful unit tests is to keep the scope of each test very small. In other words, test just a single class, or even better a single method, per test. What sounds pretty easy in theory can be challenging in practice. Introducing unit tests into existing projects that already have a dense net of references can be very tricky. A good practice is to improve the code step by step: every check-in has to make the code better. When you start a new project, it is much easier to introduce good unit tests with a little effort and discipline. Patterns like inversion of control and dependency injection are techniques to reduce the dependencies between components without introducing more complexity. Do not try to write unit tests which test all your layers at once; this results in high effort for building and maintaining the test data and unit tests over the software lifecycle. Better to introduce local unit tests step by step.

Here is a list of simplified best practices which can be applied in most cases to easily achieve looser coupling and therefore better testability:
1. Every time you want to call another class, add the interface of that class to the constructor of your class, store it in a field or property, and call the interface instead of the class directly. If the class you want to call does not have an interface, what stops you from creating one? Tools like Visual Studio and ReSharper even support you in doing that easily. If it is external code, just create a wrapper around it, which is anyhow a good practice for integrating external code into your application.
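As a small illustration of such a wrapper (ISystemClock and SystemClock are made-up names for this sketch), even a framework call like DateTime.Now can be hidden behind an interface so that tests can control it:

public interface ISystemClock
{
    DateTime Now { get; }
}

// Thin wrapper that forwards to the framework call; a test can replace it
// with a fake clock returning a fixed time.
public class SystemClock : ISystemClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}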

Assume you want to test your business logic in OrderManagement, but unfortunately this business logic calls a web service through your OrderServiceProxy class. That makes testing the business logic much more difficult, because every time the web service is not accessible your unit test would fail. So we add a new interface IOrderServiceProxy and a constructor taking this interface to the OrderManagement class.
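For completeness, here is a minimal sketch of the types the following snippets assume (the members shown are illustrative):

// The order entity passed through the layers
public class Order
{
    public int Id { get; set; }
}

// Abstraction over the web service proxy
public interface IOrderServiceProxy
{
    bool PlaceOrder(Order order);
}

// Abstraction over the business logic
public interface IOrderManagement
{
    bool ProcessOrder(Order order);
}

With those in place, the OrderManagement class looks like this: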

public class OrderManagement : IOrderManagement
{
    private IOrderServiceProxy OrderService { get; set; }

    public OrderManagement(IOrderServiceProxy orderService)
    {
        OrderService = orderService;
    }

    public bool ProcessOrder(Order order)
    {
        return OrderService.PlaceOrder(order);
    }
}


2. Now you can easily test your OrderManagement class and its ProcessOrder method, because you can pass in a replacement for the OrderServiceProxy implementation and test against your dummy implementation.

public class OrderServiceProxyMock : IOrderServiceProxy
{
    public bool PlaceOrder(Order order)
    {
        return true;
    }
}

[TestClass()]
public class OrderManagementTest
{
    [TestMethod()]
    public void ProcessOrderTest()
    {
        // Create mock class
        IOrderServiceProxy orderService = new OrderServiceProxyMock();

        // Create test data
        Order order = new Order();
            
        // Create your class to test and pass your external references
        OrderManagement target = new OrderManagement(orderService);
            
        // Execute your test method
        var result = target.ProcessOrder(order);
            
        // Assertions
        Assert.IsTrue(result);
    }
}


3. You can use a mocking framework like Rhino Mocks, Typemock, JustMock, NMock, etc. to simplify testing your code and reduce the lines of code you have to write.

Rhino Mocks example:

[TestClass()]
public class OrderManagementRhinoMocksTest
{
    [TestMethod()]
    public void ProcessOrderTest()
    {
        // Create test data
        Order order = new Order();

        // Create a stub using the static AAA syntax (Rhino Mocks 3.5 and later);
        // no record/replay phases are needed with this style
        IOrderServiceProxy orderService = MockRepository.GenerateStub<IOrderServiceProxy>();
        orderService.Stub(x => x.PlaceOrder(order)).Return(true);
            
        // Create your class to test and pass your external references
        OrderManagement target = new OrderManagement(orderService);

        // Execute your test method
        var result = target.ProcessOrder(order);

        // Assertions
        Assert.IsTrue(result);
    }
}
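If you also want to verify that the business logic actually called the proxy, the Rhino Mocks AAA syntax offers AssertWasCalled (a small sketch, assuming Rhino Mocks 3.5 or later):

// Arrange: a mock that records its calls
Order order = new Order();
IOrderServiceProxy orderService = MockRepository.GenerateMock<IOrderServiceProxy>();
orderService.Stub(x => x.PlaceOrder(order)).Return(true);

// Act
new OrderManagement(orderService).ProcessOrder(order);

// Assert: the proxy was really invoked with our order
orderService.AssertWasCalled(x => x.PlaceOrder(order));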


4. You can use a dependency injection framework in order to inject the implementations into the constructors. Especially in your productive code, you have in most cases just one implementation per interface, which can be mapped easily. There are a lot of dependency injection frameworks available, like Unity, StructureMap, Spring.NET, etc.

A dependency injection framework resolves the interfaces you placed in a constructor or property with the real implementation. Which interface maps to which implementation can either be configured in an XML file or just coded.

Unity Configuration example:

First, you usually define an alias which maps to a fully qualified type name. You have to do that for your interfaces as well as your implementations. After that, you can register a mapping from each interface to its actual implementation.

<configuration>
  <configSections>
    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
  </configSections>
  <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <alias alias="IOrderServiceProxy" type="TSTune.CodeExamples.ServiceAgents.IOrderServiceProxy, TSTune.CodeExamples" />
    <alias alias="OrderServiceProxy" type="TSTune.CodeExamples.ServiceAgents.OrderServiceProxy, TSTune.CodeExamples" />
    <alias alias="IOrderManagement" type="TSTune.CodeExamples.BusinessLogic.IOrderManagement, TSTune.CodeExamples" />
    <alias alias="OrderManagement" type="TSTune.CodeExamples.BusinessLogic.OrderManagement, TSTune.CodeExamples" />
    <container>
      <register type="IOrderServiceProxy" mapTo="OrderServiceProxy"/>
      <register type="IOrderManagement" mapTo="OrderManagement"/>
    </container>
  </unity>
</configuration>


After you have configured your Unity container, you have to load the configuration and initialize the container before you can use it:

// Requires the Microsoft.Practices.Unity and Microsoft.Practices.Unity.Configuration namespaces
IUnityContainer unityContainer = new UnityContainer();
UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
section.Configure(unityContainer);


Unity Code example:

You can also register the mappings in code, which is much easier:

IUnityContainer container = new UnityContainer();
container.RegisterType<IOrderManagement, OrderManagement>();
container.RegisterType<IOrderServiceProxy, OrderServiceProxy>();


But this approach has two disadvantages:
First of all, you have to recompile your code to exchange implementations. Secondly, a static reference has to be added to all the assemblies you want to register, because the classes have to be known during registration. This can be problematic when you use the Visual Studio Layer Diagram Validation, which I am going to explain in one of my next posts.

Unity - How to use it:

Every time you now call unityContainer.Resolve<IOrderManagement>(), you will get an instance of your OrderManagement class.

var orderManagement = unityContainer.Resolve<IOrderManagement>();
orderManagement.ProcessOrder(new Order());
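Note that Unity resolves constructor parameters recursively: because IOrderServiceProxy is registered as well, the container automatically injects an OrderServiceProxy instance into the OrderManagement constructor when it creates the object.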


5. If using a dependency injection framework is too much of a pain for you (which it should not be!), you can add a default constructor which wires up all implementations with the interfaces. This is called poor man's dependency injection.

public OrderManagement()
{
    OrderService = new OrderServiceProxy();
}
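Since the class already has the injecting constructor from step 1, the default constructor can simply chain to it. The complete class would then look like this (OrderServiceProxy being the assumed default implementation):

public class OrderManagement : IOrderManagement
{
    private IOrderServiceProxy OrderService { get; set; }

    // Poor man's dependency injection: default to the real proxy
    public OrderManagement()
        : this(new OrderServiceProxy())
    {
    }

    // Injecting constructor, used by unit tests and DI containers
    public OrderManagement(IOrderServiceProxy orderService)
    {
        OrderService = orderService;
    }

    public bool ProcessOrder(Order order)
    {
        return OrderService.PlaceOrder(order);
    }
}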


If you use interfaces instead of concrete implementations, you lose the ability to navigate easily through your code with F12 at design time. Instead you end up looking at the interface when you want to investigate the implementation, and you have to search for the actual implementation manually. ReSharper helps here by navigating directly to the implementation with Ctrl+F12.

How to integrate unit tests into the Team Foundation Server build:

First, you should create your test lists. Usually there is a test list for CI tests, one for nightly tests, and maybe one for manual tests.

After you have placed your unit tests in the test lists, you can set up the TFS build to execute your test list in the CI build. Do not forget to fail the build if the test execution fails.
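To run the same list locally that the build server executes, you can use MSTest.exe (the metadata file name here is illustrative):

MSTest.exe /testmetadata:TSTune.CodeExamples.vsmdi /testlist:CI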

Final important note!

Treat unit test code like productive code and apply the same quality criteria. Unit test code has to be maintained together with your productive code and is subject to the same changes!

This makes writing and maintaining your unit test code much easier and increases the quality of your code.

CI and Nightly Builds

This is the third post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.

In this post I am going to show how to create builds with the TFS 2010 Build Workflow Engine. To ensure code quality and enable continuous delivery, we usually have two different types of builds: CI and nightly builds.

CI builds are executed during the developer's check-in. There are three types of CI builds:
  • Continuous Integration Build - Every check-in of the developer is built, but the code is always committed, even if the build fails. Even code which does not work correctly ends up in the productive source control.
  • Rolling Builds - Builds a set of check-ins which have been committed since the last build. This has the same disadvantage as normal CI builds, and on top of that it is not always clear who created the faulty code.
  • Gated Check-in Builds - This type of CI build only commits the code to the main source control when the build is green and every quality check has been passed successfully. This makes it possible to enforce certain criteria and forces the developer to adapt the code if even one rule is broken.

The task of CI builds is to ensure the code quality. This works great with gated check-ins, because they do not allow anything to be checked in which does not meet the defined quality criteria.


Nightly builds are used for integration tests which take longer to execute, so it would not be feasible to run them during each check-in. Nightly builds are triggered at a certain time. A good example are Coded UI Tests, which test the user interface by performing clicks and other actions on the controls of the screen. A usual practice is to deploy the application to the target system every night and then perform completely automated integration and user interface tests.
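Just to give an impression of what such a test looks like, here is a minimal sketch (the executable path and test name are made up, and a real test would drive the UI through recorded UIMap actions):

[CodedUITest]
public class OrderEntryUITest
{
    [TestMethod]
    public void SubmitOrderThroughUserInterface()
    {
        // Launch the application under test
        ApplicationUnderTest app = ApplicationUnderTest.Launch(@"C:\Apps\OrderClient.exe");

        // Here the recorded actions would click through the order dialog,
        // e.g. this.UIMap.FillOrderFormAndSubmit();
    }
}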

Overview of a possible development process to prevent broken applications:
  • CI build (with gated check-in) on every code change
  • Nightly deployments to the development system
    In order to execute integration and user interface tests, it is important to establish a completely automated deployment which can run during the night. After that, the automated tests can be executed and instant feedback can be given every morning. It is important that the automated integration and user interface tests cover the main functionality of the application and ensure its health. The customer should not test on this system, because it can be broken on any given morning.
  • Weekly deployments to the staging system
    Only when the CI build and the integration and user interface tests have passed successfully is an automated deployment triggered for the staging system. After that, the automated tests should be executed again on this system, in order to ensure the health of the application and prevent configuration errors.
    This way the customer always gets a stable version on the staging system and can focus on reviewing the implemented requirements.
This process should help to ensure that the customer never sees a broken application and is much more satisfied with the software quality.

1/03/2012

Why enforce so many check-in rules during the CI build?

This is the second blog of a series of posts about the topics Continuous Integration and Continuous Delivery. It explains how important continuous integration is to deliver software in short iterations to the customer with a high quality standard.

The reason seems pretty simple: I have seen over and over again, in the projects I have worked on, that every rule which is not enforced during the check-in process will be broken sooner or later.

Usually this is not done on purpose; there are multiple reasons for it. Most of the time it is due to high time pressure in the project, or because the developer is deeply focused on the current task and simply forgets about it. But rules and guidelines can also be misunderstood, or just forgotten, when they are only discussed in a meeting or sent around by mail.

Another reason for check-in rules is that this approach saves a lot of time and money, because problems are detected before they are actually integrated into the main code. During architecture and code reviews, the architects can then focus on more important things than static dependencies and code metrics.

In general, every important change in the design of the application is blocked by check-in rules, so it is always an explicit change and never happens by accident without anyone noticing. A good example for this is ReSharper. It is a great tool, of course, and everybody should have it. But it has a feature, for instance, which detects the namespace and assembly when you just type in a class name, and it automatically adds a reference to that assembly in the current project. I often catch myself adding unwanted references while I am coding and trying to solve my local problem.

All of these problems can be avoided by using good static analysis tools during the check-in process.

Another good approach: when you find a problem in a code review, create a rule which detects this violation in the future. It is like writing unit tests to check the software architecture.
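As a tiny illustration of such an "architecture unit test" (the assembly names here are made up), a plain unit test can assert that an unwanted reference never creeps in:

[TestClass]
public class ArchitectureRulesTest
{
    [TestMethod]
    public void BusinessLogicMustNotReferenceDataAccess()
    {
        // Inspect the static assembly references of the business logic
        Assembly businessLogic = Assembly.Load("TSTune.CodeExamples.BusinessLogic");
        bool referencesDataAccess = businessLogic.GetReferencedAssemblies()
            .Any(a => a.Name == "TSTune.CodeExamples.DataAccess");

        Assert.IsFalse(referencesDataAccess,
            "The business logic must not reference the data access layer directly.");
    }
}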

1/02/2012

Continuous Integration and Continuous Delivery

This is the first post in a series about continuous integration and continuous delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations and with a high quality standard.

Surely all of you have faced problems with incomplete or just bad specifications. Additionally, software has to be changed frequently due to new or updated requirements. That is why agile software development processes and techniques are so successful. But a good agile software development process also implies continuous delivery to the customer, in my opinion. Only in this way is it possible to verify whether the features match the customer's expectations. But the software can, and should, only be delivered if it fulfils the quality standard.

I think (almost) everybody has already delivered a piece of software which crashed on first use. This is frustrating for the customer and also reflects badly on the development team. And that is exactly where continuous integration comes into play: it ensures a certain defined quality standard and makes it possible to define quality gates in order to prevent crucial software failures.

I would like to give you some hints on what is important about continuous integration and how you can set up a configurable Team Foundation Server 2010 build workflow with a lot of features and quality gates, to ensure software which works and fulfils the customer's needs.

The following posts will cover the individual build workflow steps. I am going to explain how to set up each build process and the reasons for each step, as well as the pros and cons.

After that I want to focus on the delivery process and the challenges waiting in that area.