3/18/2012

Static Code Analysis based on Microsoft Rules

This is the tenth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

In this post and the next one I want to show how static code analysis can be used to improve code quality and how the checks can be executed during the build or check-in process. First of all I want to show how to enable the built-in Microsoft Code Analysis. It is actually a great feature which is not well known and only rarely used.

There is a "Code Analysis" tab in the project settings where the static code analysis can be enabled and the rule sets can be selected:


There are a couple of predefined rule sets from Microsoft. At least the "Microsoft Minimum Recommended Rules" should be enabled because it includes checks for potential application crashes, security holes and other important issues. If, for instance, an IDisposable object is not released, a warning is shown by the Code Analysis during the CI Build:


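To give an idea of what such a finding looks like in code, here is a small, made-up example (the class and method names are not from the original post) that a rule like CA2000 ("Dispose objects before losing scope") would typically flag, together with a cleaned-up version:

using System.IO;

public static class FileReader
{
    // Typically flagged by Code Analysis (e.g. CA2000): the FileStream and
    // StreamReader are never disposed.
    public static string ReadFirstLine(string path)
    {
        var stream = new FileStream(path, FileMode.Open);
        var reader = new StreamReader(stream);
        return reader.ReadLine();
    }

    // Cleaned-up version: the using statement releases the resources
    // deterministically, even if an exception is thrown.
    public static string ReadFirstLineFixed(string path)
    {
        using (var reader = new StreamReader(path))
        {
            return reader.ReadLine();
        }
    }
}
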
The Code Analysis is a simple and fast way to enable static code checks to prevent typical errors based on rule definitions from Microsoft.

Client-side Architecture Validation

This is the ninth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

JSAnalyse
A couple of weeks ago I already blogged about JSAnalyse, which is an extension for the Visual Studio Layer Diagram to enable architecture validation for client-side scripts. JavaScript client-side code is still not treated in the same way as server-side code, with the same quality criteria. Nowadays, many projects already have a couple of unit tests and layer validations for the server-side code but do not care about testing and validating their JavaScript code at all. That is the reason why I want to mention JSAnalyse again. It helps to define a client-side architecture and to keep the JavaScript dependencies under control.


More details about Client-Side Validation, how to use it and how it works can be found on the following pages:
Blog about JSAnalyse
JSAnalyse on CodePlex

Additionally, for testing JavaScript code I would recommend using JS-Test-Driver and reading the following pages:
JavaScript Unit Testing
JS-Test-Driver

Server-side Architecture Validation

This is the eighth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

The architecture layer diagram is one of the best features in Visual Studio. It provides an easy and powerful way to validate the defined architecture. A lot of applications started with a well-defined architecture and a lot of thoughts and ideas behind it. But over time, as the code gets implemented, refactorings are done and time pressure comes up, the defined architecture is not followed anymore. It takes a lot of time to review the dependencies between the layers and assemblies during the development phase. Sometimes reviews point out that an unwanted assembly reference has been added to a project, but by then the reference is already heavily used and it takes a lot of effort to get rid of it again.

With the Visual Studio Validation Layer Diagram, this problem can be solved or at least reduced.

How to create a Server-Side Validation Layer Diagram?
The following article explains how to use the Validation Layer Diagram in Visual Studio 2010. It even explains how to enable the layer validation in the CI build and to reject check-ins which do not follow the defined architecture. This ensures that code which violates the layer definitions is not committed to the main branch and does not cause a lot of headache and effort to fix at a later point in time:
Favorite VS2010 Features: Layer Validation

Which layer diagram views should be created?
Here is an architecture project which defines different views on the application and its components. It is an example project which gives an idea of which different views can be created.


Usually, at least the following three views should be defined in the Architecture Layer Diagram:
  • High-Level View (Overview.layerdiagram)
  • Second Level View (Presentation.layerdiagram, ServiceLayer.layerdiagram, Business.layerdiagram, DataAccess.layerdiagram)
  • External Components View - Restricts the access to external components (ExternalComponents.layerdiagram)

High-Level View (First Level View)
This view defines how the different layers depend on each other. It is the most important view of the application and should already be defined during the project setup, together with the assemblies. It is very important to keep this view up to date because it has a high impact on the maintainability of the solution. Here is an example of how this high-level view can look.


Second Level View
This view explains the internals of a single layer. Usually, there is at least one diagram per layer. If a layer is quite complex, there can even be more diagrams. The following figure shows an example layer diagram for a Data Access Layer.


External Components View
This view is also very important because it restricts the layers / assemblies to a defined set of external libraries. It helps to keep external code isolated by defining the exact places where a library is allowed to be used. It solves the problem that special API calls are spread over the whole application. A good example is the Entity Framework, which should only be referenced and accessible from the Data Access component.
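
To sketch what this isolation can look like in code (all type names are made up for this example and are not part of the original post), only the Data Access component references the Entity Framework, while the upper layers work against a plain interface:

// In a real solution the two parts live in separate assemblies; they are
// shown in one snippet for brevity.
using System.Data.Entity;   // Entity Framework, referenced only by the Data Access assembly

// --- Business layer: works against a plain interface and model ---
public interface IOrderRepository
{
    Order GetOrder(int orderId);
}

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

// --- Data Access layer: the only place where the Entity Framework is used ---
public class OrderDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderRepository : IOrderRepository
{
    public Order GetOrder(int orderId)
    {
        using (var context = new OrderDbContext())
        {
            return context.Orders.Find(orderId);
        }
    }
}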


After defining the dependencies and configuring the gated CI build, it is not possible anymore to check in code which violates the basic architecture. The build process shows an error for a layer break even before the code is committed. In this example, a call from the Web Application (Presentation Layer) directly to the Order Repository (Data Access Layer) is not allowed anymore:


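In code, such a forbidden call could look like the following sketch (the controller class is made up, the repository corresponds to the Order Repository from the sketch above). With a gated check-in, the layer validation reports the dependency from the Presentation Layer to the Data Access Layer and the code never reaches the main branch:

// Presentation layer (Web Application): calling the Data Access layer directly
// bypasses the business layer and violates the layer diagram.
public class OrderController
{
    public Order ShowOrder(int orderId)
    {
        var repository = new OrderRepository();   // direct reference to the Data Access assembly
        return repository.GetOrder(orderId);      // reported as a layer violation during the build
    }
}
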
3/17/2012

Code Coverage

This is the seventh post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

Code coverage is a measure which indicates what percentage of the code has been tested. There are different techniques, but it usually describes how many lines of the code have been executed during the unit tests and how many have not. It does not say anything about the quality of the tests themselves. Even a high percentage of code coverage does not help if the unit tests do not cover the use cases of the component / class. But it is a good indicator to find out which parts are tested at all and which lack testing.

Enable Code Coverage
Code coverage can easily be activated via the Visual Studio menu "Test", "Edit Test Settings" by selecting the test settings file.

After that the Code Coverage can be enabled on the "Data and Diagnostics" tab.


The assemblies which should be instrumented have to be selected by clicking the "Configure" button. Make sure that only productive assemblies are selected and no unit test projects.


Code Coverage Check during CI Build

A code coverage check can be implemented in order to ensure that a certain number of unit tests are written and stay in a healthy state. It can check the code coverage percentage and fail the build if the value is below a defined threshold. This prevents the tests from getting removed from the build because that would drop the code coverage value. Of course, this says nothing about the quality of the tests, but it at least makes sure that the tests are executed and grow with the code base.

The following example coverage output file is written during the build process if code coverage has been enabled. The value can be checked by reading the BlocksCovered and BlocksNotCovered nodes and comparing it to a defined threshold which decides whether the build fails or not.

<CoverageDSPriv>
  <xs:schema id="CoverageDSPriv">...</xs:schema>  
  <Module>
    <ModuleName>TSTune.CodeExamples.dll</ModuleName>
    <ImageSize>57344</ImageSize>
    <ImageLinkTime>0</ImageLinkTime>
    <LinesCovered>7</LinesCovered>
    <LinesPartiallyCovered>0</LinesPartiallyCovered>
    <LinesNotCovered>7</LinesNotCovered>
    <BlocksCovered>7</BlocksCovered>
    <BlocksNotCovered>6</BlocksNotCovered>
  </Module>
  <SourceFileNames>
    <SourceFileID>1</SourceFileID>
    <SourceFileName>OrderServiceProxy.cs</SourceFileName>
  </SourceFileNames>
  <SourceFileNames>
    <SourceFileID>2</SourceFileID>
    <SourceFileName>OrderManagement.cs</SourceFileName>
  </SourceFileNames>
</CoverageDSPriv>

In this simple example 7 of 13 blocks have been covered during the test, which is a code coverage of: 7 / (6 + 7) = 0.5385 = 53.85 %.
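
To give an idea of how such a check could be implemented, here is a minimal sketch in C# (the file name, the threshold and the exit-code convention are assumptions and not part of the original build definition):

using System;
using System.Linq;
using System.Xml.Linq;

class CoverageCheck
{
    static int Main()
    {
        // Load the coverage data that was exported to XML during the test run.
        var document = XDocument.Load("CoverageReport.xml");

        double covered = document.Descendants("Module")
                                 .Sum(m => (double)m.Element("BlocksCovered"));
        double notCovered = document.Descendants("Module")
                                    .Sum(m => (double)m.Element("BlocksNotCovered"));
        double coverage = covered / (covered + notCovered);

        Console.WriteLine("Block coverage: {0:P2}", coverage);

        // Example threshold of 50 %: a non-zero exit code fails the build step.
        return coverage >= 0.50 ? 0 : 1;
    }
}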

Coding Guidelines

This is the sixth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

Many companies have written coding guidelines for development which define naming, layout, commenting and a lot of other conventions. Even Microsoft has a couple of MSDN articles about coding conventions and design guidelines.

Coding guidelines are really important for the readability of the code and they can reduce the maintenance effort because the developer understands the code more quickly.

Many companies invest in writing documents about coding and design guidelines, but the code does not follow most of the defined rules and the different components and classes have completely different styles. A document alone does not improve the code quality. The developers have to know the content of the document and have to follow it, and the code has to be reviewed on a regular basis.

Usually, the guidelines document is just stored somewhere on a SharePoint or file share. Most of the time it is also not up to date because a new version of the programming language has been released: the new language features are not described, or parts of the document are already obsolete.

This problem can be solved by using a tool like StyleCop. StyleCop checks during the build process whether the code follows the defined rules. It can check, for instance, that all public methods are commented or that every if-block is within curly brackets. The StyleCop rules can be defined instead of writing and updating a coding guidelines document. If the StyleCop rules are checked during the development process, valuable review time can be saved: the reviews can focus on the architecture and design of the components instead of checking the style and naming conventions.
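
As a small illustration (the class is made up, and the rule numbers in the comments are only meant as typical examples from the default rule set), the first class below violates such rules, while the second one passes:

// Violates typical StyleCop rules: the public method has no documentation
// header (SA1600) and the if-statement omits the curly brackets (SA1503).
public class DiscountCalculator
{
    public decimal GetDiscount(decimal orderTotal)
    {
        if (orderTotal > 1000) return orderTotal * 0.05m;
        return 0;
    }
}

// Compliant version: documented public members and braces around every block.
public class DocumentedDiscountCalculator
{
    /// <summary>Calculates the discount for the given order total.</summary>
    /// <param name="orderTotal">The total amount of the order.</param>
    /// <returns>The discount amount.</returns>
    public decimal GetDiscount(decimal orderTotal)
    {
        if (orderTotal > 1000)
        {
            return orderTotal * 0.05m;
        }

        return 0;
    }
}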

There are two ways to check StyleCop rules during the development process: either via a check-in policy or integrated into MSBuild. I would recommend the MSBuild integration because a check-in policy has to be installed on all developer machines and has to be kept up to date.

Integrate StyleCop into MSBuild:
After downloading and installing StyleCop, there is an MSBuild targets file in the installation folder:
StyleCop\<version>\Microsoft.StyleCop.targets

Just copy the file and check it into your source control. After that it can be referenced with a relative path so that it works on all developer machines.
<Import Project="..\StyleCop\Microsoft.StyleCop.targets" />

If the StyleCop target is integrated into the MSBuild process, every violation is shown as a warning. In case you want to enforce the rules, this might not be enough. I have seen projects with thousands of warnings in the build process. A warning is indeed not an error and the assembly can still be compiled, but there are reasons why warnings are shown. That is why they should not be ignored. One possibility is to enable the build option "Treat warnings as errors". In combination with gated builds, code which does not fulfill the StyleCop rules cannot be checked in anymore.


But this approach has one big disadvantage: the developer cannot easily test code changes anymore because any violated StyleCop rule makes the build fail. If, for instance, a new public method has been added and is not commented yet because it is not completely finished, this code cannot be compiled and tested. That is the reason why I would enable this option only during the continuous integration build and disable it on the local machine. This can be done using different configurations like in the following project file:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Local|AnyCPU' ">
  <TreatWarningsAsErrors>false</TreatWarningsAsErrors>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'TFS|AnyCPU' ">
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>

3/16/2012

JavaScript Unit Testing

This is the fifth post in a series about Continuous Integration and Continuous Delivery. The series explains how important continuous integration is for delivering software to the customer in short iterations with a high quality standard.

Modern applications use more and more JavaScript to provide a rich and interactive user interface. Especially with HTML 5, the amount of JavaScript code is growing even more. It surprises me that JavaScript is still not taken as seriously as most other programming languages. There is still not enough awareness that JavaScript code is an important part of an application and has to have a good code quality as well. I have seen projects which were writing a lot of server-side unit tests but had no quality assurance on the client side.

The tools have improved over the last couple of years but are still not as intuitive as they should be. For instance, Visual Studio does not support unit testing of JavaScript code out of the box. But at least there are already a couple of JavaScript unit testing frameworks available.

In the following, different JavaScript test frameworks are tested and compared. The focus is on the TFS integration in order to execute the tests during the CI build.

Browser-based or Browser-less?

There are two different approaches: either the test framework uses a browser to execute the tests, or the JavaScript code is interpreted and executed by a host application.

Browser-based Frameworks: QUnit, JS-Test-Driver, ...
Browser-less Frameworks: JSTest.NET, google-js-test, Crosscheck, ...

Browser-less frameworks are usually pretty easy to execute. The integration into CI builds is also much easier because the overhead of starting and stopping a browser is not needed. But there is one big disadvantage with browser-less frameworks: the execution runs in a virtual environment and the different browser quirks cannot be tested. Additionally, some features are usually not supported by these frameworks. That is the reason why I prefer browser-based frameworks.

Writing JavaScript Tests can be tricky

In general, writing JavaScript unit tests is not as easy as testing server-side code, because JavaScript code usually calls web services and interacts with the DOM of the browser. Of course, you can separate your JavaScript logic from the DOM interaction and service calls (and you should always do that!). But that does not change the fact that loading data and manipulating the DOM are the main tasks of your JavaScript code. If you just tested the pure JavaScript logic without DOM interaction, you would miss a big part of your code.

Mocking AJAX Requests

The first problem with AJAX service calls can be solved by using a mocking framework. If you are using jQuery, you just need to include the jQuery Mockjax library and you can easily redirect your AJAX calls to return the data you need for your test:

$.mockjax({
  url: 'testurl/test',
  responseText: 'Result from the test operation.'
});

This call hooks into the jQuery library and returns the given response text for all jQuery AJAX requests to the defined URL. The response text can be simple text, JSON or any other content.

DOM Manipulation

The DOM interaction problem is more difficult. In almost all cases, JavaScript code communicates with and manipulates the browser's DOM: asynchronously retrieved data has to be displayed in a certain way. This is also the most important task of a JavaScript unit testing framework (besides the test execution, of course).

There are different approaches to support the declaration of HTML markup for unit tests. Most frameworks, like QUnit for example, need a real HTML document for the test execution. The unit tests are written within this document and executed by simply loading the document. The results are shown afterwards by the testing framework within the browser as HTML output.

This approach has two big disadvantages:
  • All the tests have to work in the context of the HTML page. The JavaScript unit tests usually depend highly on the HTML markup. If a lot of different cases have to be tested, a new HTML page has to be created each time. These pages are usually only slightly different but cause a lot of trouble and effort in the test maintenance.
  • The test results are usually shown as HTML output in the browser and cannot be processed automatically. But automatic processing is very important in order to fail the Continuous Integration build and reject the check-in.

But there is JS-Test-Driver, a tool especially made for the integration of JavaScript unit tests into CI builds as well as for an easy definition of HTML markup. It makes it much easier to execute JavaScript unit tests within a CI build and reduces the effort of writing tests.

JS-Test-Driver

JS-Test-Driver is a great Unit Testing framework, which supports inline definition of DOM elements and a seamless integration into the Continuous Integration build.

The HTML markup for unit tests is not written in a separate HTML page. It can be defined with a special DOC comment, e.g. /*:DOC += */. The HTML document is automatically created and can be used within your test case.

var MainTest = TestCase("MainTest");

MainTest.prototype.testMain = function() {
  /*:DOC += <div class="main"></div> */
  assertNotNull($('.main')[0]);
};

That is the reason why JS-Test-Driver is my favorite JavaScript test framework. It scales like a charm and allows HTML tags to be defined within the tests. Additionally, it can be easily integrated into the build process.

Configuration of JS-Test-Driver:

The following script shows how to configure JS-Test-Driver. It is quite self-explanatory. The "server" declaration defines the binding for the started server, "load" defines which scripts should be available during the tests and "test" defines where the unit tests are located. Additionally, plug-ins like code coverage calculation can be integrated as well.

server: http://localhost:4224

load:
 - Script/Main/*.js
 - Script/Page/*.js

test:
 - Script/UnitTests/*.js

plugin:
 - name: "coverage"   
   jar: "coverage.jar"   
   module: "com.google.jstestdriver.coverage.CoverageModule"

Integrate JS-Test-Driver into Team Foundation Server Build

JS-Test-Driver starts a server and a browser instance, runs the tests for you and posts the results to the server. The results can be evaluated during the CI build and check-ins can even be rejected when just one test case fails. Afterwards, JS-Test-Driver shuts down the server and the browser again.

To integrate JS-Test-Driver into the TFS build, a configuration file (like the one above) and a build target have to be created:

<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- SolutionRoot is the directory where the solution file exists -->
    <SolutionRoot>$(MSBuildStartupDirectory)\..</SolutionRoot>
  </PropertyGroup>

  <Target Name="JSTestDriver">
    <PropertyGroup>
      <JSTestDriverJar>$(SolutionRoot)\JsTestDriver\JsTestDriver-1.3.4.b.jar</JSTestDriverJar>
      <JSTestDriverConfig>$(SolutionRoot)\JsTestDriver\jsTestDriver.conf</JSTestDriverConfig>
      <BrowserPath>C:\Program Files (x86)\Internet Explorer\iexplore.exe</BrowserPath>
    </PropertyGroup>

    <Exec Command='java -jar "$(JSTestDriverJar)" --port 40000 --basePath "$(SolutionRoot)" --browser "$(BrowserPath)" --config "$(JSTestDriverConfig)" --tests all --verbose' />
  </Target>

</Project>

This target starts JSTestDriver and can be easily executed from the local or TFS build:
build JSTestDriver

The screenshots show how the JSTestDriver target can be added to the TFS build workflow XAML. The MS Build activity uses the JSTestDriver target to start the Java jar-file and executes the javascript unit tests. If one of the tests fails the MS Build activity returns an error and therefore also the build fails. If the gated check-in is enabled, the code is not committed in the code basis until the tests are fixed.