Author: ogvolkov

Making NUnit parameterized tests less verbose

If you are a heavy user of NUnit parameterized tests, especially ones driven by TestCaseSourceAttribute, you might sometimes wish for a less verbose test data definition. Consider the following (contrived) example:

private static readonly object[] _additionCases =
{
  new object[] { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 13), new DateTime(2017, 06, 06, 21, 16, 27) },
  new object[] { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 56), new DateTime(2017, 06, 06, 21, 17, 10) },
  new object[] { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 59, 56), new DateTime(2017, 06, 06, 22, 16, 10) }
};

[Test, TestCaseSource(nameof(_additionCases))]
public void Addition(DateTime source, TimeSpan difference, DateTime expected)
{
  Assert.That(source.Add(difference), Is.EqualTo(expected));
}

Clearly, the repeated new object[] declarations steal line width and don’t make the test data any easier to understand.

However, there is a solution. While the NUnit documentation illustrates TestCaseSourceAttribute with an array of arrays, it also states that any IEnumerable will do. Therefore, we can create our own class for the test data and use the succinct collection initializer syntax available to any collection that implements an Add method:

public class TestCaseList : List<object[]>
{
  // The params signature lets each collection-initializer row list its arguments
  // directly, without the explicit new object[] { ... } wrapper.
  public new void Add(params object[] data)
  {
    base.Add(data);
  }
}

With the help of this small class, the original test can be written as follows:

private static readonly TestCaseList _additionCases = new TestCaseList
{
  { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 13), new DateTime(2017, 06, 06, 21, 16, 27) },
  { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 56), new DateTime(2017, 06, 06, 21, 17, 10) },
  { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 59, 56), new DateTime(2017, 06, 06, 22, 16, 10) }
};

[Test, TestCaseSource(nameof(_additionCases))]
public void Addition(DateTime source, TimeSpan difference, DateTime expected)
{
  Assert.That(source.Add(difference), Is.EqualTo(expected));
}

That definitely looks better!

Slow ASP.NET MVC website startup with many view folders

Recently we’ve observed that it takes too long for our ASP.NET MVC web application to be up and running after a build. It didn’t take long to find out that the cause of the problem was the recompilation of Razor views.

Most people on the Internet are concerned with the downtime of production environments after rolling out a new version, and pre-compiling the views can solve that. In our case, however, it is the development process that is hampered, so pre-compilation naturally gains us nothing. Other commonly found advice, such as turning off unused view engines, hasn’t produced significant results either.

But why does it take so long to compile a bunch of views? It turns out that the complexity of the views affects the total time only to a small extent; what really matters is how many view folders there are, or, more precisely, how many views located outside of its own folder a given view uses.

I’ve created a small application based on the default ASP.NET MVC project with several partials added to the Home page:

[Screenshot: the project structure with the added partial views]
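
For reference, the Index view looked roughly like this (the folder and partial names are illustrative, not the exact ones from the screenshot): a few partials live in a separate folder under Views, and one sits next to the view itself.

@* Views/Home/Index.cshtml (illustrative sketch) *@
@Html.Partial("~/Views/Partials/_Partial1.cshtml")   @* first view used from the Partials folder *@
@Html.Partial("~/Views/Partials/_Partial2.cshtml")   @* same folder *@
@Html.Partial("~/Views/Partials/_Partial3.cshtml")   @* same folder *@
@Html.Partial("_PartialInHomeFolder")                @* same folder as Index *@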

Let’s see what Mini Profiler can tell us about the rendering times of the Index page right after a build:

[Screenshot: Mini Profiler timings for the Index page and its partials]

The results suggest that the first partial is so complex that it takes a lot of time to render; however, that view consists of nothing but a small text string, and swapping its content with that of the other partials makes no difference. Moreover, partials 2 and 3 take practically no time to render, and neither does the partial located in the same folder as the view.

Further experimentation confirms that rendering a partial by itself takes a negligible amount of time; however, the first time the framework encounters a new folder, it stumbles quite noticeably, obviously performing some extra work.

Why is that so? Let’s run the trusty dotTrace:

[Screenshot: dotTrace profiling results for the first request]

Let’s look at the CompileWebFile method in the Microsoft reference sources:

https://referencesource.microsoft.com/#System.Web/Compilation/BuildManager.cs,1662

[Screenshot: the CompileWebFile source in BuildManager.cs]

So it seems that whenever the view compiler sees a web file, it tries to compile the whole containing directory (but no more than that!). What happens next is that we run the compiler and wait until it is done:

http://www.symbolsource.org/…Compiler.cs

[Screenshot: the Compiler.cs source, where the compiler process is started and awaited]

Starting a process is an expensive task, of course, so that explains why we were observing a noticeable delay.

The conclusion is that dashboard-style home pages don’t play well with ASP.NET MVC if their component views are located in different folders. I do not know of a solution to this problem, because the build strategy is buried deep inside the MVC internals, so for now the advice is to keep everything you reference in the same folder as much as possible.

Creating websites on IIS using PowerShell

Basic operation

The simplest way to create an IIS website via PowerShell is to use the Carbon module:

Import-Module 'Carbon'

Install-IisWebsite -Name 'test site' -PhysicalPath 'c:\website'

If a website with a given name exists, the script will update it.

Host names

Suppose you need to run several versions of your software side by side, using neat host names such as “v1.localhost” or “v2.localhost”. For that, you need to manipulate the Windows hosts file, like this (again using Carbon):

Set-HostsEntry -IPAddress 127.0.0.1 -HostName v2.localhost

If such a host record already exists, the command will not insert a duplicate.

You can bind a website to the host in the following way:

Install-IisWebsite -Name 'test site - version 2' -PhysicalPath 'c:\v2\website' -Bindings 'http/*:80:v2.localhost'

Certificates

If you use HTTPS (as you should), you will probably need to make your website use an SSL certificate. The simplest option is a self-signed certificate of the kind IIS can generate for you, and its creation can be automated like this:

$certificate = New-SelfSignedCertificate -dnsname '*.localhost'

After you have a certificate, you need to tell the website to use it:

Install-IisWebsite -Name 'test' -PhysicalPath 'c:\website' -Bindings 'http/*:80:test.localhost', 'https/*:443:test.localhost'
$applicationId = [System.Guid]::NewGuid().ToString()
Set-IisWebsiteSslCertificate -SiteName test -Thumbprint $certificate.Thumbprint -ApplicationID $applicationId

This will create an IIS website with proper HTTPS bindings.

Note that Carbon’s function Set-IisWebsiteSslCertificate requires a parameter called ApplicationId. You can supply a meaningful value here, but I just use a new guid.

Detecting if a certificate has been created

Sometimes you need a script which takes advantage of existing data, proceeding without errors if some items already exist and just adding the missing bits. The certificate creation described above is not suitable for this purpose, as it creates a certificate unconditionally. Let’s try to get the existing certificate and check whether it exists:

$certificate = Get-ChildItem -Path CERT:LocalMachine/My |
    Where-Object -Property DnsNameList -EQ '*.localhost' |
    Select-Object -first 1

if (!$certificate) { Write-Output 'Need to create certificate' }
else { Write-Output 'Certificate is already there' }


Putting everything together

Let’s combine the previous material into a couple of reusable functions that create a self-signed certificate if it does not exist, and create an HTTPS-enabled website with a corresponding host entry:

function Get-Or-Create-Certificate($certificateDns)
{
    $certificate = Get-ChildItem -Path CERT:LocalMachine/My |
        Where-Object -Property DnsNameList -EQ $certificateDns |
        Select-Object -first 1

    if (!$certificate)
    {
        Write-Output "Creating self-signed certificate for $certificateDns"

        $certificate = New-SelfSignedCertificate -dnsname $certificateDns
    }
    else
    {
        Write-Output "Self-signed certificate for $certificateDns already exists"
    }

    return $certificate
}

function Create-WebSite($hostName, $path, $siteName, $certificate)
{
    Write-Output "Adding host $hostName"
    Set-HostsEntry -IPAddress 127.0.0.1 -HostName $hostName

    Write-Output "Creating website $siteName on IIS"
    Install-IisWebsite -Name $siteName -PhysicalPath $path -Bindings "http/*:80:$hostName", "https/*:443:$hostName"

    Write-Output "Attaching certificate to the website $siteName"
    $applicationId = [System.Guid]::NewGuid().ToString()
    Set-IisWebsiteSslCertificate -SiteName $siteName -Thumbprint $certificate.Thumbprint -ApplicationID $applicationId
}

The functions above are made to be idempotent, so you can call them several times without errors. Use them as follows:

$certificate = Get-Or-Create-Certificate -certificateDns '*.localhost'
Create-WebSite -hostName 'v1.localhost' -path 'c:\v1\website' -siteName v1 -certificate $certificate
Create-WebSite -hostName 'v2.localhost' -path 'c:\v2\website' -siteName v2 -certificate $certificate

Feel free to tailor those scripts to your needs.

.NET environment automation with PowerShell

In the upcoming series of short posts, I would like to describe how some common developer tasks in the .NET ecosystem can be automated with PowerShell.

The list follows:
– Creating a website on IIS, including creating and using a self-signed certificate
– Setting SQL Server authentication mode and creating database logins
– Creating a database based on a Visual Studio Database Project
– Executing SQL queries
– Calling REST APIs

Migrating from TFS to Git in Visual Studio Team Services

This post explains how to migrate from legacy Team Foundation Server version control to Git-based source control. It targets the case where both TFS and Git are hosted on Visual Studio Team Services (VSTS, formerly Visual Studio Online), but a great deal of it will also apply to on-premises TFS or other Git hosting options.

The general migration strategy is pretty simple: use the git-tfs bridge to create a local Git repository which mirrors the existing TFS one, then create a remote Git repository on your hosting of choice and push the TFS mirror into it. However, you may encounter plenty of unexpected issues along the way, so I’ll discuss workarounds for them.

Let’s start with the general outline.
First, install Git and git-tfs.

Then create a Git repository in VSTS and start a master branch by creating an initial commit.

After that, proceed with the following commands:

git tfs clone https://company.visualstudio.com/DefaultCollection $/branchPath . --export --branches=none

git remote add origin https://company.visualstudio.com/DefaultCollection/git-project-name/_git

git push -uf origin master~2400:master
git push -u origin master~2200:master
git push -u origin master~2000:master
...
git push -u origin master:master

Adjust the TFS URLs in git tfs clone and git remote add according to your setup. We will discuss the magic numbers below.

Why is the push so complicated?

You might expect to send the whole repository to VSTS with a simple push. However, if your repository is large enough, things won’t be so simple. What you’ll probably get instead is this cryptic message:

RPC failed; result=22, HTTP code = 404

A lot of results on the Internet suggest increasing the post buffer:

git config http.postBuffer 524288000

It’s good to do so, but it probably won’t solve the problem completely.
There is no explanation given for this VSTS behaviour, but there is an official workaround: push in chunks.

Firstly, you’ll need to know the total number of commits in your branch, which can be retrieved using the following command:

git rev-list master --first-parent --count

Now choose a number somewhat smaller than that (2400 in my example) and push all commits which are farther than that number of commits from the branch tip. Then repeat the process, decreasing the number each time (pushing commits up to more and more recent points in history), until you can finish with a plain git push -u origin master:master. The chunk size is determined empirically; a scripted version of this loop is sketched below.

Make sure the first push is forced (-f) to overwrite the initial commit you might have created to bootstrap the master branch. If you don’t do this, you’ll end up with a messy history.
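
If there are many chunks, the pushes can be scripted. Below is a rough PowerShell sketch of the same procedure; the chunk size of 200 is an arbitrary example, and the branch and remote names are assumed to match the commands above.

$chunkSize = 200
$total = [int](git rev-list master --first-parent --count)

# Start somewhat below the total number of commits, as described above.
$behind = [int]([math]::Floor(($total - 1) / $chunkSize) * $chunkSize)
$first = $true

while ($behind -gt 0)
{
    if ($first)
    {
        # The first push is forced to overwrite the bootstrap commit on the remote.
        git push -uf origin "master~${behind}:master"
        $first = $false
    }
    else
    {
        git push -u origin "master~${behind}:master"
    }

    $behind -= $chunkSize
}

# Final push of the most recent commits.
git push -u origin master:master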

Why the export flag?

The export flag for git tfs clone is needed purely for work item tracking. By default, git-tfs retrieves the work items related to your changesets and stores this information in Git commit notes, but VSTS is unable to show links between a commit and a work item based on that. Instead, use the export flag to include the related work item information directly in the commit message, which VSTS understands and displays properly.

In case you are wondering, the commit message format is as follows:

<Original commit message>
work-items: #<work item id>
git-tfs-id: [<TFS project collection url>][<TFS branch path>];C<changesetId>

Why no branches?

In my experience, cloning all the branches made it impossible to push to VSTS even with the chunking. Due to a merge changeset, one chunk was just too big to fit, and Microsoft support was not really willing to help solve this issue. Therefore I decided to clone only the trunk, which also produced a nice linear history.

How to port changesets appearing in TFS after the migration?

If there is pending work based on TFS (e.g. some items are under review or testing when you perform the migration), chances are that new changesets will appear in TFS after the migration. It is possible to migrate them to Git as well.

To do that, first fetch the latest changesets using git-tfs. Then go to your local Git repository which is already connected to the remote VSTS Git repository, and add a remote pointing to the TFS mirror created by git-tfs. Fetch that remote and then cherry-pick the commits representing the newly added changesets. This ports those changes into your local branch, and you only need to push them to the VSTS remote.

# in your git-tfs clone of the TFS repository
git tfs pull

# in your git repository connected to the remote VSTS git repository
git remote add tfs-mirror </path/to/git-tfs/mirror/on/your/pc>
git fetch tfs-mirror
git cherry-pick <commit from the tfs-mirror>

# ...proceed as usual
git push

Gotcha when using git-tfs from Git Bash

If you get the following message when running git-tfs under bash:

TFS repository can not be root and must start with “$/”

prepend the command with MSYS_NO_PATHCONV=1 (see https://github.com/git-tfs/git-tfs/issues/845), or use cmd or PowerShell instead.

A note about credentials

On some systems, the git command line doesn’t work with VSTS out of the box (you may encounter permission errors). If that’s the case, install the Git Credential Manager for Windows. You might also need to create a token as described here.

Passing complex objects to ASP.NET MVC action using HTTP GET method via jQuery ajax

Sometimes you might encounter a minor issue in ASP.NET MVC: via AJAX, you can easily pass whatever object you want to your POST action, but doing the same with a GET endpoint is not so obvious. A plain object will work, but once it becomes more complex and contains nested objects, everything breaks down and the nested objects’ properties are never filled in.

As an example, consider this jQuery bit:

var obj = { prop1: "val1", nested: { prop2: "val2" } };
$.ajax({
    type: "GET",
    data : obj
})

This is what it sends to the server:

prop1=val1&nested%5Bprop2%5D=val2

or, unencoded:

prop1=val1&nested[prop2]=val2

It seems the array syntax is not welcomed by MVC. What IS welcomed, then? It turns out that dot member access is parsed quite nicely: all we need to do is replace foo[bar] with foo.bar whenever the index is non-numeric. The sample code, adapted from this StackOverflow answer (http://stackoverflow.com/a/17580574/76176), looks like this:

var obj = { prop1: "val1", nested: { prop2: "val2" } };
var data = $.param(obj).replace(/%5b([^0-9].*?)%5d/gi, '.$1');
$.ajax({
    type: "GET",
    data : data
})

Which produces the following result, happily consumed by MVC:

prop1=val1&nested.prop2=val2
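
For completeness, here is a sketch of what the receiving side might look like; the model, controller and action names are made up for illustration, but the default model binder will populate the nested property from a query string shaped like the one above.

using System.Web.Mvc;

public class NestedModel
{
    public string Prop2 { get; set; }
}

public class RequestModel
{
    public string Prop1 { get; set; }
    public NestedModel Nested { get; set; }
}

public class SampleController : Controller
{
    // GET /sample/load?prop1=val1&nested.prop2=val2
    [HttpGet]
    public ActionResult Load(RequestModel model)
    {
        // model.Prop1 == "val1", model.Nested.Prop2 == "val2"
        return Json(model, JsonRequestBehavior.AllowGet);
    }
}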

ASP.NET performance measurements on desktop Windows

Proper benchmarking has always been hard: it is trivial to get some numbers, but not so trivial to get relevant ones, that is, results describing the behaviour of the system under test rather than of some auxiliary part of the setup.

One example is testing the performance of ASP.NET websites on a developer machine running Windows. It is very easy to get pretty low numbers compared to other systems like Apache, and it is tempting to draw far-reaching conclusions about the platform’s performance from them. For a less obvious example, a blog post I stumbled upon recently compared the performance of a Web API project using synchronous and asynchronous code, with the threaded solution blown out of the water by a factor of ten.

However, such results do not signify a weakness of the Windows/IIS/ASP.NET platform in general, for the simple reason that IIS running on non-server Windows is limited to a pretty small number (3 or 10) of concurrent web requests, which naturally limits the throughput of a web application quite heavily if the request processing time is not small. This might not be a bottleneck if your requests complete in 50 ms or so, but if the idea is to simulate a long call to an external web service, the results stop being realistic very quickly.

How do we get realistic data, then? One way, of course, is to use a Windows Server machine for performance testing. However, that makes for a less than optimal feedback cycle when you want to work in a measure, adjust, measure loop. Another solution is to use the self-hosting capabilities that have been present in Web API for a long time and were recently added to ASP.NET MVC as well. In my experience, such hosts are quite fast and have no artificial limitations even on desktop Windows.
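
As an illustration, here is a minimal Web API self-host sketch (assuming the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package; the controller, port and simulated delay are made up). A host like this can be hammered by a load-testing tool without the desktop IIS connection limit getting in the way.

using System;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute("Default", "api/{controller}");
        app.UseWebApi(config);
    }
}

public class PingController : ApiController
{
    // Simulate a long call to an external web service.
    public async Task<string> Get()
    {
        await Task.Delay(500);
        return "pong";
    }
}

public class Program
{
    public static void Main()
    {
        // Self-hosted endpoint, free of the desktop IIS concurrent request limit.
        using (WebApp.Start<Startup>("http://localhost:9000"))
        {
            Console.WriteLine("Listening on http://localhost:9000/api/ping, press Enter to stop");
            Console.ReadLine();
        }
    }
}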