Author: oleg

The reason why SHA-1 should not be used for password hashing is not the 2017 attacks

A surprising number of people believe that SHA-1 is broken and therefore should not be used for password hashing. “It was on the news!”, “it’s as bad as MD5”, they say. While it is indeed a bad choice for hashing passwords, the reason has nothing to do with the SHAttered attack of 2017.

SHAttered has shown the practical possibility of a collision attack, which finds messages x and y such that

SHA1(x) = SHA1(y),

while cracking a hashed password is a preimage attack, which requires an efficient way for a given hash h to find an input z so that

SHA1(z) = h.

While collision resistance is crucial for authenticity checks (such as the validity of SSL certificates), password hashing needs only pre-image resistance.

Furthermore, vulnerability to collision attacks in no way implies vulnerability to pre-image attacks.
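To make the asymmetry concrete, here is a toy brute-force pre-image search in Python. The hash is truncated to 2 bytes so the search finishes instantly; against the full 20-byte digest the same search would need on the order of 2^160 attempts:

```python
import hashlib
import itertools

def truncated_sha1(data: bytes, nbytes: int) -> bytes:
    """First nbytes of the SHA-1 digest – a deliberately weakened toy hash."""
    return hashlib.sha1(data).digest()[:nbytes]

# Pre-image attack by brute force: given only the hash h, find some input z
# with truncated_sha1(z) == h. Expected work for a 2-byte hash: ~2**16 tries.
h = truncated_sha1(b"secret", 2)
for i in itertools.count():
    z = str(i).encode()
    if truncated_sha1(z, 2) == h:
        break

print(i, z)  # number of attempts and the pre-image found
```

Note that z need not equal the original input; any input hashing to h breaks pre-image resistance.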

In fact, “Lessons From The History Of Attacks On Secure Hash Functions” shows that the overwhelming majority of proposed hash functions remain safe against pre-image attacks. For SHA-1, the successful attempts involve weakened versions of the hash function: e.g., in “New Preimage Attacks Against Reduced SHA-1” the authors show how to beat brute force for up to 57 hashing rounds instead of the normal 80, but not further. “Higher-Order Differential Meet-in-the-middle Preimage Attacks on SHA-1 and BLAKE” claims to lift the bar to 62 rounds, which is still far from the full SHA-1.

So then, is SHA-1 suitable for hashing passwords? Absolutely not! Even though we don’t know of a way to find a pre-image more efficiently than brute force, on modern hardware brute-forcing SHA-1 is fast enough. That’s the real reason why nobody should use a naive SHA-1 application as a password hashing method.

Iterated schemes such as PBKDF2 are a different matter. The large number of iterations ensures that brute-forcing becomes reasonably difficult, and the lack of pre-image attacks on the hash function ensures that brute force is the only way to go. Therefore, using SHA-1 for passwords inside PBKDF2 is out of danger for now – perhaps it would be more future-proof to use a different hash in new developments, but there shouldn’t be a rush to change it in existing systems.
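An iterated scheme can be sketched in a few lines. Python’s standard library exposes PBKDF2 directly (in .NET, Rfc2898DeriveBytes is the SHA-1-based equivalent); the password, salt, and iteration count below are illustrative, not a recommendation:

```python
import hashlib
import os

# Sketch of PBKDF2 password hashing with SHA-1 as the underlying hash.
# Each extra iteration multiplies the cost of every brute-force guess.
salt = os.urandom(16)          # random per-user salt, stored next to the hash
iterations = 100_000

derived = hashlib.pbkdf2_hmac("sha1", b"hunter2", salt, iterations)
print(derived.hex())           # 20-byte derived key, hex-encoded
```

To verify a login attempt, re-derive with the stored salt and iteration count and compare the results.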


Making NUnit parameterized tests less verbose

If you are a heavy user of NUnit parameterized tests, especially ones driven by TestCaseSourceAttribute, you might sometimes wish for a less verbose test data definition. Consider the following (contrived) example:

private static readonly object[] _additionCases =
{
  new object[] { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 13), new DateTime(2017, 06, 06, 21, 16, 27) },
  new object[] { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 56), new DateTime(2017, 06, 06, 21, 17, 10) },
  new object[] { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 59, 56), new DateTime(2017, 06, 06, 22, 16, 10) }
};

[Test, TestCaseSource(nameof(_additionCases))]
public void Addition(DateTime source, TimeSpan difference, DateTime expected)
{
  Assert.That(source.Add(difference), Is.EqualTo(expected));
}

Clearly, the repeated new object[] type declarations steal line width and don’t make the test data any easier to understand.

However, there is a solution. While the NUnit documentation illustrates TestCaseSourceAttribute with an array of arrays, it also notes that any IEnumerable will do. Therefore, we can create our own class for this, and also use the succinct initialization syntax available to collections implementing an Add method:

public class TestCaseList : List<object[]>
{
  public new void Add(params object[] data)
  {
    base.Add(data);
  }
}

With the help of this small class, the original test can be written as follows:

private static readonly TestCaseList _additionCases = new TestCaseList
{
  { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 13), new DateTime(2017, 06, 06, 21, 16, 27) },
  { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 0, 56), new DateTime(2017, 06, 06, 21, 17, 10) },
  { new DateTime(2017, 06, 06, 21, 16, 14), new TimeSpan(0, 59, 56), new DateTime(2017, 06, 06, 22, 16, 10) }
};

[Test, TestCaseSource(nameof(_additionCases))]
public void Addition(DateTime source, TimeSpan difference, DateTime expected)
{
  Assert.That(source.Add(difference), Is.EqualTo(expected));
}

That looks definitely better!

Slow ASP.NET MVC website startup in case of many view folders

Recently we’ve observed that it takes too long for our ASP.NET MVC web application to be up and running after a build. It didn’t take long to find out that the cause of the problem was the recompilation of Razor views.

Most people on the Internet are concerned with the downtime of production environments after rolling out a new version, and pre-compilation of views can solve that. In our case, however, it’s the development process that is being hampered, so pre-compilation gains nothing. Other common advice, like turning off unused view engines, hasn’t produced significant results either.

But why does it take so long to compile a bunch of views? It turns out the complexity of the views affects the total time only to a small extent; what really matters is how many view folders there are – specifically, how many views located outside of its own folder a given view uses.

I’ve created a small application based on the default ASP.NET MVC project with several partials added to the Home page:


Let’s see what Mini Profiler can tell us about the rendering times of the Index page right after a build:


Results suggest that the first partial is so complex that it takes a lot of time to render it; however, the view consists only of a small text string, and swapping the content with other partials does not make any difference. Moreover, partials 2 and 3 take practically no time to render, and so does the partial located in the same folder as the view.

Further experimentation confirms that the rendering of a partial itself takes a negligible amount of time – however, the first time the framework encounters a new folder, it stumbles quite noticeably, obviously performing some activity.

Why is it so? Let’s run the trusty DotTrace:


Let’s look at the CompileWebFile method in the Microsoft reference source:


So it seems that whenever the view compiler sees a web file, it tries to compile the whole directory (but not more!). What happens next is that we run the compiler and wait until it’s done:


Starting a process is an expensive task, of course, so that explains why we were observing a noticeable delay.

The conclusion is that dashboard-style home pages don’t play well with ASP.NET MVC if their component views are located in different folders. I don’t know of a solution to this problem, because the build strategy is buried deep inside the MVC internals, so for now the advice is to keep everything you reference in the same folder as much as possible.

Creating web sites on IIS using Powershell

Basic operation

The simplest way to create an IIS website via Powershell is to use the Carbon module:

Import-Module 'Carbon'

Install-IisWebsite -Name 'test site' -PhysicalPath 'c:\website'

If a website with a given name exists, the script will update it.

Host names

Suppose you need to run several versions of your software side by side, using neat host names such as “v1.localhost” or “v2.localhost”. For that, you need to manipulate the Windows hosts file, like this (again using Carbon):

Set-HostsEntry -IPAddress '' -HostName 'v2.localhost'

If such a host record already exists, the script will not insert a duplicate.

You can bind a website to the host in the following way:

Install-IisWebsite -Name 'test site - version 2' -PhysicalPath 'c:\v2\website' -Bindings 'http/*:80:v2.localhost'


If you use HTTPS (as you should), you will probably need to make your website use some SSL certificate. The simplest way is to use a self-signed certificate created by IIS. You can automate it like this:

$certificate = New-SelfSignedCertificate -dnsname '*.localhost'

After you have a certificate, you need to tell the website to use it:

Install-IisWebsite -Name 'test' -PhysicalPath 'c:\website' -Bindings 'http/*:80:test.localhost', 'https/*:443:test.localhost'
$applicationId = [System.Guid]::NewGuid().ToString()
Set-IisWebsiteSslCertificate -SiteName test -Thumbprint $certificate.Thumbprint -ApplicationID $applicationId

This will create an IIS website with proper HTTPS bindings.

Note that Carbon’s function Set-IisWebsiteSslCertificate requires a parameter called ApplicationId. You can supply a meaningful value here, but I just use a new guid.

Detecting if a certificate has been created

Sometimes you need a script which takes advantage of existing data, proceeding without errors if some items already exist and just adding the missing bits. The certificate creation described above is not suitable for this purpose, as it creates a certificate unconditionally. Let’s try to get the existing certificate and check whether it exists:

$certificate = Get-ChildItem -Path CERT:LocalMachine/My |
    Where-Object -Property DnsNameList -EQ '*.localhost' |
    Select-Object -first 1

if (!$certificate) { Write-Output 'Need to create certificate' }
else { Write-Output 'Certificate is already there' }


Putting everything together

Let’s combine the previous material in a couple of reusable functions to create a self-signed certificate if it does not exist, and create an HTTPS-enabled website with a corresponding host entry:

function Get-Or-Create-Certificate($certificateDns)
{
    $certificate = Get-ChildItem -Path CERT:LocalMachine/My |
        Where-Object -Property DnsNameList -EQ $certificateDns |
        Select-Object -First 1

    if (!$certificate)
    {
        Write-Output "Creating self-signed certificate for $certificateDns"
        $certificate = New-SelfSignedCertificate -DnsName $certificateDns
    }
    else
    {
        Write-Output "Self-signed certificate for $certificateDns already exists"
    }

    return $certificate
}

function Create-WebSite($hostName, $path, $siteName, $certificate)
{
    Write-Output "Adding host $hostName"
    Set-HostsEntry -IPAddress '' -HostName $hostName

    Write-Output "Creating website $siteName on IIS"
    Install-IisWebsite -Name $siteName -PhysicalPath $path -Bindings "http/*:80:$hostName", "https/*:443:$hostName"

    Write-Output "Attaching certificate to the website $siteName"
    $applicationId = [System.Guid]::NewGuid().ToString()
    Set-IisWebsiteSslCertificate -SiteName $siteName -Thumbprint $certificate.Thumbprint -ApplicationID $applicationId
}

The functions above are made to be idempotent, so you can call them several times without errors. Use them as follows:

$certificate = Get-Or-Create-Certificate -certificateDns '*.localhost'
Create-WebSite -hostName 'v1.localhost' -path 'c:\v1\website' -siteName v1 -certificate $certificate
Create-WebSite -hostName 'v2.localhost' -path 'c:\v2\website' -siteName v2 -certificate $certificate

Feel free to tailor those scripts to your needs.

.NET environment automation with Powershell

In the upcoming series of short posts, I would like to describe how some common developer tasks in .NET ecosystem can be automated via Powershell.

The list follows:
– Creating a website on IIS, including creation and usage of a self-signed certificate
– Setting Sql Server authentication mode and creating database logins
– Creating a database based on a Visual Studio database project
– Executing Sql queries
– Calling REST APIs

Migrating from TFS to Git in Visual Studio Team Services

This post will explain how to migrate from legacy Team Foundation Server version control to Git-based source control. It targets the case when both TFS and Git are hosted on Visual Studio Team Services (VSTS, formerly Visual Studio Online), but a great deal of it will also apply to on-premises TFS or other Git hosting options.

The general migration strategy is pretty simple: use the git-tfs bridge to create a local git repository which mirrors the existing TFS one, then create a remote git repository on your hosting of choice and push from the TFS mirror into it. However, you may encounter plenty of unexpected issues, so I’ll discuss workarounds for them.

Let’s start from the general outline.
First, install git and git-tfs.

Then, create a git repository in VSTS and start a master branch by creating an initial commit.

After that, proceed with the following commands:

git tfs clone <TFS collection url> $/branchPath . --export --branches=none

git remote add origin <VSTS git repository url>

git push -uf origin master~2400:master
git push -u origin master~2200:master
git push -u origin master~2000:master
git push -u origin master:master

Adjust TFS urls in git tfs clone and git remote add according to your setup. We will discuss the magic numbers below.

Why is the push so complicated?

You might expect to send the whole repository to VSTS with a simple push. However, if your repository is large enough, things won’t be so simple. What you’ll probably get instead is this cryptic message:

RPC failed; result=22, HTTP code = 404

A lot of results on the Internet suggest increasing the post buffer:

git config http.postBuffer 524288000

It’s good to do so, but it probably won’t solve the problem completely.
There is no explanation given for this VSTS behaviour, but there is an official workaround: push in chunks.

Firstly, you’ll need to know the total number of commits in your branch, which can be retrieved using the following command:

git rev-list master --first-parent --count

Now choose a number somewhat smaller than that (2400 in my example), and push all commits which are farther than that number of commits from the branch tip. Then repeat the process, decreasing the number (pushing commits up to more and more recent points in time), until you can finish with a plain git push origin master:master. The chunk size is determined empirically.

Make sure the first push is forced (-f) to overwrite the initial commit you might have created to bootstrap the master branch. If you don’t, you’ll end up with a messy history.
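The chunked push can also be scripted. A sketch (the helper name is mine): it only prints the series of push commands, so you can inspect the list and then pipe it through sh once it looks right.

```shell
#!/bin/sh
# chunk_pushes TOTAL CHUNK: print the git push commands needed to push a
# branch of TOTAL commits to origin in chunks of CHUNK commits each.
chunk_pushes() {
    total=$1
    chunk=$2
    n=$total
    flags='-uf'                      # the very first push must be forced
    while [ "$n" -gt 0 ]; do
        echo "git push $flags origin master~$n:master"
        flags='-u'
        n=$((n - chunk))
    done
    echo "git push -u origin master:master"
}

chunk_pushes 2400 200
```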

Why export flag?

The --export flag for git tfs clone is needed purely for work item tracking. By default, git-tfs retrieves the work items related to your changesets and stores this information in git commit notes, but VSTS is unable to show links between a commit and a work item based on that. The --export flag instead includes the related work item information directly in the commit message, which VSTS understands and displays properly.

In case you wonder, the commit message format will be as follows:

<Original commit message>
work-items: #<work item id>
git-tfs-id: [<TFS project collection url>][<TFS branch path>];C<changesetId>

Why no branches?

In my experience, cloning all the branches made it impossible to push to VSTS even with chunking: due to merge changesets, one chunk was just too big to fit, and Microsoft support was not really willing to help solve this issue. Therefore I decided to clone only the trunk, which also produced a nice linear history.

How to port changesets appearing in TFS after the migration?

If there is pending work based on TFS (e.g. some items are under review or testing when you perform the migration), chances are new changesets will appear in TFS after the migration. It is possible to migrate them to git as well.

To do that, first fetch the latest changesets using git-tfs. Then go to your local git repository which is already connected to the remote VSTS git, and add a remote pointing to the TFS mirror created by git-tfs. Fetch that remote and cherry-pick the commits representing the newly added changesets. This ports those changes into your local branch, and you only need to push them to the VSTS remote.

# in your git-tfs clone of the TFS repository
git tfs pull

# in your git repository connected to the remote VSTS git repository
git remote add tfs-mirror </path/to/git-tfs/mirror/on/your/pc>
git fetch tfs-mirror
git cherry-pick <commit from the tfs-mirror>

# ...proceed as usual
git push

Gotcha when using git-tfs from Git Bash

If you get the following message when running git-tfs under bash:

TFS repository can not be root and must start with “$/”

prepend the command with MSYS_NO_PATHCONV=1 (or, use cmd or Powershell).

A note about credentials

On some systems, the git command line doesn’t work with VSTS out of the box (you may encounter permission errors). If that’s the case, install Git Credential Manager for Windows. You might also need to create a personal access token.

Passing complex objects to ASP.NET MVC action using HTTP GET method via jQuery ajax

Sometimes, you might encounter a minor issue in ASP.NET MVC: via AJAX, you can easily pass whatever object you want to a POST action, but doing so with a GET endpoint is not so obvious. A plain object will work, but once it gets more complex, containing nested objects, everything breaks down and the nested objects’ properties are never filled in.

As an example, consider this jQuery bit:

var obj = { prop1: "val1", nested: { prop2: "val2" } };

$.ajax({
    type: "GET",
    data: obj
});

This is what it sends to the server:

prop1=val1&nested%5Bprop2%5D=val2

or, unencoded:

prop1=val1&nested[prop2]=val2
It seems that the array syntax is not welcomed by MVC. What IS welcomed then? It turns out dot member access is parsed quite nicely – all we need to do is replace foo[bar] with in the case of a non-numeric array index. The sample code, adapted from a StackOverflow answer, looks like this:

var obj = { prop1: "val1", nested: { prop2: "val2" } };
var data = $.param(obj).replace(/%5b([^0-9].*?)%5d/gi, '.$1');

$.ajax({
    type: "GET",
    data: data
});

Which produces the following result, happily consumed by MVC:

prop1=val1&nested.prop2=val2


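Since the replacement operates on the already-encoded string, it can be checked without jQuery at all. A standalone sketch (plain Node.js, with the encoded form that $.param produces hard-coded):

```javascript
// The string $.param produces for { prop1: "val1", nested: { prop2: "val2" } }:
var encoded = "prop1=val1&nested%5Bprop2%5D=val2";

// Same replacement as in the jQuery sample: turn %5Bkey%5D (i.e. [key]) into
// .key for non-numeric keys, leaving numeric array indices untouched.
var fixed = encoded.replace(/%5b([^0-9].*?)%5d/gi, '.$1');

console.log(fixed); // prop1=val1&nested.prop2=val2
```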
ASP.NET performance measurements on desktop Windows

Proper benchmarking has always been hard – it is trivial to get some numbers, but not so trivial to obtain relevant ones, that is, results describing the behaviour of the system under test rather than of the auxiliary setup.

One example is testing ASP.NET web site performance on a developer machine running Windows. It is very easy to get pretty low results compared to other systems like Apache, and it is tempting to draw far-reaching conclusions about the platform’s performance from them. For a less obvious example, a blog post I stumbled upon recently compared the performance of a Web API project using synchronous and asynchronous code, with the threaded solution blown out of the water by an order of magnitude.

However, such results do not signify a weakness of the Windows/IIS/ASP.NET platform in general, for the simple reason that IIS running on non-server Windows is limited to a pretty small number (3 or 10) of concurrent web requests, which naturally limits the throughput of a web application quite heavily if the request processing time is not very small. It might not be a bottleneck if your requests are done in 50 ms or so, but if the idea is to simulate a long call to an external web service, the results stop being realistic very quickly.

How to get the real data then? One way, of course, is to use a Windows Server machine for performance testing. However, that may make for a less than optimal feedback cycle when you want to work in a measure – adjust – measure loop. Another solution is to use the self-hosting capabilities present for a long time in Web API and recently added to ASP.NET MVC as well. In my experience, such hosts are quite fast and have no artificial limitations even on desktop Windows.

Multiple web servers simulation using nginx

Recently, I had to prepare a web application to run on multiple web servers (also known as a web farm). The goal is either failover (another web server replaces the primary one if the latter is down) or increased performance through load balancing between several web servers. I wanted to perform most of the tests on a single machine, with several web sites sharing the same code, to get the benefit of a quick feedback cycle – without the need to touch another machine or roll up a VM.

As we use Microsoft technologies, the solution seemed pretty obvious: use IIS Application Request Routing (ARR) to set up a web farm. However, on a local machine this proved to be a difficult task. After a day and a half of trying and getting nothing but 503 errors, I decided to search for another solution.

That other solution turned out to be nginx, a widely used static/proxy web server found mostly on Unix machines. I was able to configure it in 40 minutes, and while the Windows version of the server is not particularly performant, it turned out to be good enough for our purposes.

This is an example of a minimal config you might use to simulate a web farm on localhost:

worker_processes  1;

error_log  logs/error.log info;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    client_max_body_size 100m;

    keepalive_timeout  65;

    upstream farm {
        server localhost:8003;
        server localhost:8004;
    }

    server {
        listen       8007 ssl;
        server_name  lb.localhost;

        ssl_certificate      cert.pem;
        ssl_certificate_key  cert.key;

        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        location / {
            proxy_pass https://farm;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port 8007;
        }

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
In a nutshell, listen directive specifies on which port nginx will listen to incoming requests, and proxy_pass defines how nginx will forward them to the server set specified in the upstream directive. Normally nginx uses a round robin procedure to route the request, that is, first request lands on the first server listed in the upstream, next request lands on the subsequent one and so forth, starting from the beginning if the server list is exhausted.
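The round-robin behaviour described above can be modelled in a few lines; an illustrative Python sketch (not nginx code), using the two upstream servers from the config:

```python
from itertools import cycle

# Model of nginx's default round-robin upstream selection: each incoming
# request takes the next server in the rotation, wrapping around at the end.
servers = cycle(["localhost:8003", "localhost:8004"])

first_four = [next(servers) for _ in range(4)]
print(first_four)
```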

Note that for HTTPS, you will need to acquire SSL certificates and put them into the nginx configuration directory (cert.pem is a certificate, and cert.key is a private key). The easiest (and free) way to do so is to use

You can run nginx via typing start nginx at command line.

Note that the ssl_session_cache directive present in the default nginx config is omitted here because it doesn’t work on Windows (if not removed, it will cause nginx to fail miserably). Also, client_max_body_size is increased from the default value in case your site supports uploads of large files (otherwise, you will get unexpected errors from nginx). The proxy_set_header directives are needed in case your web site wants to know the outward domain name of the website.

Sql Server Management Studio cannot edit rows with long text

Today, I found out why Sql Server Management Studio refused to edit a row in a table, failing with a “String or binary data would be truncated” error. I checked all the columns’ content, and it fit the column definitions perfectly; and there were no triggers on that table at all.

Using a profiler, I found that the actual SQL it tried to perform was an UPDATE with some optimistic-concurrency-looking checks:

UPDATE TOP(200) Table SET Field1 = @Field1 WHERE (Id = @Param1) AND … (Field2 LIKE @Param2)

Now, @Param2 was a 10 KB text which lived nicely in an ntext column – except that a LIKE pattern cannot be longer than 8,000 bytes according to the documentation!

Good work, SSMS 🙂