Here I would like to present a small recipe that will let you enable monitoring of WCF server-side calls with Microsoft’s Application Insights service. It might help you analyze:

  • which services are used most
  • at what hours users are active
  • what the response times are
  • what is called far too often and needs optimization
  • and, probably most important, what crashed, why and when, complete with the call stack!

It’s pretty straightforward and I have split it into two parts.
The first part covers the tasks required on the Azure side. First and foremost, you will need an Azure account and an active subscription; I am assuming you already have one, as the details are out of scope for this tutorial. Even though Azure is a paid service, Application Insights is free up to the first million of events, if I remember correctly.
The second part describes how to update the server side of the application to use the Application Insights service. It focuses on installing the proper NuGet packages and their configuration.

Let’s go then. First, on the Azure side:

  1. Log in to the Azure Portal.
  2. Click “Create a resource”, type “Application Insights” into the search field, then click the “Create” button.

    Azure Create AppInsights resource

  3. Select “ASP.NET web application” and complete the process.
  4. After a few seconds the new service will be created; navigate to it.
  5. What you really need to store/remember is the “Instrumentation Key”. Obtain it from the “Properties” section or via the “Essentials” part of the “Overview” section, as shown below.

    Azure AppInsights Overview

  6. We are done here.

Now it’s time to update the ASP.NET WCF application to post metrics and calls to Azure appropriately.

  1. Open your solution in Visual Studio 2015 Community (or newer).
  2. Navigate to “Tools –> NuGet Package Manager –> Package Manager Settings –> Package Source”, as in the picture below.
  3. Add new source for AppInsights SDK (open-sourced and hosted on GitHub).

    NuGet Sources

    Name it however you like, and use “https://www.myget.org/F/applicationinsights-sdk-labs/nuget” as the source URL.
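
    If you prefer editing the nuget.config by hand instead of clicking through the dialog, the equivalent entry would look roughly like this (the key name is arbitrary):

    <packageSources>
      <add key="AppInsights SDK Labs" value="https://www.myget.org/F/applicationinsights-sdk-labs/nuget" />
    </packageSources>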

  4. Install the “Microsoft.ApplicationInsights.Wcf” package (“Include prerelease” should be checked). It will also download all the other packages it depends on.

    AppInsights Packages

  5. Now, inside the Global.asax file, update the “Application_Start” method. Enable monitoring of the application by setting the “Instrumentation Key” to the value we got earlier from the Azure Portal, when the cloud service itself was created (a fuller sketch of the whole method follows below the screenshot).

    TelemetryConfiguration.Active.InstrumentationKey = "my-instrumentation-key";

    AppInsights Instrumentation Key
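
    For context, a minimal Global.asax.cs could then look like the sketch below; it is just the usual boilerplate around the key assignment, and the key value itself is a placeholder:

    using System;
    using System.Web;
    using Microsoft.ApplicationInsights.Extensibility;

    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // paste here the key copied from the Azure Portal:
            TelemetryConfiguration.Active.InstrumentationKey = "my-instrumentation-key";
        }
    }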

  6. All the other necessary configuration can be tweaked via code or via the ApplicationInsights.config file that was added to the project during package installation. At this point we don’t need to modify it at all.
  7. So we are done. The new version of the application is ready to be deployed. Once that is finished, statistics will be uploaded and visible through the portal almost in real time.

    If you visit the portal again after some time, instead of being empty it should display content as follows:

    Azure AppInsights working
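
One more trick before we finish: if the automatically collected WCF telemetry is not enough, custom events and exceptions can be reported by hand via TelemetryClient. A minimal sketch (the event name is purely an example):

    using System;
    using Microsoft.ApplicationInsights;

    var telemetry = new TelemetryClient();

    // a custom business-level event, shown in the portal next to the WCF calls:
    telemetry.TrackEvent("ReportGenerated");

    try
    {
        // ... service logic ...
    }
    catch (Exception ex)
    {
        // an exception with the full call stack, searchable in the portal:
        telemetry.TrackException(ex);
        throw;
    }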

That was it!


And also preserving the machine’s role in the company!

Recently I discovered that Visual Studio 2010 with SP1 resides on my build server. It occupies a tremendous amount of space and at first glance it was totally unnecessary there at all. So I had this brilliant thought, immediately went to Control Panel and hit Uninstall. Which, of course, failed miserably after 20 minutes of processing, then trashed the hard drive even more with a finalization step that rolled back the whole procedure…

It failed due to the lack of the original .msi file that Visual Studio 2010 Premium was installed from. The uninstaller asked me to point to its location, but I no longer possess the DVD nor the ISO. A pity, because it could have saved me a lot of stress and time. And yes, I had been warned (and even agreed to take the risk) that it could leave the whole installation in an undefined state, where the IDE won’t work and SP1 can’t be applied again.

My problem was not the IDE at all, but all the builds that failed to compile from then on. The message looked like this:

C:\TC.build\z-2022\Project.Dto\Project.Dto.csproj(222, 3): error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\Portable\v4.0\Microsoft.Portable.CSharp.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.
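
For reference, the <Import> the error complains about is the one every Portable Class Library project carries; it typically looks like this (attribute details may differ from project to project):

    <Import Project="$(MSBuildExtensionsPath32)\Microsoft\Portable\$(TargetFrameworkVersion)\Microsoft.Portable.CSharp.targets" />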

And fixing it became mission critical.

Fast-forward to the day after.

Now, let me proudly present the solution:

  1. Download the Visual Studio 2010 uninstaller from here (cached also by me here).
  2. Open command prompt and type the command:
    .\VS2010_Uninstall-RTM.ENU.exe /full
  3. Hit enter and grab a coffee as it will take some time.
  4. Then download the Portable Library Tools from here.
  5. Type the command:
    .\PortableLibraryTools.exe /buildmachine
  6. Hit enter and wait a few seconds to restore everything to normal.
    Note: without the /buildmachine switch it won’t install anything until it finds VS 2010 SP1 first!
  7. Optionally, if the .NET Framework seems to be dead too, it might be useful to also install the .NET Framework 4.6.2 Developer Pack.
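
Before kicking off a full build, a quick sanity check that the missing file is back in place (the path is taken from the error message above):

    dir "C:\Program Files (x86)\MSBuild\Microsoft\Portable\v4.0\Microsoft.Portable.CSharp.targets"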

All green again!


I really like development in Visual Studio, even if I have a feature to implement for other platforms, and anytime I can, I try to continue this experience. Period.

This time I had the pleasure of wrapping several HTTPS calls using libcurl. But how to make it run on my x64 Windows machine? There is some info on StackOverflow.com, which can be adapted to VS2017 in the following steps:

  1. Start the x64 native tools console (the path below targets the VC14 toolset): "%ProgramFiles(x86)%\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" amd64
  2. Download the curl-7.53.1.zip source code and unzip it.
  3. Enter the “winbuild” folder.
  4. Compile with the built-in Windows SSL support: nmake /f Makefile.vc mode=static VC=14 ENABLE_IPV6=no MACHINE=AMD64 (optionally, mode can be set to ‘dll’ to have later one more DLL to deal with in the project).
  5. Grab the output from the “/builds/libcurl-vc14-AMD64-release-static-sspi-winssl” folder.
  6. Set up the new project’s ‘include’ and ‘lib’ folders (put libcurl_a.lib or libcurl.lib into the linker inputs); one more static-linking gotcha is mentioned below the list.
  7. And remember to take a look at the samples!
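
The promised gotcha about the static variant: as far as I remember, the consuming project must also define CURL_STATICLIB in the preprocessor and link a few extra system libraries, otherwise the linker complains about unresolved curl symbols:

    Preprocessor definitions: CURL_STATICLIB
    Additional linker inputs: libcurl_a.lib;ws2_32.lib;crypt32.lib;wldap32.lib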



Last time I showed how to enforce the encoding of strings in a DBF table by setting the code page inside its header. I also mentioned it was the easiest way. That’s still true. But sometimes there is no room to be polite and things need to be done a little messily in the code (for example, when the DBF file is often recreated by a 3rd-party tool and can be altered in any way). So each time a string value is loaded, try to recover it with the following steps.

First, get back the original bytes from the loaded *text* (assuming the system inappropriately returned a Windows-1250-decoded string):

var bytes = Encoding.GetEncoding("Windows-1250").GetBytes(text);

Secondly, convert them from the correct encoding (the value was natively stored as Latin-2, a.k.a. CP-852) to UTF-8:

var convertedBytes = Encoding.Convert(Encoding.GetEncoding(852), Encoding.UTF8, bytes);
return Encoding.UTF8.GetString(convertedBytes);

Of course the encoding objects can be cached to increase performance, as in the sketch below.
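
Put together, and with the encodings cached, a small helper could look like this sketch (the class and member names are mine, purely illustrative):

using System.Text;

static class DbfText
{
    // cached encoding objects, as repeated lookups are not free:
    private static readonly Encoding Misapplied = Encoding.GetEncoding("Windows-1250");
    private static readonly Encoding Native = Encoding.GetEncoding(852); // Latin-2 aka CP-852

    public static string Recover(string text)
    {
        // undo the wrong decoding to get back the raw bytes:
        var bytes = Misapplied.GetBytes(text);

        // re-interpret them with the encoding the DBF really used:
        var convertedBytes = Encoding.Convert(Native, Encoding.UTF8, bytes);
        return Encoding.UTF8.GetString(convertedBytes);
    }
}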


In my recent post I have shown how to track web requests issued against a remote server I don’t have control over. A curious reader could ask at this point: what have I broken this time? And the answer is, as usual, that personally I did nothing wrong, I was just doing my job. I had to port some HTTP-related code from the Windows Runtime-specific HttpClient (i.e. Windows.Web.Http.HttpClient) to also work outside the sandbox on a regular .NET 4.5 desktop installation, using System.Net.Http.HttpClient instead.

Then I noticed that everything worked fine except multipart form-data submissions using the POST method, which were behaving unexpectedly badly (timing out or returning an error immediately). There was nothing unusual mentioned on MSDN, so I started to compare the requests sent by both implementations. It turned out, of course, that the problem lay deeper in the framework, in the value serialized as the *boundary* part of the Content-Type header. The HttpClient from System.Net.Http was somehow adding extra quotes around it, while the one from Windows.Web.Http didn’t. Even though the server should accept both values (according to RFC 2046, section 5.1.1), it actually didn’t and kept waiting for a correct *boundary*.
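
To illustrate the difference (the boundary value here is just a sample), the desktop implementation was sending:

Content-Type: multipart/form-data; boundary="--abc123"

while the WinRT one produced the variant this particular server insisted on:

Content-Type: multipart/form-data; boundary=--abc123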

A simple fix, manually overwriting the guilty header, can look like this:

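// note: _client is assumed to be an already created System.Net.Http.HttpClient instance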
var request = new HttpRequestMessage(HttpMethod.Post, url);
var boundary = "--" + Guid.NewGuid().ToString("D");
request.Content = new MultipartFormDataContent(boundary);

// fill content
// ...

// update the Content-Type header to be accepted by the server:
request.Content.Headers.TryAddWithoutValidation("Content-Type", "multipart/form-data; boundary=" + boundary);

// send request:
var result = await _client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, token);


World saved. Job done!