
Among all the features of the ASP.NET framework, there is one I love the most:

moving parts of the configuration (i.e. Web.config) into separate files!

It’s just a little something, but it makes life so much easier:

  • less friction when working with Git (conflicts are smaller and come with better context; switching branches usually carries your local changes along, or lets you reset only parts of them)
  • quicker navigation, as the files follow my custom naming convention (in ReSharper, hit Ctrl+T and type just the first uppercase letters of the file name to open it)
  • custom sections can also be put in another file.


In my typical scenario, I move out the sections responsible for:

  • database connection strings (as they change the most)
  • WCF service definitions (behaviors, bindings and services themselves) as they can occupy lots of space
  • and my custom section with its continuously evolving list of options (titanSection).


Sample web.config

As the picture above shows, the separation is achieved simply by adding the “configSource” attribute to the section that is supposed to be moved out. And that’s all – just pick a file name, create the file, and inside it define the very same section tag as in Web.config, but this time with the real content.
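A minimal sketch of such a Web.config (the WCF file names below are just examples following the same convention; only Web.Connections.config is the name used later in this post):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!-- The whole section lives in a separate file; the tag here stays empty. -->
  <connectionStrings configSource="Web.Connections.config" />

  <system.serviceModel>
    <!-- Each WCF section can be outsourced in exactly the same way. -->
    <behaviors configSource="Web.ServiceModel.Behaviors.config" />
    <bindings configSource="Web.ServiceModel.Bindings.config" />
    <services configSource="Web.ServiceModel.Services.config" />
  </system.serviceModel>
</configuration>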


For example, the content of the database connection-strings file (which I named Web.Connections.config and placed next to Web.config) could look like this:

Separate file with database connection-strings
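A minimal sketch of that file (the connection name and its value are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Data Source=.;Initial Catalog=MyDatabase;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>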


You can name the files whatever you like. However, I highly encourage you to stick to a naming scheme. Anything that starts with ‘Web’ and has the ‘.config’ extension will probably be sufficient and also easy enough to remember what is inside.


A final thought on the custom section. It can be placed in a separate file in an identical way. The only difference is that this kind of section needs a code-behind C# class inheriting from ConfigurationSection, with all public properties marked with the ConfigurationPropertyAttribute attribute (and inner tags defined as separate classes inheriting from ConfigurationElement).
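A minimal sketch of such a class pair (the properties here are purely illustrative; the real TitanConfigSection grows together with the options it has to hold):

using System.Configuration;

public sealed class TitanConfigSection : ConfigurationSection
{
    [ConfigurationProperty("featureEnabled", DefaultValue = false)]
    public bool FeatureEnabled
    {
        get { return (bool)this["featureEnabled"]; }
        set { this["featureEnabled"] = value; }
    }

    // Inner tags are defined as separate classes inheriting from ConfigurationElement.
    [ConfigurationProperty("storage")]
    public StorageElement Storage
    {
        get { return (StorageElement)this["storage"]; }
    }
}

public sealed class StorageElement : ConfigurationElement
{
    [ConfigurationProperty("path", IsRequired = true)]
    public string Path
    {
        get { return (string)this["path"]; }
        set { this["path"] = value; }
    }
}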

Once the class is completed, Web.config needs to define a link between that class and the custom name of the section. Notice the “configSections” and the “section” entry on the picture above. It contains the name of the “titanSection” section and also points to the TitanConfigSection class (with an assembly name) that will handle loading all the settings.
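A sketch of that link (the namespace, the “Titan.Core” assembly name and the “Web.Titan.config” file name are placeholders):

<configuration>
  <configSections>
    <section name="titanSection"
             type="Titan.Configuration.TitanConfigSection, Titan.Core" />
  </configSections>

  <!-- The custom section itself can be outsourced exactly like the built-in ones. -->
  <titanSection configSource="Web.Titan.config" />
</configuration>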

Then only your imagination stops you from creating a config as complicated as the one below:

titanConfig


I hope it proved useful for you too!


Recently, I showed how to enable Application Insights within your own WCF server application (look here). It works similarly in other kinds of apps, so we won’t bother with further demystification of this procedure. But as you might suspect (or have already stumbled upon), you will quickly need more metadata transmitted than is sent by default.

My scenario is pretty simple. The WCF server is installed multiple times, in multiple physical locations across the whole country. And they all just sit there working, since most of the bugs aren’t critical. They can’t even be updated everywhere at the same time. So how do you actually distinguish between different instances and app versions in case of a crash or other serious misbehavior?

And the answer is – ITelemetryInitializer.

Objects of this kind are responsible for initializing the telemetry data that is gathered and then sent to the Azure Application Insights service.


Here is the recipe for defining and enabling one in your own project:

1) First, create a class that implements the ITelemetryInitializer interface.

2) Within the Initialize() method, set additional context properties. To match my scenario, let it be two fields:

  • “ClientInstance” – to indicate which server it’s running on (yes, it should be read from an external configuration file, but hard-coding it should be clear enough for now)
  • “BuildInfo” – to report the application version running there, which I read directly from the assembly name

Of course, the way these values are obtained should be adjusted to match the logic in your app.

using System.Reflection;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public sealed class ClientInstanceTelemetryInitializer : ITelemetryInitializer
{
    private const string ClientInstanceParam = "ClientInstance";
    private const string BuildInfoParam = "BuildInfo";

    private readonly string _instance;
    private readonly string _version;

    public ClientInstanceTelemetryInitializer()
    {
        _instance = "beaa9ac0-3267-41e4-9c14-2167271aca4d"; // should be different for each running instance
        _version = new AssemblyName(Assembly.GetExecutingAssembly().FullName).Version.ToString();
    }

    /// <inheritdoc />
    void ITelemetryInitializer.Initialize(ITelemetry telemetry)
    {
        // Attach both values to every telemetry item sent to Application Insights.
        telemetry.Context.Properties[ClientInstanceParam] = _instance;
        telemetry.Context.Properties[BuildInfoParam] = _version;
    }
}

3) Finally, register the initializer via the ApplicationInsights.config file (this file was added to the project automatically when the AppInsights NuGet packages were installed in the previous guide). Simply add the type at the end of the TelemetryInitializers section. It will be instantiated automatically at application startup.
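The entry could look like this (the namespace and the “MyServer” assembly name are placeholders for your own):

<TelemetryInitializers>
  <!-- ...the initializers added by the NuGet packages stay untouched... -->
  <Add Type="MyServer.Telemetry.ClientInstanceTelemetryInitializer, MyServer" />
</TelemetryInitializers>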

Telemetry Initializers

And that was all again.


Verification:

Now, when navigating to the crash logs on Azure, open the request details, click ‘Show all properties’, and the view should look similar to this one:

Azure Telemetry Info

The “Custom Data” section contains the ClientInstance and BuildInfo info, as expected.


Thanks!


Here I would like to present a small recipe that will let you enable monitoring of WCF server-side calls with Microsoft’s Application Insights service. It might help you analyze:

  • which services are used most
  • at what hours users are active
  • what the response times are
  • what is called far too often and needs optimization
  • and probably the most important thing – what crashed, why and when, with the call stack!

It’s pretty straightforward and I’ve split it into two parts.
The first part covers the tasks required on the Azure side. First and foremost, you will need an Azure account and an active subscription. I am assuming you already have one; the details are out of scope for this tutorial. Even though Azure is a paid service, Application Insights is free up to the first million events, if I remember correctly.
The second part covers how to update the server side of the application to utilize the Application Insights service. It focuses on the proper package installation via NuGet and the configuration.


Let’s go then. Do it on Azure:

  1. Log in to the Azure Portal.
  2. Click “Create a resource” and type “Application Insights” in the search field. Then click the “Create” button.

    Azure Create AppInsights resource


  3. Select “ASP.NET web application” and complete the process.
  4. After a few seconds, the new service will be created; navigate to it.
  5. What really needs to be stored/remembered is the “Instrumentation Key”. Obtain it from the “Properties” section or via the “Essentials” part of the “Overview” section, as shown below.

    Azure AppInsights Overview


  6. We are done here.


Now it’s time to update the ASP.NET WCF application to post metrics and calls to Azure appropriately.

  1. Open your solution in Visual Studio 2015 Community (or newer).
  2. Navigate to “Tools –> NuGet Package Manager –> Package Manager Settings –> Package Source”, as on the picture below.
  3. Add a new package source for the AppInsights SDK (open-sourced and hosted on GitHub).

    NuGet Sources

    Name it however you like, and use “https://www.myget.org/F/applicationinsights-sdk-labs” as the source URL.

  4. Install the “Microsoft.ApplicationInsights.Wcf” package (“Include prerelease” should be checked). It will also download all the other packages it depends on.

    AppInsights Packages

  5. Now, inside the Global.asax file, update the “Application_Start” method. Enable monitoring of the application by setting the “Instrumentation Key” – the value we got earlier from the Azure Portal when the cloud service itself was created (see the sketch after this list).

    TelemetryConfiguration.Active.InstrumentationKey = "my-instrumentation-key";

    AppInsights Instrumentation Key


  6. All the other necessary configuration could be tweaked via code or the ApplicationInsights.config file that was added to the project during the package installation. At this time, we don’t need to modify it at all.
  7. So we are done. Now the new version of the application is ready to be deployed. Once this is finished, statistics will be uploaded and visible through the portal almost in real time.

    If you visit the portal again after some time, instead of being empty, it should display content as follows:

    Azure AppInsights working
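For completeness, a sketch of where the instrumentation-key line from step 5 lands (the class name depends on your project template):

using System;
using Microsoft.ApplicationInsights.Extensibility;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Enable Application Insights with the key obtained from the Azure Portal.
        TelemetryConfiguration.Active.InstrumentationKey = "my-instrumentation-key";
    }
}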

That was it!


I am a really big fan of JetBrains’ TeamCity product. I use it a lot at work and also for my hobby projects. But unfortunately, I found it lacks one important, yet very basic feature – proper (and easy) application version management. Someone might say that this is not entirely true, since there is a built-in AssemblyInfo Patcher. Yes, OK, although that one is extremely limited and can mostly be used for very simple projects, created by the Visual Studio New Project wizard and never modified.

 

What are my special requirements then?

Well, in my world I really wish to:

 

#1. Make TeamCity fully responsible for assigning versions to all the software components.

Let’s say the hardcoded version inside the application and its libraries is “0.1.*”, so builds created on developers’ machines are quickly discoverable – just in case someone, totally accidentally and with all the best intentions in mind, tries to install one on a customer’s machine. All official public builds, though, should be at least “1.0.*“ or greater.

 

#2. Each code branch should have a different build version.

For example, the current application version built on the ‘master’ branch is ‘1.7.*’, while artifacts of the ‘develop’ branch should be ‘1.5.*’, and feature branches could be even lower, like ‘0.9.*’.

This plays nicely with requirement #1 stated above: since all version management is handled by TeamCity, there should never be any merge conflicts between those branches over versions, nor any other manual source-code plumbing required.

 

#3. It should be possible to leave BuildNumber at the default value the C# compiler generates.

BuildNumber is the third part (the number after the second dot) of the version string. By default, if left as a star (1.0.*), the C# compiler will put there today’s date, expressed as the number of days elapsed since 1st January 2000. It is extremely useful information that I would love to preserve: the value automatically increases each day and at any time can help determine how old the public release package used by a customer is.
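For reference, that default can be reproduced in a single line of PowerShell (a sketch, not part of the final script):

# Days elapsed since 2000-01-01 - the value the C# compiler puts in place of the '*'.
$buildNumber = (New-TimeSpan -Start (Get-Date '2000-01-01') -End (Get-Date)).Days
Write-Host "Default build number for today: $buildNumber"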

 

#4. It should be possible to also store the version info inside a regular text file.

For a Web (or Web API) project, it’s much easier to store the current version inside a static file (it never changes until the next release, so it can be cached server-side) and let the client apps (that actually communicate with the mentioned server part) download it at startup to check whether there are any updates, etc. It might also play a labeling role if it resides next to the compiled binaries.

 

Implementation

Here is the very short version of how I’ve chosen to complete this task:

I designed a PowerShell script, launched as the very first build step, just after the sources are downloaded from the repository and the NuGet packages are updated.

 

TeamCity.steps

 

It’s flexible enough (it accepts lots of startup parameters from TeamCity) to locate the AssemblyInfo.cs file (or its variants), read the current version stored there, and update it according to the given wishes and project spec. Then it stores the newly generated version back in the original file, optionally also in a text file, and most importantly sends this new info back to TeamCity. The version is thus generated in one place, simply based on the given inputs, and then spread to all required locations.
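The essential trick – shown here as a sketch, not the full script from the gist – is printing a TeamCity service message with the freshly composed version (the variables are assembled earlier in the script):

# Report the new version back to TeamCity so it becomes the official build number.
$newVersion = "$major.$minor.$buildNumber.$BuildRevision"
Write-Host "##teamcity[buildNumber '$newVersion']"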

 

TeamCity.run.ps1

 

Of course, to avoid any interference, the repository checkout is configured to always load the full sources from the remote, so any local changes made previously during this process are reverted and never included in the current run.

 

Check the code at my gist and have fun using it!

 

Sample calls

 

PS> .\update-buildnumber.ps1 -ProjectPath 'src/ToolsApp' -BuildRevision 15

##teamcity[buildNumber '1.0.0.15']

 

PS>  .\update-buildnumber.ps1 -ProjectPath 'src/Apps/DevToolsApp' -BuildNumber 0 -BuildRevision 21

##teamcity[buildNumber '1.0.6162.21']

 

PS> .\update-buildnumber.ps1 -ProjectPath 'src/Apps/DevToolsApp' -BuildNumber 0 -BuildRevision 29 -SkipAssemblyFileVersionUpdate $True

##teamcity[buildNumber '1.0.6162.29']


Last time I showed how to enforce the encoding of strings in a DBF table by setting the code page inside its header. I also mentioned it was the easiest way. That’s still true. But sometimes there is no room to be polite and things need to be done a little messy in the code (for example, when the DBF file is frequently recreated by a 3rd-party tool, so its header can’t reliably be altered). So each time a string value is loaded, try to recover it with these steps.

First, get back the original bytes from the loaded *text* (assuming the system incorrectly returned a Windows-1250 decoded string):

var bytes = Encoding.GetEncoding("Windows-1250").GetBytes(text);

Secondly, convert them from the correct encoding (the text was natively stored as Latin-2, aka CP-852) to UTF-8 and decode:

var convertedBytes = Encoding.Convert(Encoding.GetEncoding(852), Encoding.UTF8, bytes);
return Encoding.UTF8.GetString(convertedBytes);

Of course encoding objects can be cached to increase performance.
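For instance, a small helper with the encodings cached in static fields could look like this (the class and method names are just illustrative):

using System.Text;

internal static class DbfTextFixer
{
    private static readonly Encoding Windows1250 = Encoding.GetEncoding("Windows-1250");
    private static readonly Encoding Latin2 = Encoding.GetEncoding(852);

    public static string Fix(string text)
    {
        // Recover the raw bytes that were mis-decoded as Windows-1250,
        // then re-interpret them as CP-852 and return a proper string.
        var bytes = Windows1250.GetBytes(text);
        var converted = Encoding.Convert(Latin2, Encoding.UTF8, bytes);
        return Encoding.UTF8.GetString(converted);
    }
}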