
I am a really big fan of JetBrains’ TeamCity product. I use it a lot at work and also for my hobby projects. But unfortunately I found it lacks one important, yet very basic feature – proper (and easy) application version management. Someone might say that this is not totally true, since there is a built-in AssemblyInfo Patcher. Yes, OK, although that one is extremely limited and mostly usable only for very simple projects, created by the Visual Studio New Project wizard and never modified.

 

What are my special requirements then?

Well, in my world I really wish to:

 

#1. Make TeamCity fully responsible for assigning versions to all the software components.

Let’s say the hardcoded version inside the application and its libraries is “0.1.*”, so that builds created on developers’ machines are quickly discoverable – just in case someone, totally accidentally and with all good intentions in mind, tries to install one on a customer’s machine. All official public builds should be at least “1.0.*” though.

 

#2. Each code branch should have a different build version.

For example, the current application version built on the ‘master’ branch is ‘1.7.*’, while artifacts of the ‘develop’ branch should be ‘1.5.*’, and feature branches could be even lower, like ‘0.9.*’.

This plays nicely with requirement #1 above: since all version management is handled by TeamCity, there should never be any merge conflicts between those branches over versions, nor any other manual source-code plumbing required.

 

#3. It should be possible to leave BuildNumber at the default generated by the C# compiler.

BuildNumber is the third part of the version string (the number after the second dot). By default, if left as a star (1.0.*), the C# compiler will put there today’s date, counted as the number of days elapsed since 1st January 2000. That is extremely useful information, which I would love to preserve: the value automatically increases each day and at any time can tell how old any public release package used by a customer is.
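For illustration, the compiler’s rule can be reproduced in a few lines (class and method names are mine; the revision part, for completeness, is the number of seconds since midnight divided by two):

```csharp
using System;

class BuildNumberDemo
{
    // Default BuildNumber for "1.0.*": days elapsed since 2000-01-01.
    static int DefaultBuildNumber(DateTime date) =>
        (int)(date.Date - new DateTime(2000, 1, 1)).TotalDays;

    static void Main()
    {
        // A build compiled on 2016-11-14 gets build number 6162.
        Console.WriteLine(DefaultBuildNumber(new DateTime(2016, 11, 14))); // prints 6162
    }
}
```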

 

#4. It should be possible to also store the version info inside regular text file.

For a Web (or Web API) project it’s much easier to store the current version inside a static file (it never changes until the next release, so it can be server-side cached) and let the client apps that actually communicate with that server part download it at startup to check if there are any updates, etc. It can also play the role of a label when it resides next to the compiled binaries.
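On the client side that check can be as simple as the sketch below (the file name and URL are hypothetical, assuming the text file contains just the version string):

```csharp
using System;
using System.Net.Http;
using System.Reflection;
using System.Threading.Tasks;

class UpdateCheck
{
    public static async Task<bool> IsUpdateAvailableAsync()
    {
        using (var client = new HttpClient())
        {
            // version.txt is the static file published next to the binaries:
            var text = await client.GetStringAsync("https://example.com/app/version.txt");
            var server = new Version(text.Trim());
            var local = Assembly.GetExecutingAssembly().GetName().Version;
            return server > local;
        }
    }
}
```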

 

Implementation

Here is, in short, how I’ve chosen to complete this task:

I designed a PowerShell script, launched as the very first build step, just after the sources are downloaded from the repository and NuGet packages are updated.

 

TeamCity.steps

 

It’s flexible enough (accepting lots of startup parameters from TeamCity) to locate the AssemblyInfo.cs file (or other variants), read the current version stored there, and update it according to the project spec. Then it stores the newly generated version back into the original file, optionally also into a text file, and, most importantly, sends this new info back to TeamCity. The version is thus generated in one place, simply based on given inputs, and then spread to all required locations.

 

TeamCity.run.ps1

 

Of course, to avoid any side effects, the repository checkout is configured to always load full sources from remote, so any local changes done during a previous run are reverted and never included in the current one.

 

Check the code at my gist and have fun using it!

 

Sample calls

 

PS> .\update-buildnumber.ps1 -ProjectPath 'src/ToolsApp' -BuildRevision 15

##teamcity[buildNumber '1.0.0.15']

 

PS>  .\update-buildnumber.ps1 -ProjectPath 'src/Apps/DevToolsApp' -BuildNumber 0 -BuildRevision 21

##teamcity[buildNumber '1.0.6162.21']

 

PS> .\update-buildnumber.ps1 -ProjectPath 'src/Apps/DevToolsApp' -BuildNumber 0 -BuildRevision 29 -SkipAssemblyFileVersionUpdate $True

##teamcity[buildNumber '1.0.6162.29']
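The `##teamcity[buildNumber '…']` lines above are TeamCity service messages: anything a build step writes to standard output in that format is picked up by the server. The same effect can be achieved from any language, e.g. (a minimal sketch, version value is made up):

```csharp
using System;

class Program
{
    static void Main()
    {
        // TeamCity parses service messages from stdout;
        // this one overrides the build number shown in the UI.
        var version = "1.0.6162.21"; // computed elsewhere
        Console.WriteLine($"##teamcity[buildNumber '{version}']");
    }
}
```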


Last time I showed how to enforce the encoding of strings in a DBF table by setting the code page inside its header. I also mentioned it was the easiest way. That’s still true. But sometimes there is no room to be polite and things need to be done a little messy in the code (for example when the DBF file is often recreated by a 3rd-party tool and can be altered in any way). In that case, each time a string value is loaded, try to recover it with these steps.

First, get back the original bytes of the loaded *text* (assuming that the system inappropriately returned a Windows-1250 encoded string):

var bytes = Encoding.GetEncoding("Windows-1250").GetBytes(text);

Secondly, convert them from the correct encoding (the text was natively stored as Latin-2, aka CP-852) to UTF-8:

var convertedBytes = Encoding.Convert(Encoding.GetEncoding(852), Encoding.UTF8, bytes);
return Encoding.UTF8.GetString(convertedBytes);

Of course, the encoding objects can be cached to increase performance.
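The two steps can be wrapped into a small helper with the encoding objects cached (class and method names are mine; on .NET Framework both code pages are available out of the box):

```csharp
using System.Text;

static class DbfText
{
    // Cached once - Encoding.GetEncoding lookups are relatively expensive.
    private static readonly Encoding Misread = Encoding.GetEncoding("Windows-1250");
    private static readonly Encoding Actual = Encoding.GetEncoding(852); // Latin-2 / CP-852

    public static string Recover(string text)
    {
        // 1. Get back the raw bytes the driver misinterpreted as Windows-1250:
        var bytes = Misread.GetBytes(text);

        // 2. Re-read them with the correct code page and return as a regular string:
        var converted = Encoding.Convert(Actual, Encoding.UTF8, bytes);
        return Encoding.UTF8.GetString(converted);
    }
}
```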


Recently I had a problem importing data from a 10-year-old set of DBF tables. All was fine until it came to reading texts with Polish diacritic marks. It worked fine on 9 out of 10 machines, all with identical configurations (or at least I had hoped they were identical and couldn’t find any differences – Windows 7 x64 PL, .NET 4.5.2, the same regional options). On that single one, all special letters got converted into some eye-hurting characters and looked plainly wrong.

As it turned out, the OleDbConnection class I used to connect (with the “Microsoft.Jet.OLEDB.4.0” provider) magically treated the strings as Windows-1250 encoded, even though they were CP-852 Latin-2. Thanks to this site for helping me find that out.

I tried to enforce the encoding by updating the 0x1D byte of the DBF header with the proper code page. Following is the list of all possible values (I used 0x64), but still it didn’t help much.

 

Value – Description

0x00 – No codepage defined
0x01 – Codepage 437 (US MS-DOS)
0x02 – Codepage 850 (International MS-DOS)
0x03 – Codepage 1252 (Windows ANSI)
0x04 – Codepage 10000 (Standard MacIntosh)
0x64 – Codepage 852 (Eastern European MS-DOS)
0x65 – Codepage 866 (Russian MS-DOS)
0x66 – Codepage 865 (Nordic MS-DOS)
0x67 – Codepage 861 (Icelandic MS-DOS)
0x68 – Codepage 895 (Kamenicky (Czech) MS-DOS)
0x69 – Codepage 620 (Mazovia (Polish) MS-DOS)
0x6A – Codepage 737 (Greek MS-DOS (437G))
0x6B – Codepage 857 (Turkish MS-DOS)
0x78 – Codepage 950 (Chinese (Hong Kong SAR, Taiwan) Windows)
0x79 – Codepage 949 (Korean Windows)
0x7A – Codepage 936 (Chinese (PRC, Singapore) Windows)
0x7B – Codepage 932 (Japanese Windows)
0x7C – Codepage 874 (Thai Windows)
0x7D – Codepage 1255 (Hebrew Windows)
0x7E – Codepage 1256 (Arabic Windows)
0x96 – Codepage 10007 (Russian MacIntosh)
0x97 – Codepage 10029 (MacIntosh EE)
0x98 – Codepage 10006 (Greek MacIntosh)
0xC8 – Codepage 1250 (Eastern European Windows)
0xC9 – Codepage 1251 (Russian Windows)
0xCA – Codepage 1254 (Turkish Windows)
0xCB – Codepage 1253 (Greek Windows)
all others – Unknown / invalid
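For reference, updating that header byte itself can be sketched like this (helper name is mine):

```csharp
using System.IO;

static class DbfHeader
{
    // Force the code-page mark at offset 0x1D of a DBF file header,
    // e.g. 0x64 for CP-852 (Eastern European MS-DOS).
    public static void SetCodePage(string path, byte codePageMark)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Write))
        {
            stream.Seek(0x1D, SeekOrigin.Begin);
            stream.WriteByte(codePageMark);
        }
    }
}
```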

 

Ultimately, the very old Visual FoxPro driver did the trick (after switching the provider to “VFPOLEDB.1”) and respected the encoding, saving me from manual string transcoding in my C# application.

 

Now you have seen everything!


In my recent post I showed how to track web requests issued against a remote server I have no control over. A curious reader could ask at this point – but what have I broken this time? And the answer is, as usual – personally I did nothing wrong, I was just doing my job. I had to port some HTTP-related code from the Windows Runtime-specific HttpClient (i.e. Windows.Web.Http.HttpClient) so it would also work outside the sandbox on a regular .NET 4.5 desktop, using System.Net.Http.HttpClient instead.

Then I noticed that everything worked fine except multipart form-data submissions using the POST method, which behaved unexpectedly badly (timing out or returning an error immediately). There was nothing unusual mentioned on MSDN, so I started to compare the requests sent by both implementations. It turned out, of course, that the problem lay deeper in the framework, in the value serialized as the *boundary* part of the Content-Type header. HttpClient from System.Net.Http was somehow adding extra quotes around it, while the one from Windows.Web.Http didn’t. Even though the server should accept both values (according to RFC 2046, section 5.1.1), it actually didn’t and kept waiting for a correct *boundary*.

A simple fix, manually overwriting the guilty header, can look like this:

var request = new HttpRequestMessage(HttpMethod.Post, url);
var boundary = "--" + Guid.NewGuid().ToString("D");
request.Content = new MultipartFormDataContent(boundary);

// fill content
// ...

// update the Content-Type header to be accepted by the server:
request.Content.Headers.Remove("Content-Type");
request.Content.Headers.TryAddWithoutValidation("Content-Type", "multipart/form-data; boundary=" + boundary);

// send request:
var result = await _client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, token);
result.EnsureSuccessStatusCode();

 

World saved. Job done!


Usually it’s not a big deal when an HTTP request to a remote server is not working on a desktop Windows machine. There are plenty of useful tools that can help in the process:

  • ones which work like a proxy and dump the whole traffic we might be interested in (Fiddler would be the best example here)
  • others that interact with the TCP/IP stack itself and look much deeper into the sent packets (like Wireshark or Microsoft Message Analyzer).

BTW. Did you know that there is a way to visualize in Fiddler HTTP traffic that was originally captured by Wireshark or MMA?
Simply export packets from Wireshark as a pcap file (File –> Export –> Export as .pcap file) and import it directly into Fiddler (File –> Import Sessions… –> Packet Capture).
Many thanks to Rick Strahl for showing me this feature.

Mobiles are a totally different world, mostly because applications run in a dedicated sandbox and there is no internal or external access at the system level at all. There is not much to do besides debugging our own code. But if that doesn’t help, it doesn’t mean we stay blind. Take a look at http://httpbin.org/ (project sources available on GitHub). It’s the server you always wanted to write yourself – a server that returns info about the client accompanied by a list of headers, info about the content for all kinds of submitted requests, and even some random binary data, if you like. The respective behavior is selected by a dedicated path on the httpbin.org server and the response is of course in JSON format (except maybe for the binary data requests).

Typical request paths:

  • /get
  • /post
  • /put
  • /delete – all of them to find out what the request really looks like
  • /status/<code> – to verify if the application handles a given error code properly
  • /bytes/<size>
  • /stream-bytes/<size> – both to download a random binary block of bytes from the server.

It *won’t* simulate your destination server at all, nor any more advanced interactions. And you will need to hack your application to be able to issue requests against this server and dump the JSON response somewhere. Still, remember it’s only for the time of application development, while fighting with a non-working request against a remote server you have no control over, so an additional #ifdef won’t hurt at all ;)
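Such a diagnostic call can be as minimal as the sketch below – POST something to httpbin and print the echoed JSON, which shows exactly which headers and body the client really sent:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class HttpBinDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // httpbin echoes back what it actually received:
            var response = await client.PostAsync(
                "http://httpbin.org/post",
                new StringContent("hello"));
            response.EnsureSuccessStatusCode();

            // The JSON body lists our headers, form/data and origin IP.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
```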

Final thought – the trick described above can also be used without any problem inside a desktop or Windows Store application. It’s not dedicated only to mobiles!