I recently noticed that my GitLab installation on a Raspberry Pi (running Raspbian Jessie) stopped updating and was stuck at version 8.7.9, while the latest one as of today is 8.16.4.

Normally apt-get update and apt-get upgrade should do the trick. But it turned out there was a change in the build system and newer packages don’t get uploaded into the ‘raspbian’ version of the repository. For details, take a look at issue #1303. Fortunately, the quick fix is short:


Edit the configuration file located at: /etc/apt/sources.list.d/gitlab_raspberry-pi2.list

And redirect the repository path from ‘raspbian/’ to ‘debian/’.
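That substitution can be scripted; here is a hedged sketch (the sample ‘deb’ line below is an assumption, check the actual contents of your file, and on the Pi apply `sed -i` with sudo to the real file):

```shell
# Demo of the path rewrite on a sample repository line; on the Pi the real
# edit would be:
#   sudo sed -i 's|/raspbian/|/debian/|' /etc/apt/sources.list.d/gitlab_raspberry-pi2.list
echo 'deb https://packages.gitlab.com/gitlab/raspberry-pi2/raspbian/ jessie main' \
  | sed 's|/raspbian/|/debian/|'
```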



EDIT: 2017-03-12:

The broken repository for 'raspbian' was fixed and this trick is no longer required to install the latest version of GitLab.


I am a really big fan of JetBrains’ TeamCity product. I use it a lot at work and also for my hobby projects. But unfortunately I found it lacks one important, yet very basic feature: proper (and easy) application version management. Someone might say that this is not entirely true, as there is a built-in AssemblyInfo Patcher. Yes, OK, but this one is extremely limited and can mostly be used for very simple projects, created by the Visual Studio New Project wizard and never modified.


What are my special requirements then?

Well, in my world I really wish to:


#1. Make TeamCity fully responsible for assigning versions to all the software components.

Let’s say the hardcoded version inside the application and its libraries is “0.1.*”, so the builds created on developers’ machines are quickly discoverable, just in case someone accidentally (with all the best intentions in mind) tries to install one on a customer’s machine. All official public builds should be at least “1.0.*” though.


#2. Each code branch should have a different build version.

For example, the current application version built on the ‘master’ branch is ‘1.7.*’, while artifacts of the ‘develop’ branch should be ‘1.5.*’, and feature branches could be even lower, like ‘0.9.*’.

This plays nicely with requirement #1 above: since all versioning management is handled by TeamCity, there should never be merge conflicts between any of those branches and versions, nor any other manual source-code plumbing required.


#3. It should be possible to set BuildNumber to the default value the C# compiler uses.

BuildNumber is the third part (the number after the second dot) of the version string. By default, if left as a star (1.0.*), the C# compiler will put there today’s date, counted as days elapsed since 1st January 2000. It is extremely useful information, which I would love to preserve. This value automatically increases each day, and at any time can help determine how old a public release package used by a customer is.
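The same date-based number can be reproduced outside the compiler; a minimal sketch (GNU date assumed):

```shell
# Days elapsed since 2000-01-01 (UTC): the value the C# compiler substitutes
# for the build-number component when the version is left as "1.0.*".
days=$(( ( $(date -u +%s) - $(date -u -d '2000-01-01' +%s) ) / 86400 ))
echo "$days"
```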


#4. It should be possible to also store the version info inside a regular text file.

For a Web (or Web API) project, it’s much easier to store the current version inside a static file (as it never changes until the next release, so it can be cached server-side) and let the client apps (that actually communicate with that server) download it at startup to check whether there are any updates. It might also play a label role if it resides next to the compiled binaries.



Here is, in short, how I chose to complete this task:

I designed a PowerShell script, launched as the very first build step, just after the sources are downloaded from the repository and NuGet packages are updated.




It’s flexible enough (accepting lots of startup parameters from TeamCity) to locate the AssemblyInfo.cs file (or other variants), read the current version stored there, and update it according to the project spec. Then it stores the newly generated version back into the original file, optionally also into a text file, and most importantly sends this new info back to TeamCity. The version is thus generated in one place, based on the given inputs, and then spread to all required locations.
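The way the new number reaches TeamCity is a service message printed to stdout; any build step can emit one. A minimal sketch (the version value below is just an example):

```shell
# TeamCity scans the build log for service messages; printing this line
# overrides the build number shown in the UI.
version="1.0.6162.21"
echo "##teamcity[buildNumber '$version']"
```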




Of course, to avoid any interference, the repository checkout is configured to always fetch clean sources from the remote, so any local changes made during a previous run are reverted and never included in the current one.


Check the code at my gist and have fun using it!


Sample calls


PS> .\update-buildnumber.ps1 -ProjectPath 'src/ToolsApp' -BuildRevision 15

##teamcity[buildNumber '']


PS>  .\update-buildnumber.ps1 -ProjectPath 'src/Apps/DevToolsApp' -BuildNumber 0 -BuildRevision 21

##teamcity[buildNumber '1.0.6162.21']


PS> .\update-buildnumber.ps1 -ProjectPath 'src/Apps/DevToolsApp' -BuildNumber 0 -BuildRevision 29 -SkipAssemblyFileVersionUpdate $True

##teamcity[buildNumber '1.0.6162.29']


Virtual hard drives used by virtual systems running under Hyper-V on Windows 8 Pro (or later) can very quickly grow huge. Thankfully there is a nice and easy procedure that I always use to minimize and compact them, which goes as follows:

  1. Turn on the system that is going to be optimized and log in.
  2. Empty the recycle bin, clean `temp` folders and remove all other unneeded stuff from the system drive (like old system restore points), turn off hibernation and finally turn off or reduce the size of the paging file.
  3. Defrag if necessary (even 3 times, if using Windows XP as the guest OS).
  4. *critical* – zero the freed space using the sdelete.exe utility from Sysinternals (available here).
  5. Turn off the guest system.
  6. Run the Hyper-V management console and optimize the VHD size from there.

Done. Good job!


Git is a marvelous tool. It's like a developer's toolset shaped into a Swiss Army knife. It pays to have it, yet you still need a bit of training not to hurt yourself (like losing a whole day’s work!). Most importantly, training to know the name of the command for the task you are about to do. Those commands are not very obvious; somehow Mercurial and SVN did a better job here, which is why I am providing my own list in this short tutorial. I dislike being forced to know everything by heart. Feel free to copy and use it as you like! Or share your own set of commands with me (here or via Twitter).

Let’s start. How to:

  1. Remove a local branch that was never pushed to the server?

    git branch -d <branch_name>

    or, if it was never merged into any other branch, use `-D` instead to force the deletion and forget about the code.

  2. Remove a branch from the server, when it was pulled from remote, already merged and is no longer needed?

    # delete locally:
    git branch -d <branch_name>

    # apply changes to remote (like a push of nothing to specified branch)
    git push origin :<branch_name>

    Server will then reply with a message similar to:
    - [deleted]           <branch_name>

    Of course replace `<branch_name>` with respective name of the branch.
    And since every person who cloned the repository has a full local copy, each of them has to remove the reference to the now non-existing remote branch (otherwise they could accidentally push this branch again) by typing:

    git pull
    git remote prune origin

  3. Change last commit

    git commit --amend

    Additionally use `--author="<name> <email>"` to even override the author’s name. If there are any files in the staging area, their new content will be applied as part of this commit and the originally committed data will be lost.
    Of course, the recommended moment to perform this operation is before pushing the commits to the server. Otherwise it will create a huge number of conflicts and other bad consequences.

  4. Reverse and forget about last commit

    git reset --hard HEAD~1

    This one, too, should only be executed on local commits. Playing with commits already pushed to the server may cause conflicts.

  5. Uncommit last commit and keep changes in the staging area

    git reset --soft HEAD~1

    It will simply undo the last `git commit` execution.
    Then, to remove a selected file from the staging area and keep it out of the next commit, type:

    git reset <file_name>

  6. Remove a tag from the server (when pushed accidentally)

    # remove it locally
    git tag -d <tag_name>

    # ask the server to perform the same operation

    git push origin :refs/tags/<tag_name>
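The tag-removal flow above can be exercised end to end in a throwaway setup, with a local bare repository standing in for the server (all names are made up):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"          # stand-in for the server
git init -q "$tmp/work" && cd "$tmp/work"
git config user.email demo@example.com && git config user.name demo
echo x > file.txt && git add . && git commit -qm init
git remote add origin "$tmp/origin.git"
git tag oops && git push -q origin oops       # the accidentally published tag
git tag -d oops                               # remove it locally
git push -q origin :refs/tags/oops            # ask the server to do the same
git ls-remote --tags origin                   # prints nothing: the tag is gone
```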

May the force be with you!


The Internet is full of tutorials about git usage, so here is mine too. But instead of covering the basics, I wish to present a solution to an advanced problem that I personally fight with from time to time. I hope it is valid and could save your day too!


The Problem.

My project’s repository became so big that I noticed some components could be turned into separate libraries and used elsewhere too. I would like to split this repo into several ones, apply a new folder layout inside the new repos and finally bind them all with submodules or subtrees. Most importantly, I must keep the full change history of the files and transfer it to all those new repos.


What does the Internet say about it?

Browse the stackoverflow.com site and you will find lots of suggestions for all versions of git. I recommend reading this answer, as it combines a really comprehensive guide and walkthrough. It only doesn’t explain how to move files around nicely.


My Solution.

Here is my proposal for solving the problem:

1) Split the repository

git subtree split -P <path_to_extract> -b <branch_name>



This will extract the specified folder across the whole project’s history and place it on the given branch. It’s good to have the whole component already available via that one folder and not spread across the repo. Additionally, on Windows, always use the forward slash (‘/’) to separate path segments.
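A throwaway demonstration of what the split produces (folder and branch names are made up; `git subtree` must be available, as it ships as a contrib command with stock git installations):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo
mkdir lib
echo 'component code' > lib/core.cs           # the folder to extract
echo 'app code' > main.cs
git add . && git commit -qm 'initial'
git subtree split -P lib -b lib-only > /dev/null
# On the new branch the folder's contents are promoted to the repository root:
git ls-tree --name-only lib-only              # prints: core.cs
```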


2) Create new repository

mkdir <component_name>
cd <component_name>
git init


3) Import files with history from old repository into new one

git pull <path_to_source_repository> <branch_name>


4) Patch new repository to move files to respective folders

It’s vital mostly because the source-code files extracted at step 1 will be placed directly at the root. I try to keep a ‘predefined’ repository structure with ‘art’, ‘bin’, ‘ext’, ‘src’ folders.


git filter-branch --tree-filter 'mkdir -p src/libX/core/; mv *.cs src/libX/core/;' HEAD

or move whole folders:

git filter-branch -f --tree-filter 'if [[ -e Model ]]; then mkdir -p src/libX/core/; mv Model src/libX/core/; fi' HEAD


Repeat the last command for all folders that need to be moved.
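The guarded move can be exercised in a scratch repository; a sketch under made-up names (`FILTER_BRANCH_SQUELCH_WARNING=1` silences the deprecation notice that newer git prints):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.com && git config user.name demo
echo 'class A {}' > A.cs && git add . && git commit -qm one
# Rewrite history, moving *.cs under src/libX/core/ in every commit
# (the '|| true' plays the same role as the 'if' guard in the post):
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --tree-filter \
  'mkdir -p src/libX/core/; mv *.cs src/libX/core/ 2>/dev/null || true' HEAD \
  > /dev/null 2>&1
git ls-tree -r --name-only HEAD               # prints: src/libX/core/A.cs
```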



The ‘-f’ parameter overwrites the index backup created during the first filter-branch call. This backup could be used if something went wrong with the history rewrite and a revert was needed. It’s stored inside the “.git/refs/original” folder and could be deleted manually, but why do it by hand? Without ‘-f’, you would see an error similar to:

Cannot create a new backup.
A previous backup already exists in refs/original/


Secondly, there is an ‘if’ statement. It’s required when moving files and folders that were not yet added in the first commit of this new repository. Without it, filter-branch tries to move folders/files that don’t exist in the initial commit, which leads to an error like:

Rewrite 85700a9a54c203d49de11d3fbb15a37f4f5637E9 (1/18)mv: cannot stat `Model': No such file or directory


5) Add a remote, where the new repo will be pushed

git remote add <component_name> <repo_url>
git push --set-upstream <component_name> master


6) Remove original source

git rm -rf <path_to_extract>
git commit -a -m "Removed component"


7) Add submodules or subtree to the old source repo

Here is the official guide on how to manage submodules.
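As an illustration, binding the extracted component back in as a submodule under `ext/` (URL and paths are made up; a purely local demo needs the file protocol allowed explicitly in newer git):

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/libX"                       # stand-in for the published component repo
(cd "$tmp/libX" && git config user.email demo@example.com \
  && git config user.name demo \
  && echo 'lib' > README && git add . && git commit -qm init)
git init -q "$tmp/main" && cd "$tmp/main"
git config user.email demo@example.com && git config user.name demo
echo 'app' > app.txt && git add . && git commit -qm init
git -c protocol.file.allow=always submodule add "$tmp/libX" ext/libX > /dev/null 2>&1
git commit -qm 'Add libX as submodule'
cat .gitmodules                               # records path = ext/libX and its url
```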



Final thoughts.

The source repository could then be optimized: the whole history of the extracted component could be removed (pruned). There is only one catch: it requires rewriting history on an already published repository. If you fully manage the environment where the repository is used, that should be OK. But if it’s a public project, I would strongly advise against it. The procedure requires deleting the repository and creating it again, and since the published hashes change, it will be a nightmare to keep only the new ones; if someone with the old clone pushes everything, there will be plenty of duplicates.