Some time ago I wrote a post about how to set up Windows 10 so it can be woken up remotely over the network using a magic packet. The question that remained unanswered is: how to actually send this packet to the PC.

In the first example I will use a Raspberry Pi with Raspbian Stretch Lite installed:

1) First make sure the etherwake tool is available. If it is not, install it with the following command; otherwise skip this step:

sudo apt-get install etherwake

2) Then simply invoke it with proper arguments:

sudo etherwake -i <ethernet-interface-name> <PC MAC-address>

Some explanations:

Yes, it will ask you for a password each time you call it. Also, the MAC address should be specified in the format “XX:XX:XX:XX:XX:XX”. That was the easier part.
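For the curious, the magic packet itself is trivial: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, 102 bytes in total, usually broadcast over UDP. etherwake builds and sends it for you; the sketch below (with a placeholder MAC address) just shows the construction:

```shell
#!/bin/bash
# Sketch: what a Wake-on-LAN magic packet contains. It is 6 bytes of 0xFF
# followed by the target MAC repeated 16 times (102 bytes total).
# The MAC address below is a placeholder; replace it with your PC's.
MAC="AA:BB:CC:DD:EE:FF"

# Turn "AA:BB:CC:DD:EE:FF" into printf byte escapes "\xAA\xBB\xCC...".
bytes=$(printf '%s' "$MAC" | tr -d ':' | sed 's/../\\x&/g')

build_packet() {
    printf '\xff\xff\xff\xff\xff\xff'   # synchronization stream
    for _ in $(seq 1 16); do            # target MAC, 16 times
        printf '%b' "$bytes"
    done
}

# A real sender would broadcast these bytes on UDP port 9;
# here we only verify the size.
build_packet | wc -c
```

Tools like etherwake and wakelan do exactly this, plus the UDP broadcast, which is why they are the comfortable option.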

The harder part is that etherwake by default uses eth0 as the network interface on which to send the magic packet. Raspbian has a feature called “predictable network interface names” enabled by default (since August 2017, I think; it was present in the Jessie edition too, but disabled by default). Most likely the name of the interface will be something other than eth0 (something like enx?? or wlx?? depending on whether you use LAN or WiFi, where ?? should be replaced with the Raspberry’s own MAC address).

To list existing network interfaces type this command:

ip link show

Then grab the proper name, i.e. the one that is not a loopback. Alternatively, run “sudo raspi-config”, navigate to “2: Network Options” and disable predictable network interface names, so the system goes back to using the eth0 and wlan0 names.
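If you would rather not hard-code the name at all, a small sketch like this picks the first non-loopback interface automatically (the MAC address is a placeholder; the script only prints the command so you can inspect it before running it for real):

```shell
#!/bin/bash
# Sketch: detect the first non-loopback interface, so the wake-up call
# keeps working whether predictable interface names are enabled or not.
# The MAC address is a placeholder; replace it with your PC's address.
MAC="AA:BB:CC:DD:EE:FF"

# Every interface has an entry under /sys/class/net; skip the loopback.
IFACE=$(ls /sys/class/net | grep -v '^lo$' | head -n1)

CMD="sudo etherwake -i $IFACE $MAC"
echo "$CMD"    # print for inspection; run it directly once it looks right
```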

In the second example I want to show how to achieve the same effect on a Synology NAS.

If your DSM is really old, you can use the optional package manager named ipkg. Through this manager, make sure the wakelan utility is installed. Then waking up a PC is just a matter of issuing the following commands in the console:

export PATH=$PATH:/volume1/@optware/bin/

wakelan -m <PC MAC-address>

This time the MAC is given in the format “XXYYZZXXYYZZ”, i.e. without separators.
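Converting from the usual colon-separated notation is a one-liner; a small sketch with a placeholder MAC:

```shell
#!/bin/bash
# Sketch: derive the separator-less form wakelan expects from the usual
# colon-separated notation. The MAC address is a placeholder.
MAC="AA:BB:CC:DD:EE:FF"
BARE=$(printf '%s' "$MAC" | tr -d ':')
echo "$BARE"   # AABBCCDDEEFF
```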

On the other hand, if you don’t want to play with custom package managers and have a decent DSM 6.x on board, you can use the built-in synonet utility (as also described here). Then it’s just a matter of calling it this way:

sudo synonet --wake XX:XX:XX:XX:XX:XX eth0

Executable is located at: /usr/syno/sbin/synonet

And that’s all. Have fun!


Setting up GitLab was pretty easy on a Raspberry Pi 3. The installation process is straightforward; it only took a very long time to unpack (prepare for several hours!). And once running, it’s a brilliant alternative to all those noisy servers (aka my old PCs) I would otherwise have kept running. Best of all, the Pi uses an SD card, giving immediate access at any time of day, and doesn’t need to wake up and spin up its disks.


Moving on to the HTTPS configuration.
I installed letsencrypt-auto successfully, created a config file with the webroot authenticator and listed my domain inside. The real problem appeared when I failed to pass the authentication challenge. Since I use a Synology NAS on the same domain, which occupies port 80, the required web folder ‘/.well-known’ was unavailable. I can’t just throw this unit away; I want both devices to run smoothly together. Luckily, Synology DSM 6.0 uses Let’s Encrypt too, so its nginx server is already preconfigured. What I did was tweak the config a bit.


On the NAS side:

  1. Create a shared Samba folder /volume1/acme (it may be hidden, and only one user needs write access there)
  2. Make sure the path exists: /volume1/acme/letsencrypt/
  3. Edit /etc/nginx/nginx.conf and for location “/.well-known/acme-challenge” redefine the root from “/var/lib/letsencrypt” to “/volume1/acme/letsencrypt”
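For illustration, the edited fragment of the NAS nginx.conf might end up looking roughly like this (a sketch only; the surrounding server block and exact directives vary between DSM versions, and only the root path changes):

```nginx
# Sketch of the relevant /etc/nginx/nginx.conf fragment on the NAS;
# everything except the root path stays as DSM generated it.
location ^~ /.well-known/acme-challenge {
    root /volume1/acme/letsencrypt;   # was: /var/lib/letsencrypt
}
```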


Now on the Raspberry Pi:

  1. Mount the folder
    sudo mount -t cifs //<nas_ip>/acme /var/www/acme -o username=<user_name>,password=<password>
  2. Retry the certificate generation; it should pass this time
    ./letsencrypt-auto renew
  3. Update the GitLab config (“sudo vi /etc/gitlab/gitlab.rb”), adding the following lines:

    nginx['ssl_client_certificate'] = "/etc/gitlab/ssl/ca.crt" # Most root CA's are included by default
    nginx['ssl_certificate'] = "/etc/letsencrypt/live/<domain>/fullchain.pem"
    nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/<domain>/privkey.pem"

  4. Finally run “gitlab-ctl reconfigure” to refresh the running instance (or only “sudo gitlab-ctl restart nginx” to restart nginx, if you are just renewing the certificate 3 months later…)
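The steps above can be tied together into a small renewal helper. This is only a sketch: the NAS address, share name, credentials, mount point and paths are placeholders matching the steps above, and the function is meant to be called manually or from cron:

```shell
#!/bin/bash
# Sketch of a renewal helper combining the steps above. All names below
# (NAS address, credentials, paths) are placeholders; adjust to your setup.
renew_gitlab_cert() {
    local mountpoint=/var/www/acme

    # Mount the shared challenge folder only if it is not mounted yet.
    if ! mountpoint -q "$mountpoint"; then
        sudo mount -t cifs //192.168.1.10/acme "$mountpoint" \
            -o username=acme,password=secret
    fi

    ./letsencrypt-auto renew          # re-run the webroot challenge
    sudo gitlab-ctl restart nginx     # pick up the renewed certificate
}

# Call manually, or from a cron job shortly before the 3-month expiry:
# renew_gitlab_cert
```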



One thing to remember: try against the acme-staging servers first, before switching to the production one to generate the real certificate.


I use gitolite to remotely manage my repositories in my own cloud on a Synology DS411 DiskStation. The hardware is maybe a bit old, but it still gets new software updates. And of course, from time to time those updates break my configuration, mostly because my symlinks are removed and $PATH gets reset to the predefined folders.


The simplest fix to restore gitolite is to symlink mktemp into a known location. Log in as administrator and type:

ln -s /opt/bin/mktemp /sbin/mktemp

New repository creation should work fine now.
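Since an update wipes the link every time, it can help to keep a small idempotent helper around and re-run it after each DSM upgrade. A sketch, using the same paths as above:

```shell
#!/bin/bash
# Sketch: recreate a symlink only when the destination is missing and the
# source binary exists, so it is safe to re-run after every DSM update.
restore_link() {
    local src=$1 dst=$2
    if [ ! -e "$dst" ] && [ -x "$src" ]; then
        ln -s "$src" "$dst"
    fi
}

# On the DiskStation, as administrator:
# restore_link /opt/bin/mktemp /sbin/mktemp
```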