How to install Node.js on Debian stable (squeeze)

At the moment the nodejs package is available only in Debian unstable, so you'll have to build it yourself. There are several ways to compile Node.js; here I chose the nvm way. nvm is a simple bash script for building and managing Node.js installations.
  1. wget the nvm script itself from https://raw.github.com/creationix/nvm/master/nvm.sh:
    mkdir ~/nvm
    wget  https://raw.github.com/creationix/nvm/master/nvm.sh -O ~/nvm/nvm.sh
  2. You need to source it, and for future use I also add it to .bashrc:
    . ~/nvm/nvm.sh
    echo '. ~/nvm/nvm.sh' >> ~/.bashrc
  3. Now for some required libraries - build-essential libssl-dev pkg-config
    aptitude install build-essential libssl-dev pkg-config
  4. One more thing: in stable libc6-dev is 2.11, while building Node.js requires 2.13 from testing. This will upgrade some other packages too, but that can't be avoided.
    echo 'deb http://ftp.us.debian.org/debian/ testing main' >> /etc/apt/sources.list
    aptitude install  libc6-dev
  5. The rest is simply using nvm according to its README.
    nvm install v0.6.14
    nvm alias default 0.6
    nvm help
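Once that's done, a quick sanity check in a new shell (assuming the default alias was picked up from .bashrc):
node -v
npm -v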


make[2]: *** No rule to make target `/usr/include/bits/predefs.h', needed by `dvdnav.o'. Stop.

make[2]: *** No rule to make target `/usr/include/bits/predefs.h', needed by `dvdnav.o'. Stop.
I got this error while recompiling XBMC on Debian. The problem is that the whole /usr/include/bits/ directory was missing, because at some point Debian moved it to /usr/include/x86_64-linux-gnu/bits/ or /usr/include/i386-linux-gnu/bits/, depending on your arch. Just creating a symlink to the new folder does the trick for now:
ln -s /usr/include/x86_64-linux-gnu/bits/ /usr/include/bits
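If you're not sure which multiarch directory your system actually uses, you can locate the header first (the x86_64 path above is just the 64-bit case):
find /usr/include -name predefs.h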


Setting up a Debian development server on a VirtualBox virtual machine

I hate running daemons on Windows, and sometimes the server environment is critical for the application being developed. That's why I often use a virtual machine running Linux, just as I'd use a remote development server.

Get VirtualBox.

Download your ISO (I prefer Debian) and install it on a virtual machine (VM).

In VirtualBox->Preferences->Network there should be one "VirtualBox Host-Only Ethernet Adapter". Give it an address and netmask, and enable its DHCP server with its own address and netmask, handing out addresses in a suitable range.
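The same can be done from the command line with VBoxManage; a rough sketch using VirtualBox's usual 192.168.56.0/24 defaults (the adapter name and addresses are just examples, adjust them to your setup):
VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.1 --netmask 255.255.255.0
VBoxManage dhcpserver modify --ifname "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.56.2 --netmask 255.255.255.0 --lowerip 192.168.56.100 --upperip 192.168.56.200 --enable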

In your VM's Settings->Network you need two adapters. The first should be attached to "Host-only Adapter" (Adapter 1, eth0), providing connectivity between the host OS and the VM. The second should be attached to "NAT" (Adapter 2, eth1) to provide internet access.
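If you prefer to script this, the same two adapters can be attached with VBoxManage, roughly like this ("devbox" is a placeholder VM name):
VBoxManage modifyvm "devbox" --nic1 hostonly --hostonlyadapter1 "VirtualBox Host-Only Ethernet Adapter"
VBoxManage modifyvm "devbox" --nic2 nat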

When you start your VM make sure that both Ethernet adapters are present (ifconfig -a). Then set them to use DHCP with the following lines in /etc/network/interfaces:
allow-hotplug eth0 eth1
iface eth0 inet dhcp
iface eth1 inet dhcp
Bring up the interfaces with ifup -a.

Now you should be able to reach the VM from the host OS, e.g. by pinging its host-only (eth0) address:
Pinging <VM address> with 32 bytes of data:
Reply from <VM address>: bytes=32 time<1ms TTL=64
In the VM's Settings->Shared Folders you can add shares and then mount them in the VM like this:
mount -t vboxsf <share-name> /mnt
That way I can edit my local files with Zend Studio and have them used by the VM's web server with no hassle.
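If you want a share mounted automatically at boot (this assumes the Guest Additions are installed in the VM), an /etc/fstab line along these lines should do; the share name and mount point here are just examples:
www   /mnt/www   vboxsf   defaults   0   0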

VBoxHeadlessTray is a great app that lets you run the VM from the tray bar.


Creating a self-signed SSL certificate for use with your web server

This is for Debian, using the openssl package:
apt-get install openssl

Create a private key. You will be asked for a pass phrase to encrypt the private key with (or omit -des3 and skip the next step).
openssl genrsa -des3 -out server.key 2048
Remove the aforementioned pass phrase encryption, because you probably don't want it (otherwise the web server will ask for the pass phrase every time it starts).
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
Create a certificate signing request. Here you provide some information, the most important of which is "Common Name". That is your domain name (www.example.com), and you can use a wildcard to cover all subdomains (*.example.com).
openssl req -new -key server.key -out server.csr
Sign the request with the private key
openssl x509 -req -days 1825 -in server.csr -signkey server.key -out server.crt

That's it. The files you need are server.crt (the certificate) and server.key (the private key). Refer to your web server's documentation.
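For Apache, for example, the relevant part of the SSL virtual host would look roughly like this (the paths are just examples):
<VirtualHost *:443>
        ServerName www.example.com
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/server.crt
        SSLCertificateKeyFile /etc/ssl/private/server.key
</VirtualHost>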


ifup (if-up.d) bash script for running dhcpd only when needed

Sometimes I use my Debian box as a wifi hotspot (wlan0) and other times as a NAT router for my TV (eth1). On both occasions I rely on dhcpd to configure my network, but I don't need it running all the time. I also hate having to manually restart it when bringing up an interface.

The following bash script must be in both /etc/network/if-up.d and /etc/network/if-post-down.d (use symlinks, as shown after the script). As you bring interfaces up or down it will start/restart or stop dhcpd.

The DHCPD_INTERFACES variable contains the interfaces dhcpd should be running on.


#!/bin/bash

#interfaces of interest
DHCPD_INTERFACES="wlan0 eth1"

#are we changing any interface of interest
RUN=false
for x in $DHCPD_INTERFACES; do
        if [ "$IFACE" = "$x" ]; then
                RUN=true
        fi
done

#if changing an interface of interest, then run
if [ $RUN = true ]; then

        #interfaces that are currently up
        INTERFACES_UP=`ifconfig -a | awk '/^[a-zA-Z][a-zA-Z0-9_,:.-]/{n=$1}($1=="UP"){u[n]=n}END{for(n in u){print u[n]}}'`

        #do we need dhcpd running
        NEED=false

        #is any of the remaining up interfaces a dhcpd interface
        for x in $INTERFACES_UP; do
                for y in $DHCPD_INTERFACES; do
                        if [ "$x" = "$y" ]; then
                                NEED=true
                        fi
                done
        done

        #do what is needed
        if [ $NEED = true ]; then
                #reconfigure dhcpd
                /etc/init.d/dhcp3-server restart > /dev/null
        else
                #stop dhcpd
                /etc/init.d/dhcp3-server stop > /dev/null
        fi
fi
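Assuming the script itself is saved as /usr/local/sbin/dhcpd-if (the name and location are arbitrary), the symlinks would be created like this:
chmod +x /usr/local/sbin/dhcpd-if
ln -s /usr/local/sbin/dhcpd-if /etc/network/if-up.d/dhcpd-if
ln -s /usr/local/sbin/dhcpd-if /etc/network/if-post-down.d/dhcpd-if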



Problem with Cyrillic parameters in AJAX request

Some browsers (IE) don't properly pass Cyrillic parameters when making an AJAX call. The following JavaScript code escapes the characters properly:
encodeURIComponent("text with абвгд")
  .replace(/!/g, '%21')
  .replace(/'/g, '%27')
  .replace(/\(/g, '%28')
  .replace(/\)/g, '%29')
  .replace(/\*/g, '%2A')
  .replace(/%20/g, '+')
Taken from http://phpjs.org/functions/urlencode:573

NB: You don't have to worry if you use jQuery and pass the parameters with an object instead of concatenating a string:
$.post('/some/url', {
   'anotherparam': 'абвгд'
 }, function (data) { });
Here anotherparam will be passed just fine.


Using HAProxy as a reverse proxy

I had a web app that used some Comet functionality. Since Apache is no good for that, I additionally had to run cometd on jetty. Naturally I wanted both web servers to be accessible on port 80, for which a reverse proxy was needed.
The reverse proxy would listen on the external interface's port 80 and according to some criteria forward the requests to the appropriate server. I set Apache on localhost:10080 and jetty on localhost:10088.

Install HAProxy
apt-get install haproxy
Its config file is /etc/haproxy/haproxy.cfg. There you define two backends for the two web servers:
backend apaches
        mode http
        timeout connect 10s
        timeout server 30s
        balance roundrobin
        server apache1 127.0.0.1:10080 weight 1 maxconn 512
backend cometd
        mode http
        timeout connect 5s
        timeout server 5m
        balance roundrobin
        server cometd1 127.0.0.1:10088 weight 1 maxconn 10000
Note that HAProxy is also a load balancer, but that is not relevant for the task.
Then you define the frontend - where and how HAProxy should work:
frontend http_proxy #arbitrary name for the frontend
        bind *:80 #all interfaces at port 80
        mode http
        option forwardfor
        option http-server-close
        option http-pretend-keepalive
        default_backend apaches #by default forward the requests to apache
        acl req_cometd_path path_beg /cometd/
        use_backend cometd if req_cometd_path
By default requests are forwarded to the apaches backend. Then there is a condition - if the requested path begins with /cometd/ then forward the request to the cometd backend.
With this setup all requests to Apache will appear to be coming from 127.0.0.1 (the proxy). That's why with option forwardfor we make HAProxy add the X-Forwarded-For HTTP header, which contains the request's real originating IP. To see that value in the Apache logs, replace %h with %{X-Forwarded-For}i in your LogFormat definitions:
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
#LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" common

option http-server-close and option http-pretend-keepalive solve issues with Keep-Alive, which I don't understand well enough. Basically, when the first connection is forwarded to some backend and kept alive, subsequent requests on that connection are not routed at all but go to the same backend.
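One last Debian-specific note: on older haproxy packages the daemon is disabled by default, so you may need to enable it in /etc/default/haproxy and then restart it to pick up the new config:
sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/haproxy
/etc/init.d/haproxy restart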