Using ACLs in HAProxy for Load Balancing Named Virtual Hosts

Until recently, I wasn’t aware of the ACL system in HAProxy, but once I found it I realized I had been missing a very important part of load balancing with HAProxy!

While the full set of configuration settings available for ACLs is listed in the configuration doc, the example below includes the basics you’ll need to build an HAProxy load balancer that supports multiple host headers.

Here is a quick example HAProxy configuration file that uses ACLs:

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend http-in
    bind *:80
    acl is_www_example_com hdr_end(host) -i example.com
    acl is_www_domain_com hdr_end(host) -i domain.com
    
    use_backend www_example_com if is_www_example_com
    use_backend www_domain_com if is_www_domain_com
    default_backend www_example_com

backend www_example_com
    balance roundrobin
    cookie SERVERID insert nocache indirect
    option httpchk HEAD /check.txt HTTP/1.0
    option httpclose
    option forwardfor
    server Server1 10.1.1.1:80 cookie Server1
    server Server2 10.1.1.2:80 cookie Server2

backend www_domain_com
    balance roundrobin
    cookie SERVERID insert nocache indirect
    option httpchk HEAD /check.txt HTTP/1.0
    option httpclose
    option forwardfor
    server Server1 192.168.5.1:80 cookie Server1
    server Server2 192.168.5.2:80 cookie Server2

In HAProxy 1.3, the ACL rules are placed in a “frontend” and (depending on the logic) the request is proxied through to any number of “backends”. You’ll notice in our frontend named “http-in” that I’m checking the host header using the hdr_end feature, which performs a simple check to see if the Host header ends with the provided argument.
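
One caveat worth knowing: hdr_end is a plain suffix match (case-insensitive here thanks to the -i flag), so it can match more hosts than you intend. A quick sketch with hypothetical hostnames:

    # hdr_end(host) checks the end of the Host header:
    #   www.example.com  -> matches "example.com"
    #   api.example.com  -> matches "example.com"
    #   notexample.com   -> ALSO matches, since it ends with "example.com"
    acl is_example hdr_end(host) -i example.com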

You can find the rest of the Layer 7 matching options by searching for “7.5.3. Matching at Layer 7” in the configuration doc I linked to above. A few options I didn’t use but you might find useful are path_beg, path_end, path_sub, path_reg, url_beg, url_end, url_sub, and url_reg. The *_reg variants let you match the url/path with regular expressions, but regex matching carries the usual performance cost (something to weigh carefully on a load balancer).
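
For example, path_beg makes it easy to send requests for static assets to a dedicated pool (the static_servers backend name here is hypothetical):

    # route any request whose path starts with /static or /images
    acl is_static path_beg /static /images
    use_backend static_servers if is_static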

The first “use_backend” rule that matches a request wins; if none match, HAProxy falls back to the “default_backend”. You can also combine multiple ACL rules in a single “use_backend” statement, requiring one or all of them to match. See the configuration doc for more helpful info.
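
As a sketch of how that combining works (the is_from_lan ACL below is hypothetical), conditions listed on a single line must all match, while “or” lets either one match:

    acl is_from_lan src 10.0.0.0/8
    # both conditions must match (implicit AND)
    use_backend www_example_com if is_www_example_com is_from_lan
    # either condition may match (explicit OR)
    use_backend www_domain_com if is_www_example_com or is_www_domain_com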

If you’re looking to use HAProxy with SSL, that requires a different approach, and I’ll blog about that soon.

Generate a PKCS #12 (PFX) Certificate from Win32 CryptoAPI PRIVATEKEYBLOB

We had an accounting system that used a Microsoft Win32 CryptoAPI blob to encrypt/decrypt credit card information for recurring customer billing. It was time for an upgrade to .NET land. Keith, the lead developer for this project, decided it would be beneficial to switch to x509 certificates for improved key management (and I wasn’t going to argue).

So what we physically used to encrypt/decrypt cards in the legacy system was a PRIVATEKEYBLOB, and our ultimate goal was to use a certificate in the PKCS #12 format. My system at the office is Windows XP, and I wanted to use OpenSSL to convert the private key blob to something more suitable for our new system, but I didn’t want to transmit any of our top secret keys across the VPN, or even across the network for that matter.

OpenSSL did not begin supporting PRIVATEKEYBLOB as an acceptable format until 1.0.0 Beta, but 0.9.8h was the only Windows binary readily available. So I grabbed the OpenSSL source (here) and compiled it using GCC within Cygwin. If you don’t have Cygwin (get it here), it’s very easy to get started, and you can select from a large variety of Linux packages during setup. So, during setup, look for GCC and make sure you enable it.

Here’s how to compile OpenSSL 1.0.0 Beta on your native Linux environment or with Cygwin:

[code lang="bash"]$> cd /usr/local/
$> wget http://www.openssl.org/source/openssl-1.0.0-beta3.tar.gz
$> tar -xzf openssl-1.0.0-beta3.tar.gz
$> cd openssl-1.0.0-beta3
$> ./config && make && make install && make clean[/code]

If something broke during install, check the online docs, or re-run Cygwin setup to make sure you selected the gcc toolset. I’ll assume from this point forward you are using OpenSSL 1.0 in either a native Linux or a Cygwin environment. If you aren’t sure, start OpenSSL and type “version” to check your ::drumroll please:: version number.
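
A quick way to check from your shell (this prints the version string of whichever OpenSSL binary is first in your PATH):

[code lang="bash"]$> openssl version[/code]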

Let’s get started.

The OpenSSL command below will take your PRIVATEKEYBLOB and output an RSA private key in PEM format. Please note the use of “MS\ PRIVATEKEYBLOB” instead of the alternative “PRIVATEKEYBLOB”. The backslash is required to escape the blank space after “MS” when the format name is passed as a command-line parameter in Linux. If all goes well, you should have a PEM file. If it doesn’t work, try specifying a different input form (e.g. DER or PRIVATEKEYBLOB instead of MS\ PRIVATEKEYBLOB).

[code lang="bash"]$> openssl rsa -inform MS\ PRIVATEKEYBLOB -outform PEM -in private.pvk -out private.pem[/code]
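
Before moving on, it doesn’t hurt to sanity-check the converted key; this validates the RSA key’s internal consistency and prints “RSA key ok” on success:

[code lang="bash"]$> openssl rsa -in private.pem -check -noout[/code]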

Now that we have a PEM file with an RSA private key, we can generate a new certificate based on that private key (command below). This creates an x509 certificate valid for 5 years (1825 days). Once you run it, you’ll be prompted for the usual country/state/city/company information, but what you specify there is up to you. I would recommend adding a passphrase if you are prompted for one at the end.

[code lang="bash"]$> openssl req -new -x509 -key private.pem -out newcert.crt -days 1825[/code]
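
If you’d like to double-check what you just created, this prints the certificate’s subject and validity window:

[code lang="bash"]$> openssl x509 -in newcert.crt -noout -subject -dates[/code]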

If all continues to go well, you should have a private key in PEM format and your brand new certificate. One last command is needed to generate the PKCS #12 (aka PFX) certificate bundle.

[code lang="bash"]$> openssl pkcs12 -export -in newcert.crt -inkey private.pem -name "My Certificate" -out myCert.p12[/code]

If you didn’t receive any errors, then congratulations! You can now import this PKCS #12 bundle into any Windows certificate store and no longer need to hard-code key blobs into your application.
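
If you want to inspect the bundle before importing it (you’ll be prompted for the export password you just set):

[code lang="bash"]$> openssl pkcs12 -in myCert.p12 -info -noout[/code]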

Hope this helps save someone a few hours’ time.

Cloud Hosting – Scaling Websites the Easy Way

One often has to make a choice when it comes to website hosting. You weigh the variables and decide on the best solution for your hosting needs. Cloud hosting makes this decision a WHOLE lot easier. Let’s break it down.

Price. You want to get the best deal possible. Shared hosting probably comes to mind first. In the classic sense, shared hosting means a company has a server, and they load as many websites onto that server as they can in order to make the most profit from it. Sometimes, this can mean hundreds of websites on one box. One box… susceptible to the same physical hardware limitations as any other server. Sure, they might even include RAID, redundant power supplies, and a lot of disk space.

However, what happens when your website actually starts getting traffic? I had an experience where my company put their trust in a shared hosting company (*cough* Dreamhost *cough*). When it came down to it, one of our websites had a lot of visitors one evening, and after battling to keep things running smoothly, the host ultimately disabled our website by renaming the index file to index.php_disabled_by_host. Seriously? So much for saving money and “unlimited” space and bandwidth… which brings me to my next point.

Scalability. If you have a website that has outgrown shared hosting, what is your next move? Many people consider purchasing dedicated equipment, and a dedicated server is usually the first step. Not enough? Scaling out from that point usually requires another dedicated server and a load balancer, and it only gets pricier from there with a dedicated database server, file servers, caching servers, and more to handle growing traffic and load. We’re talking significant expense just to gain the ability to scale.

Scale My Site is the answer. The concept of a cloud host is that it takes the best of the scalable, dedicated world and lets you pay only for what you use. You put your website in the cloud, and instantly your application is scaled across multiple web servers. Your files are stored on a redundant SAN mirrored across many physical drives. Database queries are performed on powerful, multi-node database clusters. You don’t have to think about “how am I going to handle all of that traffic?” because it just happens automatically. You no longer have to ask “do I need a Windows or Linux based account?” It doesn’t matter. You can run ASP.NET applications side by side with PHP websites. It’s the cloud that doesn’t mind – it’s cool with whatever you want to do. I highly recommend checking out Ninja Systems, the cloud hosting company, if you are serious about scaling your website and don’t want to waste time recreating another scalable infrastructure that you have to manage yourself.

How-to Backup Joomla! 1.5 to Amazon S3 with Jets3t

Introduction to backing up a Joomla website to Amazon S3 storage using Jets3t.

We all know backups are important. I’ve found what I consider a pretty good backup solution using Amazon S3. It’s super cheap, your backups are in a secure location, and you can get to them from anywhere. For my backup solution I’m using Debian Linux (Etch), but this setup isn’t dependent on any particular flavor of Linux because it uses Java.

  1. Sign up for Amazon S3: http://aws.amazon.com/s3/
  2. Install the latest Java Runtime Environment: http://java.sun.com/javase/downloads/index.jsp
  3. Download Jets3t: http://jets3t.s3.amazonaws.com/downloads.html
  4. Extract the Jets3t installation to a location on your server. Example: /usr/local/jets3t/
  5. Add your AWS account key and private key to the “synchronize” tool configuration file. Example: /usr/local/jets3t/configs/synchronize.properties
  6. Use an S3 browser tool like Firefox S3 Organizer to add two buckets: one for file backups and one for MySQL backups.
  7. Add a MySQL user whose primary function is dumping data. Let’s call it ‘dump’ with the password ‘dump’:
    [code lang="bash"]mysql> GRANT SELECT, LOCK TABLES ON exampleDB.* TO 'dump' IDENTIFIED BY 'dump';[/code]
  8. Build your backup script (replace paths with your own) called s3backup.sh:
    [code lang="bash"]#!/bin/bash
    # Paths below match the examples in this post; adjust to your own setup
    JAVA_HOME=/usr/local/j2re1.4.2_17
    export JAVA_HOME
    JETS3T_HOME=/usr/local/jets3t
    export JETS3T_HOME
    SYNC=/usr/local/jets3t/bin/synchronize.sh
    WWWROOT=/var/www/fakeuser/
    MYSQLBUCKET=example-bucket-mysql
    WWWBUCKET=example-bucket-www
    MYSQLDUMPDIR=/usr/local/mysql-dumps
    WWWDUMPDIR=/usr/local/www-dumps
    # Name dumps after the day of the week so each bucket keeps a
    # rolling seven-day window of backups
    dayOfWeek=`date +%a`
    dumpSQL="backup-www-example-com-${dayOfWeek}.sql.gz"
    dumpWWW="backup-www-example-com-${dayOfWeek}.tar.gz"
    # Dump and compress the database
    mysqldump -u dump -pdump exampleDB | gzip > "${MYSQLDUMPDIR}/${dumpSQL}"
    # Compress the website into an archive
    cd ${WWWROOT}
    tar -czf "${WWWDUMPDIR}/${dumpWWW}" .
    # Perform Jets3t synchronize with Amazon S3, then remove the local copies
    $SYNC --quiet --nodelete UP "${WWWBUCKET}" "${WWWDUMPDIR}/${dumpWWW}"
    rm -f "${WWWDUMPDIR}/${dumpWWW}"
    $SYNC --quiet --nodelete UP "${MYSQLBUCKET}" "${MYSQLDUMPDIR}/${dumpSQL}"
    rm -f "${MYSQLDUMPDIR}/${dumpSQL}"[/code]
  9. Make sure your script has execute permission (see the quick test run after this list)
  10. Add a cron job to perform daily backups:
    [code lang="bash"]$> crontab -e
    0 0 * * * /root/s3backup.sh[/code]
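
Before trusting cron with it, it’s worth one manual run to confirm everything works end to end (assuming the script lives at /root/s3backup.sh, as in the cron entry above):

[code lang="bash"]$> chmod +x /root/s3backup.sh
$> /root/s3backup.sh[/code]

Afterwards, check both buckets with your S3 browser to confirm the day’s archives arrived.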

That’s it. Good luck!

XenServer 3.2.0: Upgrade Debian Linux from Sarge to Etch

If you are running XenServer 3.2.0, then you have a built-in Debian Sarge image. If you want to upgrade an instance to Debian Etch (the latest stable release as of February 1, 2008), follow these steps. It isn’t a simple apt-get dist-upgrade command as other websites may have you believe. The following steps are a summary of the commands I performed while following the official upgrade guide.

If you aren’t running a XenServer instance, then I would suggest following the official guide yourself to prevent anything bad from happening (Debian Etch Upgrade Guide). The instance I used was a fresh install of the Sarge image, so I won’t be going into any special circumstances that may need to be addressed by those who have installed a whole load of extras.

Let’s go!

Choose a Mirror
Go to the Debian Mirror List and select a mirror. I happened to choose http://ftp.us.debian.org/debian as my mirror because I’m in the United States.

Update /etc/apt/sources.list with…

deb http://ftp.us.debian.org/debian etch main contrib

Perform the Upgrade

# remove the old APT preferences (pinning) file
rm /etc/apt/preferences
# remount the root filesystem read-write
mount -o remount,rw /
# refresh the package lists against the new Etch sources
aptitude update
# upgrade in two stages, as the official guide recommends
aptitude upgrade
# initrd-tools is required by the newer kernel packages
aptitude install initrd-tools
aptitude dist-upgrade
aptitude update
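
Once the dist-upgrade completes (and after a reboot), you can confirm which release you’re on; on Etch, /etc/debian_version should read 4.0:

cat /etc/debian_version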

See? Not so hard. That is all I needed to do to upgrade my XenServer 3.2.0 Debian Sarge instance to Debian Etch. I am not saying these steps will work for everyone, but for those few who have the same type of setup as we do, they should simplify the upgrade process. Please comment with questions and/or suggestions… and if all else fails, use the official guide!

– Matt