Tech Blog

So what keeps you up at night?  If you are in IT like me, it might be the simple fear of that 2:00 a.m. phone call announcing that a server or service is down, with demands that you fix it ASAP, without so much as a cup of coffee first!  Most seasoned IT veterans will tell you that you can never have too many backups, but sometimes, even if you have them, restoring service can take time.  And yes, even though you have dual power supplies, hot-swappable memory, RAID, redundant NICs, redundant switches, redundant power, a SAN, etc., things CAN STILL FAIL and probably will when you have the least time to deal with it!

What follows is a description of the clustered systems I put together over a year ago for a local .com, with the goal of providing as much redundancy, and therefore uptime, as possible WITHOUT breaking the bank.  There have been a few hiccups along the way, which I will touch on below, but as I type this there are now 4 such systems in place, happily serving up web pages for hundreds of sites.

Here is a very basic diagram of how the systems are set up:

[Apache/MySQL Cluster Diagram]

Software

  • pfSense – For the load balancers
  • CentOS – For the server OS
  • Apache – For serving the web pages, most of which are PHP
  • Percona XtraDB Cluster – For the clustered database on all nodes
  • GlusterFS – For shared data storage across all nodes
  • HAProxy (if needed) – For balancing MySQL traffic

Hardware

There is nothing special about the hardware in use on these servers: quad-core machines with varying amounts of memory.  All run hardware RAID 10 on SATA drives and have bonded gigabit NICs connected to redundant switches.  In fact, one of the major benefits of this setup is that you don't need new, fast hardware, because a failure will not cause a service interruption.
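If you have never set up NIC bonding on CentOS, it is just a couple of small config files.  Here is a rough sketch of an active-backup bond (I'm assuming eth0/eth1 and making up the address; active-backup is the safe mode when the two NICs go to separate switches):

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical address)
    DEVICE=bond0
    IPADDR=10.0.0.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=active-backup miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

    # Older releases also want: echo "alias bond0 bonding" > /etc/modprobe.d/bonding.conf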

Putting It All Together

I’ll start with the pfSense load balancers.  They are set up in active/passive mode.  When a configuration change is made on the active node, it is automatically copied to the passive node.  They share both an internal private IP address and an external public address via the Common Address Redundancy Protocol (CARP).  Some of you may refer to these as VIPs, or virtual IP addresses.  If the active node fails, the virtual IP addresses automatically fail over to the passive node, which already has the same configuration as the old active node.  When the fail-over happens, clients notice only a slight delay in response while the changeover takes place, and then it is back to business as usual on the passive node.  The pfSense load balancers are also used to route web traffic (both HTTP and HTTPS) in round-robin fashion to each of the cluster nodes.  The pfSense package comes with all of the previously mentioned functionality built in, along with much more, including layer 2 firewalling.  It is based on FreeBSD and has a very nice web GUI to finish out the package.
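pfSense sets all of this up through its web GUI, but if you are curious what CARP looks like underneath, here is a rough FreeBSD-level sketch (the interface name, password, and address are made up, and the exact syntax varies by FreeBSD/pfSense version):

    # Same vhid and password on both boxes; the lower advskew becomes master
    ifconfig em0 vhid 1 advskew 0 pass s3cret alias 203.0.113.10/32      # active node
    ifconfig em0 vhid 1 advskew 100 pass s3cret alias 203.0.113.10/32    # passive node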

The database piece is handled by the Percona XtraDB Cluster package.  All 3 nodes are completely in sync and are used for simultaneous reads AND writes.  I have done the same thing with MySQL Cluster with equal success; however, I prefer Percona XtraDB Cluster for reasons beyond the scope of this article.
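To give you an idea of what is involved, here is a stripped-down sketch of the wsrep settings from my.cnf on one node (the IPs, names, and credentials are made up, and the provider path and SST method vary by version, so check the Percona docs):

    [mysqld]
    # Galera replication library and the list of cluster members (hypothetical IPs)
    wsrep_provider=/usr/lib64/libgalera_smm.so
    wsrep_cluster_address=gcomm://10.0.0.11,10.0.0.12,10.0.0.13
    wsrep_cluster_name=web_cluster
    wsrep_node_address=10.0.0.11
    wsrep_sst_method=xtrabackup-v2
    wsrep_sst_auth=sstuser:s3cret
    # Galera requires row-based replication and the InnoDB engine
    binlog_format=ROW
    default_storage_engine=InnoDB
    innodb_autoinc_lock_mode=2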

The Apache configuration files and the files that make up each site (HTML, PHP, etc.) are stored on a 3-node replicated gluster volume.  When data is written on one node, it is almost immediately available on the other nodes.
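Building and mounting a replicated volume like that only takes a few commands.  A rough sketch, with hypothetical hostnames and paths:

    # From any one node, after the gluster daemons are running
    gluster peer probe node2
    gluster peer probe node3
    gluster volume create webdata replica 3 node1:/bricks/webdata node2:/bricks/webdata node3:/bricks/webdata
    gluster volume start webdata

    # On each node, mount the volume where Apache expects to find the sites
    mount -t glusterfs localhost:/webdata /var/www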

The Results

  • The biggest plus to this setup is that if one node in the cluster fails or begins to have some type of hardware or software issue, the node can simply be removed from rotation in the pfSense load balancer while the remaining nodes carry the traffic.
  • Because nodes can be added/removed from the active cluster in pfSense, each node can be removed, updated, rebooted and then put back in the cluster with NO DOWNTIME!
  • Since the traffic is spread evenly across all 3 nodes in the cluster, each individual node doesn’t have to be all that powerful.  They also don’t have to be new, since a node outage won’t bring any downtime with it.  Obviously this saves on hardware costs, since older hardware can be used (can you say eBay, anyone?).

Each node in the cluster is sufficiently powerful to handle the entire load should 2 of the nodes fail; however, as mentioned above, that does not have to be the case.  I have tested this with 2 nodes down while running on the single remaining node, and it keeps on working just as a stand-alone server would.  When the other 2 nodes are powered back up, they immediately sync back up.

The Drawbacks

Like most things highly technical, there has been a hiccup or two along the way.

  • One of these clusters contains an application that is very database-write intensive.  Every now and then, the clustered database would lock up.  I finally discovered that, due to the large number of writes hitting the same database and table, the table would sometimes be locked by one node while another tried to lock and write to it at the same time.  I put HAProxy on each node in that cluster and set it up to point all MySQL traffic at a single node at a time, with the other 2 nodes as backups (a sketch of that configuration follows this list).  That fixed the problem.  The other clusters don’t have this issue, as they are more standard MySQL-driven web sites with far fewer database writes, so HAProxy hasn’t been necessary.
  • I experienced a few problems with the gluster file system.  Most were due to bugs that have since been fixed in recent releases.
  • The 2 drawbacks mentioned above highlight issues that should be noted.  If the clustered database crashes, it could crash on all nodes.  If the clustered file system crashes, it could crash on all nodes.  On the other hand, if stand-alone MySQL crashes on a single server, or something else fails on that server, doesn’t an outage occur just the same?
  • Another drawback comes into play if you host sites such as WordPress or Joomla (my personal favorite).  Percona XtraDB Cluster replicates only tables using the InnoDB database engine (note that MySQL Cluster instead uses an engine of type ndbcluster).  These packages allow the user to install plugins, which often create tables that are not set to use the correct database engine and thus will not replicate properly (a quick way to check for these also follows this list).  For that reason, these types of sites are not well suited to this setup.  I do hope to be able to script around this issue in the future, but more on that if I can make it happen.
  • One final drawback is the learning curve it takes to administer these clusters.  While none of this is rocket science, it is more complicated than a standard LAMP (Linux, Apache, MySQL and PHP/Perl) web server.  Once you learn its nuances, just as you have to with any other technology, it becomes very manageable.
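Here is the HAProxy sketch promised above.  Each node in that cluster runs a copy of this, so local MySQL traffic lands on one designated node with the other two standing by (the addresses are made up; mysqld on each node listens on its LAN address so the loopback port is free for HAProxy):

    # /etc/haproxy/haproxy.cfg (excerpt) -- hypothetical addresses
    listen mysql
        bind 127.0.0.1:3306
        mode tcp
        option tcpka
        # All writes go to node1; node2/node3 take over only if it fails
        server node1 10.0.0.11:3306 check
        server node2 10.0.0.12:3306 check backup
        server node3 10.0.0.13:3306 check backup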
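And here is the storage-engine check mentioned above: a quick way to spot tables a plugin created with the wrong engine (the table name in the ALTER is made up; verify the plugin works with InnoDB before converting anything):

    # List any tables that are not InnoDB and therefore will not replicate
    mysql -e "SELECT table_schema, table_name, engine
              FROM information_schema.tables
              WHERE engine <> 'InnoDB'
                AND table_schema NOT IN ('mysql','information_schema','performance_schema');"

    # Convert an offender (hypothetical name)
    mysql -e "ALTER TABLE mysite.wp_badplugin_table ENGINE=InnoDB;"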

Conclusion

How do I know this all works?  When I first put this together, I went to the data center and actually pulled the power plugs, one at a time, while browsing the sites to see how they responded.  During the fail-over there was a brief delay in site response, and then things continued on as if there were no problem at all.  We probably don't need to share that little tidbit with the owners of the company.  The only other notable issue was the guy at the data center telling me to stop yelling “It Works . . . It Works” at the top of my lungs, but I digress.

So, roughly a year and a half later, with several hundred web sites running on 4 such clusters, would I do it again?  Absolutely I would!  Oh, and those 2:00 a.m. phone calls?  I just tell them to shut down or remove the problem node from the load balancer so I can deal with it the next morning, after I have had my first cup of coffee.

Check back soon for my next blog entry where I will discuss how I back these servers up.  Happy clustering!

- Kyle H.