Time to use my public forum to muddle through some design issues I’m struggling to straighten out.
A data center is made up of several kinds of objects: servers (computers, usually mounted horizontally), racks (which hold the servers), switches (network devices that connect two or more computers together), power sources, and cables (both electric and network cables, including fiber optic for storage devices). A server in the data center can serve one or more roles: storage host, computation host, administration, or user interface. If an application is hosted in a data center, it is usually important enough that it requires some guarantee of availability.

This application will reside on a computation host and require access to other types of hosts. An email server stores received messages on a storage host, probably connected to the computation host via fiber optics. It receives and sends messages via a network connection that goes to the outside world. It may also talk to a user-interface machine that runs a web server and an application that allows web access to email. If the computation host loses connectivity with either the public network or the storage host, it cannot process mail. If the web server loses connectivity to the mail server, certain users cannot access their mail.
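To keep the moving parts straight in my own head, here’s a rough sketch of that email setup as a dependency map. The host names are invented, and this is just an illustration of the idea, not any real monitoring schema.

```python
# A toy model of the email example: which other pieces does each role need
# before we can call its service "up"? All names here are hypothetical.

DEPENDENCIES = {
    "mail-compute-01": ["storage-host-01", "public-network", "web-ui-01"],
    "web-ui-01": ["mail-compute-01", "public-network"],
}

def service_impact(failed_piece: str) -> list:
    """Return the hosts whose service is affected when a piece fails."""
    return [host for host, needs in DEPENDENCIES.items() if failed_piece in needs]

if __name__ == "__main__":
    # Lose the storage connection: the compute host can't process mail.
    print(service_impact("storage-host-01"))   # ['mail-compute-01']
    # Lose the compute host itself: web users can't reach their mail.
    print(service_impact("mail-compute-01"))   # ['web-ui-01']
```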
There are many reasons connectivity can fail. The major links in the chain are: OS failure, network interface card (NIC) failure, bad cable, disconnected cable, bad switch, unplugged switch, switch turned off. Once you pass the switch, the same set of potential problems exists on the way to the other host. To increase reliability, a network will often have two switches, and each server will have two NICs, one plugged into each switch. The same setup goes for storage, although different technologies are used. As a general rule, you don’t want things running in hot-standby mode: it wastes resources, and it doesn’t get tested until an emergency hits. Thus, the doubled network connectivity usually gets set up as a way to double bandwidth as well. Now if one of the cables breaks, that server merely operates in a degraded mode; the second cable has been passing network traffic already, and now it just gets all of it.
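In code terms, the active-active idea looks something like this. It’s a sketch only, nothing like real NIC bonding; the interface names and speeds are made up.

```python
# Sketch of active-active redundancy: both NICs carry traffic normally, so a
# single cable or switch failure just halves capacity instead of cutting the
# server off. Interface names and speeds are invented for illustration.

LINKS = {"eth0": {"up": True, "gbps": 10}, "eth1": {"up": True, "gbps": 10}}

def connectivity_state(links: dict) -> str:
    live = [l for l in links.values() if l["up"]]
    if not live:
        return "isolated"
    remaining = sum(l["gbps"] for l in live)
    if len(live) < len(links):
        return "degraded ({} Gb/s remaining)".format(remaining)
    return "healthy ({} Gb/s aggregate)".format(remaining)

if __name__ == "__main__":
    print(connectivity_state(LINKS))    # healthy (20 Gb/s aggregate)
    LINKS["eth1"]["up"] = False         # someone yanks a cable
    print(connectivity_state(LINKS))    # degraded (10 Gb/s remaining)
```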
A typical data center has many machines, and typical server loads are very low, sometimes in the 1-3% range of overall capacity. Thus, if a machine fails, a data center often has plenty of servers that could absorb the load from the failed one. Figuring out how to cross-load services in a data center has been a major topic in the IT world over the past decade. This goes by many names, one of which is grid computing; I’ll use that term here. There are several problems any grid system has to solve, but most can be clumped under the term application provisioning. This means getting together all of the resources a given application requires so that they are available on the computation host. These resources include the network and storage connections described above, as well as the executables, data files, licenses, and security permissions required to run the application.
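One way to picture provisioning is as a checklist you run against a candidate host. Here’s a rough sketch, assuming invented field names rather than any particular grid product’s schema.

```python
# A sketch of "application provisioning" as a checklist: before an app can
# land on a compute host, everything it depends on has to be present there.

from dataclasses import dataclass, field

@dataclass
class AppRequirements:
    executables: set = field(default_factory=set)
    storage_paths: set = field(default_factory=set)
    networks: set = field(default_factory=set)
    licenses: set = field(default_factory=set)

@dataclass
class Host:
    name: str
    executables: set
    storage_paths: set
    networks: set
    licenses: set

def missing_resources(app: AppRequirements, host: Host) -> list:
    """Return the resources the host lacks (empty list means it qualifies)."""
    missing = []
    missing += ["exe:" + e for e in app.executables - host.executables]
    missing += ["storage:" + s for s in app.storage_paths - host.storage_paths]
    missing += ["net:" + n for n in app.networks - host.networks]
    missing += ["license:" + l for l in app.licenses - host.licenses]
    return missing

if __name__ == "__main__":
    mail = AppRequirements({"smtpd"}, {"/mail/spool"}, {"public", "san"}, {"av-scanner"})
    spare = Host("spare-07", {"smtpd"}, {"/mail/spool"}, {"public"}, set())
    print(missing_resources(mail, spare))   # ['net:san', 'license:av-scanner']
```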
When a host fails, some remote monitoring system needs to act. First, it needs to know that the host has failed. This is typically detected through a heartbeat: a simple network message sent by the computation host saying “I’m still alive.” Cue Mike McCready. When a heartbeat fails, the monitor needs to make sure the application is back online somewhere as soon as possible. But the heartbeat might have failed because of a problem on the heartbeat network, while the application is actually up and running just fine. A more advanced solution is to test the system through some alternative method. In the case of the email server, that may mean connecting to the email port and sending a sample message. This delays the restart of the application but may minimize downtime.
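The monitor’s decision might look roughly like this: a missed heartbeat alone isn’t proof of death, so probe the actual service first. The host name, port, and timeouts here are placeholders I made up.

```python
# Sketch of the monitor's logic: a missed heartbeat could mean a dead host or
# just a hiccup on the heartbeat network. Probe the real service (SMTP, in the
# email example) before declaring failure and triggering a restart.

import smtplib
import time

HEARTBEAT_TIMEOUT = 15      # seconds without "I'm still alive" (a guess)
MAIL_HOST, MAIL_PORT = "mail-compute-01.example.com", 25

def service_alive() -> bool:
    """Secondary check: can we still talk SMTP to the mail server?"""
    try:
        with smtplib.SMTP(MAIL_HOST, MAIL_PORT, timeout=5) as smtp:
            code, _ = smtp.noop()          # cheap "are you there?" command
            return code == 250
    except OSError:
        return False

def on_heartbeat_check(last_seen: float) -> str:
    if time.time() - last_seen < HEARTBEAT_TIMEOUT:
        return "ok"
    if service_alive():
        return "heartbeat lost but service up -- investigate the heartbeat net"
    return "service down -- start failover"
```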
Sometimes two copies of the application can’t run at the same time. In that case, you have to be sure the original one is gone. To achieve this, you shut off the original server. This is called “shoot the other node in the head,” or STONITH. Sometimes the word node is replaced with guy and you get STOGITH. If you do this incorrectly, you may take yourself down, a situation referred to as SMITH. Or you take all the servers down, and this is called SEITH. But I digest…
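The ordering is the whole point: confirm the kill, then restart. A sketch of that, with an entirely hypothetical power-controller stand-in (real setups would use IPMI, a managed PDU, or similar):

```python
# Fencing sketch: before starting a second copy anywhere, make absolutely sure
# the first one is powered off. The PowerController class is a stand-in, not a
# real out-of-band management API.

class PowerController:
    """Pretend interface to an out-of-band power switch (IPMI, smart PDU, ...)."""
    def power_off(self, node: str) -> bool:
        print("[pdu] cutting power to " + node)
        return True   # pretend the outlet confirmed the cut

def stonith_then_restart(failed_node: str, spare_node: str, pdu: PowerController) -> None:
    # Step 1: shoot the other node in the head. If we can't confirm the kill,
    # do NOT start a second copy -- two live instances is the worse failure.
    if not pdu.power_off(failed_node):
        raise RuntimeError("could not fence " + failed_node + "; refusing to restart")
    # Step 2: only now is it safe to bring the application up elsewhere.
    print("starting application on " + spare_node)

if __name__ == "__main__":
    stonith_then_restart("mail-compute-01", "spare-07", PowerController())
```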
Here’s the part I am currently trying to decide. If an application depends on a resource and that resource fails, you can bring the application up on a different server. It will take a non-trivial amount of time (call it a minute) to shut down the old instance and bring up the new one. If, on the other hand, the disconnect is temporary, we can minimize downtime by just waiting for the network to come back up. If someone disconnects a cable by accident, that person can just plug the cable back in. And if the network is redundant, removing one cable may result in degraded performance, or it may not.
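Put as code, the trade-off I’m weighing looks something like the sketch below. The one-minute restart cost comes from my estimate above; the grace period is a pure guess.

```python
# The wait-or-fail-over trade-off: a restart elsewhere costs about a minute no
# matter what, so it can be worth waiting briefly to see if the resource comes
# back on its own. Thresholds are guesses, not recommendations.

RESTART_COST_S = 60     # shut down old instance + bring up new one
GRACE_PERIOD_S = 30     # how long we're willing to wait for a blip to clear

def choose_action(outage_so_far_s: float, redundant_path_still_up: bool) -> str:
    if redundant_path_still_up:
        # Degraded at worst; a failover would only add downtime.
        return "stay put, run degraded"
    if outage_so_far_s < GRACE_PERIOD_S:
        return "wait -- a replugged cable beats a one-minute restart"
    return "fail over (accept ~{}s of downtime)".format(RESTART_COST_S)

if __name__ == "__main__":
    print(choose_action(5, redundant_path_still_up=True))
    print(choose_action(10, redundant_path_still_up=False))
    print(choose_action(45, redundant_path_still_up=False))
```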
If the failure is due to the switch being down, selecting another host connected to the same switch will just result in downtime and a situation no better than the original. If the problem is the storage host being down, there is probably nothing you can do to recover short of human or divine intervention.
If a switch goes down but there is another set of hosts on a different switch, you can migrate applications to those hosts. But you may end up overloading the other switch; this is referred to as the stampeding herd effect. If the lost switch is degrading performance for all applications that depend on it, your best strategy is to migrate a subset of applications to balance the load. After each application is moved, recheck the network traffic to determine whether you’ve done enough. Or done too much.
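Here’s a sketch of that move-a-few-and-recheck loop. The load numbers and the “lightest app first” ordering are my own assumptions for illustration.

```python
# Sketch of draining a degraded switch a little at a time, rechecking the
# healthy switch's load after each move so we don't stampede the herd onto it.

def rebalance(apps_on_bad_switch: dict, good_switch_load: float,
              good_switch_capacity: float, target_bad_load: float) -> list:
    """Move the lightest apps first until the degraded switch is under target,
    stopping before the healthy switch would be pushed past capacity."""
    moved = []
    bad_load = sum(apps_on_bad_switch.values())
    for app, load in sorted(apps_on_bad_switch.items(), key=lambda kv: kv[1]):
        if bad_load <= target_bad_load:
            break                                  # done enough; don't overdo it
        if good_switch_load + load > good_switch_capacity:
            break                                  # stop before overloading the good switch
        moved.append(app)
        bad_load -= load
        good_switch_load += load                   # "recheck" after each move
    return moved

if __name__ == "__main__":
    apps = {"mail": 4.0, "web": 2.0, "reports": 1.0, "backup": 3.0}   # Gb/s each
    print(rebalance(apps, good_switch_load=5.0,
                    good_switch_capacity=10.0, target_bad_load=5.0))
    # -> ['reports', 'web']; moving 'backup' too would overload the good switch
```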
A malfunctioning NIC or switch might manifest as intermittent connectivity. In this case, you want to get the application off of the original server and onto a new one. The problem is distinguishing this from the case where the cable got unplugged once and then plugged back in. From the server’s perspective, the network is gone, and then it is back. This leads to a lot of questions. What interval, and how many iterations, do you let go by before you decide to bail from that server? If you have redundancy, does the failing connection impact the user’s experience, or does proper routing ensure seamless connectivity?
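One simple way to frame the “how many bounces before you bail” question is flap counting over a sliding window. A sketch, with a window and threshold I pulled out of thin air:

```python
# Sketch of telling a flapping NIC/switch apart from a one-off unplugged cable:
# count link-down events inside a sliding window and only give up on the server
# after it has bounced too many times. Window and threshold are guesses.

import time
from collections import deque

FLAP_WINDOW_S = 300     # look at the last five minutes
FLAP_THRESHOLD = 3      # this many down events means "get off this box"

class FlapDetector:
    def __init__(self):
        self.down_events = deque()

    def record_link_down(self, now=None) -> str:
        now = time.time() if now is None else now
        self.down_events.append(now)
        # Drop events that have aged out of the window.
        while self.down_events and now - self.down_events[0] > FLAP_WINDOW_S:
            self.down_events.popleft()
        if len(self.down_events) >= FLAP_THRESHOLD:
            return "flapping -- migrate the application off this server"
        return "transient -- stay and watch"

if __name__ == "__main__":
    d = FlapDetector()
    print(d.record_link_down(now=0))      # transient
    print(d.record_link_down(now=60))     # transient
    print(d.record_link_down(now=120))    # flapping
```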