I have been asked questions about virtualization recently, and could not find an appropriate web resource to point to, so I thought it might be worthwhile to write a post spelling out the basic principles of virtualization.
Virtualization is a technique whereby many applications can share the same set of hardware without interfering with one another. Generally, each application appears to have its own dedicated piece of hardware, and everything runs as normal. An often quoted statistic is that typical data centre usage today, without virtualization, sees only about 10 to 15% of hardware resources in use. Typically this is because systems need to be sized to support their peak load requirements. That peak load occurs rarely, e.g. during a seasonal spike in sales; because the applications are statically assigned to the hardware, the rest of the time the hardware is sitting there doing nothing but wasting space, drawing power, generating heat and depreciating. Using virtualization techniques, it becomes possible to reduce the amount of hardware required in a data centre by anywhere from 50% to 80%, by allowing capacity to be shifted to the applications as and when they need it.
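To make that consolidation arithmetic concrete, here is a hypothetical back-of-the-envelope sketch in Python. All the workload numbers are made up for illustration (eight apps, ~15% average utilization, some safety headroom), not taken from any real data centre:

```python
import math

def servers_needed_dedicated(peak_loads):
    """Dedicated hardware: each app gets a server sized for its own peak."""
    return len(peak_loads)

def servers_needed_virtualized(peak_loads, server_capacity, headroom=0.75):
    """Virtualized: apps share capacity. Assume peaks rarely coincide,
    so size for the sum of average loads plus some headroom.
    (15% average utilization is the figure quoted in the post.)"""
    avg_loads = [p * 0.15 for p in peak_loads]
    required = sum(avg_loads) / headroom
    return math.ceil(required / server_capacity)

# Peak CPU demand per application, in arbitrary units (illustrative).
peaks = [80, 60, 90, 70, 50, 85, 65, 75]
dedicated = servers_needed_dedicated(peaks)
shared = servers_needed_virtualized(peaks, server_capacity=100)
print(dedicated, shared)  # 8 dedicated servers vs. 2 shared ones
```

With these toy numbers the estate shrinks from 8 servers to 2, a 75% reduction, which sits comfortably inside the 50% to 80% range mentioned above.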
There are four primary means through which we can achieve virtualization. To understand these, we must start with the basic picture: we have an “Application”, a piece of software that does something useful for our business. This application runs on top of an operating system, which provides the basic components required to run a computer. In turn, the operating system runs on a specific piece of hardware: a computer! Below I will attempt to explain the four levels of virtualization available as a hierarchy, starting with hardware virtualization, then moving up through native hypervisors, hosted hypervisors and partitioning technologies.
The first way to achieve virtualization is at the hardware level. A good example of hardware providing such functionality would be Sun/Oracle high-end servers, through a feature called “Dynamic Domains”. For example, a 32-processor server might come with 8 boards, each with 4 processors/sockets installed. This can be configured as 1 system with 32 processors, or as 8 systems with 4 processors each. Resources can be reassigned from one system to another dynamically. Potentially, one can even remove a board and replace it with another containing faster processors, whilst the machine is running.
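The board arithmetic can be pictured with a toy model. This is purely illustrative (the class names and the reassignment step are my own invention, not Sun’s actual domain management interface): domains are built from whole boards, and moving a board moves its processors from one system to the other while both keep running:

```python
class Board:
    """One system board carrying a fixed number of processors."""
    def __init__(self, board_id, cpus=4):
        self.board_id = board_id
        self.cpus = cpus

class Domain:
    """A dynamic domain: an isolated system assembled from whole boards."""
    def __init__(self, name):
        self.name = name
        self.boards = []

    @property
    def cpus(self):
        return sum(b.cpus for b in self.boards)

boards = [Board(i) for i in range(8)]   # 8 boards x 4 CPUs = 32 CPUs total
prod = Domain("prod")
test = Domain("test")
prod.boards = boards[:6]                # 24 CPUs for production
test.boards = boards[6:]                # 8 CPUs for test

# Month-end peak: shift a board from test to prod, no reboot required.
prod.boards.append(test.boards.pop())
print(prod.cpus, test.cpus)  # 28 4
```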
The next level of virtualization is something called a “Native Hypervisor”. This is a piece of software that runs between the hardware and the operating system, allowing each operating system to run as if it were on its own dedicated piece of hardware. Examples of hypervisors include the IBM POWER Hypervisor, Xen, and VMware ESX. Effectively, the introduction of the hypervisor has made the benefits of virtualization available to the man in the street using commodity hardware.
The next level up this hierarchy is something known as a “Hosted Hypervisor”. Probably the most prominent example of this is VMware Player. This is an application that is installed on top of a regular operating system, and which can in turn host additional operating system instances. Each hosted operating system behaves as if it were running on its own machine. The primary disadvantage of this solution, however, is that it carries the overhead of an additional operating system.
The final real virtualization option is something known as operating system “partitions” or “containers”. Here there is a single OS kernel loaded into memory. This is the most restrictive form of virtualization, as all hosted OS instances must be of the same variety, whereas a hypervisor might happily be supporting Windows, Linux and various other variants of Unix simultaneously. There may even be the limitation that the individual OS instances must be at exactly the same patch level. However, if you can accept these restrictions, you can achieve maximum performance and scalability, as each application does not need a complete OS kernel of its own. This allows far more virtual machines to be assigned to a single piece of hardware than a regular hypervisor would permit. Examples of this type of technology include IBM’s AIX Workload Partitions, Solaris Containers/Zones, FreeBSD Jails and Linux VServers.
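The density advantage of skipping the per-instance kernel can be sketched with some simple arithmetic. The overhead figures here are hypothetical placeholders, not benchmarks of any particular product:

```python
# Why containers pack denser than full VMs: no per-instance OS kernel.
# All sizes below are illustrative assumptions, not measured values.

HOST_RAM_MB = 65536            # a 64 GB host
APP_RAM_MB = 1024              # each application's working set
VM_OS_OVERHEAD_MB = 768        # hypothetical per-VM guest OS footprint
CONTAINER_OVERHEAD_MB = 32     # hypothetical per-container bookkeeping

vms = HOST_RAM_MB // (APP_RAM_MB + VM_OS_OVERHEAD_MB)
containers = HOST_RAM_MB // (APP_RAM_MB + CONTAINER_OVERHEAD_MB)
print(vms, containers)  # 36 62
```

Under these assumptions the same host fits roughly 36 full virtual machines but 62 containers; the gap grows as the per-guest OS footprint rises relative to the application's own memory needs.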
Footnote: This is my first attempt at a real post with WordPress, and I am struggling a tad with getting the images to line up, scale, etc., but I hope the information may be of use to some!