
Storage Made Easy

Jim Liddle





Virtualization 101

With virtualization becoming intertwined with cloud computing it is worth taking a step back

With virtualization becoming intertwined with cloud computing, it is worth taking a step back and looking once again at what virtualization is, and is not. Virtualization and emulation are often compared, but there are important differences. Emulation provides the functionality of a target processor entirely in software; the main advantage is that you can emulate one type of processor on any other type of processor, but unfortunately it tends to be slow. Virtualization, by contrast, involves taking a physical processor and partitioning it into multiple contexts, all of which take turns running directly on the processor itself. Because of this, virtualization is faster than emulation.

Virtualization introduces an abstraction layer on top of resources, so that physical characteristics are hidden from the user. This abstraction layer takes care of resource allocation in order to meet the needs of the applications being run. In essence, virtualization enables you to create one or more virtual machines that run simultaneously alongside the host operating system. In its early days virtualization was more specialized and was utilized in a vendor-controlled way, such as IBM's LPAR approach. Virtualization vendors claim consolidation ratios of 4:1, with the potential to free up to 75 percent of the infrastructure in a data center.
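As a rough sketch of what a 4:1 consolidation ratio means in practice (the 4:1 and 75 percent figures are the vendors' claims; the function below is purely illustrative and its name is hypothetical):

```python
def servers_after_consolidation(workloads: int, ratio: int = 4) -> int:
    """Estimate physical hosts needed after consolidating at ratio:1."""
    # Round up: 10 workloads at 4:1 still need 3 hosts, not 2.5.
    return -(-workloads // ratio)

hosts = servers_after_consolidation(100, ratio=4)
print(hosts)                # 25 hosts for 100 workloads
print(1 - hosts / 100)      # 0.75 -> 75 percent of the hardware freed
```

The rounding up matters in smaller estates: consolidating 10 servers at 4:1 still leaves you needing 3 hosts, a saving of 70 percent rather than the headline 75.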

Chipset manufacturers are now optimising their processors to support virtualisation. Both Intel and AMD have extended the instruction sets of their newer processors to give increased support for virtualisation: AMD has labelled its technology 'AMD-V', and Intel's technology is called 'VT'. Expect even further advances. For example, the Intel Xeon 7400 'Dunnington' processors include a feature called FlexMigration, which allows virtual machines to be moved around easily within a server pool. You will need to understand in detail the processors that any virtualised environment runs on, as they offer a key mechanism for optimisation.
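On Linux you can check for these extensions yourself: the `vmx` CPU flag in /proc/cpuinfo indicates Intel VT support, and `svm` indicates AMD-V. A minimal sketch, assuming a Linux-style cpuinfo format (the function name is hypothetical):

```python
def hw_virt_support(cpuinfo_text):
    """Return 'Intel VT-x', 'AMD-V', or None based on CPU feature flags."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"   # Intel's VT extensions
            if "svm" in flags:
                return "AMD-V"        # AMD's virtualisation extensions
    return None

# In practice you would read the real file:
#   with open("/proc/cpuinfo") as f: print(hw_virt_support(f.read()))
sample = "flags\t\t: fpu vme de pse tsc msr vmx sse2"
print(hw_virt_support(sample))  # Intel VT-x
```

If neither flag appears, the processor (or the BIOS, which can disable these extensions) offers no hardware assist, and a hypervisor must fall back to slower software techniques.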

Key to the virtualisation architecture is the hypervisor, or virtual machine manager. A hypervisor is a program that allows multiple operating systems to share a single hardware host. Although each operating system appears to have the host's processor, memory, and other resources all to itself, the hypervisor is actually controlling the host processor and resources. It allocates what is needed to each operating system in turn, and these allocations can be managed and tuned.
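That "in turn" allocation can be sketched as a toy round-robin scheduler. This is purely illustrative (real hypervisor schedulers, such as credit or fair-share schedulers, are far more sophisticated, and all names here are hypothetical):

```python
from collections import deque

def round_robin(guests, total_slices):
    """Hand out fixed CPU time slices to each guest VM in turn."""
    queue = deque(guests)
    allocation = {g: 0 for g in guests}
    for _ in range(total_slices):
        g = queue.popleft()
        allocation[g] += 1  # this guest runs directly on the CPU for one slice
        queue.append(g)     # then rejoins the back of the queue
    return allocation

print(round_robin(["linux-vm", "windows-vm", "bsd-vm"], 9))
# {'linux-vm': 3, 'windows-vm': 3, 'bsd-vm': 3}
```

The tuning mentioned above corresponds to weighting this loop, e.g. giving a production guest more slices per round than a test guest.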

There are two types of Hypervisor:

- Type 1: This is referred to as a bare-metal or native hypervisor. It runs directly on the host hardware and hosts guest operating systems. Xen, VMware ESX, Parallels Server and Hyper-V are examples of this type of hypervisor.

- Type 2: This type of hypervisor runs within a host operating system. VMware Server (GSX), VirtualBox, and Parallels Workstation and Desktop are examples of this type of hypervisor. The Type 2 hypervisor is typically what people are referring to when they think of virtualisation.

There is a third approach: paravirtualisation. This is when the operating system has been modified to be aware of the hypervisor it is running on, which makes the interaction and integration between the two much smoother and, in theory, less prone to errors. 'Enlightenment' in Windows Server 2008 is an example of this, as it enables the OS to interact directly with the hypervisor.

With computing resources at a premium in terms of space, power, location, and cost, the use of virtualised infrastructure is a very compelling proposition for existing servers and hardware that are under-utilised or have spare capacity cycles. Virtualisation can be thought of as addressing one of the deficiencies of building a large infrastructure: resource utilisation. It also smooths over differences in OS infrastructure, software stacks and so on. With virtualisation, on-demand deployment of pre-configured virtual machines containing all the software required by a job becomes possible, and flexibility is added to resource management and application execution. For example, running virtual machines can be controlled by freezing them (similar to check-pointing) or by migrating them in real time while keeping the virtualised infrastructure running.
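The freeze/migrate control flow can be sketched as a toy state machine. This is illustrative only, not a real hypervisor API, and every name in it is hypothetical:

```python
class ToyVM:
    """Minimal state machine for the freeze/migrate operations described above."""

    def __init__(self, name, host):
        self.name, self.host, self.state = name, host, "running"

    def freeze(self):
        # Like check-pointing: pause execution but keep the memory image.
        assert self.state == "running"
        self.state = "frozen"

    def thaw(self):
        assert self.state == "frozen"
        self.state = "running"

    def migrate(self, new_host):
        # Live migration: move to another host while staying available.
        self.host = new_host

vm = ToyVM("job-42", "hostA")
vm.freeze()
vm.thaw()
vm.migrate("hostB")
print(vm.host, vm.state)  # hostB running
```

The key property the sketch captures is that freezing and migrating change where and whether the VM runs, while the rest of the virtualised infrastructure carries on untouched.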

Indeed, this type of proposition is beginning to be thought of as a 'private cloud', in which virtualisation is used to deliver services across an organisation, drawing on the best practices of 'public clouds'. These include Infrastructure-as-a-Service and Platform-as-a-Service concepts, for which virtualisation and PaaS providers are releasing products and tools to enable the deployment and management of such private clouds. One example is GigaSpaces, which recently announced tighter integration with VMware, enabling GigaSpaces to dynamically manage and scale VMware instances and let them participate in the scaling of GigaSpaces-hosted applications. A PaaS provider such as GigaSpaces is able to do this because of VMware's launch of vSphere, which opens up its VM product to allow management of both internal and external clouds. VMware is pitching vSphere as the first 'cloud OS', able to break separate hardware platforms up into pooled resources.

In terms of virtualisation, there are also drawbacks to watch out for. When you communicate to and from a virtualised node, packets need to pass through the virtualised communications layer. This is an overhead, and you should budget for a 10-20 percent performance hit. Furthermore, the number of VMs is not an indication of the speed or performance of your grid: running four virtual machines on a four-core 4 GHz chip is not the same as having four dedicated 1 GHz chips, one for each VM. Also, when one of your virtual machines is idle, any co-hosted VMs will get the majority of the processing power.
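A quick back-of-the-envelope sketch of that 10-20 percent figure (the throughput numbers and function name below are hypothetical, chosen only to make the arithmetic concrete):

```python
def effective_throughput(baseline, overhead_pct):
    """Throughput remaining after the virtual network layer takes its cut."""
    return baseline * (1 - overhead_pct / 100)

# A 10-20 percent hit on, say, 1000 requests/sec:
print(effective_throughput(1000, 10))  # 900.0
print(effective_throughput(1000, 20))  # 800.0

# Four VMs sharing a four-core host is not four dedicated chips:
# an idle VM's cycles flow to its busy neighbours, so each VM's
# effective share fluctuates with what the others are doing.
```

The point is to benchmark under realistic co-hosted load, not against a single VM running alone on an otherwise idle host.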

As the machines are virtual, and using resource cycles that are not in use, you may find that certain nodes are not available when you need or expect them. To this end you should ensure you have the ability to burst when required and have virtualised management infrastructure in place to handle this.

If you intend to embrace virtualisation then you will need to review machine specifications, paying special attention to processors and RAM, and review storage and network infrastructure.

The positives, however, far outweigh any drawbacks. Over time virtualisation will save money and, with all the innovation currently occurring around it, will make server administration easier.

Content adapted from my book "The Savvy Guide To HPC, Grid, DataGrid, Virtualisation and Cloud Computing," available on Amazon.


More Stories By Jim Liddle

Jim is CEO of Storage Made Easy. He has been a regular blogger at SYS-CON.com since 2004, covering mobile, Grid, and Cloud Computing topics.