Posted by cadsmith on September 27, 2009
“It prospered strangely, and did soon disperse
Through all the earth:
For they that taste it do rehearse
That virtue lies therein”,
George Herbert, Peace, 1633.
In this case the widespread subject is virtualization, which makes one computer or storage system look like many to its users. Its popularity stems from the cost and power saved by not loading up on hardware, often bought just to meet a temporary peak demand, and from the agility it brings to fielding appropriate infrastructure and applications. Sometimes it is as easy as drawing up a capacity plan and letting the hypervisor, or virtual machine monitor, assemble the hardware-emulation software automatically on hosted servers, tuning each virtual machine (VM) instance's share of resources such as processor cycles, memory, and bandwidth for proper load balancing.
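The capacity-planning step can be sketched as a placement problem: fit each VM's resource request onto the first host with room left. Here is a minimal first-fit sketch in Python; the host and VM names, and the choice of first-fit over a real scheduler, are illustrative assumptions, not how any particular hypervisor works.

```python
# Illustrative first-fit placement of VM resource requests onto hosts.
# Each host advertises capacity; each VM requests CPU cores, memory (GB),
# and network bandwidth (Gbit/s). Real hypervisor schedulers rebalance
# dynamically; this only shows the static capacity-plan idea.

def place_vms(hosts, vms):
    """Assign each VM to the first host with enough remaining capacity."""
    placement = {}
    remaining = {name: dict(cap) for name, cap in hosts.items()}
    for vm, req in vms.items():
        for host, cap in remaining.items():
            if all(cap[r] >= req[r] for r in req):
                for r in req:
                    cap[r] -= req[r]  # reserve the resources on this host
                placement[vm] = host
                break
        else:
            placement[vm] = None  # no host can fit this VM
    return placement

# Hypothetical inventory and workload:
hosts = {
    "host1": {"cpu": 8, "mem_gb": 32, "net_gbps": 10},
    "host2": {"cpu": 4, "mem_gb": 16, "net_gbps": 10},
}
vms = {
    "web":   {"cpu": 2, "mem_gb": 4,  "net_gbps": 2},
    "db":    {"cpu": 4, "mem_gb": 16, "net_gbps": 4},
    "batch": {"cpu": 4, "mem_gb": 8,  "net_gbps": 1},
}
print(place_vms(hosts, vms))
# → {'web': 'host1', 'db': 'host1', 'batch': 'host2'}
```

First-fit packs "web" and "db" onto host1, leaving "batch" for host2; a VM that fits nowhere maps to None, signaling that the plan needs more hardware.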
The techniques sprang from time-sharing, portable OSes, and redundant storage devices. Hardware, of course, was also often developed using simulation, with functionality implemented in firmware. Now the bare metal can host a layer that mimics popular processor, memory, I/O, and network-switch architectures, so off-the-shelf applications can run anywhere, operating system optional, and migration is easier. This is offered for servers, desktops, phones and data centers, and the approach spans cloud, grid, parallel and high-performance computing (HPC) systems. Vendors include VMware, Microsoft, IBM, Intel, Oracle, Cisco and many others. There are open-source versions, e.g. Xen and KVM, which lower cost further if vendor support is not necessary. Hardware may also have virtualization built in, for instance as a capacity multiplier or for compatibility with a variety of interfaces.
System management is significant, since integration issues are likely and software may require licensing. A virtual machine often has to reboot when a bug causes a crash, but the rest of the VMs keep running intact. Version changes introduce risk, and infrastructure patches cause side-effects in virtual apps. It is possible to mix various ratios of physical and virtual components, though performance may suffer under additional virtualization layers, and VM sprawl makes end-to-end administration more difficult. The visibility and testing tools need improvement. Standard quality measures can still be taken, such as use cases, architectural review, and measures of functionality, usability, security, scalability and performance. Benchmarks run inside VMs, however, may be skewed by clock drift.
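The clock-drift caveat has a practical workaround: time benchmarks with a monotonic timer rather than the guest's wall clock, which the hypervisor or NTP may step underneath you. A minimal sketch, assuming only the Python standard library:

```python
# Benchmarking inside a VM: the guest wall clock (time.time) can jump
# when the hypervisor or NTP corrects it, so elapsed-time measurement
# should use a monotonic, high-resolution timer instead.
import time

def bench(fn, *args):
    """Time fn(*args) with a monotonic clock; returns (result, seconds)."""
    start = time.perf_counter()   # monotonic; never jumps backward
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

result, secs = bench(sum, range(1_000_000))
print(f"sum={result}, took {secs:.4f}s")
```

This controls for clock stepping but not for the other VM timing hazard the paragraph implies: stolen CPU cycles still inflate the measured interval, so benchmark numbers from a shared host deserve extra skepticism.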
Users, developers and administrators can expect to see this topic expand as more virtual appliances are developed. Here is an introductory reading list.
- Cloud Security and Privacy, by Tim Mather and others, 2009, 336 pp.
- Running Xen: A Hands-On Guide to the Art of Virtualization, by Jeanna N. Matthews and others, 2008, 624 pp.
- Virtualization for Dummies, by Bernard Golden, 2007. Trends noted: hardware is underutilized, data centers run out of space, energy costs go through the roof, and system administration costs mount.
- Practical Virtualization Solutions: Virtualization From the Trenches, by Kenneth Hess and Amy Newman, 2009 (rough cut), 336 pp.
- The Best Damn Server Virtualization Book Period, by Rogier Dittner and David Rule, 2007, 500 pp.
- Storage Virtualization: Technologies for Simplifying Data Storage and Management, by Tom Clark, 2005, 264 pp.
- The Art of Scalability: Scalable Web Architecture, Processes and Organizations for the Modern Enterprise, by Martin L. Abbott and Michael T. Fisher, 2009, 500 pp.
- Crimeware: Understanding New Attacks and Defenses, by Markus Jakobsson and Zulfikar Ramzan, 2008, 608 pp.
- Hadoop: The Definitive Guide, by Tom White, 2009, 528 pp.
- A View of the Parallel Computing Landscape, by Asanovic and others, Communications of the ACM, October 2009.
- Virtualization Trends, Option and Adoption, Achleman, 2008.
- Understanding Full Virtualization, Paravirtualization and Hardware Assist, VMware, 2007.