Traditional mainframe, meet the x86 "software mainframe"

This is the first part in a two-part special report on the closing gap between distributed
systems and mainframe technology. Read part 2 here.

At VMworld Europe two years ago, VMware Inc. CEO Paul Maritz coined a phrase that drew attention
in the x86 server virtualization world: He said VMware was working to create a “software
mainframe” with its virtualization prowess.

At the time, some criticized
Maritz’s analogy, citing the vast differences in processing power between the mainframe and x86
chips; even today, VMware officials say the phrase is meant to be an analogy, not an
apples-to-apples comparison.

But the two worlds start to look more alike when you combine server virtualization for x86
platforms; advanced automation and orchestration tools for distributed systems; increasing power
and virtualization integration in commodity chips; and the trend of scale-out computing.

Traditional mainframe: Vertically integrated hardware for high performance

The mainframe and virtualization weren’t mutually exclusive concepts to begin with; server
virtualization originated in 1965 with the IBM System/360-67 mainframe with virtual memory
hardware, followed by the System/370 with virtual memory and VM/370 in 1972.

Mainframes are still orders of magnitude more powerful than even the largest virtualized
distributed systems clusters. For example, IBM’s most recent z196 system microprocessors run at
5.2 GHz, while today’s commodity processors run between 2.0 and 3.4 GHz. A single z/VM 6.1
instance can accommodate more than 60 VMs per CPU; a fully loaded zEnterprise cabinet can hold
four modules, each containing six quad-core processors and up to 768 GB of memory with four
levels of cache; the full system can address more than 3 TB of memory and support thousands of
concurrent workloads, in many cases running at (or close to) 100% utilization.
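As a back-of-envelope sketch, the cabinet figures above can be tallied up as follows (the
768 GB-per-module figure is taken as four times that amount equaling the 3 TB system total the
article cites; treat it as an assumption, not an IBM spec sheet):

```python
# Back-of-envelope tally of the zEnterprise cabinet figures quoted above.
# Assumption: 768 GB of memory per module, since 4 x 768 GB = 3 TB,
# consistent with the "more than 3 TB" system total cited in the text.
modules_per_cabinet = 4
processors_per_module = 6
cores_per_processor = 4
memory_per_module_gb = 768

total_cores = modules_per_cabinet * processors_per_module * cores_per_processor
total_memory_gb = modules_per_cabinet * memory_per_module_gb

print(total_cores)      # 96 cores in a fully loaded cabinet
print(total_memory_gb)  # 3072 GB, i.e. 3 TB
```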

Such high utilization is possible because the mainframe is a vertically integrated system with
applications written specifically to take advantage of its specialized hardware through a single
operating system. Each operating system can theoretically host somewhere between 10,000 and 30,000
users, depending on the mix of workloads. The mainframe is also still regarded as the gold standard
when it comes to batch jobs — up to 100,000 of which can be run in a day within the same OS. With
bare-metal hypervisors like VMware’s vSphere, each VM still has its own OS drivers and calls to
coordinate with the underlying hardware, raising overhead and lowering the consolidation ratio
compared with the mainframe. Finally, the mainframe can directly access removable media, a concept
still foreign to x86 virtualization. This can be important in large shops with too much data to
store long-term on spinning disk.

Mainframes also boast more advanced capabilities in application availability, resiliency and
disaster recovery than even the most bleeding-edge x86 virtualization software, according to Joe
Clabby, president at Clabby Analytics. “What virtualization does offer is the ability to fail over
to a pool — [you get] better failover and reliability [for x86 systems] that way, but it’s nothing
like a mainframe,” he said. “If you have business-critical applications and want the best
architecture with the best hardware and software to ensure they keep running, you’d choose
mainframe over x86 at this particular point in time.”

Users also point to system maintenance and security as advantages in the mainframe’s column.
According to Robert Crawford, a mainframe systems programmer who asked that his company not be
named, “IBM has a product called SMP/E that manages all the fixes that you put onto their software
so you can say ‘I need a fix for this particular problem,’ and it makes sure the prerequisites
and corequisites are there before you put it on … for most [mainframe] systems, you can back out a
bad fix in about 30 seconds, where that wouldn’t be nearly as easy on a distributed system.”
Crawford was referring to IBM’s System Modification Program/Extended tool.

From a security standpoint, Crawford said, viruses and worms that exploit stack-based buffer
overflow conditions in distributed systems simply don’t work on the mainframe.

The software mainframe: Scale-out software at a lower acquisition cost

The software mainframe, meanwhile, is in its infancy compared with the traditional mainframe;
VMware, the “first mover” in x86 virtualization, was founded in 1998, more than 30 years after
virtualization first became available on the IBM mainframe.

According to VMware’s published configuration maximums, the latest version of its software,
vSphere 4.1, can scale to 32 x86 hosts and up to 3,000 VMs per cluster. This would work out to
more than a 90:1 virtualization ratio, which for most enterprises today remains theoretical; the
typical ratio is closer to between 15:1 and 20:1.
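The arithmetic behind that "more than 90:1" figure is simply the published cluster maximums
divided out:

```python
# Consolidation ratio implied by VMware's published vSphere 4.1
# configuration maximums, as cited above.
max_vms_per_cluster = 3000
max_hosts_per_cluster = 32

theoretical_ratio = max_vms_per_cluster / max_hosts_per_cluster
print(round(theoretical_ratio, 2))  # 93.75 -- hence "more than 90:1"
```

Real-world clusters, as the article notes, land closer to 15 to 20 VMs per host.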

Most enterprises are also not 100% virtualized, even those that want to be, a phenomenon known as
“VM stall.” Each vSphere 4.1 virtual machine can support up to eight virtual CPUs, but VMware’s
Fault Tolerance (FT) does not support synchronous failover for VMs with more than one virtual
CPU, a limitation which, among others, has prevented Tier 1 applications from being virtualized
in some data centers.

Virtual machines alone also do not constitute a software mainframe — the phrase more accurately
describes a private cloud, in which resources are centrally pooled and automatically provisioned
on demand. VMware and its primary competitors, Microsoft Corp. and Citrix Systems Inc., are
working to get there, or at least someplace close, but both infrastructure integration and
management tools still require further development to approach the traditional mainframe’s true
capabilities.

VMware officials say Maritz’s phrase is meant figuratively. “We use that term ‘software
mainframe’ as a metaphor,” said Bogomil Balkansky, VMware’s vice president of
product marketing. “We never actually meant it to be a side-by-side comparison.”

Still, Jonathan Eunice, analyst at Illuminata Inc., pointed out that “VMware and x86
virt[ualization] bring portions of that … [they] democratize HA [high availability] in a way a
lot of previous cluster and FT technologies have not.”

X86 virtualization narrows the gap

And while big differences remain, similarities are growing. Aside from improved high availability,
capacity planning and performance management automation are developing fast in the virtualization
world, as is dynamic resource scheduling. For example, server virtualization vendors are launching
portals that would allow end users of applications to request resources from the centralized
computing pool and have them automatically provisioned as well as continually load balanced and
monitored. In the mainframe world, this is simply how resource provisioning has always been
done.

Vertical hardware integration is also making its way into the x86 virtualization world, from the
microprocessor level with Intel’s Virtualization Technology (VT) and AMD’s AMD-V chips to the level
of pre-integrated data center infrastructure with products like Oracle’s Exadata and The VCE
Company’s Vblock. So far, however, the most common use case is “out of the box” support for a
single application such as an Oracle or SAP database, rather than thousands of concurrent
workloads.

Charles King, an analyst at Pund-IT, sees the Maritz “software mainframe” analogy finally being
realized with Vblock, as “it integrates systems where all is incorporated in the same way the
mainframe is, only using x86-based server hardware. … They have the basic model, with some ability
to move workloads back and forth dynamically around the data center.”

So why are users working to create x86-based private clouds using software-based virtualization?
Aren’t they reinventing the mainframe wheel with less powerful technology? IBM might say yes, but
users have motivations — the foremost being cost and the flexibility that comes with scaling out
smaller commodity systems rather than building up one centralized supercomputer.

Chris Rima, supervisor of infrastructure systems for a utility in the Southwest, said he’s
considering a Vblock for virtual desktop infrastructure. “I’m not saying I’m sold yet,” he said.
“But I like that it’s modular — more modular than the mainframe. And being able to scale out your
ESX infrastructure 1U or 2U at a time is the most modular … it’s not taking up half of my data
center.”

While the mainframe may boast faster performance and immense vertical scalability, it is also
much more expensive in terms of up-front purchase price. “Maintenance costs for a single mainframe
can easily approach a million dollars a year,” said Rima, whose company decommissioned a mainframe
in favor of x86 virtualization several years ago. “Now compare that with … ESX infrastructure, and
you’re talking about 10 to 20% of those maintenance costs for the same or more computing power.
That’s why we did it — it broke down to dollars.”
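Taking Rima's figures at face value, the savings he describes work out roughly as follows (a
sketch using his round numbers, not audited costs):

```python
# Rough maintenance-cost comparison from Rima's figures: about
# $1 million a year for a single mainframe versus "10 to 20%" of
# that for comparable ESX infrastructure.
mainframe_maintenance = 1_000_000  # dollars per year, per the quote

esx_low = mainframe_maintenance * 0.10   # 10% of mainframe cost
esx_high = mainframe_maintenance * 0.20  # 20% of mainframe cost

print(int(esx_low), int(esx_high))  # 100000 200000 dollars per year
```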

Experts say that a return to the traditional mainframe, at least in North America, is also
unlikely because of the “PC generation’s” skill set. “If you actually sit down and do all the
analysis, by the time you’re done, there’s really not a huge [TCO] difference [between x86
virtualization and the traditional mainframe],” said Robert Rosen, CIO at a U.S. government agency.
“But the problem is that the people in the trenches today as well as their managers grew up in the
PC world … and you tend to stay with what you’re familiar with.”

Click to read part 2 of our special report on x86 server virtualization versus the mainframe,
“Not so fast: IBM pushes mainframe toward the cloud.”

Beth Pariseau is a senior news writer for SearchServerVirtualization.com. Write to her at bpariseau@techtarget.com.




