Posts Tagged ‘iSCSI’

Windows Server Failover Clustering gets boost from PowerShell cmdlets

Tuesday, November 13th, 2012

If you manage Windows failover clusters, you may notice that the Cluster.exe CLI command
is missing after you install the Windows Server 2012 Failover Clustering feature. For years,
systems administrators

have used Cluster.exe to script the creation of clusters, move failover
groups, change resource properties and troubleshoot cluster outages. Yes, the Cluster.exe command
still exists in the Remote Server Administration Tools (RSAT), but it's not installed by default
and is considered a thing of the past.

Another thing you may soon notice in Windows Server 2012 is the PowerShell and Server Manager
icons pinned to the taskbar. What you may not notice is that the default
installation of the Windows Server 2012 operating system
is now Server Core and contains more
than 2,300 PowerShell cmdlets. Microsoft is sending a clear message that Windows servers should be
managed just like any other data center server, both remotely and through the use of scripting.
With Windows, that means PowerShell.

Fortunately, Windows Server Failover Clustering is no stranger to PowerShell. In Windows Server 2008 R2, 69
cluster-related PowerShell cmdlets assist with configuring clusters, groups and resources. This tip
explores the new PowerShell cmdlets in Windows Server 2012 failover clusters.

With Windows Server 2012, a total of 81 failover cluster cmdlets can be used to manage components from
PowerShell. New cluster cmdlets can perform cluster registry checkpoints for resources
(Add-ClusterCheckpoint), monitor virtual machines for events or service failure
(Add-ClusterVMMonitoredItem) and configure two new roles: Scale-Out File Server
(Add-ClusterScaleOutFileServerRole) and iSCSI Target Server (Add-ClusteriSCSITargetServerRole).
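As a rough sketch of how the first two of these cmdlets can be invoked on a cluster node (the resource name, VM name and registry key below are hypothetical placeholders):

```powershell
# Checkpoint a registry subtree for a clustered resource
# ("SQL Server" is a placeholder resource name)
Add-ClusterCheckpoint -ResourceName "SQL Server" `
    -RegistryCheckpoint "Software\Microsoft\MSSQLServer"

# Watch the Print Spooler service inside clustered VM "VM1";
# repeated service failures let the cluster restart or fail over the VM
Add-ClusterVMMonitoredItem -VirtualMachine "VM1" -Service "Spooler"
```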

To list all the failover cluster cmdlets, use the PowerShell cmdlet Get-Command -Module
FailoverClusters (Figure 1). I am using the built-in Windows PowerShell Integrated Scripting
Environment (ISE) editor, which helps admins get familiar with all the failover clustering cmdlets.
In addition to the FailoverClusters cmdlets, Microsoft has several new modules of PowerShell
cmdlets, including ClusterAwareUpdating with 17 new cmdlets, ClusterAware
ScheduledTasks with 19 new cmdlets and iSCSITarget with 23 new cmdlets. There are many
Cluster-Aware Updating cmdlets, such as adding the CAU role (Add-CauClusterRole), retrieving an
update report (Get-CauReport) or invoking a run to scan for and install any new updates (Invoke-CauRun).

Cluster-aware scheduled tasks are new to Windows Server 2012, and the Task Scheduler now
integrates with failover clusters. A scheduled task can run in one of three ways:

  • ClusterWide on all cluster nodes
  • AnyNode on a random node in the cluster
  • ResourceSpecific on the node that owns a specific cluster resource

The new ScheduledTasks cmdlets create a cluster-aware scheduled task. In the table, you can see
the cmdlets that register, get and set clustered scheduled task properties.

To get an idea of how to use these PowerShell cmdlets, you first assign an action and a trigger
variable. The action variable specifies the program that is to be executed, such as the Windows
calculator in the example below. The trigger variable sets up when the task is to be executed. The
resulting cmdlets to schedule a task to run cluster-wide daily at 14:00 would look like this:

PS C:\> $action = New-ScheduledTaskAction -Execute calc.exe

PS C:\> $trigger = New-ScheduledTaskTrigger -Daily -At 14:00

PS C:\> Register-ClusteredScheduledTask -Action $action -TaskName
ClusterWideCalculator -Description "Runs Calculator cluster wide" -TaskType ClusterWide -Trigger $trigger



While only PowerShell can be used to register, get/set and unregister cluster-aware scheduled
tasks, you can use the Task Scheduler in Computer Management to view the cluster jobs (Figure 2).
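Beyond registration, the rest of the task lifecycle is also PowerShell-only; a minimal sketch, reusing the task name from the example above (the new trigger time is hypothetical):

```powershell
# Inspect the clustered task registered earlier
Get-ClusteredScheduledTask -TaskName ClusterWideCalculator

# Change its schedule by supplying a new trigger
Set-ClusteredScheduledTask -TaskName ClusterWideCalculator `
    -Trigger (New-ScheduledTaskTrigger -Daily -At 18:00)

# Remove it when no longer needed
Unregister-ClusteredScheduledTask -TaskName ClusterWideCalculator
```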

Finally, failover clusters can now be configured with a highly available iSCSI Target Server.
This role allows you to create and serve iSCSI LUNs in a highly available
fashion to clients across your enterprise. To add this new cluster role, use the cmdlet
Install-WindowsFeature -Name FS-iSCSITarget-Server (or use Server Manager) to install the
iSCSI Target Server role. Then, use the new cmdlet Add-ClusteriSCSITargetServerRole to
create the iSCSI Target resource and associate it with shared storage. You can then leverage the
new iSCSI Target cmdlets to configure iSCSI LUNs (Figure 3).
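End to end, the sequence might look like the sketch below; the cluster disk name, paths, target name and initiator IQN are hypothetical stand-ins for your environment:

```powershell
# 1. Install the iSCSI Target Server feature
Install-WindowsFeature -Name FS-iSCSITarget-Server

# 2. Create the clustered iSCSI Target resource on shared storage
Add-ClusteriSCSITargetServerRole -Storage "Cluster Disk 2" -Name iscsi-tgt1

# 3. Use the iSCSITarget module cmdlets to carve out and map a LUN
New-IscsiVirtualDisk -Path X:\LUNs\lun1.vhd -SizeBytes 40GB
New-IscsiServerTarget -TargetName Target1 `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:host1.example.com"
Add-IscsiVirtualDiskTargetMapping -TargetName Target1 -Path X:\LUNs\lun1.vhd
```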

There is no shortage of PowerShell cmdlets in Windows Server 2012 to help you manage your
failover clusters. In addition to creating, configuring and troubleshooting your cluster, you can
use PowerShell cmdlets to add the new Scale-Out File Server and iSCSI Target Server roles,
clustered scheduled tasks and Cluster-Aware Updating.

About the Author
Bruce Mackenzie-Low is a master consultant at Hewlett-Packard Co., providing third-level worldwide
support on Microsoft Windows-based products, including clusters and crash dump analysis. With over
18 years of computing experience at Digital, Compaq and HP, he is a well-known resource for
resolving highly complex problems involving clusters, SANs, networking and internals. He has taught
extensively throughout his career, always leaving his audiences energized with his enthusiasm for technology.

This was first published in November 2012

Jobs firm Hudson dumps NetApp for HP in disaster recovery project

Thursday, October 18th, 2012

Global recruitment company Hudson has dumped NetApp for HP LeftHand storage as part of a
datacentre consolidation and disaster recovery project
that saw it cut energy use by 50% and double storage efficiency.

The project began as a move to a single European datacentre in London to support the roll-out of
Hudson's homegrown CRM system. But it
soon became apparent that a secondary disaster recovery site was needed and the existing NetApp
storage would have to be augmented or replaced, according to IT director Europe Bas Alblas.

“We had NetApp storage, and there were no problems with it, but once we had a single
datacentre it was a single point of failure, so we needed a second site,” he said. “We decided we
wanted synchronous replication
between sites and that we should be able to provide instant access to the most
critical apps in case of an outage.”

Alblas's team evaluated what NetApp could offer, but it proved too costly. The company opted for
HP storage; it had an existing relationship with HP as a supplier of servers.

Hudson implemented two HP LeftHand P4800 nodes at each site. HP MDS600 disk shelves house 72
600GB SAS drives for each node,
which support HP servers using iSCSI over 10GbE networking.

The datacentres operate in an active-active configuration, with HP storage supporting HP
ProLiant servers that run Citrix XenApp thin clients running on VMware.

Alblas said HP won out over NetApp on value-for-money terms.

“HP gave the best bang for the buck, with lower cost and more functionality,” he said. “We have
reduced storage rack space and energy use by 50%, with three times more storage capacity and
storage performance doubled. We have also achieved 50% more users per Citrix server and reduced the
Citrix virtual server count by 30%.”

Another factor in supplier choice was that Hudson wanted an active-active design with
synchronous replication between primary and secondary arrays. It opted to have the same storage
hardware at each location to make that easier to achieve. It could conceivably have re-used its
existing NetApp storage hardware at the disaster recovery site to achieve the same ends, but that
would have been a complex job, said Alblas.

What could HP improve in future versions of the P4000 series? Alblas said work needed to be done
by HP and VMware to improve iSCSI communications.

“The main issue we have is the way VMware communicates with the SAN is not smart. VMware hosts
at the main site sometimes communicate with the secondary site when they should be talking to the
primary SAN, and this puts strain on the WAN link,” he said.

“HP and VMware need to work to improve the iSCSI stack so that this doesn't happen with hot/hot
sites like ours,” added Alblas.


TwinStrata CloudArray moves inside the cloud for DR

Sunday, August 12th, 2012

TwinStrata Inc. this week said it's making available its CloudArray
cloud storage gateway as a virtual appliance inside the cloud to facilitate disaster recovery
for users, who won't have to set up a second site.

TwinStrata began selling the CloudArray iSCSI cloud storage gateway in 2010, allowing customers
to cache frequently accessed data on premises and send other data to the cloud. The new in-cloud
option lets TwinStrata CloudArray customers add a virtual array inside Amazon Elastic Compute Cloud
(EC2), Google, IBM SmartCloud Enterprise and Rackspace clouds. The in-cloud array gives customers
the ability to fail over on-premises applications without a dedicated DR site.


The in-cloud option also lets customers move data between cloud platforms in case the primary cloud
provider suffers an outage, said TwinStrata CEO Nicos Vekiarides. “If you have your data in the
cloud, what's your disaster recovery strategy? Now you can create copies to another cloud provider.
It's real-time replication, so if you have a copy of data, you can access [it] from another cloud
provider,” he said.

The virtual appliance offers up to 50 petabytes of storage capacity per server.

The TwinStrata appliance makes it possible for companies to create an iSCSI storage
area network (SAN) in the cloud based on object storage, Vekiarides said. “With CloudArray, you
create a SAN cloud,” he said. “You can build a large storage pool. You can manage it yourself. It
gives you the ability to order small-instance servers and use object storage and map it directly
so applications can access it. A lot of providers use object storage, but they don't have a way
to connect to object storage via plug-and-play.”

TwinStrata's in-cloud option is currently available on Amazon, IBM and Rackspace clouds, and is
expected to be available on Google by the end of September.


Nasuni adds block capabilities to cloud storage appliance

Saturday, July 14th, 2012

Cloud storage vendor Nasuni Corp. today said it is adding a block interface to its cloud
storage controller so it can deliver unified storage for
enterprises' remote and branch offices.

Nasuni launched in 2010 with a NAS cloud storage controller, and is now offering iSCSI
capability to go with NFS and CIFS
support. It is also adding a larger controller to make it easier for customers who want block and
file storage in one box.

A Nasuni controller is placed on a customer's site, where it works as a translator so data can be
stored in the Amazon or Microsoft Azure cloud. Nasuni's controller is available as a hardware
appliance or as a software virtual appliance that customers can install on any hardware.

“Many of our customers have been asking, 'It's great that you address my file needs, but why
don't you do something that addresses my block needs?'” Nasuni CEO Andres Rodriguez said. “This is
an expansion of the use case of what a single controller can do. It can now do all file protocols
and block data. We can give you one storage system, and do it with a consolidated footprint.”

Rodriguez said the Nasuni cloud storage appliance is not designed to run full enterprise
resource planning (ERP) systems or an Oracle database, but it can serve as a domain controller
or do “anything you can do on an entry-level NetApp box.”

The hardware or virtual appliance holds a copy of data locally on the customer site while the master
copy resides in the cloud. Customers create a volume for block-based data that is mirrored to the
master version in the cloud. File-based data is copied to the cloud via snapshots.

Rodriguez said Nasuni followed NetApp's approach to unified storage by starting out as a file server and then
turning blocks into files.

“The biggest technical difference for customers is that when you deploy on the file side, you
get thin provisioning on the volume. On the block side, you're taking a mirror volume of the block,” he
said. “You want to avoid latency that would generate
a time-out at the application level. On the iSCSI side, we guarantee blocks will be delivered in a
timely fashion by keeping them local.”

Nasuni also rolled out the 2U NF-400 dual-processor system, which has twice as much RAM and
scales to twice the local cache storage of the 1U NF-200. The NF-400 has 32 GB of RAM and holds
between 6TB and 12TB of cache for file data and the same capacity in local storage for block data.
The NF-400 has 10
Gigabit Ethernet (10 GbE)
connectivity and supports up to 900 users, while the NF-200 uses
Gigabit Ethernet (GbE) and supports 300 users. Pricing for the NF-400 model starts at $12,500.

Customers can also deploy Nasuni as a virtual machine on VMware or Microsoft Hyper-V platforms
for up to 50 users.

Enterprise Strategy Group (ESG) Senior Analyst Steve Duplessie said the value of Nasuni's cloud
storage appliance is that it lets remote offices manage less data.

“This is anti-storage,” Duplessie said. “It's really a controller that sits in a remote office
and allows all the actual storage to reside and be managed in the cloud by a central IT
department. Remote infrastructure support is a nightmare, so removing it is a boon to many
organizations. In many ways this could be considered the easiest and most valuable way for
companies to build their private clouds.”


Can Fibre Channel survive Ethernet's onslaught?

Sunday, May 6th, 2012

Fibre Channel, the high-speed data transport protocol for storage area networks (SAN), is under increasing pressure as data centers move toward Ethernet for all data network traffic and SAS for hardware interconnects.

By no means is Fibre Channel down and out. In fact, recent figures indicate it's still showing low single-digit, year-over-year growth. The protocol is currently used in $50 billion worth of equipment around the world, according to research firm Gartner.

Because corporate data centers are slow to swap out technology, the Fibre Channel networking market will likely continue to show sluggish growth for the next five to 10 years. After that, Ethernet looks to be the protocol of the future.

“The head wind against Ethernet is that there's a lot of politics and a lot of religion around Fibre Channel,” said Forrester analyst Andrew Reichmann. “[But] Ethernet can do most everything Fibre Channel can do. Ethernet is cheaper, more ubiquitous.”

And it allows IT managers to find the best fit for specific application workloads, he said. “As those decisions move more toward a workload-centric approach, the one that makes the most sense is Ethernet. For example, it makes more sense to put your [virtual machine] infrastructure on iSCSI or NFS [network file system] since there's very little difference in the performance you get compared to Fibre Channel.”

Slowing the move to Ethernet, for now, are the usual IT turf battles. Storage networks and hardware are purchased by the storage team, which controls that portion of the overall IT budget. Moving to an all-Ethernet infrastructure means giving that budget away to the networking group, according to Reichmann.

On top of that, some storage administrators simply don't trust that Ethernet is robust enough for data storage traffic. They've always used Fibre Channel and see it as the fastest, most reliable way to move data between servers and back-end storage.

“All those factors make it hard to move away from Fibre Channel,” Reichmann said.

Market research firm IDC predicts Fibre Channel will remain at the core of many data centres (supporting mission-critical mainframe and Unix-based applications), while many future IT asset deployments will leverage 10GbE (and later 40GbE) for the underlying storage interconnect. This transition will eventually lead to market revenue losses for Fibre Channel host bus adapter (HBA) and switch products.

As the Fibre Channel market shrinks, IDC predicts “rapid and sustained revenue growth” for 10GbE storage interconnect hardware such as converged network adapters (CNA), 10GbE network interface cards (NIC) and switches. (A CNA is simply a network interface card that allows access to both SANs and more common LAN networks by offering mixed protocols such as Fibre Channel, iSCSI, Fibre Channel over Ethernet (FCoE) and straight Ethernet.)

SAS and Fibre Channel drives

Although Fibre Channel switch revenues have remained relatively flat over the past two years, according to Gartner, Fibre Channel disk drive sales have plummeted. Vendors are expected to stop shipping them within five years.

“We're forecasting SAS will replace Fibre Channel because it provides more flexibility and it lowers engineering costs,” said Gartner analyst Stan Zaffos.

High-performance applications such as relational databases will be supported by SANs made up of 5 percent solid-state drives and 95 percent SAS drives, according to Forrester's Reichmann. SAS, or serial-attached SCSI, drives are dual-ported for resilience and are just as quick as their Fibre Channel counterparts.

Unlike Fibre Channel, SAS shares a common backplane with cheap, high-capacity serial ATA (SATA) drives, so they're interchangeable and can be mixed among drive trays. It also allows for easier data migration in a tiered storage infrastructure.

IP storage is a buyer’s market

Gartner recently released figures showing that over the past two years, shipments of Fibre Channel HBAs and switches have remained relatively flat, while 10GbE unit shipments have soared. According to Gartner, shipments of 10GbE NICs rose from 259,000 in 2009 to more than 1.4 million last year. And it's a buyer's market, with prices falling through the floor due to fierce competition between seven component vendors, including Intel, Broadcom, QLogic and Emulex.

“Prices for 10GbE hardware are going into the Dumpster. The market has to stabilize around three vendors before we see something from the revenue side,” said Sergis Mushell, an analyst at Gartner.

According to Mushell, single-port 10GbE NICs sell for $43 to $60; a year ago they went for $100. Dual-ported 10GbE NICs now go for about $300. And CNA cards sell for between $700 and $1,000.

In comparison, a 4Gbps Fibre Channel HBA sells for $337, while an 8Gbps HBA ranges between $1,000 and about $1,900 on online retail sites.

In the first quarter of 2010, Fibre Channel switch revenue totaled $1.59 billion; a year later it hit $1.66 billion; and in the third quarter of 2011, it was $1.58 billion. (Those figures include both 4Gbps and 8Gbps modular and fixed switches.)

Sales of Fibre Channel HBAs, the network interface cards that are required for servers and storage arrays alike, have also struggled. In the first quarter of 2010, HBA revenue totaled $781 million. While it rose to $855 million in the first quarter of 2011, it dropped back to $811 million by the third quarter of the year.

According to IDC, as the economic recession abated in 2010, IT shops began server upgrades that had been deferred, with an increasing use of server and storage virtualization. To manage those virtualised infrastructures, IT managers sought out a set of standard elements: x86 processors for computing, PCI for system buses, Ethernet for networking and SAS for hard drive and SSD interfaces.

“The goal is no longer to deploy and manage each component individually, but to build the optimal (e.g., densest, greenest, simplest) data center,” IDC wrote in its report “Worldwide Storage Networking Infrastructure for 2010 Through 2014.”

The underlying idea behind converged IT infrastructure is that companies want to deploy and manage IT resources in predefined “chunks” (e.g., a rack, an aisle or an entire data center) rather than as distinct products (e.g., servers, storage or network switches), according to IDC. Thanks to technologies like server and storage virtualization, these “chunks can then be allocated to support specific application sets. They can also be used much more efficiently,” IDC said.

Mazda makes its move

For example, Mazda's North American Operations virtualized its application servers, cutting its server count from 300 physical machines to 33 VMware ESX host servers with 522 virtual machines (VM). The move reduced Mazda's 2009-2010 IT budget by 30 percent, in large part by virtualising nearly all of its applications, including IBM WebSphere, SAP, IBM UDB and SQL Server. But the virtualisation project also caused storage network I/O bottlenecks because of all the combined VMs.

“The backup times just kept growing, from six hours to eight hours all the way to 16 hours,” said Barry Blakeley, Mazda's enterprise infrastructure architect. “In a workday, we can't have a 16-hour backup window.”

So Mazda moved its 85TB of storage from NetApp arrays to Dell Compellent iSCSI storage arrays attached via 10GbE networks. Mazda chose a virtual backup product from Veeam Software, following Blakeley's mantra for the project: “Keep it simple, stupid.”

“Once you deploy things correctly, you can get all the performance you need over iSCSI and you don't need Fibre Channel,” he said.

Blakeley said the Veeam backup software, combined with the 10GbE network, helped open up his storage network bandwidth, dropping his backup windows to six hours and increasing backup performance to about 6Gbps. “The restore times were really quick too,” he added.

Fibre Channel over Ethernet

One networking protocol that's gotten a big push, largely from Cisco, in recent years is Fibre Channel over Ethernet (FCoE). While Cisco doesn't break out sales figures for FCoE-enabled switches, the FCoE protocol was used in a little less than 10 percent of all SAN deployments last year, according to Stuart Miniman, an analyst at the Wikibon Project, a Web 2.0 community for IT professionals. Those figures, Miniman said, represent a tremendous success for FCoE.

“Most deployments of FCoE are in blade server environments; customers don't need to think about the technology, it just works the way current SANs do,” he stated in a recent blog post.

Miniman previously worked in the EMC CTO's office, where he was “an evangelist” for FCoE.

In contrast, Gartner's Mushell said his research firm is not predicting robust growth for FCoE.

Zaffos echoed that view. “Does it improve data availability? No. Does it improve performance? No. Does it simplify the infrastructure? Potentially. Does it simplify management? Perhaps,” he said. “But it's not changing how LUNs are created. It doesn't change how they're zoned or being allocated.”

Unlike iSCSI, FCoE still requires organisations to employ a Fibre Channel administrator to handle storage provisioning.

“When you look at simplifying an infrastructure, many users follow a [keep it simple, stupid] policy and choose to keep separate LAN and SAN infrastructures,” he said. “If you're keeping two separate environments… the simplification of the infrastructure [by using FCoE] may be illusory.”

Miniman argues that FCoE is a good way for an enterprise storage group to start the shift to a more Ethernet-centric environment while maintaining the data loss resiliency of Fibre Channel.

Miniman points out that organisations using FCoE tend to have infrastructures with more than 200 servers, and therefore have the budget for a full-time Fibre Channel admin and need the robust nature of the protocol. “If there's less than 200 servers, they tend to use iSCSI,” he said.

FCoE encapsulates Fibre Channel frames in Ethernet packets, allowing for server I/O consolidation. In an FCoE environment, converged network adapter cards (CNA) replace both NICs and HBAs. An FCoE-enabled switch then provides connectivity to both an existing LAN and a back-end SAN.

Bob Fine, product marketing director for Dell Compellent, argues that iSCSI can also be used in combination with the more robust lossless Ethernet and asks, “So what's the advantage to FCoE?”

Yet a little more than half of all Dell Compellent SAN ports remain Fibre Channel.

“I'd also say many of our customers are using mixed protocols. Very few are only using one,” he said. “That's the good thing about giving customers a technology choice. They can choose what technology is good for them.”

Sitting among Storage Networking World (SNW) attendees last month, Rod Patrick, lead IT systems engineer at Atmos Energy, was one of three people to raise his hand when a speaker asked the audience whether anyone was using FCoE for server-to-storage connectivity.

That was an improvement over last year, when Patrick was the only one to raise his hand.

“Even to this day I think we were early on in the game for FCoE,” he said. “It wasn't totally without incidents or pain… but we've been really glad to be on the leading edge.”

Atmos Energy consolidates its network

Atmos Energy, one of the largest natural gas distributors in the U.S., built a brand new data centre two and a half years ago. That allowed the company to start from scratch in consolidating its network infrastructure.

“It was mainly about cost and simplicity of the design,” said Patrick, who was hired about six months after the new data centre was built. “You're saving all sorts of gear as far as top-of-rack. So instead of having to run Fibre everywhere and Ethernet, you obviously just run Ethernet. It's a common path.”

Atmos had been using a mix of 2Gbps, 4Gbps and 8Gbps Fibre Channel for its SAN. Including three primary storage arrays, several midrange arrays and its archives, the company stores about 1 petabyte of data.

The company runs about 1,000 servers, 60 percent of which are virtualised. All of its virtual machines run over FCoE, as do about 100 physical machines that support higher-end databases.

Atmos deploys VDI

About a year and a half ago, Atmos deployed a virtual desktop infrastructure (VDI) that includes about 500 terminals in two call centres. Its VDI, based on VMware View, also runs over FCoE.

To help with VDI-related boot storms in the morning, Patrick and his team installed about 10TB of solid-state storage on the primary, high-end storage arrays to boost performance.

The company uses CNAs on its blade servers, which allow it to run both standard LAN data traffic over Ethernet and storage traffic using FCoE. But the company has yet to run FCoE all the way to the back-end storage.

The Fibre Channel storage arrays are connected to Cisco MDS switches, which are Fibre Channel only. Those MDS switches connect to Cisco Nexus 5000 switches, which connect to blade servers using FCoE.

“We are looking at FCoE direct-connect options to eliminate the [Cisco] MDS switches eventually, but that is a few years away probably,” Patrick said.

Patrick also said he wouldn't be opposed to deploying iSCSI or NFS (network file system) as his IP-over-Ethernet storage protocol in the future, though he has experienced problems in the past with regard to high-end storage performance needs.

For his virtual machine file system data, Patrick said he wanted to use a “more proven standard.”

“I'm kind of old school when it comes to Fibre Channel,” he said.

Forrester's Reichmann has no doubt how the tension between Fibre Channel and Ethernet will get resolved. Despite its adherents, Fibre Channel is on a long, slow slide toward obscurity.

“In the long run, Ethernet is going to win,” he said. “How long it's going to take to get there is unclear.”


Dell EqualLogic PS Series iSCSI SAN review

Saturday, May 5th, 2012

As IT contemplates the fast expanding universe of storage options, at least one fact has become clear: In the majority of infrastructures, most data just sits around, feeling lonely, while a small percentage is more or less constantly in use. Addressing this issue in an elegant and cost-saving way paves the road to lower capital expenditures for storage, as well as reduced power and cooling costs, with a side order of performance gains. What's not to love?

Several storage tiering solutions are available today, but they tend to be at the upper end of the market. For many solutions, you choose SAS disks, maybe with an older SATA-based unit that's already in place; you might equip another array with solid-state disks for additional juice. Without any smarts to tie these together, you wind up with manual tiering: Old data sits on the SATA/SAS boxes, and the high-turnover data lives on the SSDs. It's a workable solution, but one that requires care and feeding to maintain proper placement for each type of data.

Dell's EqualLogic iSCSI SANs now offer automated tiering across arrays, even across arrays of disparate types. In the lab, we ran a Dell EqualLogic PS4100E with 12 SAS drives and a PS6100XVS with a hybrid disk set: eight SSDs and 16 SAS drives. Each unit was equipped with redundant controllers and two 10GbE interfaces per array.

Multiple arrays, one system

The PS4100E and PS6100XVS were placed in the same storage group and managed as a single entity. The Dell EqualLogic management software allows the use of groups to maintain volumes that can be spread across multiple individual arrays. In days of yore, it was critical to maintain consistency between the arrays so that volumes wouldn't be spread across faster disks in one unit and slower disks in another, but that's no longer a requirement.

Because both arrays are members of a group with a single IP address and iSCSI gateway, hosts that connect to the various iSCSI LUNs see just a single storage host on the other side. iSCSI traffic is load balanced between the active interfaces on the controllers and the arrays themselves.

Further, working in concert with the automated storage tiering features, the controllers know which storage blocks are experiencing the most turnover. The controllers move these “hot” blocks to and from the fastest storage, ensuring that data needing faster access will not wind up on a slower array, but will be prioritized on the set of SSDs, should they be available. This capability is also available with traditional disks, but the inclusion of the SSDs (specifically, the hybrid 6100XVS paired with the lower-cost PS4100E) really shows off the advantages of these features in production workloads.

Let's envision a fairly normal storage workload for a medium-size infrastructure. We have a bunch of hypervisors driving several hundred VMs, along with general-purpose file sharing, and a passel of databases that drive a Web application tier to provide critical line-of-business applications.

It's common to satisfy all of these storage requirements with the same homogeneous storage array, but there are drawbacks. For instance, it means that a long-forgotten, never-again-to-be-accessed 2GB movie file that a user once stored in his home directory will sit right next to the bits that the core database servers are constantly reading and writing. In an ideal world, these files wouldn't mix, but we all know that the world we live in is rife with similar examples.

With automated tiering, that unwanted movie file will eventually wind up on the slowest disks in the data centre, while the database volume will wind up on the fastest – with no administrative intervention required.

In practice, this process is as simple as setting up the disparate arrays in the same group and introducing the workload. As the controllers get an idea of which data is flowing where, they will automatically distribute the blocks throughout the arrays according to demand.

In our example, this would mean that the database volumes and high-transaction VMs would wind up on the SSDs, while the movie file winds up on the SATA drives. As the load changes, the solution automatically adapts. If a user shared a link to that movie with the whole company and the movie began streaming to a few hundred people, the controllers would migrate it to faster storage. Thankfully, the Dell EqualLogic SAN HQ software provides the controls to ensure that an odd workload change such as this does not bump more critical data sets from the fastest disk.

Another advantage of automated tiering is that weekly or monthly workloads can be granted the benefit of fast disk only when they actually need it. As a monthly collection job progresses and the responsible databases begin churning for a 24-hour period, they will reap the benefits of the SSD-backed storage, then fall back to slower disk as their processing completes. Another example might be a virtual desktop infrastructure that experiences heavy loads during the morning log-ins and the evening log-offs, when desktop VMs are being rapidly spun up and put away, respectively, with lower disk I/O utilization in between.

Performance in numbers

Automated tiering isn’t entirely new to Dell EqualLogic, but the ability to extend the tiering across multiple high-speed and low-speed arrays such as the PS6100XV and the PS4100E puts the performance benefits in bold relief. Rather than having to add three or four arrays of disparate storage types to fully realize the benefits, the 6100XVS accomplishes much the same goal internally, as it can drive both SSD and SAS drives in a single 24-disk 2U chassis. And demonstrating the effects of storage tiering is relatively simple, requiring only a repeated workload that runs for a reasonable period of time.

Using IOMeter to test the PS6100XV and the PS4100E arrays was the simplest way to examine the solution. When hit with a mix of streaming and random reads and writes, the throughput grew substantially in some cases, less so in others, depending on a wide variety of variables such as block size and the level of random reads and writes. As with any storage device, your mileage may vary depending on the workload, but my general-purpose testing shows that the combination of the PS6100XV and the PS4100E should adapt very easily to most infrastructures.
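An IOMeter-style access pattern of the kind used in this testing can be approximated with a few lines of code. The parameters below (4 KB I/Os, 50% random, 70% reads) are illustrative stand-ins, not the actual test profile.

```python
import random

# Minimal stand-in for an IOMeter-style access pattern: a mix of
# sequential and random I/Os with a configurable read fraction.
def workload(n_ops, disk_blocks, pct_random=0.5, pct_read=0.7, seed=42):
    rng = random.Random(seed)
    ops, cursor = [], 0
    for _ in range(n_ops):
        if rng.random() < pct_random:
            block = rng.randrange(disk_blocks)      # random seek
        else:
            cursor = (cursor + 1) % disk_blocks     # sequential stream
            block = cursor
        kind = "read" if rng.random() < pct_read else "write"
        ops.append((kind, block))
    return ops

trace = workload(1000, disk_blocks=1 << 20)
reads = sum(1 for kind, _ in trace if kind == "read")
print(f"{reads / len(trace):.0%} reads")  # roughly the requested 70%
```

Replaying such a trace repeatedly is what gives a tiering array time to notice the hot blocks and migrate them.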


SMB NAS product survey: Non-scale-out/clustered NAS still fills a need

Friday, October 14th, 2011

The NAS market is truly mature yet also fast-changing. Its latest
iteration, clustered or scale-out NAS—which allows the joining of multiple NAS devices under a
single file system—has risen rapidly to meet organisations’ needs to store vast amounts of
unstructured data. But there is still a need for traditional NAS products to meet the needs of SMB
use cases such as small business and departmental/branch office file serving.

While higher-end NAS products have gone scale-out/clustered, SMB NAS products have in some cases
evolved to offer iSCSI and Fibre Channel block access connectivity options in addition to support
for traditional NFS and CIFS protocols. In this they have arguably become multiprotocol storage
subsystems, albeit majoring in NAS. Other products have remained true to file access and added
performance enhancers such as SSD.

In this article we examine some of the current offerings in the non-scale-out SMB NAS
market for low and midrange use cases and the advantages for organisations that want to
consolidate storage.


NetApp virtually invented the NAS product space, or at least made itself synonymous with it. Its
FAS filer products can be linked together to serve files from multiple nodes, but there are severe
limits on this capability, and so it is not a truly scale-out NAS product set. Products start at
the entry-level FAS2000 range. The FAS2020, for example, offers 20 onboard disk slots, externally
expandable to 68, and a total capacity of 68 TB on either SAS or SATA drives, with Fibre Channel,
Fibre Channel over Ethernet (FCoE) and iSCSI connectivity as well as NFS and CIFS file access
options. The hardware comes bundled with an extensive range of NetApp software, including options
for thin provisioning, snapshotting and data deduplication. Optional extras can be added, such as
remote-volume mirroring. Dual controllers in the same chassis, powered by the NetApp Data Ontap
operating system, provide failover in the event of controller failure. The range extends via the
FAS3000 midrange devices up to full-scale enterprise systems in the FAS6000 family.


EMC’s Celerra NX4 combines traditional NAS with iSCSI and optional Fibre Channel connectivity
and has the option to add a second X-Blade controller for failover capability. It has a maximum
disk capacity of 60 drives, which can be SAS, SATA or a mix of both; built-in file system
deduplication; virtual provisioning; automated volume management; and fully automated storage
tiering. The NX4 is the only unit in the Celerra storage range that does not feature the
Celerra Multi-Path File System (MPFS), which allows scale-out NAS deployments. Consequently, it is
billed as an entry-level storage system.


HP’s X1000 G2 Network Storage System is somewhat less feature-rich than NetApp’s FAS2000 series
and EMC’s NX4. Powered by Windows Storage Server 2008 R2, it offers iSCSI connectivity and can be
managed by HP X1800sb G2 Network Storage Blades. The X1000 has a maximum raw capacity of 24 TB with
either SATA or SAS drives. Its feature list also boasts file deduplication, share management, file
screening, reporting, Microsoft Windows Volume Shadow Copy Service (VSS) snapshots, Windows Active
Directory integration and Windows Distributed File System (DFS) Replication. In HP environments,
administrators can make use of integration with other HP products, such as the HP BladeSystem.


IBM’s N series system storage NAS range is OEMed NetApp hardware and so offers iSCSI, NAS and
Fibre Channel connectivity. The N3000 Express is the entry-level system of the N series and is
presented as a consolidation solution for data previously held in direct-attached storage (DAS). The
rebadged FAS2020 unit offers SAS or SATA disk types and features the NetApp Data Ontap operating
system, which manages thin provisioning and dual-controller options for data protection. This fits
into a 24 TB array, which comes standard with the initial N3000 2U unit. The N series allows
interoperability with external storage units and controllers from higher up in the range. The N
series is an affordable small and medium-sized enterprise (SME) NAS solution that can be scaled up
easily to an enterprise-level array with minimal migration pain.


Dell offers a number of NAS and multiprotocol storage product families, including the NS and NX
product lines. In the NX series, the entry-level model is the tower-format NX200, which provides
capacity of up to 8 TB on four hot-swappable SATA drives. The NX4 is a rebadged EMC NX4 and offers
NFS, CIFS, iSCSI and Fibre Channel access in a 5U rack with as much as 120 TB of SAS or SATA
drives. Meanwhile, the NX300 is NAS protocol-only, 1U in size and provides up to 8 TB of internal
capacity. The NX3000 and NX3100 offer CIFS and NFS access, with optional iSCSI access on the
NX3000. Both are in a 2U rack-mount format, with 24 TB of internal capacity on the NX3100 and 12
TB on the NX3000. Dell’s NS products are rebadged EMC devices, providing NFS and CIFS file access
with iSCSI and Fibre Channel block access. They come in an 8U rack-mount format; the NS120 offers
expandable capacity up to 120 SATA or Fibre Channel drives, while the NS480 can scale to 480
drives.


Nexsan announced the launch of its E5000 NAS range in August this year and has taken an
interesting approach to differentiate its product range from competitors. Rather than adding block
data functionality, the company has concentrated on file system performance optimisation, and the
E5310 (the midlevel system in the range) showcases Nexsan’s FASTier SSD cache for high-speed data
access. The E5310 features a maximum of 240 externally connected SAS or SATA disks, with a maximum
capacity of 720 TB, and an SSD cache consisting of two 100 GB drives, with an additional 8 GB of
DRAM cache. Thin provisioning, snapshotting and replication features come standard, and
dual-controller options can make the system resilient against hardware failure. Readers should note
the entry-level E5110 system does not support the SSD caching feature available in the higher-level
models.


The SnapServer NAS range from Overland Storage is set at a similar market level to the HP X1000.
It is powered by GuardianOS, developed by Adaptec, from which Overland acquired the SnapServer line
in 2008. The SnapServer N2000 unit is stackable up to six units with a maximum capacity of 12 drive
slots per array. The system supports either SAS or SATA drives and offers NFS and iSCSI
connectivity options via two 1 Gbps Ethernet ports. Snapshotting is included via Microsoft Windows
VSS. Replication services are an optional extra via the Snap Enterprise Data Replicator add-on.


While not offering the extended capabilities of scale-out NAS, the SMB NAS products here still
have an important role to play in SME environments. NAS systems have evolved from being
dedicated NFS/CIFS file serving solutions into products that also offer block-level storage. This
extended functionality is now within the reach of SMEs and will allow them to address their data
consolidation needs while also offering a cost-effective storage platform for virtual server
environments.
This was first published in October 2011


Server virtualization storage performance metrics: Which ones matter?

Sunday, October 9th, 2011

When it comes to assessing and monitoring how well the storage systems that support your server
virtualization environment are doing via storage performance metrics, there are many layers to
consider.

Storage for virtual servers can be served up by the hypervisor or served up from block- or
file-based storage devices over a network connection. When storage is served up by the hypervisor,
the hypervisor controls the access and as such is the more interesting of the two: what impacts
the hypervisor impacts everything running on it, so its metrics—as well as an understanding of how
they affect the overall workloads on the hypervisor—matter.

Storage served up by the hypervisor looks just like a SCSI device to the virtual machine, while
network-served storage may need specialized drivers, such as iSCSI.

Hypervisor-served storage can be in the form of Fibre Channel, iSCSI, NFS (and,
in the case of Hyper-V, CIFS) or local storage, but by the time the virtual machine attaches to
the storage device, it acts just like a normal SCSI device and therefore uses a normal SCSI driver
from within the guest operating system. The hypervisor uses binary translation to turn the
standard virtual machine SCSI driver commands into those that can be handled by the other
technologies, whether that is Fibre Channel, iSCSI, NFS or a local SCSI device. Binary translation
happens either within the hypervisor or within the CPU using Intel VT-x or AMD RVI command
structures. In either case, the virtual machine sees the storage as SCSI, while the hypervisor
sees the storage as something else entirely.

Metrics and how to interpret them

Given these dynamics, there are multiple sets of metrics to consider related to virtualization storage:

  • Storage performance metrics seen by the guest operating system
  • Storage performance metrics seen by the hypervisor
  • Storage performance metrics seen by the storage hardware

Each set of metrics is important for specific reasons, but you can’t count on them to always
guide you to the right decision. Some of these metrics could tell an untruth and therefore mislead
you.

The least valuable metric is the data seen by the guest operating system, since the
virtual machine does not necessarily receive full CPU cycles, in which case the data within the
virtual machine is suspect. The metrics involving CPU cycles for virtual machines are always
suspect because the virtual machine may or may not receive full CPU cycles. But some virtual
machine metrics don’t have anything to do with CPU cycles, and those non-CPU VM metrics are
still useful.

Storage performance metrics seen by the hypervisor are valuable, and the most frequently viewed,
but these can also be misleading, as they’re based on data that could be cached or queued by the
hypervisor.

That leaves metrics as seen by the storage hardware, and they’re the best ones to use, since the
hardware layer provides fine-grained data, down to the spindle being used. In many cases, this data
is the same as seen by the hypervisor, though in heavily cached subsystems this may not be the case.
Unfortunately, not all hardware subsystems allow this data to be seen, in which case you need to
concentrate on the metrics available to the hypervisor, since metrics from within the VM are not
completely reliable.

From the virtualization storage hardware layer, the most critical metrics are the read and
write latency values, or how long the data took to be read from the disk or written to the disk
once received by a specific layer. Next up in importance is the number of IOPS. You cannot just
look at IOPS without understanding the read and write kilobits per second, or Kbps. IOPS
refers to the operations; Kbps refers to the actual data read or written by the system. IOPS is the
most-looked-at metric, whether from the hypervisor or storage device. However, latency is a better
metric because it indicates whether there are issues with the storage. The IOPS number varies with
the number of blocks to be written and, with NFS (and CIFS), can’t be easily tracked because latency
metrics are not inherent within the protocol.
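The relationship between these three numbers can be made concrete. The sketch below derives IOPS, throughput and average latency from raw counters over a sampling interval; the sample values are invented for illustration.

```python
# IOPS counts operations, while throughput also depends on block size,
# so neither alone tells the whole story without latency.
def storage_metrics(ops_completed, bytes_moved, total_service_time_s, interval_s):
    iops = ops_completed / interval_s
    throughput_kb_s = bytes_moved / 1024 / interval_s
    avg_latency_ms = (total_service_time_s / ops_completed) * 1000
    return iops, throughput_kb_s, avg_latency_ms

# 10,000 ops of 8 KB each over a 10-second sample, 5 s of device time total.
iops, kbps, lat = storage_metrics(10_000, 10_000 * 8192, 5.0, 10.0)
print(iops)  # 1000.0 operations per second
print(kbps)  # 8000.0 KB/s -- the same IOPS at 64 KB blocks would move 8x more
print(lat)   # 0.5 ms average service time
```

The same 1,000 IOPS figure can thus describe wildly different loads, which is why latency is the more telling number.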

Which tools can do the job?

To gather all these storage performance metrics, look to tools from companies such as NetApp
(Balance), SolarWinds (Storage Manager), Quest (vFoglight Storage) and others that talk directly to
the hardware. Such products can poll the storage hardware layer using the Storage Management
Initiative Specification (SMI-S) software layer or directly via storage manufacturer protocols.

For the hypervisor level, tools such as VMware vCenter Operations, vKernel, VMturbo and Quest
vFoglight, to name a few, poll the hypervisor layer by querying the hypervisor directly or
indirectly using the hypervisor’s central management console (such as VMware vCenter).

Finally, guest operating systems provide their own tools for gathering storage performance
metrics.

A combination of SMI-S and hypervisor-based tools provides the best mix of functionality for
determining the latency, IOPS and bytes written or read. That’s mainly because the numbers produced
by these tools tend to be in sync unless the hypervisor is extremely busy. In that case, the
hardware numbers are best for pure storage performance, but since all resources touch one another
within the hypervisor, ensuring that the hypervisor does not reach that extreme busy state is
always a good thing.

If storage-level tools do not exist for your environment, such as with some iSCSI servers,
the hypervisor-based tools become the most important.

Edward L. Haletky is the author of multiple books about the VMware virtualization platform
and the CEO of AstroArch Consulting Inc. and The Virtualization Practice.

This was first published in September 2011


ATA over Ethernet offers SAN storage on a budget

Thursday, September 8th, 2011

If you want block-access shared storage, you need some kind of SAN. Overwhelmingly, your choice
has been limited to Fibre Channel (FC) or iSCSI, but in the background another technology, ATA over
Ethernet, is staking a claim as a viable alternative.

ATA over Ethernet, or AoE, brings enterprise-level performance and lower cost compared with FC
or iSCSI SAN hardware. It works analogously to iSCSI, which wraps SCSI commands in TCP
packets, but at a lower network layer. Instead of using TCP as a transport mechanism, ATA over
Ethernet encapsulates ATA block-level commands into Ethernet frames at network layer 2. Removing
the SCSI command processing overhead lowers latency and processing requirements, making ATA over
Ethernet arrays cheaper to make, buy and operate.
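To see how lean the encapsulation is, the sketch below builds the Ethernet header plus the AoE common header per the public AoE specification (EtherType 0x88A2; shelf/minor addressing). The MAC addresses, shelf/slot numbers and tag are made-up example values.

```python
import struct

AOE_ETHERTYPE = 0x88A2  # registered EtherType for ATA over Ethernet

# Build the 24-byte start of an AoE frame: a standard Ethernet header plus
# the AoE common header (version/flags, error, shelf, slot, command, tag).
# Shelf/slot address a target drive the way an IP:port addresses an iSCSI
# target -- but at layer 2, with no IP or TCP processing anywhere.
def aoe_header(dst_mac, src_mac, shelf, slot, command, tag):
    eth = struct.pack("!6s6sH", dst_mac, src_mac, AOE_ETHERTYPE)
    ver_flags = 0x10  # AoE version 1 in the high nibble, no flags set
    aoe = struct.pack("!BBHBBI", ver_flags, 0, shelf, slot, command, tag)
    return eth + aoe

frame = aoe_header(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01",
                   shelf=1, slot=2, command=1, tag=0xDEADBEEF)
print(len(frame))          # 24 bytes of header before any ATA payload
print(frame[12:14].hex())  # '88a2' -- the AoE EtherType, no IP/TCP headers
```

Compare this with iSCSI, where the same request would additionally carry IP and TCP headers and require a full TCP state machine on both ends.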

Coraid has claimed that its ATA over Ethernet arrays “offer up to a 5x to 8x price/performance
advantage over legacy Fibre Channel and iSCSI solutions” and remove the need for layer 3
multipathing software when using IP networks.

ATA over Ethernet users have reported the technology is trivial to set up, while analyst firm ESG’s
lab validation tests have shown that acquisition costs are less than half those for an
equivalent iSCSI SAN and less than 20 percent of those for a Fibre Channel SAN.

Operating at layer 2 also removes the possibility of routing, which works only at layer 3 and above.
This limits scalability in a distributed network, although some users have argued on forums that
this is not an issue for them. It also means the protocol is inherently more secure, as outside
agents cannot connect to it directly.

While the name ATA over Ethernet suggests that ATA disks are the target for the protocol, Coraid
connects a number of types of drive mechanisms in its array products. The company offers its
EtherDrive range of SAN products in a number of sizes, from 2U to 4U, containing up to 108 TB per
box, any of which can house SAS, SATA or SSD drives, accessed over either six 1 Gigabit Ethernet
(GbE) or four 10 GbE ports.

ATA over Ethernet was open-sourced by Coraid in 2003, and it has been a native component of the
Linux kernel since Version 2.6.11 in 2005. It has, however, seen little adoption by other vendors,
so Coraid remains the sole commercial vendor.

The fact that ATA over Ethernet has only one vendor is a key reason why it has not been adopted
on a widespread basis. Despite the lower costs of ATA over Ethernet, the safety factor involved in
buying from well-known brand names has trumped the performance and cost benefits. However, as
downward cost pressures, upward data growth rates and an uncertain global economy all seem set for
the foreseeable future, ATA over Ethernet’s cost advantage is likely to convince more
organisations, especially those in the public sector, to consider piloting this technology.

Palmer’s College adopts ATA over Ethernet

For Dan Byne, network and systems developer at Palmer’s College in Essex, the solution to his
storage problem was obvious and hugely cheaper than the alternatives.

The sixth-form college’s IT department, which serves more than 2,500 users’ laptops and
desktops, planned to migrate its servers from direct-attached to network-attached storage for a new
virtualised infrastructure, partly paid for with a £115,000 central government loan for greener
IT.
The college’s existing system was “a cluster of IBM storage and one SAN box clustered across a
couple of servers delivering Windows home directories,” Byne said.  The college was about to
purchase an EMC iSCSI SAN when a neighbouring college suggested it consider Coraid’s systems.

“We took a look, we were convinced and we signed up for it,” Byne said, notwithstanding that the EMC
reseller subsequently offered a significant price reduction.

The key deciding factor for Byne was the performance boost that Coraid’s EtherDrive SRX systems
offered. “Our tests showed that installing an OS is much quicker, and Windows Server 2008 startup
time is 10 seconds, for example.

“FC SANs are comparable in speed and throughput—though our tests showed that in fact Coraid was
faster—but there is not one manufacturer who will even get close to Coraid’s price/performance
advantage,” Byne said. He said he also liked Coraid’s simplicity and speed of setup, as well as the
fact that it can be managed using a command line interface.

“Now we run vSphere 4.1, and it all sits on Coraid appliances,” Byne said. “We have two SRX3200
SANs. The virtual machines do everything—databases, SQL servers, file and print, and they host our
intranet site. Everything is pretty much virtualised.” The storage is mirrored for redundancy
reasons over a 300-metre fibre link to the other side of the 1-acre campus.

“Coraid is so much easier than iSCSI. It made much more sense to go with this,” Byne said.
“Coraid covered all we were looking for, with no single point of failure. We might have
skipped it because a lot of people haven’t heard of it, and lots of resellers go for the EMC route.
IT managers will always be conservative on their storage, but with significant budget cuts in all
areas of government, they need to have a look outside the big players in the storage field. They
will be pleasantly surprised.”

This was first published in September 2011


How to perform storage network performance analysis and fix storage network problems

Thursday, September 8th, 2011

With the advent of solid-state drives (SSDs) and their lightning speeds, the subject of storage
systems performance is becoming increasingly important. While SSD will likely improve storage
performance in almost every situation, to fully maximize the capabilities of the technology,
everything else between the server and the storage must run at peak efficiency as well.

In a recent article, we discussed how to evaluate the performance of customers’ storage systems.
Next up is storage network performance analysis.

Network capacity

The first step in diagnosing storage network performance problems is to know the I/O capabilities
of the interface card installed in the performance-challenged host. To do this, measure the storage
bandwidth—both peak and average—that the server is using and compare that with what the card is
capable of delivering. If peak utilization is anywhere close to full utilization of theoretical
bandwidth, the storage network is a candidate for a hardware upgrade.
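This first-pass check can be written down directly. The 80% saturation threshold below is an assumed rule of thumb, not a vendor figure, and the sample numbers are invented.

```python
# Compare measured server bandwidth against the card's theoretical maximum
# to decide whether the host is a candidate for a hardware upgrade.
def needs_upgrade(peak_mb_s, avg_mb_s, card_gbit, threshold=0.80):
    theoretical_mb_s = card_gbit * 1000 / 8   # line rate in MB/s (decimal)
    peak_util = peak_mb_s / theoretical_mb_s
    avg_util = avg_mb_s / theoretical_mb_s
    return peak_util >= threshold, round(peak_util, 2), round(avg_util, 2)

# A 1 GbE iSCSI card (~125 MB/s) peaking at 110 MB/s is effectively saturated.
print(needs_upgrade(peak_mb_s=110, avg_mb_s=45, card_gbit=1))
# (True, 0.88, 0.36) -- this host is a candidate for a faster card
```

Note that peak utilization is what matters here; a low average with a saturated peak still means the card throttles the workload at the moments it matters most.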

This upgrade can be done in one of two ways. The most obvious and traditional option is to
install a faster card and a faster network infrastructure, at least for the servers that have
the performance problem.

Another, newer option is to reduce the amount of traffic on the network with server-based
solid-state caching. This technology—as we discuss in the article “What is server-based
solid-state caching?”—lowers the amount of data transmitted across the network by keeping the most
active segments of that server’s data on high-speed SSD in the server. Since it is a cache, the
flash management is handled in an automated way, without user interaction. For environments with
only a few servers to accelerate, this option may be less costly than an entire network upgrade,
and it may deliver improved performance.


Storage network performance analysis becomes more challenging if the network card doesn’t appear
to be the performance bottleneck. While the default next step is to blame the storage system, there
are more network stones to overturn in your pursuit of the source of the problem. To confirm that
you really do have a network problem (or, conversely, to prove that the problem stems from the
storage system), look at the disk queues on the storage system. If disk queues are low (4 or below),
disk IOPS are not consistently high and server CPU utilization is low (less than 50%), more than
likely the performance problem stems from the storage network. The CPU is waiting on something; it
just is not the storage system.
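The triage rule in the paragraph above can be expressed as a simple predicate. The queue-depth and CPU thresholds (4 and 50%) come from the text; the IOPS ceiling is an assumption added for illustration.

```python
# Low disk queues, IOPS that are not consistently high, and an underworked
# CPU all point away from the storage array and toward the storage network.
def likely_network_problem(disk_queue_depth, disk_iops, cpu_util_pct,
                           iops_high_water=10_000):
    array_is_idle = disk_queue_depth <= 4 and disk_iops < iops_high_water
    cpu_is_waiting = cpu_util_pct < 50
    return array_is_idle and cpu_is_waiting

# Queues of 2, modest IOPS, CPU at 30%: the CPU is waiting on something,
# and it isn't the storage system.
print(likely_network_problem(2, 3_000, 30))    # True
print(likely_network_problem(30, 50_000, 85))  # False -- look at the array
```

The value of encoding the rule this way is that it can run continuously against monitoring counters rather than being applied once during a crisis.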

If the above process indicates a storage network performance problem, it’s time to turn your
attention to the adapter type. The use of software iSCSI—iSCSI used in conjunction with a standard
(and often cheap) Ethernet card—can cause the server to spend too much time processing IP-to-SCSI
conversions.

To address this problem, there are three options: You could upgrade the server’s processor so
that it can perform the iSCSI conversion more quickly; upgrade to a card that offloads the iSCSI
traffic; or switch to a technology, like ATA over Ethernet (AoE) or Fibre Channel over Ethernet
(FCoE), that does not need the IP-to-SCSI conversion yet still works with standard high-quality
Ethernet cabling.


Another area to examine in storage network performance analysis is the quality of the cabling.
In some cases, faulty or improperly extended cables can impede performance, typically by causing
more errors and thus transmission retries. The best way to measure this is on-the-wire physical
analysis by some sort of network tap solution that can report packet loss in real time. While
it may seem like a lot of extra effort, it can pay off by saving days’ worth of
troubleshooting.

The switch itself also requires scrutiny during storage network performance analysis. While it
may seem rather obvious, make sure the customer’s ports are all set for their maximum speed. In
our experience, the inter-switch links (ISLs) are often not set to maximum speed. This is typically
done to satisfy some backward compatibility requirement, since many infrastructures are not
converted from one speed to the next overnight. Simply setting the maximum port speeds on the
switches can greatly improve performance at no additional cost.
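A port-speed audit of this kind is easy to automate once the negotiated and maximum speeds have been pulled from the switches. The port names and speeds below are made up for illustration.

```python
# Flag any switch port (ISLs especially) negotiated below its maximum speed.
def ports_below_max(port_table):
    """port_table maps port name -> (negotiated_gbit, max_gbit)."""
    return sorted(name for name, (got, cap) in port_table.items() if got < cap)

switch = {
    "eth1/1":  (10, 10),   # host-facing port, running at line rate
    "isl-a":   (4, 8),     # inter-switch link stuck at a legacy speed
    "isl-b":   (4, 8),
    "eth1/12": (10, 10),
}
print(ports_below_max(switch))  # ['isl-a', 'isl-b'] -- free performance
```

Feeding this from the switches' management interfaces turns an easily forgotten manual check into a routine report.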

IP infrastructure

The final area for consideration is an upgrade to modern switches and storage systems that
support the Data Center Bridging (DCB) standard, which provides for a lossless IP infrastructure.
As long as the host card and the storage system support it, a lossless environment will perform
better because performance won’t be lost in transmission retries. Dell has already announced
improved performance with the recently added DCB support in EqualLogic storage systems. And you
should expect to see several more vendors announce DCB test results in the near future.

George Crump is president of Storage Switzerland, an IT analyst firm focused on the storage and
virtualization segments.

This was first published in September 2011
