Posts Tagged ‘storage managers’

How to make storage techs even more confusing

Monday, August 6th, 2012

There’s a lot happening tech-wise in the data storage world, but the ever-growing hype, hoopla
and murky language are making it even harder for everybody to understand.

This might be the busiest time we’ve ever seen for data storage since the emergence of
the hard disk more than 50 years ago. I’ve been on the storage beat exclusively for the last eight
or nine years, covering it intermittently before that, and I can’t remember when there’s been more
activity in the storage arena on so many different fronts. It’s an interesting brew, cooked up with
a healthy portion of new technology, some recycled ideas about IT and business processes, and
spiced up with the usual overselling and hype that accompany these kinds of technical shifts.

Cloud, solid-state storage, virtualization (server, storage, network … take your pick), “big
data,” no-backup data protection, consumerization, virtual/cloud DR and more have all bubbled up at
roughly the same time. Any one of those techs would be plenty to cope with, but it’d take a
five-armed juggler to keep all those balls in the air. Marketers are half-crazed with delight, and
we in the media never lack something to report on. But if you’re a storage manager, your life is
just getting harder and harder.

Just keeping up with all this stuff is more than a full-time job, and implementing any one of
these things requires a solid understanding of the technology and its implications, along with a
lot of careful planning and maybe a little prayer, too, as everything can be pricey and many are tough to …

At the last Chicago Storage Decisions conference, a number of attendees told me how this was
perhaps the most trying time in their careers. These folks aren’t rookies — they’re solid storage
pros who make up the core group of repeat attendees who have been showing up at Storage Decisions for
years and have been doing storage for even longer. Their gripe? They’re confused by too much tech,
too much hype and too much misinformation.

Taken one by one, the technologies aren’t all that hard for these storage pros to parse and
evaluate. But with such an unprecedented barrage of tech and talk, it’s beginning to overwhelm.
Given a little time and some information, any of these experienced storage managers can figure out
if their company can benefit from some new tech, how it might fit into their existing
infrastructure and what needs to be done before, during and after implementation. They’ve done this
stuff plenty of times before.

But this time around, it’s not so easy because they’re not getting the information and help they
need from storage vendors. In fact, some of them said they’re getting the exact opposite:
confusion and obfuscation.

It doesn’t take much for me to go off on how vendors’ hype machines are a lot more effective at
muddying the waters than providing some clarity. (But then again, vendor hyperbole provides enough
material to fill dozens of these columns.) Cloud, solid-state and big data are challenging enough
on their own, but vendors still like to spin their tales of forced convergence where somehow cloud
+ solid-state + big data = the elusive IT solution to … well, everything it seems.

The clamor around big data has always been confusing because the term simultaneously refers to
very large files (a few of them) and very small files (a lot of them). And just when we’re almost
able to grok that concept, we get something like this, excerpted from a startup storage vendor’s
press release:

“Big Data today can characterize any data set that consists of large volumes of data, lots of
variety, or high-velocity data transfer (or all three).”

Hold it — that certainly broadens an already broad definition. So does that mean big data is
any data? All data?

(The headline of the press release noted that 71% of U.K. firms say big data is their “top
storage challenge” this year. I doubt 71% of any group could even agree on a definition of big data.)

That’s the kind of help storage managers simply don’t need. Storage vendors will have to do a
much better job of getting their message across. And you folks who buy and use their products have
to do a better job of holding their feet to the fire. At the very least, let’s all try to speak the
same language.

BIO: Rich Castagna is
editorial director of the Storage Media Group.

This was first published in August 2012

New forms of direct-attached storage emerge to meet VM I/O demands

Saturday, November 5th, 2011

Direct-attached storage (DAS) used to be ubiquitous. Servers and storage were at one time synonymous, to the point
that it became common practice for servers to be replaced when they ran out of disk capacity. As
data centres grew and storage became more costly and strategically valuable, especially with the
advent of server virtualisation, the case for sharing it became irresistible.

The result was the storage array, in SAN or NAS form, and DAS nearly disappeared from enterprise
storage. But while SAN and NAS have become the de facto standards in the data centre, alternative
forms of arrays and a new breed of DAS are emerging.

The storage performance gap

Traditional forms of storage arrays work well for single-application servers, but with
virtualisation generating more I/O per physical machine, access has become increasingly random.
That has made it harder for storage to match server performance.

Storage managers can buy more spindles to improve I/O, but this option brings with it cost,
space, heat and power issues. Effectively, host performance has outstripped that of the array
because the storage performance growth curve is less steep than Moore’s Law.

Workarounds exist. Virsto Software has software that plugs into the hypervisor and performs the
reorganisation required to sequentialise storage array I/O and so boost performance. Because this
software is new and Virsto is a startup, there are still few IT organisations using this approach.
The more common approach for improved random I/O performance is to add solid-state drives (SSDs)
to storage arrays.
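The random-access problem described here is often called the “I/O blender” effect. A small illustrative Python sketch (guest counts and block numbers are invented) shows how streams that are perfectly sequential per guest arrive at the array interleaved:

```python
def vm_stream(vm_id, n_blocks):
    """Sequential block addresses, as one guest's workload would issue them."""
    return [(vm_id, block) for block in range(n_blocks)]

# Three guests, each perfectly sequential on its own.
streams = [vm_stream(vm, 6) for vm in range(3)]

# The hypervisor services guests in turn, so the array sees the streams
# interleaved: sequential per VM, effectively random overall.
interleaved = [req for batch in zip(*streams) for req in batch]

print(interleaved[:4])  # [(0, 0), (1, 0), (2, 0), (0, 1)]
```

Per-VM sequentialisation, as Virsto does it, amounts to undoing this interleaving before the requests hit the spindles.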

The new breed of direct-attached storage goes one step further. Bringing storage closer to the
server alleviates the problems of expensive shared storage. It also means the storage system’s
resources — including the network — are dedicated to a single machine, which then enjoys better
performance. Effectively, a single virtual machine (VM) host can now ensure that the storage runs
at full utilisation, so why share that system and add to its workload?

Use cases for the new generation of DAS include highly dense installations such as private
clouds and virtual desktop infrastructures, as well as those requiring access to large volumes of
storage, such as images and video. It is used mostly by companies not yet ready to buy into the
locked-in costs entailed in a traditional SAN.

Vendors and products

Buyers of DAS systems today are almost spoilt for choice. Taking a leaf out of Google’s book is
Nutanix. The company’s Complete Cluster appliance is a clustered DAS system that consists of a scale-out
compute and storage unit. This is integrated in a 2U block that contains four x86 server nodes,
each with dual Intel Xeon 5640 processors, 320 GB of PCIe flash, 300 GB of SATA SSDs, 5 TB of SATA
drives and from 48 GB to 192 GB of RAM. Each server includes a VMware hypervisor with a controller
from Nutanix running in a VM. To scale, you can add servers numbering in the hundreds, according to
the company.

Another example — from VM6
— combines direct-attached storage with software that brings disparate storage into a cluster. VM6
software installs on servers with Microsoft Hyper-V and turns the servers’ disk drives into a
logical pool of networked storage, mirroring it across the servers. VM6 says its software creates a
virtual SAN that uses continuous replication to provide resilience.

Additionally, the company claims that by creating a virtual SAN, the software allows smaller
businesses to repurpose existing servers and storage to help eliminate performance bottlenecks
created by the random I/O load from virtualisation. The product runs on Windows and needs only a
network connection between servers.

Storage vendor StoneFly has developed its Storage Concentrator Virtual Machine
(SCVM) software, which integrates VM server hosts and storage in a single 2U, 3U or 4U box.
SCVM is designed not only to simplify the integration of storage but also to boost storage
performance. A single chassis delivers up to 108 TB arranged in multiple RAID configurations with
SATA, SAS or SSD, tiered as required and expandable to 324 TB. The integrated servers access
storage using from four to 11 1 Gbps iSCSI links.

Oracle’s Database Machine is a more extreme example of clustered DAS in a box, and it’s an
application-specific instance of the company’s Exalogic-branded systems, which integrate servers and
storage. Consisting of 14 Linux or Solaris servers with 128 CPU cores, the 42U system is attached
directly over InfiniBand to 5.3 TB of flash and up to 150 TB of disk storage. The system uses a
flash cache for faster access to data and speeds up file writes using a battery-backed DRAM cache in
the disk controller. Dedicated to database serving, the aim of the design is to boost performance
of both SQL queries and storage I/O. Oracle claims its parallel storage access architecture can
deliver up to 75 Gbps of I/O bandwidth and up to 75,000 database IOPS. The machine can be expanded
by adding InfiniBand links and switches.

Fusion-io’s ioTurbine goes further in integrating storage into the server. It uses
PCIe-connected flash memory to deliver server-side caching for virtual machine hosts via a software
layer that plugs into VMware’s ESX hypervisor and presents its flash-based ioDrives as an
available storage pool. According to the company, front-ending a standard SAN or NAS in this
way accelerates performance of virtualised workloads and relieves I/O bottlenecks, which then enables more
VMs per host, all without configuration changes for guest machines.

Virtual storage appliances to replace controller hardware?

Another method of bringing storage closer to the VM host is the virtual storage appliance (VSA),
an example of which is Hewlett-Packard’s LeftHand P4000 Virtual SAN Appliance Software. Run atop a
hypervisor, a VSA can provide features more commonly found in controller hardware, including
snapshots and replication, which can replace traditional backups. This is a big advantage when data
volumes are so huge that backing up in the traditional manner would take days.

Today, software-based controllers cannot deliver the performance of dedicated hardware, but
Moore’s Law guarantees this will change as host processing power increases. As such appliances
become able to handle larger volumes of data, host servers will share their storage among
themselves.
With improved performance levels and no additional, costly hardware to buy, VSAs coupled to
high-performance DAS could start to mark the decline of the traditional storage array. Instead of
sharing storage between physical servers, the DAS will be shared by VMs running on networked hosts.
This brings advantages of easier management and resilience. VMware’s vSphere Storage Appliance does
this (as does HP’s P4000), allowing the benefits of server virtualisation to be used across two or
three servers.

This will not be an overnight shift, but the growth of server virtualisation and its storage
demands — even in SMB environments — seem likely to provide a growing market for the new breed of
direct-attached storage.

Manek Dubash is a business and technology journalist with more than 25 years of experience.

This was first published in November 2011

Hybrid cloud model appealing but still has weak spots

Sunday, September 18th, 2011

What you’ll learn in this tip: In his latest Storage magazine column,
Jeff Byrne, senior analyst and consultant at Taneja Group, shares his thoughts on why a hybrid cloud model might be the right solution for those skeptical about turning
to the cloud. He also explains why, despite their strengths, hybrid clouds still need a significant
amount of development — in areas such as performance for critical applications.

The first half of 2011 won’t be remembered as the best of times for the cloud. Despite
optimistic predictions, it’s been a stormy few months for cloud storage services. An Amazon Web Services (AWS) networking glitch in April caused a multi-day
interruption in service for some news-sharing and social networking sites. Earlier that month, Iron
Mountain Digital announced it would be exiting the commodity-oriented,
public cloud storage business over the next couple of years (although the company will continue to
provide enterprise-class cloud storage services to business customers through an agreement with
Autonomy). Finally, Cirtas Systems withdrew its cloud storage offering in April and laid off much
of its engineering staff.

That was the big news, but we’ve also noted that some small vendors are struggling to gain
traction for their cloud storage and compute offerings.


These developments might not be surprising in what’s still a fledgling market, but they’ve shaken
data storage managers’ confidence in the public cloud. To hedge their bets, some users are now
considering alternative strategies, including hybrid clouds, which enable storage and associated
apps to be deployed across both public and private cloud infrastructures. In fact, true hybrid
cloud storage will span public and private clouds and be optimized for a user’s specific
applications and service-level requirements.

Granted, not many companies are running hybrid clouds today. But while the technology that will
power hybrid clouds is still developing, the potential benefits are already coming into focus.
Hybrid clouds provide the advantages users already expect from public cloud storage deployments,
like pay-as-you-go flexibility and self-service. They also promise to provide the enterprise-level
capabilities typically found only in a private cloud, such as secure multi-tenancy and the ability to deliver quality-of-service levels for
availability and performance.

Major storage, systems and virtualization vendors are all working on hybrid cloud strategies and
roadmaps they hope will give them a leg up in what’s expected to be a fast-growing market. Dell,
Hewlett-Packard and IBM have hybrid cloud plans that encompass servers and storage. EMC, Hitachi
Data Systems and NetApp have hybrid storage stories and even some concrete offerings.

Before hybrid clouds can enter the mainstream, some fundamental technical issues must be
resolved. Security of data in transit and at rest is a paramount concern of users, particularly in
light of recent data breaches. Storage vendors and cloud security startups are developing new encryption, firewall, identity management and related technologies.
Performance of critical applications is another key issue, and several vendors now offer innovative
on-premises caching products that reduce data access latency and speed up data retrieval.

Business issues are another concern for storage managers considering cloud deployments. Some industry regulations dictate how and where critical
data can be stored, which might, for example, prevent users from using public clouds that have data
centers spanning multiple geographies. The prospect of getting locked into a particular provider’s
public cloud is another worry. It’s easy to upload data into most public clouds, but moving that
data months or years later to a different provider can be difficult and costly.

While 2011 might not be the “knee-of-the-curve” year when hybrid cloud storage takes off, a number
of interesting applications are catching on. Hybrid clouds might not yet have the capabilities to
support primary storage for critical applications, but several vendors offer cloud-based disaster recovery, backup and gateway solutions. TwinStrata is building a strong cloud storage gateway business that enables
on-demand expansion of storage capacity as well as data protection capabilities, connecting into
several different cloud providers. Another startup, StorSimple, helps users manage large sets of distributed, unstructured data
by surrounding it with a full complement of data lifecycle services. Many of these solutions aren’t
just ready for prime time, they’re already satisfying growing numbers of early adopters.

At least one provider — Nirvanix — is delivering on the vision of hybrid cloud storage for the
enterprise. Nirvanix hNode provides private cloud storage services that front-end the company’s
Storage Delivery Network (SDN) public cloud storage offering. The company’s Cloud Sideloader
technology lets users migrate files directly from providers such as AWS and Iron Mountain into
Nirvanix data centers.

Beyond storage, hybrid clouds need a networking infrastructure that enables high availability
and performance for a diverse set of workloads moving between public and private clouds, along with
the monitoring and management tools to ensure it all works. As most IT managers are well aware,
bigger pipes alone aren’t enough to solve this problem. Rather, it takes optimizing data services
across the scattered locations where apps might move, regardless of where the data is coming from,
while providing visibility into the data passing through the network at an application, user and
server level. Riverbed Technology, as one example, provides these enabling capabilities for
the hybrid cloud today through its Steelhead and Cascade product families, and with its Akamai
partnership looks likely to deliver new ways to optimize all manner of data and content no matter
where the endpoints might reside.

While offerings such as these suggest that mainstream adoption of hybrid clouds might be fast
approaching, we’re not there yet. Clouds are still in their “wild west” growth phase, and the
hybrid model is still evolving. But we see hybrids as a stabilizing force in the cloud market,
bringing together the best of private and public clouds to address the demands of midsized and
enterprise users. As we assess some of the early hybrid cloud storage solutions and look forward to
the innovations that lie ahead, we’re confident that the dark storms of this past spring are
behind us.

BIO: Jeff Byrne is a senior analyst
and consultant at Taneja Group.

This article originally appeared in Storage magazine.

This was first published in September 2011


Storage performance management: 10 storage system performance tips

Thursday, August 18th, 2011

What you’ll learn in this tip: When it comes to storage performance management, there’s no secret recipe for instant success.
But there are smart ways to approach the problem. Read these 10 tips to find out how things like
continuous data protection (CDP), active multipathing and sometimes simply questioning the number
of queries users submit can make a difference in your shop.

Given a choice between fine-tuning data storage for capacity or for performance, most data
storage managers would choose the latter. Tips and tricks to boost storage speed are common, but
they’re not all equally effective in every environment. A variety of products and technologies do
have good potential for most shops, from optimizing server-side access to improving the
storage-area network (SAN). We’ll look at some effective, yet often overlooked, methods to speed up
storage system performance.

Networked storage is incredibly complex, requiring a diverse set of hardware and software
elements to interoperate smoothly. Not surprisingly, one of the most common causes of slow storage performance is a misconfiguration or outright failure of one or more
of these components. Therefore, the first place to look for improved performance is in the existing
storage I/O stack.

Check server and storage array logs for signs of physical faults; I/O retries, path failover and
timeouts along a functional link are sure signs. Try to isolate the failing element, but start with
cable-related components. Flaky transceivers and cables are common, and can severely impact
performance while still letting things run well enough to go unnoticed. These components often fail
after being physically disturbed, so be especially vigilant after installation, migration or
removal of data center equipment.
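As a rough illustration, this kind of log sweep can be automated. The patterns below are assumptions, not any vendor's actual log format, and would need adjusting to your array's and HBA driver's output:

```python
import re

# Substrings that typically flag a failing link rather than a busy one.
# Hypothetical patterns -- tune to your own logs.
FAULT_PATTERN = re.compile(r"(I/O retry|path (failover|down)|timeout)",
                           re.IGNORECASE)

def suspect_lines(log_lines):
    """Return log lines suggesting physical-layer trouble on the I/O path."""
    return [line for line in log_lines if FAULT_PATTERN.search(line)]

sample = [
    "12:01:44 hba0: I/O retry on target 3, lun 0",
    "12:01:45 mpio: path failover initiated for lun 0",
    "12:02:10 array: volume snapshot completed",
]
for line in suspect_lines(sample):
    print(line)   # prints the first two lines only
```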

1. Freshen firmware and drivers

Manufacturers are constantly fixing bugs, and new capabilities can sneak in with software
updates. It’s wise to stay on top of driver and firmware
updates for all components in the storage network, with scheduled and proactive testing, tuning and
upgrading. Microsoft Corp. and VMware Inc. have been actively adding new performance features to
the storage stacks in Windows and vSphere, often without much fanfare. SMB 2.0 and 2.1, for
example, dramatically accelerated Windows file sharing, especially over slower networks. Updates to
NTFS and VMFS have also
routinely improved performance and scalability. Stay tuned to storage blogs and publications to
keep on top of these developments.

But you should note that not all updates are worth the time and effort, and some can be
downright perilous. Make sure your configuration is supported by all vendors involved and has been
thoroughly tested, and never use beta code in production. As a systems administrator, I tend to be
conservative about what I roll out, waiting for reports from others before taking the plunge.

2. Question the queries

Most of these tips focus on locating and eliminating bottlenecks in the storage stack, but one should also consider reducing the
I/O load before it’s created. Working with database administrators (DBAs) to tune their queries for
efficiency and performance can pay off big time, since a reduced I/O workload benefits everybody and
every application.
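To illustrate the idea with a toy example, the sketch below (using Python's built-in sqlite3 and an invented `orders` table) replaces a chatty one-query-per-row loop with a single batched query, trading many small I/O-generating round trips for one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, 10.0 * i) for i in range(1, 101)])

wanted = [3, 17, 42]

# Chatty version: one statement (and one index lookup) per id.
rows_chatty = [conn.execute("SELECT total FROM orders WHERE id = ?",
                            (i,)).fetchone() for i in wanted]

# Batched version: a single query does the same work in one pass.
placeholders = ",".join("?" * len(wanted))
rows_batched = conn.execute(
    f"SELECT id, total FROM orders WHERE id IN ({placeholders}) ORDER BY id",
    wanted).fetchall()

print(rows_batched)   # [(3, 30.0), (17, 170.0), (42, 420.0)]
```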

3. Break down backup bottlenecks

Traditional backup applications are extremely taxing on storage resources, dumping massive
volumes of data according to a daily and weekly schedule. Improving the performance of backups so they can fit within their assigned “window” has
become a priority for data protection pros, but the techniques employed can help improve overall data storage performance as well.

One effective method to reduce the backup crunch is to spread it out using continuous data protection (CDP) technology. Built into many products intended
for virtual servers, CDP continuously copies data from a server rather than collecting it in a
single, concentrated operation. This is especially beneficial in virtual machine (VM) environments since a nightly backup “kick off” across
multiple guests can crush storage responsiveness, from the bus to the host bus adapter (HBA) or
network interface card (NIC) to the array. Microsoft and VMware also have technologies to offload
backup-related snapshots to storage arrays that are better able to handle data movement.
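A back-of-the-envelope calculation (the 2 TB change rate and 4-hour window are invented figures) shows why spreading copies continuously flattens the peak load a nightly window imposes:

```python
# Illustrative figures only: 2 TB of nightly changes, either dumped in a
# 4-hour backup window or trickled continuously over 24 hours.
changed_gb = 2048
burst_hours = 4
cdp_hours = 24

burst_rate = changed_gb / burst_hours   # GB/h the array must absorb at peak
cdp_rate = changed_gb / cdp_hours       # GB/h as a steady trickle

print(f"windowed backup: {burst_rate:.0f} GB/h peak")
print(f"continuous copy: {cdp_rate:.0f} GB/h steady")
```

The same data moves either way; CDP simply removes the six-fold peak that would otherwise collide with production I/O.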

4. Offload virtual machine I/O with VAAI

The release of VMware vSphere 4.1 included many new features, but one of the most critical was
the vStorage API for Array Integration (VAAI). This new interface allows VMware ESX to coordinate certain I/O tasks with supported Fibre Channel (FC)
or iSCSI storage systems, integrating the hypervisor and array so they work more closely and effectively.

VAAI includes three “primitives,” or integration points:

  1. Unused storage can be released for thin provisioning using the efficient “write_same” SCSI command, increasing
    capacity utilization and reducing I/O overhead.
  2. Snapshot and mirroring operations can be offloaded to the storage array,
    greatly reducing the network, hypervisor and operating system I/O workload.
  3. Access locking can take place at a level more granular than the entire LUN, reducing contention
    and wait time for virtual machines.

Although none of these screams “storage performance tuning,” the net result can be a dramatic
reduction in the I/O workload of the hypervisor as well as less traffic over the SAN. Analysts
expect further improvements (including NFS support) in future versions of VMware vSphere, and one
imagines that Microsoft is working on similar integration features for Hyper-V.

5. Balance virtual machine I/O with SIOC

While not a performance acceleration technology per se, VMware vSphere Storage I/O Control (SIOC) is a “quality of service” mechanism
that makes I/O performance more predictable. SIOC monitors the response latency of VMFS
datastores and acts to throttle back the I/O of lower-priority machines to maintain the performance
of others. In practice, SIOC reduces the impact of “noisy neighbors” on production virtual
machines, improving their responsiveness. This helps keep application developers and managers
happy, bringing the appearance of improved performance even though total throughput remains the same.
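SIOC itself is a closed implementation, but the general shape of latency-triggered, share-weighted throttling can be sketched as follows (the threshold, queue depths and share values are all hypothetical):

```python
LATENCY_THRESHOLD_MS = 30   # hypothetical congestion trigger
FULL_QUEUE_DEPTH = 32       # hypothetical per-VM I/O slot count

def throttle_plan(datastore_latency_ms, vm_shares):
    """Scale each VM's I/O slots by its shares once datastore
    latency crosses the congestion threshold."""
    if datastore_latency_ms <= LATENCY_THRESHOLD_MS:
        # Uncongested: everyone gets full queue depth.
        return {vm: FULL_QUEUE_DEPTH for vm in vm_shares}
    total = sum(vm_shares.values())
    return {vm: max(1, FULL_QUEUE_DEPTH * share // total)
            for vm, share in vm_shares.items()}

shares = {"prod-db": 2000, "test-vm": 500}
print(throttle_plan(12, shares))   # no congestion: both get 32
print(throttle_plan(45, shares))   # congested: slots follow shares
```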

What doesn’t work

Besides looking at how to ratchet up storage network performance, we also need to consider some
not-so-effective approaches to improving performance. Testing can reveal interesting results:
Enabling jumbo frames on Ethernet networks, for example, typically hasn’t yielded much of a
performance benefit.

One common question relates to the merits of various storage protocols and the common belief
that Fibre Channel is inherently faster than iSCSI, NFS or SMB. This isn’t the case generally,
although implementations and configurations vary. Similar architectures produce similar levels of
performance regardless of protocol.

One should also be cautious about employing “bare-metal” technologies in virtual environments,
including paravirtualized drivers, direct I/O like VMDirectPath and raw device mapping (RDM). None
delivers much performance improvement and all interfere with desirable features like VMotion.

6. Streamline the server side

Today’s multicore servers have CPU power to spare, but network interface cards (NICs) and HBAs have traditionally been locked to a
single processor core. Receive-side scaling (RSS) allows these interface cards to distribute
processing across multiple cores, accelerating performance.

Hypervisors face another chore when it comes to sorting I/O and directing it to
the correct virtual machine guest, and this is where Intel Corp.’s virtual machine device queues
(VMDq) technology steps in. VMDq allows the Ethernet adapter to communicate with hypervisors like
Microsoft Hyper-V and VMware ESX, grouping packets according to the guest virtual machine they’re
destined for.

Technologies like RSS and VMDq help accelerate I/O traffic in demanding server virtualization applications, delivering amazing levels of performance.
By leveraging these technologies, Microsoft and VMware have demonstrated the viability of
placing demanding production workloads on virtual machines.

7. Get active multipathing

Setting up multiple paths between servers and storage systems is a traditional approach for high
availability, but advanced active implementations can improve storage performance as well.

Basic multipathing software merely provides for failover, bringing up an alternative
path in the event of a loss of connectivity. So-called “dual-active” configurations assign
different workloads to each link, improving utilization but restricting each connection to a single
path. Some storage arrays support trunking multiple connections together or a full active-active
configuration, where links are aggregated and their full potential can be realized.

Modern multipathing frameworks like Microsoft MPIO, Symantec Dynamic Multi Path (DMP) and VMware
PSA use storage array-specific plug-ins to enable this sort of active multipathing. Ask your
storage vendor if a plug-in is available, but don’t be surprised if it costs extra or requires a
special enterprise license.
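The difference between plain failover and active use of every link can be sketched with a minimal round-robin path selector (path names are invented; real MPIO/DMP/PSA plug-ins are far more sophisticated):

```python
import itertools

class ActiveActivePaths:
    """Round-robin I/O across every healthy path; failover is
    just removing a path from the rotation."""
    def __init__(self, paths):
        self.paths = list(paths)
        self._cycle = itertools.cycle(self.paths)

    def next_path(self):
        return next(self._cycle)

    def fail(self, path):
        self.paths.remove(path)
        self._cycle = itertools.cycle(self.paths)

mp = ActiveActivePaths(["hba0:portA", "hba1:portB"])
print([mp.next_path() for _ in range(4)])  # alternates across both links
mp.fail("hba0:portA")
print([mp.next_path() for _ in range(2)])  # carries on over the survivor
```

A failover-only scheme would send everything down one path and keep the other idle; the active-active rotation above is where the performance gain comes from.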

8. Deploy 8 Gbps Fibre Channel

Fibre Channel throughput has continually doubled since the first 1 Gbps FC products appeared,
yet backward compatibility and interoperability have been maintained along the way. Upgrading to
8 Gbps FC is a simple way to accelerate storage I/O, and it can be remarkably
affordable: today, 8 Gbps FC switches and HBAs are widely available and priced approximately the
same as common 4 Gbps parts. As SANs are expanded and new servers and storage arrays are purchased,
buying 8 Gbps FC gear instead of 4 Gbps is a no-brainer, and 16 Gbps FC equipment is on the horizon.

Remember that throughput (usually expressed as megabytes per second) isn’t the only metric of
data storage performance; latency is just as important. Often expressed in terms of I/O operations
per second (IOPS) or response time (measured in milliseconds or nanoseconds), latency is the speed
at which individual I/O requests are processed and has become critical in virtualized server
environments. Stacking multiple virtual servers together behind a single I/O interface requires
quick processing of packets, not just the ability to stream large amounts of sequential data.

Each doubling of Fibre Channel throughput also halves the amount of time it takes to process an
I/O operation. Therefore, 8 Gbps FC isn’t just twice as quick in terms of megabytes per second, it
can also handle twice as many I/O requests as 4 Gbps, which is a real boon for server virtualization.
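That throughput/latency relationship is easy to check with a little arithmetic; the sketch below computes wire time for a single 8 KB I/O, assuming 8b/10b encoding (80% efficiency) on both link speeds:

```python
def wire_time_us(io_bytes, line_rate_gbps, encoding_efficiency=0.8):
    """Microseconds to move one I/O across the link (8b/10b assumed)."""
    payload_bits_per_s = line_rate_gbps * 1e9 * encoding_efficiency
    return io_bytes * 8 / payload_bits_per_s * 1e6

# An 8 KB I/O on 4 Gbps vs 8 Gbps Fibre Channel:
for rate in (4, 8):
    print(f"{rate} Gbps FC: {wire_time_us(8192, rate):.2f} us on the wire")
```

Doubling the line rate halves the serialization time of each request, which is exactly why the faster link can also service twice as many of them per second.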

9. Employ 10 Gbps Ethernet

Fibre Channel isn’t alone in cranking up the speed. Ethernet performance has recently jumped by
a factor of 10, with 10 Gbps Ethernet (10 GbE) becoming increasingly common and affordable, though 10
GbE storage array availability lags somewhat behind NICs and switches. Environments using iSCSI or
NAS protocols like SMB and NFS can experience massive performance improvements by moving to 10 Gbps
Ethernet, provided such a network can be deployed.

An alternative to end-to-end 10 Gb Ethernet is trunking or bonding 1 Gbps Ethernet links using
the link aggregation control protocol (LACP). In this way, one can create multigigabit Ethernet
connections to a host, between switches or to arrays that haven’t yet been upgraded to 10 GbE.
This helps address the “Goldilocks problem” where Gigabit Ethernet is too slow but 10 Gbps Ethernet
isn’t yet attainable.

Fibre Channel over Ethernet (FCoE) brings together the Fibre Channel and
Ethernet worlds and promises improved performance and greater flexibility. Although one would assume
that the 10 Gbps Ethernet links used by FCoE would be 20% faster than 8 Gbps FC, the difference in
throughput is an impressive 50%, thanks to a more efficient encoding method. FCoE also promises
reduced I/O latency, though this is mitigated when a bridge is used to a traditional Fibre Channel
SAN or storage array. In the long term, FCoE will improve performance, and some environments are
ready for it today.
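The roughly 50% figure follows from the encoding overheads. A quick calculation using the commonly published line rates (8.5 Gbaud for 8 Gbps FC with 8b/10b, 10.3125 Gbaud for 10 GbE with 64b/66b) lands in the same ballpark:

```python
# Nominal line rates and encoding efficiencies (published figures):
fc8_payload_gbps = 8.5 * (8 / 10)        # ~6.8 Gbps of actual data
ge10_payload_gbps = 10.3125 * (64 / 66)  # ~10.0 Gbps of actual data

gain = ge10_payload_gbps / fc8_payload_gbps - 1
print(f"10 GbE carries ~{gain:.0%} more payload than 8 Gbps FC")
```

The 8b/10b scheme spends 20% of the line rate on encoding overhead, while 64b/66b spends about 3%, which is where the extra payload headroom comes from.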

10. Add cache

Although the quickest I/O request is one that’s never issued, as a means of speeding things up,
caching is a
close second. Caches are appearing throughout the I/O chain, promising improved responsiveness by
storing frequently requested data for later use. This is hardly a new technique, but
interest has intensified with the arrival of affordable NAND flash memory.

There are radically 3 forms of cache offering today:

  1. Host-side caches place NVRAM or NAND peep in a server, mostly on a high-performance PCI Express card. These keep I/O off a network nonetheless are usually useful on a
    server-by-server basis.
  2. Caching appliances lay in a network, shortening a bucket on a storage array. These serve
    multiple hosts nonetheless deliver concerns about accessibility and information coherence in a eventuality of an
  3. Storage array-based caches and tiered storage solutions are also common, including NetApp’s
    Flash Cache cards (formerly called Performance Acceleration Module or PAM), EMC’s Fully Automated
    Storage Tiering (FAST) and Hitachi Data Systems’ Dynamic Tiering (DT).
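All three share the same core idea: serve repeat reads from fast media and fall through to the slower tier only on a miss. A minimal host-side read cache sketch (the dict-backed store and the sizes are stand-ins, not any vendor's design):

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache in front of a slow backing store."""
    def __init__(self, backing: dict, capacity: int = 128):
        self.backing = backing          # stand-in for a disk array
        self.capacity = capacity
        self._lru = OrderedDict()       # block id -> data, oldest first
        self.hits = self.misses = 0

    def read(self, block: int) -> bytes:
        if block in self._lru:
            self.hits += 1
            self._lru.move_to_end(block)        # mark most recently used
            return self._lru[block]
        self.misses += 1
        data = self.backing[block]              # slow path: go to the array
        self._lru[block] = data
        if len(self._lru) > self.capacity:
            self._lru.popitem(last=False)       # evict least recently used
        return data

store = {i: bytes([i % 256]) * 8 for i in range(1024)}
cache = ReadCache(store, capacity=4)
for block in [1, 2, 3, 1, 1, 2]:                # hot blocks get re-read
    cache.read(block)
print(cache.hits, cache.misses)                 # -> 3 3
```

The interesting engineering is in what the sketch omits: write handling and cache coherence, which is exactly where the appliance and array-based approaches differ.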

Still no silver bullet

There are many options for improving storage performance, but there's still no single silver
bullet. Although storage vendors are quick to claim that their latest innovations (from tiered storage to FCoE) will solve data storage performance issues, it would be
foolish to focus on just one area. The most effective performance improvement strategy
starts with an analysis of the bottlenecks found in existing systems and ends with a plan to
address them.

BIO: Stephen Foskett is an independent consultant and author specializing in enterprise
storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT
thought leaders, and organizes their Tech Field Day events. He can be found on Twitter at @SFoskett.

This article was originally published in Storage magazine.

This was first published in August 2011

Remote office, branch office (ROBO) storage presents stiff challenges

Saturday, August 13th, 2011

Enterprise IT shops face significant challenges managing their remote office, branch office (ROBO) storage, whether they store and back up
data locally, at a central site or through third-party service providers.

Coping with explosive data growth, improving backup and recovery processes, containing storage
costs and securing data are just a few of the concerns that storage managers must address in their
remote offices.

In this podcast on ROBO storage, Senior Writer Carol Sliwa talks with Bob Laliberte, a senior
analyst at Enterprise Strategy Group (ESG) in Milford, Mass., who outlines his firm's latest
research on managing data in remote offices, describes the pros and cons of local vs. centralized
storage and backup options, and also offers tips on how to cope with major ROBO storage challenges.
Read his answers below or download the MP3.

From an enterprise storage perspective, what are the main challenges
associated with managing data in remote offices?

Laliberte: A lot of the challenges that we see are similar to those encountered in the
centralized data center. For instance, the biggest challenge at the remote and branch office tends
to be managing data growth. The other areas remote offices and branch offices report
as being challenging are improving backup and recovery, the cost of deploying their storage systems
and media, and the ability to secure confidential data. The last thing you want to have is a data
breach that gets publicized. Other areas of concern are around having enough space to house all the
storage and IT gear, the rapid growth of unstructured data and, last but not least, lack of
qualified IT staff with the appropriate storage skills.

These challenges vary by size at these ROBOs. Typically the more storage you have, the more of
these challenges you're going to have; obviously, the less [storage you have], the less of a
challenge.

With remote offices and branch offices, enterprise IT shops have the
option to deploy and manage their applications locally, from a central IT site or through a
third-party provider with a Software-as-a-Service (SaaS) model. How do storage problems differ
based on the approach they take?

Laliberte: If we go in reverse order and look at the Software-as-a-Service and third-party vendors, basically you're outsourcing all
your storage management to those providers. As such, you're giving up some control. So, what would
be important for organizations to take into consideration is that they need to ensure data is going
to be able to be retrieved if they need it or if they end the contract/change vendors; [they] also
need to make sure it won't get lost, and that the vendor is appropriately safeguarding it so no data is
lost in the event of an outage.

From a centralized site, it's really not just about the storage but the applications. In that
case, the biggest concern is about application performance back to those remote and branch offices
and ensuring that the file transfers get done in an efficient manner. And, if it's being done at
the remote site, obviously it's around managing that data growth, backup and recovery, etc.

Explosive data growth is a huge issue for enterprise storage managers. Are
there any tips you'd offer specifically for coping with the growth in remote and branch offices?

Laliberte: Yeah, I think the biggest difference between the explosive data growth issues in the data center and the remote/branch office
is really around the level and skill sets of the IT users. The technologies they'll use are pretty
much going to be the same: deduplication, thin provisioning, automated tiering, etc. The big question is going to be whether or not the
organization has the skill sets at those sites to deploy and manage those effectively and if
they've got the budget to deploy those technologies at those sites.

Protecting data in branch offices remains difficult. What are the pros and
cons of backing up data locally vs. backing up to a central location?

Laliberte: From a local perspective, when you're backing up data, especially if you're
using disk-based backup, you've got the ability to quickly restore the data. That's something
that's of huge benefit. If you're going to a centralized location, you can obviously reduce the
cost at that remote site by removing that infrastructure and leveraging the existing equipment you
have at the data center. But the challenges become a little different. It's everything that's over
the wire, or your WAN costs. Do you have appropriate WAN bandwidth,
and so forth? On the plus side, by doing that, you're able to ensure compliance,
consistency and really get a lot more security taken care of than you would have at a remote
office.

What key pieces of advice would you offer to enterprise storage managers
on data protection at ROBOs?

Laliberte: From what we see, the trends clearly are around centralizing the data. Back in
2007, when we [first] did this report, only 7% were backing up to a central location. Today, 26%
report that they're doing a centralized backup. The other area is around disk backup. We're seeing a decline in people doing tape only. We're seeing an
increase in people that are doing disk and tape together, and an increase in people that are just
doing disk-based backup.

Which of the latest technology developments might help enterprise storage
managers with their remote offices and branch offices?

Laliberte: Again, this is similar to what's going on in the enterprise data centers.
Technologies like virtualization, thin provisioning, deduplication, and the ability to leverage
management tools that can offer centralized views and visibility across not
only the data center but also the remote locations are going to be big. From a networking
perspective, there is WAN optimization, and we're seeing more disk backup to the cloud and obviously SaaS is playing a bigger role here. Respondents to our
survey indeed cited that, over the next three years, they're going to be four times more likely
to look at SaaS for their remote office and branch office application delivery.

This was first published in August 2011


Cloud-based archiving for e-discovery/compliance: Five "need to knows"

Friday, July 29th, 2011

The growth of cloud-based archiving holds great promise for data storage managers: more
options to outsource a firm's underlying infrastructure and the exciting potential to create a
seamless user experience with virtually unlimited capacity. In addition, storage managers can take
advantage of volume-based subscription pricing to pay for what they use without a large up-front
investment. Yet there are issues to consider before taking the plunge into cloud-based archiving or
when considering the fine print on a service-level agreement (SLA). It's easy to find cloud storage users who
report anecdotally that every dollar they save with a Software as a Service (SaaS) solution could easily be doubled two- or
three-fold in future retrieval costs, the very ones they meant to avoid in the first place.
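One way to pressure-test that anecdote before signing is to compare the accumulated subscription savings against what a single large retrieval would cost under the SLA's egress and processing rates. All the figures below are made-up placeholders, not any provider's actual pricing:

```python
def breakeven_retrieval_gb(monthly_savings: float, months: int,
                           retrieval_per_gb: float) -> float:
    """GB of retrieval at which accumulated savings are wiped out."""
    return monthly_savings * months / retrieval_per_gb

# Placeholder figures: $400/month saved vs. on-premises over a 3-year term,
# against an all-in retrieval/processing charge of $2 per GB.
gb = breakeven_retrieval_gb(400, 36, 2.0)
print(f"{gb:,.0f} GB")   # -> 7,200 GB: one sizable e-discovery pull
```

Plug in your own SLA's numbers; if one plausible litigation event exceeds the break-even volume, the savings case needs a second look.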

Retrieval requests, such as those
associated with e-discovery and compliance, aren't a problem for most companies and their storage managers . .
. until they are. While many IT administrators are well aware that data volumes are rising as data
types and locations become more varied, few are prepared for the impact of having to collect and
preserve their data as evidence for litigation. Those situations typically strike without
warning.

More on cloud-based archiving

Company digitizes paper records with cloud-based archiving

Cloud archiving: Choosing the best features from the best cloud service

The sticker shock, work interruptions and high-profile urgency associated with an ad hoc
litigation response usually come on too suddenly for new litigants to do more than roll with the
punches. But companies that have been burned once often protect themselves from future events by
deploying an archive or better utilizing an existing one. Under the current heightened regulatory
environment, archives are gaining even more traction for their proactive advantages in storing and
retrieving data quickly. This doesn't just apply to email, which is the most common source of electronic evidence, but to newer formats such as instant messaging and social
media, which are also legally discoverable.

Five cloud-based archiving questions to answer

For data storage administrators considering cloud-based archiving, there are a handful of key
questions to answer when drafting your strategy. Knowing the responses in advance can help you
avoid some of the headaches that seem to almost always accompany e-discovery and compliance.

1. What are the risks in outsourcing evidence handling and compliance?

Cloud archiving adoption isn't purely a cost-driven decision. There are other
tradeoffs, both positive and negative. While some companies refuse to push evidence outside
their firewall because of a perceived lack of control, others are happy to pay a premium to
outsource the risk by handing it to third parties with more infrastructure and updated security.
This depends on your organization's risk tolerance, knowing and trusting your provider, and
negotiating an SLA that suits your requirements. Private clouds require their own due diligence; with a multi-tenant public cloud, your data lives on shared servers with that of
other clients, which can be an unacceptable tradeoff for some firms.

2. Do you have custody and control of the data if it's hosted by a third party?

Data hosted by a third party may not be in your immediate custody or control, but you're still
responsible for retrieving it for court or regulators within the appropriate timeframe and in an
acceptable format. If needed, can your provider supply adequate availability, capacity, speed and
throughput for mass retrieval? Just as importantly, can your provider export large amounts of data
under tight timeframes, particularly over the network? Cloud providers rely on facilitating data
ingest to enable quick and easy import and usage, boost user data volumes and foster
"stickiness" among clients. The costs and logistics of searching and exporting data en masse can be
less straightforward. For some users, a hybrid
approach can offer an acceptable compromise. Either way, negotiating requirements up
front and doing a test run is advisable.
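Network math alone shows why the test run matters: pulling a large archive back over a WAN link takes far longer than most litigation timetables allow. A quick sanity check (the link speed, archive size and efficiency factor are examples, not measurements):

```python
def transfer_days(size_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Days to move size_tb over a link, assuming a protocol efficiency factor."""
    bits = size_tb * 1e12 * 8
    usable_bps = link_mbps * 1e6 * efficiency
    return bits / usable_bps / 86400

# A 10 TB archive over a 100 Mbps WAN link at 70% efficiency:
print(f"{transfer_days(10, 100):.1f} days")   # -> ~13 days
```

If the result exceeds the court's deadline, a bulk-export option (or shipped media) belongs in the SLA.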

3. Can the archive adequately support compliance and litigation response?

Most archives, both on- and off-premises, were originally designed for storage
management, not litigation response. To ensure compliance and defensible e-discovery, they must
support the requisite high data volumes, a variety of data formats, 100% data accountability,
real-time index updates, granular search, large-scale multi-user querying and data export
requirements. In instituting a retention and deletion policy, there must be assurances that
deleted data is gone without a trace to keep it from being discoverable in future cases or
regulatory events. Conversely, it's important to note that without a subpoena, deleted data isn't
recoverable in the cloud because it's not on a local hard drive. Although digital forensics isn't
required in civil litigation, it can make or break criminal investigations, and remains a common
method of e-discovery collection in the enterprise.

4. Does the archive support and integrate with broader information management or corporate
governance provisions?

Requirements for interoperability and customization should be taken into account. Will a cloud archive compound governance issues by creating an additional data silo,
as well as duplicating data volumes and governance efforts? Furthermore, will a C-level executive
resist storing their data in the cloud, requiring an additional individual search for every
investigation? Companies attempting to minimize data growth through aggressive capacity quotas for
users sometimes argue that unlimited storage only aggravates the hoarding behaviors causing the
problem.

5. What's the worst that could happen?

It's advisable to investigate protections or contingency plans, if any exist, in anticipation of
worst-case scenarios such as a data breach, unexpected downtime, an "act of God" data loss,
provider bankruptcy, or subpoenas from law enforcement or Homeland Security. Multi-national
litigants have additional concerns, as using clouds for cross-border e-discovery runs the risk of
violating international data privacy regulations. These laws vary by country, but they typically govern the retrieval and
processing of employee data more stringently than in the U.S. and may not support America's broad
legal discovery requirements. Where does the data reside, where are the users and what kind of
regulations apply? Users may want to investigate these factors for all jurisdictions in which data
is stored and used, as well as in the jurisdictions where their company is likely to go to
trial.

BIO: Katey Wood is an analyst at Enterprise Strategy Group, Milford, Mass.

This was first published in July 2011


Cloud strategies: Plan carefully before jumping on the cloud bandwagon

Tuesday, June 28th, 2011

What you'll learn in this tip: With more people jumping to the cloud every day, many
are tempted to prematurely sign with a cloud storage provider. Although the cloud is quickly
becoming a significant part of many data storage infrastructures, it's important not to adopt cloud
storage if you aren't ready for it.

In Arun Taneja's latest Storage magazine column, learn about the cloud strategies you should take into consideration before implementing cloud
technology in your IT infrastructure and what questions your peers are asking. According to Taneja,
if you have doubts your storage vendor has all the necessary pieces in place to build your firm a
private cloud, you aren't alone.

Everything is "cloudy" these days. Hardly a day goes by without yet another player jumping on
the cloud bandwagon. Some are legitimately tied to the cloud concept, but others are "cloud washing" or force-fitting their products to the cloud concept because they fear that
if they don't they'll fall out of favor with IT users.

However, the questions I'm asked most by IT users are usually on the order of the following:

Our central IT supports several divisions, each of which also has its own IT. One division
decided to make a deal with Amazon Web Services and transferred some data to S3 storage. Managers in
another division have done deals with Nirvanix or Rackspace or ATT Synaptic, and sent company
data to them. What should we do? We don't want to stifle innovation, but we feel like we're
losing control.

and . . .

Our storage vendor is asking us to create a private cloud using mostly the same products as before but now with additional
federation products. Is the technology ready for building a private cloud?

Here's how I see it. The cloud is happening, whether you like it or not. It's a lot like what we
saw with storage virtualization in 2000. I felt then that the concept had so much merit
it was bound to happen, but it took much longer than seemed logical. That's simply the reality of
IT. Even when a paradigm-shifting technology comes along, it takes time for it to get into daily
use. The cloud is similar. Implemented correctly, it's supposed to improve storage efficiency
while allowing you to scale up or down at will. You can pay as you grow and enjoy
an easy-to-use storage system. So, the question isn't why, but when and how.

Follow the cloud

My first piece of advice is don't fight the cloud. You'll need to develop in-house expertise to
understand what cloud technology is, what's real and what's not, who's in the game and so on.
Next, you'll want to experiment with public cloud offerings using data you can afford to mess around with. You can
test the waters to see how scaling works, how services provide security, if data transfer speeds
are adequate and so on. You'll also want to test out recovering files, full volumes and more. These
tests should help you to develop guidelines you can provide to business groups defining what
data may or may not be sent outside the company, and how it needs to be managed. This will bring
consistency to the enterprise while ensuring that innovation in cloud technology is being
embraced.

More on cloud strategies

Comparing four cloud computing strategies

Private cloud strategy: A four-step plan for success

Preparing the network for a cloud computing implementation

Perhaps the easiest way to get into the game is to use a cloud gateway product as an on-ramp to the cloud. You want to avoid writing
your own cloud interface code even if you're familiar with the Web services APIs used by most cloud services. The gateway vendors have already done the heavy lifting and
provide a standard way of interacting with existing applications (via NFS, CIFS, iSCSI, Fibre
Channel), while masking the idiosyncrasies of each public cloud on the back end. Vendors in
this category include LiveOffice, Mimecast (for email archiving), Nasuni Corp., Nirvanix Inc.,
StorSimple Inc., TwinStrata Inc. and Zetta Inc., among others.
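Conceptually, a gateway just maps a familiar file interface onto an object store's put/get semantics, usually with a local cache so repeat reads avoid the WAN. A toy sketch of that translation layer (the ObjectStore class is a stand-in for any cloud API, and the path-to-key mapping is an invented convention):

```python
class ObjectStore:
    """Stand-in for a public cloud's object API (put/get by key)."""
    def __init__(self):
        self._objects = {}
    def put(self, key: str, data: bytes): self._objects[key] = data
    def get(self, key: str) -> bytes: return self._objects[key]

def _key(path: str) -> str:
    # Flatten a file path into a single object key.
    return path.strip("/").replace("/", "%2F")

class FileGateway:
    """Exposes path-style reads/writes, persists each file to the object
    store, and keeps a local cache so repeat reads stay off the WAN."""
    def __init__(self, store: ObjectStore):
        self.store = store
        self.cache = {}
    def write(self, path: str, data: bytes):
        self.cache[path] = data
        self.store.put(_key(path), data)        # every write lands in the cloud
    def read(self, path: str) -> bytes:
        if path not in self.cache:              # cold read goes to the cloud
            self.cache[path] = self.store.get(_key(path))
        return self.cache[path]

gw = FileGateway(ObjectStore())
gw.write("/projects/q3/report.doc", b"draft")
print(gw.read("/projects/q3/report.doc"))       # -> b'draft'
```

Real gateways layer protocol handling (NFS/CIFS/iSCSI), encryption, deduplication and eviction on top, but the core translation is this simple.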

Building your own cloud

For a private cloud, find out what your primary storage vendor is planning. Vendors are at
different stages of product development and availability. EMC Corp. seems to be ahead right now,
having announced and shipped Vplex, an important federation technology that's essential in building large
clouds. But all major storage vendors have serious plans to deliver private storage cloud products
and services. Not surprisingly, each wants you to build your private cloud almost exclusively with
components from them, but from my perspective, no one in the market has all the pieces yet. You may
consider other alternatives. Nirvanix, for instance, has created something it calls hNode, or a
hybrid node. Essentially, it lets you create a private cloud using the same software Nirvanix uses
for its own Storage Delivery Network (SDN); this would allow your private cloud to interface with a
public cloud based on the Nirvanix architecture.

Long-term considerations

Whatever route you decide to take, keep in mind that it's one of the most strategic decisions
you'll make. Once you sign on with a vendor you're likely to be locked in for a long time. Vendors
are all in learning mode today, just as you are. So take the time to research and experiment before
jumping headlong on the cloud bandwagon.

BIO: Arun Taneja is founder and president at Taneja Group, an analyst and consulting group
focused on storage and storage-centric server technologies. He can be reached at

This article was previously published in Storage magazine.

This was first published in June 2011
