Enterprise storage trends: SSDs, capacity optimization, auto tiering

What you will learn: It might seem as if data storage technologies are a little stodgy
and out of date, but there are plenty of enterprise storage trends emerging at both large
storage vendors and smaller upstarts. Solid-state storage, virtualization-optimized storage,
capacity optimization and automated storage tiering (AST) have all become more prominent in
IT shops recently because of the success users have garnered from their use.

Solid-state storage, especially flash storage, has gained traction due to its performance and
mechanical characteristics. Flash storage might be expensive, but only on a capacity basis; many
applications can run entirely in flash. And as more people jump on the server virtualization
platform, vendors are itching for an opportunity to develop new technologies such as “VM-aware”
storage systems. In addition, capacity optimization has hit the ground running, especially now
that data storage is growing at such a fast rate among users. Many users are also jumping on the
auto-tiering bandwagon because of the technology’s ability to automatically tier storage and
improve system performance.

Read more about these trending data storage technologies and what they can do for your data
storage environment.

ENTERPRISE STORAGE TRENDS TABLE OF CONTENTS

Solid-state cache
Virtualization-optimized storage
Capacity optimization
Automated tiered storage
Innovation in storage

The enterprise data storage industry doesn’t have a reputation as a hotbed of innovation, but
that characterization might be unfair. Although bedrock technologies like RAID and SCSI have
soldiered along for more than two decades, new ideas have flourished as well. Today there are
several enterprise storage trends worth talking about. Technologies like solid-state storage,
capacity optimization and automated tiering are gaining prominence, and specialized storage
systems for virtual servers are being developed. Although the enterprise arrays of tomorrow will
still be quite recognizable, they’ll adopt and advance these new concepts.

All-flash storage

Although flash is expensive on a capacity basis compared to hard disk technology, many
applications can be run entirely in flash. iSCSI pioneer Nimbus Data Systems Inc. transitioned to
an all-flash offering last year and has seen good results. “Our S-Class enterprise storage arrays
deliver 90% lower energy costs and 24x better I/O performance,” said CEO Thomas Isakovich. “And
since we include inline deduplication and thin provisioning, we’re competitive on a
cost-per-used-capacity basis as well.”
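
To see why data reduction changes the math, consider a back-of-the-envelope cost-per-used-capacity comparison. This is a minimal sketch; the prices and reduction ratios below are illustrative assumptions, not figures from Nimbus or any other vendor:

    # Illustrative only: hypothetical prices and data-reduction ratios.
    def cost_per_used_tb(price_per_raw_tb, reduction_ratio):
        # With inline deduplication and thin provisioning, each raw TB can
        # hold roughly 'reduction_ratio' TB of logical (used) data.
        return price_per_raw_tb / reduction_ratio

    flash = cost_per_used_tb(price_per_raw_tb=3000, reduction_ratio=5.0)
    disk = cost_per_used_tb(price_per_raw_tb=400, reduction_ratio=1.0)
    print(f"flash: ${flash:.0f} per used TB, disk: ${disk:.0f} per used TB")

Under those assumed numbers the gap shrinks from 7.5x per raw TB to 1.5x per used TB, which is the kind of comparison Isakovich is drawing.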

All-flash storage in a PCI card form factor is popular in high-performance applications as well.
Fusion-io has gained traction with its ioDrive cards, and LSI, OCZ Technology Group Inc., Texas
Memory Systems Inc. and Virident Systems Inc. have also found enterprise success with solid-state
systems. Flash maker Micron Technology Inc. recently jumped into this market with a PCI Express
flash storage card priced 25% lower than the competition.

Solid-state cache

Spinning magnetic disks have been the foundation for enterprise data storage since the 1950s,
and for just about as long there’s been talk of how solid-state storage will displace them. Today’s
NAND flash storage is only a decade old, yet it has already gained significant traction thanks to
its performance and mechanical characteristics. Hard disk drives (HDDs) won’t go away anytime soon,
but NAND flash will likely become a familiar and constant component across the spectrum of
enterprise storage.

Hard disks excel at delivering capacity and sequential read and write performance, but modern
workloads have changed. Today’s hypervisors and database-driven applications demand quick random
access that’s difficult to achieve with mechanical arms, heads and platters. The best enterprise
storage arrays use RAM as a cache to accelerate random I/O, but RAM chips are generally too
expensive to deploy in bulk.

NAND flash memory, in contrast, is just as quick at servicing random read and write requests as
it is with those that occur close together, and the fastest enterprise NAND flash parts challenge
DRAM for read performance. Although less expensive than DRAM, flash memory (especially the
enterprise-grade single-level cell [SLC] variety) remains an order of magnitude more expensive
than hard disk capacity. Growth in the deployment of solid-state drives (SSDs) has slowed, and
SSDs aren’t expected to displace magnetic media in capacity-oriented applications anytime soon.

Flash memory has found a niche as a cache for hard disk drive-based storage systems. Caching
differs from tiered storage in that it doesn’t use solid-state memory as a permanent location for
data storage. Rather, this technology redirects read and write requests from disk to cache
on demand to accelerate performance, especially random I/O, but commits all writes to disk
eventually.
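
A minimal sketch of that behavior might look like the following toy model. It is for illustration only and assumes a simplified block interface (real cache controllers add smarter eviction, persistence and failure handling), but it shows the key point: flash holds copies of hot blocks, and every write still lands on disk eventually.

    # Toy write-back flash cache in front of a disk; illustration only.
    class FlashCache:
        def __init__(self, disk, capacity_blocks):
            self.disk = disk              # backing store: dict of block number -> data
            self.capacity = capacity_blocks
            self.cache = {}               # blocks currently held in flash
            self.dirty = set()            # written to flash but not yet to disk

        def read(self, block):
            if block in self.cache:       # hit: served at flash speed
                return self.cache[block]
            data = self.disk[block]       # miss: fetch from disk (assumed present), then cache it
            self._insert(block, data)
            return data

        def write(self, block, data):
            self._insert(block, data)     # absorb the write in flash first
            self.dirty.add(block)

        def flush(self):
            for block in self.dirty:      # commit all writes to disk eventually
                self.disk[block] = self.cache[block]
            self.dirty.clear()

        def _insert(self, block, data):
            if block not in self.cache and len(self.cache) >= self.capacity:
                victim = next(iter(self.cache))   # naive eviction for brevity
                if victim in self.dirty:
                    self.disk[victim] = self.cache[victim]
                    self.dirty.discard(victim)
                del self.cache[victim]
            self.cache[block] = data

In a tiered design, by contrast, the fast media would become the block’s new home rather than a temporary copy.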

Major vendors like EMC Corp. and NetApp Inc. have placed flash memory in their storage arrays
and designed controller software to use it as a cache rather than a tier. NetApp’s Flash Cache
cards use the internal PCI bus in its filers, while EMC’s Clariion FAST Cache relies on
SATA-connected SSDs. But both leverage their existing controllers and expand on the algorithms
already in place for RAM caching.

Avere Systems Inc. and Marvell Technology Group Ltd., a couple of relative newcomers, take a
different tack. With a history in the scale-out network-attached storage (NAS) space, Avere’s
team developed an appliance that sits in-band between existing NAS arrays and clients. “No single
technology is best for all workloads,” said Ron Bianchini, Avere’s founder and CEO, “so we built a
device that integrates the best of RAM, flash and disk.” Bianchini claims Avere’s FXT appliance
delivers 50 times lower access latency using a customer’s existing NAS devices.

Marvell’s upcoming DragonFly Virtual Storage Accelerator (VSA) card is designed for placement
inside the server itself. The DragonFly uses fast non-volatile RAM (NVRAM) as well as
SATA-connected SSDs for cache capacity, but all data is committed to the storage array eventually.
“This is focused on random writes, and it’s a new product category,” claims Shawn Kung, director of
product marketing at Marvell. “DragonFly can yield up to 10x higher virtual machine I/O per
second, while lowering overall cost by 50% or more.” The company plans to deliver production
products in the fourth quarter.

EMC, known for its large enterprise storage arrays, is also moving into server-side caching.
Barry Burke, chief strategy officer for EMC Symmetrix, said EMC’s Lightning project “will integrate
with the automated tiering capabilities already delivered to VMAX and VNX customers.” EMC previewed
the project at the recent EMC World conference and plans to ship it later this year.

Editor’s tip: Although there are several options for solid-state storage, including cache, PCI
Express and more, users will likely notice a performance boost no matter which option they choose.
Listen to this podcast to find out the differences between the various solid-state options, and
learn how each one can boost your performance.

The end of the SAN?

Although SCSI is still the dominant enterprise data storage protocol in the form of Fibre
Channel and iSCSI, that might change in the future. The rise of PCI Express storage
suggests that centralized networked storage might not always dominate. Internal cards dramatically
reduce access latency, and the performance of these solutions is an order of magnitude better than
traditional SCSI-based technology.

The rise of virtual machine-specific and cloud storage suggests that other changes are imminent.
In both cases, some products eschew traditional block or file access in favor of an application
programming interface (API). These devices are designed to be integrated, automated components of a
larger environment, application platform or hypervisor, and would no longer require storage
architects and managers.

Virtualization-optimized storage

One common driver for the adoption of high-performance storage arrays is the expanding use of
server virtualization. Hypervisors allow multiple virtual machines (VMs) to share a single hardware
platform, which can have serious side effects when it comes to storage I/O. Rather than a slow and
predictable stream of mostly sequential data, a busy virtual server environment is a fire hose
gush of random reads and writes.

This “I/O blender” challenges the basic assumptions used to develop storage system controllers
and caching strategies, and vendors are quickly adapting to the new rules. The deployment of SSD
and flash caches helps, but virtual servers are demanding in other ways as well. Virtual
environments need extreme flexibility, with rapid storage provisioning and dynamic movement of
workloads from machine to machine. Vendors like VMware Inc. are quickly rolling out technologies to
integrate hypervisor and storage management, including VMware’s popular VAAI.

Virtual server environments are an opportunity for innovation and new ideas, and startups are
jumping into the fray. One such company, Tintri Inc., has developed a “VM-aware” storage system
that combines SATA HDDs, NAND flash and inline data deduplication to meet the performance and
flexibility needs of virtual servers. “Traditional storage systems manage LUNs, volumes or tiers,
which have no intrinsic meaning for VMs,” said Tintri CEO Kieran Harty. “Tintri VMstore is managed
in terms of VMs and virtual disks, and it was built from scratch to meet the demands of a VM
environment.”

Tintri’s VM-aware storage target isn’t the only option. IO Turbine Inc. leverages PCIe-based
flash cards or SSDs in server hardware with Accelio, its VM-aware storage acceleration software.
“Accelio enables more applications to be deployed on virtual machines without the I/O performance
limitations of conventional storage,” claims Rich Boberg, IO Turbine’s CEO. The Accelio driver
transparently redirects I/O requests to the flash as needed to reduce the load on existing storage
arrays.

Editor’s tip: For a more in-depth look at vStorage APIs for Array Integration, check
out this tip from SearchVMware.com on the pros and cons of VAAI.

Capacity optimization

Not all data storage innovations are focused on performance. The growth of data has been a major
challenge in many environments, and deleting data isn’t always an acceptable answer. Startups like
Ocarina and Storwize updated existing technologies like compression and single-instance storage
(SIS) for modern storage applications. Now that these companies are in the hands of major vendors
(Dell Inc. and IBM, respectively), users are beginning to give capacity optimization a serious
look.

Reducing storage has ripple effects, requiring less capacity for replication, backup and disaster
recovery (DR) as well as primary data storage. “The Ocarina technology is flexible enough to be
optimized for the platforms we’re embedding the technology into,” said Mike Davis, marketing
manager for Dell’s file system and optimization technologies. “This is an end-to-end strategy, so
we’re looking closely at how we can extend these benefits beyond the storage platforms to the cloud
as well as the server tier.”

Data deduplication is also moving into the primary storage space. Once used only for backup and
archiving applications, deduplication technology is now being applied in arrays and appliances by
NetApp, Nexenta Systems Inc., Nimbus Data Systems Inc., Permabit Technology Corp. and others.
“NetApp’s deduplication technology [formerly known as A-SIS] is optimized for both primary
[performance and availability] as well as secondary [capacity-optimized backup, archive and DR]
storage requirements,” said Val Bercovici, NetApp’s cloud czar. NetApp integrated deduplication
into its storage software and claims no latency overhead on I/O traffic.
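
For readers unfamiliar with block-level deduplication, the sketch below shows the basic idea of storing each unique block once and keeping only references for repeats. It is a simplified, hypothetical illustration, not NetApp’s (or any other vendor’s) implementation:

    import hashlib

    # Simplified block-level deduplication: store each unique block once,
    # keyed by a fingerprint of its contents. Illustrative only; production
    # systems must also handle collisions, metadata persistence and cleanup.
    class DedupStore:
        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = {}        # fingerprint -> block data
            self.files = {}         # name -> list of fingerprints

        def write(self, name, data):
            refs = []
            for i in range(0, len(data), self.block_size):
                block = data[i:i + self.block_size]
                fp = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(fp, block)   # duplicate blocks stored once
                refs.append(fp)
            self.files[name] = refs

        def read(self, name):
            return b"".join(self.blocks[fp] for fp in self.files[name])

        def physical_bytes(self):
            return sum(len(b) for b in self.blocks.values())

    store = DedupStore()
    store.write("vm1.vmdk", b"A" * 8192 + b"B" * 4096)
    store.write("vm2.vmdk", b"A" * 8192)     # shares its blocks with vm1
    print(store.physical_bytes())            # 8192 physical bytes for 20480 logical bytes

The trade-off is extra fingerprinting and metadata lookups on the write path, which is why the “no latency overhead” claim above is notable for primary storage.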

Editor’s tip: As data continues its extraordinary rate of growth, users are struggling to
find ways to minimize the amount of data they need to store in their data storage
environment. In this podcast, Arun Taneja, founder and consulting analyst at Taneja Group,
discusses the storage optimization and capacity reduction products that are available on the
market.

Automated tiered storage

One hot area of innovation for the largest enterprise storage vendors is the transformation of
their arrays from fixed RAID systems to granular, automatically tiered storage devices. Smaller
companies like 3PAR and Compellent (now part of Hewlett-Packard Co. and Dell, respectively) kicked
off this trend, but EMC, Hitachi Data Systems and IBM are delivering this technology as well.

A new crop of startups, including Nexenta, is also active in this area. “NexentaStor leverages
SSDs for hybrid storage pools, which automatically tier frequently accessed blocks to the SSDs,”
noted Evan Powell, Nexenta’s CEO. Powell also said that his firm’s software platform allows users
to supply their own SSDs, which he claims reduces the cost of entry for this technology.

EMC has added virtual provisioning and automated tiering across its product line. “EMC took a
new storage technology [flash] and used it to deliver both greater performance and cost
savings,” said Chuck Hollis, EMC’s global marketing chief technology officer. “Best of all, it’s
far easier to set up and manage.”

Like caching, automated tiered storage improves data storage system performance as much as it
attacks the cost of capacity. By moving “hot” data to faster storage devices (10K or 15K rpm disks
or SSD), tiered storage systems can perform faster than similar devices without the expense of
widely deploying those faster devices. Conversely, automated tiering can be more energy- and
space-efficient because it moves “bulk” data to slower but larger-capacity drives.
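
As a rough sketch of the idea behind sub-LUN tiering, the hypothetical policy below periodically promotes the most frequently accessed extents to the flash tier and leaves the rest on capacity disk. Real arrays use far richer heuristics and move data gradually; this is an assumption-laden illustration, not any vendor’s actual algorithm:

    # Hypothetical sub-LUN tiering pass: place the hottest extents on flash,
    # everything else on capacity disk. Illustrative policy only.
    def retier(access_counts, flash_extents):
        """access_counts: {extent_id: I/Os since the last pass}
        flash_extents: number of extents the flash tier can hold."""
        ranked = sorted(access_counts, key=access_counts.get, reverse=True)
        hot = set(ranked[:flash_extents])
        return {ext: ("flash" if ext in hot else "disk") for ext in access_counts}

    counts = {"ext-001": 9500, "ext-002": 120, "ext-003": 4800, "ext-004": 15}
    print(retier(counts, flash_extents=2))
    # ext-001 and ext-003 land on flash; ext-002 and ext-004 stay on disk

Running such a pass on a schedule, rather than on every I/O, is what distinguishes tiering from the on-demand caching described earlier.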

Editor’s tip: Although auto tiering is gaining traction in the industry, many people
are asking themselves if it’s right for them. Listen to this podcast to get tips on how to
implement an automated tiering project; how to decide if auto tiering is gaining too much control
over your environment; and to get the scoop on sub-LUN tiering.

Innovation in storage

Enterprise storage vendors must maintain compatibility, stability and performance while
advancing the state of the art in technology, goals that might sometimes seem at odds. Although
smaller companies have been a little more nimble at introducing innovations like capacity
optimization and virtualization-aware storage access, the large vendors are also moving quickly.
They’ve put solid-state caching and automated tiered storage into use, and are moving forward
in other areas. Whether by invention or acquisition, innovation is alive and well in
enterprise storage.

BIO: Stephen Foskett is an independent consultant and author specializing in enterprise
storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT
thought leaders, and organizes their Tech Field Day events. He can be found online at
GestaltIT.com, FoskettS.net and on Twitter at @SFoskett.




This was first published in October 2011
