Posts Tagged ‘fibre channel switches’

FCoE adoption: Its impact on storage VARs

Tuesday, June 28th, 2011

FCoE adoption has been relatively slow but steady. And the advantages of Fibre Channel over Ethernet, such as a reduction in the number of switches, cards and cables required for storage networking, continue to make it an appealing choice for IT organizations. Storage VARs and integrators may find prospective FCoE business among those who are launching new server virtualization projects.

In this podcast interview, Bob Laliberte, a senior analyst at Enterprise Strategy Group, discusses the market penetration of FCoE and what it means for the channel. Find out what FCoE is and why an organization would benefit from it, how quickly FCoE is catching on in the industry, the opportunity FCoE provides for storage VARs and integrators, whether FCoE is a passing technology, and whether IT organizations are ready to adopt FCoE.

Listen to the podcast on the market penetration of FCoE or read the transcript below.

What is FCoE and why would an organization want to use it?

Basically, FCoE is Fibre Channel over Ethernet, and it’s really the ability to consolidate storage traffic, Fibre Channel that is, over an Ethernet transport. But it’s not just any Ethernet. It started out as data center Ethernet, or converged enhanced Ethernet. [It’s] now known as data center bridging. Basically, what that Ethernet is, is a little bit different than normal Ethernet because it enables organizations to have no dropped packets, and really that lossless Ethernet for the data center.

The main reason organizations are looking to leverage Fibre Channel over Ethernet is that it really helps them reduce the number of switches, cards and cables that they’re putting into the racks. Previously, you would have Fibre Channel cards or Fibre Channel cables going up to Fibre Channel switches, and then another set for all the Ethernet traffic. What this really does is enable you to converge all that and consolidate it all down onto a single set of cards, cables and network switches at the top of the rack.

Another advantage of going to Fibre Channel over Ethernet is that it puts Fibre Channel on the Ethernet roadmap. So instead of going from 4 Gb to 8 Gb to 16 Gb and eventually 32 Gb Fibre Channel, Fibre Channel can now run on the Ethernet roadmap, which is 10 Gb, 40 Gb and 100 Gb.

How fast is the adoption and deployment of FCoE catching on in the industry? Is that number lower than you’d expect, on par with your expectations or ahead of them?

That’s really the big question, isn’t it, whether or not it’s being rapidly adopted. So far we’ve seen it be adopted, though maybe at a slower pace than was anticipated. There was a lot of market hype around this, a lot of “Fibre Channel is dead.” We’ve seen this before in the industry—the mainframe was dead, tape is going to be dead, etc. Those technologies have obviously continued on. I think what we’ve seen is [FCoE] is being adopted; it’s a little bit slower, but what we’ve seen is steady growth. [This means] that the technology’s come out, it’s been approved by several technology associations and industry associations, and, more importantly, it’s been approved by the industry vendors. We’ve seen a lot of that over the last couple of years, where they’ve come out with their Fibre Channel over Ethernet cards, and the switch vendors have adopted those cards, and even the storage array vendors themselves have started coming out. NetApp, I think, was the first, and EMC as well. So we are starting to see adoption, and we’re really starting to see as well the converged infrastructure products, if you will, so the VCE Vblocks, the NetApp FlexPod architectures, things like that that are leveraging the Cisco UCS.

The other area we’re seeing it is in those top-of-rack installations I mentioned earlier, where they’re able to consolidate at the top of the rack and then split off the traffic. Part of that’s also due to some of the technology limitations of FCoE, in that it was really just a single-hop technology, but that’s starting to change. Multihop FCoE is coming out that will enable more than one hop from the server down to the storage array. That’s also going to enable it to start expanding and growing more.

For storage VARs and integrators, what kind of opportunity does FCoE represent?

For all the storage VARs and integrators out there, right now the biggest opportunity for them is to take part [in] that converged infrastructure play. Any of those vendors who are putting together products based on the UCS, etc., obviously will recognize a good play for FCoE. The other thing for these organizations to realize is that FCoE typically isn’t an initiative unto itself, meaning no one’s going to go out and say, “We need to deploy FCoE, and we’ll start pulling cards.” It’s usually part of a larger initiative, typically like server virtualization, for instance. It shouldn’t be hard to find an organization that’s increasing its use of server virtualization, bringing in new technology. So what we expect is that FCoE will be deployed alongside a new implementation of a highly virtualized infrastructure, and right now, as I said earlier, those converged infrastructures are making up the bulk of that.

Critics suggest that FCoE is a passing fad. Is FCoE a temporary technology or is it here to stay? And what does this mean for the market?

I think a lot of the naysayers out there will say it’s a passing fad; others have said everything else, “Fibre Channel is dead.” I think the reality is somewhere in between, in that things take time to develop. iSCSI took a little while to develop but it’s still clearly around. Mainframes are still around. But, clearly, it’s not going to replace Fibre Channel. Fibre Channel is still going to be around, and a lot of companies are making significant investments in Fibre Channel. However, when you look at the overall market and you see what a lot of the technology vendors are doing, many of them are making significant investments in this convergence over Ethernet and Fibre Channel over Ethernet. But I think what we’re going to see is that this convergence story is probably going to be bigger than just Fibre Channel over Ethernet. What we’re starting to see is organizations—and you see the [Emulexes, Broadcoms, etc.]—trying to drive every technology over Ethernet. So you’re starting to see the Fibre Channel over Ethernet, you’re seeing iSCSI, regular Ethernet, even expecting RDMA as well, for that server-to-server connectivity, all over Ethernet. I think what people need to realize is that the change is going to be slow, but it is happening, and that all these other technologies are going to exist alongside it for quite some time to come.

Finally, are IT organizations ready to adopt FCoE?

I think they certainly are in those converged solutions. In that case, it’s something they’re getting as part of a package, and they may or may not even be aware that they’re using Fibre Channel over Ethernet. Certainly, other organizations that I’ve talked to are ready to do it at the top of rack, but they’re really waiting; we haven’t seen a lot of them that are doing end-to-end FCoE yet. I think in order for that to happen, we’re going to need to see a lot of different proof points from other companies that are in this space that are doing it, documenting, and validating what the advantages of it are. I think it’s also going to be, in part, how much those technology vendors who have already supported and validated it actually are pushing it out to the market. In big part, I think it’s going to be part of a larger initiative. As organizations are completing that journey to a private cloud, highly virtualizing and maturing those virtual environments, those vendors need to lay out the advantages of how this convergence will help them and benefit them along that path.




This was first published in June 2011

Article source: http://www.pheedcontent.com/click.phdo?i=9fe64e27eb87c2b1e978c3b05dfb7314

Hyper-V P2V conversion process and storage considerations

Thursday, June 2nd, 2011

There are several reasons why it might be necessary to convert physical systems directly into virtual machines (VMs). Aging, out-of-warranty or failing hardware might necessitate such a migration, as might a simple need to consolidate underutilized systems to reduce operating costs. It is certainly possible to manually build guest VMs and migrate the applications and resources from the physical systems over to them, but this can be a time-consuming and error-prone operation. A simpler choice is to perform a physical-to-virtual (P2V) migration to get these servers into a virtualized server environment. Here we’ll discuss Hyper-V P2V conversion, as well as the storage considerations around such a conversion.

In a Hyper-V environment, in addition to the manual process, there are two methods of migrating physical servers to virtual servers that are considerably less error-prone than manual methods. These are to use Microsoft System Center Virtual Machine Manager (SCVMM) 2008 R2 SP1 and, rather surprisingly, to use the VMware vCenter Converter Standalone and Vmdk2Vhd (VMDK to VHD) tools. Of these two, SCVMM provides the more seamless and complete migration experience.

Pre-planning

No matter which method you select to create the VMs, in order to avoid migration difficulties, it is recommended that before moving VMs to the Hyper-V server, the appropriate hardware such as NICs, RAID controllers, Fibre Channel HBAs, etc., be installed in the Hyper-V server. In addition, pre-configuring external resources such as Ethernet switches, zoning in Fibre Channel switches, storage LUN masking, etc., should be performed before attempting to boot the newly created VM. Simple things such as planning domain names and computer names should also be done in advance. You should determine if you can or should run both the original physical server and the newly created VMs in parallel for a period of time, or if you will cut over to the VM only.
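If it helps to track these tasks, the following is a minimal Python sketch of a pre-migration checklist. The item names simply mirror the pre-planning tasks above; they are placeholders for your own environment, not part of any Hyper-V or SCVMM tooling.

```python
# Hypothetical pre-migration checklist for a Hyper-V P2V project.
# Each item mirrors a task from the pre-planning section above; mark items
# True as you complete them, then run the script to see what is still open.

CHECKLIST = {
    "NICs, RAID controllers and FC HBAs installed in the Hyper-V host": False,
    "Ethernet switch ports configured for the new host": False,
    "Fibre Channel switch zoning updated for the Hyper-V host HBA": False,
    "Storage array LUN masking updated for the new guest VM": False,
    "Domain names and computer names planned": False,
    "Decision made: run source server and VM in parallel, or cut over": False,
}

def report(checklist: dict[str, bool]) -> bool:
    """Print outstanding items and return True only if everything is done."""
    pending = [item for item, done in checklist.items() if not done]
    for item in pending:
        print(f"TODO: {item}")
    return not pending

if __name__ == "__main__":
    if report(CHECKLIST):
        print("Pre-planning complete; ready to start the P2V conversion.")
```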

Storage for guest virtual machines

In the conversions to guest virtual machines in Hyper-V, we generally prefer to use the “pass-through” mode for the application server storage devices. This allows the application direct access to the storage device or devices in a manner similar to the way things work in physical server environments. This method is the same for iSCSI or Fibre Channel storage. For iSCSI storage, the NIC in the Hyper-V server needs access to the same VLAN as the NIC in the original physical server.

Another choice for iSCSI is to use the iSCSI direct method, which involves allocating a LUN directly from the iSCSI initiator. This method bypasses the hypervisor but requires that the iSCSI NIC (or offload card) be visible to the guest VM.

For Fibre Channel storage, the HBA of the Hyper-V server needs to be included in the appropriate Fibre Channel zone on the switch. In both cases, the LUN masking in the storage arrays needs to be adjusted to provide access to the new guest virtual machine.
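Once zoning and LUN masking have been updated, it is worth confirming that the Hyper-V host can actually see the newly presented disk before starting the conversion. The short Python sketch below is one way to do that, assuming Python is installed on the Windows host; it simply shells out to the built-in wmic utility to list the visible disks.

```python
# Sketch: list the disks visible to a Windows/Hyper-V host so you can confirm
# that a newly zoned/unmasked LUN has appeared. Assumes the built-in "wmic"
# utility is present on the host.
import subprocess

def visible_disks() -> str:
    """Return wmic's table of disk models and sizes as plain text."""
    result = subprocess.run(
        ["wmic", "diskdrive", "get", "Model,Size"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Re-run after a disk rescan; the new LUN should show up in this list.
    print(visible_disks())
```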

Microsoft System Center Virtual Machine Manager 2008 R2 SP1

Microsoft offers SCVMM as a management application to facilitate managing VMs in Hyper-V. It can be purchased from Microsoft or downloaded for evaluation. SCVMM offers what is probably the simplest and cleanest method of P2V conversion.

SCVMM must be installed on a domain-managed Windows Server 2008 R2 system. It will not install on a workgroup server and is a bit fussy regarding host name conventions. For example, the server on which SCVMM is running cannot have hyphens in its computer name. The tool needs to be installed on a system (or systems, if spreading the server, database and administration console across multiple servers) with network access to the appropriate Hyper-V server and source physical systems.
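As a rough pre-install sanity check for those two requirements, something like the Python sketch below could be run on the prospective SCVMM server. The workgroup test is only a heuristic based on Windows environment variables, not an authoritative domain-membership check.

```python
# Sketch: rough pre-install checks for the SCVMM server described above.
# Assumes it runs on the Windows machine that will host SCVMM.
import os
import socket

def check_scvmm_host() -> list[str]:
    """Return a list of potential problems with this host's name or membership."""
    problems = []
    name = socket.gethostname()
    if "-" in name:
        problems.append(f"Computer name '{name}' contains a hyphen; SCVMM setup may refuse it.")
    # Heuristic: on a workgroup machine, USERDOMAIN usually equals the computer name.
    if os.environ.get("USERDOMAIN", "").upper() == name.upper():
        problems.append("Host does not appear to be domain-joined; a workgroup install will fail.")
    return problems

if __name__ == "__main__":
    issues = check_scvmm_host()
    print("\n".join(issues) if issues else "Basic SCVMM host-name checks passed.")
```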

Using SCVMM to perform Hyper-V P2V conversion is very quick and straightforward. Using the SCVMM console, a wizard is used to identify the Hyper-V host that will receive the virtual machines and the virtual machine path where the guest virtual machines will be stored. After the Hyper-V host has been added to the SCVMM console, the Hyper-V P2V wizard can be started to begin the process of moving the physical server into a VM on the Hyper-V host. The P2V conversion wizard steps through the process of selecting the desired resources that will be migrated to the virtual machine.

SCVMM will rate the appropriate Hyper-V servers according to their capability to support the resources required by the physical servers. When SCVMM has finished the task of building and configuring the VM, it can be started in the Hyper-V environment.

VMware vCenter Converter Standalone client

Although obviously intended for the conversion of physical systems to VMware virtual machines, VMware offers utilities that can be used to perform conversions to Hyper-V as well. While not providing quite as seamless a conversion, the VMware tools have the advantage of being available for free download from VMware, as of this writing. Both the VMware vCenter Converter Standalone client and the Vmdk2Vhd tools need to be downloaded. These tools can be installed on any server or desktop machine that has network connectivity to the source physical server.

Because this tool is designed for conversions to a VMware environment and not directly for a Hyper-V environment, a VMware virtual machine must be selected as the destination. The output, a VMware VMDK file, will be created and then converted to a Hyper-V virtual hard disk (VHD) using the Vmdk2Vhd tool in a later step. As with SCVMM, some or all of the resources from the original physical server can be selected to be migrated to the new VM. After the VM is created in the VMDK format and then converted to a VHD, the new VM can be built in Hyper-V using the normal steps in Hyper-V.
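If you prefer to script the VMDK-to-VHD step rather than run a GUI utility, one alternative to the Vmdk2Vhd tool (not the tool described in this article) is qemu-img, which can write VHD output. A minimal Python wrapper is sketched below, assuming qemu-img is installed and on the PATH; the file names are placeholders.

```python
# Sketch: convert a VMDK produced by vCenter Converter into a VHD for Hyper-V.
# Uses qemu-img as an alternative to the Vmdk2Vhd tool; "vpc" is qemu-img's
# name for the VHD format. The paths below are placeholders.
import subprocess

def vmdk_to_vhd(vmdk_path: str, vhd_path: str) -> None:
    """Convert a VMDK disk image into a VHD that Hyper-V can attach."""
    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "vpc", vmdk_path, vhd_path],
        check=True,
    )

if __name__ == "__main__":
    vmdk_to_vhd("converted-server.vmdk", "converted-server.vhd")
```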

Once the VM is built, it can be started in Hyper-V. The VM might need a reboot to reconcile hardware differences between the Hyper-V server and the original physical server. This is normal and expected.

Now some post-conversion work might be in order. Since the VMware vCenter Converter Standalone client does not perform the target Hyper-V server analysis that SCVMM does, it is quite possible that more complex source system configurations will demand some additional configuration steps.

BIO: Dennis Martin is president at Demartek LLC, an Arvada, Colo.-based industry analyst firm that operates an on-site test lab.

This was first published in May 2011

Article source: http://www.pheedcontent.com/click.phdo?i=344199418c92716d36343986944f71c3

Using NFS to support a virtual server environment

Friday, May 27th, 2011

Using NFS to support your virtual server environment provides advantages for IT managers in terms of cost and complexity. But there are disadvantages, such as a lack of support for multipathing and the fact that the vStorage APIs for Array Integration (VAAI) don’t support NFS.

In this podcast interview, VMware expert Eric Siebert discusses the best way to use NFS for virtual server environments. Discover the benefits and complications that can arise when implementing network-attached storage (NAS) in your virtual server platform, the outlook on VAAI's lack of support for NFS, how NFS performance compares with that of iSCSI and Fibre Channel, and how to set up an NFS device to get the best performance from it.

Listen to the podcast on NFS for virtual servers or read the transcript below.

What are the advantages of using NFS to support a virtual server platform?

Cost is a big one. Having shared storage is almost a must with virtualization if you want to take advantage of some of the more advanced features, such as high availability [HA] and vMotion, that require shared storage. The cost of implementing a typical Fibre Channel solution for shared storage is usually pretty high. A NAS solution, on the other hand, can greatly reduce the expense of implementing a shared-storage solution because NAS uses common NICs instead of expensive Fibre Channel adapters. [NAS also] uses normal network components instead of expensive Fibre Channel switches and cables.

Complexity is another [benefit]. Setting up a NAS solution is typically much easier compared to a SAN solution. And specialized storage administrators aren’t usually required in many cases to do it, so it’s a lot easier than setting up a SAN. Many server or virtualization admins can usually set up a NAS without any sort of special training. [Also, the] overall management of a NAS is typically easier compared to a more complicated SAN.

What disadvantages or complications can you run into with using NAS for a virtual server platform? When would you say it’s a bad idea to use NFS?

NFS might be a bit different because it’s a file-level protocol. But that’s not really a bad thing. Overall, it’s a good and effective solution for all, but there are a few caveats that you should be aware of when using it. The first is if you want to boot directly from a storage device to have diskless servers. That’s not supported by NFS. NFS also uses a software client that is built into the hypervisor, instead of a hardware I/O adapter. Because of that there’s a bit of CPU overhead, as the hypervisor must use the software client to communicate with the NFS device. Normally this isn’t too big of an issue, but on a host it can cause degradation of performance because the CPUs are being shared by the VMs. That can really slow down your storage [if] you have a really busy storage device that does a lot of transactional I/Os. In that case I would say that maybe a Fibre Channel solution might be more attractive.

Some vendors don’t recommend using NFS storage for certain transactional applications that are sensitive to the latency that can occur with NFS. However, this is dependent on many factors such as host resources and configuration, as well as the performance of the NFS device that you’re using. It really shouldn’t be a problem for a well-built and properly sized NFS system.

Finally, NFS doesn’t support multipathing from a host to an NFS server. Typically you can set up multiple paths from a host device for fault tolerance and for doing load balancing. With NFS, only a single TCP session will be open to an NFS data store, which can limit its performance. This can be alleviated by using multiple smaller data stores instead of fewer, larger data stores, or by using 10 Gigabit Ethernet [GbE], where the available throughput from a single session will be much greater. It doesn’t really affect high availability, which still can be achieved with NFS using multiple NICs in a virtual switch.

I know there’s a lack of support for NFS in VAAI. What kind of impact does this have on users who want to use NFS for their virtual server environment?

Well, the vStorage APIs for Array Integration are still a fairly new technology, and [are] continually evolving with each new release of vSphere. A lot of the vendors are still kind of late to the game in being able to support it. Right now [VAAI] only supports VMFS data stores, and it doesn’t support NFS storage. But while NFS is not supported yet, some of the NFS solutions, like those from NetApp, have similar features for loading and offloading that can provide some of the same advantages as the vStorage APIs, though support for NFS always seems to lag behind support for block-level storage in vSphere. So I think it’s pretty much a matter of time before the vStorage APIs for Array Integration catch up and start supporting NFS as well.

How does the performance of NFS stack up to that of iSCSI and Fibre Channel?

It really depends on the architecture and the types of storage devices that you use for NFS. But overall NFS performance is pretty close to iSCSI. They’re both pretty similar as software clients and network-based protocols. Fibre Channel, on the other hand, is really tough to beat. And while NFS can come close to the kind of performance level that Fibre Channel provides, Fibre Channel is really the king when it comes to performance. It’s really tough for those other types of protocols to match the level of Fibre Channel.

It’s tough to say that NFS performs poorly. It does provide good performance, and in most cases it should be able to handle most workloads. The important thing with NFS is to not let the CPU become a bottleneck. 10 GbE can also provide a big performance boost for NFS if you can afford it—to bring it up to the level of performance that you can get with Fibre Channel.

How do you set up an NFS device so you get the best performance from it?

As I mentioned, the first thing is having enough CPU resources available so that the CPU never becomes a bottleneck for the NFS protocol processing. That one is fairly easy to achieve by simply making sure you don’t completely overload your virtual host CPUs with too many VMs. Network architecture is a big one; the performance of NFS is highly dependent on network health and utilization. So you should basically isolate your NFS traffic onto a dedicated physical NIC that [isn’t] shared with any other virtual machines. You should also isolate the storage network so that it’s dedicated to the host and the NFS servers [and is not] shared with any other network traffic at all.

Your NICs are basically your speed limit. So if you’re using a 1 Gb NIC, that’s adequate for most purposes. But to take NFS to the next level and experience the best possible performance, 10 GbE is really the best ticket to that.

Finally, the type of NFS storage device that you’re connected to can make all the difference in the world. Just like any storage device, you really have to size your NFS servers to meet the storage I/O demands. So for your virtual machines, don’t use an old physical server running a Windows NFS server and expect it to meet the demand of the workload of busy VMs. In general, the more money you spend on NFS solutions, the better the performance you’ll get out of them. There are many high-end NFS solutions out there, like those from NetApp, that will meet the demands of most workloads. So basically, buy a solution that will meet your needs and make sure the NFS server does not become a bottleneck. NFS is different from block storage devices; you should really architect and configure it accordingly to leverage its strengths.
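One rough way to confirm that an NFS server will not become the bottleneck before placing VMs on it is a simple sequential-write timing test, sketched below in Python. It assumes the export is already mounted at the path you pass in, and it is only a coarse sanity check, not a substitute for a proper I/O benchmark.

```python
# Sketch: coarse sequential-write sanity check against an NFS mount.
# Run it with the mount point of the export you plan to use as a data store.
import os
import sys
import time

def write_throughput_mb_s(mount_point: str, total_mb: int = 256) -> float:
    """Write total_mb of data to the mount and return the observed MB/s."""
    path = os.path.join(mount_point, "nfs_write_test.tmp")
    chunk = b"\0" * (1024 * 1024)  # 1 MB per write
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reached the server
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    mount = sys.argv[1] if len(sys.argv) > 1 else "/mnt/nfs_datastore"
    print(f"{write_throughput_mb_s(mount):.1f} MB/s sequential write")
```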

27 May 2011

Article source: http://www.pheedcontent.com/click.phdo?i=fd5cee224e24693b3713aa1fae1c9e4d