Posts Tagged ‘Servers’

The Rise of Cloud Computing and Virtualisation – PR

Sunday, January 12th, 2014

The overlap

Much of the confusion about the two terms stems from the fact that there is indeed quite a bit of overlap between the two concepts. Without the ability to virtualize servers, the cloud would not be able to operate.

Recently, Jeremy Gumley released an article explaining the difference between the two. “Think about it this way: A cloud storage provider uses servers in a data centre to host their storage. Without virtualization, the provider would essentially need one server per customer or per group of clients,” says Jeremy.

With many popular storage providers having millions of users, they would need to have an enormous number of servers. So what they do is virtualize multiple servers and house them on one server. In other words, virtualization allows the cloud to function.

It’s important to realize that the cloud is still reliant on servers, just as virtualization is. The main difference is that when companies virtualize, they usually host the servers on-site. When companies go ‘to the cloud’, they usually connect via the Internet to servers that are hosted off-site (outside of the organization).

For more information contact:
Address: Suite 33, 574 Plummer Street, Port Melbourne, 3207
Phone: 1300 327 948
Fax: 03 9982 7409
http://www.easyit.com.au

Article source: http://pr-bg.com/content/view/256442/80/

Stop treating your datacentre as if it were a laptop: Symantec

Tuesday, September 3rd, 2013

Speaking at the Symantec Symposium in Sydney today, the company’s information security service manager Adrian Covich said that organisations are treating the security of their servers like laptops.

Despite servers staying in the datacentre and having vastly different security challenges, Covich said businesses protect them as if they were end points, like laptops, installing antivirus and data loss prevention packages and ignoring the fact that they have different challenges.

“The datacentre is being targeted, and it doesn’t matter if you’re a big organisation or a small organisation. It’s where the value is, and you need to protect it.”

He said that many of the challenges, such as dealing with lost laptops or malicious USB sticks, simply don’t apply to servers in the datacentre, and it didn’t make sense to treat them that way.

Furthermore, he argued that of the data that is stolen from organisations each year, most of it isn’t from laptops. He pointed to the recent Verizon Data Breach Investigation Report, which showed that 97 percent of data is actually from servers.

Attacks on servers are also quite different. Laptops are traditionally protected by antivirus products, intrusion detection/prevention systems, and possibly layers of firewalls. While these are basic necessities for servers, they do not do much for addressing user privilege escalation vulnerabilities, defending against SQL injection, and other attacks, he said.

Covich said that organisations that want to protect their servers like servers and not like laptops should be examining the use of measures such as sandboxing, even in virtualised environments.

One such advantage that administrators can take advantage of is the fact that servers are meant to do only a few specific things, and their environment is not frequently changing.

“I don’t want to be loading the latest version of iTunes just because I can. I know what the programs are that the server is meant to run. I’m going to make sure it only runs those.”

This goes hand in hand with application whitelisting, another recommendation that Covich made, which is one of the top strategies suggested by the Australian Signals Directorate for government departments.
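The whitelisting idea Covich describes can be boiled down to: a server should only ever run binaries it is known to need. Real application control is enforced by the operating system (as in the ASD guidance), not application code, but the core check is illustrative enough to sketch. The following is a hypothetical Python illustration, not any vendor’s implementation: compare a file’s SHA-256 digest against an approved set before treating it as runnable.

```python
import hashlib
from pathlib import Path

# Hypothetical sketch of application whitelisting: a server only runs
# binaries whose SHA-256 digest appears in an approved set. Real
# enforcement belongs in the OS; this just shows the core check.

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed(path: Path, allowlist: set[str]) -> bool:
    """True only if the file's digest appears in the approved set."""
    return sha256_of(path) in allowlist

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        approved = Path(d) / "approved-daemon.bin"
        approved.write_bytes(b"known server workload")
        rogue = Path(d) / "itunes-installer.bin"
        rogue.write_bytes(b"not on the list")
        allowlist = {sha256_of(approved)}
        print(is_allowed(approved, allowlist))  # True
        print(is_allowed(rogue, allowlist))     # False
```

Because servers run only a few, rarely changing programs, such an allowlist stays small and stable, which is exactly why this approach suits datacentres better than laptops.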

Article source: http://www.zdnet.com/au/stop-treating-your-datacentre-as-if-it-were-a-laptop-symantec-7000020145/

Flash breakthrough promises faster storage, terabytes of memory

Tuesday, July 30th, 2013

In the ongoing quest for faster access to data, Diablo Technologies has taken what could be a significant next step.

Diablo’s Memory Channel Storage (MCS) architecture, expected to show up in servers shipping later this year, allows flash storage components to plug into the super-fast channel now used to connect CPUs with memory. That will slash data-access delays even more than current flash caching products that use the PCI Express bus, according to Kevin Wagner, Diablo’s vice president of marketing.

The speed gains could be dramatic, according to Diablo, helping to give applications such as databases, big data analytics and virtual desktops much faster access to the data they need most. Diablo estimates that MCS can reduce latencies by more than 85 percent compared with PCI Express SSDs (solid-state disks). Alternatively, the flash components could be used as memory, making it affordable to equip servers with terabytes of memory, Wagner said.

Other than on-chip cache, the memory channel is the fastest path to the CPU, Wagner said. Not only do bits fly faster over this link, there are also no bottlenecks under heavy use. The connection is designed to be used by many DIMMs (dual in-line memory modules) in parallel, so each component doesn’t have to relinquish the bus for another one to use it. That saves time, as well as CPU cycles that would otherwise be used managing the bus, Wagner said.

The parallel design of the memory bus also lets system makers scale up the amount of flash in a server without worrying about diminishing returns, he said. A second MCS flash card will truly double performance, where an added PCIe SSD could not, Wagner said.

Diablo, which has been selling memory controllers for about 10 years, has figured out a way to use the standard DDR3 interface and protocols to connect flash instead of RAM to a server’s CPU. Flash is far less expensive than RAM, but also more compact. The MCS components, which come in 200GB and 400GB sizes, will fit into standard DIMM slots that typically accommodate only 32GB or so of memory. The only adaptation manufacturers will need to make is adding a few lines of code to the BIOS, Wagner said.

Enterprises are more likely to use MCS as high-capacity memory than as low-latency storage, said analyst Jim Handy of Objective Analysis.

“Having more RAM is something that a lot of people are going to get really excited about,” Handy said. His user surveys show many IT departments automatically get as much RAM as they can for their servers, because memory is where they can get the fastest access to data, Handy said.

“Basically, you’d like everything to be in the RAM,” Handy said. Virtualized data centers, where many servers need to share a large set of data, need a common store of data. But in other applications, especially with databases and online transaction processing, storage is only a cheaper and more abundant, though slower, alternative to memory. “Everything that’s on the storage is there only because it can’t fit on the RAM,” he said.

To implement the MCS architecture, Diablo developed software and a custom ASIC (application-specific integrated circuit), which it will sell to component vendors and makers of servers and storage platforms. Flash vendor Smart Storage Systems, which earlier this month agreed to be acquired by SanDisk, will be among the companies using the MCS technology, Wagner said. In addition, a tier-one server vendor is preparing about a dozen server models with the technology and will probably ship the first of them this year, Wagner said.

For the most part, Diablo doesn’t expect consumers or small enterprises to install MCS flash on their own computers. However, Diablo may work directly with enterprises that have very large data centers they want to accelerate, he said.

Using MCS flash to supplement DRAM would dramatically reduce the per-gigabyte cost of memory but also would allow for further consolidation of the servers in a data center, Wagner said. A large social networking company with 25,000 servers analyzed the MCS technology and said it would make it possible to do the same amount of work with just 5,000 servers.

That’s because its current DRAM-only servers can be equipped with only 144GB of memory, but MCS would allow each server to have 16GB of DRAM and 800GB of flash. With that much memory, each server can do more work so fewer are needed, Wagner said. Fewer servers would mean savings of space and power, which would translate into lower costs, he said.

Stephen Lawson covers mobile, storage and networking technologies for The IDG News Service. Follow Stephen on Twitter at @sdlawsonmedia. Stephen’s e-mail address is stephen_lawson@idg.com

Article source: http://news.techworld.com/data-centre/3461333/flash-breakthrough-promises-faster-storage-terabytes-of-memory/

How do you keep costs down when testing in the cloud?

Tuesday, December 25th, 2012

Our company moved development into the cloud because we wanted to test easily and quickly. Sometimes, we end up spending a little more than we should doing it. What is the best way to keep costs down when testing in the cloud?

The biggest source of wasted costs in Amazon Web Services (AWS) comes from leaving instances on because you’re thinking of them as servers. Instances are not servers and they don’t need to be kept running. In fact, they should be thought of as incredibly disposable components, something you can simply just shut off at a moment’s notice.

Testing in the cloud can be incredibly cheap and efficient if you do it right. The first rule of testing is to remember to “shut off the lights when you’re done.” You pay for every hour that an instance is running, so if you don’t need to test at night, shut the servers down before you leave. Try to encourage your developers to disable any systems they aren’t actively using. AWS chief technology officer Werner Vogels describes this in his commandment, “Thou shalt shut off the lights.”

Some companies have taken this to another level, having real-time graphs and lights on “dashboards” running in their offices. These real-time graphs can be used to show the total cost of the current running environment. Doing something like this encourages your developers to monitor their impact on the cost of the business more closely and gives them something to be excited about when they reduce costs. Just as you wouldn’t leave a light on when you’re the last one to leave the room, why would you leave a server on when you’re the last one to use it for the night?
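The “shut off the lights” rule is easy to automate. As a minimal, hypothetical sketch: given an EC2 `describe_instances`-style response, pick out the running instances tagged as test boxes so they can be stopped overnight. The `env=test` tag is an assumption, not an AWS convention; in a real script the returned IDs would be fed to a call such as boto3’s `ec2.stop_instances(InstanceIds=ids)`. The filtering itself needs no AWS access, so it is shown standalone:

```python
# Sketch: select running instances tagged env=test from an EC2
# describe_instances-style response, so a nightly job can stop them.
# The tag name is an assumption for illustration only.

def idle_test_instances(response: dict) -> list[str]:
    """Return IDs of running instances tagged env=test."""
    ids = []
    for reservation in response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            if inst.get("State", {}).get("Name") != "running":
                continue  # already stopped or terminating; nothing to do
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get("env") == "test":
                ids.append(inst["InstanceId"])
    return ids

if __name__ == "__main__":
    sample = {
        "Reservations": [{
            "Instances": [
                {"InstanceId": "i-0001",
                 "State": {"Name": "running"},
                 "Tags": [{"Key": "env", "Value": "test"}]},
                {"InstanceId": "i-0002",
                 "State": {"Name": "running"},
                 "Tags": [{"Key": "env", "Value": "prod"}]},
                {"InstanceId": "i-0003",
                 "State": {"Name": "stopped"},
                 "Tags": [{"Key": "env", "Value": "test"}]},
            ]
        }]
    }
    print(idle_test_instances(sample))  # ['i-0001']
```

Run from cron or a scheduled task at the end of the working day, this is the scripted equivalent of being the last one out of the room and hitting the light switch.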



This was first published in December 2012

Article source: http://www.pheedcontent.com/click.phdo?i=6db492405fc29100ad8e9acffa7f8c8f

Telefónica Jumps On Joyent To Launch Another Amazon Rival

Wednesday, November 7th, 2012

Telefónica is to use Joyent software to power an Infrastructure-as-a-Service (IaaS) offering to challenge Amazon Web Services’ dominance of the public cloud market in Europe.

Joyent only entered the UK IaaS market itself in April, hoping its SmartDataCenter and SmartOS software would help lure customers into its Amsterdam data centre. Now it is loaning that same software for use in Telefónica’s public cloud data centres powering its Instant Servers offering.


One of Telefónica’s big sells is its fibre network, which should be able to send data across data centres and clients at high speeds, promising low latency. The telecoms giant said it could offer 400 percent extra compute power in real time, so when cloud-connected services experience high levels of traffic, they will stand up.

Telefónica has promised a service level agreement (SLA) of 99.996 percent per year with financial compensation offered in the event of non-compliance.

“With the launch of Instant Servers Telefónica Digital seeks to meet the needs of thousands of businesses that need a cloud services platform that is easily scalable, with low latency and totally trustworthy, enabling them not only to quickly respond to their own needs, but also to the expectations of their customers,” said Carlos Morales, Telefónica Digital’s cloud and machine-to-machine technology director.

“This can all be done with significant cost savings as customers only pay for the type of cloud services they need and the time they use them for.”


Article source: http://www.techweekeurope.co.uk/news/telefonica-joyent-amazon-cloud-98524

Telefonica makes cloud more accessible

Wednesday, November 7th, 2012

Telefónica Digital is bolstering its global public cloud service with a toolkit that offers users greater control and provisioning of virtual servers.

Dubbed ‘Instant Servers’, which sounds like something created by adding a spoonful of granules to a mug of hot water, the service is based on technology from Joyent, which allows customers to configure the size of their virtual server in terms of RAM, CPU and hard drive as well as select the operating system (SmartOS, Ubuntu, CentOS, Windows Server, Fedora and Debian) the virtual server runs on.

All the hardware resides in Telefónica’s enterprise-grade data centres, connected to Telefónica’s fibre optic network, coupled with an SLA of 99.996 per cent per year and a turbo-charged boost of computing power by up to 400 per cent in real time to handle spikes in demand.

Carlos Morales, Telefónica Digital’s Cloud and M2M Director, said: “With the launch of Instant Servers Telefónica Digital seeks to meet the needs of thousands of businesses that need a cloud services platform that is easily scalable, with low latency and totally trustworthy, enabling them not only to quickly respond to their own needs, but also to the expectations of their customers. This can all be done with significant cost savings as customers only pay for the type of cloud services they need and the time they use them for”.

In related news, backbone network and datacentre operator Interoute has announced plans to further build out its European datacentres to meet demand, with top line revenues jumping 15 per cent year on year, to €296.7m, in the first three quarters of 2012.

The company recently extended its Virtual Data Centre platform into its Berlin facility and prepared its Paris Data Centre for the launch of the same. The company was also selected by the UK Government as an IaaS (Infrastructure as a Service) provider for the nation’s G-Cloud programme.

On Tuesday, cloud services firm Skyscape also won a contract with the Ministry of Defence in the UK for the hosting of its GEMS Online system, which will enable MoD and Armed Forces personnel to make suggestions to help the MoD transform.

Article source: http://www.businesscloudnews.com/component/content/article/829-telefonica-makes-cloud-more-accessible.html

Calxeda Deploys Chips in Penguin Computing Servers

Wednesday, October 17th, 2012

Austin-based Calxeda, fresh off a $55M funding round, said today that its chips are being shipped in a new line of servers from Penguin Computing. Calxeda said that it has now shipped “thousands” of its chips to OEM customers and end users, as it starts rolling out its products into the market. Calxeda’s long-awaited chips take advantage of the low power consumption of ARM processors to reduce the power use in data centers. Those chips, typically used in mobile platforms, have not been used previously in data center operations. Calxeda said its EnergyCore ECX-1000 system-on-a-chip (SoC) is being aimed primarily at optimized racks for implementing public and private clouds, as well as for “warehouse-scale” datacenters. Calxeda is backed by ARM Holdings, Advanced Technology Investment Company, Austin Ventures, Battery Ventures, Flybridge Capital Partners, Highland Capital Partners, and Vulcan Capital.

Article source: http://www.texastechpulse.com/calxeda_deploys_chips_in_penguin_computing_servers/s-0045701.html

ARM: The nature of servers has changed

Saturday, September 15th, 2012

Changes brought about by the rise in cloud computing have had a huge impact on servers and server design, and could be a pivotal factor in bringing ARM’s low-power RISC chips into the datacentre.

ARM believes that the rise of cloud companies like Facebook, Google and Amazon is bringing about a change in attitudes to processors that could make its chips relevant for these companies, posing a potential threat to Intel and AMD’s server businesses.

Over the past few years, “the nature of servers has changed,” ARM’s general manager of the processor and physical IP divisions, Simon Segars, told ZDNet this week. “It’s really the growth in cloud [and] growth in companies doing web hosting [and] social sites. As that has grown the nature of servers has changed.”

These workloads place an emphasis on light computing tasks yet with lots of concurrency and frequent use of vast amounts of data, Segars said. Because of this, people are becoming ever more concerned with the power consumed by each individual processor, and this could bring ARM chips into the datacentre.

Companies want the thermal design power – how much power, roughly, a chip uses – to be “as low as possible,” Segars says. ARM’s chips, which sit at the heart of the vast majority of the world’s mobile phones, including Apple’s just-released iPhone 5, consume much less power than Intel’s processors.

Calxeda’s EnergyCore ARM server consumes around 5W, and the company published the results of an ApacheBench benchmark in June that showed the technology beating a 102W Intel Xeon processor.

Over the past few years, ARM has been developing new chips that are designed for servers as well. In October last year, it announced a 64-bit design. 64-bit is considered to be a must-have feature for servers, so in about a year, when ARM licensees start churning out the processors, big things could happen in the datacentre.

This change will be driven by the “mega-trend” of growth in data-intensive mobile devices, like smartphones, Segars said. “A lot of those data services can be streamed by ARM-based servers.”

ARM’s challenges

The counter-argument to ARM’s is that Intel has years of experience of building server processors and has developed a vast amount of x86-specific technologies to support server workloads.

Another is that as ARM designs new chips for the datacentre it will have to put additional features into the processors, which will lead to a rise in the amount of power consumed.

Meanwhile, Intel is preparing to launch its Haswell processor, which will consume as little as 8W of power, taking a high-end compute chip into territory typically occupied by ARM.

Furthermore, ARM has less broad support for the types of software that run on servers, though Segars notes this is changing: “In the server space the Linux kernel has been optimised for ARM for many years,” he said.

For the small and medium business there is no evidence that ARM chips have a convincing play there, with these companies typically preferring x86 servers from major enterprise vendors like HP, IBM and Dell due to a combination of the Intel processors, comprehensive service and support contracts, and the associated software ecosystem. There is not really an equivalent ecosystem of software for ARM yet.

That said, these vendors are interested in ARM themselves. HP has produced a prototype ARM-based server using Calxeda’s technology as part of its Project Moonshot hardware development scheme.

ARM gambles on success in the major clouds

ARM’s bet, though, is that the very large cloud operators – Google, Amazon, Facebook – could be tempted by its chips due to the impact they will have on their datacentre electricity bills. Over the lifetime of a datacentre, it’s typical that at least 50 percent of the cost of the facility comes from the electricity it uses, so driving this down is a priority for these companies.

Related to this is the rise of software-as-a-service tools for small businesses, like Salesforce for CRM or Microsoft Office 365 for productivity. Adoption of these tools means businesses have less need for hardware themselves and instead outsource the capital cost to the cloud operator.

“If you’re focused on being a dental practice or a hospital, the last thing you want to do is manage IT,” Ian Ferguson, ARM’s director of server systems and ecosystem, told ZDNet. “Fundamentally those guys aren’t going to be buying as much IT equipment in the next few years.”

Whether ARM’s chips have what it takes will become apparent over the next couple of years as 64-bit designs come onto the market.

“I think you can regard ARM in the cloud as a player in 2014,” Ferguson said. “There are trials going on in end-users right now. I think those will start to translate to volume in that sort of timeframe… it takes some time to get the silicon right, to get the platforms right, and on 64-bit we’ve got to get more of an ecosystem going.”

Article source: http://www.zdnet.com/arm-the-nature-of-servers-has-changed-7000004294/

Virtual Internet Launches New Virtual Datacentre

Friday, July 27th, 2012

/PRNewswire/ –

Virtual Internet, a leading managed web host since 1996, has launched a new product to assist with the management and utilisation of cloud computing.

The team at Virtual Internet has announced a new product called Virtual Datacentre, a simple control panel that allows users to launch multiple cloud servers automatically from 15 locations around the world in an instant and without having to deal with other suppliers.

Patrick McCarthy, Managing Director of Virtual Internet, has been really upbeat about launching the new datacentre: “We found that what our customers were really after was a control panel that not only let them manage their current servers, but also enabled them to launch a new VPS easily and quickly, without having to start a new contract. So that’s what we’ve built!”

The control panel offers a quick, step-by-step process for launching multiple servers. Once a customer has selected the location and the amount of resources they want for the server, they are then ready to launch and the server is deployed within minutes.

Utilising the latest cloud technology, the servers are able to offer full scalability along with excellent speeds. The VI cloud brain is able to dynamically allocate RAM and CPU as your cloud server needs them. Virtual Internet are confident that this will solve the problem of slow sites and downtime.

As aforementioned, customers are able to launch servers from 15 different locations, including those in the UK, mainland Europe, USA and Asia. Virtual Internet is hopeful that this will give the product global reach, and they look forward to receiving feedback regarding their new product. They will also be monitoring the impact the Virtual Datacentre will have.

Find out more about the Virtual Datacentre and watch the video by visiting their dedicated website: vdc.vi.net.

SOURCE Virtual Internet

Article source: http://www.sacbee.com/2012/07/26/4662917/virtual-internet-launches-new.html