Posts Tagged ‘model’

HP Research Reveals A Hybrid Future For Cloud

Wednesday, January 2nd, 2013

by Biztech2.com Staff
2nd January, 2013
in Cloud Computing
HP has disclosed research findings showing that enterprises see hybrid delivery as the future of cloud, and highlighting the hurdles they face in creating hybrid environments.

As cloud adoption gains momentum, it is clear that enterprises expect to use a hybrid delivery model consisting of traditional IT, managed cloud, and private and public cloud offerings going forward. According to a new study commissioned by HP, 69 percent of organisations surveyed in Asia Pacific and Japan said that they intend to pursue a hybrid cloud delivery model.

The study also revealed that more than 60 percent of senior business and technology executives surveyed are concerned about vendor lock-in when implementing cloud solutions. Seventy-two percent of respondents said that portability of workloads between cloud models is also important when implementing cloud solutions.

When considering adoption of a public cloud solution, technology executives stated that their organisations need an open, transparent underlying infrastructure (68 percent), service level agreements (60 percent) and enterprise billing (47 percent) before putting production applications in the cloud.

Additionally, 62 percent of business and technology decision makers said it is important for their organisations to be able to burst to an external cloud services provider to gain instant access to additional capacity and easily manage uneven usage demands.

Tags: HP, Cloud, APJ

 
 

 
 
 



Article source: http://biztech2.in.com/news/cloud-computing/hp-research-reveals-a-hybrid-future-for-cloud/150942/0

Law Firms And Cloud Computing

Saturday, December 29th, 2012

The term “cloud computing” has been tossed about as the new trend in IT. Unfortunately, just as often as we hear the term echoed as the “next big thing,” a distinct definition rarely follows. So what is cloud computing? The United States National Institute of Standards and Technology (“NIST”) defines cloud computing as:

A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.  This cloud model is composed of five essential characteristics, three service models, and four deployment models.

Perhaps more succinctly put, cloud computing “involves the use of a service to store, transmit and process information and employs the internet as a means to access and move the information.”  [1]

How does contracting with a cloud service provider affect confidential information that is shared with a third party? American Bar Association Model Rule 1.6 requires lawyers to maintain all information relating to client representation in confidence. [2] Under Model Rule 1.6 an attorney must take “reasonable precautions” to safeguard against inadvertent disclosure.  The precise scope of what constitutes “reasonable precautions” is never provided; rather, the reasonableness of the precaution is a fact-sensitive analysis.  Id.  An attorney must consider the risk of harm, the likelihood of disclosure, and the extent to which the privacy of the communication is protected by either law or confidentiality agreement. Id.

In Warshak v. United States, 490 F.3d 455, 473 (6th Cir. 2007), the Court offered some guidance as to what constitutes appropriate precautions when dealing with a third-party cloud service.  The Court reasoned that a key component in maintaining a reasonable expectation of privacy is the assurance from the third-party host that the information “provided” will neither be monitored nor audited.  [3]

In September of 2010, the New York State Bar Association’s Committee on Professional Ethics released Opinion 842.  Opinion 842’s topic was “using an outside online storage provider to store client confidential information.”  Opinion 842’s synopsis states that:

A lawyer may use an online data storage system to store and back up client confidential information provided that the lawyer takes reasonable care to ensure that confidentiality will be maintained in a manner consistent with the lawyer’s obligations under Rule 1.6.  In addition, the lawyer should stay abreast of technological advances to ensure that the storage system remains sufficiently advanced to protect the client’s information, and should monitor the changing law of privilege to ensure that storing the information online will not cause the loss or waiver of any privilege.

In addition, Opinion 842 provides that an attorney must also exercise reasonable care in preventing “others whose services are utilized by the lawyer from disclosing or using confidential information of a client.”  This requires affirmative steps to safeguard against inadvertent disclosure.  The New York State Bar Association has summed up advisory opinions from several other jurisdictions finding that adequate precautions must be in place when using electronic storage of client files.

New Jersey Opinion 701 (2006) (lawyer may use electronic filing system whereby all documents are scanned into a digitized format and entrusted to someone outside the firm provided that the lawyer exercises “reasonable care,” which includes entrusting documents to a third party with an enforceable obligation to preserve confidentiality and security, and employing available technology to guard against reasonably foreseeable attempts to infiltrate data); Arizona Opinion 05-04 (2005) (electronic storage of client files is permissible provided lawyers and law firms “take competent and reasonable steps to assure that the client’s confidences are not disclosed to third parties through theft or inadvertence”); see also Arizona Opinion 09-04 (2009) (lawyer may provide clients with an online file storage and retrieval system that clients may access, provided lawyer takes reasonable precautions to protect security and confidentiality and lawyer periodically reviews security measures as technology advances over time to ensure that the confidentiality of client information remains reasonably protected).

NYSBA Opinion 842 (emphasis added)

For more information regarding the use of cloud computing in law firms, contact Arturo Castro, an associate at Cullen and Dykman LLP. Arturo can be reached at acastro@cullenanddykman.com.

  1. [1] Peter M. Lefkowitz, Department: Practice Tips: Contracting in the Cloud: A Primer, 54 B.B.J. (Boston Bar Journal) 9.
  2. [2] Shellie Stephens, Recent Development: Going Google: Your Practice, the Cloud, and the ABA Commission on Ethics 20/20, 2011 U. Ill. J.L. Tech. & Pol’y (University of Illinois Journal of Law, Technology and Policy) 237.
  3. [3] Shellie Stephens, Recent Development: Going Google: Your Practice, the Cloud, and the ABA Commission on Ethics 20/20, 2011 U. Ill. J.L. Tech. & Pol’y (University of Illinois Journal of Law, Technology and Policy) 237.

Article source: http://www.jdsupra.com/legalnews/law-firms-and-cloud-computing-98577/

Two minute briefing: Software as a service

Wednesday, December 26th, 2012


2012-12-25

For this article we stick our head in the Cloud and have a go at explaining software as a service.

There is an ancient bylaw or something that dictates that all important computer concepts must have a TLA – a three letter acronym – but software as a service goes one better, and is often referred to as SaaS.

SaaS is a relatively new computing concept based on a very old computing concept, namely to have all that complicated geeky computer software and storage stuff looked after remotely, rather than in-house.
 
In days of old this would have meant a computer like the one in the film “Billion Dollar Brain” (ask your Dad, or check your TV listings over Christmas and New Year) being at the centre of an information hub, pumping out data to “dumb terminals” – computer devices with just enough computing power to converse with the central computer.

These days the central computer is likely to be an array of networked machines located somewhere “out there” (points to the sky, hence the phrase “the Cloud”), and the terminals accessing the data are far from dumb.

In fact, your average mobile phone probably has more computing power than the computer NASA used to send a spacecraft to the moon.

The important thing about the terminals, however, is not the computing power they have, but the fact they are connected to the Internet (and therefore the Cloud) and can access, via a browser, software and data that is held remotely.

The good thing about this for the user is that the software is agnostic about the operating system: you can use any flavour of Windows, Linux, Apple, Blackberry, OS/2 or even DOS.

Furthermore, when a SaaS supplier wants to upgrade a product, it is all done at the supplier’s end, which means that at the customer’s next board meeting the finance director does not have to pretend to understand what the information technology director is going on about when he talks of the need to upgrade all the firm’s kit and software and also get 20 geeks in over the weekend on double-bubble to upgrade the computer system.

From the supplier’s point of view, this has advantages as well. They can often push out mini-upgrades and patches without the customer noticing, for instance.

If you are invested in, or are thinking of investing in, a software company that is moving over to the software as a service model, the big thing to watch out for is the large revenue hit the company suffers when a customer moves over from the licensing model.

You see, with SaaS, the customer will pay monthly or maybe quarterly for an agreed period of time (say, two or three years), whereas under the licensing model, the customer would pay upfront for the software to be installed on a certain number of workstations.

Come renewal time, the licensing option can get a bit eye-watering, whereas with the SaaS model, the customer has become used to the drip-drip-drip subscription fees.
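To make that revenue timing concrete, here is a minimal sketch, with invented prices purely for illustration, of how the two models pay out over the same three-year contract:

```python
# Hypothetical numbers, purely illustrative: compare vendor revenue timing
# for an upfront licence versus a SaaS subscription over a 3-year term.
LICENCE_UPFRONT = 36_000   # one-off licence fee, paid in month 1
SAAS_MONTHLY = 1_200       # subscription fee, paid every month
MONTHS = 36                # agreed contract period (three years)

for month in (1, 12, 24, MONTHS):
    licence_total = LICENCE_UPFRONT      # all revenue lands up front
    saas_total = SAAS_MONTHLY * month    # revenue drips in monthly
    print(f"month {month:2d}: licence {licence_total:>7,} vs SaaS {saas_total:>7,}")

# month  1: licence  36,000 vs SaaS   1,200  <- the "big revenue hit"
# month 36: licence  36,000 vs SaaS  43,200  <- SaaS catches up over the term
```

The crossover at the end of the term is why the switch hurts the supplier’s reported revenue in year one even when the lifetime value is higher.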

It is probably also easier to sell new features via the SaaS model. Johnny Customer looks at a web page and sees an “audit” option greyed out, clicks on it and finds he can have this service 30 seconds after the customer services operator at the other end has seen the funds transferred into the supplier’s bank.

Typically, SaaS is very popular for business applications such as payroll processing, personnel records, customer relationship management and enterprise resource planning, but there is no reason why the Cloud should not be casting its shadow over a lot more business areas in the future.


Article source: http://www.proactiveinvestors.com.hk/market_news/6434-two-minute-briefing-software-as-a-service.html

The purpose of software-defined networks in cloud computing

Saturday, December 15th, 2012

It may seem ironic, but the most difficult thing about software-defined networking is actually defining it. Given the diversity of views about what a software-defined network is, it’s hardly surprising that SDN’s specific role in the cloud is elusive.

There are two software-defined networking models and two different SDN missions in cloud computing. Since networks create the cloud, managing the interplay between these two factors could be the key to cloud efficiency and success.

As an information service, the Internet treats the network as a transparent partner. With the cloud, a user’s applications reside in, and become part of, the cloud. And many agree that means at least some of the network must be integrated with the cloud. The current consensus is that the data center has to be made cloud-specific, but should the WAN also be a “resource” to the cloud?

To answer that, let’s first look at why SDN must include the data center.

In cloud computing, the user joins a community the cloud creates. Cloud computing service providers face the issue of multi-tenancy at the network level as much as they do at the CPU/server and database levels. Shared resources must be shared in such a way that one user’s applications do not impact another user’s apps. Therefore, the resources of all users have to be partitioned so they are private and secure. While network technologies such as IP and Ethernet each have virtual network capability, these capabilities are limited in terms of how many tenants can be supported and how isolated each tenant is.

Cloud software providers see the network as a partnership between the data center network and the cloud. Amazon Web Services’ Elastic IP addresses are an application-driven approach to integrating the network and the cloud; OpenStack includes network services as one of the resources it virtualizes, along with storage and CPU/server. OpenStack’s Quantum interface defines how a virtual network can be created to “host” CPU and database elements, for example. However, Quantum doesn’t define the technology used to create that virtual network. Each vendor is responsible for mapping its technology to the virtual network models that Quantum defines.

Two SDN models for cloud computing
The need to first accommodate multi-tenancy and second to support cloud control of network services brings us to the technology side of SDNs. Two models of SDN have emerged: the “overlay model” and the “network model.”

In the overlay model, software (often cloud-linked software) creates the virtual network; in the network model, network devices create those virtual networks.

Overlay SDNs, such as VMware’s recently acquired Nicira technology, use software to partition IP or Ethernet addresses into multiple virtual subnetworks, similar to what TCP does with ports. A new set of network APIs allows applications to access these subnetworks as though they were IP or Ethernet networks. The software keeps the traffic of multiple subnetworks secure and isolated. Network devices don’t “see” overlay virtual networks, so they don’t treat the traffic differently.
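As a rough sketch of the overlay idea (a toy model, not any vendor’s implementation), the software below tags every frame with a virtual network ID and enforces isolation itself, while the underlying “physical” network carries the frames without inspecting them:

```python
# Toy model of overlay isolation: each virtual network gets an ID, and the
# overlay software delivers frames only within the same virtual network.
class OverlayNetwork:
    def __init__(self):
        self.endpoints = {}  # (vnet_id, address) -> handler

    def attach(self, vnet_id, address, handler):
        self.endpoints[(vnet_id, address)] = handler

    def send(self, vnet_id, dst_address, payload):
        # Physical devices never inspect vnet_id; isolation lives here,
        # in software, which is why overlays can't guarantee per-tenant QoS.
        handler = self.endpoints.get((vnet_id, dst_address))
        if handler is None:
            return  # drop: destination not in this tenant's virtual network
        handler(payload)

net = OverlayNetwork()
net.attach(vnet_id=10, address="10.0.0.2", handler=lambda p: print("tenant A got", p))
net.attach(vnet_id=20, address="10.0.0.2", handler=lambda p: print("tenant B got", p))

net.send(10, "10.0.0.2", b"hello")  # delivered only to tenant A's endpoint
net.send(20, "10.0.0.2", b"hello")  # same address, different tenant, isolated
```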

Network-hosted SDNs are built from network devices; therefore, they manage SDN traffic directly. Some network vendors — Cisco in particular — propose to add software control to current devices and networks by adapting current network technology and devices to conform to SDN principles, creating an “evolutionary SDN.”

Other network vendors, including many SDN startups, propose to evolve network devices to a simpler form, removing the routing intelligence and path/traffic management that now reside in devices and centralizing them in cloud-hosted software.

But both the overlay and network models and missions collide in the WAN. If cloud virtual networks have to extend beyond data centers in the cloud and outward to the user, then it’s difficult to see how network-hosted SDN implementations of virtualization could be avoided, for three reasons:

  1. Overlay SDNs rely on a software component to create the virtual networks. It’s harder to ensure a user has the required software to participate.
  2. Network appliances with software that can’t be easily updated can’t use overlay virtualization at all.
  3. WAN quality of service (QoS) can’t be assured with an overlay SDN because the SDN can’t manage traffic handling. A network-hosted SDN can offer exactly the same interfaces and services to users as it did before, requiring no changes to software or devices. It could also manage traffic and guarantee QoS. Therefore, end-to-end SDN missions favor an SDN model that is implemented in the network – not over it.

SDN is increasingly accepted as a path to “cloud networking,” meaning the transformation of networks and services to support the use of cloud computing on a large scale. Navigating the various missions and technology models of SDNs is critical to properly position cloud services and realize the benefits of cloud computing. For cloud users, knowing their cloud providers’ SDN plans, as well as the plans of private cloud software stack vendors, is the most critical element in assessing these providers’ long-term value.

About the author
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecommunications and data communications since 1982.



This was first published in December 2012

Article source: http://www.pheedcontent.com/click.phdo?i=1b53cd86b1eb4aa45b86171509409969

RSA’s Art Coviello: 8 Computer Security Predictions For 2013

Friday, December 7th, 2012

Guest post written by Art Coviello

Art Coviello is Executive Vice President of EMC, and Executive Chairman of RSA, EMC’s security division.

It’s that time of year again when I make my bold (“somewhat safe” depending on your point of view) predictions about IT security for the coming year – 2013.

The French journalist, writer and social commentator, Jean-Baptiste Alphonse Karr, is the author of the witty expression, “plus ça change, plus c’est la même chose” which, as is almost always the case, sounds much more lyrical than the English, “the more things change, the more they stay the same.” In reviewing my prior years’ prognostications, that phrase immediately popped into my head. How not to be boring when we face many of the same challenges?

I am not sure I can, because:

  • 1. The hackers will likely get even more sophisticated.

Evidence of criminals collaborating with rogue nation states, exchanging methodologies, buying and selling information, and even subcontracting their unique capabilities expands their collective reach and enhances their mutual learning curves.

  • 2. Our attack surfaces will continue to expand and any remaining semblance of a perimeter will continue to wash away.

Both will surely happen.

My EMC colleague, Chuck Hollis, in his set of themes for 2013, says that next year organizations will come to terms with the pervasiveness of mobility and start to catch up on the offering of services to their users. Bingo. Wider attack surfaces. In addition, and somewhat needless to say, though I’ll say it anyway – the slow but steady march to cloud-oriented services will once again expand attack surfaces at the expense of the perimeter.

This all leads me to my next moments of déjà vu, which include:

  • 3. These changes will occur whether security teams are ready or not.

In too many cases, not. There is a critical skills shortage of security professionals and many organizations can’t keep up.

  • 4. And, national governments will continue to diddle or, should I say, fiddle (while Rome burns), failing to legislate on rules of evidence, information sharing and the reform of privacy laws.

Lack of privacy reform is particularly burdensome based on today’s realities, since many organizations have literally been put in the position of violating one set of privacy laws if they take the necessary steps to protect information (which they are legally obligated to do based on another set of privacy laws). Confused? So am I, but how would you like to be confused – and liable?

I hate the phrase “Cyber Pearl Harbor” because I think it is a poor metaphor to describe the state I believe we are in. However, I genuinely believe we are just a hair away from some form of lesser catastrophic event that could do damage to the world economy or critical infrastructure.

  • 5. It is highly likely that a rogue nation state, hacktivists or even terrorists will move beyond intrusion and espionage to attempt meaningful disruption and, eventually, even destruction of critical infrastructure.

If all of this sounds depressing, well, it is. This isn’t fear mongering. It is a plausible extrapolation from the facts. But we can change the trajectory. There is already a tectonic shift underway from the perimeter to an intelligence-based security model.

In an age where breaches are probable, if not inevitable, organizations are realizing that static, siloed, perimeter defenses are ineffective against the evolving threat landscape. Only an intelligence-based model that is risk-oriented and situationally aware can be resilient enough to minimize or eliminate the effects of attacks.

So, now comes a good news:

  • 6. Responsible people in organizations from all verticals, industries and governments will move to that newer intelligence-based security model and pressure governments to act on our collective behalf.
  • 7. I also predict a significant uptake in investment for cloud-oriented security services to mitigate the effects of that serious shortage in cyber security skills.
  • 8. Big Data analytics will be used to enable an intelligence-based security model.

Big Data will transform security, enabling true defense in depth against a highly advanced threat environment.

One final note. If we wish to avoid going over the “security” cliff and really want change we can believe in, we must act more collaboratively and decisively than ever before. The stakes are getting too high for us to wait another year.

See also: Preparing For Cyberwar: An Interview With Art Coviello

Article source: http://www.forbes.com/sites/ericsavitz/2012/12/07/rsas-art-coviello-8-computer-security-predictions-for-2013/

Gridstore Hires Axcient Veteran Chris Sterbenc to Drive Channel Sales

Tuesday, December 4th, 2012

Former Axcient VP Chris Sterbenc has joined Gridstore as VP of sales, where he will drive channel expansion and report directly to CEO Kelly Murphy. After focusing on MSP- and VAR-centric cloud disaster recovery for the past few years, Sterbenc now focuses his attention on software-defined storage, a growing market that emerged from grid computing — a predecessor to cloud computing.

The term Grid Computing was popularized around 2005 as Oracle and other large technology companies explained elastic, scalable, on-demand compute services to end-customers, both on and off premises. Gradually, public and private cloud computing became the mainstream terms for that on-demand approach to computing.

Gridstore, meanwhile, claims to be “first to market with an advanced software defined storage platform that scales as a business’s data needs scale — no overprovisioning, no wasted capacity. The Grid solution revolutionizes the way storage is managed, seamlessly integrating into existing environments.”

Sterbenc and Axcient parted ways back in October 2012 or so. Now, Sterbenc will lead Gridstore’s channel and sales charge. The company announced the Gridstore Accelerate Partner Program in November 2012, targeting mid-market channel partners that want storage solutions.  Gridstore claims its business model is 100-percent channel focused, with a “pay-as-you-go sales model” for VARs and their end-customers.

Investors certainly are interested. Gridstore raised $12.5 million in Series A funding in October 2012.

Sterbenc, meanwhile, has previously built a range of channel programs for multiple companies. Before his time at Axcient, he held VP posts at Untangle and Apparent Networks (now AppNeta). He also had sales and marketing positions at Microsoft, NCompass Labs and Rainmaker Systems.

The VAR Guy’s big question: Is Gridstore an on-premises solution? And if so, how did Gridstore design a storage solution for a pay-as-you-go sales model that has caught on with IT service providers and many customers? MSPmentor has some of the answers in this blog.

The VAR Guy and Sterbenc will likely talk in the next few days. Stay tuned for more answers and insights.

Article source: http://www.thevarguy.com/2012/12/03/gridstore-hires-axcient-veteran/

AWS cloud security model relies on shared security partnership

Sunday, December 2nd, 2012

LAS VEGAS — Amazon Web Services LLC is proud of its commitment to securing its infrastructure and enabling its customers to meet compliance mandates, but the top AWS security and compliance manager says the cloud provider simply can’t shoulder the compliance burden without customers doing their part.

The shared responsibility model is really something you should understand really well before building or even evaluating a deployment into AWS.

Chad Woolf,
director of risk and compliance, AWS

Speaking to attendees Thursday at the inaugural AWS re:Invent conference, AWS Director of Risk and Compliance Chad Woolf touted the pledge the Seattle-based cloud infrastructure provider has made to securing its infrastructure and undergoing numerous third-party audits, internal audits and risk assessments to ensure its customers can meet any required government or industry mandate.

Woolf said that when he joined AWS about three years ago, the provider didn’t have any certifications it could show customers to prove its due diligence, other than a single SAS 70. Today, it holds SOC 1 and SOC 2, ISO 27001, PCI DSS (Payment Card Industry Data Security Standard) and many other designations that customers inherently recognize as signs of its commitment to a sound cloud security model.

“We have an extremely audited environment and a really secure environment,” Woolf said. “In many companies, the compliance drives the security. … At AWS, we do everything the right way. On the back end, we make sure we have everything covered.”

Woolf indicated, however, that many AWS customers don’t understand that the provider relies on a shared security model, meaning that AWS manages some specific responsibilities related to the underlying security of the environment, but each customer must secure its own platform instances, applications and data.

“I spend 70% of my time explaining, directly with customers, the differences of what we’re responsible for and what the customer is responsible for in managing security in the cloud,” Woolf said. “AWS is responsible for the physical hardware, the infrastructure, the data centers themselves and the virtualization infrastructure — the hypervisor. The customers are responsible for everything on top of that.”

Woolf shared a story about a merchant customer that built a cardholder data environment on AWS shortly after AWS earned its designation as a PCI-validated service provider. Unfortunately, the customer didn’t understand that it needed to take the same steps to secure its systems as it would in an on-premises environment, including hardening operating systems, implementing firewall rules and monitoring network traffic.
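As an illustration of the customer-side half of that split, here is a minimal sketch using boto, the Python AWS SDK of that era; the security group name, port choices and corporate address range are hypothetical, and OS hardening and traffic monitoring would still sit on top of this:

```python
# Sketch: the customer, not AWS, defines the instance-level firewall.
# Assumes boto (the 2012-era Python SDK) and configured AWS credentials;
# the group name and addresses below are invented for illustration.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Security group for the cardholder data environment's web tier.
sg = conn.create_security_group("cde-web", "hardened group for CDE web tier")

# Open only what the application needs: HTTPS from anywhere...
sg.authorize(ip_protocol="tcp", from_port=443, to_port=443, cidr_ip="0.0.0.0/0")
# ...and SSH solely from the corporate network, never the whole Internet.
sg.authorize(ip_protocol="tcp", from_port=22, to_port=22, cidr_ip="203.0.113.0/24")

# Patching, OS hardening and network monitoring inside the instances also
# remain the customer's job; none of it is done by AWS on their behalf.
```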

“They put it out there and it was basically exposed,” Woolf said. “The company’s QSA [Qualified Security Assessor] came to town and said, ‘What are you doing?’”

To help its customers avoid those kinds of mistakes, AWS created documentation that details each of the controls PCI DSS calls for, and whether responsibility lies with AWS or the customer, Woolf said. “The shared responsibility model is really something you should understand really well before building or even evaluating a deployment into AWS,” he said.

Attendee Derrick Burton, a Washington, D.C.-based IT executive for a consulting firm, expressed some skepticism regarding the amount of responsibility AWS is willing to accept for security. He said that his clients put their trust in his firm to support or manage their information security, but in turn his organization has to trust AWS, which, based on its statements, tries to shed as much security responsibility as it can.

“Executives from AWS say, ‘We’re building this platform for you to sit on, but you’re in charge of securing what’s in it,’” he said. “That doesn’t sound like ‘shared responsibility’ to me.”

Burton credited AWS for its transparency, as well as for helping advance the state of cloud computing to the point where so many companies now use it. Nevertheless, he said he’d like to see AWS change its language a bit and work with its customers’ IT and information security teams more actively to ensure they understand how to keep their cloud implementations secure.

Woolf’s organization is committed to helping customers meet compliance mandates while using AWS, he said. A bank using AWS recently underwent an audit by federal banking regulators, he added, and as part of that process, he spent a full day meeting with the regulators, talking about the provider’s backup and recovery processes and its baseline security policies and procedures, and offering a look at the infrastructure. As a result of that experience, he said, his group is in the final stages of creating a new reference document that will help government regulators review a customer’s use of AWS.

Woolf also highly recommended that customers read through the considerable industry guidance on cloud security, particularly the National Institute of Standards and Technology’s guidance on security and privacy in public cloud computing and the Cloud Security Alliance’s guidance for critical areas of focus in cloud computing.

“I’m wholly confident that what we do can meet the needs of any customer that’s regulated or that has compliance requirements,” Woolf said. “Not only have we done this before with many different customers, but also we’re continually putting out new reports and new things that will make it easier for customers to do that.”




Article source: http://www.pheedcontent.com/click.phdo?i=e85759ccb20e6889b54053a5b852d2a2

How CTO Joel Gilbert built his own Exadata

Thursday, November 29th, 2012

An Oracle Exadata machine would cost Joel Gilbert at least $200,000, and that’s just the hardware. Instead, Gilbert built his own version for one-fifth the cost.

A SaaS model suffers from network latency anyway. Team that with I/O latency and it’s just a losing model.

Joel Gilbert,
CTO, Pipkins Inc.

True, it’s not as powerful as an Exadata. Gilbert, chief technology officer at workforce management software company Pipkins Inc., acknowledges that. But according to him, he doesn’t need something as powerful as Exadata. All he needed to do was find a cheaper way to solve the same problem that Exadata aims to solve: I/O latency.

Any IT shop doing a lot of database writes will run into the problem of I/O latency. The database needs to pass data to the storage layer and vice versa, and the pipe is only so thick. Oracle is aware of the problem, and Exadata attacks it at the storage layer. In particular, the storage cells that come with Exadata can do much of the processing previously left up to the CPUs. It’s called Exadata Smart Scan.

Pipkins, based in Chesterfield, Mo., is a Software as a Service (SaaS)-based workforce management company founded almost 30 years ago. According to Gilbert, a big part of his job is dealing with and mitigating latency of all kinds.

“A SaaS model suffers from network latency anyway,” he said. “Team that with I/O latency and it’s just a losing model.”

In 2009, Gilbert said, the situation reached a tipping point. The latency was brutal. So he brought all the storage area network (SAN) vendors in and told them he needed faster I/O. EMC and the rest of them handed him options that would stress the bottom line.

“They wanted to sell us racks,” Gilbert said. “Big expensive things.”

But he wasn’t buying it. Gilbert is of the opinion that hard disks are going the way of the dinosaur, what he calls a “monolith of spinning spindles, waiting to die.” Not only are they costly, but they also eat up a lot of room in the data center, and square footage there is at a premium. Gilbert was faced with a tough choice: Buy a bunch of SANs and then turn around and raise prices on Pipkins’ customers — which they would hate — or find some kind of alternative.

Gilbert found the alternative.

It came not in the form of hordes of new hardware he had to rack and stack in his data center. Instead he bought a bunch of PCIe flash memory cards from Fusion-io that he could plug into his existing commodity server hardware. The effects were immediately apparent. Engineers at the company thought the reports were wrong at first because the I/O latency “just disappeared.” Gilbert said they can get more use out of each of their servers now as well. Before, CPU use was low because the system was waiting on I/O; with the cards installed, I/O isn’t a problem and so the CPU can be saturated.
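A back-of-the-envelope model, with invented figures purely for illustration, shows why removing the I/O wait lets the CPU saturate:

```python
# Toy utilization model: a request needs some CPU time plus a synchronous
# I/O wait. While the CPU waits on I/O it sits idle, capping utilization.
def cpu_utilization(cpu_ms, io_ms):
    return cpu_ms / (cpu_ms + io_ms)

# Hypothetical figures: 2 ms of CPU work per request.
print(f"spinning disk (8 ms I/O): {cpu_utilization(2, 8.0):.0%}")  # 20%
print(f"PCIe flash (0.1 ms I/O):  {cpu_utilization(2, 0.1):.0%}")  # 95%
```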

Don’t get Gilbert wrong — he still loves Oracle. He was a major proponent of moving Pipkins to Oracle Database from another proprietary database platform. But when it came to Exadata, Gilbert felt like Pipkins just didn’t need it.

Gilbert said that Exadata does some of the same things Pipkins’ architecture does. For example, he said, Exadata has architected the database to recognize when some of the data is on flash storage so that database algorithms can work more efficiently. But it also has some features that Gilbert finds unnecessary.

“We don’t need all the stuff Exadata provides,” he said, “like automatic updates. We’re engineers. We know how to manually update. In the end, Oracle gets paid what they get paid [with Pipkins using Oracle Database]. We don’t break the bank and we don’t have to charge our customers prices they can’t afford.”




This was first published in November 2012

Article source: http://www.pheedcontent.com/click.phdo?i=dc205337b5f688e8960ba5e7a1ef7c33

Is the public cloud the best place for legacy applications?

Thursday, November 29th, 2012

The public cloud is the place to run greenfield applications built with the latest tools and the hippest programming languages. Could it also be the destination for the legions of aging applications that populate enterprise data centers?

If an application is due for a refresh, why settle for just a facelift via an incremental hardware upgrade or new GUI? Instead, why not go whole hog and re-platform the application on a state-of-the-art cloud platform that delivers scalable performance, flexibility and resilience—not to mention an operational expenditure (opex) rather than capital expenditure (capex) model?

Indeed, a growing number of IT professionals are asking those questions. They’ve expressed an interest in using the cloud as a target for legacy application modernization efforts, said Al Hilwa, program director for application development software research at analyst firm IDC.

“There are certain kinds of workloads that lend themselves well to the cloud,” Hilwa said—for example, externally facing applications. But migrating an existing legacy application to the cloud involves several considerations that need to be evaluated before embarking on such a project.

Gotchas Galore

This summer, Pabst Brewing Co. moved its entire data center from a San Antonio, Texas, office to Rackspace Hosting, using a mix of the firm’s cloud and managed services. The migration went relatively smoothly, until it came time to move two older applications: Microsoft Dynamics GP, the enterprise resource planning (ERP) system formerly known as Great Plains, and Salient Margin Minder, a revenue management tool.

Both applications had been up and running for more than five years, and had undergone many upgrades and patches, explained Stephen Blake, CEO of Virtessential, a Florence, Ky.-based IT integrator that oversaw the migration. Pabst didn’t have access to the source installation files or good documentation about configuration changes over the years.

“No one knew what was installed; [the apps] were kind of a black box,” Blake said.

Meanwhile, like many managed service providers, Rackspace was reluctant to provide a service-level agreement (SLA) for poorly understood applications.

“There aren’t too many managed service providers [MSPs] that are flexible enough to say, ‘Sure, we’ll take your images.’ They don’t want to take the risk of having to support problems that have been there for a while,” Blake said.

Instead, MSPs typically agree to support only applications that are installed fresh, and that are managed using the MSP’s preferred tools.

Virtessential circumvented these problems using application virtualization software from AppZero that extracted each application and its dependencies into a portable “virtual application appliance” package, and then installed each of them onto a fresh operating system image.

“The servers look like a fresh build, but they’re not,” Blake said.

The application extraction and migration processes took less than an hour, and the resulting builds have been running without incident on Rackspace since August. Had he not stumbled on AppZero, Blake said, migrating those two applications would have added three or four weeks to the project.

In the end, migrating legacy applications to the cloud won out for Pabst, but it’s not always so easy.

Multi-Tenant Madness

Enterprises are increasingly large consumers of Software as a Service (SaaS) applications, the classic examples being Salesforce.com for customer relationship management and Workday for payroll services. Now, some internal IT departments are exploring whether it makes sense to follow suit and re-architect in-house applications as cloud-hosted multi-tenant applications—private SaaS apps, as it were.

As an example, imagine an automobile manufacturer that has created a financing application for individual dealers across the country. That application was written as a single-tenant application destined to be installed and run on the dealer’s premises, maintained by a local IT professional. That model is fraught with difficulty, of course, with dealers struggling to troubleshoot and maintain the apps.

What if, instead, that application were re-tooled to run as a multi-tenant cloud-hosted SaaS app that auto dealers simply logged in to, while the manufacturer handled hosting, upkeep and new development?
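At the code level, the defining change is that every piece of state becomes tenant-scoped. Here is a minimal sketch of that pattern in Python, using the shared-database style discussed later in this piece; the table and tenant names are invented to echo the dealer example:

```python
# Minimal multi-tenant pattern, shared-database style: every table carries
# a tenant_id column, and every query is scoped to the caller's tenant.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (tenant_id TEXT, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [("dealer_a", "Smith", 18000.0), ("dealer_b", "Jones", 24500.0)],
)

def loans_for_tenant(tenant_id):
    # The WHERE clause is the isolation boundary; forgetting it anywhere
    # in the code base leaks one dealer's data to another.
    return conn.execute(
        "SELECT customer, amount FROM loans WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(loans_for_tenant("dealer_a"))  # [('Smith', 18000.0)]; dealer_b unseen
```

Retrofitting that scoping into every query, cache and file path of a legacy single-tenant code base is a large part of why the conversions described below are so hard.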

Independent software vendors that have offered on-premises and SaaS software say that the move to multi-tenancy has been good for their business.

“Is multi-tenant the way to go? Yes, because the idea is to optimize maintenance cost,” said Andrei Sergeev, senior vice president at EMAS Pro, which provides enrollment management software for colleges and universities. Simply put, SaaS-based tools are much easier to implement and maintain, for both the provider and the consumer.

SaaS also puts advanced capabilities in the hands of users that are otherwise cost-prohibitive, Sergeev said.

After several earlier attempts with on-premises solutions, EMAS Pro recently began offering a SaaS-based tool called Retention Pro that helps colleges identify students who are at risk of dropping out. The service consists of several different modules including Apache Tomcat, a rules engine, and a business analytics and reporting engine, all of which are tightly integrated.

“That’s a lot of different components, and if you wanted to run it on-prem, you would need licenses for all of those things, which becomes an expensive proposition,” Sergeev said. He said he could imagine many use cases for private multi-tenant SaaS apps in the enterprise.

Unfortunately, re-architecting a legacy single-tenant application for multi-tenancy is easier said than done.

“It’s a staggering task,” said Brian Hoskins, principal product manager at LANDesk, a systems management software provider that is on a three-year journey to “SaaS-ify” its traditional service desk tools, and that is doing the same for its systems and security management offerings.

Like a lot of legacy on-premises applications, LANDesk was built around a Windows console that makes a lot of direct calls to the application and database layers, Hoskins explained. That just doesn’t work in a SaaS platform, so the company had to rewrite all those calls to go through Web services.

Softening the Blow

For companies that don’t have the stomach for that kind of development project, there are startups like Apprenda and Corent Technology that claim they can simplify legacy application migration to multi-tenancy.

Corent Multi-Tenant Server (MTS), for example, can be used to transform a single-tenant app into a multi-tenant one, either with shared or separate tenant databases.

Open 4 Business Online (O4BO.com) is based in Hong Kong, and recently used Corent MTS to create a SaaS service that runs on the IBM SmartCloud from a standard catalog of open source business software, including Openbravo for ERP, Pentaho for business analytics and SugarCRM. Mike Oliver, the founder of O4BO and also a former Corent employee, said conversion times vary, but that he could convert some applications in less than an hour.

“It depends on the application. Some are well-designed, but others have idiosyncrasies or, frankly, poorly designed code,” Oliver said. Having access to the source code, however, is not a requirement for Corent MTS, Oliver added.

Oliver said he has talked about Corent MTS to a number of enterprise shops that are intrigued by the possibilities. One U.S. health care consortium, for example, is thinking about using it with its subsidiaries across the U.S. Of particular interest is that a converted application can use either shared or dedicated databases, which is an important consideration in health care, where regulations vary from state to state.

Likewise, converting in-house applications to multi-tenancy might provide an interesting way for different groups within an organization—end users, developers, quality assurance employees—to gain access to a single application estate, while providing each group with customized views.

No Pain, No Gain?

Shortcuts to migrating legacy applications to the cloud may be appealing, but there’s something to be said for doing the hard work of re-architecting for the new paradigm, experts said.

A properly architected cloud application has numerous advantages over a traditional on-premises application, namely predictability, resiliency and agility, said Michael Crandell, CEO at RightScale Inc., a cloud management software vendor.

Predictability comes from “templatizing” cloud applications, eliminating many opportunities for human error. “When you make changes manually, that’s when all hell breaks loose,” Crandell said.

Agility comes from automation techniques such as auto-scaling and having a choice of where you want to run a workload, and resilience is the result of designing applications around “the idea that everything fails eventually,” and spreading applications across multiple nodes, regions and even cloud providers.

As hard as it may be, “we recommend re-architecting any legacy application that people might be thinking about moving,” Crandell said. “The idea of taking an individual legacy application, picking it up and plopping it on a server in the cloud is not at all realizing the advantages of the cloud.”

About the author:
Alex Barrett is the editor in chief of Modern Infrastructure. Write to her at abarrett@techtarget.com or follow @aebarrett on Twitter.




This was first published in November 2012

Article source: http://www.pheedcontent.com/click.phdo?i=94c31a7119249a82b02262f90cac2d06

Mastering SAP HANA data modeling for maximum performance

Saturday, November 24th, 2012

SAP HANA is a platform that offers new levels of data modeling that exceed what’s possible with traditional relational database management systems (RDBMS). But it requires data to be handled in more sophisticated ways to achieve maximum performance.

In SAP HANA, data is still stored in tables, but how one designs the data storage model differs greatly compared with what’s required for a traditional database. Data is compressed better and reads are performed faster when it is stored in columnar tables. To take full advantage of this structure, the data model has to be much flatter than in a traditional row-based RDBMS, for two reasons.

First, redundant data is not nearly as much of an issue as it is in row-based tables: The columnar tables store repeating values only once by providing pointers to reference the duplicate data. Also, when data isn’t flattened into one table and is spread or normalized across multiple tables in SAP HANA, the cost of joins can grow. SAP HANA has row- and column-based engines in which different types of processing occur, and join costs arise when data must physically move from one engine to be processed in another. So, it is beneficial to keep the data processing in one engine if possible, and preferably that engine in SAP HANA is the column engine.
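To see why repeating values cost so little in a column store, consider a minimal sketch of dictionary encoding, the basic idea behind columnar compression; SAP HANA’s actual implementation is far more sophisticated, so this is illustrative only:

```python
# Toy dictionary encoding for one column: distinct values are stored once,
# and each row holds only a small integer pointer into that dictionary.
def dictionary_encode(column):
    dictionary = sorted(set(column))               # each distinct value once
    index = {value: i for i, value in enumerate(dictionary)}
    pointers = [index[value] for value in column]  # one small int per row
    return dictionary, pointers

region_column = ["EMEA", "APJ", "EMEA", "EMEA", "AMER", "APJ", "EMEA"]
dictionary, pointers = dictionary_encode(region_column)
print(dictionary)  # ['AMER', 'APJ', 'EMEA']
print(pointers)    # [2, 1, 2, 2, 0, 1, 2] -- repeats cost one integer each
```

Because repeats are nearly free, flattening data into one wide columnar table is far less wasteful than it would be in a row store, which is the first reason the model should be flatter.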

More on SAP HANA

One SAP TechEd attendee talks about his company’s interest in SAP HANA

How is SAP HANA different from Oracle Exalytics?

Read about the growing integration between Hadoop and HANA

Join cost is not an issue in a traditional database thanks to indexing and normalization, but it can be significant in SAP HANA when queries go beyond the functions supported in columnar storage structures. If processing is unsupported in the column-based, in-memory engine, the result set must be physically moved to the row engine. The performance hit from moving the data in memory can be significant. So, in SAP HANA there is still a need to model the data in the provisioning stage. It’s just much different from the data model you would design for a traditional RDBMS.

After the data is modeled and provisioned, or loaded, companies must start to deal with metadata. This journey starts with modeling the “attribute” and “analytic” views that are based on the provisioned data in the base column-store tables. These views work much like traditional database views. Attribute views are designed to give the master data some context: They provide meaningful values, such as descriptions for ID columns, or names instead of the actual ID values.

Analytic views are where calculations and aggregations come into play. Both attribute views and analytic views are the building blocks used to finally create “calculation” views. Calculation views combine and extend analytic and attribute views, composing or intersecting the meaningful descriptions in an attribute view with the calculations present in an analytic view. This metadata-driven model of runtime calculations in memory is where SAP HANA really proves most valuable, since this metadata layer largely removes the need to persist data at any further level beyond the original provisioned tables.

While modeling in SAP HANA often just means jumping straight to the calculation, analytic or attribute views, it cannot be overstated that provisioning the data properly in memory is a crucial exercise. Data must always conform to the needs of the storage structures so the capabilities available from the database or platform can be most effectively exploited. SAP HANA is no different in this case. While data models could be directly ported from a star schema in a traditional RDBMS, there are many rewards from first examining, then designing, the proper base model for provisioning data in SAP HANA.

How does SAP HANA handle data?

In early versions of RDBMS, data was first modeled into physical, row-based tables and stored on disks, since these were the technologies available. The data was then indexed to enable faster access via SQL queries. Indexing was (and still is) necessary because databases were designed around the concept of row-based transactional data.

This platform did not have reading data as its primary purpose. The structures were designed around the concept of getting data in, not getting data out. Later, online analytical processing (OLAP) database structures, sometimes referred to as “cubes,” were trumpeted as precalculated or pre-aggregated solutions to the performance limitations of row-based reporting. OLAP was the first attempt at addressing getting data out and was focused exclusively on data reading. The drawback was that data needed to be “built” by transforming it into additional persisted storage. The process is laborious and costly in terms of both storage and processing.

Columnar-stored, disk-based databases then emerged as an alternative way to store data structures tuned for efficient reporting. In these, data is stored more efficiently, so additional layer builds are not as necessary. Read performance is much better, but getting data in is more time-consuming and difficult because of how the data is stored in traditional columnar databases.

Then, there is SAP HANA. In some ways, it is the culmination of all of these designs, but with a unique distinction: SAP HANA also stores the full data sets in memory.

SAP HANA is unique in combining all of these approaches while storing the data as close to the CPU as possible: in memory. Data is physically persisted in memory in either row-based or columnar structures. It can also be modeled in certain types of logical views to emulate cube-based storage structures.

Bio: Don Loden is a principal business intelligence consultant with full lifecycle data warehouse development experience in multiple verticals. He is an SAP-certified application associate on SAP BusinessObjects Data Integrator and speaks globally at numerous SAP and ASUG conferences. Contact him by email at don.loden@decisionfirst.com or on Twitter @donloden.




This was first published in November 2012

Article source: http://www.pheedcontent.com/click.phdo?i=6a0dd1002b9d4f3a682c35b76a8039f4