Posts Tagged ‘CPU’

VM backups drive Idera rebrand of R1Soft, price change

Thursday, November 15th, 2012

Idera rebranded its R1Soft backup product and changed its pricing structure to emphasize virtual machines this week with the launch of Idera Backup Server 5.0.

Idera, which sells performance monitoring and backup software for servers, acquired R1Soft in 2007 but kept the R1Soft brand until now.

Idera CEO Rick Pleczko said the R1Soft technology is used in more than 1,000 hosted data centers and backs up more than 275,000 servers. Idera has spent two years making it a better fit for enterprises, he said, adding scalability, platform support and more emphasis on virtual machine (VM) backups.

“We want to back up all your servers,” Pleczko said of Idera’s modest plans for the product.

Idera Server Backup 5.0 supports Windows and Linux servers, and VMware, Microsoft Hyper-V and Citrix Xen hypervisors. It retains R1Soft’s continuous data protection (CDP) and bare metal restore capabilities for quick backups and restores. It also supports single-file restores.

Idera Server Backup targets customers who use separate applications to protect physical and virtual servers, according to David Wartell, R1Soft’s founder and Idera’s server backup division vice president.

“Customers tend to want one backup product that does physical and virtual backup,” he said. “Maybe they’re using Veeam for virtual and [Symantec] Backup Exec for physical servers, and are looking for a product that does both.”

Idera Backup Server has a new pricing model for VM backups. Instead of charging per CPU socket, Idera now sells licenses per virtual machine.

“We don’t want to ask customers how many CPU sockets they have, but everybody knows how many virtual machines they have,” Pleczko said.

Licenses cost $995 for 50 VMs, $3,995 for 250 VMs, $14,995 for 1,000 VMs and $49,995 for an
unlimited site license. Physical server licenses cost $995 for 5 servers, $12,995 for 100
servers and $24,995 for 250 servers.
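For a rough sense of how those per-VM tiers scale, the short C sketch below simply divides each quoted list price by its VM count (the unlimited tier is omitted since it has no fixed count); the prices come from the article, the code itself is only illustrative arithmetic.

#include <stdio.h>

/* List prices and VM counts quoted above; the unlimited tier is omitted. */
struct tier { int vms; double price; };

int main(void)
{
    const struct tier tiers[] = {
        {   50,   995.0 },
        {  250,  3995.0 },
        { 1000, 14995.0 },
    };
    for (size_t i = 0; i < sizeof tiers / sizeof tiers[0]; i++) {
        /* Effective list price per protected VM at each tier. */
        printf("%5d VMs: $%.2f per VM\n",
               tiers[i].vms, tiers[i].price / (double)tiers[i].vms);
    }
    return 0;
}

That works out to roughly $20, $16 and $15 per protected VM as the tiers grow, which is the volume-play pattern Gartner's Dave Russell describes below.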

“Idera is trying to be disruptive on pricing and it’s going after a volume play,” Gartner Research Vice President Dave Russell said.

Russell said Idera is trying to capitalize on changes that make it easier for companies to shop for new backup apps now.

“A couple of factors make this a good time to make a move,” he said. “Backup times and retention periods are declining. People are using archiving for long-term retention, and they’ve dialed down backup retention times from years to 90 days. That makes it easier to switch. People are willing to try a new product now.”

Russell said Idera can’t match the feature and support list of the major players, but has enough for SMBs and mid-market shops.

“If you could pick one operating system, one hypervisor and one application to support, Windows, VMware and Exchange would be a good place to start,” he said. “Idera supports that.”




Article source: http://www.pheedcontent.com/click.phdo?i=e00b5180b3e4da6b145b284b30a6b864


Inside Intel’s Next Unit of Computing (DC3217BY)

Friday, November 9th, 2012

Back at IDF Intel gave us a hands-on demo of the Next Unit of Computing (NUC), a custom form factor motherboard that fits into an Intel-supplied 4″ x 4″ x 2″ chassis. The first-generation NUC is built around a dual-core ULV Ivy Bridge CPU, the Core i3 3217U (17W TDP, 1.8GHz frequency, no turbo, HD 4000 graphics running at 350MHz – 1.05GHz).

Intel will be offering two versions of the NUC: the DC3217IYE and the DC3217BY:

Intel sent along the DC3217BY, which it expects to see on sale via Amazon and Newegg around early December for $300 – $320. For that price you basically get the motherboard (including CPU) and chassis. Memory, mini PCIe cards and even a power cord all come separately. The power cord you’ll need to buy is one with a C5 connector that plugs into the power adapter’s C6 inlet. The three-prong C5 connector is also known as a cloverleaf connector. My assumption here is that to keep costs down Intel avoided including this part, as it would need a different cable depending on what part of the world the NUC was being sold into. The kit also comes with a VESA mounting bracket.

Gallery: Intel’s Next Unit of Computing: Teardown

Building the NUC is incredibly simple. There are four screws that hold the chassis together; removing them gives you access to the motherboard:

You don’t actually need to go any further if you just want to get the NUC up and running. From here you can install up to two 8GB DDR3 SO-DIMMs. The bottom mini-PCIe slot accepts a half-height card (perfect for WiFi) while the top slot can take a full-height card or an mSATA drive. The antenna pigtails for WiFi are already routed to the appropriate spot inside the chassis. This model has an integrated Thunderbolt controller that you can see in the top right of the machine.

Intel sent along an mSATA SSD 520 (180GB), which is a SandForce based mSATA drive from Intel using 25nm MLC NAND. SandForce controllers work really well in mSATA form factors since they don’t need any external DRAM. There are only four IC packages on the mSATA 520: the controller itself and 3 x 64GB 25nm MLC NAND devices. Intel’s SSD Toolbox labels the drive as an SSD 525, however the part numbers above indicate 25nm NAND, which would make this a 520.

Going further, there are four screws that hold the motherboard in place; remove them and you can pull the board out completely:

On the underside of the motherboard you’ll find a heatsink/fan covering the QS77 chipset and the Core i3 CPU:

Under heavy load the fan will kick in, but it’s barely audible from more than 18″ away from the chassis. The top of the plastic chassis does get quite warm (48.7C) while the CPU is running full tilt. The 65W power adapter will pull around 10W for the full system at idle, and peak power consumption for the NUC tops out at 19.3W when running the x264 HD test.

Performance is obviously going to be in line with other 17W mobile Ivy Bridge CPUs. We don’t have a huge library of x264 HD 5.0.1 tests to compare to, but this should give you a bit of an idea of how the NUC would compare to a full blown 65W Core i3 based desktop PC:

Windows 8 - x264 HD 5.0.1 - 1st Pass

Windows 8 - x264 HD 5.0.1 - 2nd Pass

Compute bound tasks will obviously be slower, but lighter use models will be just fine. Remember that this isn’t an Atom based system; you’ll actually get decent performance out of it. I’ll be running some more benchmarks on the machine over the coming weeks, including a look at GPU performance.

The NUC is a nifty little concept and I’m glad Intel is bringing it to market. Obviously I don’t see the NUC replacing everyone’s desktop, but if you’ve got a specific application where form factor matters more than absolute performance (albeit one where you still need good performance) there might be a good fit here. What I’d love to see is for the NUC to be turned into a standard form factor, with a real ecosystem of multiple parts suppliers building components. Intel keeping it all in house, at least for the initial revision, makes sense in order to establish a good baseline.

Article source: http://www.anandtech.com/show/6444/intels-next-unit-of-computing-hands-on

An in-depth look at IBM’s transactional memory support facility

Friday, November 9th, 2012

In a recent announcement, IBM touted the zEC12 as the first commercially available machine to support transactional memory through its implementation of the Transaction Execution Facility (TEF).

Transactional memory glass half full

Normal mainframe serialization relies on enqueues, latches and locks. At the machine-code level, Z systems offer instructions such as compare and swap. Besides being complex, these mechanisms share a couple of unattractive side effects. A program that fails to get ownership of a resource usually waits, which can lead to elongated response times and cascading hangs. If programmers aren’t careful, the normal methods are also prone to deadly embraces that permanently block at least two processes until one is canceled.
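For readers who have not written this kind of code, the short C11 sketch below (our illustration, not from the article) shows the classic compare-and-swap retry loop that conventional lock-free updates rely on: load the counter, compute the new value, and retry if another thread got there first. TEF's optimistic transactions generalize this idea from a single word to an arbitrary group of loads and stores.

#include <stdatomic.h>
#include <stdio.h>

/* A shared counter updated lock-free with compare-and-swap. */
static _Atomic unsigned long counter;

void increment(void)
{
    unsigned long seen = atomic_load(&counter);
    /* If another thread changed the counter after we loaded it, the
       compare-and-swap fails, 'seen' is refreshed with the current
       value, and the loop tries again.                              */
    while (!atomic_compare_exchange_weak(&counter, &seen, seen + 1))
        ;   /* retry */
}

int main(void)
{
    increment();
    printf("counter = %lu\n", atomic_load(&counter));
    return 0;
}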

In contrast, TEF employs an “optimistic” outlook that assumes it can proceed without conflicts. If that confidence is misplaced, the hardware rolls the transaction back and lets the program decide what to do next.

The zEC12 added two special machine instructions to mark the beginning and end of transactions. In between these instructions, a program can load and store from memory and change registers. However, all the changes are provisional and uncommitted until the process ends the transaction without encountering a conflict. If a conflict arises — for instance, if another CPU changes a memory location referenced by the transactional process — the hardware aborts the transaction, including changes to memory, and the whole thing has to start over. Note that programs using TEF must adhere to a specific entry logic, because if a conflict causes the CPU to interrupt a process, it will branch back to the instruction immediately following the beginning of the transaction and set a non-zero condition code (CC).

In addition, the zEC12 supports constrained transactions. Constrained transactions have a few restrictions as to what they can do, but they’re more likely to succeed.

Non-constrained transactions can be nested up to a model-dependent depth. The processor commits the changes as each transaction ends. However, if a conflict arises, the abort goes all the way back to the outermost transaction.

Programmers might use the ETND instruction to get the transaction nesting depth. If the CPU isn’t in transactional mode, the nesting depth is zero.

The TEF instructions

These are the new instructions making up the TEF:

The Transaction Diagnostic Block

A programmer might optionally specify a 256-byte area for the transaction diagnostic block (TDB) to receive CPU-generated conflict data. The TDB contains a lot of information, including the general registers at the time of the conflict along with the transaction abort code.

The zEC12 Principles of Operation details the many reasons why a transaction might be aborted. The POPS also warns that conflicts can arise from “speculative” instruction examination, which is part of out-of-order instruction execution. This creates interesting, Kafkaesque situations where a transaction will be aborted for something that might or might not have happened.

A closer look at TEF

Below is a TEF code fragment:

         XR    R2,R2          Clear the loop counter
TRANSTRT DS    0H
         TBEGIN X'FF00',TDB   Transaction begin
         JNZ   TRANABRT       Jump if previously aborted
         AP    TRANCNT,=P'1'  Increment decimal counter
         TEND                 Commit changes
         J     ITWORKED       Move on to better things
TDB      DS    XL256          Diagnostic area
TRANABRT DS    0H
         LA    R2,1(,R2)      Increment loop counter
         CHI   R2,=H'30'      31st time through?
         JL    TRANSTRT       No, retry transaction
PLANB    DS    0H             Yes, try something else

These instructions try to increment a packed decimal counter in common storage. The logic will try to update the counter 30 times before giving up.

The first instruction clears register two (R2), which keeps track of the number of update attempts. The next instruction, TBEGIN, puts the CPU in transactional state. TBEGIN’s first operand is a mask that tells the processor to restore the contents of general purpose registers 0 through 15 if something aborts the transaction. This is important, as R2 contains the loop counter. The second operand points to the TDB.

This is where the transaction entry logic becomes important. The instruction after the TBEGIN tests the CC. If the CC is zero, the processor was successfully put into transactional state and control falls through to update the counter. A non-zero CC means something aborted the transaction and caused a branch to the recovery logic at label TRANABRT.

After — hopefully — updating the counter, the TEND instruction ends the transaction and commits the incremented counter.

The instructions following TRANABRT try to recover from an aborted transaction. First, the code increments the value in R2. If the value is less than 30, it jumps back to re-initialize the transaction. Otherwise, it falls through to plan B.

TEF’s place

TEF appears to be a leap forward in processor technology, considering the complex electronics and microcode needed to monitor memory and generate interrupts. The question is whether it is easier or more efficient than some of the other methods mentioned above.

Although the above example is contrived, it can serve as a way to think about how TEF works on a busy system. Given that the add packed (AP) instruction would execute in nanoseconds, chances are better than good the transaction will work the first time, even while dozens of threads might want to update that counter. In this case, TEF appears to be a pretty easy solution.

Chances are TEF will find its way into lots of system-level code. However, IBM also intends customers to use it through new Java and C++ APIs.
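As a rough illustration of what such an API can look like, here is our own sketch using the hardware-transactional-execution builtins GCC provides for z Systems (built with -mhtm); it is not IBM's published Java or C++ interface, and the builtin names, return codes and TDB alignment should be verified against the compiler documentation before use. It reproduces the retry-30-times pattern of the assembler example.

/* Sketch only: assumes GCC's z Systems transactional-execution builtins. */
#include <stdio.h>

#define MAX_RETRIES 30

static long shared_counter;                        /* counter in common storage    */
static char tdb[256] __attribute__((aligned(8)));  /* transaction diagnostic block */

int transactional_increment(void)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        /* __builtin_tbegin returns 0 when the CPU enters transactional
           state; a nonzero value means the previous attempt aborted.   */
        if (__builtin_tbegin(tdb) == 0) {
            shared_counter++;   /* provisional until the transaction ends */
            __builtin_tend();   /* commit the update                      */
            return 0;           /* it worked                              */
        }
        /* aborted: fall through and retry, as in the assembler fragment */
    }
    return -1;                  /* plan B, e.g. fall back to a lock       */
}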

About the expert:
Robert Crawford has been a systems programmer for 29 years. While specializing in CICS technical support, he has also worked with VSAM, DB2, IMS and other mainframe products. He has programmed in Assembler, Rexx, C, C++, PL/1 and COBOL. In his latest career phase he is an operations architect responsible for establishing mainframe strategy and direction for a large insurance company. He works in south Texas, where he lives with his family.



This was first published in November 2012

Article source: http://www.pheedcontent.com/click.phdo?i=a2196e1c6d3562195fa33e93c846c0b9

VMware vSphere essentials for Exchange 2010 virtualization

Saturday, November 3rd, 2012



Exchange 2010 is a Tier 1 application that lots of users count on, so it’s critical to ensure that any Exchange 2010 virtualization plan has adequate resources. If you’re constructing a virtual Exchange infrastructure to run on VMware vSphere, it’s critical that you have the proper software, hardware and tools in place to properly support your infrastructure. Weigh these important requirements and recommendations before moving forward.

Hardware and software requirements for Exchange 2010 virtualization

Server virtualization has become so common in today’s servers that all but low-end systems are VMware vSphere and Microsoft Hyper-V compatible. All the server really needs is an AMD-V- or Intel VT-enabled CPU and enough memory to host the desired number of virtual machine (VM) instances. If the server will host numerous VMs, it also helps to have multiple 1 Gigabit Ethernet ports available (or a single 10 GbE port) to ensure adequate LAN connectivity.

Make certain you’re aware of CPU-compatibility issues; servers in the same virtualization cluster need the same CPU brand and model (such as two Intel Xeon E5-2637 processors) in order to use features like vMotion and Distributed Resource Scheduler (DRS). If your cluster contains servers from the same brand but different families, then Enhanced vMotion Compatibility (EVC) — a feature of VMware vSphere — can provide compatibility between CPUs that aren’t from the same CPU family.

Virtual machines are loaded from and protected with storage. Local or direct-attached storage will work, but a storage area network (SAN) or network-attached storage (NAS) is required in a virtual infrastructure if you want to use vSphere’s advanced features.

Besides the obvious issue of storage capacity, also consider the I/Os per second (IOPS) offered. Measure that against the I/O demands of applications running on the VMs — and the number of workloads in your infrastructure — to ensure that your storage will have the required I/O bandwidth. Microsoft’s Exchange Server Profile Analyzer is the best tool to help Exchange administrators determine IOPS demands.
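As a back-of-the-envelope illustration of that sizing arithmetic (the numbers below are hypothetical placeholders, not guidance; plug in the figures the Profile Analyzer actually reports for your environment), the required IOPS is roughly the per-mailbox profile multiplied by the mailbox count, plus headroom:

#include <stdio.h>

/* Hypothetical inputs: replace with values measured for your own environment. */
int main(void)
{
    const int    mailboxes        = 2000;  /* mailboxes hosted by the Exchange VMs */
    const double iops_per_mailbox = 0.12;  /* measured average IOPS per mailbox    */
    const double headroom         = 1.20;  /* 20% margin for peaks and growth      */

    double required_iops = mailboxes * iops_per_mailbox * headroom;
    printf("Storage must sustain roughly %.0f IOPS for this workload\n",
           required_iops);
    return 0;
}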

You’ll also need to license and install a variety of VMware vSphere components in your Exchange 2010 virtualization project. This includes vSphere Enterprise or Enterprise Plus, vCenter for centralized vSphere management, vMotion for workload migration between servers, Storage vMotion for virtual disk file migration in storage, DRS and High Availability.

Virtual infrastructure tools to support your project

Once you have the correct server hardware, your ESX or ESXi vSphere hypervisor, the vCenter management console and any advanced features in place, it’s critical to consider two other virtual infrastructure tools from VMware or a third party.

The first is a backup/recovery/replication tool. Traditional Exchange 2010 backup tools should still work after you’ve virtualized Exchange 2010, but a tool specifically designed for the new virtual infrastructure offers better performance and versatility.

VMware includes the VMware Data Recovery (VDR) tool in every version of vSphere except Essentials. VDR does a good job when it comes to backup and recovery, but it doesn’t scale beyond 100 VMs. Also, its backup repositories have limited capacity, and it does not offer replication. Third-party tools — such as those from Veeam Backup, PHD Virtual and AppAssure Backup — are more powerful than VDR but will cost you extra.

The second infrastructure tool should provide performance and capacity analysis. Although vCenter offers basic performance analysis, it doesn’t go far enough in identifying, predicting and troubleshooting capacity bottlenecks.

You can use the recommended VMware vCenter Operations Management Suite, but you also have third-party options, like vKernel vOPS from Dell. Depending on your virtualized Exchange 2010 instance, you might also want to consider a tool that takes a different approach, such as virtual network capacity analysis from Xangati.

Selecting the right hardware, VMware products and virtual infrastructure tools won’t guarantee a successful virtualized Exchange 2010 deployment, but it will certainly improve your odds of success.

These tools can also be complex, so build your own lab, learn as much as you can and create a virtualized Exchange Server proof of concept.

ABOUT THE AUTHOR
David Davis is the author of the VMware vSphere video training library from Train Signal (including the new vSphere 5 video training course). With more than 18 years of enterprise experience, Davis has written hundreds of virtualization-related articles for the Web and is a vExpert, a VMware Certified Professional, a VMware Certified Advanced Professional in Datacenter Administration, and Cisco Certified Internetwork Expert #9369. His personal website is VMwareVideos.com.


Article source: http://www.pheedcontent.com/click.phdo?i=9ecc538cb5ab46547cfdd2143ae93d4e

Samsung Joins The HSA Foundation, Your Next Galaxy with AMD Inside?

Friday, August 31st, 2012

Earlier this summer, AMD launched the HSA (Heterogeneous System Architecture) Foundation with a number of core partners. Now Samsung has joined the collaborative effort as well. This could be the beginning of an unprecedented level of cooperation between the APU designer and the large smartphone/tablet developer. The two companies already share certain technologies; Samsung and GlobalFoundries, AMD’s chief manufacturing partner, are members of IBM’s Common Platform Alliance.

AMD is eager to have Samsung onboard, and has announced a number of additional members, including Apical, Arteris, MulticoreWare, Sonics, Symbio and Vivante. What this means for Samsung, and whether it signals a major change in the company’s long-term plans, isn’t yet clear. AMD doesn’t currently have a smartphone-capable product line, but the company’s next-generation Kabini APU (Brazos’ 28nm successor) will deliver a potent combination of CPU and GPU performance in a tablet-capable form factor.

The Apple verdict might not be the final word on whether or not Samsung unlawfully infringed on its biggest competitor’s patents, but it makes sense for the company to explore other product options in a bid to differentiate itself. Samsung, of course, has its own line of Exynos ARM products but has mostly used a mixture of hardware from both Qualcomm and Texas Instruments.

It’s not clear yet exactly how HSA membership will impact product development. Presumably the architectural specifications AMD mentions are a developer framework target similar to OpenCL. This would allow various companies to implement their own unique hardware while offering a unified standard for developers to target, similar to the way DirectX and OpenGL run on AMD, Intel, and Nvidia GPUs despite significant differences in the underlying hardware. OpenCL was originally created to address this need, but AMD evidently feels there’s a need for a separate foundation that focuses on similar issues.
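Since the paragraph leans on the OpenCL analogy, here is a minimal host-side OpenCL sketch of the point being made: the same enumeration code runs unchanged whichever vendor's runtime (AMD, Intel, Nvidia or otherwise) happens to be installed. Error handling is omitted for brevity, and the buffer sizes are arbitrary.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    /* One standard call, whichever vendor runtimes are installed. */
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        char plat_name[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof plat_name, plat_name, NULL);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; d++) {
            char dev_name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof dev_name, dev_name, NULL);
            /* Kernels compiled for these devices are equally vendor-neutral. */
            printf("%s: %s\n", plat_name, dev_name);
        }
    }
    return 0;
}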

This also likely ties in to AMD’s own long-term plans for its APUs. The company has previously implied that it sees a future where the GPU takes over from the CPU for FPU calculations and heavy lifting. The company’s HSA roadmap remains fairly aggressive, with successive GPU products offering deeper layers of integration between CPU and GPU by 2015.

Article source: http://hothardware.com/News/Samsung-Joins-The-HSA-Foundation-Your-Next-Galaxy-with-AMD-Inside/

Vivante works with HSA on hybrid computing

Wednesday, August 22nd, 2012

Vivante Corp. joins the Heterogeneous System Architecture (HSA) Foundation, which is aimed at innovations in mobile and embedded GPU computing.

The goal of HSA is to create a single architecture specification and standard application programming interface (API) that developers can easily adopt to optimise distributed workloads across the GPU and CPU, for the best performance and power efficiency in parallel computing.

Vivante products targeting hybrid platforms and designed to work directly with the CPU through a unified memory system will use ACE-Lite cache coherency or a native stream interface. The Vivante HSA design will be built on a unified software and hardware package that provides a single architecture spanning multiple operating systems and platforms. Vivante HSA software will be backward compatible with all existing compute-enabled products and built around HSA APIs and tools that complement the existing support of OpenCL, Google Renderscript, and Microsoft DirectCompute.

“We look forward to collaborating with the consortium to define the next generation architecture for smartphones, tablets, TV/STB, embedded, networking, cloud computing, and automotive,” said Wei-Jin Dai, President and CEO of Vivante.

Article source: http://www.eetindia.co.in/ART_8800673299_1800000_NT_700c9934.HTM

Vivante Joins Heterogeneous System Architecture (HSA) Foundation to …

Monday, August 20th, 2012

SUNNYVALE, Calif., Aug. 20, 2012 /PRNewswire/ — Vivante Corporation, a worldwide leader in graphics and GPU Compute technologies for handheld, consumer, and embedded devices, today announced it has joined the Heterogeneous System Architecture (HSA) Foundation to push forward innovations in mobile and embedded GPU computing. The goal of HSA is to create a single architecture specification and standard application programming interface (API) that developers can easily adopt to optimize distributed workloads across the GPU and CPU, for the best performance and power efficiency.

(Logo: http://photos.prnewswire.com/prnh/20091008/VIVANTELOGO)

Vivante products targeting hybrid platforms and designed to work directly with the CPU through a unified memory system will use ACE-Lite™ cache coherency or a native stream interface. The Vivante HSA design will be built on a unified software and hardware package that provides a single architecture spanning multiple operating systems and platforms. Vivante HSA software will be backward compatible with all existing compute-enabled products and built around HSA APIs and tools that complement the existing support of OpenCL™, Google Renderscript™, and Microsoft® DirectCompute™. By simplifying the lives of application developers targeting heterogeneous architectures, programmers can create breakthrough use cases that take advantage of the new paradigm shift to hybrid computing. Real world applications that are already accelerated by Vivante cores include computer vision, image processing, augmented reality, sensor fusion, and motion processing.

“We welcome Vivante to the HSA Foundation as a valued member of a consortium of technology leaders driving the next generation of heterogeneous computing platforms,” said Phil Rogers, HSA Foundation President and AMD Corporate Fellow. “Vivante’s products and technologies will enhance HSA and help expand its footprint into a wide range of markets, bringing new capabilities of hybrid systems to Vivante’s customers.”

“Vivante is excited to join the HSA Foundation as a technology contributor defining the next generation of innovation built on heterogeneous computing,” said Wei-Jin Dai, President and CEO of Vivante. “We are already one of the industry leaders in mobile and embedded GPU and Compute technologies – currently the only company with mass market GPU IP that has passed OpenCL 1.1 conformance and the only company that has partners shipping OpenGL ES 3.0 silicon. We look forward to collaborating with the consortium to define the next generation architecture for smartphones, tablets, TV/STB, embedded, networking, cloud computing, and automotive.”

About the HSA Foundation

The HSA (Heterogeneous System Architecture) Foundation is a not-for-profit consortium for SoC IP vendors, OEMs, academia, SoC vendors, OSVs and ISVs whose goal is to make it easy to program for parallel computing. HSA members are building a heterogeneous compute ecosystem, rooted in industry standards, for combining scalar processing on the CPU with parallel processing on the GPU, while enabling high bandwidth access to memory and high application performance at low power consumption. HSA defines interfaces for parallel computation utilizing CPU, GPU and other programmable and fixed function devices, and support for a diverse set of high-level programming languages, thereby creating the next foundation in general purpose computing. Please go to www.hsafoundation.com for more information.

About Vivante Corporation

Vivante Corporation, a leader in multi-core GPU, OpenCL™, and 2D Composition IP solutions, provides the highest performance and lowest power characteristics across a range of Khronos™ Group API compatible standards based on its ScalarMorphic™ architecture. Vivante GPUs are integrated into customer silicon solutions in mass market products including smartphones, tablets, HDTVs, consumer electronics and embedded devices, running thousands of graphics applications across multiple operating systems and software platforms. Vivante is a privately held company headquartered in Sunnyvale, California, with additional R&D centers in Shanghai and Chengdu. For more information, visit http://www.vivantecorp.com

Vivante and the Vivante logo are trademarks of Vivante. All other product or service names are the property of their respective owners.

SOURCE Vivante Corporation

Article source: http://www.virtualpressoffice.com/publicsiteContentFileAccess?fileContentId=920460&fromOtherPageToDisableHistory=Y&menuName=News&sId=&sInfo=

ARM’s Mali GPU upgrades set to power up smart devices

Monday, August 6th, 2012

ARM says new data compression tech will help boost Mali GPUs’ power

ARM has announced its second generation of GPU (graphics processing unit) designs.

It says the T-600 series architecture offers a 50% performance boost that could help smartphones and tablets run more graphics-intense video games and run photo editing programmes faster.

ARM says the first products using the tech should launch by September 2013.

ARM dominates the market in mobile device CPU (central processing unit) designs, but is a smaller player when it comes to GPUs.

Another British firm – Imagination Technologies – is currently the major force in the mobile graphics sector.

Neither ARM nor Imagination build anything themselves, but instead make money by licensing their intellectual properties to manufacturers who combine them with other technologies to create the chips that power mobile devices.

ARM vs Imagination

ARM says more than one in five Android smartphones have graphics processors using its technology, including Samsung’s best-selling Galaxy S3.

By contrast, a study by Jon Peddie Research suggests Imagination’s technology was used in half of all graphics chips shipped to make smart mobile devices in 2011, including the chips in Apple’s iPads and iPhones.

ARM hopes to gain on its rival, saying that its CPU and GPU technologies benefit from the fact they have been designed to work together.

“The challenge as we move forward with more complex processors – both CPU and GPU – is that the communication between the two becomes much more critical,” Kevin Smith, ARM’s vice president of strategic marketing, told the BBC.


GPU vs CPU

GPUs and CPUs differ in the way they approach a task.

While CPUs are designed to carry out calculations one at a time at high speed, GPUs process multiple sets of calculations at the same time, but typically take significantly longer to do any one.

CPUs are best suited for sequential tasks where the answer to one calculation is used to help work out the next one.

But GPUs are particularly good at what are termed “parallelisable” tasks – processes that can be broken down into several parts and run concurrently, where the result of any one calculation does not determine the input of another.

As their name suggests, GPUs were originally designed to handle graphics, working out what colour each pixel of a screen should be.

But they are now used for other parallelisable computing tasks such as speech recognition, image processing and pattern matching.

A computer will enjoy a performance boost if it manages to divide up tasks between the CPU and GPU cores to take advantage of the different ways they behave.

But to do this, developers must have first coded their software with this in mind.
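A small C sketch of the distinction drawn above (our illustration, not from the article): the first loop is inherently sequential because each step needs the previous result, while the second is “parallelisable” because every pixel can be computed independently and handed to as many GPU-style cores as are available.

#include <stddef.h>

/* Sequential: a running total where each iteration depends on the last. */
double running_total(const double *samples, size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += samples[i];        /* needs the previous value of total */
    return total;
}

/* Parallelisable: each output pixel depends only on its own input pixel,
   so the iterations could run concurrently on many cores.               */
void brighten(const unsigned char *in, unsigned char *out, size_t pixels)
{
    for (size_t i = 0; i < pixels; i++) {
        unsigned v = in[i] + 40u;   /* independent per-pixel work */
        out[i] = (v > 255u) ? 255u : (unsigned char)v;
    }
}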

“It’s about putting the right processing task on the right CPU or GPU, and ultimately that is about extending battery life for consumers and staying within the power budgets that mobile devices require.”

‘Revolutionary’ compression

Games consoles, such as the PlayStation 3 and Xbox 360, have historically been able to show more advanced graphics than smartphones and tablets because they have been able to use more power-hungry chips.

But the quality gap has shrunk as mobile device GPUs have become more power efficient.

ARM plans to catch up, and maybe even overtake, the current generation of consoles by adopting a new data compression technology it created called ASTC (adaptive scalable texture compression).

The firm says ASTC is a “revolutionary” new algorithm – or set of instructions – that will allow software designers to use the same amount of data to describe more image detail than had been possible before, or to use less data to provide the same amount of detail.

It covers a wide range of formats including both 2D and 3D images, as well as HDR (high dynamic range) photographs.
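To show why block-based texture compression saves data, the sketch below works out bits per texel for a few ASTC block footprints. The fixed 128-bit block size and the 4x4 to 12x12 footprint range come from the ASTC format itself rather than from the article, so treat the exact figures as background rather than as ARM’s claims.

#include <stdio.h>

/* Every ASTC block is 128 bits; the texel footprint it covers is configurable. */
int main(void)
{
    const int footprints[][2] = { {4, 4}, {6, 6}, {8, 8}, {12, 12} };
    const double block_bits = 128.0;

    for (size_t i = 0; i < sizeof footprints / sizeof footprints[0]; i++) {
        int w = footprints[i][0], h = footprints[i][1];
        double bits_per_texel = block_bits / (w * h);
        /* Size of a 1024x1024 texture at this rate, in KiB. */
        double kib = 1024.0 * 1024.0 * bits_per_texel / 8.0 / 1024.0;
        printf("%2dx%-2d blocks: %.2f bits/texel, 1024x1024 texture = %.0f KiB\n",
               w, h, bits_per_texel, kib);
    }
    return 0;
}

For comparison, the same 1024x1024 texture stored as uncompressed 32-bit RGBA would occupy 4,096 KiB, so even the highest-quality 4x4 mode moves only a quarter of the data and the 12x12 mode under 3%.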

“From a consumer’s point of view it’s going to mean better battery life and higher image quality,” said Steve Steele, product manager of ARM’s media processing division.

“Texture compression is important because moving data about costs energy, so by moving less data about your battery lasts longer.

“You will be able to download games faster, and it’s also been designed to be more efficient at uncompressing data once it’s on your device.”

ARM’s first generation of Mali GPU designs already feature in a range of smart devices

While ARM’s new GPU designs are the first to use ASTC, the firm hopes it will become an industry standard, and plans to license it for a price to others.

Smart TVs

While only a minority of smartphones currently use ARM-based GPUs, the firm says it has already captured more than 70% of the smart TV market.

Screens from LG, Samsung, Sony and Sharp use chips based on its technology, and Mali GPUs also appear in many set-top boxes.

ARM says manufacturers have already shown interest in its latest designs to help them add features.

“As smart TVs get more content brought to them, things like user-interfaces are going to become much more complex,” said Mr Smith.

“That means greater graphics capabilities, running both [media] content and high-end games.”

Article source: http://www.bbc.co.uk/news/technology-19141807