Posts Tagged ‘EMR’

Talus Foot Surgeons Selects ClearDATA for HIPAA-Compliant Cloud Hosting of … – Virtual

Wednesday, August 22nd, 2012

ClearDATA Networks, Inc., a leading healthcare cloud computing platform and service provider, today announced that Talus Foot Surgeons, LLC, a Scottsdale-based provider of podiatric medicine and surgery, is hosting the BioMedix TRAKnet™ PM Advanced Practice Management Solution on a ClearDATA Private Cloud in the company’s HIPAA-compliant datacenter.

(PRWEB) Aug 22, 2012

ClearDATA Networks, Inc., a leading healthcare cloud computing platform and service provider, today announced that Talus Foot Surgeons, LLC, a Scottsdale-based provider of podiatric medicine and surgery, is hosting the BioMedix TRAKnet™ PM Advanced Practice Management Solution on a ClearDATA Private Cloud in the company’s HIPAA-compliant datacenter. Running the practice management software on the ClearDATA Healthcare Cloud Platform saved Talus significant hardware and IT support expenses. The process of fully implementing the software and moving patient data into the cloud was completed in approximately two weeks.

“Once we evaluated the options and cost comparisons, it was an easy decision to have the BioMedix practice management software hosted with ClearDATA rather than running it on computers within our offices,” said Dr. Serrina M. Yozsa, DPM, of Talus Foot Surgeons, LLC. “We eliminated the expense of having to purchase new servers, as well as the associated IT support and maintenance costs. Best of all, we now have great peace of mind knowing the system is running with total reliability, our patient data is secure, and the service is meeting HIPAA security requirements.”

In early 2012, Dr. Yozsa decided to switch from an older practice management solution to BioMedix TRAKnet, which was certified by the Office of the National Coordinator for Health Information Technology (ONC) and would allow the practice to achieve Meaningful Use.

About ClearDATA’s HIPAA Compliant EMR Cloud Hosting Service

In response to growing market demand, ClearDATA has developed a secure, HIPAA-compliant, and cost-effective solution for EMR software delivery that features a pre-configured combination of ClearDATA’s private cloud services, HP’s state-of-the-art server and storage solutions, and EMR/PM software images to form a single solution delivered as a service. ClearDATA’s EMR Cloud Hosting Service, which includes online backup and recovery, saves medical organizations from substantial capital expenditures while providing secure, reliable, high-performance access to EMR software and patient data in accordance with the HIPAA Security Rule and the HITECH Act.

About ClearDATA Networks, Inc.

ClearDATA Networks, Inc. is a market leader in cloud computing and information security services for medical providers, software vendors and VARs, and is 100% dedicated to the medical field. ClearDATA’s services enable providers to fully automate and securely manage electronic medical records, applications, IT infrastructure and digital storage. The company provides HITECH HIPAA-compliant cloud and hosting infrastructure and managed services, offsite backup and disaster recovery, medical image archiving, information security and world-class support. The company offers HIPAA Security Risk and Remediation services through its U.S. Healthcare Compliance division to ensure that medical organizations meet the rigorous security standards required for protected health information and can demonstrate Meaningful Use. For more information, call 602-635-4000, email: sales (at) cleardata (dot) net or visit: http://www.cleardata.net.

###

For the original version on PRWeb visit: http://www.prweb.com/releases/prweb2012/8/prweb9823438.htm

Article source: http://www.virtual-strategy.com/2012/08/22/talus-foot-surgeons-selects-cleardata-hipaa-compliant-cloud-hosting-biomedix-traknet-prac

Why big data, testing apps make good Amazon EC2 spot instances

Friday, April 13th, 2012

When deploying applications in the cloud, you need to determine acceptable performance parameters. If you’re hosting time-sensitive applications in Amazon EC2 with fault-tolerant designs, those apps make ideal candidates for Amazon EC2 spot instances.


Batch-oriented processing tasks, such as testing, analyzing “big data” and extraction, transformation and load (ETL) operations, are good candidates for spot instances. These jobs are typically scheduled to run without much end-user interaction. The apps also lend themselves to dividing the whole workload into smaller tasks that can be completed independently. For example, if your software development team is doing regression testing on a new cloud-based app, they can submit tests for one module to one instance while testing another module on a different instance. Tests that span both modules can be submitted to a third instance.
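
As a minimal sketch of that pattern, the snippet below uses the boto 2.x Python library to request one-time spot instances, one per independent test module; the AMI ID, bid price and instance count are hypothetical placeholders, not details from the article:

    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Request three one-time spot instances, one per independent test module.
    # If the current spot price rises above the bid, the request simply waits.
    requests = conn.request_spot_instances(
        price='0.05',             # maximum bid, in USD per instance-hour
        image_id='ami-12345678',  # hypothetical AMI with the test harness preinstalled
        count=3,
        instance_type='m1.small')

    for req in requests:
        print req.id, req.state   # requests start in the 'open' state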

Big data analysis and data warehouse ETL operations also fit well with spot instances, even though subtasks are not necessarily independent of other tasks. Consider, for example, a typical data warehouse aggregation problem.

A data warehouse collects data from multiple source locations, such as branch offices. Some branch data is aggregated vertically, from branch office totals into regional totals and then into corporate totals. Other data, such as sales totals for individual products across all stores, is aggregated horizontally.
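
As a self-contained illustration of the two aggregation directions (the branch, region and product figures below are invented, not from the article):

    from collections import defaultdict

    # (branch, region, product) -> sales total extracted from a branch office
    sales = {
        ('branch-1', 'east', 'widgets'): 100,
        ('branch-2', 'east', 'widgets'): 150,
        ('branch-3', 'west', 'widgets'): 200,
        ('branch-3', 'west', 'gadgets'): 50,
    }

    # Vertical aggregation: branch totals roll up into regional totals,
    # and regional totals roll up into a single corporate total.
    regional = defaultdict(int)
    for (branch, region, product), total in sales.items():
        regional[region] += total
    corporate = sum(regional.values())

    # Horizontal aggregation: one product summed across all stores.
    by_product = defaultdict(int)
    for (branch, region, product), total in sales.items():
        by_product[product] += total

    print dict(regional)    # {'east': 250, 'west': 250}
    print corporate         # 500
    print dict(by_product)  # {'widgets': 450, 'gadgets': 50}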

Amazon EC2 spot instances can be used to perform ETL operations by region or by product line. A summary script could be used to total data from multiple spot instances. The same script could detect when a region or product total has not been completed, presumably because Amazon reclaimed the spot instance, and restart the job to calculate the missing data.
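
A minimal sketch of such a summary script follows; the region names and the resubmit callback are hypothetical stand-ins for whatever job-submission mechanism you use:

    EXPECTED_REGIONS = ['east', 'west', 'north', 'south']

    def summarize(results, resubmit):
        """results maps region -> total produced by a spot instance;
        resubmit is a callback that re-queues the ETL job for a region."""
        grand_total, missing = 0, []
        for region in EXPECTED_REGIONS:
            if region in results:
                grand_total += results[region]
            else:
                # No total arrived -- the instance was likely reclaimed.
                missing.append(region)
                resubmit(region)
        return grand_total, missing

    total, missing = summarize({'east': 250, 'west': 250},
                               resubmit=lambda region: None)
    print total, missing    # 500 ['north', 'south']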

Challenges with big data analysis and Amazon EC2 spot instances 
Amazon spot instances are ideal for use with big data analysis in the cloud, but you can encounter some problems. Moving big data to the cloud can be costly and slow.

If the large volumes of data you want to analyze are already in a public cloud, then spot instances will work. However, if the cost of uploading and storing large volumes of data outweighs the savings of using cloud computing resources, consider using in-house clusters or a private on-premises cloud.

If spot instances are an option for your big data analysis requirements, you might be able to use Amazon Elastic MapReduce (EMR) with EC2 spot instances. EMR implements the map reduce model to process big data. This computational model works well for tasks in which large volumes of data can be analyzed independently (the map phase); results are combined in a new set of data (the reduce phase) that is processed in a similar map-reduce pattern.
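
Hadoop Streaming is one common way to express that pattern. The mapper and reducer below are a minimal Python sketch; the tab-separated store/product/amount input format is an assumption for illustration, not from the article:

    # mapper.py: reads "store<TAB>product<TAB>amount" records from stdin
    # and emits "product<TAB>amount" so sales can be summed per product.
    import sys

    for line in sys.stdin:
        store, product, amount = line.rstrip('\n').split('\t')
        print '%s\t%s' % (product, amount)

    # reducer.py: Hadoop delivers mapper output sorted by key, so totals
    # can be accumulated in a single pass over stdin.
    import sys

    current_key, total = None, 0.0
    for line in sys.stdin:
        key, amount = line.rstrip('\n').split('\t')
        if key != current_key:
            if current_key is not None:
                print '%s\t%s' % (current_key, total)
            current_key, total = key, 0.0
        total += float(amount)
    if current_key is not None:
        print '%s\t%s' % (current_key, total)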

Many, but not all, big data projects are a good fit for map reduce. Network analysis problems, such as analyzing social networks or the flow of email messages, do not lend themselves to map reduce. In addition to providing a scalable platform for analyzing large data sets, EMR provides fault-tolerant capabilities that support recovery when spot instances are reclaimed.

Amazon EMR is only one way to introduce fault tolerance into your application architecture. If you are working with custom applications that were not designed with fault tolerance in mind, consider using a check-pointing strategy to save information about the computational state to persistent storage. When your application starts, check-point capabilities can detect the status of the last saved state and continue processing from that point.
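
Here is a minimal file-based sketch of that strategy; the checkpoint path and the record-processing stub are hypothetical:

    import json, os

    CHECKPOINT = '/mnt/persistent/checkpoint.json'  # hypothetical durable volume

    def process_record(i):
        pass    # stand-in for the real unit of work

    def load_checkpoint():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)
        return {'next_record': 0}   # first run: start from the beginning

    def save_checkpoint(state):
        tmp = CHECKPOINT + '.tmp'
        with open(tmp, 'w') as f:
            json.dump(state, f)
        os.rename(tmp, CHECKPOINT)  # atomic rename avoids a corrupt checkpoint

    state = load_checkpoint()
    for i in xrange(state['next_record'], 1000000):
        process_record(i)
        if i % 1000 == 0:           # checkpoint every 1,000 records
            save_checkpoint({'next_record': i + 1})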

You can also use message queuing to keep a list of tasks that still need to be processed. Applications running on Amazon EC2 spot instances can take a task from a pending jobs queue and add a message to an in-process queue to indicate a spot instance is working on the task. When a job is complete, the application removes the job from the in-process queue. Scripts can run periodically to check the age of jobs in the in-process queue and add jobs back to the pending queue if they have not been completed in a reasonable amount of time (presumably because the spot instance was reclaimed).
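
A minimal sketch of the two-queue idea with boto 2.x and Amazon SQS follows; the queue names and the do_work() helper are hypothetical. One wrinkle: SQS deletes a message via the receipt handle returned by a read, so the completion step reads the marker back before deleting it.

    import boto.sqs
    from boto.sqs.message import Message

    def do_work(body):
        pass    # stand-in for the actual task

    conn = boto.sqs.connect_to_region('us-east-1')
    pending = conn.get_queue('pending-jobs')
    in_process = conn.get_queue('in-process-jobs')

    task = pending.read(visibility_timeout=60)
    if task is not None:
        body = task.get_body()

        marker = Message()
        marker.set_body(body)           # record that this task is in progress
        in_process.write(marker)
        pending.delete_message(task)

        do_work(body)

        # Task complete: find this task's marker and remove it. Markers that
        # linger past a reasonable age signal a reclaimed spot instance, and
        # a periodic script can push those tasks back onto the pending queue.
        m = in_process.read()
        while m is not None:
            if m.get_body() == body:
                in_process.delete_message(m)
                break
            m = in_process.read()       # other markers reappear after their
                                        # visibility timeout expires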

Spot instances can help reduce costs when you run big data analytics operations in the cloud. Be sure to consider the performance requirements and fault-tolerance characteristics of your application before running it on spot instances.

 

Dan Sullivan, M.Sc., is an author, systems architect and consultant with over 20 years of IT experience, with engagements in advanced analytics, systems architecture, database design, enterprise security and business intelligence. He has worked in a broad range of industries, including financial services, manufacturing, pharmaceuticals, software development, government, retail and education, among others. Dan has written extensively about topics ranging from data warehousing, cloud computing and advanced analytics to security management, collaboration, and text mining.



This was first published in April 2012

Article source: http://www.pheedcontent.com/click.phdo?i=a091661e1b6805c67b9735cfc7f59446

Examining the state of PaaS in the year of “big data”

Tuesday, March 27th, 2012

This year has already been marked as the year of “big data” in the cloud, with major PaaS players, such as Amazon, Google, Heroku, IBM and Microsoft, getting a lot of publicity. But which providers actually offer the most complete Apache Hadoop implementations in the public cloud?

It’s becoming clear that Apache Hadoop, along with HDFS, MapReduce, Hive, Pig and other subcomponents, is gaining momentum for big data analytics as enterprises increasingly adopt Platform as a Service (PaaS) cloud models for enterprise data warehousing. To show that Hadoop has matured and is ready for use in production analytics cloud environments, the Apache Foundation upgraded it to Hadoop v1.0.

The capability to create highly scalable, pay-as-you-go Hadoop clusters in providers’ data centers for batch processing with hosted MapReduce allows enterprise IT departments to avoid capital expenses for on-premises servers that would be used only sporadically. As a result, it has become de rigueur for deep-pocketed PaaS providers (Amazon, Google, IBM and Microsoft) to package Hadoop, MapReduce or both as prebuilt services.

AWS Elastic MapReduce
Amazon Web Services (AWS) was first out of the gate with Elastic MapReduce (EMR) in April 2009. EMR handles Hadoop cluster provisioning, runs and terminates jobs, and transfers data between Amazon EC2 and Amazon S3 (Simple Storage Service). EMR also offers Apache Hive, which is built on Hadoop for data warehousing services.

EMR is fault tolerant for worker failures; Amazon recommends running only the Task Instance Group on spot instances to take advantage of the lower cost while still maintaining availability. However, AWS didn’t add support for spot instances until August 2011.
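
A sketch of what that looks like with the boto 2.x library (bucket names, script paths, instance counts and the bid price are hypothetical): the master and core groups run on demand, while only the task group bids for spot capacity, so a reclaimed instance slows the job down without losing HDFS data.

    import boto.emr
    from boto.emr.step import StreamingStep
    from boto.emr.instance_group import InstanceGroup

    conn = boto.emr.connect_to_region('us-east-1')

    step = StreamingStep(
        name='Aggregate sales',
        mapper='s3n://my-bucket/scripts/mapper.py',
        reducer='s3n://my-bucket/scripts/reducer.py',
        input='s3n://my-bucket/input/',
        output='s3n://my-bucket/output/')

    groups = [
        InstanceGroup(1, 'MASTER', 'm1.small', 'ON_DEMAND', 'master'),
        InstanceGroup(2, 'CORE',   'm1.small', 'ON_DEMAND', 'core'),
        InstanceGroup(4, 'TASK',   'm1.small', 'SPOT', 'task', bidprice='0.05'),
    ]

    jobflow_id = conn.run_jobflow(
        name='Sales ETL',
        log_uri='s3n://my-bucket/logs/',
        steps=[step],
        instance_groups=groups)
    print jobflow_id    # e.g., 'j-...' once the flow is accepted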

Amazon applies surcharges of $0.015 per hour to $0.50 per hour for EMR on top of the rates for Small through Cluster Compute Eight Extra Large EC2 instances. According to AWS: Once you start a job flow, Amazon Elastic MapReduce handles Amazon EC2 instance provisioning, security settings, Hadoop configuration and setup, log collection, health monitoring and other hardware-related complexities, such as automatically removing faulty instances from your running job flow. AWS recently announced free CloudWatch metrics for EMR instances (Figure 1).

Google AppEngine-MapReduce
According to Google developer Mike Aizatskyi, all Google teams use MapReduce, which the company first introduced in 2004. Google released the AppEngine-MapReduce API as an “early experimental release of the MapReduce API” to support running Hadoop 0.20 programs on Google App Engine. The team later released the low-level Files API v1.4.3 in March 2011 to provide a file-like system for intermediate results stored in Blobs, along with improved open-source User-Space Shuffler functionality (Figure 2).

The Google AppEngine-MapReduce API orchestrates the Map, Shuffle and Reduce operations via the Google Pipeline API. The company described AppEngine-MapReduce’s current status in a video presentation for I/O 2012. However, Google hadn’t changed the “early experimental release” description as of Spring 2012. AppEngine-MapReduce is targeted at Java and Python coders, rather than big data scientists and analytics specialists. The Shuffler is limited to data sets of approximately 100 MB, which doesn’t qualify as big data; you can request access to Google’s BigShuffler for larger data sets.

Heroku Treasure Data Hadoop add-on
Heroku’s Treasure Data Hadoop add-on enables DevOps teams to use Hadoop and Hive to analyze hosted application logs and events, which is one of the primary functions for the technology. Other Heroku big data add-ons include Cloudant’s implementation of Apache CouchDB, MongoDB from MongoLab and MongoHQ, Redis To Go, Neo4j (a public beta of the graph database for Java) and RESTful Metrics. AppHarbor, called by some “Heroku for .NET,” offers a similar add-on lineup with Cloudant, MongoLab, MongoHQ and Redis To Go, and RavenHQ NoSQL database add-ins. Neither Heroku nor AppHarbor hosts general-purpose Hadoop implementations.

IBM Apache Hadoop in SmartCloud
IBM began offering Hadoop-based data analytics in the form of InfoSphere BigInsights Basic on IBM SmartCloud Enterprise in October 2011. BigInsights Basic, which can manage up to 10 TB of data, is also available as a free download for Linux systems; BigInsights Enterprise is a fee-based download. Both downloadable versions offer Apache Hadoop, HDFS and the MapReduce framework, as well as a complete set of Hadoop subprojects. The downloadable Enterprise edition includes an Eclipse-based plug-in for writing text-based analytics, spreadsheet-like data discovery and exploration tools, as well as JDBC connectivity to Netezza and DB2. Both editions provide integrated installation and administration tools (Figure 3).
 

My Test-Driving IBM’s SmartCloud Enterprise Infrastructure as a Service: Part 1 and Part 2 tutorials describe the central features of the free SmartCloud Enterprise trial version offered in Spring 2011. It’s not clear from IBM’s technical publications which features of the downloadable BigInsights versions are available in the public cloud. IBM’s Cloud Computing: Community resources for IT professionals page lists only one BigInsights Basic 1.1: Hadoop Master and Data Nodes software image; an IBM representative confirmed the SmartCloud version doesn’t include MapReduce or other Hadoop subprojects. Available Hadoop tutorials for SmartCloud explain how to provision and test a three-node cluster on SmartCloud Enterprise. It appears IBM is missing elements critical for data analytics in the current BigInsights cloud version.

Microsoft Apache Hadoop on Windows Azure

Microsoft hired Hortonworks, a Yahoo! spinoff that specializes in Hadoop consulting, to help implement Apache Hadoop on Windows Azure, or Hadoop on Azure (HoA). HoA has been in an invitation-only community technical preview (CTP, or private beta) stage since December 14, 2011.

Before joining the Hadoop bandwagon, Microsoft relied on Dryad, a graph database developed by Microsoft Research, and the High-Performance Computing add-in (LINQ to HPC) to handle big data analytics. The Hadoop on Azure CTP offers a choice of predefined Hadoop clusters ranging from Small (four computing nodes with 4 TB of storage) to Extra Large (32 nodes with 16 TB), simplifying MapReduce operations. There’s no charge to join the CTP for prerelease compute nodes or storage.

Microsoft also provides new JavaScript libraries to make JavaScript a first-class programming language in Hadoop. This means JavaScript programmers can write MapReduce programs in JavaScript and run these jobs from Web browsers, which reduces the barrier to Hadoop/MapReduce entry. The CTP also includes a Hive add-in for Excel that lets users interact with data in Hadoop. Users can issue Hive queries from the add-in to analyze unstructured data from Hadoop in the familiar Excel user interface. The preview also includes a Hive ODBC Driver that integrates Hadoop with other Microsoft BI tools. In a recent blog post on Apache Hadoop Services for Windows Azure, I explain how to run the Terasort benchmark, one of four sample MapReduce jobs (Figure 4).

HoA is due for an upgrade in the “Spring Wave” of new and improved features scheduled for Windows Azure in mid-2012. The upgrade will enable the HoA team to admit more testers to the CTP and will probably include the promised Apache Hadoop on Windows Server 2008 R2 for on-premises, private cloud and hybrid cloud implementations. Microsoft aggressively reduced charges for Windows Azure compute instances and storage during late 2011 and early 2012; pricing for Hadoop on Azure’s release version probably will be competitive with Amazon Elastic MapReduce.

Big data will mean more than Hadoop and MapReduce
I agree with Forrester Research analyst James Kobielus, who blogged, “Within the big data cosmos, Hadoop/MapReduce will be a key development framework, but not the only one.” Microsoft also offers the Codename “Cloud Numerics” CTP for the .NET Framework, which allows DevOps teams to perform numerically intensive computations on large distributed data sets in Windows Azure.

Microsoft Research has posted source code for implementing Excel cloud data analysis in Windows Azure with the Project “Daytona” iterative MapReduce implementation. However, it appears open source Apache Hadoop and its related subprojects will dominate cloud-hosted scenarios for the foreseeable future.

PaaS providers that offer the most automated Hadoop, MapReduce and Hive implementations will gain the biggest following among big data scientists and data analytics practitioners. Microsoft’s provision of an Excel front end for business intelligence (BI) applications gives the company’s big data offerings a head start with a growing number of self-service BI users. Amazon and Microsoft currently provide the most complete and automated cloud-based Hadoop big data analytics services.

 

Roger Jennings is a data-oriented .NET developer and writer, a Windows Azure MVP, principal consultant of OakLeaf Systems and curator of the OakLeaf Systems blog. He’s also the author of 30+ books on the Windows Azure Platform, Microsoft operating systems (Windows NT and 2000 Server), databases (SQL Azure, SQL Server and Access), .NET data access, Web services and InfoPath 2003. His books have more than 1.25 million English copies in print and have been translated into 20+ languages.



This was first published in March 2012

Article source: http://www.pheedcontent.com/click.phdo?i=6eeb91248f3974396c7293b894d92caf

RENCI-Duke Project Aims to Use Data to Improve Medical Treatment Decisions

Friday, July 22nd, 2011

Newswise — A grant from the Agency for Healthcare Research and Quality (AHRQ) will enable RENCI (the Renaissance Computing Institute at UNC Chapel Hill) and Duke University to develop a system that aggregates and visualizes historical medical data so doctors can use it to help them make the best possible treatment decisions for their patients.

AHRQ, a division of the U.S. Department of Health and Human Services, will provide $300,000 over two years to RENCI and the Duke University Health System to develop VisualDecisionLinc, a software prototype that integrates historical patient data and comparative data from similar patients, all derived from electronic medical records (EMRs), into a decision support tool.

The VisualDecisionLinc system is built on the hypothesis that doctors will make better treatment decisions if they can quickly access and easily analyze data about similar patients and the effectiveness of various treatments.

The system uses data from the MindLinc EMR system developed at Duke University Medical Center. MindLinc-EMR is a widely used behavioral health EMR system containing data from more than 2.1 million patient encounters, making it the largest data warehouse of anonymous psychiatry data in the U.S.

The AHRQ-funded work will build on an ongoing RENCI-Duke project and will focus on three key initiatives:
• Developing the best processes for selecting comparative populations. The researchers will use demographic data, case histories and diagnoses to help clinicians select comparative populations from the EMR that are most relevant to their patients.
• Creating a visual user interface to aid in selecting the best treatment choices. Clinicians need to be able to find the critical information in their datasets quickly and to view data in a way that is easy to analyze. Visualization and visual analytics techniques will be used to aggregate, view and interact with large volumes of patient data, and to help clinicians understand their data quickly.
• Evaluating the effectiveness of VisualDecisionLinc in preparation for a larger-scale research implementation.

“Our premise is straightforward,” said Ketan Mane, senior research informatics developer at RENCI, who is creating VisualDecisionLinc with Chris Bizon, a RENCI senior research scientist; Phil Owen, a RENCI IT developer; and Charles Schmitt, RENCI’s director of informatics. “The EMRs include huge amounts of patient data on diagnoses, medication, and treatment outcomes, but doctors don’t have time to analyze pages and pages of data in a spreadsheet format. We want to use information technology to solve this information overload problem, while at the same time gathering insights about data characteristics for better decision support.”

The focus of the RENCI-Duke research project is to provide EMR data to clinicians in ways that are useful (for example, summaries of patients with similar medical profiles) and in a visual format that is easy to understand, said Mane, “so that the data can be used to support clinical decision-making at the point of care.”

The RENCI team will work with Dr. Kenneth Gersing, a psychiatrist and the medical director of clinical informatics in the psychiatry department at Duke University; Dr. Ricardo Pietrobon, vice chair of the department of medicine at Duke; and Bruce Burchett, an assistant professor of psychiatry at Duke. Drs. Ranga Krishnan and John Rush, dean and vice dean of clinical sciences at the Duke-NUS Graduate Medical School in Singapore, will serve as advisors to the project.

The Duke team linked up with RENCI two years ago to assist in an ongoing effort to use electronic medical records to improve medical decision-making.

“The goal is to use EMRs to make the best treatment decisions possible and to improve patient outcomes,” said Gersing, who led the development of MindLinc-EMR. “If we can treat patients more effectively, it means fewer clinician visits, a better quality of life, and lower medical costs.”



Article source: http://www.newswise.com/articles/renci-duke-project-aims-to-use-data-to-improve-medical-treatment-decisions