Amazon Web Services exec talks interoperability lessons from the past year

May 12, 2021

The biggest barrier to physicians having the most complete medical history for their patients at every point of patient care is the lack of interoperability among information systems. This prevents electronic health records and data from other systems from following a patient.

A manual, time-consuming process is required to bring this information together. However, this is one of those pivotal moments when the healthcare industry has an opportunity to take what has been learned over the last year and identify and fix the underlying problems that plague the healthcare and life sciences community, said Pat Combes, worldwide technical leader, healthcare and life sciences, at Amazon Web Services.

Healthcare IT News interviewed Combes to discuss interoperability challenges, problems identified during the past chaotic year, and how interoperability can help with personalized care.

Q: Where does the healthcare industry stand with interoperability today? What is the status quo, and what are the top couple of challenges?

A: The industry has made progress in developing open standards and application programming interfaces to facilitate data fluidity and sharing among multiple electronic health record systems and data repositories. As a result, commercial and open source interoperability services are coming online to help break down information silos so data can better support clinical decisions that influence a patient's health and wellbeing.

While there is room for optimism, the industry is still grappling with data structure and management challenges.

First, incomplete, disparate and disconnected data. Most health and patient data is stored in unstructured medical formats, and identifying information within that data is a manual, time-consuming process. There are significant variations in the way data is shared, read and understood across health systems, which can result in information being siloed, overlooked or misinterpreted.

Further, most EHR systems do not follow patients on their care journey beyond the hospital or clinic walls. As a result, only a portion of healthcare data is available at any point of care, resulting in a fragmented view of a patient’s health history.

Second, slow adoption and scaling of open interoperability standards. We all agree that standards can streamline the structured data exchange needed to improve preventive and value-based care, predictions, diagnostics, post-marketing surveillance of medical products (for example, drugs and devices), care quality, cost reduction and clinical research.

Industry guidelines and resources like Fast Healthcare Interoperability Resources (FHIR) from Health Level Seven International (HL7) have helped set a standard. Still, more work is needed to help organizations remove barriers to adoption and make the electronic exchange of data more seamless, with the goal of a better provider and patient experience.

And third, risks due to siloed data. When it comes to storing health information – including clinical, genomic, device, financial, supply chain and claims data – security is priority No. 1. Storing patient data across different systems and platforms makes it difficult to deliver personalized care, draw data insights and streamline service.

Data lakes house sensitive patient and administrative information in one secure, strategic and cost-friendly platform so hospitals can access their data more easily while meeting high security and compliance standards.

While these challenges persist, cloud technology is being leveraged in remarkable ways to break down data silos and facilitate interoperability, while ensuring patient data is secure. For example, Change Healthcare recently launched Social Determinants of Health (SDOH) Analytics, an innovative national data resource that links de-identified claims with factors such as financial stability, education level, ethnicity, housing status and household characteristics.

The resulting data set is de-identified in accordance with HIPAA privacy regulations. That helps health systems, insurers and life sciences organizations explore how geo-demographic factors affect clinical care and patient outcomes.

Q: You’ve said that this is a pivotal moment in time when healthcare can take what it’s learned over the past year and fix the underlying problems. What happened and what can be done with interoperability?

A: Perhaps the most important learning is that achieving true healthcare interoperability requires understanding, evaluating and solving issues in the underlying syntactic and semantic characteristics of the data. 

Syntactic interoperability requires a common structure so that data can be exchanged and interpreted between health IT systems, while semantic interoperability requires a common language so that the meaning of data is transferred along with the data itself. This combination supports data fluidity.
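Both layers can be seen in a single FHIR resource: the JSON shape is the syntactic contract, while a shared terminology code (here, LOINC) carries the semantics, so a receiving system knows not just how to parse the value but what it means. The values below are illustrative, not drawn from any real system.

```python
# A minimal FHIR Observation expressed as a Python dict.
# The JSON structure is the syntactic layer; the LOINC coding
# (8867-4, heart rate) is the semantic layer.
import json

observation = {
    "resourceType": "Observation",         # syntactic: FHIR-defined structure
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",  # semantic: shared terminology
            "code": "8867-4",              # LOINC code for heart rate
            "display": "Heart rate",
        }]
    },
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}

payload = json.dumps(observation)          # serialized for exchange
decoded = json.loads(payload)              # any FHIR-aware receiver can parse it
print(decoded["code"]["coding"][0]["display"])  # Heart rate
```

Two systems that agree only on the JSON shape achieve syntactic interoperability; agreeing on the LOINC vocabulary as well is what makes the exchange semantically interoperable.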

The industry has made meaningful progress on this front. As mentioned previously, the FHIR open standard has emerged to act like a lingua franca, providing a universal adapter for sharing clinically relevant data easily and securely from any EHR or clinical system and allowing software developers to build high-quality applications.

FHIR enables healthcare solutions providers to build secure, compliant and scalable solutions for the delivery and exchange of medical information across the healthcare industry.

Another promising area is the development of APIs for clinical exchange and administrative automation. HIPAA-compliant interoperability APIs are helping the industry develop and use open standards, such as FHIR, for easy exchange of information, freeing providers, payers and patients themselves from the confines of proprietary data formats and systems.
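A flavor of what "open standard" means in practice: a FHIR search is just an HTTP GET against a predictable URL, so any client can construct one. The sketch below builds a standard Patient search URL; the base endpoint is hypothetical, and a real deployment would substitute its own FHIR server and add authentication.

```python
# Building a FHIR RESTful search URL: GET [base]/Patient?family=...&birthdate=...
# The base URL is a hypothetical endpoint for illustration only.
from urllib.parse import urlencode

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical FHIR R4 server

def patient_search_url(family_name, birthdate):
    """Compose a standard FHIR Patient search using common search parameters."""
    params = urlencode({"family": family_name, "birthdate": birthdate})
    return f"{FHIR_BASE}/Patient?{params}"

url = patient_search_url("Rivera", "1984-07-02")
print(url)
# https://fhir.example.org/r4/Patient?family=Rivera&birthdate=1984-07-02
```

Because the URL grammar and parameter names are defined by the FHIR specification rather than any one vendor, the same client code works against any conformant server.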

The industry is also developing reference implementations of FHIR APIs, using serverless technology as a cost-efficient and flexible approach to these interfaces. When coupled with access to more than 100 HIPAA-eligible features and services, with a wide range of certifications and attestations, this approach can help support compliance programs worldwide.
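In a serverless reference implementation, each FHIR interaction maps to a small stateless handler. The toy handler below serves a FHIR read interaction (GET /Patient/{id}); the event shape mimics an API-gateway request, and the in-memory dictionary is a stand-in for a real FHIR data layer. None of this is a particular vendor's implementation.

```python
# Illustrative serverless-style handler for a FHIR read interaction.
# PATIENTS stands in for a real persistence layer.
import json

PATIENTS = {"123": {"resourceType": "Patient", "id": "123", "active": True}}

def handler(event, context=None):
    """Route GET /Patient/{id} to a stored FHIR resource."""
    parts = event.get("path", "").strip("/").split("/")
    if len(parts) == 2 and parts[0] == "Patient":
        resource = PATIENTS.get(parts[1])
        if resource:
            return {"statusCode": 200, "body": json.dumps(resource)}
        # FHIR reports errors as OperationOutcome resources
        return {"statusCode": 404,
                "body": json.dumps({"resourceType": "OperationOutcome"})}
    return {"statusCode": 400, "body": ""}

print(handler({"path": "/Patient/123"})["statusCode"])  # 200
```

Because handlers like this hold no state between invocations, they scale down to zero when idle, which is the cost-efficiency argument for the serverless approach.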

As technology creates more data across healthcare organizations, applying technologies like artificial intelligence and machine learning will be essential to help take that data and create the shared structure and meaning necessary to achieve interoperability. 

Shared structure and meaning will enable interoperability solutions that transform data input from various media types and forms – voice, image, scan, PDF – into a common text format that can be shared with and leveraged by every entity in the value chain.
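The normalization step can be pictured as a single dispatch interface over many input types. The extractors below are placeholders: a production pipeline would plug OCR, speech-to-text, or document-parsing services in behind the same interface, and the tags are invented for illustration.

```python
# Sketch of funneling heterogeneous media into one common text format.
# Each lambda stands in for a real extraction service (OCR, transcription, etc.).
def extract_text(media_type, payload):
    """Normalize a supported input into plain text with a provenance tag."""
    extractors = {
        "text":  lambda p: p,                       # already text
        "voice": lambda p: f"[transcribed] {p}",    # stand-in for speech-to-text
        "scan":  lambda p: f"[ocr] {p}",            # stand-in for OCR
        "pdf":   lambda p: f"[parsed] {p}",         # stand-in for PDF extraction
    }
    if media_type not in extractors:
        raise ValueError(f"unsupported media type: {media_type}")
    return extractors[media_type](payload)

print(extract_text("scan", "discharge summary"))  # [ocr] discharge summary
```

Once everything downstream consumes the same text format, the same analytics and exchange tooling works regardless of how the information originally entered the system.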

Instead of moving static, electronic documents or faxes like care summaries between healthcare providers, clinical AI-service APIs can enable EHR vendors and health systems to communicate in a standardized way with apps and other EHRs. 

With access to all available information, advanced analytics and machine learning can then enhance medical and scientific insights tied to patient outcomes in an accurate, scalable, secure and timely manner.

An example of how standards and open APIs can help improve patients’ health outcomes and overall experience is the FHIR-enabled storage and APIs created by Seattle-based Fred Hutchinson Cancer Research Center to enable care coordination between oncologists and primary care providers.

Fred Hutchinson Cancer Research Center used the APIs to give patients an application that supports their treatment regimens, including appointment follow-up and engagement with multiple providers, and offers visibility into disease progression.

Q: You said there will be a time when our most challenging medical conditions like cancer and diabetes can be treated with tailored medicines and personalized care, all supported by technology, interoperability and collaboration. How exactly does interoperability help, and what needs to be done to get there?

A: As the country moves toward value-based care, artificial intelligence and machine learning, paired with data interoperability, will improve patient outcomes while driving operational efficiency to lower the overall cost of care.

By enabling secure data liquidity and supporting healthcare providers with predictive machine learning models, clinicians will be able to forecast clinical events like strokes, cancer or heart attacks, and intervene early with personalized care and a superior patient experience.

As healthcare organizations take the necessary steps toward syntactic and semantic interoperability, the industry will be able to use data to place a renewed focus on key patient care initiatives.

One such initiative is early detection of serious disease and prediction of patient health events. New interoperability solutions are giving health practitioners access to all the pieces of a patient's medical puzzle by pulling anonymized patient data together into longitudinal records that can be enriched with physician-identified correlations.

This data, coupled with other unstructured data, can power machine learning models and algorithms that help with earlier detection of diseases such as congestive heart failure – potentially months before clinical manifestation. Pairing this predictive tool with real-time integration into individual health records can support provider decision-making in real time.
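The shape of such a predictive tool can be sketched with a logistic risk score over longitudinal features. Everything here is invented for illustration – the feature names, weights and threshold do not come from any validated model; a real one would be trained on de-identified records and clinically validated.

```python
# Illustrative logistic risk score for early CHF detection.
# Weights and features are hypothetical, for demonstration only.
import math

WEIGHTS = {
    "resting_heart_rate": 0.04,   # per beat/minute
    "weight_gain_kg_30d": 0.5,    # fluid retention signal
    "bnp_elevated": 1.2,          # lab flag (0 or 1)
}
BIAS = -6.0

def chf_risk(features):
    """Map longitudinal features to a probability via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

low = chf_risk({"resting_heart_rate": 65, "weight_gain_kg_30d": 0.0, "bnp_elevated": 0})
high = chf_risk({"resting_heart_rate": 95, "weight_gain_kg_30d": 4.0, "bnp_elevated": 1})
print(high > low)  # True
```

The interoperability point is the input side: a score like this is only computable when heart rate, weight trends and labs from different systems arrive in one longitudinal record.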

With machine learning applied to this data, providers can identify at-risk patients, deliver definitive diagnoses and develop evidence-based treatment plans to drive meaningful patient results. That orchestration and execution of data is the definition of valuable patient-focused care – and the future of what we see for interoperability driven by AI and machine learning.

For example, the Pittsburgh Health Data Alliance is seeing strong dividends from its machine learning work to study breast cancer risk and identify depression markers. Deep learning systems are being used to analyze mammograms in order to predict the short-term risk of developing breast cancer.

More accurate predictions from screening images can help reduce unnecessary imaging examinations and clinical procedures, decrease patients' anxiety from inaccurate risk assessments, and cut costs.

In a second project, machine learning models are enabling new sensing technologies that can automatically measure subtle changes in an individual’s behavior – such as facial expressions and language use – that can act as biomarkers for depression. 

A quick and objective marker of depression could help clinicians more efficiently assess patients at baseline, identify patients who would otherwise go undiagnosed, and more accurately measure patients’ responses to interventions.

Another initiative is accelerating the design and delivery of new therapies. Open standards like FHIR and APIs are enabling players across the value chain to promote and scale interoperability for greater and more efficient access to clinical data.

A great example of the power of shared structure and meaning is the COVID-19 Open Research Dataset, or CORD-19. Developed in 2020 by a coalition of research groups, it provides open access to the full body of available global COVID-19 research and data.

Freely available research data, combined with AI tools to help researchers find information relevant to their work, was one of the key reasons COVID-19 vaccines were developed so quickly. The shared research environment allowed scientists to quickly identify the most promising immunologic target – antibodies to the coronavirus spike protein – and engineer mRNA vaccines to trigger those antibodies.

Moving forward, this type of shared resource has the potential to help the scientific community streamline vaccine development for diseases with even more complex immune responses, such as diabetes. On a broader scale, these collective efforts across the industry can advance the ability to manage, mitigate and cure disease on a global level, and bring greater transparency to the healthcare system, safely and effectively.

And another initiative is personalizing the consumer health journey. Interoperability of healthcare data is key to being able to identify the unique needs of each individual, which is essential to creating a frictionless and more personalized patient experience.

For example, early in the pandemic, MetroPlus Health identified approximately 85,000 at-risk individuals (for example, comorbid heart or lung disease, or immunocompromised) who would require additional support services while sheltering in place. 

In order to engage and address the needs of this high-risk population, MetroPlus Health quickly developed capabilities to connect each individual with the resources to ensure their specific needs were met.

The MetroPlus Health team worked closely with its partners, including AIRnyc, a local data-driven community-based organization that deploys community health workers to help people navigate the health and social care landscape. Together, the organizations identified resources to support patients, leveraging existing systems – the NYC COVID hotline, the COVID Emotional Wellness line and MetroPlus Health's telehealth vendor – as well as staff from MetroPlus Health's pharmacy customer phone lines.

To connect New Yorkers in need with these partners, MetroPlus Health leaned on cloud technology to build an SMS-based chatbot. The chatbot helped MetroPlus Health reach tens of thousands of individuals by SMS and then connect them to available resources.
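The core of outreach triage like this can be as simple as routing a reply to a resource category. The sketch below is a toy keyword router, not MetroPlus Health's actual system; the keywords and resource names are invented for illustration.

```python
# Toy keyword-based triage for SMS replies (illustrative only).
# Keywords and resource names are hypothetical.
ROUTES = {
    "food": "food delivery partner",
    "medicine": "pharmacy support line",
    "anxious": "emotional wellness line",
}

def route_reply(message):
    """Map an incoming SMS reply to a resource, with a human fallback."""
    text = message.lower()
    for keyword, resource in ROUTES.items():
        if keyword in text:
            return resource
    return "care manager callback"  # unmatched replies go to a person

print(route_reply("I need help getting food"))  # food delivery partner
```

In practice the fallback branch matters most: anything the automation cannot classify is handed to a care manager rather than dropped.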

After the initial pilot, MetroPlus Health care managers began leveraging a community-based organization referral platform, called NOW POW, and existing MetroPlus Health contracts and relationships with food delivery services, including God’s Love We Deliver. As a result, thousands of at-risk individuals were connected with necessary services while reducing their exposure to COVID-19.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
