Data Integration

Does FHIR Fully Address Healthcare Data Silo and Interoperability Challenges?

Actian Corporation

December 9, 2021


Let’s face it, information withholding is common in many industries, but it’s been especially rampant in healthcare. Payers and providers have avoided scrutiny of the cost of care delivered and the reimbursements they collect. Consequently, information blocking has become one of the key impediments to shifting from a fee-for-service (FFS) model to a more transparent value-based care (VBC) model. Of course, information blocking isn’t absolute or explicit. Instead, it manifests through infrastructural barriers such as the slow transition from paper-based records, manual processes, and the lingering need for one-off, point-to-point integrations between proprietary platforms. Sometimes, information blocking is a side effect of HIPAA and Meaningful Use, where well-meaning guidelines can prevent or discourage information sharing because the technology or process is not granular or flexible enough to enable sharing without sacrificing security and privacy.

Congress crafted the 21st Century Cures Act of 2016 to address these barriers to information sharing and bring more transparency to healthcare costs and reimbursements. Further, the Office of the National Coordinator for Health Information Technology (formerly ONCHIT, now ONC) defined specific data sharing standards (interoperability), guidelines for their use, and mandated implementation timelines for payers and providers to get their act together. The resulting interoperability standard, Fast Healthcare Interoperability Resources (FHIR, pronounced “Fire”), is the latest update to Health Level Seven (HL7), in effect HL7 v4. By 2024, the plan is to replace the current de facto standard HL7v2 (HL7v3 did not see widespread implementation in the US) with FHIR. With over 95% of healthcare organizations using HL7v2, you may want to better understand how this shift could impact your healthcare business.

Why FHIR Over HL7v2?

Why bother? HL7v2 has several drawbacks, the key ones being:

  • First and foremost, HL7v2 doesn’t provide a way for a human to view the contents within each message passed.
  • Secondly, healthcare messages can be extensive, consisting of several pieces of information that one may want to send or receive or view separately; HL7v2 doesn’t provide an easy way to do this.
  • Also, HL7v2 is limited to the last generation of standards for semi-structured data formatting and transfer, XML and SOAP.
  • And finally, HL7v2 didn’t start as a free open standard and didn’t become one until 2013, and most of the major early adopters’ versions predate this shift. The result is multiple implementations with backward compatibility and cross-vendor integration challenges.

FHIR solves for each of these HL7v2 deficiencies in the following ways:

  • FHIR is a free open standard from inception, designed for use by any developer – not just those in the healthcare space.
  • FHIR messages are based on XML, RDF, and JSON data formats, using RESTful APIs to expose the data as a set of web services (a minimal sketch follows this list).
  • Exactly what data is exchanged is also standardized: it must adhere to the US Core Data for Interoperability (USCDI) and must be stored and downloadable as a Consolidated Clinical Document Architecture (CCDA) document. When it comes to regulated standards, adoption depends on providing an agreed-upon set of data in a format that all systems can read.
  • FHIR messages are human-readable and modular, enabling specific elements to be shared and letting you eyeball whether you’ve received what you expected.
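
To make the REST-and-JSON point concrete, here is a minimal Python sketch of reading a single Patient resource over a FHIR server’s RESTful API. The base URL and resource id are placeholders rather than references to any particular server or product, and error handling is kept to a bare minimum.

```python
# Minimal sketch: reading a single FHIR resource over its RESTful API.
# The base URL below is a placeholder for whatever FHIR endpoint you have
# access to (for example, a vendor sandbox).
import requests

FHIR_BASE = "https://example-fhir-server.org/baseR4"  # placeholder base URL


def get_patient(patient_id: str) -> dict:
    """Fetch one Patient resource as JSON via the standard FHIR REST pattern
    GET [base]/Patient/[id]."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    patient = get_patient("example-id")  # hypothetical resource id
    # Because the payload is plain JSON, a human can read it directly.
    print(patient.get("resourceType"), patient.get("id"))
    for name in patient.get("name", []):
        print(name.get("family"), name.get("given"))
```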

There are more extensive comparisons of FHIR and HL7v2 and detailed descriptions of USCDI and CCDA on the government’s HealthIT website. The key takeaway should be that the new FHIR standard provides far more simplicity, granularity, and standardization of what data should be shared and how it should be formatted to enable that sharing. While FHIR removes interoperability barriers, it does not guarantee open and accessible healthcare data sharing if you don’t address the other data integration challenges surrounding healthcare data silos. Without pairing it with the right changes to your data analytics strategy and implementations, silos will remain. By extension, it will only marginally bolster the shift from FFS to VBC.

Interoperability is Only the Starting Point for Integration

FHIR interoperability will undoubtedly make it easier for providers to share their clinical data among themselves and with payers. It will also make incorporating external clinical data (such as state and federal population health data repositories) easier by exposing these external repositories as web services. Of course, this will require your IT teams to create more point-to-point connections. And, while you can reuse these point-to-point connections, there is a better way to optimize the use of FHIR: embed it in a data hub.

With data hub functionality, you can ingest, prepare, and combine complementary financial and operational data from provider and payer administrative systems with FHIR clinical data. Typically, payer data – and data sent to and from payers – is in EDI formats such as the X12 8xx transaction sets. Much of the social determinants of health (SDOH) data originates outside the healthcare arena in yet other formats. Data from FHIR resources must be unpacked and transformed into EDI or other formats, and vice versa. Because the combination of clinical, SDOH, and financial data is often necessary, point-to-point connections are suboptimal.
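
As an illustration of the kind of unpacking a data hub performs, the sketch below flattens a FHIR Patient resource into a simple row that could be joined with claim records already parsed out of X12 837 files. The flattened column names and the member_id linking key are assumptions made for this example, not part of either standard.

```python
# Illustrative sketch: flattening a FHIR Patient resource (JSON) into a flat
# record that can be joined with financial rows extracted from X12 837 claims.
# The target column names and the member_id linking key are assumptions made
# for this example, not part of either standard.

def flatten_patient(patient: dict) -> dict:
    """Pull a handful of demographic fields out of a nested FHIR Patient."""
    names = patient.get("name", [])
    official = next((n for n in names if n.get("use") == "official"),
                    names[0] if names else {})
    member_ids = [i.get("value") for i in patient.get("identifier", []) if i.get("value")]
    return {
        "patient_id": patient.get("id"),
        "member_ids": member_ids,
        "family_name": official.get("family"),
        "given_names": " ".join(official.get("given", [])),
        "birth_date": patient.get("birthDate"),
        "gender": patient.get("gender"),
    }


def join_with_claims(patients: list[dict], claims: list[dict]) -> list[dict]:
    """Join flattened patients to claim rows (e.g., parsed out of 837 files)
    on a shared member identifier."""
    by_member = {mid: p for p in map(flatten_patient, patients) for mid in p["member_ids"]}
    return [
        {**claim, **by_member[claim["member_id"]]}
        for claim in claims
        if claim.get("member_id") in by_member
    ]
```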

Instead, a virtual, central location to connect all possible data is a more efficient and flexible way to deal with the variety and diversity of healthcare data. This is the concept behind a data hub. Historically, healthcare data hubs have been very complex data integration platforms used by data engineers and other IT integration specialists. But, in the spirit of the data democratization that the Cures Act and FHIR promote, the “to be” healthcare data hub of the future should make it possible to avoid hard-coding and one-off scripts. A drag-and-drop, no-code/low-code approach avoids overly taxing healthcare IT teams and gives analysts and other power users self-service access to the data. Further, healthcare data hubs should have built-in transformation templates, since the USCDI and CCDA structures and EDI formats such as the 837 are well defined. This removes one of the largest time sinks for IT and empowers non-IT super users to work with the data directly.
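
One way to picture the “transformation template” idea is as a declarative mapping that an analyst configures and a generic engine applies. The sketch below is a simplified stand-in for that idea rather than a description of any particular product; the template keys, dotted paths, and column names are assumptions for illustration.

```python
# Simplified sketch of a template-driven transformation: a declarative mapping
# from source paths in a FHIR resource to target column names, applied by a
# generic function. The template itself is a made-up example.

from functools import reduce

PATIENT_TEMPLATE = {
    "patient_id": "id",
    "family_name": "name.0.family",
    "birth_date": "birthDate",
    "postal_code": "address.0.postalCode",
}


def get_path(resource: dict, path: str):
    """Walk a dotted path through nested dicts and lists ('name.0.family')."""
    def step(current, key):
        if current is None:
            return None
        if isinstance(current, list):
            index = int(key)
            return current[index] if index < len(current) else None
        return current.get(key)
    return reduce(step, path.split("."), resource)


def apply_template(resource: dict, template: dict) -> dict:
    """Produce a flat record by evaluating each template entry."""
    return {column: get_path(resource, path) for column, path in template.items()}
```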

Increased Data Sharing Doesn’t Automatically Mean Higher Data Quality

Another issue that FHIR doesn’t address, and may if anything exacerbate, is data quality. With broader and, hopefully, more frequent data sharing, there will always be conversion errors inherent in sending specific, separate resources instead of the monolithic records used with HL7v2. Furthermore, the need for data transformations will remain and grow as more diverse and disparate sets of data are shared. A healthcare data hub must be fluent in FHIR, EDI X12 8xx, legacy HL7 versions, and other standards to perform data ingestion, make the transformations between standards, and provide data quality functions such as deduplication and pattern recognition for common conversion errors. Finally, the data formats within each FHIR resource may change over time, so a healthcare data hub must track how specific data must be transformed as the standard evolves.
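
As a rough picture of these data quality functions, the sketch below deduplicates flattened patient records on a normalized name-plus-birth-date key and flags one common class of conversion error (malformed dates). The matching rule, field names, and error pattern are simplifying assumptions; production-grade matching and survivorship logic are considerably more sophisticated.

```python
# Rough sketch of two data quality steps: deduplication on a normalized key
# and pattern-based detection of a common conversion error (bad date formats).
# The matching key, field names, and regex are assumptions for illustration.

import re
from collections import defaultdict

ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # FHIR dates are ISO 8601


def dedup_key(record: dict) -> tuple:
    """Normalize name and birth date into a crude matching key."""
    return (
        (record.get("family_name") or "").strip().lower(),
        (record.get("given_names") or "").strip().lower(),
        record.get("birth_date"),
    )


def deduplicate(records: list[dict]) -> list[dict]:
    """Keep one record per key; a real hub would apply survivorship rules
    to decide which fields win."""
    groups = defaultdict(list)
    for record in records:
        groups[dedup_key(record)].append(record)
    return [group[0] for group in groups.values()]


def flag_bad_dates(records: list[dict]) -> list[dict]:
    """Return records whose birth_date does not match the expected pattern,
    e.g. values that survived a format conversion incorrectly."""
    return [r for r in records if r.get("birth_date") and not ISO_DATE.match(r["birth_date"])]
```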

All this data sharing between various providers and payers takes place between applications and data repositories. Point-to-point integrations represent the data connections between specific operational and business processes, supporting automated data analysis and decision support for care delivery and knowledge workers. Automating data ingestion and egress to and from the various applications, web services, and data repositories, along with preparation, transformation, and unification, is also a critical factor in successful data sharing. Again, FHIR facilitates – but doesn’t complete – the solution.
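
To illustrate the automation point, here is a sketch of incremental ingestion that polls a FHIR endpoint for resources changed since the last run, pages through the results, and hands each page off to a loading step. The base URL, the load_into_repository function, and the watermark value are placeholders for whatever the downstream repository and scheduler actually look like.

```python
# Sketch of automated incremental ingestion: periodically pull resources that
# changed since the last successful run, then hand them to a loading step.
# FHIR_BASE, load_into_repository, and the watermark value are placeholders.

import requests

FHIR_BASE = "https://example-fhir-server.org/baseR4"  # placeholder


def load_into_repository(resources: list[dict]) -> None:
    """Placeholder for writing into the hub's staging area or warehouse."""
    print(f"loaded {len(resources)} resources")


def ingest_changed(resource_type: str, since_iso: str) -> None:
    """Page through everything of one type updated after `since_iso`, using
    the standard _lastUpdated search parameter and Bundle 'next' links."""
    url = f"{FHIR_BASE}/{resource_type}?_lastUpdated=gt{since_iso}"
    while url:
        bundle = requests.get(
            url, headers={"Accept": "application/fhir+json"}, timeout=30
        ).json()
        resources = [e["resource"] for e in bundle.get("entry", []) if "resource" in e]
        load_into_repository(resources)
        url = next(
            (link["url"] for link in bundle.get("link", []) if link.get("relation") == "next"),
            None,
        )


if __name__ == "__main__":
    # A scheduler (cron, a workflow engine, etc.) would supply the real watermark.
    ingest_changed("Observation", "2021-12-01T00:00:00Z")
```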

Conclusions

The Patient Protection and Affordable Care Act of 2010 was a good start to broadening access to healthcare for millions of citizens without resorting to a single-payer system. Further, the sections of the law that promoted outcomes-focused Accountable Care Organizations partially addressed the need – or at least acknowledged it – to shift away from a fee-for-service model. The Cures Act, the ONC’s interpretation of it, and support for FHIR as a tool for data sharing will undoubtedly accelerate the shift to value-based care by creating a level playing field for data sharing. A more comprehensive data integration strategy is still required and should focus on combining healthcare and non-healthcare data and improving its quality.

A comprehensive data integration strategy, with a data hub as the most effective means of bringing all relevant and necessary data together, still cannot on its own answer questions about what needs adjusting in any given clinical, operational, or financial process or decision to improve outcomes. A cloud data warehouse and analytics tools need to be integrated with the data hub to analyze the data and extract the insights that drive the shift from FFS to VBC. In the last blog, we described the Actian Healthcare Data Analytics Hub, a more efficient and effective way to extract insights for Value-Based Care. In the next blog, we’ll look at some use cases where analytics drive the shift from FFS to VBC.

About Actian Corporation

Actian is helping businesses build a bridge to a data-defined future. We’re doing this by delivering scalable cloud technologies while protecting customers’ investments in existing platforms. Our patented technology has enabled us to maintain a 10-20X performance edge against competitors large and small in the mission-critical data management market. The most data-intensive enterprises in financial services, retail, telecommunications, media, healthcare and manufacturing trust Actian to solve their toughest data challenges.