Data Lakehouse Platform

Find an abundance of information in a data lakehouse

Why is the Data Lakehouse Platform Important?

If you store all data sets in a single repository without adequate descriptions, it rapidly devolves into a disused data swamp. Data sets need descriptive metadata that lets users find, use, and ultimately trust the data. Maintaining data lakes and data warehouses in silos is inefficient because data must be moved to the data warehouse for analysis. Combining the two distinct functions into a unified data lakehouse platform streamlines data pipelines and provides more immediate access to repository data.

Who Uses the Data Lakehouse Platform?

Data engineers use the lakehouse to prepare data for data scientists, who analyze it using the integrated data warehouse. Data analysts and citizen data analysts can use the data lakehouse by virtue of the metadata that makes data sets easy to find, access, and relate.

What Are the Components of the Lakehouse Platform?

Data Storage

The data lakehouse stores a wide variety of data types. Data sets can include database tables for structured and semi-structured data, as well as flat files and unstructured data.

Metadata Catalog

The metadata in the data lakehouse labels and describes the various data sets to make them easier to locate and use.
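
For illustration, a catalog entry might capture fields like the following. The field names and values here are hypothetical, not the schema of any particular catalog:

```python
# A minimal, hypothetical metadata catalog entry for one data set.
# Real catalogs (e.g., Hive Metastore, AWS Glue) define their own schemas.
catalog_entry = {
    "name": "customer_orders",
    "description": "Daily order transactions from the e-commerce platform",
    "owner": "sales-data-team",
    "format": "parquet",
    "location": "s3://example-lakehouse/orders/",  # hypothetical path
    "schema": {
        "order_id": "string",
        "customer_id": "string",
        "order_total": "decimal(10,2)",
        "order_date": "date",
    },
    "tags": ["sales", "transactions"],
    "last_updated": "2024-01-15T06:00:00Z",
}
```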

Data Connectors

The structured data in the data lakehouse is easy to access using Structured Query Language (SQL). Semi-structured, unstructured, and proprietary data require connectors, such as an Apache Spark connector, to access.
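
As a minimal sketch, assuming a PySpark environment with credentials for the underlying object store (the bucket path below is hypothetical), semi-structured JSON can be loaded through Spark and then queried with SQL:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-connector").getOrCreate()

# Spark infers a schema from the semi-structured JSON records.
events = spark.read.json("s3a://example-lakehouse/raw/clickstream/")  # hypothetical path

# Once registered as a view, the data is queryable with ordinary SQL.
events.createOrReplaceTempView("clickstream")
spark.sql("SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page").show()
```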

Access APIs

Business intelligence (BI) tools and application programs require application programming interfaces (APIs) to access data stored in the data lakehouse. Common examples include SQL, REST, and ODBC interfaces.
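
As a sketch of ODBC access, assuming a configured data source name and an orders table (both hypothetical), a BI-style query might look like this using the pyodbc library:

```python
import pyodbc

# "lakehouse" is a hypothetical ODBC data source name (DSN)
# configured to point at the lakehouse's SQL endpoint.
conn = pyodbc.connect("DSN=lakehouse;UID=analyst;PWD=secret")
cursor = conn.cursor()

# The kind of SQL a BI tool would generate for a revenue-by-region chart.
cursor.execute(
    "SELECT region, SUM(order_total) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
)
for region, revenue in cursor.fetchall():
    print(region, revenue)
conn.close()
```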

How Does a Data Lakehouse Platform Compare to a Data Mesh and Data Fabric?

The data lake was a hot idea ten years ago. It appeared as an evolution of the centralized enterprise data warehouse because it could store more data types, such as video, transcripts, large images, and audio files. However, businesses discovered that simply collecting data without adequately cataloging it turned the lake into a dumping ground for data.

The data lakehouse is a newer approach that aims to create a more usable repository than a data lake: it profiles and documents the data to make it more likely to be used.

A data fabric keeps data distributed while presenting a single, virtual, centralized user interface, with centralized data ownership and stewardship.

A data mesh uses a federated set of domain-specific data product services with stewardship and data ownership at the domain level. The data mesh is a peer-to-peer model with domains sharing data horizontally.

Data Integrity

Maintaining data integrity is important, which means data must adhere to the atomicity, consistency, isolation, and durability (ACID) properties. Data relationships must be expressed by declaring key values so that join queries do not return misleading results. Changes to data sets must be monitored and managed to maintain data integrity and avoid logical data corruption.
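
A minimal sketch of declared key relationships, using SQLite so the example is self-contained (the table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

conn.execute("CREATE TABLE customers (customer_id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        order_id TEXT PRIMARY KEY,
        customer_id TEXT NOT NULL REFERENCES customers(customer_id)
    )
""")
conn.execute("INSERT INTO customers VALUES ('C1')")
conn.execute("INSERT INTO orders VALUES ('O1', 'C1')")  # valid key reference

try:
    # An orphaned order would silently skew join results; the declared
    # foreign key makes the database reject it instead.
    conn.execute("INSERT INTO orders VALUES ('O2', 'C999')")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```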

Data Quality and Governance

An important benefit of sound data governance is data quality. Data must be kept current and maintained so that errors are corrected and the data is cleansed. High-quality data becomes trusted data.

Benefits of a Data Lakehouse Platform

The data lakehouse concept has gained popularity thanks to the benefits listed below:

  • Organizations can use the data lakehouse to extract more value from their existing data assets.
  • Users of the data lakehouse enjoy higher data quality than a data lake offers because the data is profiled to gain insights into its volume, timeliness, and accuracy.
  • The data lakehouse can enforce data governance.
  • The centralized repository can increase security by supporting role-based access.
  • A data lakehouse is easier to administer and uses resources more efficiently than distributed data stores.
  • The data lakehouse platform promotes self-service analytics by providing a catalog and metadata to help users find the right data sets for their analysis.
  • The right database technology can significantly improve data access speeds compared to a data lake.
  • Different data sets can be related to one another in a data lakehouse, unlike in a data lake.
  • Machine learning benefits from a data lakehouse in that complete data sets can be mined, versus the subsets or aggregations usually found in a traditional data warehouse.

Creating a Data Lakehouse With Actian

The Actian Data Platform makes it easy to create a lakehouse that can be deployed on-premises, on AWS, Azure, and Google Cloud. The platform's data analytics uses a columnar, vector processing database engine for fast query speeds. Data can be centralized or distributed thanks to queries that can span database instances.

Built-in data integration features can profile data, automate data preparation steps, and support streamed data sources. These capabilities include a Spark connector to access unstructured data and work with popular data storage structures, including S3 buckets, Google Drive folders, and Azure Blob storage.

Try the Actian Data Platform for free.