Data Analytics

Data Readiness

Before data is used for a specific intended purpose, such as training a Machine Learning (ML) model or performing data analysis, it must be ready. Preparing data can involve filling gaps, normalizing distributions, and removing outliers to provide the most accurate results.

Why is data readiness important?

Raw source data that has not been checked for readiness can lead to inaccurate or misleading analytic results. Decisions based on such data are more likely to produce unintended outcomes. For example, failing to remove outliers will skew the resulting conclusions and introduce bias into AI models.

A data readiness checklist

Below are some of the factors to be considered when preparing data for AI or analytic use cases:

  • Is the data a representative sample containing enough values to be statistically significant?
  • Have gaps been filled using multiple sources or through extrapolation?
  • Have outlying values been removed or weighted lower than core values?
  • Have targets been labeled if the data is being used for machine learning?
  • Has the data been gridded so that samples span the relevant space or time continuum?

Getting data ready

Below are several ways to get data in a state of readiness:

Smart deduplication

Many data fields are intended to contain duplicates, such as the color of a product or ZIP codes. When a field serves as a key value, such as an email address in a contacts data set, its values should be unique. A crude way to remove duplicated records is simply to delete the extra rows. A more intelligent way is to use a rule-driven approach that keeps the most recent occurrence, or that merges and reconciles records by augmenting existing data with additional field values from the duplicate instances.
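
As a rough sketch of the rule-driven approach in standard SQL, assuming a contacts table with id, email, and updated_at columns (all table and column names here are illustrative), a window function can rank duplicates and delete everything except the most recent record per email address:

    DELETE FROM contacts
    WHERE id IN (
        SELECT id FROM (
            SELECT id,
                   ROW_NUMBER() OVER (
                       PARTITION BY email        -- group duplicates by key
                       ORDER BY updated_at DESC  -- newest record ranks 1
                   ) AS rn
            FROM contacts
        ) AS ranked
        WHERE rn > 1                             -- keep only the newest row
    );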

Increasing data consistency

When consolidating records from multiple sources, inconsistencies can creep in. For example, some regions may spell out the customer’s state while others use a two-letter abbreviation. This is an easy fix using a script or a SQL statement containing a CASE expression.
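
A minimal sketch of that fix, assuming a customers table with a free-form state column (names are illustrative):

    -- Normalize spelled-out state names to two-letter codes
    UPDATE customers
    SET state = CASE
                    WHEN state = 'California' THEN 'CA'
                    WHEN state = 'Texas'      THEN 'TX'
                    WHEN state = 'New York'   THEN 'NY'
                    ELSE state    -- already abbreviated; leave as-is
                END;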

Filling gaps

Gaps in the data can be filled by drawing from multiple data sources or by assigning default values. In many cases, an extrapolated or interpolated value can be used to fill the gap.
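
A sketch of both techniques, assuming a customers table and a second crm_contacts source (names are illustrative; UPDATE alias syntax varies slightly by database):

    -- Fill missing phone numbers from a second source, else use a default
    UPDATE customers c
    SET phone = COALESCE(
        (SELECT crm.phone
         FROM crm_contacts crm
         WHERE crm.email = c.email),  -- draw from another data source
        'UNKNOWN')                    -- otherwise assign a default value
    WHERE c.phone IS NULL;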

Removing outlying values

Outlying values can be removed so that they do not unduly skew or bias the analysis.
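
One common approach, sketched here assuming a readings table with a numeric value column (names are illustrative), is to keep only values within three standard deviations of the mean:

    -- Discard readings more than three standard deviations from the mean
    SELECT r.*
    FROM readings r
    CROSS JOIN (
        SELECT AVG(value)         AS mu,
               STDDEV_SAMP(value) AS sigma
        FROM readings
    ) AS stats
    WHERE ABS(r.value - stats.mu) <= 3 * stats.sigma;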

Filtering out data

Data essential to an upstream process may be irrelevant to an analytic application. In this case, unnecessary data can be filtered out, reducing downstream CPU and storage usage while protecting the validity of the analysis. This is particularly important for large datasets on a public cloud platform, where you pay by resource consumption. Data should be filtered progressively as it is refined to answer increasingly specific questions.
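
For example, assuming a wide sales table (names are illustrative), a focused extract keeps only the rows and columns the analysis actually needs:

    -- Carry forward only the relevant subset for the analytic question
    CREATE TABLE web_sales_2024 AS
    SELECT order_id, order_date, region, amount
    FROM sales
    WHERE channel = 'web'
      AND order_date >= DATE '2024-01-01';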

Partitioning

If the result of an analytic process is time-critical, data can be pre-partitioned to accelerate processing. Partitioning can be based on a key value, on value ranges, or on a hash that distributes rows evenly across partitions. For large datasets, partitioning can dramatically accelerate processing by making parallel execution more efficient. Range scan queries are also accelerated because partitions whose values fall outside the range criteria can be skipped entirely.
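
A sketch using PostgreSQL-style declarative range partitioning (DDL syntax varies by database; table and column names are illustrative):

    -- Partition sales by month so range queries can skip partitions
    CREATE TABLE sales (
        order_id   BIGINT,
        order_date DATE NOT NULL,
        amount     NUMERIC
    ) PARTITION BY RANGE (order_date);

    CREATE TABLE sales_2024_01 PARTITION OF sales
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    CREATE TABLE sales_2024_02 PARTITION OF sales
        FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

    -- This query touches only the January partition; others are skipped
    SELECT SUM(amount)
    FROM sales
    WHERE order_date >= DATE '2024-01-10'
      AND order_date <  DATE '2024-02-01';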

Transforming data

Data integration tools such as Actian DataConnect or integration as a service on the Actian Data Platform can be used to change data formats to improve matching, remove leading or trailing spaces, and add leading zeros. Regulated data can be masked or obfuscated to protect customer privacy.
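
As an illustrative sketch of this kind of transformation expressed directly in SQL (function availability varies by database; table and column names are assumptions):

    -- Standardize formats to improve matching across sources
    UPDATE customers
    SET name     = TRIM(name),             -- remove leading/trailing spaces
        zip_code = LPAD(zip_code, 5, '0'); -- restore leading zeros lost in export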

Using validation to improve data quality

A meaningful way to enforce data validity is to cross-check values across several data sources, confirming integrity where they agree and flagging records where they disagree.
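
A simple sketch of such a cross-check, assuming two sources crm_customers and billing_customers with matching columns (names are illustrative; some databases spell EXCEPT as MINUS):

    -- Records in the CRM source with no exact match in the billing source
    -- (run in both directions to catch discrepancies on either side)
    SELECT customer_id, email
    FROM crm_customers
    EXCEPT
    SELECT customer_id, email
    FROM billing_customers;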

Automating data readiness

A data pipeline process managed by a data integration solution can help automate data readiness. A pre-programmed and scheduled set of tasks can be chained together so that each readiness step runs in order. A data preparation pipeline can contain steps to extract, filter, transform, gap-fill, verify, and partition data.
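
A minimal sketch of such a pipeline expressed as chained SQL steps (table names are illustrative; in practice an integration tool such as DataConnect would schedule and monitor each step):

    -- Step 1: Extract raw data into a staging table
    CREATE TABLE stg_orders AS
    SELECT * FROM raw_orders;

    -- Step 2: Filter out records the analysis does not need
    DELETE FROM stg_orders
    WHERE status = 'void';

    -- Step 3: Transform for consistency
    UPDATE stg_orders
    SET region = TRIM(UPPER(region));

    -- Step 4: Gap-fill with a default value
    UPDATE stg_orders
    SET region = 'UNKNOWN'
    WHERE region IS NULL OR region = '';

    -- Step 5: Verify -- the pipeline should halt if this returns rows
    SELECT order_id
    FROM stg_orders
    GROUP BY order_id
    HAVING COUNT(*) > 1;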

The benefits of data readiness

The primary reasons to adopt data readiness include:

  • Avoid delays in data analysis caused by incomplete or raw data.
  • Increase the amount and quality of data available to analysts and data scientists.
  • Provide the business with the ability to understand the prevailing market conditions and act quickly.
  • Increase competitiveness by responding faster to changing customer needs and market dynamics.

Actian and data readiness

The Actian Data Platform includes a highly scalable hybrid integration solution that delivers high-quality data by unifying, transforming, and orchestrating data pipelines to drive data readiness. DataConnect is an intelligent, low-code integration platform that addresses complex use cases with automated, intuitive, and reusable integrations.

The Actian Vector database makes it easier to perform market analysis thanks to columnar storage that minimizes the need for pre-built data indexes. Vectorized processing speeds queries by operating on multiple data elements with a single instruction and making efficient use of CPU caches.

The Actian Data Platform can run on-premises and on multiple cloud platforms, letting you run your analytics wherever your data resides. Learn more here.