Data Security

Data Privacy Regulations: What to Know

Actian Corporation

September 10, 2025


Personal information has become a valuable asset over the last several decades, leading to the establishment of stringent data privacy regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). These laws aim to protect individuals’ privacy rights by regulating how organizations collect, store, and process personal data. Compliance with such regulations is not only a legal obligation but also a critical factor in maintaining customer trust and avoiding substantial penalties.

To navigate these complex requirements efficiently, organizations can leverage advanced tools like the Actian Data Intelligence Platform, which integrates metadata management and data governance practices to automate compliance processes.

Here’s what you should know about data privacy regulations and how the platform can help.

Understanding Data Privacy Regulations

Data privacy regulations establish guidelines and requirements for organizations that collect, store, process, and share personal information. The main goal is to ensure transparency, accountability, and control for individuals over their data.

Let’s explore two of the most prominent regulations, GDPR and CCPA.

General Data Protection Regulation (GDPR)

The GDPR, which came into effect in May 2018, is one of the most comprehensive data privacy laws globally. It applies to any organization that processes the personal data of individuals in the European Union (EU), regardless of where the company is based. Key requirements under GDPR include:

  • Lawful Processing: Organizations must have a valid legal basis for collecting and processing personal data. The purposes of data collection must be disclosed, and consent must be obtained where it is the legal basis relied upon.
  • Data Subject Rights: Individuals have the right to access, rectify, erase, and restrict the processing of their data.
  • Data Portability: Users can request to receive their data in a structured, commonly used format.
  • Breach Notification: Companies must notify authorities of data breaches within 72 hours.
  • Accountability and Governance: Organizations must implement proper security measures and maintain detailed records of data processing activities.

Failure to comply with GDPR can lead to fines of up to €20 million or 4% of annual global turnover, whichever is higher. These fines apply to any company that violates GDPR, regardless of where it is located.

California Consumer Privacy Act (CCPA)

The CCPA, which went into effect in January 2020, is a comprehensive privacy law that gives California residents more control over their personal data. Some of its key provisions include:

  • Right to Know: Consumers can request to know what personal data is collected and how it is used.
  • Right to Delete: Individuals can ask businesses to delete their personal information.
  • Right to Opt-Out: Users have the right to opt out of having their data sold to third parties.
  • Non-Discrimination: Organizations cannot discriminate against consumers who exercise their privacy rights.

Businesses that fail to comply with CCPA may face fines and legal consequences, including private lawsuits for data breaches.

The Importance of Complying With Data Privacy Regulations

Beyond legal requirements, compliance with data privacy laws carries significant business benefits:

  1. Building Customer Trust: Consumers are more likely to do business with organizations that prioritize data protection.
  2. Avoiding Legal Penalties: Non-compliance can lead to substantial fines and lawsuits.
  3. Enhancing Operational Efficiency: A well-structured data governance framework improves internal data management and security.
  4. Gaining Competitive Advantage: Companies that demonstrate strong data privacy practices can differentiate themselves in the market.

Challenges in Achieving Compliance

Despite its importance, compliance with GDPR, CCPA, and other data regulations presents numerous challenges:

Data Discovery and Mapping

Organizations must identify and document all personal data they collect and hold, including its source, storage location, and usage. This can be a complex and time-consuming task, especially for large enterprises.

Data Subject Rights Management

Responding to user requests for data access, modification, or deletion requires efficient processes and systems.

Continuous Monitoring and Reporting

Regulations mandate continuous monitoring of data processing activities to ensure compliance, requiring robust tracking and reporting tools.

How the Actian Data Intelligence Platform Helps Organizations Automate Compliance

Actian’s data intelligence platform is designed to help organizations address these challenges by integrating metadata management and data governance practices.

Here are the key ways the Actian Data Intelligence Platform supports compliance automation:

1. Automated Data Discovery and Cataloging

The platform automatically scans and catalogs data assets across an organization, creating a centralized metadata repository. This allows companies to easily identify and classify personal data, streamlining compliance efforts.

2. Personal Data Identification and Classification

The platform employs intelligent algorithms to detect and categorize personal data within datasets. By tagging data assets that contain personal information, organizations can better manage and protect sensitive data.
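To illustrate the general idea, the sketch below shows a minimal, pattern-based classification pass of the kind such scans perform. It is not Actian’s implementation; the patterns, column names, and sample values are hypothetical.

    import re

    # Hypothetical patterns for two common kinds of personal data; a real
    # classifier would cover many more categories (names, addresses, IDs, etc.).
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def classify_column(values):
        """Return the set of personal-data tags detected in a column of values."""
        tags = set()
        for value in values:
            for tag, pattern in PII_PATTERNS.items():
                if pattern.search(str(value)):
                    tags.add(tag)
        return tags

    # Example: tag columns of a dataset so sensitive fields can be governed.
    dataset = {
        "customer_email": ["ada@example.com", "bob@example.com"],
        "signup_channel": ["web", "store"],
    }
    catalog_tags = {col: classify_column(vals) for col, vals in dataset.items()}
    print(catalog_tags)  # {'customer_email': {'email'}, 'signup_channel': set()}

Once columns carry tags like these, downstream governance policies (masking, access control, retention) can be applied consistently.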

3. Data Lineage and Impact Analysis

The platform provides detailed data lineage capabilities, allowing organizations to trace the flow of data from its origin to its current state. This transparency helps businesses understand how personal data is processed and ensures compliance with regulations.

4. Data Subject Rights Management

With a clear inventory of personal data, organizations can efficiently respond to data subject requests. The Actian Data Intelligence Platform supports tracking and managing these requests to ensure timely and accurate responses.

5. Policy Enforcement and Monitoring

The platform enables the definition and enforcement of data governance policies, ensuring that data handling practices align with regulatory requirements. Continuous monitoring capabilities alert organizations to potential compliance issues, allowing for proactive remediation.

6. Audit Trails and Reporting

The platform maintains comprehensive audit logs of data access and processing activities. These logs are essential for demonstrating compliance during audits and for internal reporting purposes.

Automate Data Compliance With the Actian Data Intelligence Platform

Compliance with data privacy regulations like GDPR and CCPA is essential for organizations to protect individual privacy rights, maintain customer trust, and avoid significant penalties. However, achieving and sustaining compliance can be challenging without the right tools.

Actian’s data intelligence platform addresses these challenges by automating data discovery, classification, lineage tracking, and policy enforcement. By integrating metadata management and data governance practices, the platform empowers organizations to navigate complex data privacy compliance efficiently and effectively.

Ready to see how the Actian Data Intelligence Platform can automate data compliance for your organization? Request a personalized demo today.


About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Data Observability

How to Triage Data Incidents

Actian Corporation

September 8, 2025


A single data incident can lead to broken dashboards, inaccurate analyses, or flawed decisions, which in turn can critically endanger an organization’s ability to thrive. Whether caused by schema changes, integration failures, or human error, data incidents must be addressed quickly and effectively.

Triage is the process of assessing and prioritizing incidents based on severity and impact, and it is a crucial first step in managing data quality disruptions. This article outlines a systematic approach to triaging data incidents and introduces tools and best practices to ensure an organization’s data systems remain reliable and resilient.

Understanding Data Incidents

Data incidents are events that disrupt the normal flow, quality, or accessibility of data. These can range from missing or corrupted records to delayed data ingestion or faulty transformations. Left unresolved, such issues compromise downstream processes, analytics, machine learning models, and ultimately, business decisions.

Common Causes of Data Incidents

Data incidents often stem from a variety of sources, including:

  • ETL/ELT Pipeline Failures: Issues in data extraction or transformation logic can lead to incomplete or inaccurate data.
  • Source System Changes: Schema modifications or API updates are often the cause of integration pipeline disruptions.
  • Human Error: Manual data entry problems, configuration mistakes, or miscommunication can lead to inconsistent datasets.
  • Infrastructure Issues: Network failures, database outages, or storage constraints can cause delays or data corruption.
  • Software Bugs or Logic Flaws: Flawed code in data processing scripts can propagate incorrect data silently.

Recognizing these root causes helps organizations prepare for and respond to incidents more effectively.

Types of Data Quality Issues

Data quality issues manifest in multiple ways, including:

  • Missing Data: Entire rows or fields are absent.
  • Duplicate Entries: Redundant records inflate data volumes and distort results.
  • Outliers or Anomalies: Values that deviate significantly from expected norms.
  • Schema Drift: Untracked changes to table structure or data types.
  • Delayed Arrival: Latency in ingestion affects freshness and timeliness.

Early detection of these signals (through monitoring tools, data validation checks, and user reports) enables faster triage and resolution.

The Importance of Data Triage

Just as medical teams prioritize patients based on urgency, data teams must evaluate incidents to allocate resources efficiently. Data triage ensures that the most business-critical problems receive immediate attention.

Minimizing Business Impact

Without proper triage, teams may spend time addressing low-priority issues while severe ones remain unattended. For instance, an unnoticed delay in customer order data could result in shipment errors or poor customer service. Triage helps focus efforts where they matter most, reducing downtime and avoiding reputational damage.

Enhancing Data Reliability

Triage lays the groundwork for a resilient data ecosystem. By classifying and tracking incident types and frequencies, organizations can uncover systemic weaknesses and build more fault-tolerant pipelines. Over time, this leads to more accurate analytics, dependable reporting, and greater trust in data.

Steps to Triage Data Incidents

Triage is not a single action but a structured workflow. Here’s a simplified three-step process:

Step 1: Detection and Logging

The process starts with detecting a data incident. This can happen through automated alerts, dashboard anomalies, or stakeholder reports. Once detected, organizations should take the following actions.

  • Log the incident with key metadata: time, source, data domain, and symptoms.
  • Categorize by severity: High (e.g., customer data breach), Medium (delayed reporting), Low (minor formatting errors).
  • Notify the relevant stakeholders: data engineers, analysts, or data stewards.

Accurate logging helps build a knowledge base of incidents and their solutions, speeding up future investigations.
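As a rough illustration of what such a log entry might capture, here is a minimal sketch with hypothetical field names, not tied to any particular incident-management tool:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    SEVERITY_LEVELS = ("high", "medium", "low")

    @dataclass
    class DataIncident:
        source: str        # pipeline or table where the issue surfaced
        data_domain: str   # business domain affected (orders, billing, ...)
        symptoms: str      # what was observed (missing rows, stale data, ...)
        severity: str      # one of SEVERITY_LEVELS
        detected_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

        def __post_init__(self):
            if self.severity not in SEVERITY_LEVELS:
                raise ValueError(f"severity must be one of {SEVERITY_LEVELS}")

    # Log an incident; notification of the owning team would follow from here.
    incident = DataIncident(
        source="orders_etl",
        data_domain="customer_orders",
        symptoms="daily load arrived 6 hours late",
        severity="medium",
    )
    print(f"[{incident.severity.upper()}] {incident.source}: {incident.symptoms}")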

Step 2: Impact Assessment and Prioritization

Next, determine the business impact of the incident:

  • What systems or teams are affected?
  • Is the issue recurring or isolated?
  • Are critical KPIs or SLAs at risk?

Prioritize incidents based on urgency and scope. For example, an incident affecting real-time fraud detection should take precedence over a broken weekly email report. This step often involves a preliminary root cause analysis to determine whether the incident is caused by a transformation error, integration failure, or an issue with the external data source.
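One simple way to make that prioritization repeatable is to score incidents on the questions above. The weights below are purely illustrative, a sketch rather than a prescribed formula:

    def priority_score(affected_systems, is_recurring, slas_at_risk):
        """Rough scoring: more affected systems, recurrence, and SLA exposure
        push an incident up the queue."""
        score = 2 * len(affected_systems)
        score += 3 if is_recurring else 0
        score += 5 if slas_at_risk else 0
        return score

    incidents = {
        "fraud_detection_feed_stale": priority_score(["fraud_model", "alerts"], False, True),
        "weekly_email_report_broken": priority_score(["marketing_report"], False, False),
    }
    # Highest score first: the real-time fraud feed outranks the weekly report.
    for name, score in sorted(incidents.items(), key=lambda kv: kv[1], reverse=True):
        print(name, score)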

Step 3: Containment and Escalation

Once prioritized, initiate containment to prevent further spread. This might involve halting data processing, isolating affected pipelines, or reverting to backup datasets. If the issue is complex or spans multiple teams, escalate to senior engineers or incident response teams. Communication is key. Provide regular updates to stakeholders until the incident has been resolved.

After containment, document the lessons learned and update processes to prevent similar data issues from occurring.

Implementing Effective Data Management Solutions

A strong data management foundation streamlines triage and reduces the frequency of incidents.

Leveraging Automation Tools

Manual incident detection is inefficient and prone to delays. Modern observability platforms like the Actian Data Intelligence Platform, Monte Carlo, Bigeye, or open-source tools like Great Expectations can:

  • Monitor pipelines and data quality in real time.
  • Detect anomalies automatically.
  • Generate alerts and route them to the appropriate teams.

Automation shortens detection time and ensures consistent handling across incidents.
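The sketch below shows, in tool-agnostic form, the kind of check these platforms automate: compare today’s row count against recent history and route an alert to the owning team when it deviates sharply. The dataset name, history values, and threshold are illustrative.

    import statistics

    def detect_volume_anomaly(history, today, z_threshold=3.0):
        """Flag today's row count if it deviates strongly from recent history."""
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev == 0:
            return today != mean
        return abs(today - mean) / stdev > z_threshold

    def route_alert(dataset, message, owners):
        # In practice this would post to Slack, PagerDuty, email, etc.
        for owner in owners:
            print(f"ALERT to {owner}: {dataset} - {message}")

    history = [10_250, 10_410, 10_180, 10_330, 10_290]
    if detect_volume_anomaly(history, today=4_900):
        route_alert("daily_transactions", "row count far below recent average",
                    owners=["data-eng-oncall"])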

Establishing Clear Data Governance Policies

Governance frameworks provide clarity on ownership, accountability, and standards. Well-defined data ownership helps answer questions like:

  • Who owns this dataset?
  • Who should be alerted?
  • What’s the escalation path?

Data contracts, lineage tracking, and documentation also play a critical role in triage by reducing ambiguity during high-pressure situations: contracts spell out what a dataset is expected to contain, lineage shows the transformations the data has passed through, and documentation records how past incidents were resolved.

Best Practices for Continuous Improvement

Beyond tools and processes, a culture of learning and adaptation enhances long-term data incident response.

Regular Training and Awareness Programs

Data teams, engineers, and dataset users alike should be trained on:

  • How to detect and report incidents.
  • How the triage workflow operates, including the roles responsible for logging and remediating incidents.
  • Common causes and prevention techniques.

Workshops, simulations, and post-mortems help build collective resilience and reduce dependency on a few individuals.

Continuous Monitoring and Feedback Loops

Triage is part of a larger lifecycle that includes post-incident reviews. After each incident:

  • Conduct a root cause analysis (RCA).
  • Update monitoring rules and alert thresholds.
  • Capture metrics such as Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR).

Integrating these insights into ongoing development cycles ensures systems get smarter and more robust over time.
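For example, the MTTD and MTTR metrics mentioned above can be computed directly from incident timestamps. In this sketch MTTR is measured from detection to resolution, and the timestamps are illustrative:

    from datetime import datetime

    incidents = [
        # (occurred, detected, resolved)
        (datetime(2025, 9, 1, 8, 0), datetime(2025, 9, 1, 9, 30), datetime(2025, 9, 1, 13, 0)),
        (datetime(2025, 9, 3, 22, 0), datetime(2025, 9, 4, 0, 0), datetime(2025, 9, 4, 6, 0)),
    ]

    mttd_hours = sum((d - o).total_seconds() for o, d, _ in incidents) / len(incidents) / 3600
    mttr_hours = sum((r - d).total_seconds() for _, d, r in incidents) / len(incidents) / 3600
    print(f"MTTD: {mttd_hours:.1f} h, MTTR: {mttr_hours:.1f} h")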

Protect Data With Actian’s Data Solutions

Actian offers enterprise-grade solutions to prevent, detect, and respond to data incidents with agility and precision. With its high-performance data integration, real-time analytics, and hybrid cloud capabilities, Actian helps organizations maintain clean, timely, and trustworthy data.

Key features that support triage include the following.

  • Real-Time Data Validation: Catch anomalies before they impact dashboards or models.
  • Data Lineage and Auditing: Trace the root causes of incidents with ease.
  • Scalable Integration Tools: Handle changes in data sources without breaking pipelines.
  • Hybrid Deployment Options: Maintain observability across on-prem and cloud systems.

By incorporating Actian into their data ecosystems, organizations equip teams with the tools to detect issues early, triage efficiently, and recover with confidence.


About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Data Observability

What is Data Observability?

Actian Corporation

September 4, 2025


As data ecosystems become more complex, ensuring data health, quality, and visibility has never been more critical. Data observability gives organizations comprehensive insights into the quality and movement of their data across systems.

By borrowing principles from software observability, data observability enables data teams to detect, diagnose, and resolve data issues quickly, ensuring trust in business intelligence, analytics, and decision-making.

Understanding Data Observability

Data observability refers to an organization’s ability to fully understand the health and behavior of its data across the entire data stack. It involves continuous monitoring, alerting, and analysis to ensure data is accurate, complete, timely, and consistent. Unlike traditional data quality efforts, which often rely on reactive processes and manual checks, data observability provides automated, scalable, and proactive methods to surface and resolve issues before they impact downstream users.

The scope of data observability extends from raw data ingestion through transformation and storage, all the way to the data’s presentation in dashboards or analytical models. It aims to bridge silos in data engineering, analytics, and operations, creating a holistic view of the data lifecycle.

The 5 Pillars of Data Observability

Data observability consists of five foundational pillars:

  1. Freshness: Ensures that data is up to date and arrives when expected, helping stakeholders trust their dashboards and analytics.
  2. Distribution: This pillar refers to the shape and structure of data values. Organizations need to detect anomalies such as unexpected null rates, out-of-range values, or shifts in value patterns. Essentially, any deviation from expected distributions should be tracked and examined to see whether the root cause is a data quality issue.
  3. Volume: Tracks the completeness of data tables as well as the sheer amount of data being generated. Monitoring volume and completeness can help alert teams when the amount of data ingested exceeds or fails to meet expected thresholds.
  4. Schema: This facet of data observability tracks changes in a dataset’s structure, such as added or missing fields, to prevent downstream issues. Changes in schema can result in inaccurate data or even data loss.
  5. Lineage: Lineage tracking maps the flow of data across systems, offering visibility into dependencies, transformations, and root causes during incidents. This way, users can tell where the incident happened along the dataset’s journey from its origin to its endpoint.

Together, these components provide an ecosystem where data health is visible, measurable, and actionable.

The 5 Pillars of Data Observability in Action

Let’s break down these pillars to see how they work in specific use cases.

Freshness and Timeliness

Freshness refers to how up to date your data is compared to its source. In many business applications, real-time or near-real-time data is critical. Any delay can lead to outdated insights or missed opportunities. Data observability tools track data latency across pipelines and flag when data is stale or delayed.

This is especially important in use cases like fraud detection, stock trading, and inventory management, where even small delays can lead to significant consequences. For example, failing to keep a company’s inventory data up to date can result in empty shelves or a failure to catch instances of theft or embezzlement.
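A freshness check can be as simple as comparing a dataset’s last successful load time against an agreed maximum age. The sketch below assumes that load time is available from pipeline metadata; the dataset name and threshold are illustrative.

    from datetime import datetime, timedelta, timezone

    def is_stale(last_loaded_at, max_age):
        """Return True if the dataset has not been refreshed within max_age."""
        return datetime.now(timezone.utc) - last_loaded_at > max_age

    # Inventory data is expected to refresh at least hourly.
    last_loaded_at = datetime(2025, 9, 4, 6, 15, tzinfo=timezone.utc)  # from pipeline metadata
    if is_stale(last_loaded_at, max_age=timedelta(hours=1)):
        print("inventory_levels is stale - alert the owning team")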

Data Volume and Flow

Observing the volume of data helps teams detect irregularities such as unexpected spikes or drops, which could indicate upstream errors or bottlenecks. For example, a sudden drop in daily transaction records might signal a failed API call or broken ETL job.

Tracking data flow ensures that data is moving smoothly across ingestion, processing, and storage stages, helping maintain the continuity and completeness of datasets.

Schema and Structure

Data schema defines the structure of datasets, which includes the names, types, and organization of fields. Changes in schema, such as a new column added or a data type changed, can break downstream applications or models.

Data observability tools monitor schema drift and structural changes to prevent errors and maintain compatibility across systems. Early detection of schema issues helps avoid runtime failures and data corruption.
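A basic drift check compares the schema that arrived against the schema that was expected. The column names and types below are illustrative:

    EXPECTED_SCHEMA = {"order_id": "int", "customer_id": "int",
                       "amount": "float", "created_at": "timestamp"}

    def detect_schema_drift(expected, actual):
        """Report added, removed, and retyped columns between two schema snapshots."""
        added = {c: t for c, t in actual.items() if c not in expected}
        removed = {c: t for c, t in expected.items() if c not in actual}
        retyped = {c: (expected[c], actual[c]) for c in expected
                   if c in actual and expected[c] != actual[c]}
        return added, removed, retyped

    actual_schema = {"order_id": "int", "customer_id": "string", "amount": "float",
                     "created_at": "timestamp", "discount_code": "string"}
    added, removed, retyped = detect_schema_drift(EXPECTED_SCHEMA, actual_schema)
    print("added:", added)      # {'discount_code': 'string'}
    print("removed:", removed)  # {}
    print("retyped:", retyped)  # {'customer_id': ('int', 'string')}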

Data Lineage and Traceability

Understanding where data comes from and how it changes over time is crucial. Data lineage provides this traceability, enabling users to track data back to its origin and understand every transformation it undergoes.

With complete lineage visibility, teams can quickly assess the impact of changes, debug problems, and ensure regulatory compliance with GDPR, HIPAA, and other regulations. Data lineage also fosters accountability and improves data governance practices.

The Importance of Data Observability in Enterprise Management

Businesses and organizations need to implement data observability processes for a variety of reasons. The importance of having insights into poor data quality or incomplete datasets cannot be overstated. Below are key ways in which data observability has become a necessary facet of healthy enterprise data management.

Enhancing Data Quality and Reliability

Modern enterprises handle data from a variety of diverse sources, including CRMs, ERP systems, and external APIs. The sheer volume and complexity make traditional data quality checks insufficient. Data observability helps correct this by continuously assessing data for anomalies, missing values, duplicates, schema changes, and other quality issues. This enhances trust in enterprise reports, dashboards, machine learning models, and ultimately, business decisions.

By implementing data observability, organizations ensure that their teams work with clean, accurate data and are able to efficiently trace issues back to their root causes. This translates into improved customer experiences, more accurate forecasting, and reduced compliance risk.

Facilitating Proactive Issue Resolution

One of the most valuable aspects of data observability is its proactive nature. Instead of reacting to broken dashboards or missing fields, data teams can identify and address problems before they escalate. For example, if a key metric suddenly drops due to a pipeline failure, an observability system can detect the anomaly, pinpoint the source, and notify relevant stakeholders immediately.

This shift from reactive firefighting to proactive monitoring saves time and resources while improving the efficiency of data teams.

Data Observability vs. Data Monitoring

While data monitoring is a component of data observability, the two are not the same. Monitoring typically involves setting up alerts based on predefined thresholds or metrics. It’s reactive and limited in scope.

Data observability, on the other hand, provides a more holistic view. It combines monitoring with root cause analysis, data lineage, anomaly detection, and system-wide visibility. Observability tools don’t just tell you when something is wrong. They also help data teams understand why it’s wrong and either mitigate the problem or show teams how to fix it.

Data Observability vs. Data Quality Assurance

Data quality assurance (DQA) involves processes and rules to ensure data meets specific standards. It usually includes manual checks, test scripts, or validation rules applied during data preparation or after ingestion.

Data observability complements and enhances DQA by automating detection across more dimensions and at a much broader scale. Instead of relying solely on predefined tests, observability systems use machine learning and anomaly detection to uncover previously unknown issues, offering more dynamic and proactive data management.

Actian Provides In-Depth Data Observability

As enterprises increasingly rely on data to power strategic decisions, customer experiences, and operational efficiency, the need for robust data observability becomes paramount. It not only empowers data teams to ensure the reliability of their data assets but also builds confidence across the organization in data-driven initiatives.

Actian Data Observability offers real-time monitoring, anomaly detection, and intelligent alerts. It enables organizations to gain deep visibility into the health, quality, and movement of their data. It supports the five pillars of observability, ensuring teams can proactively address issues before they disrupt operations. Take the product tour.


About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Data Observability

Data Observability vs. Data Monitoring

Actian Corporation

September 2, 2025


Two pivotal concepts have emerged at the forefront of modern data infrastructure management, both aimed at protecting the integrity of datasets and data pipelines: data observability and data monitoring. While they may sound similar, these practices differ in their objectives, execution, and impact. Understanding their distinctions, as well as how they complement each other, can empower teams to make informed decisions, detect issues faster, and improve overall data trustworthiness.

What is Data Observability?

Data Observability is the practice of understanding and monitoring data’s behavior, quality, and performance as it flows through a system. It provides insights into data quality, lineage, performance, and reliability, enabling teams to detect and resolve issues proactively.

Components of Data Observability

Data observability comprises five key pillars, which answer five key questions about datasets.

  1. Freshness: Is the data up to date?
  2. Volume: Is the expected amount of data present?
  3. Schema: Have there been any unexpected changes to the data structure?
  4. Lineage: Where does the data come from, and how does it flow across systems?
  5. Distribution: Are data values within expected ranges and formats?

These pillars allow teams to gain end-to-end visibility across pipelines, supporting proactive incident detection and root cause analysis.

Benefits of Implementing Data Observability

  • Proactive Issue Detection: Spot anomalies before they affect downstream analytics or decision-making.
  • Reduced Downtime: Quickly identify and resolve data pipeline issues, minimizing business disruption.
  • Improved Trust in Data: Enhanced transparency and accountability increase stakeholders’ confidence in data assets.
  • Operational Efficiency: Automation of anomaly detection reduces manual data validation.

What is Data Monitoring?

Data monitoring involves the continuous tracking of data and systems to identify errors, anomalies, or performance issues. It typically includes setting up alerts, dashboards, and metrics to oversee system operations and ensure data flows as expected.

Components of Data Monitoring

Core elements of data monitoring include the following.

  1. Threshold Alerts: Notifications triggered when data deviates from expected norms.
  2. Dashboards: Visual interfaces showing system performance and data health metrics.
  3. Log Collection: Capturing event logs to track errors and system behavior.
  4. Metrics Tracking: Monitoring KPIs such as latency, uptime, and throughput.

Monitoring tools are commonly used to catch operational failures or data issues after they occur.
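As a minimal, tool-agnostic sketch, threshold alerting boils down to comparing collected metrics against predefined bounds. The metric names and bounds below are illustrative:

    # Illustrative metric readings a monitoring agent might collect each minute.
    metrics = {"pipeline_latency_seconds": 95.0, "rows_ingested": 0, "error_rate": 0.02}

    # Simple threshold rules: alert when a metric crosses its bound.
    thresholds = {
        "pipeline_latency_seconds": ("max", 60.0),
        "rows_ingested": ("min", 1),
        "error_rate": ("max", 0.05),
    }

    for name, value in metrics.items():
        kind, bound = thresholds[name]
        breached = value > bound if kind == "max" else value < bound
        if breached:
            print(f"ALERT: {name}={value} breached {kind} threshold {bound}")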

Benefits of Data Monitoring

  • Real-Time Awareness: Teams are notified immediately when something goes wrong.
  • Improved SLA Management: Ensures systems meet service-level agreements by tracking uptime and performance.
  • Faster Troubleshooting: Log data and metrics help pinpoint issues.
  • Baseline Performance Management: Helps maintain and optimize system operations over time.

Key Differences Between Data Observability and Data Monitoring

While related, data observability and data monitoring are not interchangeable. They serve different purposes and offer unique value to modern data teams.

Scope and Depth of Analysis

  • Monitoring offers a surface-level view based on predefined rules and metrics. It answers questions like, “Is the data pipeline running?”
  • Observability goes deeper, allowing teams to understand why an issue occurred and how it affects other parts of the system. It analyzes metadata and system behaviors to provide contextual insights.

Proactive vs. Reactive Approaches

  • Monitoring is largely reactive. Alerts are triggered after an incident occurs.
  • Observability is proactive, enabling the prediction and prevention of failures through pattern analysis and anomaly detection.

Data Insights and Decision-Making

  • Monitoring is typically used for operational awareness and uptime.
  • Observability helps drive strategic decisions by identifying long-term trends, data quality issues, and pipeline inefficiencies.

How Data Observability and Monitoring Work Together

Despite their differences, data observability and monitoring are most powerful when used in tandem. Together, they create a comprehensive view of system health and data reliability.

Complementary Roles in Data Management

Monitoring handles alerting and immediate issue recognition, while observability offers deep diagnostics and context. This combination ensures that teams are not only alerted to issues but are also equipped to resolve them effectively.

For example, a data monitoring system might alert a team to a failed ETL job. A data observability platform would then provide lineage and metadata context to show how the failure impacts downstream dashboards and provide insight into what caused the failure in the first place.
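Under the hood, that impact analysis is essentially a walk of the lineage graph from the failed asset to everything downstream of it. A minimal sketch, with an illustrative graph:

    from collections import deque

    # Illustrative lineage: each node maps to the assets that consume it.
    lineage = {
        "orders_etl": ["orders_table"],
        "orders_table": ["revenue_model", "orders_dashboard"],
        "revenue_model": ["exec_dashboard"],
    }

    def downstream_impact(failed_node):
        """Breadth-first walk of the lineage graph from the failed node."""
        impacted, queue = set(), deque([failed_node])
        while queue:
            node = queue.popleft()
            for child in lineage.get(node, []):
                if child not in impacted:
                    impacted.add(child)
                    queue.append(child)
        return impacted

    print(downstream_impact("orders_etl"))
    # -> orders_table, orders_dashboard, revenue_model, exec_dashboard (set order may vary)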

Enhancing System Reliability and Performance

When integrated, observability and monitoring ensure:

  • Faster MTTR (Mean Time to Resolution).
  • Reduced false positives.
  • More resilient pipelines.
  • Clear accountability for data errors.

Organizations can shift from firefighting data problems to implementing long-term fixes and improvements.

Choosing the Right Strategy for an Organization

An organization’s approach to data health should align with business objectives, team structure, and available resources. A thoughtful strategy ensures long-term success.

Assessing Organizational Needs

Start by answering the following questions.

  • Is the organization experiencing frequent data pipeline failures?
  • Do stakeholders trust the data they use?
  • How critical is real-time data delivery to the business?

Organizations with complex data flows, strict compliance requirements, or customer-facing analytics need robust observability. Smaller teams may start with monitoring and scale up.

Evaluating Tools and Technologies

Tools for data monitoring include:

  • Prometheus
  • Grafana
  • Datadog

Popular data observability platforms include:

  • Monte Carlo
  • Actian Data Intelligence Platform
  • Bigeye

Consider ease of integration, scalability, and the ability to customize alerts or data models when selecting a platform.

Implementing a Balanced Approach

A phased strategy often works best:

  1. Establish Monitoring First. Track uptime, failures, and thresholds.
  2. Introduce Observability. Add deeper diagnostics like data lineage tracking, quality checks, and schema drift detection.
  3. Train Teams. Ensure teams understand how to interpret both alert-driven and context-rich insights.

Use Actian to Enhance Data Observability and Data Monitoring

Data observability and data monitoring are both essential to ensuring data reliability, but they serve distinct functions. Monitoring offers immediate alerts and performance tracking, while observability provides in-depth insight into data systems’ behavior. Using both concepts together with the tools and solutions provided by Actian, organizations can create a resilient, trustworthy, and efficient data ecosystem that supports both operational excellence and strategic growth.

Actian offers a suite of solutions that help businesses modernize their data infrastructure while gaining full visibility and control over their data systems.

With the Actian Data Intelligence Platform, organizations can:

  • Monitor Data Pipelines in Real-Time. Track performance metrics, latency, and failures across hybrid and cloud environments.
  • Gain Deep Observability. Leverage built-in tools for data lineage, anomaly detection, schema change alerts, and freshness tracking.
  • Simplify Integration. Seamlessly connect to existing data warehouses, ETL tools, and BI platforms.
  • Automate Quality Checks. Establish rule-based and AI-driven checks for consistent data reliability.

Organizations using Actian benefit from increased system reliability, reduced downtime, and greater trust in their analytics. Whether through building data lakes, powering real-time analytics, or managing compliance, Actian empowers data teams with the tools they need to succeed.


About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Databases

Securing Your Data With Actian Vector, Part 4

Martin Fuerderer

August 28, 2025


This fourth blog post in the series explains how an existing encrypted database is upgraded from an older version of Actian Vector to Actian Vector 7.0.

Upgrading an Encrypted Database

When upgrading databases from older versions to Actian Vector 7.0, the internal changes to equip them with a “main key” and appropriate derived keys are, of course, handled automatically. For encrypted databases, this includes preserving their already existing database key because it continues to serve for encrypting and decrypting the data.

When using the in-place upgrade method with the “upgradedb” utility, encrypted databases need to be unlocked during the upgrade procedure. Otherwise, the “upgradedb” utility cannot connect to the locked database to perform the upgrade. These five steps upgrade an encrypted database:

  1. After installation and start-up of Actian Vector 7.0, an existing encrypted database is locked and not yet upgraded. Therefore, it is not possible to connect directly to the database to unlock it.
  2. Connect to database “iidbdb”. The “iidbdb” database is upgraded automatically during the startup of the new Version 7.0.
  3. In the session connected to “iidbdb”, temporarily unlock the encrypted database for the upgrade by running the statement:
     ENABLE PASSPHRASE '<pass phrase>' ON DATABASE <name_of_encrypted_database>;
  4. With the encrypted database temporarily unlocked, it is now possible to run the utility “upgradedb” for this database.
  5. After running “upgradedb” for the encrypted database, it is necessary to unlock this database again via a direct connection. Use the Terminal Monitor “sql” with the command-line option “-no_x100” to connect directly to the encrypted database. In this session, run the statement:
     ENABLE PASSPHRASE '<pass phrase>';
     This last step makes the preservation of the already existing database key permanent.

For more details on securing data with Actian Vector, see the earlier posts in this series.

Trusted Security in Every Upgrade

Upgrading to Actian Vector 7.0 doesn’t mean compromising encryption. The process ensures that existing database keys are preserved, while new key structures are applied automatically. By following a few essential steps, organizations can confidently upgrade their Actian Vector database without disrupting data security or accessibility.


About Martin Fuerderer

Martin Fuerderer is a Principal Software Engineer for HCLSoftware, with 25+ years in database server development. His recent focus has been on security features within database environments, ensuring compliance and robust data protection. Martin has contributed to major product releases and frequently collaborates with peers to refine database security standards. On the Actian blog, Martin shares insights on secure database server development and best practices. Check his latest posts for guidance on safeguarding enterprise data.
Data Intelligence

Model Context Protocol Demystified: Why MCP is Everywhere

Dee Radh

August 26, 2025


What is Model Context Protocol (MCP) and why is it suddenly being talked about everywhere? How does it support the future of agentic AI? And what happens to businesses that don’t implement it?

The short answer is MCP is the new universal standard connecting AI to trusted business context, fueling the rise of agentic AI. Organizations that ignore it risk being stuck with slow, unreliable insights while competitors gain a decisive edge.

What is Model Context Protocol?

From boardrooms to shop floors, AI is rewriting how businesses uncover insights, solve problems, and chart their futures. Yet even the most advanced AI models face a critical challenge. Without access to precise, contextualized information, their answers can fall short, remaining generic and missing critical insights.

That’s where MCP comes in. MCP is a rapidly emerging standard that gives AI-powered applications, like large language model (LLM) assistants, the ability to connect to structured, real-time business context through a knowledge graph.

Think of MCP as a GPS for AI. It guides models directly to the most relevant and reliable information. Instead of building custom integrations for every tool or dataset, businesses can use MCP to give AI applications secure, standardized access to the information they need.

The result? AI systems that move beyond generic responses to deliver answers rooted in a company’s unique and current reality.
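For a concrete sense of what that standardized access looks like, the sketch below exposes a stubbed knowledge-graph lookup as an MCP tool. It assumes the official MCP Python SDK (the mcp package) and its FastMCP convenience class; the server name, tool, and returned context are illustrative, not an Actian API.

    from mcp.server.fastmcp import FastMCP

    # Minimal MCP server exposing a knowledge-graph lookup as a tool.
    mcp = FastMCP("business-context")

    # Stand-in for a real knowledge graph query layer.
    KNOWLEDGE_GRAPH = {
        "customer churn": "Illustrative context: churn concentrated in the SMB segment last quarter.",
    }

    @mcp.tool()
    def lookup_context(topic: str) -> str:
        """Return curated business context for a topic from the knowledge graph."""
        return KNOWLEDGE_GRAPH.get(topic.lower(), "No context found for this topic.")

    if __name__ == "__main__":
        mcp.run()  # serves the tool to any MCP-compatible AI client

Any MCP-compatible assistant can then discover and call lookup_context without a custom, one-off integration.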

Why MCP Matters for Businesses

The rise of AI data analysts, which are LLM-powered assistants that translate natural-language questions into structured data queries, makes MCP mission-critical. Unlike traditional analytics tools that require SQL skills or dashboard expertise, an AI data analyst allows anyone to simply ask questions and get results.

These questions can be business focused, such as:

  • What’s driving our increase in customer churn?
  • How did supply chain delays impact last quarter’s revenue?
  • Are seasonal promotions improving profitability?

Answering these questions requires more than statistics. It demands contextual intelligence pulled from multiple, current data sources.

MCP ensures AI data analysts can:

  • Converse naturally. Users ask questions in plain language.
  • Ground answers in context. MCP connects assistants to knowledge graphs that supply current, trusted business context.
  • Be accessible to all users. No coding or data science expertise is needed.
  • Provide action-oriented insights. Deliver answers that leaders can trust.

In short, MCP is the bridge between decision-makers and the technical complexity of enterprise data.

The Business Advantages of MCP

The value of AI isn’t in generating an answer. It’s in generating the right answer. MCP makes that possible by standardizing how AI connects to business context, turning data into precise, actionable, and trusted insights.

Key benefits of MCP include:

  • Improved accuracy. AI reflects current, trusted business data.
  • Scalability across domains. Each business function, such as finance, operations, and marketing, maintains its own tailored context.
  • Reduced integration complexity. A standard framework replaces costly, custom builds.
  • Future-proof flexibility. MCP ensures continuity as new AI models and platforms emerge.
  • Greater decision confidence. Leaders act on insights that reflect real business conditions.

With MCP, organizations move from AI that’s impressive to AI that’s indispensable.

Knowledge Graphs: The Heart of MCP

At the core of MCP are knowledge graphs, which are structured maps of business entities and their relationships. They don’t just store data. They provide context.

For example:

  • A customer isn’t simply a record. They are linked to orders, support tickets, and loyalty status.
  • A product isn’t only an SKU. It’s tied to suppliers, sales channels, and performance metrics.

By tapping into these connections, AI can answer not only what happened but also why it happened and what’s likely to happen next.
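In its simplest form, such a graph can be represented as subject-predicate-object triples, which is enough to show how context accumulates around an entity. The entities and relationships below are illustrative:

    # A knowledge graph is often stored as subject-predicate-object triples.
    triples = [
        ("customer:1042", "placed", "order:9001"),
        ("customer:1042", "opened", "ticket:337"),
        ("customer:1042", "has_status", "loyalty:gold"),
        ("order:9001", "contains", "sku:PRO-X"),
        ("sku:PRO-X", "supplied_by", "supplier:acme"),
    ]

    def neighbors(entity):
        """Everything directly connected to an entity, with the relationship."""
        return [(p, o) for s, p, o in triples if s == entity]

    # Context an AI assistant could pull in when asked about customer 1042.
    print(neighbors("customer:1042"))
    # [('placed', 'order:9001'), ('opened', 'ticket:337'), ('has_status', 'loyalty:gold')]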

Powering Ongoing Success With MCP

Organizations that put MCP into practice and support it with a knowledge graph can create, manage, and export domain-specific knowledge graphs directly to MCP servers.

With the right approach to MCP, organizations gain:

  • Domain-specific context. Each business unit builds its own tailored graph.
  • Instant AI access. MCP provides secure, standardized entry points to data.
  • Dynamic updates. Continuous refreshes keep insights accurate as conditions shift.
  • Enterprise-wide intelligence. Organizations scale not just data, but contextual intelligence across the business.

MCP doesn’t just enhance AI. It transforms AI from a useful tool into a business-critical advantage.

Supporting Real-World Use Cases Using AI-Ready Data

AI-ready data plays an essential role in delivering fast, trusted results. With this data and MCP powered by a knowledge graph, organizations can deliver measurable outcomes to domains such as:

  • Finance. Quickly explain revenue discrepancies by connecting accounting, sales, and market data.
  • Supply chain. Answer questions such as, “Which suppliers pose the highest risk to production goals?” with context-rich insights on performance, timelines, and quality.
  • Customer service. Recommend personalized strategies using data from purchase history, service records, and sentiment analysis.
  • Executive leadership. Provide faster, more reliable insights to act decisively in dynamic markets.

In an era where the right answer at the right time can define market leadership, MCP ensures AI delivers insights that are accurate, actionable, and aligned with the current business reality. From the boardroom to the shop floor, MCP helps organizations optimize AI for decision-making across use cases.

Find out more by watching a short video about MCP for AI applications.


About Dee Radh

As Senior Director of Product Marketing, Dee Radh heads product marketing for Actian. Prior to that, she held senior PMM roles at Talend and Formstack. Dee has spent 100% of her career bringing technology products to market. Her expertise lies in developing strategic narratives and differentiated positioning for GTM effectiveness. In addition to a post-graduate diploma from the University of Toronto, Dee has obtained certifications from Pragmatic Institute, Product Marketing Alliance, and Reforge. Dee is based out of Toronto, Canada.
Databases

HCL Informix® 15 Launches on the Microsoft Azure Marketplace

Nick Johnson

August 20, 2025


We’re excited to announce the general availability of HCL Informix® 15 on the Microsoft Azure Marketplace—bringing powerful, enterprise-grade performance to one of the world’s most trusted cloud platforms.

Now, customers can deploy HCL Informix 15 directly on Azure, take advantage of their Microsoft Azure committed cloud spend, and streamline procurement through Azure’s familiar and secure billing environment.

This milestone marks a major step forward in delivering flexible, modern deployment options for organizations running HCL Informix at scale—while making cloud adoption easier, faster, and more cost-effective.

Why it Matters: Cloud-Enabled on Your Terms

With HCL Informix 15 on Azure, enterprises can modernize their data environments while gaining enhanced performance, control, and reliability. Whether you’re planning new deployments or looking to migrate legacy instances, this Azure Marketplace offering enables:

  • Faster, frictionless procurement via Microsoft Azure billing.
  • Use of Microsoft Azure Consumption Commitments (MACCs) to fund licenses—no new budget needed.
  • Enterprise-ready security and scalability in a convenient hyperscaler environment.

HCL Informix is known for its unmatched ability to handle high-throughput OLTP workloads along with time series, spatial, and JSON data—all within a single engine. Now, that same capability is available with the simplicity and elasticity of the Azure cloud.

A Critical Moment for Existing Deployments

This launch comes at a pivotal time. General Availability (GA) support for Informix 12.10 will officially end on April 30, 2026, moving into Extended Support. As costs rise with Extended Support, some organizations may choose to go off-support entirely—but this comes with serious risks to security, compliance, and operational stability.

HCL Informix 15 on Azure gives organizations a modern, supported path forward—with the flexibility to deploy in the cloud and the financial efficiency of leveraging Azure cloud credits.

Built for Azure, Backed by Actian

HCL Informix 15 on Microsoft Azure offers seamless integration with native Azure services and infrastructure. Customers can expect:

  • Azure Marketplace-native deployment, with standard provisioning and scaling.
  • Support for hybrid and multi-cloud strategies.
  • High availability, backup, and monitoring tools, all optimized for Azure environments.
  • Enterprise-grade support from Actian and our global partner network.

For organizations looking to modernize, consolidate, or simply future-proof their Informix environments, this new Azure-based deployment model offers a practical and powerful solution.

Get Started Today

HCL Informix 15 is available now on the Microsoft Azure Marketplace.

If you’re running version 12.10 or 14.10, or evaluating your next data platform strategy, now is the time to explore the benefits of running HCL Informix 15 on Azure. From simplified procurement to built-in modernization capabilities for ongoing value, this launch makes it easier than ever to align your data infrastructure with your cloud strategy.

» Explore the Azure Marketplace Listing
» Contact us for upgrade assistance or deployment support

 

Informix is a trademark of IBM Corporation in at least one jurisdiction and is used under license.


About Nick Johnson

Nick Johnson is a Senior Product Marketing Manager at Actian, driving the go-to-market success for HCL Informix and Actian Zen. With a career dedicated to shaping compelling messages and strategies for databases, Nick brings a wealth of experience from his impactful work at leading technology companies, including Neo4j, Microsoft, and SAS.
AI & ML

Investing in People: Why Mentorship Matters at Actian

Liz Brown

August 18, 2025


If there’s been one constant throughout my career, it’s the power of mentorship. From my first job to my current role at Actian, mentors have helped me grow, see new possibilities, and make smart career decisions. As I’ve moved through different roles and companies, I’ve also made it a point to be a mentor for others, which has been a rewarding experience.

At Actian, I’m thrilled to see how much our leadership team values mentorships. This goes beyond a formal program. It covers how we work together, support our peers, and help each other succeed. Mentorships are a core part of Actian’s culture, which is one of the reasons we have an award-winning workplace. 

How Mentorships Shaped My Career

I can trace some of my biggest career growth moments back to the mentors who invested their time and expertise in me. One of my first managers, going all the way back to 2003, made a lasting impact. We worked together early in my career, but we stayed in touch. Even though it’s been more than two decades since we worked together, I consider her a trusted friend and still call her for advice and perspective.

What helped make that relationship so impactful was how intentional her advice was. She helped me map out my three-, five-, and 10-year career goals and encouraged me to think about where I wanted to be in my future, not just what I had to get done that day. This mindset helped shape how I approach my career and how I support others.

I’ve been lucky to have several mentors along the way, including a manager at AWS who I followed through multiple roles. Even today, I regularly call her to get advice on everything from job decisions to how to handle challenging situations. Those relationships have helped me at every stage along my career path, and they’ve inspired me to be just as intentional about mentoring others.

Paying it Forward by Supporting Mentees

No matter where I’ve worked, I’ve always made mentorships a priority. At IBM, AWS, and now Actian, I’ve had mentees whether I was in management or an individual contributor role.

I support having career conversations that go beyond the next task we need to accomplish and instead focus on long-term personal growth. That’s why, even after one of my mentees from AWS, who is based in Sweden, took a job at Google, we still connect and share experiences.

I like to map growth and goals on a 2×2 chart. This includes asking questions such as:

  • What are you working on now?
  • What are your short-term goals?
  • Where do you want to be long-term?
  • What skills or experiences will help you get there?

These conversations help people see beyond the daily workload and emphasize building a career they’re excited about.

Mentorship in Action at Actian

One of the things I really appreciate at Actian is how the mentorship program is woven into our culture. Every new hire gets an onboarding buddy, which I view as a really smart approach to accelerating how quickly we learn about Actian processes and priorities, and also solve any potential workplace challenges.

That onboarding buddy is the new employee’s go-to resource for all those early questions and, more importantly, someone who can offer guidance from day one in a safe environment. When I joined Actian, Ron Weber was my onboarding buddy. It was very beneficial to have a seasoned marketing professional and colleague to lean on for insights.

The engagement doesn’t end as employees become acclimated to the organization. Mentorships happen naturally at Actian through everyday interactions, career conversations, and ongoing collaboration. I always encourage people to build relationships outside of their immediate teams.

For instance, some of the best advice I’ve received came from people in sales because hearing different perspectives helps me better understand the impact of our work. Hearing other departments’ viewpoints also enables me to think about aspects of our projects that I might have missed or not fully considered. 

Why I Fully Support the Actian Internship and Mentoring Programs

Our internship program is another way I see Actian’s commitment to developing employees and preparing them for the next stage of their careers. Throughout the year and especially in the summer, we engage interns across various departments. They work on everything from creating marketing campaigns to elevating customer experiences to developing innovative technology.

We pair interns with managers and onboarding buddies, giving them meaningful projects with opportunities to drive business outcomes. This isn’t busy work. We offer hands-on projects that directly contribute to our business, platforms, and customer experiences.

For example, this summer I have an intern on my team. She’s leading an account-based marketing project that will be sent directly to customers. It’s rewarding to see her grow, take ownership of the project, and gain experience that she can use both in the classroom and throughout her career. As a manager, having an intern helps me move forward on strategic projects that I might not otherwise have time to tackle.

That’s why our internship program is a win for everyone. Our interns gain valuable real-world experience, and Actian benefits from fresh perspectives, new ideas, and brand awareness in places we may not normally reach. When interns return to campus and talk about working at Actian, they introduce our company to other students—future business leaders—and their professors. I love that Actian gets this exposure on campuses and reaches an audience we don’t otherwise engage with.

Why Mentorship Will Always Be Important

Career growth doesn’t happen in isolation. It happens through connections, conversations, and continuous learning. One of the most rewarding ways we bring this belief to life at Actian is through our mentoring culture.

Even after years in the field, I still seek advice from my mentors. I want to continue to learn, grow, and avoid blind spots. Mentorship has been a huge part of my success, and it’s something I’m deeply passionate about for others, regardless of where they are in their career.

At Actian, I see mentorships in action every day, from onboarding new employees to internships with university students to the everyday career conversations we have across the business. It’s part of who we are. It’s also a big reason why Actian is such a great place for people to grow their skill sets and advance their careers.


About Liz Brown

Liz Brown is a high-energy, results-driven marketing professional with a proven track record of driving business growth and inspiring, mentoring, and enabling colleagues and peers. Known for her strategic thinking and collaborative leadership, Liz excels at building impactful marketing strategies, ABM programs, and enablement initiatives tailored to top accounts and industries. She has extensive experience in brand positioning, integrated campaigns, and customer engagement, from large-scale events to targeted digital initiatives.
Data Intelligence

The Power Behind the Graph: Why Actian Outpaces the Competition

Phil Ostroff

August 14, 2025


Data is everywhere, and organizations are awash in information but still struggle to turn it into real business value. Silos persist, data duplication runs rampant, and finding the “right” data often feels like hunting for a needle in a haystack. Enter the federated knowledge graph—a modern approach to unifying data context, lineage, and relationships across domains, platforms, and business units.

Actian’s federated knowledge graph doesn’t just connect data. It connects understanding, enabling every user, from data engineers to business analysts, to explore data intuitively and confidently. It is what differentiates the Actian Data Intelligence Platform—and it’s something most of our competitors still lack.

What is a Federated Knowledge Graph, and Why Does it Matter?

A federated knowledge graph organizes and links metadata, business terms, technical definitions, and usage patterns across distributed data systems. Unlike traditional data catalogs that rely on rigid, centralized schemas, Actian’s knowledge graph architecture embraces federation—aggregating insights from multiple domains without forcing everything into a single model.

The result? A contextualized map of your organization’s data ecosystem that scales as you grow, evolves as your systems change, and remains discoverable by both technical and non-technical users.

Benefits include:

  • Faster data discovery through contextual relationships and semantic search.
  • Enhanced data trust via connected lineage, quality scores, and usage metrics.
  • Smarter governance powered by visibility into how data is created, transformed, and consumed.
  • Better business alignment by linking KPIs and metrics to the actual data behind them.

How Actian Outperforms the Competition

Many data intelligence vendors offer basic graph features, such as lineage visualizations or relationship mapping. However, these are often centralized and limited in scope, focusing narrowly on either technical lineage or business glossary terms. They fall short of what a federated graph can achieve.

Here’s how Actian differentiates:

  • Federated Graph Architecture: Actian’s graph spans domains and tools; other solutions offer basic graph capabilities that are often centralized and siloed.
  • Dynamic Lineage and Context: Actian integrates lineage across the data catalog and observability; others provide limited, static lineage.
  • Semantic Search + NLP: Actian enables deep contextual discovery; others rely on keyword-based search and inconsistent tagging.
  • Business-to-Technical Mapping: Actian links business terms directly to data; others often require manual stitching or third-party tools.
  • Open, Extensible Framework: Actian supports modern data stacks (e.g., Iceberg, dbt, Fivetran); others are often proprietary or restricted.

Competitors may claim to “connect the dots” with third-party solutions, but without a federated model, those dots remain scattered. Actian unifies them—and surfaces the insights your team needs in real time.

Some competitors even claim to include this functionality within their offering, but customers later find out that they simply offer an API to connect with additional solutions. That translates to more cost and, more importantly, additional potential points of failure.

Why the Right Knowledge Graph Matters to Your Business

In a world where data must be treated as an asset, too many organizations still struggle to locate, understand, and trust their data. Why? Because most data platforms stop at superficial integrations and centralized catalogs that can’t keep up with the growing complexity of enterprise ecosystems.

Let’s be clear: centralized metadata models are not built for scale. Collibra, Alation, and Informatica may offer lineage maps and glossaries, but their architectures are inherently rigid. They rely on manual curation, brittle connectors, and static representations of data relationships. The result is often a stale catalog that looks good in demos but quickly becomes outdated and irrelevant in practice.

Actian’s federated knowledge graph overcomes these limitations.

While other platforms force all metadata into a central repository—often requiring complex ETL-style ingestion—Actian leverages federation to connect to the source of truth in real time, allowing metadata to remain decentralized while still being universally discoverable. That means your data catalog is always current, context-rich, and governed by design—not by duct tape.

Let’s dig into a few real-world consequences of those differences:

  • Other solutions rely heavily on keyword-based search, which leads to missed results and user frustration. Actian’s semantic knowledge graph enables natural language queries and contextual exploration, helping users find exactly what they need—even if they don’t know exactly what they’re looking for.
  • Most platforms offer disconnected glossaries and lineage tools, requiring users to mentally bridge the gap between business terms and the underlying technical data. Actian automatically maps these relationships across systems, roles, and tools—removing ambiguity and reducing reliance on tribal knowledge.
  • Manual lineage stitching is still the norm for many vendors, especially in complex hybrid-cloud environments. Actian dynamically updates lineage and usage patterns across data products and contracts, ensuring trustworthy insights and audit-ready governance.
  • Extensibility is a major limitation in other tools—either you use what their proprietary connectors allow, or you’re stuck. Actian’s open, API-first framework integrates seamlessly with modern data stacks, from dbt to Iceberg to Fivetran, without vendor lock-in.

The business impact is profound. With Actian, your teams no longer waste time second-guessing metadata quality, replicating datasets, or manually reconciling conflicting definitions. Instead, they’re empowered with a federated map of your enterprise data landscape, tailored to your architecture, aligned to your governance needs, and optimized for discovery at scale.

See Your Entire Data Universe

In a competitive landscape where most data platforms offer only partial insights, Actian’s federated knowledge graph delivers the full picture. It’s how modern enterprises scale trust, clarity, and collaboration across their entire data ecosystem.

Experience the Actian Data Intelligence Platform and its federated knowledge graph for yourself with a product demo.

Phil Ostroff Headshot

About Phil Ostroff

Phil Ostroff is Director of Competitive Intelligence at Actian, leveraging 30+ years of experience across automotive, healthcare, IT security, and more. Phil identifies market gaps to ensure Actian's data solutions meet real-world business demands, even in niche scenarios. He has led cross-industry initiatives that streamlined data strategies for diverse enterprises. Phil's Actian blog contributions offer insights into competitive trends, customer pain points, and product roadmaps. Check out his articles to stay informed on market dynamics.
Data Architecture

Rethinking the Medallion Architecture for Modern Data Platforms

Piethein Strengholt

August 12, 2025

Actian-Big Medallion Architecture Debate-Blog

The Medallion architecture is a popular design pattern for organizing data within a Lakehouse architecture. Many large enterprises use this pattern to logically structure their data. 

In this post, I’ll outline how the architecture works, explore its adaptability in modern enterprise environments, and highlight why it remains relevant, especially as data teams scale and federate.

Understanding the Three Layers

Bronze Layer

This layer acts as the zone for raw data collected from various sources. Data in the Bronze layer is stored in its original structure without any transformation, serving as a historical record and a single source of truth. It ensures that data is reliably captured and stored, making it available for further processing. Its key characteristics include high volume, variety, and veracity. The data is immutable to maintain the integrity of its original state.

Silver Layer

This layer refines, cleanses, and standardizes the raw data, preparing it for more complex operational and analytical tasks. In this layer, data undergoes quality checks, standardization, deduplication, and other enhancements that improve its reliability and usability. The Silver layer acts as a transitional stage where data is still granular but has been processed to ensure quality and consistency. Data in this layer is more structured and query-friendly, making it easier for analysts and data scientists to work with.

Gold Layer

This layer delivers refined data optimized for specific business insights and decision-making. The Gold layer involves aggregating, summarizing, and enriching data to support high-level reporting and analytics. This layer focuses on performance, usability, and scalability, providing fast access to key metrics and insights.
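As a rough, simplified sketch of how these three layers are often expressed in code (assuming a Spark session with Delta Lake available; the paths, columns, and rules below are hypothetical), a pipeline might look like this:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw source data unchanged, preserving its original structure.
raw = spark.read.json("/landing/orders/")                       # hypothetical source
raw.write.format("delta").mode("append").save("/bronze/orders")

# Silver: cleanse, standardize, and deduplicate the raw records.
bronze = spark.read.format("delta").load("/bronze/orders")
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())                      # basic quality check
    .withColumn("order_date", F.to_date("order_date"))          # standardize types
    .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").save("/silver/orders")

# Gold: aggregate and enrich for reporting and analytics.
gold = (
    silver.groupBy("customer_id")
          .agg(F.count("order_id").alias("order_count"),
               F.sum("amount").alias("total_revenue"))
)
gold.write.format("delta").mode("overwrite").save("/gold/customer_revenue")
```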

Why the Layers are Logical, Not Physical

It’s crucial to think of these layers as logical, not physical. When discussing the Bronze layer, for example, don’t frame it as just one physical layer. Instead, view it as a logical layer that can span several physical layers. Below is how the Medallion architecture could look in practice:

building medallion architecture all layers

Figure 1 – Building Medallion Architectures, O’Reilly (2025)

 

This conceptual flexibility is vital, particularly in larger organizations. As these organizations expand, they face the challenge of scaling data management to support increased data volumes, accommodate more users, and address a wider variety of use cases.

Federated Medallion Architecture

In this context, it is important to understand that the Medallion architecture is not a rigid concept. Rather, it represents a spectrum of possibilities that can be adapted to unique circumstances, including the option of running multiple Medallion architectures tailored to different needs. That choice, in turn, shapes the design of the overall architecture.

For instance, consider managing two Medallion architectures—one tailored to the source system and the other to consumption. In this case, the interaction between layers becomes crucial. You could argue that the Gold or data product layer in the source-aligned architecture effectively acts as the Bronze layer in the consumption-aligned architecture. This approach creates a more streamlined architecture by eliminating the need to duplicate the data product layer in the Bronze layer of the consumption setup.
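As a quick, hypothetical sketch of that interaction (paths and names invented for illustration), the consumption-aligned pipeline simply reads the source-aligned Gold/data product layer as its Bronze input:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("federated-medallion-sketch").getOrCreate()

# The source-aligned architecture publishes its data product in its Gold layer ...
product_path = "/source_aligned/gold/orders_product"   # hypothetical path

# ... and the consumption-aligned architecture reads that product directly as its
# Bronze input, so the data product layer is not duplicated.
consumption_bronze = spark.read.format("delta").load(product_path)
```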

The image below illustrates an architectural style that employs a simple data provider, a single-use complex provider, and a distributor consumer.

building medallion architecture simple data provider chart

Figure 2 – Building Medallion Architectures, O’Reilly (2025)

Managing Complexity Across Teams

Building Medallion architectures can be challenging, especially when many teams are involved, each requiring access to data from others. In such scenarios, you might consider establishing separate Medallion architectures for each team, complete with their own Bronze, Silver, and Gold layers.

However, offering good guidance is essential to prevent the emergence of too many variations, which could hinder interoperability between domains and create silos that complicate data sharing and collaboration.

In conclusion, the Medallion model is not a one-size-fits-all solution. However, it remains one of the most practical and adaptable design patterns for structuring AI-ready, analytics-grade data pipelines—especially in complex, federated, and continually growing environments.


To explore the concepts in more depth, check out my book:
Building Medallion Architectures: Designing With Delta Lake and Spark (O’Reilly, 2025)

 

Or watch the full replay of the webinar:
The Big Medallion Architecture Debate
With Ole Olesen-Bagneux, Actian Chief Evangelist

Piethein Strengholt headshot

About Piethein Strengholt

Piethein Strengholt is a seasoned expert in data management with significant experience in chief data officer (CDO) and chief data architect roles. He has a strong track record of collaborating with CDO executives at large enterprises, where he focuses on driving community growth and aligning strategies with business goals. Piethein is also a prolific blogger and a sought-after speaker who regularly addresses the latest trends in data management, including data mesh concepts, data governance, and scaling strategies.
Databases

Securing Your Data With Actian Vector, Part 3

Martin Fuerderer

August 7, 2025

securing your data with Actian Vector

Following up on my second blog post, which covered Actian Vector’s encryption capabilities, this next post in the data security series explains the different encryption keys and how they are used in Actian Vector.

Understanding Different Encryption Keys

The encryption method generally used in Actian Vector is the 256-bit variant of the Advanced Encryption Standard (AES). AES requires an encryption key, and for 256-bit AES, this key is 256 bits long. A longer key means better security. Currently, 256-bit AES is considered “secure enough” and 256 bits is the maximum key length defined for AES.

In Actian Vector, database encryption is a major use case for encryption. As described earlier, its implementation uses different encryption keys for different pieces of data. Beyond database encryption, the database server also uses encryption for other purposes, each with its own keys. All of these encryption keys must be secured.

To provide sufficient security, encryption keys must not be easy to guess. Therefore, encryption keys are usually randomly generated. This makes them secure, but difficult to remember (few people can easily remember a sequence of 32 random bytes). A common solution is to protect encryption keys with a passphrase, where a prudently chosen passphrase can be sufficiently secure but still easy enough to remember.

Still, it would not be safe to use a passphrase directly as an encryption key. Instead, an encryption key is derived from the passphrase, and advanced algorithms exist for this derivation to ensure that the resulting key is sufficiently secure.

Actian Vector uses Password-Based Key Derivation Function 2 (PBKDF2) for this purpose. PBKDF2 is part of the RSA Laboratories’ Public-Key Cryptography Standards series. This illustration shows the process:

Structure of encryption keys used for encryption at rest
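For readers who want to see what passphrase-based key derivation looks like in general, here is a minimal sketch using Python’s standard library. It is not Actian Vector’s internal code; the passphrase, salt handling, and iteration count are illustrative only:

```python
import hashlib
import os

passphrase = b"correct horse battery staple"   # a memorable, prudently chosen passphrase
salt = os.urandom(16)                          # random salt, stored with the key metadata
iterations = 600_000                           # illustrative work factor

# PBKDF2 stretches the passphrase into a 256-bit key suitable for AES-256.
derived_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations, dklen=32)
print(len(derived_key))                        # 32 bytes = 256 bits
```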

How Keys are Generated, Secured, and Used for Database Encryption 

An individual “main key” is randomly generated for each database. To store the main key securely, it is encrypted with the “protection key.” This protection key results from the passphrase being processed by the PBKDF2 algorithm. The protection key does not need to be stored anywhere because its derivation from the passphrase can be repeated whenever the protection key is needed.

The “database key” for the database encryption is then derived from the main key. Because the main key is already randomly generated, an internal method that is based on a Secure Hash Algorithm (SHA) derives a sufficiently random database key without the need for the more complex PBKDF2 algorithm.

Likewise, other keys for different purposes are derived from the main key. These derived keys are not persisted; they are kept in memory only. The encrypted main key, by contrast, is persisted, but the decrypted main key is needed only to derive the database key and the other keys. Afterwards, the decrypted main key is removed from memory.

The database key is used to encrypt and decrypt the container of the individual “table keys.” These table keys are randomly generated for each table and are used to (finally) encrypt and decrypt the user data in tables and indexes. Because the table keys are randomly generated, they also need to be persisted and, with that, secured by encrypting them with the database key. The container where the table keys are stored also holds other metadata for the database, so the whole container is encrypted rather than just the table keys individually.
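The following simplified sketch mirrors that key hierarchy with generic Python primitives (the standard hashlib module and the cryptography package). It illustrates the pattern only; it is not Actian Vector’s actual implementation, and the derivation labels and container layout are invented for the example:

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Protection key: derived from the passphrase via PBKDF2; never persisted.
salt = os.urandom(16)
protection_key = hashlib.pbkdf2_hmac("sha256", b"my passphrase", salt, 600_000, dklen=32)

# Main key: randomly generated per database, persisted only in encrypted form.
main_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
encrypted_main_key = AESGCM(protection_key).encrypt(nonce, main_key, None)   # safe to store

# Database key: derived from the main key with a SHA-based scheme; kept in memory only.
database_key = hashlib.sha256(main_key + b"database-key").digest()

# Table keys: randomly generated per table and stored in a container that also holds
# other database metadata; the whole container is encrypted with the database key.
table_keys = {"orders": os.urandom(32), "customers": os.urandom(32)}
container = b"".join(name.encode() + key for name, key in table_keys.items())  # simplified layout
container_nonce = os.urandom(12)
encrypted_container = AESGCM(database_key).encrypt(container_nonce, container, None)
```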

The database administrator can change the passphrase for a database, as well as rotate the main key or individual table keys. I’ll share more on this in later blog posts on key management.

Martin Fuerderer headshot

About Martin Fuerderer

Martin Fuerderer is a Principal Software Engineer for HCLSoftware, with 25+ years in database server development. His recent focus has been on security features within database environments, ensuring compliance and robust data protection. Martin has contributed to major product releases and frequently collaborates with peers to refine database security standards. On the Actian blog, Martin shares insights on secure database server development and best practices. Check his latest posts for guidance on safeguarding enterprise data.
Awards

Actian Earns Top Marks in ISG Buyers Guide™ for Data Platforms

Actian Corporation

August 6, 2025

ISG Research Buyers Guide 2025 - Actian Exemplary Winner for Data Platforms

ISG Software Research, a global analyst research firm, has recognized Actian for the second consecutive year. 

The latest ISG Buyers Guide™ for Data Platforms highlights Actian’s impressive standing in the crowded and competitive data platform market. The annual report, produced by global technology advisory leader ISG, rigorously assesses the market’s top data platform providers and serves as a crucial resource for enterprise buyers making informed solution decisions. 

What truly sets Actian apart is a forward-looking roadmap. Actian is continuously evolving its product portfolio to solve both current and future data challenges, ensuring customers can meet today’s demands while preparing for tomorrow’s opportunities. From scaling data trust to enabling data intelligence and supporting AI readiness, Actian’s commitment to innovation ensures that organizations not only keep pace with change but also stay ahead of it.

Actian’s Noteworthy Placement

Actian has been recognized as an “Exemplary” performer in all three major categories evaluated by ISG: 

  • Data Platforms (Overall)
  • Analytic Data Platforms 
  • Operational Data Platforms 

This coveted placement recognizes Actian as not just a leading solutions provider, but a best-in-class vendor for both product and customer experience. In a marketplace that includes tech giants such as Oracle, Microsoft, AWS, Google Cloud, and SAP, this is a significant recognition of the Actian Data Platform’s capabilities and client value, as well as Actian’s approach to innovation.

Leading in Manageability and Customer Validation

ISG’s assessment model uses a robust methodology, scoring vendors on seven distinct dimensions. Within this framework, Actian stands out as a Leader in two key evaluation criteria for both Data Platforms and Operational Data Platforms:

  • Manageability: Reflecting Actian’s strength in enabling seamless platform operation, governance, and support for complex enterprise IT needs.
  • Validation: Recognizing Actian’s ability to deliver tangible business value and foster robust customer relationships through every stage of the client lifecycle. 

These distinctions are especially notable given the technical demands and mission-critical nature of modern data platforms.  

Beyond its Leader status in Manageability and Validation, Actian secured strong marks across the full spectrum of ISG’s criteria, including adaptability, capability, reliability, usability, and total cost of ownership (TCO).

Why These Findings Matter

ISG’s Buyers Guides are known for their objectivity, depth, and focus on real enterprise requirements; badges and rankings are not influenced by vendor marketing or participation alone. Actian’s “Exemplary” placement is a direct result of its technical merit, customer validation, and depth of platform features.

Actian’s performance in the 2025 ISG Buyers Guide™ for Data Platforms validates its status as a well-rounded, enterprise-grade data platform provider, offering businesses an alternative to legacy incumbents without compromise on features, reliability, or value. Unlike legacy solutions that often lock organizations into rigid systems and high costs, Actian takes a more agile and customer-centric approach. Our platform is designed for flexibility and scalability, helping enterprises innovate without the complexity or constraints of traditional providers.

Ready to see what sets Actian apart? Learn more about the Actian Data Platform and discover the innovation and results that have driven Actian’s ongoing industry recognition. Empower your business with the performance, flexibility, and real-world value that leading enterprises rely on. 

actian avatar logo

About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.