Will Data Management in Operational Technology Finally Be Standardized?
The proliferation of Operational Technology (OT) is reshaping how industrial organizations generate and use data — from high-frequency PLC telemetry and time-series sensor feeds to asset master records and digital twins. As OT converges with IT and AI, the question resurfaces: will data management in operational technology finally be standardized? The short answer in 2026 is: not universally — but we are seeing practical, standards-driven patterns (OPC UA, MQTT/Sparkplug, Unified Namespace, IEC‑62443) and architecture blueprints that make consistent, governed OT data achievable at scale. This article is a playbook: which standards matter, what architecture patterns work, how to make OT data observable and secure, and where Actian fits into that stack.
What is Operational Technology?
Before examining the standardization issue, it is important to understand the definition of “operational technology.” OT is an umbrella term that describes technology components used to support a company’s operations – typically referring to traditional operations activities, such as manufacturing, supply chain, distribution, field service, etc. (Some companies are relying on operational technology to support, for example, marketing, sales and digital delivery of services, but that is the topic of a future article.)
Operational technology includes, for example, embedded sensors within manufacturing equipment; telemetry from operations components deployed in the field (e.g., oil pipelines, traffic signals, wind turbines); industrial IoT devices; location-enabled tablets used by field service personnel; and much more – the list is long. This is important because OT is not a single classification of technology; it is a descriptor of how technology components are used.
OT Standards Catalog
OT standards are maturing into a practical toolkit rather than a single, enforced model. Key standards you need to know:
- OPC UA: A robust object model and secure services layer ideal for machine-level metadata and standardized information models. Use when you need typed semantic models and discovery.
- MQTT + Sparkplug: Lightweight publish/subscribe protocol + payload conventions. Ideal for constrained devices and reliable stateful messaging across disconnected edge networks.
- ISA‑95: Enterprise-to-control hierarchy; use for mapping OT asset metadata to ERP/PLM systems and aligning production/model data with business processes.
- IEC‑62443: The de facto OT cybersecurity standard family; implement it as a combined process and technical control framework.

Practical tip: adopt OPC UA for machine model normalization where possible, use MQTT/Sparkplug for high-volume telemetry, and publish a canonical view into a Unified Namespace for downstream consumers.
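To make the MQTT/Sparkplug convention concrete, here is a minimal sketch of building a Sparkplug-style topic and a device-data payload. The topic structure (`spBv1.0/group/message_type/node[/device]`) follows the Sparkplug B specification; the payload, however, is simplified to JSON purely to keep the sketch dependency-free – real Sparkplug B encodes payloads as protobuf. The group, node, and metric names are illustrative assumptions.

```python
import json
import time

def sparkplug_topic(group_id, message_type, node_id, device_id=None):
    """Build a Sparkplug B topic: spBv1.0/group_id/message_type/edge_node_id[/device_id]."""
    parts = ["spBv1.0", group_id, message_type, node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

def ddata_payload(metrics):
    # Real Sparkplug B serializes payloads as protobuf; JSON is used here
    # only so the sketch runs without external dependencies.
    body = {
        "timestamp": int(time.time() * 1000),
        "metrics": [{"name": k, "value": v} for k, v in metrics.items()],
    }
    return json.dumps(body).encode("utf-8")

# Publish a DDATA (device data) message for a hypothetical press on Line 3.
topic = sparkplug_topic("PlantA", "DDATA", "Line3", "PLC-7")
payload = ddata_payload({"motor/temp_c": 71.4, "motor/rpm": 1480})
```

In a real deployment, `topic` and `payload` would be handed to an MQTT client's publish call; the broker then fans the message out to every UNS subscriber.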
Unified Namespace (UNS)
A Unified Namespace (UNS) is a logical messaging layer where normalized, canonical OT data is published once and consumed many times. Typical implementation:
- At the edge, adapters/gateways convert protocol-specific data (Modbus, OPC DA, proprietary PLC protocols) into a normalized topic and schema (OPC UA / Sparkplug recommended).
- Publishers write stateful telemetry to the UNS (topic hierarchy aligned with asset model).
- Consumers (analytics, MES, digital twins, ML pipelines) subscribe to the UNS and apply transformations.

Checklist: define canonical tag naming, publish asset master mapping, implement a schema registry, and enforce message contracts with observability alerts.
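The checklist above – canonical naming plus contract enforcement – can be sketched as a small publish-path guard. The registry contents, topic taxonomy (enterprise/site/area/asset/tag), and units are illustrative assumptions, not part of any standard.

```python
# Minimal sketch of a UNS publish path: canonical topic naming plus a
# schema-registry contract check before a message is admitted to the UNS.
SCHEMA_REGISTRY = {
    "enterprise/plant_a/line_3/press_7/temperature": {
        "unit": "degC",
        "type": float,
    },
}

def canonical_topic(enterprise, site, area, asset, tag):
    """Normalize free-form names into a lowercase, underscore-separated hierarchy."""
    return "/".join(p.lower().replace(" ", "_") for p in (enterprise, site, area, asset, tag))

def validate_contract(topic, value):
    """Reject messages with no registered contract or a mismatched type."""
    schema = SCHEMA_REGISTRY.get(topic)
    if schema is None:
        raise KeyError("no contract registered for " + topic)
    if not isinstance(value, schema["type"]):
        raise TypeError(topic + ": expected " + schema["type"].__name__)
    return {"topic": topic, "value": value, "unit": schema["unit"]}

msg = validate_contract(
    canonical_topic("Enterprise", "Plant A", "Line 3", "Press 7", "temperature"),
    182.5,
)
```

Failing the contract check loudly (rather than silently publishing) is what turns the registry into an enforceable message contract; the raised errors are natural hook points for observability alerts.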
Security and Compliance
Security-by-design is non-negotiable for OT. Use IEC‑62443 as your baseline: segment networks, enforce least privilege on OPC UA endpoints, use protocol wrappers for legacy PLCs, and apply continuous vulnerability scanning. Where hardware upgrades are impractical, deploy compensating controls — read‑only collectors, application layer gateways, and micro-segmentation — to protect production networks without disrupting operations.
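One compensating control mentioned above, the read-only collector, can be sketched as a thin wrapper that exposes only read operations and an explicit register allow-list. The `plc_read` function here is a placeholder assumption standing in for a real protocol driver call (e.g., a Modbus client's holding-register read); register numbers and values are illustrative.

```python
# Sketch of a read-only collector as a compensating control: upstream
# consumers can only read from an allow-listed set of registers, and the
# wrapper deliberately has no write path to the PLC.

def plc_read(register):
    """Placeholder for a real driver call (e.g., a Modbus read). Hypothetical data."""
    return {40001: 1480, 40002: 71}[register]

class ReadOnlyCollector:
    ALLOWED_REGISTERS = {40001, 40002}  # explicit allow-list: least privilege

    def poll(self, register):
        if register not in self.ALLOWED_REGISTERS:
            raise PermissionError("register %d not on the allow-list" % register)
        return plc_read(register)
    # No write method exists, so the production network only ever sees
    # reads originating from this host.

rpm = ReadOnlyCollector().poll(40001)
```

Pairing this pattern with network micro-segmentation (the collector host is the only machine allowed to reach the PLC subnet) keeps legacy devices reachable for analytics without exposing them to writes.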
Data Observability for OT
Operational data failures often look different from IT ETL failures: missing tags, time skew, out-of-band high-frequency bursts, or silent edge drops. For OT, observability should track:
- Content validity (schema drift, outlier detection).
- Flow & latency (ingest SLAs from site to cloud).
- Infrastructure health (gateway/edge uptime, buffer backpressure).
- Contract enforcement (schema registry & lineage).
- Business usage & ROI (who consumes this data and for which KPIs).

Start with anomaly detection on arrival rates and tag-level health dashboards; add lineage and contract checks before deploying models to production.
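Arrival-rate anomaly detection, the suggested starting point, can be sketched as a trailing-window outlier test on per-interval message counts. The window size, threshold, and sample rates below are illustrative assumptions; production systems typically tune these per tag.

```python
# Minimal sketch: flag an interval whose message count deviates more than
# k standard deviations from the trailing `baseline` intervals.
from statistics import mean, stdev

def arrival_anomalies(counts, k=3.0, baseline=12):
    """Return indices of intervals whose count is an outlier vs. the
    trailing `baseline` intervals."""
    flagged = []
    for i in range(baseline, len(counts)):
        window = counts[i - baseline:i]
        mu, sigma = mean(window), stdev(window)
        if sigma == 0:
            if counts[i] != mu:
                flagged.append(i)
        elif abs(counts[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# A silent edge drop shows up as a sudden zero-count interval.
rates = [60, 58, 61, 59, 60, 62, 57, 60, 61, 59, 60, 58, 0]
anomalies = arrival_anomalies(rates)  # flags the final, zero-count interval
```

The same shape of check works for latency SLAs (substitute per-interval p95 latency for message counts), which covers the flow-and-latency pillar above with one mechanism.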
The Push for Interoperability
Some efforts are under way within the industry to drive interoperability between IT and OT components. Open Platform 3.0 from The Open Group is a good example. What this effort and others like it seek to do is enable components from different manufacturers to co-exist and work better together within a company’s technology ecosystem. They aren’t seeking to standardize the data coming from individual OT systems or how that data is managed. That challenge is being left to individual companies and the data science profession to address.
Data science professionals have been working with companies and individual technology providers for many years to determine a scalable and efficient method to aggregate data from diverse data sources. Efforts to standardize data models and interfaces have been largely unsuccessful due to the desire of some large players in the market to develop and defend closed technology ecosystems.
In light of this, most of the recent developments have been centered on the use of data warehouses to aggregate diverse data and then applying machine learning and artificial intelligence to reconcile differences.
Why Operational Technology Data Management May Never Be Standardized
The biggest challenge to standardizing OT data management is managing change. It would be entirely possible to design and deploy a standardized solution to manage all the data generated from OT systems today. The problem is that the technology in this space is continuously evolving and the data being generated is changing too.
Neither technology suppliers nor the companies consuming OT have any desire to slow the pace of technological innovation or constrain it through standardization. New OT innovations will be the driving force behind the next generation of business modernization and companies are eager to consume new capabilities as soon as they can be made available.
How Companies are Integrating Operational Technology Data
Even though companies don’t want to standardize the data coming from various OT source systems, they have a critical business need to combine that data and analyze it as part of an integrated data set. That is where data management tools, such as those from Actian, come into play.
Actian’s suite of products – including DataConnect, the Actian Data Platform, and Zen – provides companies with a platform to manage the ingestion of data from all of their OT data sources, reconcile it in real time using cloud-scale analytics and machine learning, and then apply robust statistical analysis (e.g., time-series and correlation analysis) to translate data into meaningful insights in an operations context.
The operational technology space is poised to be one of the most important sectors of the IT industry over the next few years. New components will enable companies to generate data from almost all facets of their operations, and robust data management solutions, such as Actian’s, will enable them to interpret this data in real time to generate valuable operational insights.
While standardization is unlikely, component interoperability is improving, and emerging technologies, such as AI, are making data analytics easier. To learn more about how Actian can support your OT efforts, visit www.actian.com/zen.
How Actian Helps
Deploy Actian DataConnect or gateways at the edge to normalize and stream protocols (MQTT/Sparkplug, OPC UA) into a Unified Namespace; use Actian Data Platform for real‑time analytics and cloud-scale ML on reconciled OT and IT data; use Zen as a lightweight embedded DB on local devices or gateways for caching and local processing. Together, they provide ingestion, reconciliation, observability hooks, and analytics to convert raw OT telemetry into operational insight.
Implementation Checklist
- Inventory protocols & asset models per site.
- Prioritize data domains (safety, downtime, OEE, energy).
- Design a canonical model / UNS taxonomy.
- Deploy edge gateway adapters (OPC UA/MQTT).
- Implement schema registry and contract checks.
- Add observability: tag-level health, latency SLAs, lineage.
- Harden networks to IEC‑62443 recommendations.
- Measure ROI (downtime avoidance, yield improvements) and iterate.