We all hear about how forward-thinking companies, small and large, need to be more customer focused, even customer obsessed, to succeed in this hypercompetitive world. Data drives knowledge about your customers’ needs and behaviors, so you can actively tailor your messaging and offers to rise above the competition and win their business. This knowledge comes from a growing variety of 24/7 sources: digital systems and, increasingly, a sea of sensors, devices, and mobile applications that track those activities. But the volume of data can be overwhelming, and its value can decay quickly with time, so it’s imperative to have an infrastructure in place to rapidly exploit that perishable information and influence when and how you engage with your intended customers. That takes a new approach to managing data in the moment, which we call an operational data warehouse (ODW). An ODW goes beyond reporting on historic, static data; it operates on fresh, active data to drive specific business actions – in the business moment.
Enterprises already have a number of solutions in place to deliver analytic insights, from established relational database systems to enterprise data warehouses to data lakes, within their data centers or increasingly in the cloud. Existing solutions typically involve some significant tradeoffs that an operational data warehouse can overcome.
Take the traditional enterprise data warehouse (EDW), which has been around for decades. It is an established way of managing historical data, delivering batch updates, supporting regular reporting cycles, and serving as a single source of truth for the corporation. However, it is typically an expensive solution, especially if you have to upgrade hardware, expand capacity, add new data types, or modernize access. Because an EDW is carefully managed by IT for governance and cost control, new reports must go through a formal change process that can slow development. And while an EDW handles planned workloads well, it is poor at ad hoc queries, making it difficult to do data discovery and generate actionable analytics without impacting existing reporting workloads.
Another option for some is an operational data store (ODS), which provides more data flexibility and a separate environment for ad hoc analysis, but it usually focuses rigidly on one area or data type and is not comprehensive. Like an EDW, it may not be optimized for the interactive analytic query performance needed for discovery.
Data lakes are viewed by many as a more economical and scalable solution, with storage for many data sources and data types. However, a data lake can become a dumping ground for data with poor governance and validation. And its architectural heritage, designed for easy, flexible data ingestion, results in slow query performance, subpar user concurrency, and unpredictable outcomes.
The latest shiny object to appear is the cloud-only analytics database, promising economical storage and performance and unbounded elastic deployment. In reality, these cloud-only solutions can result in expensive or unpredictable compute costs, limited deployment options with a high potential for vendor/architecture/data lock-in, and relatively new and immature management and tools. Is there a better way?
The ideal solution for operational analytics would have all the best characteristics of the alternatives mentioned above without any of the shortcomings. This new approach would need to be:
- Fast – It would have an underlying architecture optimized for analytic query performance, requiring little or no tuning in anticipation of certain workloads (like indexing or aggregations), maximizing the variety of workloads it could support.
- Scalable – It would scale to large data capacities with an economical and flexible storage layer, connecting to a variety of existing legacy and new data sources.
- Flexible – It would offer flexible deployment options, on-premises or on different cloud platforms.
- Current – It would be capable of near real-time updates from operational systems to keep current with the business, without slowing down the performance of ongoing analytic queries.
- Robust – It would deliver enterprise-level security, reliability, and manageability.
- Secure – It would offer a number of data protection mechanisms to meet enterprise security requirements and comply with tougher regulatory environments.
These characteristics define what we call an operational data warehouse. With such a solution, you would have a database system able to deliver near-real-time insights into the business for a variety of users, from data scientists to business analysts. It would support ad hoc self-service data discovery and analytics using the most current operational data, without burdening transactional systems and workloads.
The Actian Vector analytic database was designed from the ground up to be that operational data warehouse, to harness data in the moment. Not only is it fast, scalable, and flexible, it is ready for production with mature security, administration, and resource management.
Vector is the fastest analytic database available on industry standard servers, on-premises or in the cloud. The original goal was to execute SQL as fast as if it were written in optimized C code, by taking advantage of the vectorized instructions in standard CPUs as well as a columnar data format to process analytic queries more efficiently. It achieved that goal and more, racking up a number of impressive record-setting benchmark results over the past six years. Moreover, Vector does not need special performance tuning or optimizations like indexing or pre-built aggregations, providing great performance out of the box. That makes Vector great for ad hoc self-service data discovery, with interactive performance and reduced cycle times for faster iteration, and on full data sets, not samples.
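To make the execution model concrete, here is a minimal, illustrative sketch in Python of the difference between row-at-a-time query evaluation and columnar, vectorized evaluation. The data, column names, and functions are hypothetical; a real vectorized engine like Vector runs these tight column loops as SIMD instructions in compiled code, which this toy example only imitates.

```python
# Row-at-a-time vs. column-at-a-time ("vectorized") evaluation of:
#   SELECT SUM(price * qty) FROM sales WHERE qty > 10
# Illustrative sketch only, not Vector's actual implementation.

rows = [(9.99, 5), (4.50, 12), (7.25, 30), (1.10, 8)]

def row_at_a_time(rows):
    total = 0.0
    for price, qty in rows:            # interpret one tuple per iteration
        if qty > 10:
            total += price * qty
    return total

# Columnar layout: one contiguous list per attribute.
price = [9.99, 4.50, 7.25, 1.10]
qty = [5, 12, 30, 8]

def vectorized(price, qty):
    # Step 1: evaluate the predicate over the whole column, producing a
    # selection vector of qualifying row positions.
    sel = [i for i, q in enumerate(qty) if q > 10]
    # Step 2: multiply and sum only the selected positions; in a real
    # vectorized engine this loop maps onto SIMD instructions.
    return sum(price[i] * qty[i] for i in sel)

assert row_at_a_time(rows) == vectorized(price, qty) == 271.5
```

The columnar form also explains why no indexes are needed for good scan performance: whole-column operations are cheap enough that scanning the full data set stays interactive.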
Vector offers scalability from a single server to clusters of hundreds of nodes, using Hadoop’s Distributed File System and YARN to manage resources and distribute the workload to where the data is stored. Vector handles GBs to TBs to PBs of data, and supports far more concurrent users than other MPP solutions.
Vector inherited the administrative infrastructure of Actian’s more established transactional RDBMS products, leveraging the proven maturity of query planning, query optimization, data ingestion, data quality, security, reliability, and manageability. Actian DataFlow perfectly complements Vector by adding faster and more intuitive control over data ingestion and analytic workflows, including a graphical user interface powered by KNIME, which makes it easy to create and optimize query workloads.
Analytics can deliver the best insights with current data, but most analytic solutions expect batch updates and write-once, read-many access patterns that cannot support frequent changes. Vector employs a patented technique called positional delta trees to handle updates to existing data without impacting query performance, resulting in analytics that can incorporate regular and frequent updates to deliver the most current insights into your business.
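The core idea behind positional updates can be shown with a toy sketch. Vector’s patented positional delta trees are far more sophisticated (a tree structure keyed by row position); the hypothetical code below only illustrates the principle: the columnar store written in batch stays immutable, while a small in-memory delta keyed by position is merged into each scan, so queries always see current data without rewriting the column.

```python
# Toy illustration of positional updates merged at scan time.
# Not Vector's positional delta trees, just the underlying idea.

base = [100, 200, 300, 400, 500]       # immutable column, written in batch
deltas = {1: 250, 3: 410}              # row position -> updated value (in memory)

def scan(base, deltas):
    """Yield the current value for each row, applying deltas by position."""
    for pos, value in enumerate(base):
        yield deltas.get(pos, value)

assert list(scan(base, deltas)) == [100, 250, 300, 410, 500]
```

Because deltas are addressed by position rather than by key lookup, the merge adds almost no overhead to a sequential column scan, which is why frequent updates need not slow analytic queries.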
With the advent of GDPR we’ve seen heightened focus on privacy and security. Vector releases include all the capabilities required to support a GDPR-compliant deployment, and recent additions ease the administration and development of secure solutions. For example, data masking ensures only authorized users can see the underlying data, while others may see only a masked value.
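The masking behavior described above can be sketched as follows. This is an illustrative Python example of the concept, not Vector’s SQL syntax or API; the function names and the SSN format are assumptions for the sake of the example.

```python
# Illustrative sketch of column-level data masking: authorized users see the
# stored value, unauthorized users see only a masked form.

def mask_ssn(value: str) -> str:
    """Replace all but the last four digits, e.g. '123-45-6789' -> 'XXX-XX-6789'."""
    return "XXX-XX-" + value[-4:]

def read_column(value: str, user_is_authorized: bool) -> str:
    return value if user_is_authorized else mask_ssn(value)

assert read_column("123-45-6789", True) == "123-45-6789"
assert read_column("123-45-6789", False) == "XXX-XX-6789"
```

In a database that supports masking natively, this policy is declared once on the column and enforced for every query, rather than coded into each application.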
Vector offers a wide range of deployment options, running on industry standard servers under Linux or Windows, and also supporting different Hadoop distributions to scale out on clusters or cloud infrastructure. Vector supports a wide range of storage options as well, reducing any technology lock-ins for your operational data warehouse.