A transactional database is engineered to support high-speed business interactions. This database management system is optimized to read and write individual rows of data exceptionally fast while ensuring high data integrity. Transactional databases automatically roll back a transaction or operation that can’t be fully completed. They are application-specific and tuned to support high volumes of updates from large user populations.
How Are Transactional Databases Different from Data Warehouses?
Transactional databases predate data warehouses and were designed to support high-volume workloads such as banking transactions and airline reservations. By contrast, data warehouses evolved from the need to support decision making through analytic queries. A transactional database uses the minimum number of indexes required by the specific interactions it supports.
Data warehouses use denormalized schemas, such as star schemas, that provide the flexibility to answer a broad range of queries. Additional indexes and views can be added to tune the data warehouse for different queries. The Transaction Processing Performance Council (TPC.org) has created specific benchmarks for comparing transactional databases using the TPC-C and TPC-E workloads. These benchmarks measure transaction throughput for large populations of concurrent users.
Decision-support benchmarks such as the TPC-H, TPC-DS, and TPC-DI workloads run more complex queries against much larger data sets.
Brief History of Transactional Databases
The first generation of transactional databases used a hierarchical schema that applications were hard coded to navigate. IBM’s Information Management System (IMS) database is still used today for ATM and airline reservation systems written in Assembler and PL/I.
IBM’s Virtual Storage Access Method (VSAM) introduced the first widely used indexed file system with its key-sequenced data sets. The same key-indexed access method is provided by the Actian Zen database today.
The next generation of transactional databases used linked lists of records and were known as CODASYL or network database designs. Network databases were popular in the 1970s and early 1980s. The classic network database is the Integrated Database Management System (IDMS), initially developed at B. F. Goodrich in the early 1970s and once owned by Cullinet and CA Technologies. IDMS is maintained today by Actian engineers for Broadcom.
Relational databases emerged commercially in the 1980s, based on the seminal 1970 paper by Edgar F. Codd of IBM’s San Jose Research Laboratory. Oracle was the first to commercialize the relational database, closely followed by Ingres, Sybase, and Informix. Actian markets Ingres today. Initially, relational databases were too slow for transactional workloads, but the model was well suited to decision-support applications. As computer hardware became more powerful, relational databases evolved to become pervasive for transaction processing applications.
Essential Elements of a Transactional Database
Transactional applications and databases are engineered for efficiently creating new records and updating existing data. In contrast, decision-support databases are optimized for fast query performance. The following core features of a database management system enable it to support transactional applications effectively:
Transaction Logging for Data Integrity

Transactional databases must provide transaction integrity: all the changes a committed transaction makes are written to a non-volatile form of storage. A journal of all changes is maintained in a transaction log file, or a set of mirrored log files, to protect against device failure.
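The commit-then-survive behavior described above can be sketched with Python’s built-in sqlite3 module. SQLite stands in here for the full transactional engines discussed in this article; the file path and table name are illustrative.

```python
import os
import sqlite3
import tempfile

# Illustrative database file on non-volatile storage.
path = os.path.join(tempfile.mkdtemp(), "ledger.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()   # committed changes are flushed to durable storage
conn.close()

# Reopening the database file shows the committed row survived the session.
conn = sqlite3.connect(path)
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100
conn.close()
```

Because the commit was durable, a crash or restart between the two sessions would not lose the row.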
Checkpointing to Maintain Database Integrity
A transactional database must also maintain its own integrity. It does this by setting regular sync points called checkpoints. A checkpoint record is a point of consistency that the database management system can roll back to on restart before rolling forward through the log, preserving any transactions committed since the checkpoint.
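The checkpoint mechanism can be observed in miniature in SQLite’s write-ahead-log mode, again via Python’s sqlite3 module; the file path and table are illustrative, and SQLite’s checkpointing is a simplified stand-in for the full mechanism described above.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")  # committed changes go to a write-ahead log first
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# A checkpoint copies committed changes from the log into the main database
# file, giving recovery a recent point of consistency to roll forward from.
busy, log_frames, moved_frames = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)"
).fetchone()
print(busy)  # 0 means the checkpoint ran to completion
conn.close()
```

Production engines run checkpoints automatically on a schedule; forcing one by hand simply makes the sync point visible.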
Transaction Rollback and Commit Function
At the transaction level, every change to the database, including its tables and indexes, is either committed in full or rolled back. Transactions left open by abandoned sessions are implicitly rolled back.
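The all-or-nothing behavior can be sketched with a classic funds-transfer example using Python’s sqlite3 module; the account table, balances, and CHECK constraint are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
    " balance INTEGER CHECK (balance >= 0))"
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

def transfer(src, dst, amount):
    """Move funds between accounts; undo everything if any step fails."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()
    except sqlite3.IntegrityError:
        conn.rollback()  # the half-finished transfer is undone

transfer(1, 2, 500)  # overdraws account 1, violating the CHECK constraint
rows = conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
print(rows)  # [(100,), (50,)] -- both balances unchanged
```

A successful transfer would commit both updates together; here the constraint violation leaves the database exactly as it was.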
Database Archiving and Recovery
A database can contain customer data that must be protected from catastrophic failures and disasters. To do this, database backups and log files provide incremental updates that are archived before they are overwritten. The archives are moved offsite: physical media such as tape are shipped to another location, and in cloud deployments, copies are stored in a different region. Transactional databases support online archiving to minimize downtime and often maintain a hot standby database at the remote archive location to minimize the mean time to recovery.
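Online archiving, where a backup runs while the database stays available, can be sketched with sqlite3's built-in backup API; the in-memory destination stands in for a file in another location or cloud region.

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

# In practice the destination would be a file shipped offsite or stored
# in a different cloud region; ":memory:" is illustrative.
dest = sqlite3.connect(":memory:")
src.backup(dest)  # online backup: the source remains usable while copying

archived = dest.execute("SELECT x FROM t").fetchone()[0]
print(archived)  # 42
```

Restoring from such an archive, plus rolling forward through archived log files, is what brings a standby database up to date.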
Transaction Isolation and Read Consistency
Database management systems such as Actian Ingres let software engineers control transaction locking and read consistency, including how records being changed are locked. By default, every transaction sees a consistent view of the data as of the moment it began, and other transactions do not see its changes until they are committed.
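The "readers do not see uncommitted changes" rule can be demonstrated with two sqlite3 connections to the same illustrative database file; SQLite's default locking is a simplified stand-in for the configurable isolation described above.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "iso.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE orders (id INTEGER)")
writer.commit()

reader = sqlite3.connect(path)

writer.execute("INSERT INTO orders VALUES (1)")  # uncommitted change
before = reader.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(before)  # 0 -- the reader cannot see the uncommitted row

writer.commit()
after = reader.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(after)   # 1 -- visible once the writer commits
```

Stricter or looser isolation levels change exactly when (and whether) the reader would block instead of reading the older snapshot.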
Transactional Database Use Case
HSS Hire is a leading UK provider of construction equipment for hire and rental. A long-time Actian Ingres (Actian X) customer, it has built its entire back-office transactional system on the Ingres database. The database supports mission-critical operational applications, including tool and equipment inventory management, accounts management, invoicing, product availability, and delivery. For decision support, the company uses the Actian Vector database.
Actian and the Transactional Database
Actian was spun off from Computer Associates as a transactional database business. Actian developed Vector, a high-speed columnar database, for lightning-fast query processing using Ingres SQL and developer APIs. Actian then created a hybrid transaction and decision-support product, Actian X, that integrates the two database engines so users can create objects in both data stores.