What is Concurrency?
The term concurrency is used in computing to describe the ability to run multiple software tasks in parallel, typically on multiprocessor or multicore systems, which reduces processing time compared to a single-threaded approach.
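The idea can be sketched in a few lines of Python. This is a minimal illustration, not anything from a specific product: the same CPU-bound workload is run one task at a time and then spread across a pool of worker processes, which lets a multicore machine work on several tasks at once.

```python
# Minimal sketch: the same CPU-bound workload run sequentially and then
# concurrently on a pool of worker processes (one per available core).
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """CPU-bound task: count the primes below `limit` (naive trial division)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [20_000, 20_000, 20_000, 20_000]

    # Sequential: one task at a time on a single core.
    sequential = [count_primes(n) for n in limits]

    # Concurrent: the pool distributes the tasks across available cores.
    with ProcessPoolExecutor() as pool:
        concurrent = list(pool.map(count_primes, limits))

    # Same results either way; the concurrent run simply finishes sooner
    # on a multicore machine.
    assert sequential == concurrent
```

On a machine with four or more cores, the concurrent run completes in roughly a quarter of the sequential time, since the four independent tasks proceed in parallel.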
Why is Concurrency Important?
Without concurrency capabilities, applications cannot scale vertically or horizontally, and they would be slow: even an efficiently written server could handle only a small number of simultaneous users. Even edge systems with small footprints are designed to exploit multiprocessor hardware, as single-core systems have become a rarity.
How Concurrency is Applied
Below are examples of how concurrency has evolved to support modern applications.
The first generation of computers, exemplified by the famous IBM System/360, were uniprocessor systems. Software programs were organized into queues for batch processing: a program might have been a deck of punch cards, and the output a hardcopy printout. In the 1980s, concurrency arrived with the first multiprocessor mainframes, the dual-processor IBM 3081 and the 4-way 3084. New job entry utilities allowed these systems to process multiple streams of software. With each CPU costing around $1 million, these systems were used only by Fortune 500 companies. Early online multi-user systems, such as IBM CICS, were tied to a single processor, severely limiting the number of users and transaction rates.
Tandem Non-Stop Computers
Banking and stock market systems used Tandem and Stratus systems to protect transactions from system failures. These systems provided concurrency using redundant hardware that executed each transaction in parallel: if one system failed, the remaining functional server became the primary execution engine.
Concurrency Using Symmetric Multiprocessing
In the 1990s, symmetric multiprocessing (SMP) systems from Sequent, HP and IBM became popular because they could scale applications vertically in a single server: more users could be supported by simply upgrading the hardware. Today, CPUs commonly carry 4, 8 or more cores on a single chip.
Concurrency Using Clusters
Applications can also be written to scale across multiple physical servers in the same data center or rack, connected by a high-speed interconnect. These tightly coupled systems are known as clusters. Early clusters included the IBM Parallel Sysplex, VAX Clusters and Sun Clusters. With clusters, applications were no longer limited to scaling up in a single server; they could also scale out across the servers in the cluster.
Concurrency Using Massively Parallel Processor Systems
Early clusters were limited to around eight nodes. The IBM SP2 MPP could support 64-way clusters in a single box, and today's supercomputers scale to thousands of nodes. For example, applications such as weather simulations use concurrency to run near-real-time models of global weather.
The open-source Hadoop project commoditized MPP using inexpensive Intel-based servers and free software. Actian Vector customers such as Expandium use a 12-node Hadoop cluster to cost-effectively analyze more than a billion call records per day by exploiting both the vertical and horizontal scalability Vector provides.
Supporting More Users
Concurrency is also often used to describe how many users are connected to an application, such as a database system. As new connection requests arrive, they are assigned to the next processor using a round-robin or random allocation algorithm. Workload manager software ensures even utilization across all the available CPU resources.
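The round-robin allocation described above can be sketched in a few lines. The `Dispatcher` class and its method names are illustrative inventions, not a real workload manager API: each incoming connection simply gets the next processor in a repeating cycle, which keeps utilization even.

```python
# Sketch of round-robin connection assignment (illustrative names, not a
# real workload-manager API): each new connection is handed to the next
# processor in a repeating cycle.
from itertools import cycle

class Dispatcher:
    """Assigns each incoming connection to the next processor in turn."""

    def __init__(self, num_processors):
        self._processors = cycle(range(num_processors))

    def assign(self, connection_id):
        # Round-robin: 0, 1, ..., n-1, then back to 0.
        return next(self._processors)

# Eight connections across four processors land evenly, two per processor.
dispatcher = Dispatcher(4)
assignments = [dispatcher.assign(c) for c in range(8)]
print(assignments)  # [0, 1, 2, 3, 0, 1, 2, 3]
```

A random allocation policy would replace the cycle with a random choice; it averages out to even utilization over many connections but gives no per-request guarantee, which is why round-robin is the more common default.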
Concurrency in the Cloud
Public cloud platforms add a new dimension to concurrency thanks to massive elasticity. Cloud applications can scale to millions of concurrent users, and serverless computing offers the potential for virtually unlimited concurrency.
Actian and Concurrency
The Actian Data Platform uses a highly parallel query capability provided by its built-in vector processing database engine. Actian Vector can take a large, complex SQL query and parallelize it to take advantage of all the CPU cores in a single system, and across a cluster, to return the result set faster than any other database. This is possible partly due to Vector's use of Intel SIMD (single instruction, multiple data) technology, which allows a single instruction to process multiple data values held in a CPU core's vector registers at once, rather than one value per instruction.
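The SIMD idea can be modeled conceptually. The sketch below is pure Python and only illustrates the principle, not Vector's actual engine: real SIMD happens inside a core's vector registers in hardware, where one instruction operates on a whole block of lanes at once instead of a single value.

```python
# Conceptual model of SIMD (single instruction, multiple data): one
# operation applied to a block of values per step, versus one value at a
# time. Illustrative only -- real SIMD runs in hardware vector registers.

LANES = 4  # a 4-wide vector register holds four values per instruction

def scalar_add(xs, ys):
    """Scalar style: one addition per loop iteration."""
    return [x + y for x, y in zip(xs, ys)]

def simd_add(xs, ys):
    """SIMD style: add LANES values per step, mimicking one vector instruction.
    Assumes len(xs) is a multiple of LANES, as a real engine would pad."""
    out = []
    for i in range(0, len(xs), LANES):
        # One 'instruction' covers a whole block of lanes at once.
        out.extend(xs[i + j] + ys[i + j] for j in range(LANES))
    return out

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]
assert scalar_add(a, b) == simd_add(a, b) == [11, 22, 33, 44, 55, 66, 77, 88]
```

The scalar version takes eight "instructions" for eight additions; the SIMD version takes two. That per-instruction multiplier, applied across billions of column values, is what makes vectorized query execution fast.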
The Actian Data Platform runs on-premises and in the cloud. The built-in data integration technology connects to hundreds of data sources. External data connectors support SQL access to Spark formats.
Discover proven performance and unbeatable value with a free trial.