Blog | Actian Life | 2 min read

Austin’s First Salsa and Guacamole Contest

Actian Culture

I am so thrilled to tell you all about our first social event, which put a little “spice” into everyone’s work week!

We sponsored a best salsa and guacamole contest. We asked everyone to bring in their favorite salsa or guacamole recipe and let their colleagues taste and vote for the best one in each category. There were several entries, and everyone got a chance to place their vote anonymously.

We all had multiple “taste testings,” just to make sure our votes were correct.

We decorated our café area and blew up balloons in cactus and avocado shapes; red balloons, mini maracas, and a table runner completed the look.

The winner for best salsa went to Jay Clark and the winner of best guacamole went to Melanie Richards. They each won an Amazon gift card.

Everyone had a good time, sampling and resampling, talking to each other and enjoying the event. Sometimes in life, it’s the little things that bring a smile to someone’s face.

The biggest takeaway we received from this event: when is our next event?

Thank you for reading about one of the events that make the Actian Culture phenomenal!


Managing and analyzing heterogeneous data is a challenge for most companies, one that the oncoming wave of datasets generated by edge computing has only exacerbated. This challenge stems from a rather large “data-type mismatch,” as well as from how and where data has been incorporated into applications and business processes. How did we arrive here?

At one time, data was largely transactional and heavily structured, handled in-line by Online Transaction Processing (OLTP) and Enterprise Resource Planning (ERP) systems. Relational Database Management Systems (RDBMS) served the needs of these systems and eventually evolved into data warehouses from vendors such as Teradata, IBM, SAP, and Oracle, storing historical data and supporting Online Analytical Processing (OLAP) for its analysis.

As most paper-based manual processes moved to digital records management, content management systems emerged to manage all the unstructured documents produced by knowledge workers or generated by the expanded functionality within ERP and personal computing systems. These systems contain semi-structured and unstructured document data, stored in formats such as eXtensible Markup Language (XML) and JavaScript Object Notation (JSON).
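
To make the contrast concrete, here is a minimal sketch in Python of the same business record in both shapes: a flat relational row next to a nested JSON document of the kind a content management system might hold. All field names here are invented for illustration.

    import json

    # A fully structured record, as an RDBMS row (column -> value).
    relational_row = {"invoice_id": 1001, "customer": "Acme", "total": 249.00}

    # The same business object as a semi-structured JSON document:
    # nested fields and variable-length lists do not map cleanly to columns.
    document = {
        "invoice_id": 1001,
        "customer": {"name": "Acme", "region": "US-West"},
        "lines": [
            {"sku": "A-100", "qty": 2, "price": 99.50},
            {"sku": "B-200", "qty": 1, "price": 50.00},
        ],
    }

    print(json.dumps(document, indent=2))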

In parallel, and more so during the last few years with the Internet of Things (IoT) revolution, a third wave of data digitization is upon us, operating at the edge in sensors, video, and other IoT devices. These devices generate the entire range of structured and unstructured data, but with two-thirds of it in a time-series format. Neither of these newer kinds of dataset lends itself to the RDBMS systems that underpin data warehouses, due to how the data is processed and analyzed, the data formats used, and the mushrooming dataset sizes.
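
As an illustration, a time-series dataset is essentially an append-only sequence of timestamped readings, queried by time window rather than updated row by row. A small sketch in Python, with invented sensor names and values:

    from datetime import datetime, timedelta
    from statistics import mean

    # Illustrative time-series readings: (timestamp, sensor_id, value).
    readings = [
        (datetime(2020, 1, 1, 0, 0, s), "temp-01", 20.0 + s * 0.1)
        for s in range(0, 60, 10)
    ]

    # Time-series workloads are append-heavy and queried by time window,
    # e.g., the average reading over the most recent 30 seconds.
    cutoff = datetime(2020, 1, 1, 0, 0, 59) - timedelta(seconds=30)
    window = [value for (ts, _, value) in readings if ts >= cutoff]
    print(f"30-second average: {mean(window):.2f}")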

Consequently, separate document store databases, such as MongoDB and Couchbase, as well as several time-series databases, including InfluxDB and a multitude of bespoke historians, emerged to handle these very distinct datasets. Each exposes its own Application Programming Interface (API); they are lumped together as NoSQL, as in everything that is not the Structured Query Language (SQL).
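
The API split is easy to see side by side. Here is a minimal sketch, assuming a local MongoDB instance and the pymongo driver; the database, collection, and field names are invented:

    from pymongo import MongoClient

    # Document stores expose their own query APIs instead of SQL.
    client = MongoClient("mongodb://localhost:27017")
    events = client["security"]["events"]

    # MongoDB-style query: filter documents by field match.
    for doc in events.find({"severity": "high"}):
        print(doc["_id"], doc.get("source"))

    # The equivalent intent in SQL would be something like:
    #   SELECT id, source FROM events WHERE severity = 'high';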

The aftermath of these three waves of data types and database structures is that data architects must now either implement separate databases for each type of data and use case, or try to merge and aggregate all of the different data types into a single database. Until recently, the only significant enterprise-wide aggregation point for multiple databases and data types was the traditional data warehouse. The legacy data warehouse, however, is lagging as an aggregation point for two reasons.

First, many are built on inflexible architectures: they are limited in their ability to manage JSON and time-series data, and expanding them to administer larger datasets or the complexity of modern analytics, such as artificial intelligence (AI) and machine learning (ML), is costly. Second, sending all the data to a single, centralized location on-premises can be expensive and hinders decision-making at the point of action at the edge of the network.

In the era of edge computing, with the majority of data now created at the edge rather than in the data center or a virtualized image in the cloud, specialized applications and platforms play an essential role in business process enablement. Just as each business process is unique, the data requirements of the technology supporting those processes are also unique. While it may seem that adopting best-of-breed database technology for each workload, document store versus time-series versus traditional, fully structured transactional data, would remove constraints on the use of technology within a business, you should be very careful before you go that route.

In general, every additional API, underlying database architecture, supporting file format, and management or monitoring system, along with every use-case-driven switch between them, increases the complexity of your enterprise data architecture. This is particularly the case if you offer or implement multiple products, technologies, and integration methodologies across this medley of databases. The complexity tends to have a domino effect on the support lifecycle of any software leveraging these databases, and even on the procurement of the databases themselves.

Provided you can find a single database that delivers comparable performance, addresses all the data types, and supports SQL as well as direct manipulation of the data through a NoSQL API, it makes far more sense to merge and aggregate heterogeneous data into a common database structure, particularly in edge computing use cases. For example, if you are looking at video surveillance data, sensor networks, and logs for security, then combinations of these and other disparate datasets must be aggregated for cross-functional analytics.
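
As a toy illustration of cross-functional analytics once disparate datasets share one queryable home, the sketch below correlates motion events with badge swipes. It uses SQLite purely as a stand-in; the tables, columns, and values are all invented:

    import sqlite3

    # Stand-in tables for two disparate datasets: sensor readings and access logs.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE sensor_readings (ts TEXT, camera_id TEXT, motion INTEGER);
        CREATE TABLE access_logs (ts TEXT, camera_id TEXT, badge_id TEXT);
    """)
    db.executemany("INSERT INTO sensor_readings VALUES (?, ?, ?)",
                   [("2020-01-01T02:13", "cam-7", 1)])
    db.executemany("INSERT INTO access_logs VALUES (?, ?, ?)",
                   [("2020-01-01T02:13", "cam-7", "badge-42")])

    # Cross-functional analytics: which motion events coincide with badge swipes?
    rows = db.execute("""
        SELECT s.ts, s.camera_id, a.badge_id
        FROM sensor_readings s
        JOIN access_logs a ON a.camera_id = s.camera_id AND a.ts = s.ts
        WHERE s.motion = 1
    """).fetchall()
    print(rows)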

If you need to analyze data, and create reports and dashboards, based on data of different types sitting in different source systems, then you will need some capability for normalizing the data so it can be queried, either onsite or remotely, as a single dataset.
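
A minimal sketch of that normalization step, with all record shapes and field names invented: two source systems deliver the same customer in different forms, and a small mapping layer produces one common schema to query.

    # Records from two hypothetical source systems with different shapes.
    crm_record = {"CustomerName": "Acme", "Phone": "555-0100"}
    erp_record = {"name": "Acme Corp", "contact": {"phone": "555-0100"}}

    def normalize_crm(rec):
        return {"name": rec["CustomerName"], "phone": rec["Phone"]}

    def normalize_erp(rec):
        return {"name": rec["name"], "phone": rec["contact"]["phone"]}

    # One common schema means one query path for reports and dashboards.
    unified = [normalize_crm(crm_record), normalize_erp(erp_record)]
    print(unified)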

The requirements have changed during the last 30 years, and Actian has built a new modular database that is purpose-built for edge computing technologies and use cases, capable of handling all of these datasets through a single NoSQL API while providing full SQL compliance. In both SQL and NoSQL functions, our third-party benchmark results show far better performance than any of the major document store, time-series, or traditional SQL databases capable of handling mobile and IoT workloads.


Blog | Data Integration | 3 min read

What You Should Know Before Moving to a Multi-Cloud Environment

Multi-Cloud Environment

Most IT leaders will agree: the cloud is the right place to host and operate many of their applications and systems. What isn’t quite as clear for these leaders is “which cloud” they should use.

The answer is: there is no right answer. Each company (and each application) has its own unique set of technical and operating needs, and those needs drive the decision about whether an application should run in a public cloud, in the company’s private cloud, or in a third party’s cloud environment (e.g., hosted and consumed as Software as a Service (SaaS)). This is a decision that should be made on a case-by-case basis, informed by a clearly documented and enforced enterprise cloud policy.

Most organizations will likely choose and operate a hybrid environment, with capabilities stratified across multiple clouds. Sensitive-data apps may sit on a private cloud, customer-facing systems that require geographic scalability may run on public clouds, and SaaS components will likely live on someone else’s cloud. In this configuration, your focus should be on how you manage and connect the data in these systems. You will likely have business processes and data-driven analytics that require data to be integrated from different systems for your company to operate effectively.

Establishing and maintaining data connections in a multi-cloud environment is often more complex than managing connections in a single cloud. Although it may be difficult, there are three reasons why data integration across multiple clouds is essential:

  1. Avoiding latency in business processes.
  2. Enabling data aggregation for analytics.
  3. Replicating data to manage business continuity risk.

Cloud environments provide a tremendous amount of cost leverage and scalability potential, but they require more robust data-connectivity capabilities than your IT department has likely needed in legacy environments. In multi-cloud setups, it is often necessary to shift your integration pattern from point-to-point integrations between applications to a centralized data integration hub architecture, leveraging a platform such as Actian DataConnect to broker the connections to all your systems, as the sketch below illustrates.
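
The following is a generic, hypothetical sketch of the hub pattern in Python; it does not reflect the DataConnect API. The point is structural: each system registers one connection with the hub, so adding a system adds one connection rather than one per existing system.

    # Hypothetical hub pattern: endpoints never wire to each other directly.
    class IntegrationHub:
        def __init__(self):
            self.handlers = {}

        def register(self, target, handler):
            # Each system registers a single connection with the hub.
            self.handlers[target] = handler

        def route(self, target, record):
            # The hub brokers every transfer between systems.
            self.handlers[target](record)

    hub = IntegrationHub()
    hub.register("private-cloud-crm", lambda rec: print("to CRM:", rec))
    hub.register("public-cloud-analytics", lambda rec: print("to analytics:", rec))

    hub.route("public-cloud-analytics", {"order_id": 7, "total": 99.0})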

In addition to making manageability easier, DataConnect can help you monitor the flow of data between different cloud environments, so you can make more informed decisions about how to improve end-to-end system performance and reduce infrastructure operating costs. Most companies make their initial cloud-selection decisions based on functionality criteria; however, cost savings is often the driver for migrating systems from one cloud to another. Since cloud infrastructure is charged on a consumption model, understanding how your cloud applications are being used can help you identify cost-savings opportunities more quickly.
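
As a simple illustration of how flow monitoring feeds those cost decisions, the sketch below tallies inter-cloud transfer volumes against an assumed per-gigabyte egress rate. Every number and route name here is invented:

    from collections import defaultdict

    # Hypothetical flow records: (source_cloud, destination_cloud, gigabytes moved).
    flows = [
        ("public-a", "private", 120.0),
        ("public-a", "public-b", 40.0),
        ("public-a", "private", 75.0),
    ]
    EGRESS_RATE_PER_GB = 0.09  # assumed rate; actual pricing varies by provider

    totals = defaultdict(float)
    for src, dst, gb in flows:
        totals[(src, dst)] += gb

    # The heaviest routes are the first candidates for co-locating systems.
    for route, gb in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(route, f"{gb:.0f} GB, est. ${gb * EGRESS_RATE_PER_GB:.2f}")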

Whether you are designing your cloud strategy in preparation for migrating capabilities to the cloud or seeking to optimize your current cloud utilization, selecting the right set of tools to help you manage your data connections is an essential step toward harvesting maximum value from your cloud investments. Actian is an expert in hybrid data management, with modern solutions to help you manage data throughout the lifecycle of your IT systems. To learn more, visit DataConnect.