This is the final installment in the series of blogs I have written on the continued use of flat files and why they are no longer a viable choice going forward. The first installment focused on flat files and why embedded software application developers readily adopted them. The second installment discussed why embedded developers are reluctant to use databases. The third installment looked at why developers cling to flat file systems.

For this final installment, I realized that the argument for migrating off flat files probably needs to be made in a more prescriptive fashion. So, together with our head of engineering, Desmond Tan, I've developed a list of reasons grounded not in what you're currently using or what you should use in the abstract, but in the actual requirements that would justify the switch away from flat files. This list reflects where we see use cases evolving for embedded edge data management in the mobile, IoT, and complex intelligent equipment spaces. It also reflects what our customers are telling us in private meetings about their challenges and their business and product requirements.

Based on this input, here are the top eight requirements for modern edge data management:

1. Local Data Persistence and its Changing Use

Traditionally, data stored locally in flat files served as a cache for immediate operations on data exclusive to the device in question. That data was evaluated without a long-standing yet continually refreshed baseline. Today, however, devices need to combine data from multiple devices and data sources and to use historical data as a pattern baseline for everything from simple signal-to-noise filtering to machine learning inference. The data management needed to feed these local processing and analytics operations outstrips the bare-bones functionality provided by file systems.

2. Support for Multiple OS Environments With a Single Storage Format

Across product designs, operational technology portfolios, and Enterprise IT support of lines of business, a range of systems at the Edge and back into the enterprise must share data across OSes from Android/iOS to Windows/Linux to cloud-virtualized environments.  A single storage format would significantly reduce integration time and cost as well as improve security.

3. Data Management Support for All Roles Involved With Data Processing and Analytics

Different roles will need to access the edge data management platform both remotely and locally, across this range of underlying platforms, through different access mechanisms, including the CLI, programming languages, and scripting languages. That topic probably deserves a longer discussion in a separate blog.

4. Handle Big Data at the Edge

Pop quiz: what are the four Vs of Big Data? Yes, you could all name them if woken from a deep sleep (for those of you who refuse to leave that wonderful dream: Volume, Variety, Velocity, and Value). Just as Hadoop was a first step at the enterprise level toward a common reservoir for Big Data, there needs to be an equivalent at the Edge. And, as with the last requirement, a full blog is needed to explore how this separates you further from flat files. For now, the key tangible takeaway is that you need to be able to support varied types of data, including JSON, BLOBs, and traditional structured data, in a single database; a rough illustration follows below.
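To make that concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table layout and field names are hypothetical, chosen only to show structured columns, a JSON document, and a binary payload living in one store; any embedded edge database will have its own idioms for the same idea.

```python
import json
import sqlite3

# Hypothetical schema mixing structured columns, a JSON document,
# and a raw BLOB (e.g., a sensor snapshot or an image frame).
conn = sqlite3.connect("edge_data.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        device_id   TEXT,
        ts          REAL,
        temperature REAL,   -- traditional structured value
        metadata    TEXT,   -- JSON document stored as text
        payload     BLOB    -- opaque binary payload
    )
""")

metadata = {"firmware": "1.4.2", "channels": ["pressure", "temp"]}
conn.execute(
    "INSERT INTO readings VALUES (?, ?, ?, ?, ?)",
    ("pump-07", 1699999999.0, 71.3, json.dumps(metadata), b"\x00\x01\x02"),
)
conn.commit()

# Structured and semi-structured data can be queried together.
device, temp, meta = conn.execute(
    "SELECT device_id, temperature, metadata FROM readings"
).fetchone()
print(device, temp, json.loads(meta)["channels"])
```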

5. Share Data Across OT, Cloud and IT Traditional Environments

Data sharing must be able to handle device to device and device to gateway scenarios in OT environments as well as Edge to Cloud and Cloud to Data Center for remote and branch environments at the Edge to cloud interface.  In other words, you need a single platform for Client, Peer-to-Peer, Client-Server, and Internet/Intranet architectures.

6. Operate Stand-Alone During Periods of Disconnectivity

While the Edge may be moving overall into a state of hyper-connectivity, this should not be mistaken for removing or reducing the need to process and analyze data locally. In many operational technology applications, a cloud-only approach is unworkable. For example, an autonomous vehicle processing video and lidar signals to decide where to steer the car, and at what speed, acceleration, or deceleration, cannot hand those decisions off to the cloud; latency alone would ensure accidents. In other cases, convenience or ease of use may be the reason you choose to handle operations locally. Say you have thousands of albums in your local iTunes collection and you want to search for a song by its lyrics on your fourteen-hour flight from San Francisco to New Delhi.

7. Handle High-Speed, Multi-Channel Data Collection

Many of the core signals collected won't change as we move to a more connected and automated state; pressure, volume, and temperature are three great examples. However, as referenced above, video streams for everything from facial recognition to machine vision are becoming more prevalent, at far higher resolutions and frame rates (think 4K UHD at 120 Hz). Also referenced above are lidar signals. At one time, I was an engineer building laser radar systems. Even in those dark ages, I could easily collect 400 MB of data per day while making decisions in milliseconds on each set of lidar signals collected. I could go on with example after example. The point is that modern edge data processing and analysis can run into multiple terabytes per day, where the data is processed immediately but also retrieved, in part or in whole, for additional processing later. Scenarios like these will be commonplace; a rough back-of-envelope calculation follows below.
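For a rough sense of scale, here is a back-of-envelope calculation for a single uncompressed 4K UHD stream at 120 Hz. Real pipelines compress heavily, so treat these figures as an upper bound rather than measured throughput.

```python
# Back-of-envelope: raw data rate of one 4K UHD video stream at 120 Hz.
width, height = 3840, 2160      # 4K UHD resolution
bytes_per_pixel = 3             # 24-bit RGB, uncompressed
fps = 120

bytes_per_second = width * height * bytes_per_pixel * fps
terabytes_per_day = bytes_per_second * 86_400 / 1e12

print(f"{bytes_per_second / 1e9:.2f} GB/s raw")   # ~2.99 GB/s
print(f"{terabytes_per_day:.0f} TB/day raw")      # ~258 TB/day
# Even at 100:1 compression, that is still multiple terabytes per day.
```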

8. Leveraging Complementary Off-the-Shelf Ecosystem Components

It goes without saying that those using flat files are probably do-it-yourself types on other parts of their solutions as well. Moving off flat files improves productivity by opening up plug-and-play options for advanced analytics, reporting, and visualization tools and platforms.

In summary, if you are seeing one or more of these requirements, you really should consider migrating from flat files to a single, scalable, secure architecture that can handle multi-platform development and deployment and gives you the horsepower to support a myriad of advanced data processing and analytics use cases. Flat files just won't do the trick any longer, my friend.


Robotic process automation (RPA) is one of the fastest-growing trends in the IT industry. This next generation of workflow management capabilities is poised to be an essential enabler for companies seeking to optimize their digitally transformed business processes for peak efficiency. For RPA vendors, one of the critical capabilities that must be built into your solution is integration with a wide variety of third-party systems and data sources.

Integration is a Must-Have Feature for RPA Solutions

When customers implement RPA solutions, they are often doing so alongside the deployment of IoT devices, mobile apps, SaaS solutions, embedded sensors in machinery, and other connected data sources. The purpose of RPA is to connect these independent technology components seamlessly and to orchestrate business and operational workflows to achieve the desired outcome. The possibilities of what can be automated using RPA solutions should be limitless. In practice, however, most RPA solutions are limited by the number and types of endpoints to which they can connect.

The diversity of data sources and components and the lack of universal standards in the industry make integration and connection management a significant challenge in the RPA space. Those vendors who master the integration challenge find themselves in a much better competitive position than those who don’t.

Differentiator or Minimum Viable Product?

For many market segments in IT, integration capabilities are a differentiator: customers see the value, but you can still sell your offering without them. That isn't the case for RPA. Integration is an essential capability and part of the minimum viable product (MVP) for taking your RPA offering to market. If your solution can't integrate with other systems, or can't manage those integrations, your offering will not be well received.

Basic integration capabilities may be required features for all RPA offerings, but more robust capabilities can genuinely differentiate your offerings in the eyes of customers. There are three key things that customers are looking for when evaluating integration capabilities that RPA vendors should focus on:

  1. The ability to connect to a diverse set of endpoints.
  2. The ease of establishing new connections and maintaining them over the long term.
  3. Performance in processing data.

The Buy vs. Build Decision

For RPA vendors, the need for integration capabilities is clear – without them, customers won't buy your solutions. The decision that product development teams need to make is how to acquire the capabilities that customers demand – build them yourself or buy them from someone who is an expert in the space. Because of the complexity of the integration challenge, building an integration engine and enhancing it to support new endpoints can be a daunting task. Many smaller RPA vendors are turning to open-source technology for integrations. This approach comes with significant risks: the long-term viability of the open-source solution is unknown, and support or maintenance for it may not be available down the road.

Larger vendors are increasingly choosing to embed third-party integration capabilities into their solutions, not because they can't build them themselves, but because it helps them get products to market faster. Why spend months building something when you can use a pre-built capability and have it fully integrated in a few weeks?

Ultimately, RPA vendors must balance engineering cost, time to market, and product differentiation when deciding which integration capabilities to include in their offerings and how to acquire them. In most situations, leveraging an embedded integration platform like DataConnect Inside from Actian is a better and more cost-effective option than trying to develop something yourself.



Layman’s Guide to Machine Learning and Customer Data Privacy


It feels like only yesterday that “machine learning” and “artificial intelligence” were equated with science fiction by most regular folks outside of tech. As a movie-going culture, we have a long history of associating these concepts with self-aware robots or rogue computer programs that escape the laboratories where they were created and threaten all humanity. But in 2020, ML and AI have been considerably demystified. And yet, even if it now seems less likely that the Singularity will be triggered by the reckless ambition of some tech startup, consumers have new reasons to be concerned.

Yes, ML and AI are going mainstream, and it’s not just disruption-minded startups that are leveraging these technologies. Enterprises in a wide range of established industries are finding solid business reasons to fund these advanced projects and bring them out of the laboratory into production, with lots of exciting implications for their customers.

One implication, which is the subject of this article, is the creation of a new class of personal data privacy vulnerabilities. And the majority of businesses that want to leverage ML are going to have to learn to protect their customers from these new vulnerabilities.

These concerns arise in the first place because the “models” that make ML work have to be trained with data – lots of it. As enterprises seek to create business value from these new ML programs (such as conversational agents, real-time risk and fraud analysis, and predictive healthcare), they are going to train their models with customer data of some sort. In many cases, deeply private customer data.

As we usher in what is certainly a new era in consumer awareness of data privacy rights, combined with the advent of new regulations such as GDPR and CCPA, it is timely to contemplate how ML and consumer data privacy will co-exist. 

No Longer a Hypothetical Problem

Unfortunately, some of the toothpaste has already escaped the tube. A number of recent controversies expose the potential scale of the ML + customer data privacy problem. Google (whose health data-sharing arrangement with Ascension became the subject of scrutiny in November) ditched its plans to publish chest X-ray scans over concerns that they contained personally identifiable information. The Royal Free London NHS Foundation Trust, a division of the UK’s National Health Service based in London, provided Alphabet’s DeepMind with data on 1.6 million patients without their consent. And this past summer, Microsoft quietly removed a data set (MS Celeb) containing more than 10 million images of people after it was revealed that some weren’t aware they had been included.

And it turns out those of us who’ve been getting a creepy feeling whenever we expressed our deepest desires to an AI-based wish-fulfillment engine had good reason to. Apple and Google have been the subject of recent reports revealing the potential misuse of recordings collected to improve artificial agents like Siri and Google Assistant. In April, Bloomberg revealed that Amazon had been using contractors to transcribe and annotate audio recordings pulled from Alexa-powered devices, prompting the company to roll out new user-facing tools that let you delete your cloud-stored data.

Why ML Exacerbates Data Privacy

Within a database, the various data points associated with an individual can be distinguished, from a privacy standpoint, by what class of information they contain. A dataset is made up of data points (specific members of a population) and features (the values of the attributes associated with each person). In the case of medical records, for example, the features might be name, age, gender, state, religion, and disease. The first class of feature is Personally Identifiable Information (PII), which uniquely identifies a person, e.g., their full name or social security number. The second class is Quasi-Identifiers (QI), categories like age or gender that may be attributable to more than one individual. On its own, this information is not sufficient for identification; however, combined with other QIs and external information, it can sometimes be used to re-identify an individual.

Traditionally, removing the column containing sensitive information from a dataset meant that this specific information could not be re-inferred from the dataset itself, but only by combining it with and querying external information. AI, however, can recreate identities even with the identity indicator removed. From a set of job applicant resumes, for example, gender might be removed to protect against gender discrimination during the candidate evaluation process. Although the resumes have been de-identified in that sense, an ML tool may still pick up subtle nuances in language use and infer the candidate’s gender from them. Here, removing the column is not enough to strip out sensitive information securely.
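As a toy illustration of that re-inference risk, the sketch below trains a classifier on a handful of invented resume snippets and then predicts the "removed" attribute for an unseen snippet. The data, labels, and model choice (scikit-learn's bag-of-words features plus logistic regression) are assumptions made purely for illustration, not a claim about any real hiring dataset.

```python
# Toy example (synthetic data): a dropped attribute can sometimes be
# re-inferred from the remaining free text. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the women's soccer team, sorority treasurer",
    "fraternity president, men's rugby club captain",
    "member of the women's chess society, volunteer mentor",
    "men's basketball team, eagle scout",
]
removed_attribute = [1, 0, 1, 0]   # the column we thought we had stripped out

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, removed_attribute)

# The model recovers the "removed" attribute for a new, unseen snippet.
test = vec.transform(["women's debate team captain"])
print(model.predict(test))   # likely [1] on this tiny synthetic sample
```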

AI technologies have not historically been developed with privacy in mind. To reach reliable levels of accuracy, models require large datasets to ‘learn’ from. To shield individual privacy in the context of big data, different anonymization techniques have conventionally been used. The three most relevant are K-anonymity, L-diversity, and T-closeness, of which we will briefly examine the first. In K-anonymity, selected Quasi-Identifiers (e.g., name, religion) of certain individuals are removed or generalized (e.g., replacing a specific age with an age span) so that every combination of identity-revealing characteristics occurs in at least k different rows of the dataset. K-anonymity is a “hiding in the crowd” approach to protecting privacy: because each individual is part of a larger group, any of the records in that group could correspond to a given person. L-diversity and T-closeness are extensions of this concept, which are described in more detail here. These modifications are applied before data is shared or used to train a model, a practice called Privacy-Preserving Data Publishing. However, with the rise of AI, this form of protection is insufficient.
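Here is a minimal sketch of the generalization step behind K-anonymity, using a few synthetic records. The generalization rules (decade age spans, truncated ZIP codes) and the choice of k = 3 are arbitrary illustrative assumptions.

```python
from collections import Counter

# Synthetic records: (age, zip_code, disease). Age and ZIP are the
# quasi-identifiers; disease is the sensitive attribute.
records = [
    (34, "94107", "flu"),      (37, "94109", "asthma"),
    (52, "94107", "diabetes"), (58, "94109", "flu"),
    (33, "94105", "asthma"),   (55, "94105", "diabetes"),
]

def generalize(age, zip_code):
    """Coarsen quasi-identifiers: age -> decade span, ZIP -> 3-digit prefix."""
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", zip_code[:3] + "**")

generalized = [generalize(age, z) + (disease,) for age, z, disease in records]

# K-anonymity check: every quasi-identifier combination occurs >= k times.
k = 3
groups = Counter(row[:2] for row in generalized)
print(groups)
print("k-anonymous:", all(count >= k for count in groups.values()))
```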

Conventional statistical modeling methods could only consider a limited number of variables. But today, because of regularization techniques and the declining cost of cloud-based computation, it has become possible for ML models to consider thousands of variables when making a single prediction. With algorithms that can make inferences from such large and complex datasets, three new conceptual issues arise. First, with the expanded dimensionality of ML training sets, there is now a greater likelihood of sensitive information being included. Second, these powerful new models are more likely to be able to discern that sensitive information (e.g., reconstructing gender from subtle differences in word choice). And third, ensuring comprehensive privacy and anonymity for the vast amounts of data incorporated into complex ML models is itself a major challenge.

Intro to Privacy-Preserving Machine Learning

To address the above challenges, a number of promising techniques are being tested to provide suitable protection of individual data privacy in ML. These include Federated Learning, Differential Privacy, and Homomorphic Encryption. For the most part, all of these are in the preliminary stages of exploration as regards their potential use to protect consumer data privacy in ML at scale, and they remain in the hands of researchers in academia or at the largest technology players. Which of these becomes the standard, and how they will bridge the gap to meet the needs of ML in production, remains to be seen.

Federated Learning

Federated Learning is an example of the more general approach of “bringing the code to the data, instead of the data to the code”, and thus addresses some of the basic problems of privacy, ownership, and physical location of data. It is a collaborative approach that trains ML models on a large body of decentralized data spread across many client devices. The model is trained on the client devices, so there is no need to transfer the user’s data; keeping personal data on the client’s device allows users to retain direct, physical control of it. Holding the data samples on client devices, without exchanging those samples, lets multiple parties develop a common ML model without pooling all the data in one place and creating the increased vulnerability that comes with doing so.
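A minimal sketch of the federated averaging idea follows, assuming a toy linear model and synthetic client data. Production frameworks (TensorFlow Federated, for example) add secure aggregation, weighting by sample counts, and many more rounds, but the essential pattern is the same: clients send back model weights, never raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1, steps=50):
    """One client's training pass, run on-device against its private data."""
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of squared error
        w -= lr * grad
    return w

# Three clients, each holding private data drawn from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(5):                               # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)         # server averages the weights

print(global_w)   # approaches [2.0, -1.0]; no raw data ever left a client
```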

Google, a Federated Learning pioneer, has used FL for personalization in its Gboard predictive keyboard across tens of millions of iOS and Android devices. And together with the launch of the Pixel 4, Google debuted an improved version of its Now Playing music-recognition feature that aggregates the play counts of songs in a federated fashion, identifying the most popular songs in a given geographic location.  

Among the drawbacks of the Federated Learning approach is that it demands significant processing power and memory from the federated devices. Also, because the models can only be trained when devices are connected and able to transfer data, this may introduce a situational bias into the data that enters the model. For example, a user may listen to different music sources (and therefore different songs) on WiFi versus cellular data. Lastly, Federated Learning is vulnerable to “poisoning attacks”, where a generative adversarial network (GAN) may pretend to be a benign participant in order to gain control of the model.

Differential Privacy

Differential Privacy is a promising, if not new, approach to the preservation of privacy in ML. Developed by Cynthia Dwork et al. at Microsoft in 2006, DP attempts to ensure that no individual can be linked to the data used to train an ML model. This doesn’t mean you can’t discover anything about an individual in a dataset. For example, publishing data that shows a strong correlation between smoking and lung cancer reveals sensitive information about any individual known to smoke. Rather, the ultimate privacy goal is to ensure that anything that can be learned about an individual from the released information could be learned without that individual’s data being included. In general terms, an algorithm is differentially private if an observer examining the output cannot determine whether a specific individual’s information was used in the computation.

DP protects individual privacy by adding random noise to the dataset according to a carefully chosen distribution, producing a “perturbation” of the true answer. The true answer plus noise is always what is returned to the user. The degree of perturbation can be accounted for, so overall accuracy does not decrease significantly, while for any individual’s data there always remains a degree of “plausible deniability” due to the randomness of the noise.
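Below is a minimal sketch of the Laplace mechanism, the classic DP building block for numeric queries. The epsilon value and the counting query are illustrative assumptions, not a production-calibrated privacy budget.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so noise is drawn from
    Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 52, 29, 67, 44, 38, 31, 58]
print("true count:", sum(a > 40 for a in ages))
print("dp answer :", round(dp_count(ages, lambda a: a > 40), 1))
```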

A desirable aspect of DP is that it is mostly compatible with, or even beneficial to, meaningful data analysis despite its protective strength. Within empirical science, there is often the threat of overfitting the data, permitting conclusions that are specific to the dataset and that lose accuracy when predictions are generalized to the larger population. Because DP also offers protection from such overfitting, its benefits go even beyond data security.

Apple has been using some form of DP since 2017 to identify popular emojis, media playback preferences in Safari, and more. The company combined DP with Federated Learning in its latest mobile operating system release (iOS 13). Both techniques help to improve the results delivered by Siri, as well as apps like Apple’s QuickType keyboard and iOS’ Found In Apps feature. The latter scans both calendar and mail apps for the names of contacts and callers whose numbers aren’t stored locally. 

Homomorphic Encryption

Homomorphic Encryption, like DP, is not new, but it is enjoying renewed relevance for its potential utility in privacy preservation for machine learning. The essential idea is that data can be used in encrypted form to train and run an ML model. From Wikipedia: “Homomorphic Encryption is a form of encryption that allows computations to be carried out on ciphertext, thus generating an encrypted result which, when decrypted, matches the result of operations performed on the plaintext.” For example, this means you could use “Qhjr Thykhjr” (my name, Jack Mardack, under a Caesar cipher) in an ML training model in place of my name, and get back a similarly encrypted output. You can also encrypt the ML model itself, which is valuable in the case of Federated Learning, where the model must be transferred to the data (e.g., to the customer’s device). This means you can protect the model itself with encryption, as well as the training data.
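The sketch below plays out the shift-cipher example from the paragraph above. To be clear, a Caesar cipher is nowhere near real homomorphic encryption (practical schemes are lattice-based and live in libraries such as Microsoft SEAL); it only illustrates the core idea that certain computations give the same answer whether they are run on the plaintext or on the ciphertext.

```python
# Toy illustration only: a Caesar shift is NOT homomorphic encryption,
# but it shows that some operations carry through the encrypted form.
SHIFT = 7

def caesar(text, shift=SHIFT):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

name = "Jack Mardack"
cipher = caesar(name)
print(cipher)                               # "Qhjr Thykhjr", as above

# An equality check "computed on ciphertext" matches the plaintext result.
print(caesar("Jack") == cipher.split()[0])  # True
print(caesar("Jill") == cipher.split()[0])  # False
```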

It may seem obvious to use encryption to protect data, but the usefulness of the approach is significantly reduced by its performance implications. Homomorphic Encryption libraries don’t take advantage of modern hardware acceleration, making the ML models roughly 10X slower than other approaches. But there are research teams at Intel, Facebook, and IBM (among others) working to help close the gap.

There is naturally a lot of interest in Homomorphic Encryption in ML use cases from the more highly-regulated industries, such as healthcare and banking, where the possibility of end-to-end encryption is highly desirable. 

Conclusion

We are at an interesting crossroads, to be sure. There is near-universal agreement that ML and AI are poised to radically transform human experience on multiple life-changing dimensions, from how we stay healthy, to how we work and create, to the facilitation of myriad mundane human activities. 

But it seems both sides of the risk/reward scale are changing for consumers. Until now, the benefits of sharing our data with commercial enterprises have been comparatively modest – more personalized news feeds in our social apps, or more relevant recommendations from the e-commerce sites we buy from. Very soon, however, the value we stand to gain from ML and AI is going to be much, much greater. It is not an exaggeration to say these technologies will mean the difference between life and death for many of us. But the data we will have to share in order to take advantage of them is also much more sensitive, creating unprecedented exposure for consumers. The interplay between both sides of this equation will drive both our adoption (the willingness with which we’ll share our most personal data) and the further evolution of privacy-protection methods such as those described above.

In that regard, Privacy-Preserving ML is very much in its infancy. The work to date on these fronts has been done almost entirely by researchers based on their own speculations about the nature of likely attacks or breaches. Unlike, for example, the state of protection we enjoy from computer viruses today (which draws on decades of real-world attacks), we have no idea what the “bad guys” will actually do. We’ll have to wait and see, and then learn, improve, and catch up. 

Further reading: A Major Drug Company Now Has Access to 23andMe’s Genetic Data. Should You Be Concerned?



Real-Time Dashboards Show Ways for Operational Optimization


You’ve gone through digital transformation – now what? IT and business leaders are learning quickly that digital transformation is only one of many steps on the journey toward operational optimization. Once you have your people and processes in order, it’s time to shift focus to your data. Digital transformation brings with it a whole host of new IT systems that produce data about your processes, and that data can be a powerful tool in operational optimization – but first, you need to figure out how to manage the data and harvest the insights it contains. That’s where your data warehouse and some well-designed real-time dashboards come in. They can help show you where the trouble spots and improvement opportunities are in your operations.

Start With the Operational Optimization Decisions That You Want to Make

Most IT professionals start with the technology and the data because that’s what they are most comfortable with. Operational optimization isn’t a technical problem, though; it is a business challenge. As such, the most significant successes come from starting with the decisions you want to make and then figuring out how to assemble the puzzle from there. Operational optimization is all about process performance, quality, cost, continuity, and the agility to respond to change. These are the things that leaders and decision-makers will want to optimize. They want to understand where the trouble spots are in their operations, where inefficiencies lie, where risks and issues exist that need attention, and what factors are constraining performance. Once they decide on and initiate some sort of action, they need to see how those actions impact operational performance.

What Data Do You Need to Create Actionable Insights?

The key to effective decision-making is harvesting actionable insights from operational data. Many IT systems support your digitally transformed business processes, each capturing and/or creating data about the pieces of your process that the system supports. Deriving actionable insights requires assembling these puzzle pieces, identifying what is important and actionable, and then presenting those insights in a way your staff and leaders can understand. Operational dashboards that model out a digital representation of your business processes, supported by data from all of the enabling systems, are a powerful tool to drive informed decisions.

The Role of Your Data Warehouse

A big challenge for organizations that have gone through digital transformation is too much data. Technology plays an integral role in your operational processes, and it seems that every app, device, system, service, and interface is churning out data. The problem is that most of this data is fragmented (representing only a small slice of your process), and much of the data (while accurate and informative) isn’t very useful for decision-making. Before you can harvest insights, you first need to bring the different pieces of your data puzzle back together, constructing a digital representation of your operational processes and filtering out raw data that isn’t needed for analysis. Your data warehouse provides a platform for doing this. Modern data warehouse solutions like Actian Data Platform are designed with the speed and scalability features needed to manage streaming data from operational systems and the tools to feed your dashboards with the data your decision-makers need.

Managing Data From Source Systems

Before you can analyze data in your data warehouse, you first need to get it there. Along with an increase in the number of source systems, digital transformation has led to increased diversity in the types of technology components on which your business processes depend. Mobile apps, embedded sensors, IoT devices, and cloud services are just a few examples of the distributed endpoints generating data about your processes. In addition to a modern data warehouse, you are likely to need a modern set of data integration capabilities to help you manage the flow of data from your source systems. Actian DataConnect is an easy-to-use integration platform as a service (iPaaS) offering that makes it easy to connect anything, anytime, anywhere so you can focus on the data and the decisions, not managing connections.

Speeding it Up Into Real-Time

Pulling the big picture together: you have a bunch of technology source systems supporting your digitally transformed business processes. You have an integration platform pulling the data into a modern data warehouse where you can model out a digital representation of your business process. You have built operational dashboards that present actionable insights to business leaders who can make informed decisions. You’ve built a data management capability that works, but how long does this process take?

If something happens within your operations or a trouble spot develops, will you know about it in minutes, hours, or days? Let’s hope it isn’t the latter – digitally transformed business processes require real-time insights and decision-making. If you want to optimize your operations, you need to see problems quickly, make decisions, initiate action, and see immediate results. That is why you need a set of data management capabilities like those from Actian that are designed to leverage modern technology and deliver real-time business performance.

Visit www.actian.com/data-platform to learn more.


Many software as a service (SaaS) companies are looking at embedding integration capabilities into their SaaS offerings as a way of making their solutions both more appealing to customers and more profitable for them as providers. Embedded integration can make SaaS solutions easier to adopt, cheaper to integrate, and more difficult to replace when a bright, shiny new offering comes along from a competitor. This is the final part of a three-article series on embedded integration and how SaaS companies are building integration platform as a service (iPaaS) capabilities into their commercial SaaS offerings.

The first article in this series provided a high-level overview of why embedding an integration platform into your SaaS offering is important and how it can help you establish a competitive advantage in the marketplace. The second article explored the factors to consider when selecting an iPaaS vendor. Here we will look at the features of DataConnect Inside and how they address your embedded integration needs.

Let Us Integrate – So You Can Innovate

Embedded iPaaS solutions are built from pre-built templates that provide an environment designed for creativity within safe boundaries. Actian DataConnect Inside is an iPaaS offering that provides tools for building solutions tailored to your customers’ needs, rather than leaving you to worry about the technical complexity of building, managing, and maintaining the integration plumbing yourself. With an embedded iPaaS, you can create new integrations rapidly by activating them within your integration platform’s environment. This enables you to integrate business data from disparate sources and cloud applications quickly and easily, and it gives you an intuitive graphical interface to define connectivity, data delivery, data consistency, or transactional data integration among multiple systems.

Seamlessly Deliver Integration at Scale

Organizations need to be able to accomplish seamless integration between their cloud and on-premises applications as well as their customers’ on-premises applications. Integration needs to be fast and easy to give your offering a competitive advantage in the market. Actian DataConnect Inside provides hundreds of pre-built connectors, guided workflows, and event rules that you can use to accelerate the design and the ongoing maintenance of your data pipelines.

Actian DataConnect Inside gives you:

  • Tools to Build “Out of the Box” Integrations Quickly – A scalable connectivity framework, a lightweight embeddable runtime engine, a low-code visual IDE, and ready-to-use APIs help you deliver capabilities quickly.
  • White-Label Integration – The power of Actian DataConnect seamlessly integrated into your company and product’s branding and user experience.
  • A Unified Experience On-Premises or in the Cloud – All companies are dealing with hybrid environments, but Actian DataConnect provides a simplified way to focus on what the data means instead of where it is located.
  • Support for Real-Time Integration – Every integration is automatically assigned a URL endpoint for consumption by client applications. This effectively makes every integration real-time enabled and available as an API.
  • Predictable and Flexible Pricing Model – Procured as a subscription service, customers purchase only the processing bandwidth they need. This helps you keep the cost of your SaaS solutions down and improves profitability.
  • Enterprise-Grade Security and Reliability – Actian DataConnect Inside uses a modern microservice architecture that is SOC 2 Type 2 compliant. Authentication is done using OAuth2, data connectivity supports TLS1.2, and all server communications are encrypted via SSL.

With Actian DataConnect Inside, you can integrate nearly anything – meaning you can accelerate time to value by designing your integrations once, embedding them into your solutions, and deploying them anywhere. With Actian DataConnect Inside, you can offer the capabilities your customers want, the tools your developers need, and the profitability your company demands.

To learn more, visit www.actian.com/data-integration/dataconnect/