Data Intelligence

The DPO in 2019: The Results

Actian Corporation

January 20, 2020

Since May 2018, the General Data Protection Regulation (GDPR) has required companies to appoint a DPO, or Data Protection Officer, within their organization. This new role involves overseeing how personal data is handled and informing employees of the obligations they must respect under the European regulation.

More than a year after these regulations came into force, the Actian Data Intelligence Platform team organized a workshop with DPOs from different business sectors with one question in mind: how can we help them implement GDPR? We would like to share their feedback with you today.

Current Assessment

To better understand Data Protection Officers and their function, let’s assess their current situation.

The Tools

Our audience affirmed that the applications they use are only a means to an end: implementing governance on their data.

Enterprises have nevertheless adopted new tools to help DPOs implement GDPR. Much of this software is considered unintuitive and complicated to use. However, some tools manage to stand out:

Among the DPO’s tools, one of the most appreciated is the data mapping application, mainly for the macro view it provides of the exchanges between different applications and for how quickly and easily it detects personal information.

At the same time, data catalogs, among the most recent tools on the market, are starting to reach the DPO community. Investing in these tools is a strategic choice that some participants have already made. The ability to document data, and to keep a history of that documentation across the company’s cataloged data, has convinced them!

Governance

DPOs are well aware that efforts must focus on data acculturation and raising employee awareness if they hope for better results.

The pursuit of governance ultimately aims to help the business side understand and assess the risks around the data they handle. DPOs therefore put their energy into implementing effective management and communication of shared rules so that the company acquires the right reflexes. Because yes, data remains a subject that few business employees truly master.

Information Systems

The heterogeneity of information systems is the “normal” environment that DPOs are confronted with.

They are thus left trying, by every means available, to bring information systems into compliance, systems that very often prove complex and costly to update technically.

Internationally

We associate DPOs with the GDPR, often forgetting “the rest of the world”.

Many countries, such as Switzerland and the United States, also have their own data protection regulations. DPOs cannot ignore them, and neither can their companies.

One thing is certain: the scope of the work is gigantic and requires strong prioritization. But beyond the priorities driven by urgency, it also means striking the right balance between meeting compliance standards and meeting business requirements!

The Challenges of DPOs for 2020

In light of this assessment, the workshop concluded by looking ahead to 2020 and its new challenges.

Together with them, we drew up a list of “resolutions” for the new year:

  • Invest more in improving the qualification and requirements for data documentation.
  • Integrate more examples on good practices in the employee awareness phase.
  • Provide precise indicators on the use and purpose of the data in order to predict the risks and impacts as soon as possible.
  • Become a stakeholder in the implementation of data governance to guarantee effective data acculturation in the enterprise.

About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Data Management

Layman’s Guide to Machine Learning and Customer Data Privacy

Actian Corporation

January 16, 2020

It feels like only yesterday that “Machine Learning” and “Artificial Intelligence” were equated with science fiction by most regular folks outside of tech. As a movie-going culture, we have a long history of associating these concepts with self-aware robots or rogue computer programs that could escape the laboratories where they were created and threaten all humanity. But in 2020, ML and AI have been considerably demystified. And yet, even if it seems less likely now that the Singularity will be triggered by the reckless ambition of some tech startup, consumers have new reasons to be concerned.

Yes, ML and AI are going mainstream, and it’s not just disruption-minded startups that are leveraging these technologies. Enterprises in a wide range of established industries are finding solid business reasons to fund these advanced projects and bring them out of the laboratory into production, with lots of exciting implications for their customers.

One implication, which is the subject of this article, is the creation of a new class of personal data privacy vulnerabilities. And the majority of businesses that want to leverage ML are going to have to learn to protect their customers from these new vulnerabilities.

These concerns arise in the first place because the “models” that make ML work have to be trained with data – lots of it. As enterprises seek to create business value from these new ML programs (such as conversational agents, real-time risk and fraud analysis, and predictive healthcare), they are going to train their models with customer data of some sort. In many cases, deeply private customer data.

As we usher in what is certainly a new era in consumer awareness of data privacy rights, combined with the advent of new regulations such as GDPR and CCPA, it is timely to contemplate how ML and consumer data privacy will co-exist. 

No Longer a Hypothetical Problem

Unfortunately, some of the toothpaste has already escaped the tube. A number of recent controversies expose the potential scale of the ML + Customer Data Privacy problem. Google (whose health data-sharing arrangement with Ascension became the subject of scrutiny in November) ditched its plans to publish chest X-ray scans over concerns that they contained personally identifiable information. The Royal Free London NHS Foundation Trust, a division of the UK’s National Health Service based in London, provided Alphabet’s DeepMind with data on 1.6 million patients without their consent. This past summer, Microsoft quietly removed a data set (MS Celeb) with images of more than 10 million people after it was revealed that some weren’t aware they had been included. 

And it turns out those of us who’ve been getting a creepy feeling whenever we expressed our deepest desires to an AI-based wish fulfillment engine had good reason to. Apple and Google have been the subject of recent reports that revealed the potential misuse of recordings collected to improve artificial agents like Siri and Google Assistant. In April, Bloomberg revealed that Amazon had been using contractors to transcribe and annotate audio recordings pulled from Alexa-powered devices, prompting the company to roll out new user-facing tools that let you delete your cloud-stored data.

Why ML Exacerbates Data Privacy

Within a database, the various data points associated with an individual can be distinguished, from a privacy standpoint, by what class of information they contain. A dataset is made up of data points (specific members of a population) and features (the values of the attributes associated with each person). In the case of medical records, for example, features might be a person’s name, age, gender, state, religion, and disease. The first column represents Personally Identifiable Information (PII), which uniquely identifies a person, e.g., their full name or social security number. The second type of feature is termed a Quasi-Identifier (QI): categories like age or gender that may be attributable to more than one individual. Therefore, this information on its own is not sufficient for identification. However, if combined with other QIs and external information, it is sometimes possible to re-identify an individual.

Traditionally, removing the column containing sensitive information in a dataset meant that this specific information could not be re-inferred from the dataset itself, but only by combining and querying external information. AI, however, can recreate identities even with the identity indicator removed. From a set of job applicant resumes, for example, gender might be removed to protect against gender discrimination during the candidate evaluation process. Although the resumes have been de-identified in that sense, an ML tool might be able to pick up subtle nuances in language use and from this infer the candidate’s gender. Here, removing the column is not enough to strip out sensitive information securely.

AI technologies have not historically been developed with privacy in mind. To reach reliable levels of accuracy, models require large datasets to ‘learn’ from. In order to shield individual privacy in the context of big data, different anonymization techniques have conventionally been used. The three most relevant are K-anonymity, L-diversity, and T-closeness, of which we will briefly examine the first. In K-anonymity, selected identifiers (e.g., name, religion) of certain individuals are removed or generalized (e.g., replacing a specific age with an age span) so that every combination of identity-revealing characteristics occurs in at least k different rows of the dataset. K-anonymity is a “hiding in the crowd” approach to protecting privacy: i.e., each individual is part of a larger group, and any of the records could correspond to a single person. L-diversity and T-closeness are extensions of this concept, described in more detail elsewhere. These modifications would be applied before data is shared or used in a training model. This is called Privacy-Preserving Data Publishing. However, with the rise of AI, this form of protection is insufficient.
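
To make the “hiding in the crowd” idea concrete, here is a minimal sketch of generalization-based k-anonymity, assuming a tiny fabricated table of records and the pandas library. It only illustrates the mechanics described above, not any particular production anonymization pipeline.

```python
# Minimal k-anonymity sketch: drop the direct identifier, generalize the
# quasi-identifiers, then check that every quasi-identifier combination
# appears in at least k rows. Data is fabricated and purely illustrative.
import pandas as pd

records = pd.DataFrame({
    "name":    ["Alice", "Bob", "Carol", "Dan", "Eve", "Frank"],            # PII
    "age":     [34, 37, 52, 58, 36, 55],                                    # quasi-identifier
    "zipcode": ["94107", "94109", "94110", "94112", "94105", "94118"],      # quasi-identifier
    "disease": ["flu", "asthma", "diabetes", "flu", "asthma", "diabetes"],  # sensitive value
})

anonymized = records.drop(columns=["name"]).copy()
anonymized["age"] = (anonymized["age"] // 10 * 10).astype(str) + "s"   # 34 -> "30s"
anonymized["zipcode"] = anonymized["zipcode"].str[:3] + "**"           # 94107 -> "941**"

k = 3
group_sizes = anonymized.groupby(["age", "zipcode"]).size()
print(anonymized)
print(f"k-anonymous for k={k}:", bool((group_sizes >= k).all()))
```

Each released row is now indistinguishable from at least two others on its quasi-identifiers, which is exactly what makes linkage attacks harder. As the next paragraphs explain, that guarantee stops being sufficient once high-dimensional ML models enter the picture.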

Conventional statistical modeling methods would only be able to consider a limited number of variables. But today, because of regularization techniques and the declining cost of cloud-based computation, it has become possible for ML models to consider thousands of variables from which to make a single prediction. With algorithms that can make inferences from such large and complex datasets, three new conceptual issues arise. First, with the expanded dimensionality of ML training sets, there is implicitly a greater likelihood of sensitive information being included. Second, these powerful new models are more likely to be able to discern that sensitive information (e.g., reconstructing gender from subtle differences in word choice). And third, ensuring comprehensive privacy and anonymity for the vast amounts of data incorporated into complex ML models itself presents a major challenge.

Intro to Privacy-Preserving Machine Learning

To address the above challenges, a number of promising techniques are being tested to provide suitable protection of individual data privacy in ML. These include Federated Learning, Differential Privacy, and Homomorphic Encryption. For the most part, these are all in the preliminary stages of exploration as regards their potential to protect consumer data privacy in ML at scale, and they remain largely in the hands of researchers in academia or at the largest technology players. Which of these becomes the standard, and how they will bridge the gap to meet the needs of ML in production, remains to be seen.

Federated Learning

Federated Learning is an example of the more general approach of “bringing the code to the data, instead of the data to the code”, and thus addresses some of the basic problems of privacy, ownership, and physical location of data. Federated Learning is a collaborative approach that involves training ML models on a large set of decentralized data spread across multiple client devices. The model is trained on the client devices, so there is no need to transfer the user’s data. Keeping personal data on the client device allows users to preserve direct and physical control of their own data. Holding the data samples on client devices, without the need for exchanging those samples, enables multiple parties to develop a common ML model without having to share all the data amongst themselves, avoiding the increased vulnerability that comes from gathering all the data in any one place.
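
As a sketch of the mechanics, here is a minimal federated-averaging-style loop for a linear model, assuming three simulated “clients” that each hold private (X, y) data which never leaves them; only model weights travel to and from the server. The data, learning rate, and round counts are invented for illustration and do not describe any specific production system.

```python
# Minimal federated averaging sketch: each client trains locally on its own
# private data, and the server only ever sees (and averages) model weights.
# All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few steps of gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

# Three clients, each holding private data drawn around the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Server loop: broadcast global weights, collect local updates, average them.
global_w = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)   # raw data never leaves a client

print("learned weights:", global_w)   # converges toward [2.0, -1.0]
```

The point is structural: the server learns a useful shared model while the training examples themselves never move, which is the property the paragraph above describes.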

Google, a Federated Learning pioneer, has used FL for personalization in its Gboard predictive keyboard across tens of millions of iOS and Android devices. And together with the launch of the Pixel 4, Google debuted an improved version of its Now Playing music-recognition feature that aggregates the play counts of songs in a federated fashion, identifying the most popular songs in a given geographic location.  

Among the drawbacks of the Federated Learning approach is the fact that it requires a lot of processing power and memory from the federated devices. Also, because the models can only be trained when the devices are connected and able to transfer data, this may introduce a situational bias into the data that enters the model. For example, a user may listen to different music sources (and therefore different songs) when on WiFi versus cellular data. And lastly, Federated Learning is vulnerable to “poisoning attacks”, where a generative adversarial network (GAN) may pretend to be a benign participant in order to gain control of the model.

Differential Privacy

Differential Privacy is a promising, if not new, approach to the preservation of privacy in ML. Developed by Cynthia Dwork et al. at Microsoft in 2006, DP attempts to ensure that no individual can be linked to the data used to train an ML model. This doesn’t mean you can’t discover anything about an individual in a dataset. For example, publishing data that shows a strong correlation between smoking and lung cancer would reveal sensitive information about an individual known to smoke. Rather, the ultimate privacy goal is to ensure that anything that can be learned about an individual from the released information can be learned without that individual’s data being included. In general terms, an algorithm is differentially private if an observer examining the output is not able to determine whether a specific individual’s information was used in the computation.
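
To make that statement precise, here is the standard formal definition (a general fact about differential privacy, not something specific to this article): a randomized algorithm M is ε-differentially private if, for any two datasets D and D′ that differ in a single individual’s record, and for any set of possible outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S]. The smaller ε is, the less any one person’s data can shift the distribution of outputs an observer might see.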

DP works to protect individual privacy by adding random noise to the dataset in a carefully determined distribution, which leads to a “perturbation” of the true answer. The true answer plus noise is always returned as the output to the user. The degree of perturbation can be accounted for, so that overall accuracy does not significantly decrease, while for individual data there always remains a degree of “plausible deniability” due to the randomness of the noise.
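
As a concrete illustration, here is a minimal sketch of the Laplace mechanism, a classic way of adding such noise, applied to a simple counting query. The dataset, the epsilon values, and the helper function are all made up for this example; real deployments also have to track the cumulative “privacy budget” spent across queries.

```python
# Minimal sketch of the Laplace mechanism for a differentially private count.
# Assumes a counting query with sensitivity 1: adding or removing one person
# changes the true count by at most 1. Data and epsilon values are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def dp_count(data, predicate, epsilon):
    """Return the true count plus Laplace(sensitivity / epsilon) noise."""
    true_count = sum(1 for row in data if predicate(row))
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Toy dataset of (age, smoker) records.
people = [(34, True), (52, False), (47, True), (61, True), (29, False)]

# Smaller epsilon => more noise => stronger privacy, lower accuracy.
for eps in (0.1, 1.0, 10.0):
    noisy = dp_count(people, lambda p: p[1], eps)
    print(f"epsilon={eps:>4}: noisy smoker count = {noisy:.2f}")
```

The true count here is 3; with epsilon = 10 the noisy answer will usually be very close to 3, while with epsilon = 0.1 it can be far off, which is exactly the accuracy-for-privacy trade described above.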

A desirable aspect of DP is that it is mostly compatible with, or even beneficial to, meaningful data analysis despite its protective strength. Within empirical science, there is often the threat of overfitting the data, permitting conclusions that are specific to the dataset and that lose accuracy when predictions are generalized to the larger population. Because DP also offers protection from such overfitting, its benefits go even beyond data security.

Apple has been using some form of DP since 2017 to identify popular emojis, media playback preferences in Safari, and more. The company combined DP with Federated Learning in its latest mobile operating system release (iOS 13). Both techniques help to improve the results delivered by Siri, as well as apps like Apple’s QuickType keyboard and iOS’ Found In Apps feature. The latter scans both calendar and mail apps for the names of contacts and callers whose numbers aren’t stored locally. 

Homomorphic Encryption

Homomorphic Encryption, like DP, is not new but is enjoying renewed relevancy for its potential utility in privacy preservation for Machine Learning. The essential idea is that we can use data in encrypted form to train and run the ML model. From Wikipedia: “Homomorphic Encryption is a form of encryption that allows computations to be carried out on ciphertext, thus generating an encrypted result which, when decrypted, matches the result of operations performed on the plaintext.” For example, this means you could use “Qhjr Thykhjr” (using the Caesar cipher) in an ML training model in place of my name (Jack Mardack), and get back a similarly encrypted output. You can also encrypt the ML model itself, which is valuable in the case of Federated Learning, where it is necessary to transfer the model to the data (e.g., to the customer’s device). This means you can protect the model itself with encryption, as well as the training data.
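
To show the homomorphic property itself with running code, here is a toy sketch using unpadded “textbook” RSA, which happens to be multiplicatively homomorphic. This is my own illustrative substitution (the Caesar-cipher example above shows working with encrypted text, not computing on it), and textbook RSA is neither secure nor what production systems use; practical schemes include Paillier, BGV, and CKKS.

```python
# Toy sketch of the homomorphic idea using unpadded "textbook" RSA, which is
# multiplicatively homomorphic: E(a) * E(b) mod n decrypts to a * b.
# NOT secure and not what real privacy-preserving ML uses; numbers are tiny
# and purely illustrative.

p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse of e)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# A third party can multiply the two ciphertexts without ever seeing a or b...
c_product = (ca * cb) % n

# ...and only the key holder can decrypt the (correct) result.
print(decrypt(c_product))            # prints 42, i.e., a * b
```

Fully homomorphic schemes extend this idea to support both addition and multiplication on ciphertexts, which is what makes training or scoring an ML model on encrypted data possible in principle, at the performance cost described below.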

It may seem obvious to use encryption to protect data, but the usefulness of the approach is significantly reduced because of performance implications. Homomorphic Encryption libraries don’t take advantage of modern hardware acceleration, making the ML models ~10X slower than other approaches. But there are research teams at Intel, Facebook, and IBM (among others) that are working to help close the gap.  

There is naturally a lot of interest in Homomorphic Encryption in ML use cases from the more highly-regulated industries, such as healthcare and banking, where the possibility of end-to-end encryption is highly desirable. 

Conclusion

We are at an interesting crossroads, to be sure. There is near-universal agreement that ML and AI are poised to radically transform human experience on multiple life-changing dimensions, from how we stay healthy, to how we work and create, to the facilitation of myriad mundane human activities. 

But it seems both sides of the risk/reward scale are changing for consumers. Up until now, the benefits of sharing our data with commercial enterprises have been comparatively modest – relating to more personalized news feeds in our social apps, or more relevant recommendations from the e-commerce sites we buy from. Very soon, however, the value we stand to gain from ML and AI is going to be much, much greater. That these technologies will mean the difference between life and death for many of us is not an exaggeration. But the nature of the data we will have to share in order to take advantage is also much more sensitive, creating unprecedented exposure for consumers. The interplay between both sides of this equation is going to drive both our adoption (the willingness with which we’ll share our most personal data) and the further evolution of privacy-protection methods, such as those described above.

In that regard, Privacy-Preserving ML is very much in its infancy. The work to date on these fronts has been done almost entirely by researchers based on their own speculations about the nature of likely attacks or breaches. Unlike, for example, the state of protection we enjoy from computer viruses today (which draws on decades of real-world attacks), we have no idea what the “bad guys” will actually do. We’ll have to wait and see, and then learn, improve, and catch up. 

Further reading: A Major Drug Company Now Has Access to 23andMe’s Genetic Data. Should You Be Concerned?

About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Data Analytics

Real-Time Dashboards Show Ways for Operational Optimization

Actian Corporation

January 15, 2020

You’ve gone through digital transformation – now what?  IT and business leaders are learning quickly that digital transformation is only one of many steps on the journey toward operational optimization. Once you have your people and processes in order, it’s time to shift focus towards your data. Digital transformation brings with it a whole host of new IT systems that produce data about your processes that can be a powerful tool in operational optimization – but first, you need to figure out how to manage the data and harvest the insights it contains. That’s where your data warehouse and some well-designed real-time dashboards come in. They can help show you where the trouble spots and improvement opportunities are in your operations.

Start With the Operational Optimization Decisions That You Want to Make

Most IT professionals start with the technology and data because that’s what they are most comfortable with. Operational optimization isn’t a technical problem, though; it is a business challenge. As such, the most significant successes come from starting with the decisions you want to make and then figuring out how to build the puzzle from there. Operational optimization is all about process performance, quality, cost, continuity, and the agility to respond to changes. These are the things that leaders and decision-makers will want to optimize. They want to understand what the trouble spots are in their operations, where inefficiencies lie, where risks and issues exist that need attention, and what factors are constraining performance. Once they decide and initiate some sort of action, they need to see how those actions impact operational performance.

What Data Do You Need to Create Actionable Insights?

The key to effective decision-making is harvesting actionable insights from operational data. Many IT systems support your digitally transformed business processes, each capturing and/or creating data about the pieces of your process that the system supports. Deriving actionable insights requires assembling these puzzle pieces, identifying what is important and actionable, and then presenting those insights in a way your staff and leaders can understand. Operational dashboards that model out a digital representation of your business processes, supported by data from all of the enabling systems, are a powerful tool to drive informed decisions.

The Role of Your Data Warehouse

A big challenge for organizations that have gone through digital transformation is too much data. Technology plays an integral role in your operational processes, and it seems that every app, device, system, service, and interface is churning out data. The problem is that most of this data is fragmented (representing only a small slice of your process), and much of the data (while accurate and informative) isn’t very useful for decision-making. Before you can harvest insights, you first need to bring the different pieces of your data puzzle back together, constructing a digital representation of your operational processes and filtering out raw data that isn’t needed for analysis. Your data warehouse provides a platform for doing this. Modern data warehouse solutions like Actian Data Platform are designed with the speed and scalability features needed to manage streaming data from operational systems and the tools to feed your dashboards with the data your decision-makers need.

Managing Data From Source Systems

Before you can analyze data in your data warehouse, you first need to get it there. Along with an increase in the number of source systems, digital transformation has led to increased diversity in the types of technology components on which your business processes depend. Mobile apps, embedded sensors, IoT devices, and cloud services are just a few examples of the distributed endpoints generating data about your processes. In addition to a modern data warehouse, you are likely to need a modern set of data integration capabilities to help you manage the flow of data from your source systems. Actian DataConnect is an easy-to-use Integration Platform as a Service (iPaaS) offering that makes it easy to connect anything, anytime, anywhere so you can focus on the data and the decisions, not on managing connections.

Speeding it Up Into Real-Time

Pulling the big picture together,  you have a bunch of technology source systems supporting your digitally transformed business processes. You have an integration platform pulling the data into a modern data warehouse where you can model out a digital representation of your business process. You have built operational dashboards that present actionable insights to business leaders who can make informed decisions. You’ve built a data management capability that works, but how long does this process take?

If something happens within your operations or a trouble spot develops, will you know about it in minutes, hours, or days? Let’s hope it isn’t days – digitally transformed business processes require real-time insights and decision-making. If you want to optimize your operations, you need to see problems quickly, make decisions, initiate action, and see immediate results. That is why you need a set of data management capabilities like those from Actian that are designed to leverage modern technology and deliver real-time business performance.

Visit www.actian.com/data-platform to learn more.

About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Data Architecture

All Cloud Data Warehouses are NOT Created Equal

Actian Corporation

January 14, 2020

There are a lot of myths and misconceptions about cloud data warehouses. One of the biggest is that all cloud data warehouses cost the same. On the surface, cloud data warehouse vendors may talk the same language – describing similar features and benefits and touting the performance gains of operating in the cloud. But when you start looking at the details of implementation, migration, performance, and scalability, the differences become apparent.

“We’re moving our data warehouse to the cloud to save money.”

Migrating from an on-premises data warehouse to a cloud data warehouse is a great way to gain greater control over your IT costs, improve performance, and achieve the scalability to support your business. How much of these benefits you will actually realize depends on which cloud data warehouse you select and how you implement it. Most cloud data warehouse solutions provide you with some deployment options: on-premises, private cloud, public cloud, multi-cloud, and hybrid. If the one you are looking at doesn’t give you these choices, you might want to pause here and consider how confident you are in the solution you’re implementing.

Deployment choices give you the flexibility to change course in the future (and considering how fast business environments evolve, flexibility is essential).  Assuming the solutions you are looking at offer the standard deployment options, you might assume that costs and performance will be effectively the same – after all, if it’s running on AWS, it’s all the same cloud infrastructure, right?

The cloud environment, whether public or private, is only one piece of the performance puzzle. Most cloud providers offer a wide variety of capabilities that software solution providers can choose from.  The design and configuration of the solution will have a significant impact on your costs and the performance benefits you receive in your implementation.  Here are three key issues you should understand to know how your cloud data warehouse solution stacks up.

Elasticity for Minimizing Waste and Scaling for Increased Demand

One of the most significant value propositions of moving your data warehouse to the cloud is minimizing the waste that comes from under-utilized infrastructure and idle capacity.  Cloud systems are intended to be scaled up for peak demand periods, and then scaled back down when capacity is not needed to save resources (and costs).  When it comes to cloud data warehouses, each provider has their own capabilities for optimizing resource utilization (supply) against consumption (demand).  Some solutions require full database backups in order to shut down services and a full restoration to bring the service back online.  This means that it isn’t practical to “turn off the lights when you aren’t in the office.”

Other cloud data warehouse providers take a stepwise approach to scaling up capacity, adding new instances for every eight or so users.  This means you end up paying for more than you really need.  The key when it comes to elasticity and scaling is having fine-grain control over how much capacity you are using (and paying for) and being able to adjust it up and down to align to your unique usage patterns. If you have greater control over your costs, you can minimize waste and save money.

Performance – Be Sure You Understand What You Get in a “Resource Unit.”

In on-premises data centers, it’s easy to measure what resources you are using: it’s this host, this memory and these CPUs.  How do we know that? Because that’s the hardware that my data warehouse runs on.  In the cloud, because the infrastructure has been optimized for shared use, vendors define “resource units” as a way of describing capacity in a simple way.  But here’s the catch–not all resource units are equal, and each vendor defines their own unit of measure.  You need to understand what you are getting in a resource unit in terms of speed, performance, scale, and resource size.  In some cases, things like memory are bundled with compute; in other cases, they are measured separately.  Read the fine print and know what you are getting.

Efficiency and Parallel Processing

Parallel processing is one of the biggest differentiators between cloud data warehouse solutions.  If you process data in a linear fashion (one record at a time), big data sets take time to process.  Some vendors speed things up by running multiple transactions in parallel over a set of different CPUs.  That is faster than going in a single-file line, but there is another option that is even faster.  Vectorized processing operates on many data values within a single CPU instruction cycle.  This means you get the speed of parallel processing without the overhead cost of parallel hardware.
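
A rough way to feel the difference is to compare a record-at-a-time loop with a columnar, vectorized operation in ordinary Python and NumPy. This is only an analogy for what a vectorized query engine does internally (the column, the tax-rate calculation, and the timings are made up for illustration), but the shape of the result is the same: one operation applied to a whole column at once beats millions of per-record operations.

```python
# Illustrative comparison of row-at-a-time vs. vectorized (columnar) processing.
# The data and computation are fabricated; timings will vary by machine.
import time
import numpy as np

prices = np.random.default_rng(1).random(5_000_000)   # one column, 5M rows
tax_rate = 0.07

# Row-at-a-time: one Python-level operation per record.
start = time.perf_counter()
row_total = 0.0
for p in prices:
    row_total += p * tax_rate
row_time = time.perf_counter() - start

# Vectorized: one operation applied to the entire column at once.
start = time.perf_counter()
vec_total = (prices * tax_rate).sum()
vec_time = time.perf_counter() - start

print(f"row-at-a-time: {row_time:.3f}s  vectorized: {vec_time:.4f}s")
print("results agree:", bool(np.isclose(row_total, vec_total)))
```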

There are a lot of myths out there about cloud data warehouses, and this was just one of them.

Actian Data Platform

Actian Data Platform is built for high performance and maximizes compute, memory and disk efficiency–ultimately providing high-speed analytics in less time and at a much lower cost than Snowflake.

About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.
Data Integration

Why DataConnect Inside Delivers Embedded Data Integration

Actian Corporation

January 13, 2020

Many software as a service (SaaS) companies are looking at embedding integration capabilities into their SaaS offerings as a way of making their solutions both more appealing to customers and more profitable for them as providers. Embedded integration can make SaaS solutions easier to adopt, cheaper to integrate, and more difficult to replace when a bright shiny new offering comes around from a competitor. This is the final part of a three-article series on embedded integration and how SaaS companies are building integration platform as a service (iPaaS) capabilities into their commercial SaaS offerings.

The first article in this series provided a high-level overview of why embedding an integration platform into your SaaS offering is important and how it can help you establish a competitive advantage in the marketplace. The second article explored the factors to consider when selecting an iPaaS vendor. Here we will look at the features of DataConnect Inside and how they address your embedded integration needs.

Let Us Integrate – So You Can Innovate

Embedded iPaaS solutions are made from building blocks of pre-built templates that provide an environment designed for creativity within safe boundaries. Actian DataConnect Inside is an iPaaS offering that provides the tools to build solutions tailored to your customers’ needs, rather than leaving you to worry about the technical complexities of building, managing, and maintaining the integration plumbing yourself. With an embedded iPaaS you can create new integrations rapidly by activating them within your integration platform’s environment. This enables you to integrate business data from disparate sources and cloud applications quickly and easily. It also gives you an intuitive graphical interface to define connectivity, data delivery, data consistency, or transactional data integration among multiple systems.

Seamlessly Deliver Integration at Scale

Organizations need to be able to accomplish seamless integration between their cloud and on-premises applications as well as their customers’ on-premises applications. Integration needs to be fast and easy to give your offering a competitive advantage in the market. Actian DataConnect Inside provides hundreds of pre-built connectors, guided workflows, and event rules that you can use to accelerate the design and the ongoing maintenance of your data pipelines.

Actian DataConnect Inside gives you:

  • Tools to Build “Out of the Box” Integrations Quickly – A scalable connectivity framework, a lightweight embeddable runtime engine, a low-code visual IDE, and ready-to-use APIs help you deliver capabilities quickly.
  • White-Label Integration – The power of Actian DataConnect seamlessly integrated into your company’s and product’s branding and user experience.
  • A Unified Experience On-Premises or in the Cloud – All companies are dealing with hybrid environments, but Actian DataConnect provides a simplified way to focus on what the data means instead of where it is located.
  • Support for Real-Time Integration – Every integration is automatically assigned a URL endpoint for consumption by client applications. This effectively makes every integration real-time enabled and available as an API.
  • Predictable and Flexible Pricing Model – Procured as a subscription service, customers purchase only the processing bandwidth they need. This helps you keep the cost of your SaaS solutions down and improves profitability.
  • Enterprise-Grade Security and Reliability – Actian DataConnect Inside uses a modern microservice architecture that is SOC 2 Type 2 compliant. Authentication is done using OAuth2, data connectivity supports TLS1.2, and all server communications are encrypted via SSL.

With Actian DataConnect Inside, you can integrate nearly anything – meaning you can accelerate time to value by designing your integrations once, embedding them into your solutions, and deploying them anywhere. With Actian DataConnect Inside, you can offer the capabilities that your customers want, the tools that your developers need, and the profitability that your company demands.

To learn more, visit www.actian.com/data-integration/dataconnect/

About Actian Corporation

Actian empowers enterprises to confidently manage and govern data at scale. Actian data intelligence solutions help streamline complex data environments and accelerate the delivery of AI-ready data. Designed to be flexible, Actian solutions integrate seamlessly and perform reliably across on-premises, cloud, and hybrid environments. Learn more about Actian, the data division of HCLSoftware, at actian.com.