
As AI Algorithms Become More Sophisticated in Edge Devices, Persistent Data Requirements Must Advance at the Same Pace


Artificial Intelligence (AI) systems seem to be everywhere, and for good reason. AI represents the next generation of computing capabilities. It leverages the speed and scale of cloud computing to deliver not only high-speed automation but also continuous learning and adaptation capabilities that can finally match the pace of change in the natural environment. As AI capabilities and algorithms mature, organizations are developing new business, consumer, and governmental usage scenarios that are revolutionizing how humans interact with machines.

Autonomous AI services

The AI evolution is about more than capabilities. AI algorithms are also migrating from centralized data centers (both on-premises and in the cloud) to distributed devices at the edge of networks. AI is no longer the “modern equivalent of a mainframe”; instead, it is evolving into a new type of embedded capability in end-user devices and edge computing. This development is important for two reasons:

  1. AI systems are becoming more autonomous. It is not a singular AI system, but, instead, a network of independent AI bots performing tasks and “learning” as separate units.
  2. Distributing AI across the network means better performance for both industrial automation and end-user interactions. Two of the key use cases for AI are natural language processing and image analysis. Performing these operations “in the field” means less traffic on networks and faster response times.

How AI at the edge is being used

Companies use distributed AI algorithms to monitor and optimize real-time operations – receiving inputs from embedded sensors, GPS-enabled mobile applications, IoT devices, and video cameras and aggregating this data into a holistic, digital representation of the physical operations. The AI system then analyzes this digital representation directly or transmits it to the centralized operations staff for interpretation.
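The aggregation step described above can be sketched in a few lines. This is a minimal illustration, not Actian's implementation: the `SensorReading` and `SiteSnapshot` names, the metrics, and the threshold-based local analysis are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensorReading:
    """One input from an embedded sensor, mobile app, IoT device, or camera."""
    device_id: str
    metric: str
    value: float

@dataclass
class SiteSnapshot:
    """A simple holistic 'digital representation' of one physical site."""
    readings: Dict[str, Dict[str, float]] = field(default_factory=dict)

    def ingest(self, reading: SensorReading) -> None:
        # Fold each incoming reading into the aggregate view.
        self.readings.setdefault(reading.device_id, {})[reading.metric] = reading.value

    def anomalies(self, metric: str, limit: float) -> List[str]:
        # Local analysis at the edge: flag devices whose latest
        # reading for a metric exceeds a threshold.
        return [dev for dev, metrics in self.readings.items()
                if metrics.get(metric, 0.0) > limit]

snapshot = SiteSnapshot()
for r in [SensorReading("pump-1", "temp_c", 71.5),
          SensorReading("pump-2", "temp_c", 88.0),
          SensorReading("cam-1", "motion", 1.0)]:
    snapshot.ingest(r)

print(snapshot.anomalies("temp_c", limit=80.0))  # → ['pump-2']
```

In a real deployment, the same snapshot could either be analyzed in place (as here) or serialized and transmitted to centralized operations staff for interpretation.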

Companies are also using AI systems embedded in edge devices as a platform to deploy the next generation of human-interaction technologies. AI is well suited to natural language processing (NLP), translation, and suggesting responses based on the analysis of previous interactions. During the past few years, network latency has been the biggest barrier to AI bots being indistinguishable from human agents. Moving the AI algorithms to edge devices removes that latency, making seamless machine-to-human interaction achievable.

The need for persistent data

It’s great that AI algorithms can operate independently at the edge of the network, but there are a few key reasons these systems must be connected back into a core AI infrastructure.

Shared learning – Independent AI systems will each learn different content/input at different rates based on the types of experiences and interactions to which they are exposed. However, to provide a consistent, system-wide AI experience, these independent bots must share what they’ve learned with other AI systems and develop a collective knowledge.
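One way independent bots can pool what they have learned is to average their model parameters centrally, as in federated averaging. The sketch below assumes each bot reports a flat list of weights plus its local example count; the weighting scheme and the `merge_models` name are illustrative, not a description of any specific product.

```python
def merge_models(local_updates):
    """Combine edge-bot models into one collective model.

    local_updates: list of (weights, n_examples) pairs, one per bot.
    Each bot's contribution is weighted by how much local data it saw.
    """
    total = sum(n for _, n in local_updates)
    dim = len(local_updates[0][0])
    merged = [0.0] * dim
    for weights, n in local_updates:
        for i, w in enumerate(weights):
            merged[i] += w * (n / total)
    return merged

# Two bots exposed to different interactions contribute their weights;
# the bot with more local examples pulls the collective model toward itself.
global_model = merge_models([([0.2, 0.8], 100), ([0.6, 0.4], 300)])
print(global_model)
```

The merged model can then be pushed back out to every edge device, giving each bot the benefit of experiences it never saw directly.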

Interactions in Motion – Mobile devices, such as cellular phones, connected automobiles, and other moveable devices, facilitate most end-user interactions with AI devices. To maintain a consistent AI interaction while an end-user is in motion (crossing different network access points or cell towers), certain data about the AI interaction must be persisted to a centralized location and shared with other AI bots.
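The handoff scenario above amounts to persisting session state somewhere every bot can reach. A minimal sketch, assuming an in-memory `CentralStore` stand-in for a real replicated database, and hypothetical `EdgeBot`/session names:

```python
class CentralStore:
    """Stand-in for a centralized, persistent session store."""
    def __init__(self):
        self._sessions = {}

    def save(self, session_id, state):
        self._sessions[session_id] = dict(state)

    def load(self, session_id):
        return dict(self._sessions.get(session_id, {}))

class EdgeBot:
    """An AI bot at one network access point or cell tower."""
    def __init__(self, name, store):
        self.name, self.store = name, store

    def handle(self, session_id, utterance):
        state = self.store.load(session_id)        # pick up prior context
        state.setdefault("history", []).append(utterance)
        state["last_bot"] = self.name
        self.store.save(session_id, state)         # persist before handoff
        return state

# A moving user starts a conversation at one tower and is handed off
# to another; the second bot resumes seamlessly from persisted state.
store = CentralStore()
EdgeBot("tower-a", store).handle("s1", "book a table")
state = EdgeBot("tower-b", store).handle("s1", "for four people")
print(state["history"])  # → ['book a table', 'for four people']
```

The point of the sketch is the save/load discipline: because each bot persists state centrally before the user moves on, any peer can continue the interaction without the user noticing the handoff.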

Workflow – Most transactional activities and process-automation workflows enabled by AI will require some interaction with centralized services or other remote devices. Data persistence enables workflow continuity and tracking across multiple systems. AI systems must be tuned to know when they can operate independently and when they must interact with centralized infrastructure services.

Artificial Intelligence is arguably the most important technological development of the modern era. AI systems’ capabilities are becoming more sophisticated and are now being distributed around the globe. As these distributed AI algorithms in edge devices become more sophisticated, persistent data requirements must advance at the same pace to enable the emerging use cases and immersive experiences that the market demands.

Actian’s Zen Edge database provides an embeddable, nano-footprint persistent data store for smart devices that connects easily to Actian’s cloud and on-premises analytic databases. You can learn more about Actian’s Cloud Data Warehouse here.

About Pradeep Bhanot

Product marketing professional, author, father, and photographer. Born in Kenya. Lived in England through the disco, punk, and new romantic eras. Moved to California just in time for grunge. Worked with Oracle databases at Oracle Corporation for 13 years. Performed database administration for mainframe IBM DB2 and its predecessor, SQL/DS, at British Telecom and Watson Wyatt. Worked with IBM VSAM at CA Technologies and Serena Software, and with Microsoft SQL Server-powered solutions from 1E and BDNA.