Engineered Decision Intelligence: The Best Way Forward, Part 2

By Teresa Wingfield, February 3, 2022

Part 2: You Need Composable Data and Analytics

In my first blog on engineered decision intelligence, I shared what this concept means and why you need it, and then elaborated on the Gartner recommendation to pair decision intelligence tools with a common data fabric. But there was a second piece of advice from Gartner: you will need composable data and analytics. That's the subject I'm covering this time around.

What Are Composable Data and Analytics?

Composability is all about using components that work together even though they come from a variety of data, analytics, and AI solutions. By combining components, according to Gartner, you can create a flexible, user-friendly, and user-tailored experience. Many types of analytical tools exist, and the purpose of and value delivered by each varies greatly. Composability enables you to assemble their results to gain new, powerful insights.

4 Ways a Modern Data Warehouse Can Better Support Composable Data and Analytics

A modern data warehouse should provide a platform that empowers all the different users in the enterprise to analyze anything, anywhere, anytime, using whatever combination of components they want. Here are a few "arrangement" tips.

Extend the data warehouse with transactional and edge data processing capabilities

Historically, there was a clear distinction between a transactional database and a data warehouse. A transactional database tracked and processed business transactions; a data warehouse, in contrast, analyzed historical data. However, modern needs for real-time insights have brought these formerly distinct worlds ever closer together, to the point where, today, there is strong demand for mixed workloads that combine transactional processing and analytics.
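To make the mixed-workload idea concrete, here is a minimal, product-agnostic Python sketch (using SQLite purely as a stand-in, not any particular warehouse product): the same store accepts transactional writes and immediately answers an analytical aggregate over the live data.

```python
import sqlite3

# In-memory store standing in for a hybrid transactional/analytical platform.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
)

# Transactional side: record business events as they arrive.
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("alice", 120.0), ("bob", 75.5), ("alice", 42.0)],
)
conn.commit()

# Analytical side: aggregate over the same live data, with no batch
# export step between the transactional write and the analytical read.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('alice', 162.0), ('bob', 75.5)]
```

In a traditional split architecture, the aggregate would only reflect data after the next batch load into the warehouse; in a mixed-workload design, transactions are queryable the moment they commit.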
You see this in a range of use cases, from automated personalized e-commerce offers and real-time insurance quotes to credit approval and portfolio management, to name just a few. Likewise, decision makers are looking for ways to act faster on data from their billions of connected mobile and Internet of Things (IoT) devices. Predictive maintenance, real-time inventory management, production efficiency, and service delivery are just a few of the many areas where real-time analytics on IoT data can help a company cut costs and drive additional revenue.

Real-time transactional analytics and artificial intelligence-enabled insights from IoT data are likely to play increasingly important roles in many organizations. What we're seeing today is just the beginning of the benefit streams to come, and realizing greater benefits will depend on an organization's ability to deliver varied data to decision intelligence solutions.

Bring in any data source, anytime

The real-time needs of engineered decision intelligence mean that analytic tools can no longer rely solely on historical data for insights. Decision makers still want on-demand access to data from traditional batch processing sources, but they also want the ability to act on current trends and real-time behaviors. This requires seamless orchestration, scheduling, and management of the real-time streaming data continuously generated by systems throughout the organization and across the Internet.

Data must also be available for analysis regardless of where it lives. Since most companies have some combination of cloud and on-premises applications, the data warehouse needs to integrate with systems in both environments. It also needs to be able to work with any type of data in the environment.
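The batch-plus-streaming requirement can be sketched in a few lines of plain Python. The sensor names and the generator standing in for a live feed (a Kafka topic, an MQTT broker, or similar) are hypothetical; the point is that streaming readings fold into a view built from historical batch data as they arrive.

```python
# Historical batch data, e.g. loaded by a nightly warehouse job.
historical = [
    {"sensor": "pump-1", "temp": 61.0},
    {"sensor": "pump-2", "temp": 58.5},
]

def live_feed():
    """Stand-in for a real-time stream of IoT readings."""
    yield {"sensor": "pump-1", "temp": 64.2}
    yield {"sensor": "pump-2", "temp": 59.0}
    yield {"sensor": "pump-1", "temp": 66.8}

# Start from the batch view, then update it with each streaming
# reading, keeping the latest temperature per sensor.
latest = {r["sensor"]: r["temp"] for r in historical}
for reading in live_feed():
    latest[reading["sensor"]] = reading["temp"]

print(latest)  # {'pump-1': 66.8, 'pump-2': 59.0}
```

A real deployment would replace the generator with a managed stream consumer, but the shape is the same: one analytical view fed by both batch and streaming sources.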
Business decision makers who can gain insights from the real-time analysis of both semi-structured and unstructured data, for example, may be able to seize opportunities more efficiently and increase the probability that strategic initiatives will succeed.

Take advantage of the efficiencies enabled by containerization

A containerized approach makes analytics capabilities more composable, so that they can be combined into applications more flexibly. This is most advantageous, however, when the data warehouse architecture itself supports containers. That support is key to meeting the resource demands of artificial intelligence, machine learning, streaming analytics, and other resource-intensive decision intelligence processing, all workloads that strain legacy data warehouse architectures.

Container deployment is a more portable and resource-efficient way to virtualize compute infrastructure than virtual machine deployment. Because containers virtualize the operating system rather than the underlying hardware, applications require fewer virtual machines and operating systems to run them.

Accommodate any tool

It's all well and good if a data warehouse offers its own analytical tools, as long as it can easily accommodate any other tool you might want to use. As I mentioned at the start, the purpose and value delivered by different types of analytical tools vary greatly, and different users, including data engineers, data scientists, business analysts, and business users, need different tools. Look for the flexibility to integrate decision intelligence tools easily with the data warehouse. Or, if you have unique requirements that call for custom applications, look at the development tools the platform supports so that you can achieve the composability that a modern analytics environment requires.
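As a hedged illustration of the containerization point, a resource-capped analytics worker might be declared in a Docker Compose file along these lines. The image name, service name, role variable, and limits are all hypothetical placeholders, not any vendor's actual configuration:

```yaml
# Hypothetical compose file: each analytics worker runs in a container
# that shares the host OS kernel instead of carrying its own VM and OS.
services:
  warehouse-worker:
    image: example/warehouse-worker:latest   # placeholder image name
    deploy:
      resources:
        limits:
          cpus: "2.0"      # cap CPU for this resource-intensive workload
          memory: 4g       # cap memory so more workers fit per host
    environment:
      WAREHOUSE_ROLE: "streaming-analytics"  # illustrative role setting
```

Because the container carries only the application and its dependencies, scaling out means starting more lightweight containers rather than provisioning additional virtual machines.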
Learn More

If you have found this subject interesting, you may want to check out some of these blogs related to the benefits you can derive from broader decision intelligence composability:

· Real-Time Streaming: Actionable Insights Drive Business Responsiveness
· It's Time for Data Historians to Become . . . History
· Semi-Structured Data: What It Is and Why It Matters
· The Top 10 Benefits of an Operational Data Warehouse For 2021

Gartner, Top 10 Data and Analytics Trends for 2021

Semi-structured data is information that does not reside in a relational database but has some organizational properties that make it easier to analyze (such as XML data). Unstructured data either is not organized in a predefined manner or does not have a predefined data model (examples include Word, PDF, and text files, as well as media logs).

This article was co-authored by Lewis Carr, a senior product marketing, product management, and business development professional focused on enterprise software across vertical industries and horizontal solutions, including data management and analytics, mobile and IoT, and distributed cloud computing.

About Teresa Wingfield

Teresa Wingfield is Director of Product Marketing at Actian, where she is responsible for communicating the unique value that the Avalanche Cloud Data Platform delivers, including proven data integration, data management, and data analytics. She enjoys applying her extensive knowledge in these areas to help customers find solutions that will bring them long-lasting success. Teresa brings a 20-year track record of increasing revenue and awareness for analytics, security, and cloud solutions. Prior to Actian, she managed product marketing at industry-leading companies such as Cisco, McAfee, and VMware.
She was also Datameer's first VP of Marketing for big data analytics built on Hadoop, and she served as VP of Research at Giga Information Group (acquired by Forrester), providing strategic advisory services for data warehousing and analytics. Teresa holds graduate degrees in management from the MIT Sloan School of Management and in software engineering from Harvard University.