The Big Data Dilemma: Innovate or Die


Clayton Christensen famously articulated the Innovator’s Dilemma – already-successful companies spend so much attention on their customers’ current needs that they fail to innovate the new technologies and business models that will meet those customers’ future needs. Disruptors in their space burst onto the scene and overtake the entrenched “successful” organizations.

I see an adrenaline-pumping parallel in the Big Data arena. Every organization is being deluged with ever-growing data volumes from ever-proliferating data sources. Most of that data either sits parked, unanalyzed, in a repository, whether a data warehouse or a Hadoop “data lake,” or, worse, flows past the organization without ever being meaningfully captured.

But here’s the catch: if you don’t do something meaningful with your data to radically transform your business, if you don’t pounce on the opportunity to become a Big Data disruptor, someone else in your industry will cut you off. Just as with the Innovator’s Dilemma, by the time you realize your competition is leapfrogging you, it is too late. (Our CEO Steve Shine gives a great keynote at TiEcon on this topic.)

If you are weighed down with legacy architectures that can’t scale, you miss the opportunity for Big Data disruption. (That’s why AMD ditched its Oracle database in favor of Hadoop for its 276 TB test and assembly datasets.) And even would-be disruptors who embrace modern software designed for commodity hardware, like Hadoop, struggle to attract the scarce, specialized (and pricey!) talent needed to harness Big Data technology.

Actian’s focus on providing an end-to-end Big Data analytics platform lets any organization become a Big Data disruptor. It’s easy to adopt modularly, alongside your existing infrastructure, and the architecture combines completely modern software with the ability to run on readily available commodity hardware. It also provides what’s been called an “Exoskeleton for Hadoop,” embracing the scalability of Hadoop while adding some huge enhancements:

  1. A visual design environment to rapidly create analytics applications that run natively in Hadoop through YARN (no need for MapReduce)
  2. Taking analytics to where the data lives – in HBase and HDFS – rather than forcing you to incur the friction of moving Big Data into an analytical repository
  3. A massively parallel columnar database for lightning-fast SQL queries on Hadoop, including pre-defined parallel analytics operators (a brief sketch of querying data in place follows this list)
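
To make the “analytics where the data lives” idea concrete, here is a minimal sketch of what an in-place query might look like from an application’s point of view. It assumes a SQL-on-Hadoop engine exposed through a standard ODBC data source; the DSN name, table, and column names are hypothetical placeholders, not Actian-specific details.

```python
# Minimal sketch: run an aggregate directly over data resident in Hadoop
# via a SQL-on-Hadoop engine exposed over ODBC. The DSN, table, and
# column names below are hypothetical placeholders.
import pyodbc

# Connect through an ODBC data source configured for the SQL-on-Hadoop engine.
conn = pyodbc.connect("DSN=hadoop_analytics")  # hypothetical DSN
cursor = conn.cursor()

# The aggregation runs inside the cluster, next to the HDFS-resident data;
# nothing is exported to a separate analytical repository first.
cursor.execute("""
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM sales_events              -- hypothetical table backed by HDFS files
    GROUP BY region
    ORDER BY revenue DESC
""")

for region, orders, revenue in cursor.fetchall():
    print(f"{region}: {orders} orders, {revenue:,.2f} revenue")

conn.close()
```

The point of the sketch is that only the small result set travels back to the client; the heavy lifting happens where the data already sits.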

Like death and taxes, one thing is certain: staying on legacy architectures spells Big Data doom. Whether you’re committed to Hadoop or are looking for an easy-to-implement end-to-end Big Data analytics platform, you need to seize the opportunity to be a Big Data disruptor. It’s time to create the future – not be overtaken by it.

About Ashish Gupta

Ashish Gupta joined Actian in 2013, where he is responsible for marketing and business development. Ashish brings more than 21 years of experience at enterprise software companies, where he focused on creating go-to-market approaches that scale rapidly and building product portfolios that became category leaders in the industry.
