My grandmother grew up during the Great Depression of the 1930s. She would wash and reuse plastic wrap, aluminum foil, plastic containers, and glass jars, something I found odd as a kid. No one else I knew did that back then. When I asked her about it, she’d say, “Waste not, want not.” She grew up with the habit of wasting nothing, and she lived with the benefits of those frugal habits for the rest of her life.
Now, I have a special bin to put that sort of thing in, so that someone else can wash it, melt it, and form it into new products to be reused. What was second nature to my grandmother, we are just beginning to relearn. Waste is expensive. As a society, we can’t afford it.
But for me, and for my society, bad habits are difficult to break. It took a lot of hard lessons, overwhelmed landfills, energy shortages, and declining air and water quality, for things to change. We had to pay a high price to learn that lesson. We had to actually feel the deep cost of waste, and see the benefits of recycling and reusing, before we realized as a society that we had to do something differently. On a personal level, despite my beliefs, it took money coming out of my pocket to change my habits. My city has a limit on how much I can throw away without paying extra, but no limit on how much I can recycle. That did it.
Big data analytics platforms are still very much in the early days as an industry. Data centers using Hadoop to distribute compute loads across clusters of commodity servers are still relatively rare. But they’re becoming less so as data volumes and analytics demands grow, and as the need for inexpensive compute power becomes more pervasive.
January, the birth of a new year, is when we think about how to make the new year better than the last. Now is the birth of a new technology revolution, and the time to resolve to learn from the costs of the data centers already in production. Today’s data centers waste over 80 percent of their compute power, sitting idle between peak loads and running single-threaded serial software designed for the machines of the last millennium. That’s the kind of bad habit we simply can’t afford if the new big data technology strategies are going to become widespread. That much waste is not only unforgivable in the microcosm of a company’s IT budget, but unsustainable in the macrocosm of the world economy.
We don’t have that much energy to waste.
It may be idealistic of me, but I would love to see more efficient compute methods implemented BEFORE this new technology gets entrenched in hundreds of thousands of businesses. It’s far easier to establish good habits from the beginning than it is to break bad old ones. And, in order to break entrenched habits, we first have to pay a high cost. Let’s save a lot of money, energy, and time, and get off on the right foot at the beginning.
The Hadoop distributed computing concept is inherently parallel and should, therefore, be friendly to better utilization models. But parallel programming beyond the basic, embarrassingly parallel data level requires different habits. MapReduce is already heading us in the wrong direction: most Hadoop data centers are doing no better on usage levels than traditional data centers. A tremendous amount of energy and compute power is still going to waste.
YARN gives us the option to run other compute models on Hadoop clusters: better, more efficient compute models, if we can create them. It’s up to the software industry to get our heads out of the sand. If we don’t, perhaps the open source community will come up with the programming models that take off, so that new distributed analytics data centers can be built from the ground up with the right habits.
Use every type of parallelism possible. Use all available hardware, not just one core per node, and balance the work evenly so that even peak analytics loads don’t require a bunch of extra hardware. Use it all, not 20%.
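To make that habit concrete, here is a minimal sketch in Python using only the standard library’s multiprocessing pool. The `analyze` function and the record counts are hypothetical stand-ins, not taken from any real Hadoop workload; the point is simply the contrast between running a per-record job on one core while the rest of the node idles, and fanning the same work out across every core.

```python
# Sketch: use all the cores on a node, not just one.
# The work function below is a hypothetical stand-in for a
# CPU-bound per-record analytics task.
from multiprocessing import Pool, cpu_count

def analyze(record: int) -> int:
    # Stand-in for a per-record computation.
    return sum(i * i for i in range(record % 1000))

def serial(records):
    # One core does everything; the other cores sit idle.
    return [analyze(r) for r in records]

def parallel(records):
    # Fan the same work out across every available core.
    with Pool(processes=cpu_count()) as pool:
        return pool.map(analyze, records)

if __name__ == "__main__":
    records = list(range(10_000))
    # Same results either way; the parallel version just
    # stops wasting the hardware we already paid for.
    assert serial(records) == parallel(records)
```

The same idea scales up: a scheduler that balances work evenly across all cores of all nodes is what keeps peak loads from demanding a pile of extra hardware.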
Waste not, want not.