Why Big Data Is Meaningless

Mon, 06/17/2013 - 9:40am
Michael Rothschild, Founder and CEO, Profit Velocity Solutions

Vendors, managers and bloggers often cite the benefits of “big data,” though most miss the mark in understanding exactly what it is used for, and how it is analyzed. By itself, big data is just raw material, and without a focus on bottom-line results, it is not only meaningless — it can actually be harmful, diverting resources away from a company's core mission.

Often, big data focuses too much on the collection and parsing of data, and too little on how it can be used to make better decisions in specific areas of a business. The process is simply to collect large amounts of data, with the hope that something useful will emerge from analyzing it at some point in the future.

Manufacturers are some of the biggest consumers of big data, often spending millions of dollars on implementations that ultimately have a high rate of failure. The failure isn't in the technical ability to collect big data, but rather in not having a specific decision-support goal at the beginning. It is often compounded by the inability to convert the raw data into valuable and actionable information once it has been collected. Without a reliable and consistent ability to process, analyze, understand and make decisions on the information contained within the mountains of data, resources spent on accumulating that data are money down the drain.

It's only when big data is transformed into useful, actionable information — and gathered with a specific goal in mind — that it can be used on a daily basis to improve business. At that point, big data becomes information that can jumpstart the process of driving a tighter focus on organizational effectiveness and profitability. Ultimately, this can result in significant improvements to the operational excellence and profit optimization of manufacturers worldwide.

The question then isn't "Why is big data meaningless?" but rather, "How do you make big data meaningful?" New BI systems and interesting software-as-a-service offerings from innovative startups are redefining big data, how it is collected, and how it is interpreted. The true value isn't in the collection of big data, but in its analysis, transformation and application in decision making. To be useful, the "big data last mile" has to be considered — that is, how that big data is converted into useful, actionable information. How does it become part of continuous process improvement to make better decisions that improve business operations, increase profits and increase shareholder return?

The component of continuous process improvement is key to such success. Any company can spend countless hours plugging last quarter’s data into an Excel spreadsheet to be analyzed in graphs and charts, but such labor-intensive snapshots are a grossly inefficient use of resources. To be of value, the system processing a company’s big data has to be capable of automatically ingesting data in real time, producing both current and predictive analytics that drive real and measurable profit optimization.

An outside analogy

An analogy can be drawn to a nation’s methods of gathering intelligence, which are in truth a lot more mundane than Hollywood portrays. What is termed the “intelligence cycle” involves four stages: collection, processing, analysis and policy implementation. In the past two decades, the United States has become the unparalleled leader in electronic and signals intelligence collection. With this ability, the issue is no longer the lack of intelligence, but rather the inability to process and analyze the significance of this massive amount of incoming data, and to make appropriate policy changes to reflect the value of the information collected.

While such a world may appear distant from the problems faced by large, asset-intensive manufacturers, the reality is that the concept of big data being meaningless without valuable analysis is essentially the same. 

False solutions

Many ERP systems and Business Intelligence products claim to provide solutions to this inherent problem by offering systems such as visual analytics, self-service BI and similar methods to display data in ways a user can interpret more easily. However, these solutions face the same fundamental issue: data without analysis is just that, empty data. While there are obvious benefits to displaying big data visually, many of these systems do not provide a mechanism that shows decision-makers how alternative courses of action or specific decisions can directly affect a company’s bottom-line profitability.

Fear not

The good news is that with recent developments and new software platforms, there are feasible solutions that offer these crucial elements. Beyond just being able to plug in the data and have it visually appear on a chart, these new systems are able to offer scenario-based, “what if” planning. An executive can alter inputs and pricing in order to see alternative courses of action in product, customer and asset mix that will produce significant unrealized profits.
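This kind of scenario-based planning can be sketched in a few lines. The following is a minimal, hypothetical illustration (not any vendor's actual product or API): each "what if" scenario varies price, unit cost or volume, and the resulting profit is compared against a baseline. All figures are invented for illustration.

```python
# A minimal "what-if" planning sketch. All numbers are invented
# for illustration; a real system would pull them from live data.

def profit(price, unit_cost, volume):
    """Gross profit for one product under one scenario."""
    return (price - unit_cost) * volume

baseline = profit(price=100.0, unit_cost=70.0, volume=10_000)

# Alternative courses of action, expressed as scenarios
scenarios = {
    "raise price 5%, lose 3% volume": profit(105.0, 70.0, 9_700),
    "cut unit cost 4% via new supplier": profit(100.0, 67.2, 10_000),
}

print(f"baseline profit: ${baseline:,.0f}")
for label, p in scenarios.items():
    print(f"{label}: ${p:,.0f} ({p - baseline:+,.0f} vs baseline)")
```

Even a toy model like this shows the value of the approach: the decision-maker sees the profit impact of each alternative side by side, rather than a static chart of last quarter's results.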

Companies are missing a major opportunity by focusing too much on the data gathering side and not enough on the analytical or implementation sides. The ability to access and analyze this large amount of data presents a significant opportunity to bring in new metrics that could not have been analyzed before, and to yield much more accurate insights that can be used to improve the bottom line in a meaningful way.

For example, product unit margin is a common and useful metric, but may yield false positives by itself. The inclusion of the new metric of profit-per-unit of time, or Profit Velocity, provides a deeper analysis of the big data for many complex, asset-intensive manufacturers. It can reveal hidden opportunities previously inaccessible to decision makers looking to increase profitability.
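A small worked example makes the false-positive risk concrete. The numbers below are invented for illustration, but they show how a product with the higher unit margin can still be the worse choice when machine capacity is the scarce resource.

```python
# Hypothetical illustration: unit margin alone can mislead when
# machine time is the constraint. All figures are invented.

products = {
    # name: (margin per unit in $, machine minutes per unit)
    "A": (50.0, 30.0),   # high margin, slow to produce
    "B": (30.0, 10.0),   # lower margin, fast to produce
}

for name, (margin, minutes) in products.items():
    profit_per_hour = margin * (60.0 / minutes)
    print(f"Product {name}: ${margin:.0f}/unit margin, "
          f"${profit_per_hour:.0f}/machine-hour")

# Product A: $50/unit margin, $100/machine-hour
# Product B: $30/unit margin, $180/machine-hour
```

Ranked by unit margin, Product A looks best; ranked by profit per machine-hour, Product B earns nearly twice as much from the same scarce capacity.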

Start with the end in mind

Going forward, manufacturers seeking to extract maximum value from investing in big data projects should work backwards and ask a few fundamental questions:

  1. What business processes or decisions do you want to improve? (Make sure to get management buy-in at this level.)
  2. How will these decisions improve the business? Customer profitability? Product rationalization? Capacity planning?
  3. What are you trying to maximize? Profits? Asset utilization? ROI? Revenues?
  4. What are the most meaningful metrics to measure progress toward those goals? Unit margins? Profit-per-hour of machine time?
  5. What types of analysis do you need to perform to expose the data, explore ‘what if’ scenarios and iterate through alternatives to maximize profitability?
  6. And then, finally, what types of data do you need to collect in order to feed the above analysis and decision-making?

Big data projects will only be successful when teams first focus on the desired end result. Using that as a filter for the data collection and aggregation will lead to success. Focusing just on massive data collection will lead to confusion, wasted time and resources, and missed deadlines as projects change with each new request for data or disagreements surface around different metrics.

Using the processes above will enable the utilization of big data in a useful and meaningful way. Without such a solution, big data is nothing more than meaningless empty noise.

Michael Rothschild is the Founder and CEO of Profit Velocity Solutions, a San Francisco-based technology company that has pioneered a unique platform for continuous profit improvement called PV Accelerator™. PV Accelerator provides manufacturers with entirely new capabilities in the planning and control of profitability. One of its key modules, the PV Planner provides unique “what if analysis” capabilities that evaluate the profit impact of alternative prices, costs, productivity, volume, mix, and capacity scenarios.
