Not All Data is Created Equal

Going beyond the initial collection of connected device or sensor data to derive actionable insights is essential in today’s highly competitive, digital business landscape. For example, in order to optimize operational efficiency, satisfy customer demand and maintain security, manufacturing organizations must continually monitor and understand all of the data their machines are constantly producing.

The use case for machine data varies significantly depending on a manufacturing organization's vertical. For many manufacturers, operational processes are tightly controlled, so leveraging machine data to gain insights is fairly straightforward. For those on the light industrial (e.g., agriculture or food processing) and heavy industrial (e.g., oil, mining or steel) sides, however, processes are perpetually in flux, making it difficult to do anything other than simply collect machine data.

Thankfully, there are tactics light and heavy industrial manufacturers can employ to put their unique breed of machine data to work. Consider the four best practices below to improve operational efficiency, reduce costly downtime and even implement predictive maintenance initiatives.

1. Assess the data quality.

As a first step, take a step back and ask, "Do we have enough data here to do anything meaningful?" Often, light and heavy industrial manufacturers only have a single source of data, such as a vibration sensor, or perhaps their machines are leased and therefore frequently changing location, providing little consistent data. Another common issue is the lack of existing datasets that indicate patterns of machine failure, because catastrophic failures are, by nature, rare events.

To make existing data more workable, try building out a wider dataset by incorporating additional, similar machines and looking at a shorter time period. In instances of single sensor sources, lean on subject matter experts to decipher patterns and define phases of machine cycles and performance for each. To establish a more robust picture of instances of machine failure, introduce a wider array of sensors or data sources. For instance, it might help to incorporate ERP data to better quantify outputs or leverage other machine data to build a bigger picture of a production line.
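The pooling tactic above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical set of vibration readings from three similar machines; the machine names, values and window length are invented for the example, not drawn from any real dataset.

```python
from datetime import datetime, timedelta

# Hypothetical per-machine vibration readings: (timestamp, RMS amplitude).
# The machine names and values below are illustrative placeholders.
readings = {
    "press_a": [(datetime(2023, 1, d), 0.42 + 0.01 * d) for d in range(1, 31)],
    "press_b": [(datetime(2023, 1, d), 0.39 + 0.01 * d) for d in range(1, 31)],
    "press_c": [(datetime(2023, 1, d), 0.44 + 0.01 * d) for d in range(1, 31)],
}

def pooled_window(readings, end, days):
    """Pool readings from similar machines over a shorter, recent window."""
    start = end - timedelta(days=days)
    pooled = [
        (machine, ts, value)
        for machine, series in readings.items()
        for ts, value in series
        if start <= ts <= end
    ]
    pooled.sort(key=lambda row: row[1])  # order the combined rows by timestamp
    return pooled

# One wider dataset: 3 machines over an 8-day window instead of
# one machine over a month.
dataset = pooled_window(readings, end=datetime(2023, 1, 30), days=7)
print(len(dataset))  # 24 rows: 3 machines x 8 days
```

The trade-off is that pooling assumes the machines are similar enough that their readings are comparable; that judgment is exactly where the subject matter experts mentioned above come in.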

2. Examine the data collection process.

Once the quality of the data has been assessed, it's time to analyze the data collection process and recognize any limitations it might introduce. Data collection will look very different for a light industrial agricultural manufacturer than for a heavy industrial steel manufacturer, for example. It's not cost effective to blanket a 1,000-acre farm with a network of sensors, so the farm will likely need to rely on sensor stations and have tractors pass by to harvest data in batches. In a steel factory, by contrast, machines are kept close together, so harvesting data is easier, but heavy industrial manufacturers face other risks, such as network bandwidth limitations, security breaches and general interference from the metal on the factory floor.

3. Determine the data consistency and velocity.

In conjunction with examining the data collection process and any complications it might introduce, it's important to recognize the consistency and velocity at which data is being produced. In scenarios where data is being pulled in from different locations, for instance, the data quality will likely be highly inconsistent. Consider the light industrial agricultural manufacturer: its data is unusable for roughly half the year due to the seasonality of the business, and because a farm's business cycle is measured in months, building a sufficiently full picture will require years of data collection. For heavy industrial manufacturers, however, the data velocity will likely be higher, and depending on the specific vertical, the data consistency could be fairly reliable.

4. Confirm the data value.

One of the most important steps in preparing to leverage machine data is confirming its value. More data is not always better, especially considering the cost of acquiring and storing large amounts of it. For example, a well failure in an oil field and unplanned downtime with a CNC machine produce very different economic impacts. A CNC machine may generate thousands of data readings per second, with a downtime event only resulting in $5K to $10K in costs, while an oil well may generate a fraction of that data (say, five to 10 readings every few minutes), with a downtime event costing upwards of $250K per hour. An economic case can be made for both situations, but using data to anticipate future outages of the oil well is clearly the more cost-effective scenario.
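The economics above can be made concrete with some back-of-the-envelope arithmetic. The event frequencies and durations below are assumptions invented for the sketch (only the per-hour cost figures echo the article's examples), so treat the output as illustrative, not as measured values.

```python
def annual_downtime_cost(events_per_year, hours_per_event, cost_per_hour):
    """Rough annual exposure to unplanned downtime, in dollars."""
    return events_per_year * hours_per_event * cost_per_hour

# CNC machine: frequent but cheap stoppages.
# Assumed: 12 one-hour events per year at $7,500/hour (midpoint of $5K-$10K).
cnc = annual_downtime_cost(events_per_year=12, hours_per_event=1,
                           cost_per_hour=7_500)

# Oil well: rare but very expensive outages.
# Assumed: 2 day-long events per year at $250K/hour.
well = annual_downtime_cost(events_per_year=2, hours_per_event=24,
                            cost_per_hour=250_000)

print(cnc)   # 90000
print(well)  # 12000000
```

Under these assumptions the well's downtime exposure is over a hundred times the CNC machine's, despite the well producing far less data, which is why predictive maintenance pays off there first.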

No matter the data’s source(s), collection process, consistency or value, there’s a path forward for both light and heavy industrial manufacturers that seek to make their existing machine data actionable. The key is recognizing the data’s variables, rather than just blindly capturing all data and expecting instantly productive insights. Work to define specific use cases for your data, and lean on any available subject matter experts in your organization to identify the most promising datasets and patterns. In doing so, manufacturers can obtain a more realistic and full view of their machine data, and apply that intelligence to improve their business operations in a scalable, cost-effective manner.

Kyle Seaman works in Product and Partnerships at Sentenai.
