The Industrial Internet of Things (IIoT) will add a new requirement to enterprise networks: responding in real-time.
Before we can discuss the impact of real-time requirements on a network’s infrastructure, we need to come to a definition of what real-time is.
What is Real-Time?
The definition varies, but generally, a real-time system is one that provides a smooth, seamless experience. This is certainly the case when watching HDTV or listening to streaming music. The video frames and audio samples arrive quickly enough, and at the right time, that the viewer or listener cannot distinguish them individually. This definition also applies to digital control systems, such as those implemented on the factory floor or in a flight control system. In those applications, if the digital control system does not respond quickly enough, bad things can happen. But that does not really answer the question.
How fast must a digital system respond to be characterized as real-time? That depends.
For movies in a theater, the frame rate is 24 frames per second (fps), or a new frame every 42ms. When the individual frames are presented at 24 fps or faster, the viewer experiences smooth, uninterrupted movement, as they would in the real world. To enhance the experience even further, some filmmakers are shooting movies at 48 fps. For CD-quality audio, sampled at 44.1kHz, a new sample of the music arrives every 22.7µs, although an audiophile might argue the sample rate should be faster. A digital flight control system takes an action 20 times per second, or every 50ms.
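The sample periods above all follow from the same arithmetic: the interval between frames or samples is the reciprocal of the rate. A short sketch:

```python
def period_ms(rate_hz):
    """Interval between consecutive samples or frames, in milliseconds."""
    return 1000.0 / rate_hz

print(f"24 fps film:       {period_ms(24):.1f} ms/frame")          # ~41.7 ms
print(f"48 fps film:       {period_ms(48):.1f} ms/frame")          # ~20.8 ms
print(f"44.1 kHz CD audio: {period_ms(44_100) * 1000:.1f} us/sample")  # ~22.7 us
print(f"20 Hz flight control: {period_ms(20):.1f} ms/cycle")       # 50.0 ms
```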
There is a difference, though, between viewing data in real-time and acting on data in real-time. While listening to music or watching a movie, one consumes the content in real-time, but one does not have to take an action in real-time. A digital control system is more complex than that. Not only does it need to take a sample from a sensor at the appropriate sample rate, it also must analyze the sampled data and possibly provide a response within that sample period. An example of a digital control system operating in real-time is the fly-by-wire control system of the F-35 Lightning II.
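To make the sample-analyze-respond cycle concrete, here is a minimal, hypothetical sketch of such a loop. The sensor, control law, and actuator are placeholders; the point is the structure: all three steps must complete within each sample period, and a cycle that overruns its deadline is the failure case.

```python
import time

SAMPLE_PERIOD = 0.050  # hypothetical 50 ms cycle, as in a 20 Hz flight controller

def read_sensor():
    # Placeholder for a real sensor read (e.g. an IMU sample).
    return 0.0

def compute_response(sample):
    # Placeholder for the control law applied to the sample.
    return -sample

def actuate(command):
    # Placeholder for driving an actuator with the computed command.
    pass

def control_loop(cycles):
    """Run the sample/analyze/respond cycle, counting any deadline overruns."""
    deadline_misses = 0
    next_deadline = time.monotonic() + SAMPLE_PERIOD
    for _ in range(cycles):
        actuate(compute_response(read_sensor()))
        remaining = next_deadline - time.monotonic()
        if remaining < 0:
            deadline_misses += 1   # responded too late: the real-time failure case
        else:
            time.sleep(remaining)  # idle until the next sample is due
        next_deadline += SAMPLE_PERIOD
    return deadline_misses
```

In a production controller this loop would run on a real-time OS with a hardware timer rather than `time.sleep`, but the deadline bookkeeping is the same idea.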
Process Control is Generating Real-Time Data
Several concurrent technological advances make deploying IIoT practical: sensors, Moore’s Law, and the ubiquity of bandwidth. Without them, IIoT and the linkage of the factory floor to the enterprise data center would not be possible.
- Sensors – Sensors such as microelectromechanical systems (MEMS) accelerometers, gyroscopes, and inertial measurement units (IMUs) have become small and inexpensive enough to make wide deployment practical.
- Moore’s Law – The doubling of the number of transistors in an integrated circuit every two years has resulted in small, cheap CPUs and memories. The Raspberry Pi single-board computer is an example.
- Ubiquity of Bandwidth – IIoT devices that gather data need to send that data upstream for analysis. Network connectivity is available virtually everywhere, and there is a wide range of ways IIoT devices can attach to the network: copper or fiber optic cabling, Wi-Fi, ZigBee, cellular, and many more.
The large amounts of data generated by deployed IIoT devices are not the problem. The problem is that the data will need to be analyzed and acted upon in real-time.
Data Centers and Real-Time Data
With a few exceptions, most enterprise data centers do not need to process and act on data in real-time. Although streaming services such as Netflix and Spotify are sensitive to the real-time nature of their end-users’ experience, the streams are sufficiently compressed that the real-time requirement is not a burden on their data centers.
Examples of data centers that do need to support real-time applications are those behind audio or video chat services. A telephone conversation is extremely sensitive to latency, or the delay through a network. It becomes increasingly difficult to conduct a telephone conversation in real-time when the mouth-to-ear delay is greater than about 200ms. The one-way latency using a geosynchronous satellite can be as much as 300ms. This leads to several problems, including doubletalk, or talking over one another.
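The satellite figure is easy to sanity-check from first principles: the speed of light and the altitude of geosynchronous orbit set a hard floor on the delay, before slant range and processing are added. A back-of-the-envelope calculation:

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786   # geosynchronous orbit altitude above the equator

# One-way path: ground -> satellite -> ground (straight up and down, the
# best case; real slant ranges and processing push this toward 300 ms).
one_way_km = 2 * GEO_ALTITUDE_KM
delay_ms = one_way_km / C_KM_PER_S * 1000
print(f"Minimum one-way delay: {delay_ms:.0f} ms")  # ~239 ms
```

At roughly 239ms of unavoidable propagation delay, a geosynchronous hop alone exceeds the ~200ms mouth-to-ear budget.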
Other applications that are sensitive to latency include virtual desktop infrastructure (VDI), or thin clients. The typical desktop or laptop contains sufficient processing power, memory (RAM), and disk storage to support the user’s applications. Everything is self-contained and can function without being connected to a network. With VDI devices, most of the processing power, memory, and disk storage reside elsewhere, either in the enterprise’s data center or in the cloud. An example of a VDI device, or thin client, is Google’s first generation of Chromebooks. Thin clients require low-latency networks because the time delay between the thin client and its resources affects the perceived responsiveness. Users expect a thin client to be as responsive as a typical desktop or laptop.
Improving Response Time to Real-Time Data
There are several things network architects and network managers can do to prepare their data centers to support real-time applications and IIoT.
- Use a low-latency infrastructure – Different types of media have different latencies. The latency through an optical fiber link may be about 100ns and through a direct attach copper (DAC) cable assembly approximately 300ns, whereas the latency of a 10GBASE-T link is on the order of 2 to 2.5µs. This may not seem like much of a difference, but depending on the network’s architecture and the number of hops, the latency of the media can add up. Re-examining the media used in the network is one of the easier ways to lower latency.
- Upgrade the network’s speed – Although increasing the network’s speed does not shorten the media’s inherent latency, it does reduce the serialization delay: the time it takes to clock a packet onto the wire and receive it at the other end.
- Use lower latency equipment – Switches, routers, and servers all have a latency associated with them. One can improve a data center’s responsiveness by selecting lower latency equipment.
- Adopt a spine-leaf architecture – A traditional network architecture has three layers: access switches, followed by aggregation switches, and core switches. Adopting a spine-leaf architecture removes one whole layer of switching, improving latency and responsiveness by eliminating the delay that layer adds.
- Edge computing – To shorten latency, one can locate the necessary computing resources closer to the IIoT devices that are generating the data. Edge computing is a trend that runs contrary to concentrating computing resources in a few very large, hyperscale data centers.
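Several of the options above can be compared in one simple model: one-way path latency is roughly the number of hops times the sum of media latency, switch latency, and serialization delay per hop. The figures below are illustrative, not vendor measurements, and the switch latency and frame size are assumptions chosen for the example.

```python
# Illustrative per-link media latencies in microseconds, matching the
# rough ranges discussed above.
MEDIA_LATENCY_US = {
    "optical":   0.1,   # ~100 ns per optical fiber link
    "dac":       0.3,   # ~300 ns per DAC link
    "10gbase-t": 2.5,   # ~2.5 us per 10GBASE-T link
}

def serialization_delay_us(frame_bytes, speed_gbps):
    """Time to clock one frame onto the wire at a given line rate."""
    return frame_bytes * 8 / (speed_gbps * 1000)  # bits / (bits per microsecond)

def path_latency_us(hops, media, switch_latency_us, frame_bytes, speed_gbps):
    """One-way latency: per-hop media + switch + serialization, times hop count."""
    per_hop = (MEDIA_LATENCY_US[media]
               + switch_latency_us
               + serialization_delay_us(frame_bytes, speed_gbps))
    return hops * per_hop

# Three-tier (access -> aggregation -> core) on 10GBASE-T vs a two-hop
# leaf-spine path on optical links; 1500-byte frames, assumed 1 us switches.
three_tier = path_latency_us(3, "10gbase-t", 1.0, 1500, 10)
leaf_spine = path_latency_us(2, "optical", 1.0, 1500, 10)
print(f"three-tier, 10GBASE-T: {three_tier:.1f} us")  # ~14.1 us
print(f"leaf-spine, optical:   {leaf_spine:.1f} us")  # ~4.6 us
```

Even with these toy numbers, the model shows why the list above works: changing media, removing a switching layer, and raising link speed each shrink a different term of the same sum.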
Ready for IIoT
The result of deploying IIoT is not just that it will generate vast amounts of data; it is that some of that data will need to be acted upon in real-time. The definition of real-time depends on the application. Depending on how fast one needs to respond, the typical enterprise data center may not be able to support real-time IIoT applications. If the latency through a network and data center is too long to support the desired IIoT application, there is a range of options to entertain, from something as straightforward as changing out networking infrastructure all the way to adopting edge computing.
Tom Kovanic is manager of business development at Panduit.