
Improving Downtime And Disaster Recovery With Virtualization

Like the cloud and the IoT, virtualization isn't going away.

If you know what cloud-based computing and the Internet of Things are, you probably remember a time when the terms were eye-rolling buzzwords rather than concrete, tangible resources. You can likely recall the grayscale IBM commercials pushing the concepts in the mid-2000s, when there was a marked effort to educate the world about the coming sea change in how we connect devices and use this new, omnipresent connectivity to do more, and do it better. These campaigns were designed to demystify concepts rather than obscure them, because the more people who understand a tool, no matter how abstract or intimidating its technical term is, the more useful it becomes to us all. If there is one term right now that fits this bill, it would be “virtualization”.

Manufacturing Business Technology has some great resources on the technical ins and outs of server virtualization in manufacturing (here and here), but, speaking from a boots-on-the-ground perspective at facilities, there is still a sizeable chasm in owners' and operators' comfort with and understanding of the concept of virtualization. Virtualization, in its most basic form, refers to the act of creating a virtual version of something, typically a hardware platform, operating system, storage device, or computer network resource.

These complete virtualized instances, called virtual machines or VMs, act like real computers with their own operating systems; they simply run as instances on whatever machine hosts them. Everything executed on these machines is separated from the underlying hardware resources of the host computer. Think of it this way: virtualization allows you to run a complete computer, including the OS, hardware, and software, as a single, standalone application on another computer. So why exactly does this matter to you? Because virtualization cuts downtime and saves money.

Disaster Recovery

If you work long enough at any automated manufacturing or process facility, you will inevitably experience a system failure. Whether the system is built on discrete controls, distributed controls, an execution system, or a centralized supervisory control system, the failure of any one component along the chain can bring production to a halt. These failures can mean significant downtime and expense on top of lost productivity and, potentially, a lost batch. And failures like these occur for a whole host of reasons, ranging from actual hardware failure through software problems to operator error.

On a traditional system, your point of control is a PC running an operator interface of some type. When this point in the system fails, it takes several man-hours to bring the system back online. Replacing a PC and installing and reconfiguring an OS, OI/HMI software, a historian client, and an I/O server can take four to seven man-hours depending on your skill. Using a VM can cut this potential downtime to between one half and one quarter of that, depending on how your system is configured.
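The arithmetic behind that claim can be sketched directly. The four-to-seven-hour rebuild range comes from the paragraph above; the half and quarter multipliers are simply applied to it:

```python
# Illustrative arithmetic only: the 4-7 hour rebuild range comes from the
# article; the "one half to one quarter" reduction is applied directly.

def recovery_hours(rebuild_hours: float, reduction: float) -> float:
    """Estimated recovery time after cutting the rebuild time by `reduction`."""
    return rebuild_hours * reduction

# Traditional rebuild: reinstall OS, OI/HMI, historian client, I/O server.
traditional_low, traditional_high = 4.0, 7.0

# Best case: VM restore takes one quarter of the shortest traditional rebuild.
best = recovery_hours(traditional_low, 0.25)
# Worst case: VM restore takes one half of the longest traditional rebuild.
worst = recovery_hours(traditional_high, 0.5)

print(f"VM recovery estimate: {best:.1f} to {worst:.1f} hours")
```

In other words, a four-to-seven-hour rebuild becomes roughly a one-to-three-and-a-half-hour restore.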

This is because most of the traditionally time-consuming parts of disaster recovery, reinstallation and configuration in particular, are eliminated. They are eliminated because everything is already configured on your virtual machine, which, as an instance, remains unchanged. Rather than spending three hours reconfiguring OI/HMI software and an I/O server, you simply reinstall a VM player (VMware or VirtualBox, for example) and load your VM, a task that can be completed in minutes by comparison.
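As a hedged sketch of what "load your VM" can look like in practice, the snippet below assembles (but deliberately does not execute) the two VirtualBox `VBoxManage` calls that import an exported appliance and start it. The appliance file name and VM name are hypothetical placeholders, not from the article:

```python
# Sketch only: builds the VirtualBox CLI commands to restore an exported
# control-system VM. The .ova path and VM name are hypothetical examples.

def restore_commands(appliance: str, vm_name: str) -> list[list[str]]:
    """Assemble, but do not run, the VBoxManage calls to import and start a VM."""
    return [
        # Import the exported appliance (the pre-configured OI/HMI image).
        ["VBoxManage", "import", appliance],
        # Start it without a GUI window; drop "--type headless" for a console.
        ["VBoxManage", "startvm", vm_name, "--type", "headless"],
    ]

cmds = restore_commands("operator-station.ova", "operator-station")
for cmd in cmds:
    print(" ".join(cmd))
# To actually execute each step: subprocess.run(cmd, check=True)
```

Keeping the commands as data rather than running them immediately lets an operator review the restore procedure before touching a production machine.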

While this approach is a much faster way of bringing your system back online, it actually isn't the fastest. The example above is a hybrid system: a standalone PC running a VM. The next logical step is to replace standalone PCs with a server and use thin clients to access your VMs. While the hybrid system eliminates the need to reconfigure an OI/HMI and I/O server when the system fails, a fully virtualized system accessed from a thin client eliminates the need to reconfigure and reinstall your operating system and historian client as well.

In a thin-client setup, your VM runs on your server and is viewed and controlled remotely over VNC. VNC (virtual network computing) is a graphical system designed for remotely controlling desktops, nothing more. If a piece of your control system fails, you simply replace whatever hardware caused the failure and VNC back to your server. In this example, you can have your system back online in the time it takes to replace the damaged hardware and power it up. This is not only because your control system is a virtual machine, but because it is accessed remotely while running on-site, meaning it is well protected from the typical causes of system failure. Further, running on a server opens up a variety of options for configuring redundancy.

Return on Investment

As with anything in business, embracing a new technology requires carefully weighing investment gains against investment costs. It also requires a level of comfort with the technology that reduces the intimidation factor. Virtualization is one of these technologies. The upfront costs of virtualizing a system are higher than those of traditional methods, and the initial virtualization of your system does require more time. However, a virtualized system can significantly reduce your future costs.
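That cost weighing can be made concrete with a simple break-even sketch. Every figure below is a hypothetical placeholder chosen for illustration, not a number from the article:

```python
# Hypothetical break-even sketch: all dollar figures and hour counts are
# illustrative placeholders, not data from the article.

def breakeven_incidents(upfront_cost: float,
                        hours_saved_per_incident: float,
                        downtime_cost_per_hour: float) -> float:
    """Failures needed before downtime savings repay the extra upfront cost."""
    return upfront_cost / (hours_saved_per_incident * downtime_cost_per_hour)

# Example: $20,000 extra upfront for virtualization, 4 man-hours saved per
# failure, and $2,500 per hour of lost production.
incidents = breakeven_incidents(20_000, 4, 2_500)
print(f"Break-even after {incidents:.0f} failures")
```

With those placeholder numbers, the extra upfront spend pays for itself after the second failure; plugging in a facility's own downtime cost makes the comparison specific.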

Virtualization is a method for preventing failure and improving disaster recovery, but it can do much more. For example, a completely virtualized control system can reduce your need for expensive annual software licenses at each point of control. These systems can also be converted dynamically, meaning you do not need to bring your plant down to upgrade, and utility tools exist to export and virtualize your current OS in its entirety, making the process straightforward.

Virtualization as a manufacturing tool is powerful because it is hardware-independent, and that matters for systems that cannot fail. If you rely on lots of electronics and large interconnected machines, then you understand how important it is to eliminate potential points of failure; virtualization effectively does this in the areas that have historically been the hardest to control for. Like the cloud and the IoT, virtualization isn't going away. It has proven benefits and superior utility. It is only a matter of time before it graduates from an uncomfortable buzzword to standard industry knowledge.

Jacob Haugen is Communications Director at Portland Engineering, Inc. (PEI) and writes on behalf of the Control System Integrators Association (CSIA).
