How To Implement Virtualization In Process Manufacturing

Every day, process engineers are challenged with tight regulations, data security issues and aging plant infrastructures. In addition, more and more plant operators are insisting on easier access to plant floor operations, control systems and production data. To meet these demands, engineers need to get the most out of their IT-based plant assets, and virtualization can help.

The benefits of virtualization apply to both the IT enterprise and industrial automation. These include hardware consolidation, decreased energy consumption and footprint, increased fault tolerance, high availability, improved uptime, application load-balancing, rapid disaster recovery, extended lifecycles and more.

But by far the most important benefit of virtualization, at least for the process industry, is hardware independence. With virtualization, users are no longer tied to specific hardware, and virtual machines can extend a software lifecycle beyond 10 years. With this benefit in mind, process engineers are asking for advice on how best to implement virtualization.

A Virtual Infrastructure

A complete virtual solution consists of both software and hardware components. Virtualization software, such as that from VMware, a global leader in virtualization and cloud infrastructure, decouples the physical hardware of a computer from its operating system (OS) and software applications, creating a pure software instance of the former physical computer, commonly referred to as a virtual machine (VM). A VM behaves exactly like a physical computer: it contains its own virtual CPU, RAM, hard disk and network interface card, and runs as an isolated guest OS installation on the host machine.

As seen in Figure 1, typical virtualization architectures consist of physical components including servers (hosts), storage arrays, Ethernet networks, a management PC, and desktop clients.

Management of the virtual environment is done through the VMware vSphere Client software. Physical hosts allow for some level of direct management, but for the most part are designed for headless operation. The vSphere Client provides a GUI for managing all components of the user's topology from a single point of contact and leverages VMware vCenter Server as the backbone management service.
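
The same management plane is also scriptable through the vSphere Web Services API. Below is a minimal sketch using the open-source pyVmomi Python bindings; the vCenter hostname and credentials are placeholders, and disabling certificate validation is for lab use only:

```python
# List every VM in the vCenter inventory via the vSphere API (pyVmomi).
# "vcenter.plant.local" and the credentials are placeholders for your site.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.plant.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # A container view recursively collects all VirtualMachine objects.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(f"{vm.name}: {vm.runtime.powerState}")
finally:
    Disconnect(si)
```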

Operators are able to access their workstations through thin clients, traditional desktops, or even tablets. One of the major benefits of virtualized operator workstations is that critical hardware is no longer exposed to plant conditions. When damage occurs to a traditional workstation, engineers often spend a significant amount of time rebuilding software, the OS and application code. If a thin client is damaged, however, it can be easily replaced without any impact to the remote virtual machine. If configured properly, it will not be apparent to an operator that their workstation is virtualized.

Thin client technology can also provide added benefits for engineering. An engineer can set up permissions in the virtual environment so that their credentials provide access to a number of virtual machines.

For example, an engineer may need to access code for a particular application that is not available on the local operator workstation thin client. If the engineer were to log in to the thin client, the infrastructure software would recognize that the engineer has permission to access a number of different virtual machines, one of which is an engineering workstation. That thin client is thus converted from an operator workstation to an engineering workstation with nothing more than a change of login.
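
As a hedged illustration of that idea (plain Python with hypothetical role and workstation names; a real deployment would enforce this through vSphere permissions or a connection broker, not application code):

```python
# Hypothetical sketch: the same thin client presents different virtual
# workstations depending on the credentials used at login.
# Role names and VM names below are illustrative only.
ENTITLEMENTS = {
    "operator": ["OperatorWS-01"],                      # a single fixed workstation
    "engineer": ["OperatorWS-01", "EngineeringWS-01"],  # operator view plus engineering VMs
}

def desktops_for(role):
    """Return the virtual workstations a login role may open."""
    return ENTITLEMENTS.get(role, [])

print(desktops_for("operator"))  # ['OperatorWS-01']
print(desktops_for("engineer"))  # ['OperatorWS-01', 'EngineeringWS-01']
```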

Hardware Components

Hosts (Physical Servers)

Hosts run the hypervisor and provide the CPU and memory resources for each of the VMs. In a typical three-server cluster, if one server fails, the system remains in a protected state across the remaining two servers. This also provides the opportunity to take one server offline for maintenance while maintaining protection. In both of these scenarios, the servers must be sized so that two of them can provide resources for the full system. VMware also makes it simple to scale the system upward by adding servers in the future to provide additional resources.
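
A back-of-the-envelope check makes this sizing rule concrete. The sketch below is plain Python; the VM list and per-VM figures are illustrative assumptions, not vendor guidance:

```python
# N+1 sizing check: after losing one host, the remaining hosts must
# still cover the aggregate VM demand. All figures are placeholders;
# substitute the characterized data for your own system.
VMS = [
    {"name": "EngineeringWS", "vcpu": 2, "ram_gb": 8},
    {"name": "OperatorWS-01", "vcpu": 2, "ram_gb": 4},
    {"name": "OperatorWS-02", "vcpu": 2, "ram_gb": 4},
    {"name": "Historian",     "vcpu": 4, "ram_gb": 16},
]
HOSTS = 3             # physical servers in the cluster
CORES_PER_HOST = 8    # usable cores per host (conservative 1:1 vCPU-to-core)
RAM_PER_HOST_GB = 32  # usable RAM per host

need_cores = sum(vm["vcpu"] for vm in VMS)
need_ram = sum(vm["ram_gb"] for vm in VMS)
surviving = HOSTS - 1  # tolerate one host failure

ok = (need_cores <= surviving * CORES_PER_HOST and
      need_ram <= surviving * RAM_PER_HOST_GB)
print(f"demand: {need_cores} vCPU, {need_ram} GB RAM")
print(f"capacity after one failure: {surviving * CORES_PER_HOST} cores, "
      f"{surviving * RAM_PER_HOST_GB} GB -> {'OK' if ok else 'undersized'}")
```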

Data Storage

Storage for the system will either be local to the physical host or on a separate network storage device that is shared between hosts. To take full advantage of a virtualized system, the user needs a shared storage device. For systems that require only a single host running a few virtual machines, a local storage solution is more cost effective.

When sizing storage, the input/output operations per second (IOPS) and overall storage capacity of the target process control system must be determined.

Data storage on a storage area network (SAN) via the iSCSI protocol is highly recommended. iSCSI provides a reasonable compromise between cost and IOPS performance. Fibre Channel can provide much higher IOPS performance, roughly three times that of iSCSI, but may be cost prohibitive for many users. Direct-attached storage can only be connected to a single host and is not recommended, as it cannot make use of many VMware features.
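
As a rough worked example of that sizing step (plain Python; the per-VM IOPS and capacity figures are assumptions for illustration, not characterized data):

```python
# Rough storage sizing: total the IOPS and capacity each VM needs,
# then compare against a candidate array. All figures are illustrative.
VM_STORAGE = {
    "EngineeringWS": {"iops": 300,  "gb": 100},
    "OperatorWS-01": {"iops": 150,  "gb": 60},
    "OperatorWS-02": {"iops": 150,  "gb": 60},
    "Historian":     {"iops": 1200, "gb": 500},
}
ARRAY_IOPS = 3500  # sustained IOPS of a candidate iSCSI SAN
ARRAY_GB = 2000    # usable capacity after RAID overhead
HEADROOM = 1.3     # ~30% margin for bursts and future growth

need_iops = sum(v["iops"] for v in VM_STORAGE.values()) * HEADROOM
need_gb = sum(v["gb"] for v in VM_STORAGE.values())

print(f"need {need_iops:.0f} IOPS (with margin) and {need_gb} GB")
print("array sufficient:", need_iops <= ARRAY_IOPS and need_gb <= ARRAY_GB)
```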

Network Connections

Ethernet networks with gigabit managed switches provide the backbone for the entire virtualization system, which makes use of virtual switches and virtual Ethernet adapters to organize network traffic in the virtual environment.  

Figure 2 shows a diagram depicting how these two virtual components work with physical network interface cards (NICs) on each server. Physical NICs allow each host and the VMs it contains to access different VLANs through the use of virtual switches. To keep systems highly available and fault tolerant, redundant switches are recommended. This arrangement gives each VM access to the information it needs, when it needs it.

The number of physical gigabit NICs required depends on the number of VLAN segments. As a guide, every system requires separation of the management, VM (process control network), and storage networks. This separation increases performance and stability while minimizing potential issues with bandwidth contention.
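
A quick tally shows how the NIC count falls out of that guideline (plain Python; the redundant-pair assumption follows the switch redundancy recommendation above):

```python
# Physical NIC count: one redundant pair of gigabit NICs per
# separated network segment, per the guideline above.
SEGMENTS = ["management", "process control (VM traffic)", "storage (iSCSI)"]
NICS_PER_SEGMENT = 2  # redundant pair for fault tolerance

for seg in SEGMENTS:
    print(f"{seg}: {NICS_PER_SEGMENT} x 1 GbE")
print(f"total physical NICs per host: {len(SEGMENTS) * NICS_PER_SEGMENT}")
```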

Considerations for Designing a Solution

Virtualization is a powerful tool for improving the reliability and cutting the cost of Windows-based software applications in process plants. But before designing a virtualized system, engineers should have a general understanding of their control system architecture and sizing guidelines. It’s important to review and discuss server and storage sizing, network configuration, OS licensing and virtualization software licensing with plant managers and vendors before investing in a solution.

Some suppliers, such as Rockwell Automation, have also validated and published recommendations for virtualizing process control systems. Engineers can use the characterized data (CPU, memory, IOPS, etc.) to understand system requirements and specify system components. Most importantly, before ordering any hardware, always check the VMware Hardware Compatibility List available at VMware.com.

The Rockwell Automation Network & Security Services team provides services for the evaluation and virtualization of an existing system, or can provide system specifications for new projects.

In the future, the relative advantages of virtualization over the traditional one application, one OS, one PC approach will multiply as the technology becomes more widespread, and as PC hardware, Microsoft Windows operating systems and process control applications become further optimized for operation in virtual environments.

For more information, visit http://literature.rockwellautomation.com/idc/groups/literature/documents/wp/proces-wp007_-en-p.pdf.


Tony Baker is the Product Manager for Network Security Products. In this role, Tony has ownership of the network security product portfolio. In addition, he is responsible for driving security standards and features into other Rockwell Automation infrastructure products. He has been with Rockwell Automation for seven years in various roles, including Process System Engineering, System Engineering & Test Manager, and Product Management. In these roles, he has taken the lead on the adoption of virtualization technologies into the system architecture.