
Streamlining the Innovation Lifecycle, Part 1

True productivity and quality gains will only be realized when data and processes can move bi-directionally across both upstream and downstream activities.

By KEN RAPP, Managing Director, Analytical, Development, Quality and Manufacturing, Accelrys

For businesses that rely on scientific innovation to differentiate themselves, being first to market with a novel, more effective or attractively priced product is the Holy Grail — a chance to “own” the customer base before competitors catch up. But as the lifecycle that transforms a good idea into a viable product grows ever more complex, both in the science that drives innovation and in the increasingly global, extended nature of the research-develop-manufacture value chain, the very capabilities critical to success are suffering.

Data overload, siloed information systems, disjointed processes and a lack of transparency plague the many activities that contribute to product development and commercialization, leading to delays, errors, rework and cost overruns that negatively impact time-to-market, compliance and profitability. In fact, according to IDC Manufacturing Insights,[1] only 25 percent of R&D projects ultimately result in new products.

To improve the performance of their innovation efforts, today’s organizations need to take a much more streamlined and holistic approach to managing both data and processes across the end-to-end innovation lifecycle, from early research at the atomistic and molecular level all the way up to manufacturing.

Data, Data Everywhere

Information management and access — particularly at those crucial hand-offs between research, development and manufacturing — is one activity that’s especially susceptible to productivity gaps during the innovation lifecycle. Modern R&D involves enormous amounts of data from a variety of sources: lab experiments, modeling and simulation, QA/QC testing, historical findings and more all contribute to the knowledge that drives the creation of new products.

This data should be an asset to multiple stakeholders across the organization — imagine the efficiency gains that could be realized if plant managers had immediate knowledge of compliance or production issues uncovered during sample testing. But all too often, project contributors can’t take full advantage of valuable information because large swaths of it are inaccessible beyond departmental, disciplinary or system “silos.”

Why? Increasing organizational complexity is one reason. As more and more businesses outsource core components of their product development and manufacturing activities, critical information becomes scattered across a global and highly fragmented extended enterprise. Today, for example, it’s not uncommon for manufacturing companies across all industries to work with contract research facilities, dozens of external factories and a long list of ingredient suppliers — all based in different locations around the world.

The research team may be thousands of miles away from manufacturing operations, yet a single discovery in the lab (like the effect of temperature on a compound’s stability) can have a huge impact on later-stage activities, such as processing or the selection and calibration of plant equipment. The problem is that these specialized contributors end up isolating their data in proprietary systems and applications — a laboratory notebook here, a laboratory information management system (LIMS) database there — making it difficult to share, collate, analyze and report on.

Second, traditional information management approaches have not only failed to keep pace with externalization trends; they are also struggling with the ever more sophisticated nature of scientific research. During the course of product development, data that needs to be accessed and analyzed may include unstructured text, images, 2- and 3-dimensional models and more, and may be generated by a host of advanced software systems, laboratory equipment, sensors, instruments and devices. Yet the technologies used to capture and share this information are often woefully behind the times.

For instance, paper lab notebooks are still common, as is the need for human intervention to transfer data beyond a single department or disciplinary group. On the manufacturing floor, formulation recipes and SOPs are often paper-based as well, with operators recording batch record data manually and then re-keying the records into other systems. As a result, project collaborators can spend hours searching files and cutting and pasting the information they need together. Or they enlist IT resources to hand-code customized “point-to-point” connections that move data between systems and applications, especially between the systems used in R&D and the product lifecycle management (PLM) and enterprise resource planning (ERP) systems. Unfortunately, these ad-hoc attempts at data capture and integration are time-consuming, expensive and prone to error.

A Unified Approach to Scientific Informatics

What’s needed is a single, unified, enterprise-class informatics framework that allows organizations to electronically integrate diverse information silos, make data more accessible to all stakeholders and move products more efficiently through the research-develop-manufacture pipeline. And thanks to the evolution of service-oriented architecture and the use of web services, an enterprise-level approach to scientific informatics is now possible.

Web services can, for example, be used to support the integration of multiple data types and scientific applications without requiring customized (and costly) IT intervention. As data previously scattered throughout the organization is electronically captured and made useful through a single platform, information can be utilized by all contributors, no matter where, when or how it was generated, thereby speeding innovation cycle times.
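To make the idea concrete, here is a minimal sketch of what such a service call might look like. The endpoint URL, field names and bearer-token authentication are hypothetical placeholders, not any particular vendor’s API; the point is that a few lines of standard web-service code can replace a hand-coded, point-to-point connection.

```python
import requests

# Hypothetical LIMS REST endpoint and field names -- illustrative only.
# Real LIMS products expose vendor-specific APIs and authentication schemes.
LIMS_URL = "https://lims.example.com/api/v1/samples"
API_KEY = "replace-with-real-credentials"


def fetch_sample_results(batch_id):
    """Pull QC test results for a batch from the (hypothetical) LIMS."""
    response = requests.get(
        LIMS_URL,
        params={"batch_id": batch_id},
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]


def normalize(record):
    """Map LIMS-specific field names onto a shared, platform-wide schema."""
    return {
        "sample_id": record["id"],
        "test": record["assay_name"],
        "value": record["measured_value"],
        "units": record["units"],
        "passed": record["result"] == "PASS",
    }


if __name__ == "__main__":
    for rec in fetch_sample_results("BATCH-0117"):
        print(normalize(rec))
```

Once each silo exposes a normalized view like this, downstream consumers (a reporting dashboard, a PLM system, a plant manager’s alerting tool) can all read from the same schema instead of parsing each source system’s native format.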

For example, data from real commercial operations can help R&D teams better define product and process conditions for improved technology transfer and operational efficiency. Toxicologists can make their history of assay results available to formulators developing recipes for a new cosmetic, or chemists can work more closely with sourcing experts to ensure the compounds they are developing in the lab are actually viable candidates for large-scale production. Most importantly, those critical-yet-problematic data hand-offs between functional areas (the transfer of important R&D data to downstream PLM and ERP systems) can be automated, reducing the need for manual intervention.
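As an illustration of what such an automated hand-off could look like, the sketch below pushes an approved R&D finding to a downstream ERP service. Again, the endpoint and payload fields are hypothetical; the structural point is that the lab finding travels as a machine-readable record rather than being re-keyed by hand.

```python
import requests

# Hypothetical ERP integration endpoint -- in practice this would be the
# service interface exposed by the organization's ERP or PLM vendor.
ERP_URL = "https://erp.example.com/api/v1/material-specs"


def hand_off_spec(spec):
    """Push an approved R&D specification downstream without re-keying.

    Returns the ERP record ID so the originating R&D system can store a
    cross-reference for traceability.
    """
    response = requests.post(ERP_URL, json=spec, timeout=30)
    response.raise_for_status()
    return response.json()["record_id"]


# Example: a stability finding from the lab (say, a maximum processing
# temperature) travels with the specification instead of sitting in a
# notebook thousands of miles from the plant.
spec = {
    "material": "Compound-47",              # hypothetical material name
    "max_process_temp_c": 60,               # finding from a lab stability study
    "source_experiment": "ELN-2024-0335",   # hypothetical link back to the ELN entry
}
print(hand_off_spec(spec))
```

Because the transfer is a service call rather than a manual step, it can also be logged and validated automatically, which matters for the compliance concerns raised above.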



Today there are a number of technologies that can help organizations better capture and leverage scientific data and improve operations, such as chemical and biological registration systems or multi-disciplinary electronic lab notebooks (ELNs). Services-based informatics platforms are specifically designed to help organizations build a more integrated and holistic information environment across the entire innovation lifecycle.

With these types of solutions in place, organizations will be better equipped to handle the data access and sharing challenges that grow more acute as innovation activities become increasingly specialized, sophisticated and distributed. For example, a QC Lab Execution System (LES) with direct interfaces to LIMS, PLM and ERP systems further streamlines the “right-first-time” needs of manufacturing.

[1] IDC Manufacturing Insights, “Accelerating Science-Led Innovation for Competitive Advantage.”

 

Tune into the Chemical Equipment Daily for part two of this two-part piece.
