The Challenges of Defining 'Dangerous'

In 2008, General Motors conducted internal training for its engineers on how to document product risks, banning the words ‘defect’ and ‘problem’. Whether well-intended or a shameful legal dodge, the training skirted around the problem instead of attacking it head-on.

Throughout 2014, I watched General Motors’ safety issues closely; by December they had yielded 84 recalls covering more than 30 million vehicles, at a cost of about $2.7 billion. Spurred by a faulty ignition switch recall in late January 2014, the company was facing a federal criminal investigation and testifying before Congress by April. GM spent the remainder of the year revisiting its definition of ‘safety’, or more specifically its definition of ‘dangerous’.

It’s easy for an outsider to point to the situation, as Senator Richard Blumenthal did, and decry a broken culture or push for mandatory reporting to a national registry. I’m not defending GM, but when working recently with a medical device manufacturer I saw first-hand how difficult defining ‘dangerous’ can be for an organization of any decent size.

Unlike the U.S. auto industry, the medical device industry has had an incident-reporting registry in place since 1984, administered by the Food and Drug Administration (FDA); the European Union and Canada maintain similar databases for medical device incidents, and the processes supporting these databases provide a valid comparison.

The FDA defines a reportable incident as “an event reasonably suggesting a [medical] device may have caused or contributed to a death, [or] serious injury…”

Sounds easy, right? Imagine a General Motors staffer receiving information on an incident, on any date between 2004 and 2014, and weighing reasonable causality. Now imagine that same staffer escalating the issue to their manager, then to that manager’s manager, and so on, with each person individually applying the test of reasonable causality. What’s gotten lost in the GM case is that, even before anything is reported externally to a database, the internal communication channel is itself a process: a series of evaluations and decisions among multiple parties, each with their own definitions, biases, and perhaps agendas. It would take only one person in that serial process up the organizational ladder to scuttle any external reporting.

Perhaps decision by committee is the way to go, as it would keep any one person from unduly influencing the process. That might work for one point in time, for a single incident, but can your organization ensure that the same committee personnel will be present at every meeting, so that the decision criteria stay consistent over time? If not, a committee may look at an incident and decide to report it today but not six years from now, or nine. What is the thread that ties a committee’s historical decisions together? At GM, there was none.

The organization my team supported in its own decision process wasn’t dissimilar to GM in product complexity or size. Unlike GM, it proactively chose to tackle the problem rather than allow its CEO to be grilled by Congress the way GM’s Mary Barra had been (twice). According to a report released in June 2014 by an external lawyer, GM’s policies created a dysfunctional dynamic in which engineers deferred to lawyers who placed a greater emphasis on legal concerns than on solving a defect that led to deaths.

My client’s process is, in and of itself, similar to what now exists at GM as a result of the recalls. A customer complaint is logged into a database by a customer service rep and flagged as a safety issue; the head of product safety (a position that did not exist at GM until after the first recalls of 2014) reviews the list of recent safety complaints, along with the results of any subsequent investigation, and determines whether or not to report each incident to the medical device reporting databases. In some cases, a committee is convened or other resources are consulted to help make the determination. The process itself is simple, just a few steps, but its messiness, we found, lay in the determination of reasonable causality.

In manufacturing, these kinds of judgment calls are ubiquitous: inspectors assess qualitative attributes of products every day. Will an inspector, if randomly handed the same part multiple times, make the same determination of its acceptability to ship each time? If that same part were handed to multiple inspectors across shifts, production lines or factories, would they all consistently make the same determination?

For our client, the process begins with a customer service representative flagging a customer complaint as a safety issue. The accuracy here was better than expected: of the 1,500 most recent complaints, spanning several years, I found not a single incident that should have been flagged as a safety issue yet had not been. In other words, the head of product safety had been reviewing a complete and accurate list of potential reportable incidents. (In statistics parlance, the first step of the process produced no Type II errors: no genuine safety issue went unflagged. I was unconcerned with Type I errors, since a complaint flagged unnecessarily costs only review time.) The next step is to pull a report of the safety complaints, and this portion of the process, too, was working as hoped and intended.
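For readers who want the mechanics, here is a minimal sketch of how such a flagging audit can be tallied. The records, field names and counts below are hypothetical illustrations, not the client’s data:

```python
# A minimal, hypothetical sketch of the flagging audit: each complaint
# record pairs the rep's original safety flag with the auditor's
# retrospective judgment, and the two error types are tallied.
# These records are illustrative, not the client's data.

complaints = [
    {"id": 101, "rep_flagged": True,  "is_safety_issue": True},   # correctly flagged
    {"id": 102, "rep_flagged": False, "is_safety_issue": False},  # correctly passed
    {"id": 103, "rep_flagged": True,  "is_safety_issue": False},  # Type I: costs only review time
    {"id": 104, "rep_flagged": False, "is_safety_issue": True},   # Type II: a dangerous miss
]

# Type I: flagged as a safety issue but not a genuine one (a false alarm).
type_1 = sum(c["rep_flagged"] and not c["is_safety_issue"] for c in complaints)
# Type II: a genuine safety issue the rep never flagged (a miss).
type_2 = sum(c["is_safety_issue"] and not c["rep_flagged"] for c in complaints)

print(f"Type I  (over-flagged):  {type_1}")
print(f"Type II (missed issues): {type_2}")  # zero in the actual 1,500-complaint audit
```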

So the reporting process came down to the decision of whether to report an incident, rather than the mechanics of the process and its upstream steps. We therefore assessed how consistently the organization determined reasonable causality, as though the decision itself were a product being inspected. Would multiple internal stakeholders make the same report-or-not call on a given incident? What was the organization’s baseline level of consistency? I chose nine employees at random, including the head of product safety, regulatory engineers, internal sales representatives and design engineers, presented each with the same nine safety incidents, and asked which they would report. In manufacturing environments, this type of consistency test is called a gage reproducibility study; arguably, though, the stakes are lower when assessing the conformance of a single product than when deciding whether to report an incident. Just ask General Motors, which was fined the maximum $35 million allowed under federal law.
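To make the test concrete, here is a minimal sketch of how that agreement score can be computed, assuming each appraiser’s report-or-not call is recorded as a 1 or 0 per incident. The votes shown are hypothetical, and a full gage study would typically also report per-appraiser statistics:

```python
# A minimal sketch of the attribute agreement ("gage reproducibility")
# check described above: nine appraisers each judge nine incidents as
# report (1) or don't report (0). Consistency is the share of incidents
# on which every appraiser makes the same call. Votes are hypothetical.

calls = {
    "incident_1": [1, 1, 1, 1, 1, 1, 1, 1, 0],  # one dissenter breaks unanimity
    "incident_2": [0, 0, 1, 0, 0, 0, 0, 0, 0],
    "incident_3": [1, 0, 1, 1, 0, 1, 1, 1, 1],
    # ... one row per incident, one 0/1 call per appraiser
}

def consistency(calls):
    """Percent of incidents on which all appraisers made the same call."""
    unanimous = sum(1 for votes in calls.values() if len(set(votes)) == 1)
    return 100.0 * unanimous / len(calls)

print(f"Agreement: {consistency(calls):.2f}%")  # 0.00% when every incident splits
```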

After tallying the results, I asked the team what level of consistency they expected to find. In manufacturing, an organization generally shoots for 90 percent or greater, and the team expected a range of 25 percent to 90 percent consistency.

The actual level of consistency, however, was 0.00 percent.

In other words, as I told the team, on each of the nine safety incidents at least one of their co-workers disagreed with their assessment. We gave a collective shudder: the result highlighted why the organization, and perhaps any organization facing similar decisions, was wise to explore the question proactively.

So how can an organization address this lack of consistency? In our case, the answer was an interesting, lively exercise. We took the FDA reporting requirements and several safety complaints from our assessment and tore them down together until we reached consensus. We explored biases, confusion, definitions and more. Although everyone involved was a native English speaker, it’s remarkable how much ambiguity the words of a regulatory requirement can hold. As often happens in my regulatory work, I transformed from consultant to lawyer and back again.

The end result was group-wide consensus on internal guidance: a series of bullet-point statements defining clearly, but generally, which incidents would be reported and which would not. Guidance, after all, is not meant to be absolute but to raise reproducibility across stakeholders toward that 90 percent goal. Sure, some future incidents may fall outside the guidance, but now there is a baseline: something to change, to amend, to build on. There is something the organization can look to, a thread that ties today’s decision making to similar decisions years from now. The guidance was documented in an internal procedure, alongside the overall reporting process, as a baseline for decision making.

In 2008, General Motors conducted internal training for its engineers on how to document product risks, banning the words ‘defect’ and ‘problem’. Whether well-intended or a shameful legal dodge, the training skirted around the problem instead of attacking it head-on: What does ‘dangerous’ mean to us, and what is our baseline for any subsequent action? It’s too late for General Motors, and for the families it impacted, but not for your organization. Be proactive, borrow the assessment tools that already exist in manufacturing, and determine your baseline. Anything less could be ‘dangerous’.

Adam Gittler is a senior consultant at ACME Business Consulting.