Watching the movie “Sully,” which dramatizes the heroic “save” of then-US Airways flight 1549 after takeoff from New York's LaGuardia airport, we’re reminded of the role played by Capt. Chesley "Sully" Sullenberger III’s experience and intuition. On Jan. 15, 2009, as the plane descends with no power, his extremely professional and competent co-pilot works through the fault tree in the Airbus A320’s manual. But Sully has already concluded that the best, and perhaps only, remaining choice is to land the plane and its 155 passengers and crew in the Hudson River. In this scenario, there is no time or opportunity for trial and error: he has one try, and it has to be the correct choice.
The issues we face when troubleshooting in the process plant are thankfully (hopefully) less dire, and no one wants to challenge our process-pilots to solve life-threatening problems in less than three minutes. A good design provides layers of protection that are methodically evaluated to sufficiently mitigate the anticipated hazards. In many instances, these layers of protection invoke operator action, which usually relies on a measurement going into alarm and a documented, procedural response to that alarm. Even in these instances, we hope to give the operations crew at least 10 minutes to respond. If the consequences arrive too quickly or are too severe, we’re compelled to seek other engineered solutions or automation (safety instrumented functions).
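For readers who haven't sat through a layer-of-protection review, the arithmetic behind “methodically evaluated” runs roughly like this: each independent layer is credited with a probability of failure on demand (PFD), and the mitigated event frequency is the initiating-event frequency multiplied through those PFDs. The sketch below is only an illustration with invented numbers, not a worked analysis from any real study.

```python
# Minimal layer-of-protection (LOPA-style) arithmetic with invented numbers.
# Each independent protection layer is credited with a probability of
# failure on demand (PFD); the layers multiply down the event frequency.

initiating_frequency = 0.1  # hypothetical initiating events per year

protection_layers = {
    "BPCS control loop": 0.1,           # credit for the basic control system
    "Operator response to alarm": 0.1,  # only credited if ~10 min is available
    "Relief valve": 0.01,
}

mitigated_frequency = initiating_frequency
for layer, pfd in protection_layers.items():
    mitigated_frequency *= pfd
    print(f"after {layer}: {mitigated_frequency:.1e} events/yr")

# If the result is still above the tolerable frequency, that's when we reach
# for another engineered layer, such as a safety instrumented function (SIF).
tolerable_frequency = 1e-5
if mitigated_frequency > tolerable_frequency:
    print("Gap remains: consider a SIF or other engineered layer.")
```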
Any mitigation stemming from alarms relies on a chain of causation unfolding reliably. The measurement needs to be functioning, timely and accurate. The alarm needs to be a quality one, not one that chatters or routinely turns out to be false. When the alarm does sound, the operator who acknowledges it must be trained to know that they're taking responsibility for the corrective action, what that action is, and how quickly they need to act. It’s not uncommon for the “inside” or “board” operator to have to complete the corrective action through an outside crewmate, so that individual’s training is also part of the mitigation.
Why not automate it? Like the plane in flight, there are often multiple possible causes for an alarm in a process plant, and multiple possible corrective actions. Procedures or fault trees might prompt the operator down a path from most likely to least likely, or direct them down one branch or another if various conditions are met (“Is the PC plugged in?”). Now imagine, if you will, writing all the if-then-else logic for each of the thousands of alarms in a process plant. Imagine accounting for false alarms, flat-lined measurements, or indications that contradict one another.
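To see why that thought experiment is daunting, here’s a hypothetical sketch of automated response logic for a single high-level alarm; the tag names, thresholds and causes are all invented. Multiply something like this by every alarm in the plant, and then keep it current as the process changes.

```python
# Hypothetical response logic for one high-level alarm on a vessel.
# Tag names, thresholds and causes are invented for illustration.

def respond_to_high_level(level_pv, level_quality, inflow_pv, outflow_pv):
    """Return a corrective action for a high-level alarm, or defer to the operator."""
    # Can the measurement be trusted at all?
    if level_quality != "GOOD":
        return "defer to operator: level transmitter flagged bad or flat-lined"

    # Do other indications contradict it?
    if inflow_pv <= outflow_pv:
        return "defer to operator: level rising but material balance disagrees"

    # Walk the branches from most likely cause to least likely.
    if outflow_pv < 5.0:
        return "start spare discharge pump / open outlet valve"
    elif inflow_pv > 50.0:
        return "cut back feed to the vessel"
    else:
        return "defer to operator: no anticipated cause matches"

# Example: a plausible high-level event where the feed has crept up.
print(respond_to_high_level(level_pv=92.0, level_quality="GOOD",
                            inflow_pv=60.0, outflow_pv=40.0))
```

Notice how many branches end in “defer to operator”: even the automated version leans on human judgment for the cases its designers didn’t anticipate.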
Automation becomes plausible when we know the answer in advance, just as simulations showed that flight 1549 could have landed safely at an airport. There are examples where automation or interlocks can protect us from making a dangerous choice, and plenty of studies showing where disasters (Three Mile Island comes to mind) could have been averted if only the operator had let the interlocks function. But when we start deploying interlocks that preempt operator judgment, we obligate the controls professional and her process engineering brethren to anticipate and account for all possible outcomes.
Even as the binder of procedures thickens and grows more complex, we still rely heavily on the insight and intuition of experienced operators. When we lose these individuals to retirement and replace them with novices, should we expect the next generation to be robotic procedure-followers, or just replace them with real robots and automation? Do we imagine IBM’s Watson will be able to replace the knowledge and intuition that walked out the door? I think plant managers will tell us we need operators who are thinking, human decision-makers. They’re not just managing a process, but an entire worksite that includes the process plant. Humans can make choices when information is missing, when measurements contradict each other, or when previously unforeseen scenarios arise.
There’s a balance to seek between automation and relying on human perceptions, knowledge, training and decision-making. Captain Sully and his co-pilot, Jeff Skiles, embodied a kind of “right brain and left brain” problem-solving in their dire situation. One could argue that Sully needed Skiles to be disciplined and methodical, which freed Sully to skip ahead a few steps and prepare for the most feasible solution. Given the choice, I still think I’d choose Sully over Watson at the wheel.