July 16, 2002
He Said - It Said ... Let's take five with Moira Gunn. This is "Five Minutes".
Some situations force us to choose between "he said" and "she said," while others make us choose between "he said" and "the technology said." Of course, I'm talking about the mid-air collision between a DHL cargo plane and a Russian passenger jet filled with vacation-bound children.
It's now clear that the Russian pilot decided to obey the human ground controller, and begin a descent. This, it turns out, was in direct opposition to his collision avoidance technology, which ordered "Climb, climb."
So, who or what would you listen to? A human with a presumably vast array of information before him, or a very complex piece of technology, when you may have no personal, visceral experience that it actually works?
It's an important question, and one that speaks directly to trust - both human and technological.
--
We will never know why the Russian pilot chose to trust the urgent command of the Swiss ground controller. But would he have, if he had known that Zurich ground control had taken its Short-Term Conflict Alert System down for maintenance?
And there was also an implicit trust on Zurich's part. It believed that no planes would come into close contact with each other, and that if they did, the technology of other nearby ground controls and of the aircraft themselves would back it up. And they were right.
German air controllers - seeing the potential for collision - called the Swiss by phone. But the reality of technology stepped in again. A second Swiss system - the telephones - was also down, and the Germans' increasingly frantic calls were met only with busy signals.
Some accounts fault the Russian pilot for not informing Swiss ground control that he had received a command from his technology to climb. But think about this - we build all this complex technology, and the essential link is one human telling another what the technology's read-out is? Considering how many human languages were involved in this high-pressure situation, would you call this a great idea? Would you sit down and actually design it this way?
--
Let's not forget that other scenarios could have played out here, as well.
Let's say that, instead, the Russian plane's collision avoidance system had failed and incorrectly commanded the pilot to "Descend," while the Swiss controller correctly told him to "Climb." If the pilot descended, would he be vilified for believing too much in technology? If he climbed, would he be lauded for realizing the ultimate superiority of humans?
When Buzz Aldrin and Neil Armstrong were flying the lunar lander down to the surface of the moon, a red light came on at the last minute and error messages started flooding in. They boldly decided to ignore it. It turns out, their bold call was right. A computer program had simply been flooded with too much data.
If they had paid attention to the technology, their historic mission would have been aborted. But that was just luck. History might have played out yet another way - instead of being the first humans to walk on the moon, Buzz and Neil might have become the first humans to die there.
Back-ups. Disaster planning. Human intervention when technology may be failing. These are judgment calls. Technology is never perfect, but neither are humans.
The Russian pilot trusted his huge and wildly complicated aircraft to fly, but he didn't trust it to know it was running into something.
Go figure.
I'm Moira Gunn. This is Five Minutes.