A few decades ago, I read a book called AI: The Tumultuous History of the Search for Artificial Intelligence, by Daniel Crevier. In it, the late and brilliant Donald Michie is quoted as saying something like this:
AI is about making machines more fathomable and more under the control of human beings, not less. Conventional technology has indeed been making our environment more complex and more incomprehensible, and if it continues as it is doing now the only conceivable outcome is disaster.
More recently, when I wrote about complex adaptive systems, a colleague of mine, Reza Mohsin, pointed me towards another Michie quote:
If a machine becomes very complicated, it becomes pointless to argue whether it has a mind of its own or not. It so obviously does that you had better get on good terms with it and shut up about the metaphysics.
Last month’s tragedy involving the Air France flight over the Atlantic really brought this into stark relief, as I began to understand the implications of what may have happened. I quote from a Wall Street Journal article a few weeks ago:
A theory is that ice from the storm built up unusually quickly on the tubes and could have led to the malfunction whether or not the heat was working properly. If the tubes iced up, the pilots could have quickly seen sharp and rapid drops in their airspeed indicators, according to industry officials.
According to people familiar with the details, an international team of crash investigators as well as safety experts at Airbus are focused on a theory that malfunctioning airspeed indicators touched off a series of events that apparently made some flight controls, onboard computers and electrical systems go haywire.
The potentially faulty readings could have prompted the crew of the Air France flight to mistakenly boost thrust from the plane’s engines and increase speed as they went through possibly extreme turbulence, according to people familiar with investigators’ thinking. As a result, the pilots may inadvertently have subjected the plane to increased structural stress.
I stress that investigations are continuing, and that the comments above are nothing more than theories at this stage.
Thankfully, not all events arising from the behaviour of complex adaptive systems are as tragic as the Air France crash. Some of them are downright comic. Take the accidental ‘takedown’ of YouTube by Pakistan early last year, where much of the world’s YouTube traffic was directed towards a page from the Pakistani ISP saying that YouTube access had been blocked; or the Skype meltdown in August 2007, where a large number of Skype supernodes were rebooted, after downloading Vista patches, at a time of very high activity. Others range from the Northeast Blackout to more recent Gmail outages.
I spent some time yesterday evening with Dave Winer, Stowe Boyd and @defrag_ami, after the end of reboot11. The evening’s valedictory keynote had been given by Bruce Sterling, and I’d found it somewhat darker and more cynical than I would have preferred. Stowe felt that I should have seen it in a more satirical light, and he’s right. He reminded me that he himself had taken a similar tack the previous year at reboot10, suggesting to the Utopians in the crowd that not all problems have solutions.
[Incidentally, I will always remember the Bruce Sterling talk as the one where he introduced the comic device of “my dead grandfather”, exhorting us not to concentrate solely on climate change ideas where our efforts will always be beaten by the relative performance of our dead ancestors.]
Understanding when and why a problem becomes intractable is an art, not a science, something that two close friends (and erstwhile colleagues), Malcolm Dick and Sean Park, have managed to teach me over the years. Neil Gershenfeld alluded to something similar in his book When Things Start to Think. While discussing the work of Ed Lorenz, Neil says:
The modern study of chaos arguably grew out of Ed Lorenz’s striking discovery at MIT in the 1960s of equations that have solutions that appear to be random. He was using the newly available computers with graphical displays to study the weather. The equations that govern it are much too complex to be solved exactly, so he had the computer find an approximate solution to a simplified model of the motion of the atmosphere. When he plotted the results he thought he had made a mistake, because the graphs looked like random scribbling. He didn’t believe that his equations could be responsible for such disorder. But, hard as he tried, he couldn’t make the results go away. He eventually concluded that the solution was correct; the problem was with his expectations. He had found that apparently innocuous equations can contain solutions of unimaginable complexity. This raised the striking possibility that weather forecasts are so bad because it’s fundamentally not possible to predict the weather, rather than because the forecasters are not clever enough.
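It takes only a few lines of code to see what startled Lorenz. The sketch below is my own illustration, not anything from Gershenfeld’s book: it uses the classic textbook parameters for the Lorenz system and a simple fixed-step integrator, and starts two trajectories that differ by one part in a billion.

```python
# A minimal sketch of sensitive dependence on initial conditions in
# the Lorenz system. Parameters are the classic textbook values
# (sigma=10, rho=28, beta=8/3); the integrator is a crude fixed-step RK4.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shift(state, k1, dt / 2))
    k3 = lorenz(shift(state, k2, dt / 2))
    k4 = lorenz(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)   # differs by one part in a billion
dt, steps = 0.01, 3000       # integrate for ~30 time units
for _ in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)

gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the tiny initial difference typically grows to the
            # scale of the attractor itself
```

Run it and the two trajectories, indistinguishable at the start, end up in completely different parts of the attractor. That, in miniature, is why the forecasters may not be at fault.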
Which brings me to the kernel for this post. Tunguska. For those of you who’ve never heard the word, the Tunguska event is something that happened over a hundred years ago, in a part of the Tunguska river region in Krasnoyarsk Krai, Siberia, Russia. There was a massive explosion; a large swathe of forest was destroyed, and trees were reduced to matchsticks.
Recent research suggests that “clouds that form at the poles after shuttle launches are due to the turbulent transport of water from shuttle exhaust”. The ‘two-dimensional turbulence’ model put forward by Michael Kelley and his team at Cornell is fascinating, insofar as it suggests a plausible reason for the Tunguska event.
I’d already been intrigued by the connection between aviation and clouds. I’ve had the privilege of spending time with Doc Searls, who has taken pains to try and educate me on the relationships between some of the cloud formations I see today and the contrails of aircraft.
So I did some personal research. Nothing significant, just a little digging around, mainly through Wikipedia. In the Tunguska event article, there’s a list of ten other events in the last 100 years where the symptoms suggested a significant meteorite airburst. Of the ten, two had an explosive yield in excess of 10 kilotons.
We had the “Eastern Mediterranean Event” on June 5, 2002, and the Lugo, Northern Italy event on January 19, 1993. So I tried to correlate this with any significant space activity. And this is what I found. STS-111 was launched on June 5, 2002, with a UTC time remarkably close to, and on the right side of, the eastern Med event. Earlier, STS-54 splashed down on January 19, 1993, again remarkably close to, and on the right side of, the Lugo incident.
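The correlation check itself is easy to mechanise, should anyone want to run it across the full list of airburst events and shuttle missions. A sketch, with deliberately rough placeholder timestamps (dates only, taken from the text above; the actual UTC times would need to come from the mission logs and event reports):

```python
from datetime import datetime

# Placeholder timestamps at midnight UTC -- substitute the real
# UTC times before drawing any conclusions from the offsets.
events = {
    "Eastern Mediterranean event": datetime(2002, 6, 5),
    "Lugo event": datetime(1993, 1, 19),
}
shuttle_activity = {
    "STS-111 launch": datetime(2002, 6, 5),
    "STS-54 splashdown": datetime(1993, 1, 19),
}

for ev, t_ev in events.items():
    # find the nearest piece of shuttle activity and the signed gap
    name, t_s = min(shuttle_activity.items(),
                    key=lambda kv: abs(kv[1] - t_ev))
    hours = (t_ev - t_s).total_seconds() / 3600
    print(f"{ev}: nearest activity {name}, offset {hours:+.1f} h")
```

With real timestamps and the full lists, the interesting question becomes how small the offsets are, and whether the event falls on the “right side” of the activity, as noted above.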
Intriguing. Not conclusive, but intriguing nevertheless.
We live in a world where things seem to be getting more and more complex, as we represent physical things as virtual abstracts, then use software to operate and manipulate the virtual models.
We live in a world where things seem to be getting more and more connected, as devices and sensors proliferate while being reduced to nothing more than nodes on a network.
We live in a world where people are happy making snap decisions on limited and superficial information, where conclusions are drawn and propagated on the flimsiest of bases.
We need to be careful. Careful to make sure we do our root cause analysis correctly. Careful to ensure we have the right feedback loops in place for learning, so that recurrence is properly and sustainably prevented.
For all this we need patience and tolerance like we’ve never had before, and an avoidance of judgmental behaviour.
Maybe the continuing advance of complex adaptive systems means that we need to increase our understanding of the Serenity Prayer:
- God grant me the serenity
- To accept the things I cannot change;
- Courage to change the things I can;
- And wisdom to know the difference.
[While reading the wikipedia article on the prayer, I could not help but enjoy the reference to a Mother Goose rhyme with similar sentiments:
- For every ailment under the sun
- There is a remedy, or there is none;
- If there be one, try to find it;
- If there be none, never mind it.]