We’re all very connected now, and news travels fast. There was a time when what constituted “news” was corroborated before it was sent on its way. There was a time when news had to be new to be news. There was a time when news had to be true to be news.
Those were the days.
Some of you may remember reading Bombardiers, a book by Po Bronson, which came out over 20 years ago. Here’s a review that might intrigue you enough to read it.
The principal reason for mentioning that book is that Bronson’s satire was instrumental in getting me to understand something I’ve written about before, over 10 years ago.
That something is this, paraphrasing Bronson:
In the Stone Age, Might was Right. And you could figure out who was Mighty quite easily. In the Industrial Age, Money Ruled. And you could figure out Who had the Money quite easily. In the Information Age, figuring out Who has The Information isn’t easy. In fact it is Very Hard. Because the time taken to verify the information can exceed the time it takes for that information to change.
So it doesn’t matter if the UK doesn’t spend £350m per week on the EU; it doesn’t matter if the NHS never receives that theoretical weekly sum. It doesn’t matter if Mexico never pays for the wall, or even if that wall never gets built.
Things move on. Turning back the clock is hard. Very hard.
There is something that is a little easier. Studying the impact of change, looking objectively at the data. Not easy, but easier.
It means knowing the right data to collect, the right way to collect it. It means preserving that data and making it accessible. It means protecting that data from change and from misuse.
It also means looking at the data over a long enough period. That’s what we didn’t do with cigarettes; but at least that battle’s over. That’s what we didn’t do about sugar; and that battle isn’t over yet. That’s what we’ve been trying to do with climate change, and that battle’s barely begun. That’s what we’re probably at the nascent stages of with fracking: the issue is not hydraulic fracturing per se, nor even horizontal drilling, but finding safe ways of disposing of the wastewater, in particular by learning from the seismologists who’ve been studying this aspect.
These battles get won and lost by people who are in positions of authority for far shorter periods than the time it takes for their decisions to have an effect.
And for us to learn about that effect, understand it and respond to it. Adapt as needed.
Things move on. Turning back the clock is hard. Very hard.
We need to get better at studying the impact of change over time. Proper, longitudinal study, collecting and preserving the right data sets, with the relevant discipline and safeguards in place. That’s why I have been fascinated by and supportive of Web Science.
This need for good longitudinal data is also the reason why I have been so taken with Amara’s Law, about our tendency to overestimate the effects of a technology in the short run and to underestimate them in the long run.
Take cricket, for example. There was wailing and gnashing of teeth when limited-overs cricket was introduced, and more of the same when T20 arrived. The death of Test cricket was foretold and prophesied, and later announced. Apparently it’s been dying ever since.
Not according to the data, especially when you look at longitudinal data. This was something I wrote about recently.
Taken over nearly a century and a half and well over two thousand Tests, the data indicated that, since the introduction of these changes, Tests were more likely to end in win/loss results than in draws, that more runs were being scored, and scored more quickly, and that, perhaps counterintuitively, the number of long individual innings (as exemplified by triple centuries) was also on the increase.
Events earlier this week have allowed me to look into another data set, suitably longitudinal, which reinforces all this.
I started with the hypothesis that one reason why Tests may be ending in win/loss results more often is that batsmen have learnt to truly accelerate run-scoring in bursts, using skills acquired in the short game. I surmised that we may be seeing more of this bursty behaviour in the 3rd innings, thereby setting up grandstand finishes, sometimes with “sporting declarations”. I also surmised that this bursty behaviour could act as insurance against time lost to inclement weather. But it was all hypothesis. I needed the facts. At the very least I needed a suitable data set collected over a sensible time period.

The recent Australia-Pakistan Test gave me the catalyst I needed. The Australians scored 241 for 2 in their second innings (the third of the match) before declaring. By itself that wasn’t unusual. But they scored the runs at 7.53 runs per over (RPO), something I would associate readily with T20, something I would expect in a small percentage of 50-over games, but something I would consider a complete outlier in the five-day game.
So I went and had a look.
In the history of Test cricket, covering somewhere between 15,000 and 18,000 innings, there have been just ten instances where an RPO of 6 or more has been sustained across an innings lasting 20 or more overs. They are listed below, followed by a sketch of the kind of query involved.
- England 237/6d, RPO 6.12, 3rd innings, Mar 2009
- Pakistan 128/2, RPO 6.19, 4th innings, Oct 1978
- West Indies 154/1, RPO 6.20, 4th innings, Mar 1977
- Australia 251/6d, RPO 6.27, 3rd innings, Jan 2015
- Australia 264/4d, RPO 6.28, 3rd innings, Nov 2015
- South Africa 189/3d, RPO 6.37, 3rd innings, Mar 2012
- Pakistan 164/2, RPO 6.60, 4th innings, Nov 1978
- South Africa 340/3d, RPO 6.80, 2nd innings, Mar 2005
- West Indies 173/6, RPO 6.82, 4th innings, Feb 1983
- Australia 241/2d, RPO 7.53, 3rd innings, Jan 2017
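For those who want to poke at the data themselves, the query is conceptually simple. Here’s a minimal sketch in Python, assuming a hand-assembled innings-level CSV; the file name and column names (team, runs, overs, innings_no, date) are mine for illustration, not those of any particular statistics provider.

```python
# Sketch: find Test innings of 20 or more overs scored at 6+ runs per over.
# Assumes a hand-assembled file, innings.csv, with one row per innings and
# columns: team, runs, overs, innings_no, date. These names are illustrative.
import pandas as pd

def overs_to_decimal(overs: float) -> float:
    """Convert cricket overs notation (e.g. 31.3 = 31 overs and 3 balls)
    into a decimal number of overs (31.5)."""
    whole = int(overs)
    balls = round((overs - whole) * 10)
    return whole + balls / 6

innings = pd.read_csv("innings.csv", parse_dates=["date"])
innings["overs_dec"] = innings["overs"].apply(overs_to_decimal)
innings["rpo"] = innings["runs"] / innings["overs_dec"]

fast_and_long = innings[(innings["overs_dec"] >= 20) & (innings["rpo"] >= 6)]
print(fast_and_long.sort_values("rpo")[["team", "runs", "rpo", "innings_no", "date"]])
```

The only wrinkle worth noting is the overs notation: 31.3 means 31 overs and 3 balls, not 31.3 overs, so it needs converting before the division.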
The first limited-overs international was played in 1971. All ten instances took place after that date. The first T20 international was played in 2005. Six of the ten instances took place after that date. In all ten cases, the team putting their foot on the accelerator didn’t lose; in half the cases they won.
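Those date-based counts are easy enough to check by hand from the list above, but for completeness, here is the same arithmetic as a tiny script. The (year, month) pairs are transcribed from the list; the first ODI was played in January 1971 and the first T20 international in February 2005. The won/didn’t-lose figures come from the scorecards themselves, so they aren’t recomputed here.

```python
# Recount the summary figures from the ten instances listed above.
# (year, month) pairs transcribed by hand from that list.
instances = [
    (2009, 3), (1978, 10), (1977, 3), (2015, 1), (2015, 11),
    (2012, 3), (1978, 11), (2005, 3), (1983, 2), (2017, 1),
]

FIRST_ODI = (1971, 1)   # first limited-overs international, January 1971
FIRST_T20I = (2005, 2)  # first T20 international, February 2005

after_odi = sum(1 for d in instances if d > FIRST_ODI)
after_t20i = sum(1 for d in instances if d > FIRST_T20I)

print(f"After the first ODI:  {after_odi} of {len(instances)}")    # 10 of 10
print(f"After the first T20I: {after_t20i} of {len(instances)}")   # 6 of 10
```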
As it is with cricket, so it is with many other things. When you change things, it takes time to figure out the real effects. Longitudinal studies are important. This is as true for technology change as for any other change. With all change, Amara’s Law is in operation: we tend to overestimate the short-term effects and underestimate the longer-term impact.
Tracking the impact of change requires good baseline data and a consistent way of collecting and preserving that data over long periods. That’s not a trivial task; it is made more complex by the need to protect the data from corruption and misuse.
While I love cricket, I use it here only as an example, to illustrate how longitudinal studies can help us assess the impact of change objectively and reliably.