- Markets are conversations, as per Cluetrain.
- Markets contain risk.
- Conversations can help you manage that risk.
- Social software can help you extract what you need from those conversations and thereby help you manage the risk.
I was reading the latest issue of Risk and Regulation, published by the Centre for Analysis of Risk and Regulation (CARR) at the London School of Economics and Political Science (LSE). I’ve read the publication ever since I had lunch with Michael Power, the author of The Risk Management of Everything. If you haven’t read that pamphlet yet, I can do no more than recommend as strongly as possible that you do. It is as fundamental to Four Pillars as Cluetrain and Social Life of Information and Emergence.
An aside: Here’s the description of Risk Management of Everything:
We live in the age of the risk management of everything. Paradoxically this still leaves organisations that diligently engage in risk management exposed to what Donald Rumsfeld called ‘unknown unknowns’ which, by definition, are out of reach of risk management.
This warning about the escalation of the risk management of everything should be taken seriously. In his first Demos book, The Audit Explosion, Michael Power warned against companies’ and governments’ preoccupation with measuring what is measurable – the now discredited ‘targets culture’.
Power traces the start of the risk management of everything back to 1995 – the year of the collapse of Barings bank and of Shell’s Brent Spar PR disaster. Those events illustrated the two key aspects of the new obsession with risk management: internal control and reputation.
The ability of a rogue trader to bring down a bank has prompted organisations to redouble their efforts to use internal control systems to manage risk. But the danger is that the focus on internal controls to manage risks of ‘known unknowns’ leaves organisations vulnerable to ‘unknown unknowns’.
“Reputation has become a new source of anxiety where organisational identity and economic survival are at stake. And if everything may impact on organisational reputation, then reputational risk management demands the risk management of everything.”
The anxiety about reputation means that experts and professional bodies are increasingly taking defensive steps to protect their own name, rather than managing risks on behalf of the public. One example of this is the proliferation of ‘small print’ as professionals ranging from doctors to accountants attempt to hand risk back to customers, clients or society as a whole.
Now if that doesn’t get you to read the pamphlet, nothing else will. I will explain later why this is fundamental to Four Pillars.
Back to the magazine. As usual, there were a number of thought-provoking articles. All its authors should blog. Are you listening, LSE?
One of them, titled Harnessing Hindsight, looks at how, for example, near-miss incident reports are used to improve risk management in aviation.
I quote from the conclusion to the article:
What implications does this examination of practice hold for current theory? First, it suggests that current models of risk management, and methods of risk analysis, could be productively extended by more fully attending to the ‘positive’ face of operational risk – the organizational practices and social processes that underpin organisational resilience – so moving beyond the current focus on predicting and avoiding failures, errors and harm. Second, it emphasises the central place of knowledge – and its dark side, ignorance – in dealing with risk. Assessing small moments of operational failure is an interpretive process that draws on forms of knowledge that are not readily quantified or formalised, such as the particulars, specifics and details garnered from practical operational experience, or vicarious knowledge of similar events experienced by other organisations. And identifying signs of ignorance, in the form of suspicions that arise from subtle relations and mismatches between current knowledge and organisational events, equally appears to offer a useful proxy for identifying latent risks. Third, it points to the importance of institutional designs that balance the tensions between central oversight and local participation and action, and that establish organisational spaces for collective enquiry and sensemaking around risk events.
Emphases mine. I couldn’t have written a better rationale for the use of social software in risk management. Collective enquiry and sensemaking around risk events.
More later on this theme. Do read the article, even if you have zero interest in aviation. Consider how powerful social software and emergence and P2P models are in this context.
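To make the near-miss idea concrete, here is a minimal sketch in Python. All category labels, field names and the threshold are my own illustration, not taken from the article; the point is only that aggregating many small, individually harmless reports from many reporters – the collective enquiry the article describes – can surface a latent risk before any single failure is severe enough to investigate on its own.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class NearMiss:
    category: str   # hypothetical label, e.g. "altitude deviation"
    reporter: str   # who flagged it — many eyes, not one auditor
    note: str       # free-text operational detail

def flag_latent_risks(reports, threshold=3):
    """Surface categories whose near-miss counts reach a threshold.

    A crude proxy for the article's argument: clusters of small
    incidents, pooled across reporters, act as signs of latent risk.
    """
    counts = Counter(r.category for r in reports)
    return [cat for cat, n in counts.items() if n >= threshold]

reports = [
    NearMiss("altitude deviation", "crew-7", "late descent clearance"),
    NearMiss("altitude deviation", "crew-2", "readback confusion"),
    NearMiss("runway incursion", "ground-1", "tug crossed hold line"),
    NearMiss("altitude deviation", "crew-9", "similar callsigns"),
]
print(flag_latent_risks(reports))  # ['altitude deviation']
```

No single reporter here saw anything worth escalating; only the pooled view does. That pooling, plus the conversation around each report, is exactly where social software fits.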
@ “professional bodies are increasingly taking defensive steps to protect their own name, rather than managing risks on behalf of the public”:
This fits well with my suspicion/experience that the primary purpose of QA/QS-departments at ISVs is to improve external quality to cover up the hideous internal quality – the primary source of follow-on costs.
@ Harnessing Hindsight:
This sounds much like the processing of a neural network where the nodes are human individuals – recognizing patterns and violations of them.
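The neural-network comparison can be sketched in a few lines of Python. This is purely my own toy construction of the metaphor: each human “node” holds an expectation and fires when an observed event violates it, and an alert is raised only when enough nodes fire together.

```python
def make_node(expected):
    """Return a 'node' (person) that fires on events violating its expectation."""
    def fires(event):
        return event != expected
    return fires

def collective_alert(nodes, event, quorum=0.5):
    """Raise an alert when the fraction of firing nodes reaches a quorum."""
    firing = sum(node(event) for node in nodes)
    return firing / len(nodes) >= quorum

# Four people expect "normal" operations; one expects "odd" ones.
nodes = [make_node("normal") for _ in range(4)] + [make_node("odd")]
print(collective_alert(nodes, "odd"))     # True  — 4 of 5 nodes fire
print(collective_alert(nodes, "normal"))  # False — only 1 of 5 fires
```

The quorum is what distinguishes collective sensemaking from a lone alarmist: one mismatched expectation is noise, many together are a pattern violation.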
This focus on second-order and reputational risk has some other awful consequences. An entrenching of blame cultures and shoot-the-messenger responses. Whistleblower muzzling. Nanny states and governments and firms and departments.
Sadly it is primarily amongst “professionals”.
So much is written about risk without a useful definition being proposed, and the ensuing conversation is intensely CONFUSING. I hear people talk of risk in both positive and negative terms. It is clear to me that a specific capability is a mitigation of a specific threat, and with the right attitude can become an opportunity, but only when compared with a competing capability. To me the essence here is perspective and the recognition of risk. Management of risk becomes about attitudes and the building of knowledge resources to increase capability. As with chaos theory, the trick to understanding risk lies not in isolated analysis but in recognising its patterns, always taking a lateral approach and viewing it from a stakeholder’s and an external perspective.
If we are to understand risk, impacts and how to manage them, the imperative is to be aware of consequences, and to recognise that there will always be unknown consequences, because we stop looking for them and cannot know them all. Consequences do not stop; they change, or we ignore them. Risks are the unknown. Although we should not stop trying to define them, let’s stop pretending we know what they are when that is not possible. It is sometimes extremely dangerous to define a risk: the consequence can be a blame-ridden culture where responsibility is attached entirely inappropriately. The key to the danger lies in trying to define ‘a’ something. There cannot be ‘a’ risk. The unknown is not definable.