As recommended by Coding The Markets, I ordered Larry Harris’s Trading and Exchanges; and as recommended by Martin Geddes, I ordered Adam Seligman’s The Problem of Trust.
I’ve now received both books, and I’m working my way through them. Thanks again to both.
A few quick observations:
1. When I looked at Harris’s definitions of trustworthiness and creditworthiness, I was more intrigued by the sentences that followed the definitions than by the definitions themselves. I quote:
People are trustworthy if they try to do what they say they will do. People are creditworthy if they can do what they say they will do.
[So far so good, but here comes a snowball designed for me.]
Since people often will not or cannot do what they promise, market institutions must be designed to effectively and inexpensively enforce contracts.
End of quote.
Wow.
Before commenting, I’d like to recount a tangential tale from Seligman’s book. In his conclusions, Seligman discusses, as an aside, his changing behaviour with respect to smoking. When it was legal for him to smoke anywhere, he was always courteous and asked people around him whether they minded. Once someone else (e.g. “the state”) decided where he could smoke and where he couldn’t, his behaviour changed. He no longer asked for permission. If he was legally entitled to smoke somewhere, he did.
I think the two stories are linked, and are all about covenant-versus-contract behaviour. In a covenant, when you hit a problem, you try and fix it. In a contract, when the same thing happens, you look for recourse. Someone to blame. Someone to sue. Someone to pay. Anyone. But not you.
And in a weird kind of way it’s linked to all the stuff we learnt about Quality First. If you remove responsibility and accountability from an actor, then you do not get quality. Instead you get output that is explicitly designed to keep the relevant Inspection Department fully occupied. Why complain? Surely that’s the reason you set the department up in the first place. Inspectors inspect. That’s what they do.
As a complete aside I am reminded of the work done by Erik Brynjolfsson et al on Incomplete Contracts, some of which you can read here. I quote from Erik’s paper:
…unlike the contracts typically analyzed by agency theory, real world contracts are almost always “incomplete”, in the sense that there are inevitably some circumstances or contingencies that are left out of the contract, because they were either unforeseen or simply too expensive to enumerate in sufficient detail.
Random walk over. What am I trying to say?
Larry Harris seems to be saying that people often don’t do what they say they will do, and that market structures have to take this into account. In fact he goes further and says “Attempts to solve trustworthiness and creditworthiness problems explain much of the structure of market institutions.” And he sees contract enforceability as the way to solve them.
Adam Seligman seems to be saying that people behave differently when they no longer need to negotiate bilateral or multilateral agreements on what is mutually acceptable, especially when driven by nanny-behaviour, be it from state or regulator or firm. Here the enforceability of the contract (“I can smoke here but not there”) creates undesired outcomes.
Erik Brynjolfsson seems to be saying that real world contracts are incomplete anyway.
Which leaves me thinking. Just what is it we are trying to do with trust? Why are we mixing it up with contract when it is patently not contract? It seems strange to design for failure in a market, talking about trusting people but expecting them to fail; it seems strange to talk about using contracts to enforce what happens in failure when most contracts are incomplete and probably expensive to enforce; it seems strange to empower people by increasing inspection that reduces their accountability.
The costs of this set-up-to-fail design cannot be low. It’s a bit like planning for pilferage in supermarkets, something else that makes less than perfect sense to me. There seems to be a lot of scope to revisit what we think about trust. I will continue to read Harris and Seligman and continue to have conversations with people, because it looks as if there may be scope to think differently. I remain grateful for the pointers, so please keep them coming.
An aside. Open source communities work on the principle that all bugs are shallow to many eyes. How come this improves quality rather than reducing it? Two reasons. One, there is no implied de-skilling of the knowledge worker; in fact peer recognition has the opposite effect: people want to exhibit quality work. Two, the inspectors are not a class apart, but are themselves developers. Development and inspection are interchangeable roles, interchangeable within the day.
If you design for inspectors to find faults you will have faults. If you design for Post Closing Adjustments to financial figures you will have Post Closing Adjustments. If you design for Second Chances at auctions you will have Second Chances. If you design for market participants to fail then some will fail.
Alternatively you could have covenant, and the occasional ruptured bench. I must check just how many benches the merchants of Lombardy broke.
Just a thought. More later.
I’ll say it again: what if most of the money spent on “digital identity” (and authorisation, authentication and accounting) within an enterprise turns out to be pure waste?
Are we trying to impose contract where perfectly adequate covenants exist?
If Wikipedia works at the scale of the public Internet, what on earth are most of the passwords and protections at work actually saving us from?
I don’t think I’ll be invited to Digital ID World this year as a speaker…
Couldn’t have asked for a more thought-provoking response, Martin. That’s exactly what I’ve been driving at: whether we’re busy paving the cowpaths of a past paradigm; or, much worse, whether we’re trying to replicate what may have been necessary for a period in time but has no significance either before or after that period.
Let’s see where this goes :-)
I’m wondering: could it be that part of the question stems from our mental models? Most of the present expressions and metaphors about the web come from the military/prison world: passwords, checkpoints, DMZ, backdoor, locks and what-not. Could it be that we don’t yet have the mental models that would fit this not-so-new paradigm?
I have been working for some time on a metaphor view of the web (in the spirit of Gareth Morgan’s ‘Images of Organization’); it certainly connects at every level with the ‘four pillars’ problem and with the identity question.
For instance: what if the web worked like an immune system (rather than a high-security prison)?
Or am I already late?
So the next question would be, nacherly, what would be the appropriate, future-oriented metaphors we could use instead?
Yes, we need new metaphors, but we need to be careful. I think people have a greater inbuilt resistance to new metaphors than we give them credit for.
This metaphor and meme issue affects every big debate we have. The Internet. Identity. Intellectual Property Rights.
We may use terms like Net Neutrality and Internet Governance and liken the internet to places and communities and roads and whatever. We may use terms like trust and privacy and confidentiality and permissioning and authentication and digital rights and wrongs. We may use terms like information security and knowledge management. And intellectual property and copyright and patent.
Terms are where the battles are being fought today.
Another thought from my own world of telephony: we’ve had hundreds of millions of people install landline phones in their homes. Little bells that can ring and get your attention. Clearly of great interest to marketers, who want to seize your attention. Yet some fairly simple analogue laws manage to keep telephone spam under control. No digital solution required. No smartcards for every telephone user, passwords, biometrics, national ID cards, or potent cryptographic algorithms.
Did you ever read about CMG group in the UK, who used to internally publish everyone’s salary details? Maybe they were just ahead of their time.
Who would fiddle an expenses claim if they were all just published on the intranet for anyone to see?
Information hoarding isn’t compatible with a loosely coupled, permeable enterprise. A place where people pass through as they focus on excellence in one activity or process across many companies.
“Circles of trust” — this is a group of friends in a pub, not a meaningful social construct within and between enterprises. The moment you delineate trust, you destroy it.
Unfortunately, as the CIO of a bank, your hands are somewhat tied by regulation here — a culture of “verify, then trust”. Expensive.
Somewhat lengthy response examining the Citigroup raid on MTS and its implications for trust in the markets here…
http://etrading.wordpress.com/2006/06/14/my-word-is-my-bond-musings-on-trust-and-the-mts-raid/
I think what you want to say is this:
With full disclosure, individuals, markets and organisations will naturally behave in a utility-generating way.
Utility can be money, happiness, efficiency etc.
Without such disclosure, such behaviour will not result, and this tends to drive the policy response embedded in regulation and contract. But given that regulation and contract are trying to order behaviour that is based on undisclosed information, they cannot be perfect, and are by definition inefficient. In many cases they are so inefficient that they produce perverse results (potentially able leaders being put off from the CEO role?).
So the conclusion would be: enforce the disclosure of information, not the behavioural response to that information (look at recent debates on FSA/ISDA disclosure of derivative positions for a real-world example).
Is this ground-breaking stuff? No – it is very basic economic theory, and is the bedrock of undergraduate-level work on perfect and efficient markets. Go read Tim Harford (“The Undercover Economist”) and Steven Levitt (“Freakonomics”) for more real-world illustrations.
I would think about the dichotomy between trust and contracts in the following way.
Trust works well in communities that are self-policing: members want to be members; members understand the unwritten rules (which are more about spirit than letter); people who break the rules are thrown out, so most people don’t break them.
Examples of self-policing communities are open source communities and eBay.
If the community is not self-policing, i.e. if people can get away with gaming the system, then contracts and enforcement regimes are required. The result, as evidenced in the numerous examples above this comment, is that people follow the letter of the law rather than the spirit of the contract.
The problem we face today is that the old social contract has broken down, due to too much movement of people, alienation of much of the workforce (not often I get to use Marxist concepts these days…) and a general breakdown of community. The response at many levels, from ASBOs to regulation to employee handbooks, has been to try to legislate.
A better solution in some areas at least is to find ways to rebuild trust. Open source communities achieve this through a combination of peer review and hierarchy.
I am starting to wonder if there is space for some kind of trust infrastructure on the web, driven by standards setters (e.g. RNIB for accessibility or ICRA for child safety) or by peer review in some way that can’t be gamed.