Musing about inclusion in technology

My thanks to Phillie Casablanca for the delightfully evocative notice above.

I was born a foreigner.

While my hereditary roots were from southern India, I was born and brought up in Calcutta, as was my father before me. And for the first 23 years of my life, I knew no other city. Never lived anywhere else. But my surname gave away my southern roots: I wasn’t a true Bengali.

I am one of five siblings. When we were young, we used to spend a good deal of time every summer in Tambaram, on the campus of Madras Christian College. My grandfather was Professor of Chemistry there. Though I had bloodlines traced back to those parts, my accent gave away my north-eastern roots: I wasn’t a true Tamil.

I was born a foreigner.

A somewhat privileged foreigner, born into a Brahmin family (and an ostensibly well-to-do one at that). A family that took multiple copies of the Statesman so that we could each do the Times crossword on an unsullied diagram. Using a pen, of course. A family that played billiards and duplicate bridge and scrabble and chess. A family that devoured the written word.

So I didn’t really know much about being discriminated against. But, as Einstein reminded us, common sense is the collection of prejudices acquired by age eighteen. And I’m sure I had my fair share of prejudices. With three sisters, a bevy of aunts and a truly matriarchal grandmother, it was somewhat difficult for me to inculcate gender bias into my prejudice collection.

Which was probably a good thing, since the first boss I had was a woman, and since the person who gave me the job, her boss, was also a woman; I wrote about them as part of my Ada Lovelace Day pledge some time ago here.  [Incidentally, some of you may be aware of this recent incident in my life, which somehow made it into the Times City Diary, and thence into syndicated journals far and wide. Including one in Hong Kong. Which led to my getting back in touch with the woman who started me off in my professional career.]

That first job was a great job, and I learnt a lot. A genuine meritocracy; the nearest I came to any form of discrimination was when it came to publicity shots for the firm; a small number of us, foreign in origin, skin or gender, used to get wheeled in for all such occasions. It was done in such a spirit that we didn’t really consider it tokenism.

I soon learnt a little bit about discrimination the hard way, when my skull and forehead made repeated contact with some fairly large Doc Martens belonging to a group of young gentlemen with very short haircuts, and the resultant coma kept me quiet for a short while. But that was a rare and aberrant event, and all of twenty-seven years ago.

When I look back on the last thirty years, I tend to think of the industry I work in as fairly inclusive; perhaps that has more to do with the firms I’ve worked for. BT, where I’ve been for the last four years (how time flies), for example, has an exemplary record on diversity and inclusiveness; people like Sally Davis, CEO of BT Wholesale, and Caroline Waters, director of people and policy, lead by example. Caroline was recently awarded an OBE for services to diversity and equal opportunity.

In many ways, the industry is itself designed to be inclusive: it’s about brains, not brawn. It is possible to work in an office as well as remotely. Shiftwork is possible, and there are opportunities to work in or with many timezones. The industry is scarcely old enough to have become ageist, and so far we’ve been able to avoid that. The work we do helps people use computers and communicate regardless of physical or linguistic constraints; in many cases computers can be used to overcome those constraints.

Which brings me to the reason for this post: the recent debates about Women in Tech.  Shefaly Yogendra has done an excellent job in bringing together the different strands of argument and discussion, while providing us with the origins and context of the debate here.

Anyway, a number of people, including @shefaly, @thinkmaya and @freecloud, wanted to know where I stand on this issue.

So here’s my two-penn’orth:

We can all argue about the why, but there’s no disputing the what. Women are underrepresented in a number of dimensions in the tech world, and this is noticeable in conference line-ups and in start-up founder lists. This is particularly odd because there are a lot of talented women in this space: I am privileged to count many of them amongst my friends. There are many possible reasons for this phenomenon, and many possible ways of fixing it.

I think we need to make sure that one possible reason is dealt with, because it’s the kind of reason we could easily overlook. An anchoring-and-framing kind of reason. Let me give you an example.

Take The Indus Entrepreneurs, TiE in short. Many of you must have heard of them. While TiE is an inclusive network that advises, supports and mentors would-be entrepreneurs, its origins were different. I believe TiE was created to ensure that people of South Asian extraction were given the funding opportunities they were otherwise being denied. There was general acceptance of the engineering excellence of such people, but for some reason question marks were raised about their ability to run companies. Which meant that the “engineers” never got funded when they went forward with business plans.

I think we need to make sure that something similar is not happening here, in terms of unintended consequences as a result of anchors and frames. We need to make sure that we eradicate prejudices that go along the lines of: Women don’t code. Founders must code. So women can’t found startups…

Generalisations, like comparisons, are always odious. Many parts of the industry are open and inclusive and meritocratic. Nevertheless, the numbers don’t add up, the evidence suggests we have a bias somewhere, and we have to do something, do whatever we can, to correct it. So I’m all for what people like TED and DLD are doing.

Systemic problems often need systemic solutions; awareness-raising initiatives can often provide the quantum of energy required to remove historical biases, particularly subtle ones.

Does the Web make experts dumb? Part 3: The issues

Thanks for all the comments and conversation on the previous two posts. At this stage, I think it would be worth while setting out a simple list of principles and seeing if I can get your feedback on them. I feel that it will help move the argument forward constructively.

The principles I’d like to put forward are:

1. No one can become an expert without access to information. The web helps provide and broaden this access.

2. Access and opportunity alone are not enough. Will and perseverance are also required. The web does nothing to prevent this, and may actually reinforce that perseverance by making it easier to become an expert.

3. Having access to a mentor or moderator is valuable, particularly one who has the experience and critical skills related to the expertise sought. Teachers used to be mentors and moderators for centuries, before chalk-and-talk broadcast was adopted as an Assembly Line norm. Good teachers continue to mentor and moderate. The web facilitates this, in terms of allowing asynchronous communications with relevant links and bibliography, as well as synchronous communications when face to face is not possible.

4. Having access to a mentor or moderator who can inspire as well is invaluable. This is how expertise will really flourish. The web facilitates this as well. You only need to see one TED talk to understand how people can be helped, motivated, inspired by someone they don’t know and haven’t ever met.

5. There are 72 million children of school-going age not at school today. Rather than argue about the nature and role of experts and expertise, we should be doing everything in our power to ensure that every one of them has access to basic education as a human right. Queen Rania and her cohort are doing great things in this respect; the World Economic Forum’s Global Education Initiative, where Queen Rania is also involved, is a good place to start if you want to know more.

6. Of course the web gives us the opportunity to be superficial about learning, about knowledge, about expertise. But then this was true of all previous paradigms as well. What has changed is that the web allows us to delve deeper if we want to. And it makes that easier.

7. Of course face-to-face learning, with a moderator present, is invaluable. Of course the sense of community that comes from being in a classroom with other students is invaluable. But if for some reason this is not possible, then let’s not pooh-pooh the value of putting a computer with web access into a hole in the wall, and allowing for minimal moderation. This is what Sugata Mitra has been demonstrating, and more power to his elbow. You can keep track of what he’s been doing here: http://www.sugatam.wikispaces.com

8. The web is still in its infancy; there’s a lot broken with it. There is a lot that can, and should, be done in the context of curation, of indexing, of search tools, of filters and visualisation tools, of the semantic underpinnings. Read Esther Dyson’s recent post on the future of internet search if you have time; it’s a brilliant piece. You can find it here. See what Tim Berners-Lee, Wendy Hall, Nigel Shadbolt, Rosemary Leith, Noshir Contractor, Jim Hendler et al are up to at the Web Science Trust.

9. The privileged position of the expert may be under stress. An environment where more people can become experts is a good thing, and should be encouraged. An environment where their heredity and background becomes irrelevant, where what matters is their willingness to apply themselves, is a good thing. So don’t let people convince you otherwise.

10. Education trumps everything. Access, opportunity, facilitation, motivation and inspiration are critical. In all this the web is an aid; it is not the answer by itself. But it helps.

In this series of posts, I have not tried to make out that individuals working in dark rooms on their own, with access to the web, will suddenly become experts. If that is the impression given, I have failed to communicate my message.

What I have been trying to say is this: people are saying the web dumbs us down. This is wrong. The web can dumb us down, but only if we choose to let it.

Comments welcome.

Does the Web make experts dumb? Part 2: Who’s The Teacher?

I try and make a point of looking for the good in people; I try and make a point of looking for the good in situations; I try and make a point of looking for the good in outlook and expectation.

Those traits in me make some people believe that I’m a wild-eyed optimist, whatever the truth might be; this is particularly true of people who tend to believe that two and two make five, who are quick to draw conclusions on superficial evidence.

Against this backdrop, factor in the following: I was born in the ’50s, grew up in the ’60s and early ’70s. I cite Jerry Garcia, Stewart Brand and Lewis Hyde as early influences (people did read in the ’60s and ’70s); I learnt to dance to Bob Dylan and Leonard Cohen (it’s harder than it sounds); I love spending time in San Francisco; and I call myself a retired hippie.

So some people think I’m a pinko lefty treehugging wild-eyed optimist. In short, a Utopian.  And you can’t blame them.

Which is why, when I make assertions like I did last night (suggesting that the Web actually reduces barriers to entry when it comes to “expertise”, and that traditional experts, myself included, are becoming less scarce, less distinctive, less “valuable”), I need to back up the assertions with some concrete evidence rather than just theory.

Which is what I intend to do tonight.

I want to point you towards evidence of the Great Leveller status of the internet. Some evidence I found intriguing at first, compelling as I got into it, and finally inspiring.

Sugata Mitra: courtesy of the TED Blog

So let me tell you the story of Sugata Mitra, polymath, professor, chief scientist emeritus. A man with an incredible vision and the willingness to do something about it. He speaks English and Bengali, a little German, spent time in Calcutta, works with computers and is passionate about education. So maybe I’m a little biased. Bear with me.

Professor Mitra is responsible for introducing me (and a gazillion others) to the concept of Minimally Invasive Education or MIE. In simple terms, over a decade ago, he ran an experiment called Hole In the Wall which took PCs and stuck them in walls in slums, with no explanation or instruction. And watched as children learnt.

Some of you must be thinking, he must have gotten lucky, a flash in the pan. Yes. Eleven years later. Nine countries later. 300 Holes-In-The-Wall later. 300,000 students later. You could say he got lucky.

I prefer to think he called it right. I was privileged to hear Professor Mitra at TED, and to shake his hand. I have had an instinctive and long-held belief in the incredible potential of humanity, and hearing his story reinforced my belief. You can find his TED talks here and here.

One of my favourite practitioners and writers on leadership, Max De Pree, characterised leaders as people who do just two things: set strategy and direction and say thank you. In between those two things, he said leaders are servants and debtors. Since reading some of his works in the late 1980s, I’ve considered “getting out of the way” to be an essential component of good leadership.

If you ever wanted rebuttals to abominations like the Bell Curve; if you ever wanted refutations to arguments about the web making us dumber; if you ever wanted evidence to challenge assertions about the cult of the amateur; then look no further than Sugata Mitra’s research. Thank you Professor Mitra. And thank you TED, particularly Chris Anderson and Bruno Giussani, for bringing Professor Mitra to my attention and then giving me the chance to meet him.

All teachers are learners. All learners are teachers. Teachers and learners are not just passionately curious a la Einstein; they want to see everyone discover their potential, achieve it and improve upon it.

Stories like Sugata Mitra’s inspire me. They make me believe that battles to ensure ubiquitous affordable connectivity are worth while; they make me believe that wars to eradicate inappropriate IPR are worth while; they make me believe that the Digital Divide can be avoided.

They remind me of the incredible potential every child represents. The incredible responsibility every parent, every teacher, every human has towards generations to come. The critical value of education in that context.

So if people want to believe the internet dumbs people down, fine. That’s their choice, and I don’t have to agree with them. It will not stop me wanting to use the internet to level the playing field, to help ensure that access to information, to knowledge, to wisdom is not the birthright of the privileged few alone.

Another data point. Last year I spent some time in Italy with my family (it was our 25th wedding anniversary, and we took the children to Sorrento, where we’d honeymooned in 1984). And we went to Pompeii. Where we met a fantastic guide called Mario. Who was 65 years old, a real expert. And he was stopping work for a while. Going back to school. Because the web had reduced the value of his expertise.

The problem, the weakening of the value of “expertise”, is instructive. His response, to go back to school at 65, is even more instructive. You can read all about it here, in a post I wrote at the time.

[By the way, thanks for your comments yesterday. I will wait for further comments tonight and tomorrow, and then try and round things off in a final post later this week.]

Does the web make experts dumb?

For information to have power, it needs to be held asymmetrically. Preferably very very asymmetrically. Someone who knows something that others do not know can do something potentially useful and profitable with that information.

Information can be asymmetric in a number of ways. The first, and simplest, is asymmetry-in-access. If you can make sure that no one else has access to information that you have access to, if you’re in a position to deny others access to the information, then you can do something useful with it. In the old days this was called keeping a secret. Keeping something secret is not wrong per se. But if that secret is privileged information, there are many things you cannot do with it. Like trade on it. Or blackmail someone as a result of it.

Nevertheless, for centuries, people have made money by having asymmetric access to information. And for the most part they’ve done it legally.

A second form of asymmetry is in effect a special case of asymmetry-in-access: asymmetry-in-creation. If you create/originate the information in question, then it is possible to prevent anyone else from knowing it. All you have to do is make sure that you don’t tell anyone. Kenny Dalglish, while managing Liverpool in the mid-to-late 1980s, was asked how he’d managed to keep Ian Rush’s return from Juventus a secret. In answer he said “It was simple. I didn’t tell anyone”.

If you choose not to share something you’ve created, then you are in a position to be the only person in the world to enjoy it. Take a work of art or music or literature. As creator, you can choose to share whatever you’ve created with nobody; with just one person; with just a few people; the choice is yours. And you can charge for this access. Some people may think you’re being selfish, some people may consider you “sad” as a result, but you have every right. What you’re doing is legal. You’re protecting the scarce nature of what you’ve created, and seeking to exploit that scarcity.

For centuries people have made money out of creating unique things, scarce things, and then charging others when they want access or ownership.

A third form of asymmetry is really a derivative form, where the information is itself not of much use without some way of comprehending it, parsing it, interpreting it: asymmetry-in-education. Equality in educational rights may be a much-vaunted goal, but we’re not there yet. Equality of opportunity continues to be mandated, and may well happen in your lifetime. Equality of outcome cannot be legislated. Asymmetry-in-education has therefore continued to persist despite the efforts of well-meaning people over the past century or so.

This form of asymmetry has been exploited by experts in many guises: doctors, lawyers, priests, even IT consultants. And their theme song is simple: “You didn’t have to work as hard as I did to know what I know. It’s complex; you won’t understand it.” In many cases, this situation was exacerbated by the use of foreign languages, preferably dead foreign languages. And, just in case that wasn’t enough, the smoke and mirrors of specialist terminology, jargon, abbreviation and convention were used to obfuscate the environment.

For millennia experts have exploited this asymmetry and wielded power and amassed wealth as a result.

There is a fourth, and final, form of asymmetry: asymmetry-by-design. This is where you take something that is essentially abundant and, through fair means or foul, get it redefined as scarce. Most implementations of Digital Rights Management are attempts to create asymmetric access, make something scarce by design. At a level of abstraction, iPhone and Android apps are essentially the same thing in disguise: thinly-veiled attempts to make abundant things scarce.

Creating artificial scarcity out of something that is essentially abundant is also not wrong per se. But there can be legal and moral implications. Building a dam near the source of a river and charging people for access to the water may sound reasonable; on the other hand, there may be strong grounds for “grandfathered” rights to that water. Society, through the ages, has seen fit to protect the view (as in “ancient lights”), walks (as in ramblers’ rights) and even open spaces (as in commons).

[Speaking of commons, permit me an aside. There appears to be a tendency for people to use the term “by hook or by crook” to mean the equivalent of “by fair means or foul”. This is inaccurate. If you wanted to chop down wood for firewood, you were entitled to use your hook or your crook to get to branches and limbs of trees in the commons. Only fair means. No foul means.]

Asymmetry in access. Asymmetry in creation. Asymmetry in education. Asymmetry by design.

Asymmetries all of them. Asymmetries that allowed people to wield power and to amass wealth. For the most part legally.

Then, along comes the internet. Along comes the Web.

The world’s biggest copy machine, as Kevin Kelly reminded us.

Suddenly asymmetry of access was weakened, holed amidships below the waterline. One of the nicest things about the web is that it levels the playing field for access. More accurately, it is capable of levelling the playing field for access. And it is for this reason that “net neutrality” arguments tend to get most heated where there isn’t any true competition for access. Given real transparency and real competition for access, there would not be a need for legislation.

Copying machines are not designed to make things scarce. As a result, anything made available on the internet was relatively easy to copy. Which in turn meant that anything that was expressed as a digital object was difficult to make scarce. Many many industries have made money for many many years on the basis of relative scarcity; their concepts of pricing were based on scarcity models. So they tried to make the inherent abundance of the internet into something scarcer by using DRM or its more sophisticated new form, the App.

This approach, asymmetry-in-creation, and its alter ego, asymmetry-by-design, are about creating artificial scarcity. This is fundamentally doomed. I’ve said it many times: every artificial scarcity will be met by an equal and opposite artificial abundance. And, over time, the abundance will win. There will always be more people choosing to find ways to undo DRM than people employed in the DRM-implementing sector. Always.

So when people create walled-garden paid apps, others will create unpaid apps that get to the same material. It’s only a matter of time. Because every attempt at building dams and filters on the internet is seen as pollution by the volunteers. It’s not about the money, it’s about the principle. No pollutants.

Which brings me to the reason for this post. There’s been a lot of talk about the web and the internet making us dumber.

I think it’s more serious than that. What the web does is reduce the capacity for asymmetry in education. Which in turn undermines the exalted status of the expert.

The web makes experts “dumb”. By reducing the privileged nature of their expertise.

I have three children born since 1986. One has finished her Master’s and is now a teacher. One has just finished his A Levels and is taking a “gap year” before starting university in a year’s time. The third is still in school.

The web has made them smarter. They know things I did not know at their age, and I had a privileged upbringing and privileged access. They know things more deeply than I did. Their interest in things analogue is unabated; they think of the web as an AND to their analogue lives rather than an OR.

Many of you reading this are experts; I myself am considered an expert in some things. And the status bestowed upon us by our expertise is dwindling.

So what?

We should rejoice that access to the things that made us experts is now getting easier, cheaper and more universal.

We should rejoice that generations to come will out-expert us in every field we care to name.

We should rejoice that we continue to enter a world where the economics of abundance is displacing the economics of scarcity.

We should rise up every time there is an attempt to pollute the path of open access.

The web is not making us dumb. It is the expert in us that is being made to look dumb. And that is a Good Thing.

Views? Comments? I suspect this post might attract a few flames….

Thinking about waste

I am beholden to TS Holen for the wonderful photograph above, which he calls Ready-made Waste

To repeat what I said yesterday, as most of you probably know, I was born and brought up in Calcutta. A busy, vibrant city inhabited by millions of people. Who create a lot of waste.

While I lived there, I was fascinated by how this waste fed an entire human and economic ecosystem, the Indian and modern equivalent of the waste-pickers, scavengers, and rag-and-bone men. This ecosystem is not unique to Calcutta or even to my lifetime; Steven Johnson does a wonderful job of describing the way all this happened in Dickensian London in his book, The Ghost Map; if you haven’t read it, get yourself a copy today, it’s well worth a read. In fact all of Steven’s books are worth a read. Really.

My thanks to Rajib Singha for his composition above, Romancing the Raj: dung cakes drying on a wall in Bagbazar while a tram approaches

When I looked at waste in this context, one of the things that excited and astounded me was the vibrancy and sheer sustainability of the ecosystem around waste, as evinced by the way cow dung is mixed with straw, dried on walls and then used as cheap fuel in many parts of the world. Growing up amidst such practices taught me something: I learnt to respect waste and to recognise that people had livelihoods deeply intertwined with waste. Last year, I had the opportunity to walk around parts of Calcutta late one night, and experienced both joy and shock as I saw the ecosystem in action.

Over the years I’ve carried this learning into somewhat different contexts, particularly when it comes to project management and delivery. You see, I felt it was reasonable to consider all inefficiency as waste. As a consequence, when I observed an inefficient practice at work, I tried to identify the ecosystem participants for that waste, the people whose livelihoods depend on that waste. Because they were the ones most likely to push back against any change in work practices and processes. All projects are fundamentally about change, and unless such immune-system agents are identified and taken into account, project failure is likely.

This is not some deep personal insight. Software developers, especially those who use design patterns, are usually extremely competent at analysing the as-is context from the viewpoint of problems and workarounds. What problems need to be solved. What workarounds exist today. Which inefficiencies have become enshrined in work practices. The developer then sets out to identify the root causes for the workarounds, to design more appropriate responses and to plan for sensible migration paths from the workarounds.

Sometimes the workarounds are so deeply embedded that resistance is extremely high and, as a result, the temptation to fossilise the workaround into the system is immense. Which is why software developers are heard to say things like “there’s nothing as permanent as a temporary fix”.
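To make that concrete, here is a minimal, hypothetical sketch in Python; the billing feed, the function name and the threshold are all invented for illustration, not taken from any real system. It shows the kind of temporary fix that quietly becomes permanent: a workaround coded around an upstream data problem rather than the root cause being fixed.

```python
# Hypothetical illustration: a "temporary" workaround that fossilises.
# Assume a legacy billing feed (invented for this example) that sometimes
# sends amounts in pence instead of pounds. The root cause sits in another
# team's system, so a quick fix goes in "just for now".

def normalise_amount(raw_amount: float) -> float:
    """Return the invoice amount in pounds.

    TEMPORARY FIX (dated years ago): the legacy feed occasionally sends
    pence. Remove once the upstream extract is corrected.
    """
    # Heuristic workaround: anything suspiciously large is assumed to be pence.
    if raw_amount > 10_000:
        return raw_amount / 100.0
    return raw_amount

# Years later the upstream system has been replaced, but other reports now
# depend on this behaviour, so the workaround has become part of the system.
print(normalise_amount(125_000.0))  # pence-style input  -> 1250.0
print(normalise_amount(1_250.0))    # pounds-style input -> 1250.0
```

Once other outputs start depending on the corrected figures, removing the workaround feels riskier than keeping it; that is how the temporary becomes permanent.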

Which brings me to the crux of this post. Once you accept that inefficiency can be considered equivalent to waste, you can walk untrodden paths. Like the waste built into ways of marketing, selling and distributing digital content, ways that carry the habits of the analogue world, ways that exist primarily to feed the mouths of the ecosystem around that waste.

Music. Advertising. Newspapers. All marketed, sold, distributed with analogue overlays on digital processes. The kind of thinking that encourages people to design region coding for DVDs. [What customer value does that generate?]

Music. Advertising. Newspapers. Industries with waste built into their historical processes. Industries with ecosystems of people built around that waste, people with mouths to feed and bills to pay.

And now we have the cloud. Which is fundamentally about a new way of doing business, seeking to eradicate the waste that permeates most enterprise data centres. Overprovisioning is not a bad thing per se, but there’s overprovisioning and then there’s what has been happening for a few decades: capacity whole orders of magnitude beyond any sensible margin.
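As a rough illustration of the scale involved, here is a back-of-the-envelope sketch; the server counts and utilisation figures are invented for the purpose of the example, not measurements, but they show how siloed overprovisioning leaves most capacity idle compared with pooling the same demand on shared infrastructure.

```python
# Back-of-the-envelope sketch with invented numbers (not measurements),
# comparing siloed overprovisioning with pooled, cloud-style provisioning.

servers = 200                 # one dedicated server per application silo
capacity_per_server = 100     # arbitrary capacity units per server
average_use_per_server = 10   # typical silo runs far below its capacity
pooled_headroom = 1.5         # pool provisioned at 1.5x aggregate demand

dedicated_capacity = servers * capacity_per_server
aggregate_demand = servers * average_use_per_server
pooled_capacity = aggregate_demand * pooled_headroom

print(f"Dedicated capacity provisioned: {dedicated_capacity}")
print(f"Aggregate demand:               {aggregate_demand}")
print(f"Pooled capacity provisioned:    {pooled_capacity:.0f}")
print(f"Utilisation, siloed: {aggregate_demand / dedicated_capacity:.0%}")
print(f"Utilisation, pooled: {aggregate_demand / pooled_capacity:.0%}")
```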

The cloud is about eradicating waste.

Waste that feeds a massive ecosystem.

A massive ecosystem that will rise up and seek to prevent the eradication of that waste.

We’ve already seen this happen in the music business; we’ve already seen this happen in advertising; we’re seeing this happen in newspapers. And now we will see this happen in cloud.

People have built immense business models around erstwhile waste, the organisations have themselves grown immense as a result, and now they wield immense political and financial power. So they know how to arbitrage the situation and ensure that such inefficiencies are protected by law, by regulation. Which is what has been happening in copyright and intellectual property. Witness the abominations of the Digital Economy Act, of ACTA, of Hadopi.

Unlike the waste pickers and scavengers of prior centuries, the 20th and 21st century waste pickers haven’t evolved, haven’t adapted, haven’t faded gracefully away. Because they’re powerful enough to freeze progress, to insist on keeping their particular wastes in place.

But there’s one problem.

A big problem.

We can’t afford the waste any more. No longer sustainable.

Which is where I think Vendor Relationship Management (VRM) comes in. VRM represents a way through this impasse, by placing the power where it should be: with the customer. It is the customer who has the highest motivation to eradicate waste in a system; yes, tools are necessary to help identify that waste and to deal with it.

The r-button or the relationship button, a key concept in VRM

One way of looking at VRM tools is that they will reduce human transactional latency by concentrating on the customer and the relationship first and on the transaction only as a consequence of that.

Doc Searls, the driving force behind VRM, has been a personal friend and mentor for many years now. This post was catalysed as a result of a recent conversation with him. The way advertising works now, the way we buy and sell, the way CRM systems operate, it’s all one-way. There’s a lot of inbuilt waste, waste that can be reduced, even annihilated, by giving customers the right voice, empowerment and tools. Which is really what VRM is about.

There’s a workshop to do with all this coming up next week, to be held at the Harvard Law School. People can contact Doc at dsearls AT cyber.law.harvard.edu, or on Twitter through @dsearls.

Make any sense? Let me know what you think.