Smorgasbord 2

Had a number of you ping me about my last post, where I shared the tabs I had open. So I thought I’d do it again:

1. Radiation levels in Fukushima lower than predicted. Couldn’t understand why this story didn’t receive more coverage. The levels being encountered seem remarkably low, so I wanted to look into it. I was in Japan last month, due there again next month. Love it.

2. Soccket To Me: I just love the idea of Soccket, decided I would spring for giving someone the energy-producing football. Amazing invention. Great video as well. Found via TED.

3. Sustainability, 21st century style: I’d heard that Patagonia were encouraging people to create a secondary market for their products rather than keep buying new. Now that’s a story I like. Someone at Salesforce.com gave me a Patagonia garment, so I looked into it.

4. Trent Reznor on TuneCore: I’m not a big fan of Nine Inch Nails, some of their music is too noir for me. But I love Trent Reznor’s attitude to the industry. So I try and keep up with what he’s saying and doing. Their latest album is, as usual, downloadable for free.

5. Brit on the PlugBug: Friend Brit Morin launches a fascinating site, I love the PlugBug and the rickshaw. And Weduary will probably make a bang not just in the US, but also in places like India. I like seeing what friends are up to.

6. Evolutionary ecology of pungency in wild chillies: I have a real passion for capsaicin, so I tend to spend some time every week trying to understand more about it.

 

That’s it for now. Tell me if you like my doing this occasionally. Tell me to stop. All feedback useful.

Smorgasbord

I happened to look at the tabs I’d got open over the past few days, stuff I was gently drifting through, stuff I intend to complete reading/experiencing later. And I realised they were sufficiently eclectic to be worth sharing, in case some of you hadn’t come across them or were interested anyway. So here goes:

1. Malaria’s Achilles Heel: Details of a recent breakthrough in understanding how the parasite gets into red blood cells, and the discovery of a single receptor without which the parasite appears to be powerless. Early days, but there is now a real possibility that an effective vaccine will emerge.

2. Crowd-curating: Continuing to track what hypothes.is is doing, something I’m very excited about. “A distributed, open source platform for the collaborative evaluation of documents”.

3. Matsutake dobin mushi: Ever since I experienced this dish a month or so ago, I’ve been mesmerised. Been trying to find out everything I can about it.

4. Unintended consequences of age-based privacy laws: danah boyd, John Palfrey, Eszter Hargittai and Jason Schultz looking into Facebook ToS, age constraints and COPPA.

5. Preserving the lifesaving power of antimicrobial agents: James Hughes’ seminal paper on running out of antibiotics.

6. Keeping fit in 1919: An uproariously funny booklet issued in 1919, not intended to be funny at all, brought to life by the wonderful How To Be A Retronaut. Thank you Chris Wild.

7. Serge Storms: An excerpt from Tim Dorsey’s next book. Can’t wait.

8. Wolfgang’s Vault: The best live music downloads site in the world for retired hippies like me.

Thinking about streams of information at work

At school and at university, I was reminded by teachers not to allow the knowledge I’d accumulated to unduly constrain my thinking about the future. There was something liberating about the mere process of trying to understand that knowledge could be considered a constraint, a liberation that continued throughout my life, evinced at different times and in different ways.

Early on, it was a personal fascination with the concept of time, triggered by an experience every child in India goes through: finding out that the Hindi word for yesterday, “kal”, was the same as the word for tomorrow. While still at school, as with most of my fellow students in the Sciences stream, the thoughts and writings of Richard Feynman entered my life. His teachings on The Character of Physical Law, more particularly the chapter on The Distinction of Past and Future, influenced my conceptions of time even further, giving me a sense of its irreversible nature. Most 13-14-year-olds have a blank-slate approach when it comes to absorbing ideas, and so it was when we were first able to think about what Einstein had been saying about relativity, allowing us to view time as a dimension.

Then came university, where I read Economics, and then work, where none of this appeared to matter. The nearest I came to thinking further about all this was in my late twenties, when I went through a long period of regular dreaming, often lucid, often with repeating themes. [The commonest theme had me in flight: I’d slowly lengthen my stride and then gently take off, more gliding than flying, able to keep myself airborne for a minute or so, soaring and banking using my arms as wings, never flapping, unable to hover.] They were dreams rather than nightmares, relaxing me, letting me feel rested and refreshed; this, coupled with their lucidity, meant that I tended to remember my dreams. And occasionally, very occasionally, I would experience something in “real life” that seemed, if I stretched it enough, to be something I’d experienced before in a dream. But I “knew” it wasn’t possible and so I dismissed it. Sort of. It didn’t stop me from reading Michio Kaku from the mid-1990s onwards, starting with Hyperspace/Parallel Universes. But the Feynman in me ruled: time continued to be seen as something irreversible.

One other principle stayed with me, influenced by some of the sayings and quotations I’d been attracted to over the years: Einstein saying that we couldn’t solve problems by using the same kind of thinking we used to create the problems in the first place, Einstein suggesting that common sense was the collection of prejudices one has by age eighteen, and the saying (often credited to Schopenhauer) that talent hits a target no one else can hit, while genius hits a target no one else can see. In each case, I was reminded of what my teachers had said to me, about not allowing my “knowledge” to constrain my “thinking”. Easier said than done.

Over time, I understood more about cognitive biases and anchors and frames, a learning that was accelerated by conversations with colleagues like Sean Park, Malcolm Dick and James Montier while at Dresdner Kleinwort. So it should come as no surprise that in my roles as deputy CIO and CIO, and as chief scientist, in multiple organisations, I kept taking the “don’t let the past predict the future” tablets, religiously, systematically, every day. That didn’t always endear me to everyone, but it helped me keep my thinking fresh. It was the reason I kept wanting to connect with people outside the organisation at least as much as I spoke with people within the organisation. It was the reason that I’ve always tried to support a graduate intake program in firms I’ve worked in, one way of ensuring that fresh thinking is allowed to enter an organisation.

Which is why I loved the Wayne Gretzky quote about playing where the puck is going to be, not where the puck is or was. Which is why I loved the Steve Jobs quote, in his Stanford Commencement Address in June 2005:

Again, you can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something – your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.

In the words of Stephen Stills: Don’t let the past remind us of what we are not now. [That’s taken from my all-time number one favourite song, Suite: Judy Blue Eyes. You can hear a sample, containing the quote, here…. and even buy the MP3 download if you so wish.]

It is with all this in mind that I spend time thinking about streams of information. For most of my adult life, these streams have been about the past. Transactions that had already happened. We spent a long time studying the fossil remains of human activity in order to try and predict what the future would look like, a mongrel form halfway between scatology and eschatology.

More recently, we’ve been able to “life-stream”, sharing our current activities and our “status” with others, aided and abetted by near-ubiquitous connectivity, ever-smarter devices and digital frameworks that support the social networks. In the past, we were only able to capture things that had happened. We were used to calling things that had happened “transactions” and so we called the analysis of those records “transaction processing”. We’re able to look now at what people are doing, within “activity streams”, and, because we share our activity streams within social networks, we call the study of this “social media monitoring”.

But we’re on the cusp of something way way more exciting, in a classic William Gibson future-is-here-but-unevenly-distributed kind of way: we’re beginning to signal what we intend to do.

Doc Searls, a good friend, was the first person I heard using the term “Intention Economy” to describe this. And I’ve signalled my intention to him, by pre-ordering his book on the subject, due May next year.

Esther Dyson, another good friend, when talking about the future of internet search, complimented Bill Gates on saying “the future of search is verbs”….. now that gets interesting, really interesting, when you consider verbs as having tenses. Tenses that help segment the continuum of time. Past. Present. And future.

The future.

When I was at university, one of the things I studied in classical economics was the works of Jean-Baptiste Say. And one of the ways in which his “Law” was paraphrased, originally by John Maynard Keynes, was as follows: Supply creates its own demand.

We’re soon going to be able to signal our intentions in ways that we could never have done before.

Over time, those signals will become more sophisticated, more evolved, more nuanced.  Social norms will be formed, telling us what we can or can’t do with our signalling of intent. The semaphoring of intent will slowly come to include disinformation, the false-carding,  feints and dummies, elaborate ways of disguising intent in order to further some other intent. With that will come the need to watch for, and to recognise, the digital “tells” in the world of supply-and-demand poker. On both sides.

And over time, the systems and processes required to interpret and assimilate those signals into actionable information will evolve too.

I cannot wait.

Of private clouds and zero-sum games

If I interpret the comments on my last post correctly, both online and offline, a small number of you felt that I’d been unduly strong in my bias against the “private cloud”; it sounded like you thought I’d been drinking too much of the Kool-Aid since joining Salesforce.com a year ago this weekend.

Actually, my bias against the private cloud is around a decade old. And it stems from experiences I had during my six-plus years as CIO of Dresdner Kleinwort.

First and foremost, I think of the cloud as consisting of three types of innovation: technology, “business model” and culture. Far too often, I get the sense that people concentrate on the technology innovation and miss out on the remarkable value offered by the other two types of innovation. In this particular post I want to concentrate on the business model innovation aspect.

Shared-service models have been around for some time now; they’re not new per se. At Dresdner Kleinwort, we implemented shared-service models wherever relevant, sometimes within a business unit, sometimes across business units within a business line, and sometimes across the whole company. The principle was simple: investment and operating costs (the “capex” and “opex”) for the shared service would be distributed across all the consumers of the shared service according to some agreed allocation key. Sometimes it was a simple key, like headcount. Sometimes it was predefined each year at a central level, as was the practice with “budget” foreign exchange rates. Sometimes it was hand-crafted by service, involving long hours of painful negotiation. Sometimes it wasn’t even agreed, just mandated from above. One way or the other, there was an allocation key for the shared service.
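As a toy illustration of the headcount key described above (the unit names and figures here are invented, not Dresdner Kleinwort’s), the mechanics look something like this:

```python
# Toy shared-service allocation using a headcount key.
# Unit names and figures are invented for illustration.
total_cost = 1_200_000  # annual capex depreciation + opex of the shared service

headcount = {"Equities": 300, "Debt": 200, "Advisory": 100}  # the allocation key

total_heads = sum(headcount.values())
allocation = {unit: total_cost * heads / total_heads
              for unit, heads in headcount.items()}

for unit, charge in allocation.items():
    print(f"{unit}: {charge:,.0f}")
# Equities: 600,000
# Debt: 400,000
# Advisory: 200,000

# Whatever the key, the charges always sum back to the full cost of the service.
assert sum(allocation.values()) == total_cost
```

Swap the headcount dict for any other key, floor space, budget FX rates, a hand-crafted split, and the structure is identical: the key changes, the zero-sum property does not.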

Dresdner Kleinwort was part of Dresdner Bank, and Dresdner Bank was wholly owned by Allianz. There were shared services at the Dresdner level, and at the Allianz level. So there was a whole juggernaut of allocations going on, at multiple levels.

And God was in His Heaven, All was Well With the World.

Until someone wanted to leave the sharing arrangement.

At which point all hell broke loose.

Because the capex had been spent, and the depreciation tail had to be allocated to someone. If the number of someones grew smaller, the amount allocated grew larger. This wasn’t just about capex; not all of opex was adequately volume-sensitive, so similar effects could be observed.
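A minimal sketch of that effect, with invented figures: the depreciation tail is fixed, so every exit raises the charge on everyone who remains.

```python
# A fixed depreciation tail re-allocated equally as participants exit.
# Figures are invented for illustration.
depreciation_tail = 900_000  # sunk capex still to be written down each year

participants = ["Equities", "Debt", "Advisory"]
charges = []

while participants:
    per_unit = depreciation_tail / len(participants)
    charges.append(per_unit)
    print(f"{len(participants)} sharing -> {per_unit:,.0f} each")
    participants.pop()  # one unit leaves the sharing arrangement

# 3 sharing -> 300,000 each
# 2 sharing -> 450,000 each
# 1 sharing -> 900,000 each
```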

“Private” models of shared services were fundamentally zero-sum games: the institution coughed up all the capex and opex, and the institution had to allocate all of it. Regardless of the number of participants. Sometimes there was scope for some obfuscation: there was a central pot for “restructuring”, and all the shared-service units ran like hell to reserve as much of it as possible every time the window opened for such a central pot. If you were lucky, you could dump the trailing costs left by the exiting business into the restructuring pool, thereby avoiding the screams of the survivor units. But it was an artificial relief: the truth was that the company bore all the costs.

A zero-sum game.

Shared resources have costs that have to be shared as well. If the only people you can share them with are the people in the company, then the zero-sum is unavoidable. Things are made more complicated by using terms like capex and opex, by choosing to “capitalise” some expenditures and not others, by having complex rules for such capitalisation. Such worlds were designed for steady-state, not for change.

We’re in a business environment where change is a constant, and where the pace of change is accelerating. So there’s always something changing in the systems landscape. Business units come and go; products and services offered come and go; locations and even lines of business come and go; and entire businesses also come and go within the larger holding company structure.

Change is a constant.

So with the change comes even more pain. Lists of “capitalised” assets have to be checked and cross-checked regularly, to validate that the assets are still in use; at Dresdner these were called impairment reviews. If not, the remaining depreciation tail of the “impaired” asset has to be absorbed in the next accounting period.

What joy. [Yes, dear reader, the life of a CIO is deeply intertwined with the life of a spreadsheet jock].

In many respects, the technology innovation inherent in the cloud was foreseeable and predictable. Compute, storage and bandwidth were all going down paths of standardisation, to a point where abstract mathematical models could be used to describe them. As the level of standardisation and abstractability increased, the resources became more fungible. That fungibility could be exploited to change the way systems were architected: higher cohesion, looser coupling, better and more dynamic allocation and orchestration of the resources.

The business innovation in the cloud was, similarly, also foreseeable and predictable. The disaggregation and reaggregation made possible by the standardisation and virtualisation would allow for different opportunities for investment and for risk transfer.

Now it was no longer a zero-sum game. The company that spent the capex and opex took the risk that there would be entrants and exits, high volumes and low; the technology innovations were used to balance loads and fine-tune performance; the multitenant approach often led to lower licence costs, and these could be exploited to defray some of the continuing investments needed in the balancing/tuning technologies.

Individual business units and lines and even entire companies no longer had to carry out impairment reviews for such assets. Because they didn’t “own” the assets: the heart of the cultural innovation was the change in attitudes to ownership.

The private cloud proponents have sought to blur the lines by bringing in arguments to do with data residency.

Data.

Not code.

Data will reside where it most makes sense. Sometimes there are regulatory reasons to hold the data in a particular jurisdiction. Sometimes there are latency reasons to hold data within a particular distance limit. Sometimes there are cultural reservations that take time to overcome. The rest of the time, data can be held wherever it makes economic sense.

Serious cloud computing companies have known this, have been working on it, and will continue to work on it. The market sets the standard.

Code, on the other hand, particularly multitenant code, has no such residency requirement. Unless you happen to ask someone whose business model is to charge licences connected to on-premise processors.

Change is a constant in business life. The cloud is about change. The business model of the public cloud is designed to make that change possible, without the palaver of impairment reviews and capex writeoffs and renegotiation of allocation keys and and and

Which is why, in principle, the private cloud makes no sense to me.

Views?

Musing about the cloud and enterprise cost allocation

Over a decade ago, after a couple of years as Deputy CIO, I was appointed Global CIO of Dresdner Kleinwort in May 2001. Times were hard, and my brief was harder still: to reduce technology capital expenditure and operating expenses by 50% within eighteen months, while providing “leadership, stability and continuity” to the organisation. At the time the IT department was nearly 2000 strong and spent around £700m pa in capex alone. I was surrounded by many very talented people, and, largely due to their ingenuity and actions, we delivered the goods. I’m privileged to have worked with them, and even more privileged to be in touch with so many of them a decade later.

Some of the things we did were standard, like shutting down remote offices when we were retracting our presence from those regions, renegotiating contracts with core suppliers, stopping activities that were yesterday’s necessities but today’s luxuries, that kind of thing. A few were more non-standard: shutting down our offshore operations in India and Eire, changing our hiring policy to stop hiring laterals and increasing graduate intake, establishing a formal commitment to opensource and to start-ups.

But it all began with our trying to understand our cost and allocations structures. Easier said than done. This was because it was not enough for us to save the money, we had to save it in the right places. We had to reduce it very heavily for advisory services, heavily for equities-related asset classes and services, and less so for debt- and treasury- related activities. Which meant that we had to understand how our costs flowed from IT to each business.

For most of my life, I’ve worked in very large organisations, often as an “official maverick” but nevertheless part of an extensive and complex fabric. And for most of my life, I’ve been astounded by the incredible difficulty I’ve had in getting two questions answered: What do I spend? How many people do I have? Over the years, as my career developed in its own serendipitous way, I found myself in charge of larger and larger departments with bigger and bigger budgets. And answering these two questions became harder and harder.

Perhaps I should have known better. When I was in my teens, my father used to say that the only “truth” on a balance sheet was the cash position; everything else was a “conventional” representation of information. If you didn’t understand the conventions being followed, you had no ability to understand the information presented.

So there we were, at Dresdner Kleinwort, trying to understand how much we spent, what we spent it on, what was discretionary, what was not, why. Trying to understand how many people we had, who was permanent, who was not. Trying to understand the people we had who “didn’t exist”, because they were part of a service contract; they took up space, had kit, had desks, had phones and badges, but weren’t part of our headcount. Trying to understand and appreciate the people who weren’t there but were on the payroll: on sabbatical, on maternity leave, long-term ill, in dispute. Some were even certified insane….

It turned out that we “controlled” a relatively small proportion of the money in the first place, particularly when it came to capex, but true even for opex. Far less than half. A big chunk of our budget related to “sins of the father”, the depreciation associated with capitalised investments from prior years. Some of the money related to long-term contracts where we had no swing room. A portion related to guaranteed bonuses of staff hired in prior years, and a similar portion to the “month 13” payments that were standard in one or more of the operating units. And then there were the things we were legally obliged to do, the projects that related to legal and regulatory requirements.

Then came our allocations and overheads: as the largest shared-service department, we received the lion’s share of the shared-service costs that had to be allocated out, like premises and heating and lighting and insurances.

That didn’t leave very much. Our so-called “discretionary” expenditure was less than 20% of the overall cake. Which made the very idea of a 50% cut interesting to say the least. But we did it, nevertheless.

In that process, I learnt a lot about allocations, augmenting what I’d already learnt in other companies by then. Here’s a sample:

  • One firm allocated all its IT costs according to the floor space consumed by each department, something that was easy to calculate. As a result, the investment bankers, the lightest users of technology at the time, were charged the bulk of IT costs.
  • It made no sense to me, but apparently it was common practice for one cost centre to charge another. So IT costs, for example, not only went direct to the business units, but also via other shared-service units. It depended on who did the “sponsoring”; this was probably a throwback to some shared-service manager who wanted his cost centre to look as big as possible, for his CV. But the convention stuck. As a result we had strange anomalies: while our IT costs remained the same, the charge that hit the business unit differed, based on the particular allocation routes and keys used. What this meant in practice was that we “saved” the equities business more money if we took 100 people out of their direct support costs than if we took 120 people out of those who supported equities settlement, whose costs were routed through operations. The idea that two people earning the same money and seated next to each other represented different levels of saving took some getting used to.
  • Some labour was capitalised and some was not; if you reduced the headcount in areas where projects were capitalised, the savings took time to flow through. Capitalisation rules were also different for different classes of resource: it was assumed that contractors worked on projects 100% of their chargeable time, but permanent staff spent only 70% of their time on projects, or some such ratio. So the way the costs flowed looked different.
  • Shared-service allocations were an art in themselves. In at least one company I worked in, as a result of successive waves of layoffs, there were large swathes of unoccupied desks. Some of these unoccupied areas were islands in the middle of occupied areas, and soon became informal meeting areas. Lo and behold, the areas were chained off and declared verboten, on the basis that you couldn’t use it unless you were paying for it…. even though the company was paying for it anyway.
  • In yet another place, we found out that it was more expensive for us not to book a meeting room than to book one…. the allocation key for unused meeting rooms hit us harder than the used version.
  • One of the odder effects we noticed was that of project delay. If you delayed the point at which you actually delivered something that went into production, then you delayed the point at which backlogged work-in-progress would start rolling out in capitalised form. [When we froze all code changes during the lead-up to the euro and similarly to Y2K, the monthly charges from IT went down dramatically, even though actual expenditure actually increased…]
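The capitalisation point in the list above can be sketched numerically. The 100%/70% ratios are the ones mentioned; the cost figure and the helper function are invented for illustration.

```python
# How capitalisation ratios change the way identical labour costs flow.
# The 100% / 70% ratios are from the post; the cost figure is invented.
ANNUAL_COST = 100_000  # fully loaded annual cost of one person

def cost_flow(capitalised_pct: int, cost: int = ANNUAL_COST) -> tuple[int, int]:
    """Split one person's cost into capitalised spend (depreciated over
    later years) and expense that hits the P&L immediately."""
    capitalised = cost * capitalised_pct // 100
    return capitalised, cost - capitalised

print("contractor:", cost_flow(100))  # (100000, 0): all flows via depreciation
print("permanent: ", cost_flow(70))   # (70000, 30000): 30% hits opex now

# So cutting a permanent head shows up faster in the accounts than cutting a
# contractor, even when both cost the company exactly the same.
```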

By now you should have a feel for the level of complexity involved in allocating costs related to headcount and project and space and shared services in general, by accident and by design. I hope your experiences have been better than mine.

But all this pales into insignificance when you look at how IT infrastructure costs are allocated. Because now you have systems people interacting with accountants and usually a smattering of consultants as well, and between the three a truly Byzantine structure gets formed. When I looked at what happens in the allocation of data centre costs, hardware, storage, bandwidth, market data, and so on; when I looked at how per-processor licence costs were spread out; when I looked at how firewall and security costs were distributed across the organisation; when I saw how operations, maintenance, support and upgrade/fix costs were charged….. I developed a bad case of spreadsheet vertigo.

These experiences have influenced me, affected me, perhaps even scarred me. In fact I think there’s only one form of “allocation” that scares me more than IT infrastructure allocation. And that will be the subject of a post at a later date.

If you have to develop a conventional representation of the costs of your cloud, it’s not cloud.

If you have to create complex allocation keys for your cloud, it’s not cloud.

Cloud is when what you see is what you get, in the context of billing and payment.

Which is why I find all this talk of “private cloud” odd. By electing to retain hardware capital expenditure, by choosing to continue with associated maintenance and upgrade costs, by voting to stay captive within the prison of the related processor-driven licensing models, people are in effect choosing to stay in the world of complex cost allocation models.

Such cost allocation models are part and parcel of why firms find it hard to be agile, to be responsive to change.

In current economic conditions, business agility is no longer a nice-to-have, it’s a must for survival.

Companies that are “born cloud” have this in their DNA; others will have to evolve this capacity, and evolve it quickly.

It’s a tough world out there.