Thinking about streams of information at work

At school and at university, I was reminded by teachers not to let the knowledge I’d accumulated unduly constrain my thinking about the future. There was something liberating about the very idea that knowledge could be considered a constraint, a liberation that continued throughout my life, evinced at different times and in different ways.

Early on, it was a personal fascination with the concept of time, triggered by an experience every child in India goes through: finding out that the Hindi word for yesterday, “kal”, is the same as the word for tomorrow, “kal”. While still at school, as with most of my fellow students in the Sciences stream, the thoughts and writings of Richard Feynman entered my life. His teachings on The Character of Physical Law, more particularly the chapter on The Distinction of Past and Future, influenced my conception of time even further, giving me a sense of its irreversible nature. Most 13- to 14-year-olds have a blank-slate approach when it comes to absorbing ideas, and so it was when we were first able to think about what Einstein had been saying about relativity, allowing us to view time as a dimension.

Then came university, where I read Economics, and then work, where none of this appeared to matter. The nearest I came to thinking further about all this was in my late twenties, when I went through a long period of regular dreaming, often lucid, often with repeating themes. [The commonest theme had me in flight: I’d slowly lengthen my stride and then gently take off, more gliding than flying, able to keep myself airborne for a minute or so, soaring and banking using my arms as wings, never flapping, unable to hover.] They were dreams rather than nightmares, relaxing me, letting me feel rested and refreshed; this, coupled with their lucidity, meant that I tended to remember my dreams. And occasionally, very occasionally, I would experience something in “real life” that seemed, if I stretched it enough, to be something I’d experienced before in a dream. But I “knew” it wasn’t possible and so I dismissed it. Sort of. It didn’t stop me from reading Michio Kaku from the mid-1990s onwards, starting with Hyperspace/Parallel Universes. But the Feynman in me ruled: time continued to be seen as something irreversible.

One other principle stayed with me, influenced by some of the sayings and quotations I’d been attracted to over the years: Einstein saying that we couldn’t solve problems by using the same kind of thinking we used to create the problems in the first place; Einstein suggesting that common sense was the collection of prejudices one has by age eighteen; someone (the line is occasionally credited to Schopenhauer) saying that talent hits a target no one else can hit, while genius hits a target no one else can see. In each case, I was reminded of what my teachers had said to me, about not allowing my “knowledge” to constrain my “thinking”. Easier said than done.

Over time, I understood more about cognitive biases and anchors and frames, a learning that was accelerated by conversations with colleagues like Sean Park, Malcolm Dick and James Montier while at Dresdner Kleinwort. So it should come as no surprise that in my roles as deputy CIO and CIO, and as chief scientist, in multiple organisations, I kept taking the “don’t let the past predict the future” tablets, religiously, systematically, every day. That didn’t always endear me to everyone, but it helped me keep my thinking fresh. It was the reason I kept wanting to connect with people outside the organisation at least as much as with people within it. It was the reason I’ve always tried to support a graduate intake programme in the firms I’ve worked in, one way of ensuring that fresh thinking is allowed to enter an organisation.

Which is why I loved the Wayne Gretzky quote about playing where the puck is going to be, not where the puck is or was. Which is why I loved the Steve Jobs quote, in his Stanford Commencement Address in June 2005:

Again, you can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something – your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.

In the words of Stephen Stills: Don’t let the past remind us of what we are not now. [That’s taken from my all-time number one favourite song, Suite: Judy Blue Eyes. You can hear a sample, containing the quote, here… and even buy the MP3 download if you so wish.]

It is with all this in mind that I spend time thinking about streams of information. For most of my adult life, these streams have been about the past. Transactions that had already happened. We spent a long time studying the fossil remains of human activity in order to try and predict what the future would look like, a mongrel form halfway between scatology and eschatology.

More recently, we’ve been able to “life-stream”, sharing our current activities and our “status” with others, aided and abetted by near-ubiquitous connectivity, ever-smarter devices and digital frameworks that support the social networks. In the past, we were only able to capture things that had happened. We were used to calling things that had happened “transactions” and so we called the analysis of those records “transaction processing”. We’re able to look now at what people are doing, within “activity streams”, and, because we share our activity streams within social networks, we call the study of this “social media monitoring”.

But we’re on the cusp of something way way more exciting, in a classic William Gibson future-is-here-but-unevenly-distributed kind of way: we’re beginning to signal what we intend to do.

Doc Searls, a good friend, was the first person I heard using the term “Intention Economy” to describe this. And I’ve signalled my intention to him, by pre-ordering his book on the subject, due May next year.

Esther Dyson, another good friend, when talking about the future of internet search, complimented Bill Gates on saying “the future of search is verbs”… now that gets interesting, really interesting, when you consider verbs as having tenses. Tenses that help segment the continuum of time. Past. Present. And future.

The future.

When I was at university, one of the things I studied in classical economics was the work of Jean-Baptiste Say. And one of the ways in which his “Law” was paraphrased, originally by John Maynard Keynes, was as follows: Supply creates its own demand.

We’re soon going to be able to signal our intentions in ways that we could never have done before.

Over time, those signals will become more sophisticated, more evolved, more nuanced. Social norms will be formed, telling us what we can or can’t do with our signalling of intent. The semaphoring of intent will slowly come to include disinformation, the false-carding, feints and dummies, elaborate ways of disguising intent in order to further some other intent. With that will come the need to watch for, and to recognise, the digital “tells” in the world of supply-and-demand poker. On both sides.

And over time, the systems and processes required to interpret those signals and assimilate them into actionable information will evolve too.

I cannot wait.

Of private clouds and zero-sum games

If I interpret the comments on my last post correctly, both online and offline, a small number of you felt that I’d been unduly strong in my bias against the “private cloud”; it sounded like you thought I’d been drinking too much of the Kool-Aid since joining Salesforce.com a year ago this weekend.

Actually, my bias against the private cloud is around a decade old. And it stems from experiences I had during my six-plus years as CIO of Dresdner Kleinwort.

First and foremost, I think of the cloud as consisting of three types of innovation: technology, “business model” and culture. Far too often, I get the sense that people concentrate on the technology innovation and miss out on the remarkable value offered by the other two types of innovation. In this particular post I want to concentrate on the business model innovation aspect.

Shared-service models have been around for some time now; they’re not new per se. At Dresdner Kleinwort, we implemented shared-service models wherever relevant, sometimes within a business unit, sometimes across business units within a business line, and sometimes across the whole company. The principle was simple: investment and operating costs (the “capex” and “opex”) for the shared service would be distributed across all the consumers of the shared service according to some agreed allocation key. Sometimes it was a simple key, like headcount. Sometimes it was predefined each year at a central level, as was the practice with “budget” foreign exchange rates. Sometimes it was hand-crafted by service, involving long hours of painful negotiation. Sometimes it wasn’t even agreed, just mandated from above. One way or the other, there was an allocation key for the shared service.

Dresdner Kleinwort was part of Dresdner Bank, and Dresdner Bank was wholly owned by Allianz. There were shared services at the Dresdner level, and at the Allianz level. So there was a whole juggernaut of allocations going on, at multiple levels.

And God was in His Heaven, All was Well With the World.

Until someone wanted to leave the sharing arrangement.

At which point all hell broke loose.

Because the capex had been spent, and the depreciation tail had to be allocated to someone. If the number of someones grew smaller, the amount allocated to each grew larger. This wasn’t just about capex; not all of opex was adequately volume-sensitive, so similar effects could be observed.
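
To make that arithmetic concrete, here is a minimal sketch of such an allocation; the unit names and figures are invented purely for illustration, not taken from the Dresdner numbers.

    # A fixed shared-service cost, allocated pro rata to a headcount key.
    # The total never changes; the key only decides who pays what.
    annual_cost = 12_000_000  # remaining depreciation tail plus opex, say

    headcount = {"Equities": 400, "Fixed Income": 300, "Advisory": 200, "Treasury": 100}

    def allocate(cost, key):
        """Distribute a fixed cost across consumers in proportion to the key."""
        total = sum(key.values())
        return {unit: cost * weight / total for unit, weight in key.items()}

    before = allocate(annual_cost, headcount)
    print(before["Equities"])            # 4800000.0 while four units share the service

    headcount.pop("Advisory")            # one consumer exits the sharing arrangement
    after = allocate(annual_cost, headcount)
    print(after["Equities"])             # 6000000.0: fewer someones, larger slices

    # Zero-sum: the institution still bears the full 12m either way.
    print(sum(before.values()) == sum(after.values()) == annual_cost)   # True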

“Private” models of shared services were fundamentally zero-sum games: the institution coughed up all the capex and opex, and the institution had to allocate all of it. Regardless of the number of participants. Sometimes there was scope for some obfuscation: there was a central pot for “restructuring”, and all the shared-service units ran like hell to reserve as much of it as possible every time the window opened for such a central pot. If you were lucky, you could dump the trailing costs left by the exiting business into the restructuring pool, thereby avoiding the screams of the survivor units. But it was an artificial relief: the truth was that the company bore all the costs.

A zero-sum game.

Shared resources have costs that have to be shared as well. If the only people you can share them with are the people inside the company, then the zero-sum is unavoidable. Things are made more complicated by using terms like capex and opex, by choosing to “capitalise” some expenditures and not others, by having complex rules for such capitalisation. Such worlds were designed for steady-state, not for change.

We’re in a business environment where change is a constant, and where the pace of change is accelerating. So there’s always something changing in the systems landscape. Business units come and go; products and services offered come and go; locations and even lines of business come and go; and entire businesses also come and go within the larger holding company structure.

Change is a constant.

So with the change comes even more pain. Lists of “capitalised” assets have to be checked and cross-checked regularly, to validate that the assets are still in use; at Dresdner these were called impairment reviews. If not, the remaining depreciation tail of the “impaired” asset has to be absorbed in the next accounting period.

What joy. [Yes, dear reader, the life of a CIO is deeply intertwined with the life of a spreadsheet jock].

In many respects, the technology innovation inherent in the cloud was foreseeable and predictable. Compute, storage and bandwidth were all going down paths of standardisation, to a point where abstract mathematical models could be used to describe them. As the level of standardisation and abstractability increased, the resources became more fungible. That fungibility could be exploited to change the way systems were architected: higher cohesion, looser coupling, better and more dynamic allocation and orchestration of the resources.

The business innovation in the cloud was, similarly, also foreseeable and predictable. The disaggregation and reaggregation made possible by the standardisation and virtualisation would allow for different opportunities for investment and for risk transfer.

Now it was no longer a zero-sum game. The company that spent the capex and opex took the risk that there would be entrants and exits, high volumes and low; the technology innovations were used to balance loads and fine-tune performance; the multitenant approach often led to lower licence costs, and these could be exploited to defray some of the continuing investments needed in the balancing/tuning technologies.

Individual business units and lines and even entire companies no longer had to carry out impairment reviews for such assets. Because they didn’t “own” the assets: the heart of the cultural innovation was the change in attitudes to ownership.

The private cloud proponents have sought to blur the lines by bringing in arguments to do with data residency.

Data.

Not code.

Data will reside where it makes most sense. Sometimes there are regulatory reasons to hold the data in a particular jurisdiction. Sometimes there are latency reasons to hold data within a particular distance limit. Sometimes there are cultural reservations that take time to overcome. The rest of the time, data can be held wherever it makes economic sense.

Serious cloud computing companies have known this, have been working on it, and will continue to work on it. The market sets the standard.

Code, on the other hand, particularly multitenant code, has no such residency requirement. Unless you happen to ask someone whose business model is to charge licences connected to on-premise processors.

Change is a constant in business life. The cloud is about change. The business model of the public cloud is designed to make that change possible, without the palaver of impairment reviews and capex writeoffs and renegotiation of allocation keys and and and

Which is why, in principle, the private cloud makes no sense to me.

Views?

Musing about the cloud and enterprise cost allocation

Over a decade ago, after a couple of years as Deputy CIO, I was appointed Global CIO of Dresdner Kleinwort in May 2001. Times were hard, and my brief was harder still: to reduce technology capital expenditure and operating expenses by 50% within eighteen months, while providing “leadership, stability and continuity” to the organisation. At the time the IT department was nearly 2000 strong and spent around £700m pa in capex alone. I was surrounded by many very talented people, and, largely due to their ingenuity and actions, we delivered the goods. I’m privileged to have worked with them, and even more privileged to be in touch with so many of them a decade later.

Some of the things we did were standard, like shutting down remote offices when we were retracting our presence from those regions, renegotiating contracts with core suppliers, stopping activities that were yesterday’s necessities but today’s luxuries, that kind of thing. A few were more non-standard: shutting down our offshore operations in India and Eire, changing our hiring policy to stop hiring laterals and increase graduate intake, and establishing a formal commitment to opensource and to start-ups.

But it all began with our trying to understand our cost and allocation structures. Easier said than done. This was because it was not enough for us to save the money; we had to save it in the right places. We had to reduce it very heavily for advisory services, heavily for equities-related asset classes and services, and less so for debt- and treasury-related activities. Which meant that we had to understand how our costs flowed from IT to each business.

For most of my life, I’ve worked in very large organisations, often as an “official maverick” but nevertheless part of an extensive and complex fabric. And for most of my life, I’ve been astounded by the incredible difficulty I’ve had in getting two questions answered: What do I spend? How many people do I have? Over the years, as my career developed in its own serendipitous way, I found myself in charge of larger and larger departments with bigger and bigger budgets. And answering these two questions became harder and harder.

Perhaps I should have known better. When I was in my teens, my father used to say that the only “truth” on a balance sheet was the cash position; everything else was a “conventional” representation of information. If you didn’t understand the conventions being followed, you had no ability to understand the information presented.

So there we were, at Dresdner Kleinwort, trying to understand how much we spent, what we spent it on, what was discretionary, what was not, why. Trying to understand how many people we had, who was permanent, who was not. Trying to understand the people we had who “didn’t exist”, because they were part of a service contract; they took up space, had kit, had desks, had phones and badges, but weren’t part of our headcount. Trying to understand and appreciate the people who weren’t there but were on the payroll: on sabbatical, on maternity leave, long-term ill, in dispute. Some were even certified insane….

It turned out that we “controlled” a relatively small proportion of the money in the first place, particularly when it came to capex, but true even for opex. Far less than half. A big chunk of our budget related to “sins of the father”, the depreciation associated with capitalised investments from prior years. Some of the money related to long-term contracts where we had no swing room. A portion related to guaranteed bonuses of staff hired in prior years, and a similar portion to the “month 13” payments that were standard in one or more of the operating units. And then there were the things we were legally obliged to do, the projects that related to legal and regulatory requirements.

Then came our allocations and overheads: as the largest shared-service department, we received the lion’s share of the shared-service costs that had to be allocated out, like premises and heating and lighting and insurances.

That didn’t leave very much. Our so-called “discretionary” expenditure was less than 20% of the overall cake. Which made the very idea of a 50% cut interesting to say the least. But we did it, nevertheless.

In that process, I learnt a lot about allocations, augmenting what I’d already learnt in other companies by then. Here’s a sample:

  • One firm allocated all its IT costs according to the floor space consumed by each department, something that was easy to calculate. As a result, the investment bankers, the lightest users of technology at the time but the heaviest users of floor space, were charged the bulk of IT costs.
  • It made no sense to me, but apparently it was common practice for one cost centre to charge another. So IT costs, for example, went not only directly to the business units but also via other shared-service units. It depended on who did the “sponsoring”; this was probably a throwback to some shared-service manager who wanted his cost centre to look as big as possible, for his CV. But the convention stuck. As a result we had strange anomalies: while our IT costs remained the same, the charge that hit the business unit differed, based on the particular allocation routes and keys used (see the sketch after this list). What this meant in practice was that we “saved” the equities business more money if we took 100 people out of their direct support costs than if we took 120 people out of those who supported equities settlement, whose costs were routed through operations. The idea that two people earning the same money and seated next to each other represented different levels of saving took some getting used to.
  • Some labour was capitalised and some was not; if you reduced the headcount in areas where projects were capitalised, the savings took time to flow through. Capitalisation rules were also different for different classes of resource: it was assumed that contractors worked on projects 100% of their chargeable time, but permanent staff spent only 70% of their time on projects, or some such ratio. So the way the costs flowed looked different.
  • Shared-service allocations were an art in themselves. In at least one company I worked in, as a result of successive waves of layoffs, there were large swathes of unoccupied desks. Some of these unoccupied areas were islands in the middle of occupied areas, and soon became informal meeting areas. Lo and behold, the areas were chained off and declared verboten, on the basis that you couldn’t use them unless you were paying for them… even though the company was paying for them anyway.
  • In yet another place, we found out that it was more expensive for us not to book a meeting room than to book one…. the allocation key for unused meeting rooms hit us harder than the used version.
  • One of the odder effects we noticed was that of project delay. If you delayed the point at which you actually delivered something into production, you also delayed the point at which the backlogged work-in-progress started rolling out in capitalised form. [When we froze all code changes during the lead-up to the euro, and similarly for Y2K, the monthly charges from IT went down dramatically, even though actual expenditure increased…]
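
The routing anomaly in the second bullet is easier to see with numbers. A minimal sketch, with invented headcounts, unit costs and allocation shares:

    # Direct support staff are charged straight to the Equities business unit.
    # Settlement staff are charged to Operations first; Operations then allocates its
    # pool to the businesses by its own key, of which Equities bears (say) 60%.
    COST_PER_HEAD = 100_000            # illustrative fully loaded annual cost per person
    EQUITIES_SHARE_OF_OPERATIONS = 0.6

    def saving_to_equities(direct_cut, settlement_cut):
        """Reduction in the charge that finally lands on the Equities business unit."""
        direct_saving = direct_cut * COST_PER_HEAD
        routed_saving = settlement_cut * COST_PER_HEAD * EQUITIES_SHARE_OF_OPERATIONS
        return direct_saving + routed_saving

    print(saving_to_equities(direct_cut=100, settlement_cut=0))    # 10000000: cut 100 direct staff
    print(saving_to_equities(direct_cut=0, settlement_cut=120))    # 7200000.0: cut 120 settlement staff

    # Two people on the same pay, sitting next to each other, represent different
    # "savings" to the business depending purely on the route their costs take.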

By now you should have a feel for the level of complexity involved in allocating costs related to headcount and project and space and shared services in general, by accident and by design. I hope your experiences have been better than mine.

But all this pales into insignificance when you look at how IT infrastructure costs are allocated. Because now you have systems people interacting with accountants and usually a smattering of consultants as well, and between the three a truly Byzantine structure gets formed. When I looked at what happens in the allocation of data centre costs, hardware, storage, bandwidth, market data, and so on; when I looked at how per-processor licence costs were spread out; when I looked at how firewall and security costs were distributed across the organisation; when I saw how operations, maintenance, support and upgrade/fix costs were charged… I developed a bad case of spreadsheet vertigo.

These experiences have influenced me, affected me, perhaps even scarred me. In fact I think there’s only one form of “allocation” that scares me more than IT infrastructure allocation. And that will be the subject of a post at a later date.

If you have to develop a conventional representation of the costs of your cloud, it’s not cloud.

If you have to create complex allocation keys for your cloud, it’s not cloud.

Cloud is when what you see is what you get, in the context of billing and payment.

Which is why I find all this talk of “private cloud” odd. By electing to retain hardware capital expenditure, by choosing to continue with associated maintenance and upgrade costs, by voting to stay captive within the prison of the related processor-driven licensing models, people are in effect choosing to stay in the world of complex cost allocation models.

Such cost allocation models are part and parcel of why firms find it hard to be agile, to be responsive to change.

In current economic conditions, business agility is no longer a nice-to-have, it’s a must for survival.

Companies that are “born cloud” have this in their DNA; others will have to evolve this capacity, and evolve it quickly.

It’s a tough world out there.

Thinking about change

All projects involve change, an outcome of some sort that can be measured as a difference between the initial state and the end state of something.

All change involves risk. At a level of abstraction, project management may be seen as the means by which something is progressed from initial state to end state while mitigating the risks and while staying within given parameters of time, quality and cost.

For many years I worked in the banking sector, sometimes indirectly, sometimes directly. When I was at Dresdner Kleinwort, we “froze” the systems estate in the lead-up to the euro and to the Year 2000.

Nothing moved.

And nothing broke.

And no progress was made.

It used to be said that nothing is certain but death and taxes.

For some time now, there has been a third.

Change is now a constant. It may sound trite and soundbitey, but that does not alter the fact.

IT departments the world over have grappled with change all their lives, even when they masqueraded under names like MIS and DP. The quantum of change may have varied; the ratio of investment in change (as compared to investment in improving the status quo) may have varied; but change happened nevertheless.

Some changes are cultural, transformational, real shifts. Some changes are global, some sectoral, some geographical, some restricted to a given company or even department. Much has been written about change and the management of change. Much has been written about the agents of change. Much has been written about the toxins that emerge when complex systems are placed under the severe stress of change, and how to handle those toxins.

Over the years, people have learnt a lot about IT systems and change. How the change has to involve people, process and systems. How the change process needs to be designed with the right communications and training plans, so that the change is actually sustainable, and sustained.

This post is not about any of this. Or perhaps it’s about all of this.

The IT industry has always been about change. About progress. Quite often, the material value of the progress was intercepted by intermediaries rather than made available to end-customers. But there was always change. And value created by the change.

And resistance to change. Particularly in the enterprise.

Direct dial phones in the early 1980s. PCs in the mid 1980s. Nonproprietary “open” systems in the late 1980s, along with outsourcing. Internet connections in the early 1990s. The web a couple of years later, along with offshoring. Mobile phones around the same time, the mid 1990s. Web mail a few years later. Java, Linux, opensource software in the late 1990s; push mail around the same time. The cloud in the early 21st century. Social software a few years later. Tablets and touch more recently. Every one of these changes was vehemently opposed by the immune system of the enterprise, playing out the same cards in the same sequence: it’s not secure; it’s not robust; it’s too expensive to change; it breaks regulations. The same objections, in the same order.

Technology adoption tends to happen in three phases: substitution, increased use, embedded and differentiated use. So there is usually a problem to solve, something that is currently being done some other way, something that will be substituted and come to an end of its life. So there is usually an “incumbent”, a way of doing something, either as a formal function or as a workaround. And people are invested emotionally in that incumbent. [Especially those whose livelihoods rely on that incumbent].

Over a decade ago, Clayton Christensen set out the reasons for this incumbent reaction in The Innovator’s Dilemma, and continued with the theme in the rest of the series.

More recently, as I’ve seen the misinformation and disinformation thrown around about the cloud, I’ve been thinking harder about the decision process within organisations, and how incumbents mangle and mutate those processes. Much of what I saw reminded me of the core thesis in Pip Coburn’s excellent The Change Function, which broadly states that technology change projects succeed if and only if two conditions are met: there is a clear problem to solve; and the perceived pain of adopting the change is less than the perceived pain of staying with the status quo.
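
One crude way to restate that condition, purely as an illustrative sketch (the difficult part in real life is that both quantities are perceptions, not measurements):

    # A toy rendering of the Change Function's adoption test.
    def change_happens(clear_problem, pain_of_adoption, pain_of_status_quo):
        """Change succeeds only if there is a real problem to solve AND the
        perceived pain of adopting the change is lower than the perceived
        pain of carrying on as before."""
        return clear_problem and pain_of_adoption < pain_of_status_quo

    # The incumbent playbook (not secure, not robust, too expensive, breaks regulations)
    # works by inflating the perceived pain of adoption, not by changing the economics.
    print(change_happens(True, pain_of_adoption=8, pain_of_status_quo=5))   # False
    print(change_happens(True, pain_of_adoption=3, pain_of_status_quo=5))   # True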

It’s now almost a year since I joined Salesforce.com, an incredibly exhilarating time, frenetic yet ultimately very fulfilling. Because I now see the possibility that end-customers will actually see the benefits of technology advances affect their wallets, actually put money in their hands, much like Skype did for long-distance telephony. We’re seeing the price of commodity infrastructure, both hardware as well as software, drop precipitously; and, unlike the past, we’re seeing those price changes benefit the customer.

More precisely, those customers who take advantage of the progress; for there are always some who buy the incumbent argument on security or performance or robustness.

The effect of this is to buy time for the incumbent; often, this time is used to influence regulators in order to buy more time. And the customer loses out.

Which is why, for the last three months or so, I’ve been spending time thinking about all this.

And I’ve come to realise something, something I thought I’d already learnt and internalised, but obviously something I have to keep learning.

The cloud is not just about flexibility of access to compute power and storage and bandwidth, or about avoiding the thankless tasks of software installations, maintenance and upgrades; mobile is not just about ubiquity of access; cloud and mobile, together, are not just about the ability to “shift time” and “shift space”; social is not just about getting closer to the customer, about valuing relationships and capabilities; open is not just about the transformation of innovation, about partnering, about collaboration across boundaries.

The cloud paradigm is about all of this.

And about one more thing.

The capacity to change. Designed as an integral function. Native.

Changing capacity, scale, coverage, product set, devices, whatever. The cloud is about launching products, scaling them up, scaling them down, discontinuing them. The cloud is about entering… and exiting… markets. The cloud is about delivering services to the device of choice, even if that device didn’t exist when the original design was made.

The cloud is about change. Not about the steady state.

IT before the cloud was all about preserving and maintaining the steady state. And that’s why so many projects failed, and will continue to fail. A conflict of philosophy, as the agents of change try to batter down the walls of the mechanisms implemented to protect against change.

The monolithic systems of the past, largely concentrated on the back office, were built to achieve entirely different objectives: stable, repeatable processes executed at the lowest cost possible, designed to rebuff change.

The cloud is about change.

If you don’t value the ability to change, if you feel you don’t need to change, and change rapidly, then you’re not going to value the cloud. Because your perceived cost of changing will exceed the perceived benefit.

Soon, TCO calculations will include the change premium, the cost of responding to change in market conditions and needs.

Soon.

But before that, a number of firms will die. Because of their inability to change in time.

“A magazine is an iPad that does not work”

I saw this video earlier today. I watched it again. And again.

I guess it may turn out that the whole thing was fabricated, that what I watched was an illusion. We live in times where such things are possible.

If you ask me, the video is real. But I’m no expert when it comes to declaring authenticity of such things.

But you know something? I don’t care whether the video was impromptu, staged or otherwise contrived technically. What matters is the message.

The iPhone/iPad generation will have such different views about everything around them as they grow up. Not just about the way they engage with information, or the way they make use of it.

Old fogies like me are just getting used to using terms like visitor and resident rather than native and immigrant; I’m lucky, I have three children who keep me in touch with the millennials.

I guess I have to rely on my grandchildren to teach me all about the iPad Generation. And, looking at this video, I am so looking forward to it.