Musing about the cloud and enterprise cost allocation

Over a decade ago, after a couple of years as Deputy CIO, I was appointed Global CIO of Dresdner Kleinwort in May 2001. Times were hard, and my brief was harder still: to reduce technology capital expenditure and operating expenses by 50% within eighteen months, while providing “leadership, stability and continuity” to the organisation. At the time the IT department was nearly 2000 strong and spent around £700m pa in capex alone. I was surrounded by many very talented people, and, largely due to their ingenuity and actions, we delivered the goods. I’m privileged to have worked with them, and even more privileged to be in touch with so many of them a decade later.

Some of the things we did were standard, like shutting down remote offices when we were retracting our presence from those regions, renegotiating contracts with core suppliers, stopping activities that were yesterday’s necessities but today’s luxuries, that kind of thing. A few were more non-standard: shutting down our offshore operations in India and Eire, changing our hiring policy to stop hiring laterals and increase graduate intake, and establishing a formal commitment to open source and to start-ups.

But it all began with our trying to understand our cost and allocation structures. Easier said than done. It was not enough for us to save the money; we had to save it in the right places. We had to reduce it very heavily for advisory services, heavily for equities-related asset classes and services, and less so for debt- and treasury-related activities. Which meant that we had to understand how our costs flowed from IT to each business.

For most of my life, I’ve worked in very large organisations, often as an “official maverick” but nevertheless part of an extensive and complex fabric. And for most of my life, I’ve been astounded by the incredible difficulty I’ve had in getting two questions answered: What do I spend? How many people do I have? Over the years, as my career developed in its own serendipitous way, I found myself in charge of larger and larger departments with bigger and bigger budgets. And answering these two questions became harder and harder.

Perhaps I should have known better. When I was in my teens, my father used to say that the only “truth” on a balance sheet was the cash position; everything else was a “conventional” representation of information. If you didn’t understand the conventions being followed, you had no ability to understand the information presented.

So there we were, at Dresdner Kleinwort, trying to understand how much we spent, what we spent it on, what was discretionary, what was not, why. Trying to understand how many people we had, who was permanent, who was not. Trying to understand the people we had who “didn’t exist”, because they were part of a service contract; they took up space, had kit, had desks, had phones and badges, but weren’t part of our headcount. Trying to understand and appreciate the people who weren’t there but were on the payroll: on sabbatical, on maternity leave, long-term ill, in dispute. Some were even certified insane….

It turned out that we “controlled” a relatively small proportion of the money in the first place, particularly when it came to capex, but true even for opex. Far less than half. A big chunk of our budget related to “sins of the father”: the depreciation associated with capitalised investments from prior years. Some of the money related to long-term contracts where we had no swing room. A portion related to guaranteed bonuses of staff hired in prior years, and a similar portion to the “month 13” payments that were standard in one or more of the operating units. And then there were the things we were legally obliged to do, the projects that related to legal and regulatory requirements.

Then came our allocations and overheads: as the largest shared-service department, we received the lion’s share of the shared-service costs that had to be allocated out, like premises and heating and lighting and insurances.

That didn’t leave very much. Our so-called “discretionary” expenditure was less than 20% of the overall cake. Which made the very idea of a 50% cut interesting to say the least. But we did it, nevertheless.

In that process, I learnt a lot about allocations, augmenting what I’d already learnt in other companies by then. Here’s a sample:

  • One firm allocated all its IT costs according to the floor space consumed by each department, something that was easy to calculate. As a result, the investment bankers, the lightest users of technology at the time, were charged the bulk of IT costs.
  • It made no sense to me, but apparently it was common practice for one cost centre to charge another. So IT costs, for example, went not only direct to the business units but also via other shared-service units. It depended on who did the “sponsoring”; this was probably a throwback to some shared-service manager who wanted his cost centre to look as big as possible, for his CV. But the convention stuck. As a result we had strange anomalies: while our IT costs remained the same, the charge that hit the business unit differed, based on the particular allocation routes and keys used (see the sketch after this list). What this meant in practice was that we “saved” the equities business more money if we took 100 people out of their direct support costs than if we took 120 people out of those who supported equities settlement, whose costs were routed through operations. The idea that two people earning the same money and seated next to each other represented different levels of saving took some getting used to.
  • Some labour was capitalised and some was not; if you reduced the headcount in areas where projects were capitalised, the savings took time to flow through. Capitalisation rules were also different for different classes of resource: it was assumed that contractors worked on projects 100% of their chargeable time, but permanent staff spent only 70% of their time on projects, or some such ratio. So the way the costs flowed looked different.
  • Shared-service allocations were an art in themselves. In at least one company I worked in, as a result of successive waves of layoffs, there were large swathes of unoccupied desks. Some of these unoccupied areas were islands in the middle of occupied areas, and soon became informal meeting areas. Lo and behold, the areas were chained off and declared verboten, on the basis that you couldn’t use it unless you were paying for it…. even though the company was paying for it anyway.
  • In yet another place, we found out that it was more expensive for us not to book a meeting room than to book one…. the allocation key for unused meeting rooms hit us harder than the used version.
  • One of the odder effects we noticed was that of project delay. If you delayed the point at which you actually delivered something that went into production, then you delayed the point at which backlogged work-in-progress would start rolling out in capitalised form. [When we froze all code changes during the lead-up to the euro, and similarly to Y2K, the monthly charges from IT went down dramatically, even though actual expenditure increased…]
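
To make the route-and-key effect concrete, here is the sketch promised above. It is entirely hypothetical: the cost pool, the keys and the figures are invented, and nothing here reflects Dresdner Kleinwort’s actual allocation model. It simply shows how the same pool of IT cost can hit a business unit at different sizes depending on the route it travels, and why headcount savings of equal payroll value can look unequal to that business.

```python
# Hypothetical allocation sketch: all figures, keys and percentages are invented.

IT_COST_POOL = 10_000_000  # annual IT cost to be allocated (GBP)

# Route 1: charge Equities directly, on a usage-based key.
direct_key = {"equities": 0.40, "debt": 0.35, "advisory": 0.25}
direct_charge = IT_COST_POOL * direct_key["equities"]

# Route 2: half the pool is first allocated to Operations (a shared-service
# unit) on one key; Operations then re-allocates its costs to the businesses
# on a different key. The remainder is still charged direct.
ops_share_of_it = 0.50
ops_key = {"equities": 0.55, "debt": 0.45}
via_ops = IT_COST_POOL * ops_share_of_it * ops_key["equities"]
remaining_direct = IT_COST_POOL * (1 - ops_share_of_it) * direct_key["equities"]

print(f"Route 1 (all direct):     Equities is charged £{direct_charge:,.0f}")
print(f"Route 2 (via Operations): Equities is charged £{via_ops + remaining_direct:,.0f}")
# Same £10m of IT cost; a different charge lands on Equities purely because
# of the route and the keys used.

# The headcount anomaly: cutting 100 people charged direct "saves" Equities
# more than cutting 120 people whose costs travel through Operations,
# because only Operations' key-share of the routed cost reaches Equities.
cost_per_head = 100_000
saving_100_direct = 100 * cost_per_head                        # all of it hits Equities
saving_120_routed = 120 * cost_per_head * ops_key["equities"]  # only 55% reaches Equities
print(f"Cut 100 direct heads: Equities sees £{saving_100_direct:,.0f} of saving")
print(f"Cut 120 routed heads: Equities sees £{saving_120_routed:,.0f} of saving")
```

Swap the keys or the routes and the “saving” changes again, while the underlying cash spend does not move at all.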

By now you should have a feel for the level of complexity involved in allocating costs related to headcount and project and space and shared services in general, by accident and by design. I hope your experiences have been better than mine.

But all this pales into insignificance when you look at how IT infrastructure costs are allocated. Because now you have systems people interacting with accountants and usually a smattering of consultants as well, and between the three a truly Byzantine structure gets formed. When I looked at what happens in the allocation of data centre costs, hardware, storage, bandwidth, market data, and so on; when I looked at how per-processor licence costs were spread out; when I looked at how firewall and security costs were distributed across the organisation; when I saw how operations, maintenance, support and upgrade/fix costs were charged….. I developed a bad case of spreadsheet vertigo.

These experiences have influenced me, affected me, perhaps even scarred me. In fact I think there’s only one form of “allocation” that scares me more than IT infrastructure allocation. And that will be the subject of a post at a later date.

If you have to develop a conventional representation of the costs of your cloud, it’s not cloud.

If you have to create complex allocation keys for your cloud, it’s not cloud.

Cloud is when what you see is what you get, in the context of billing and payment.

Which is why I find all this talk of “private cloud” odd. By electing to retain hardware capital expenditure, by choosing to continue with associated maintenance and upgrade costs, by voting to stay captive within the prison of the related processor-driven licensing models, people are in effect choosing to stay in the world of complex cost allocation models.

Such cost allocation models are part and parcel of why firms find it hard to be agile, to be responsive to change.

In current economic conditions, business agility is no longer a nice-to-have, it’s a must for survival.

Companies that are “born cloud” have this in their DNA; others will have to evolve this capacity, and evolve it quickly.

It’s a tough world out there.

27 thoughts on “Musing about the cloud and enterprise cost allocation”

  1. Thanks Ken, why don’t you share some of those war stories, perhaps even here in a comment if you want…. I’d love to hear them

  2. When I became a Head of Something in the department you describe, I thought that everybody else knew all this stuff and it was just me who was struggling. A primer like this would have been nice to have around for n00b managers like me.

    I shall bookmark this and refer my friends and acquaintances to it if they are ever foolish enough to contemplate a career in corporate technology management.

  3. Dom, I wish I had that primer. These are the things we discovered as we built and executed the plan. The anomalies popped out of the woodwork as we put the whole system under stress.

    I would probably have written a primer if we had to go through it again. But if you remember, the whole point of the way we did it was to ensure it was a one-off, not a repeated death-by-thousand-cuts approach.

    Each piece of our strategy is worth a separate post sometime: how we hired, how we experimented with startups, how we pushed open source, the unusual step of shutting down WebTek and DreTec, moving away from a central architecture department, the lot.

  4. I really, really agree with the last part in particular – private cloud is, to me, an oxymoron, and a LOT of people really don’t get it, JP.

  5. “Which is why I find all this talk of “private cloud” odd. By electing to retain hardware capital expenditure, by choosing to continue with associated maintenance and upgrade costs, by voting to stay captive within the prison of the related processor-driven licensing models, people are in effect choosing to stay in the world of complex cost allocation models.

    Such cost allocation models are part and parcel of why firms find it hard to be agile, to be responsive to change.

    In current economic conditions, business agility is no longer a nice-to-have, it’s a must for survival.

    Companies that are “born cloud” have this in their DNA; others will have to evolve this capacity, and evolve it quickly.”

    Very succinctly put JP… and I wonder how often vendors pushing ‘Private Cloud’ solutions are challenged with this fundamental truth?

  6. Thanks Ian. I’ve seen a lot of clever marketing in my time…. Yet the baldfaced attempt to rename “data centre” as “private cloud” took my breath away. Thankfully most people see it for what it is….

  7. Ken, I never underestimate incumbent power, particularly in monopoly or oligopoly markets. Even after reading Christensen time and again, I have to marvel at their scheming.

    I had gotten used to the formula. Step one, claim the disruptor is insecure. Step two, suggest they won’t perform robustly enough. Step three, work with incumbent industry bodies and even the regulator at times, to claim noncompliance with something or the other. Step four, signal an enormous fictitious expense associated with switching. Step five, make it as hard as possible to switch.

    And if all else fails, claim the disruptor is doing something UnAmerican. You’d be surprised how many people believe that open source is fundamentally UnAmerican.

    The sadness is that so many customers fall for those arguments, and then they are the only ones who pay for it…. The incumbent has managed to eke out an existence for a decade or two longer, so they’re happy….

  8. this is such an excellent post. i’m floored. when one strips the complexity from complexity by being overreductive one introduces both coercion and distortion. in large orgs, a lack of property rights with respect to one’s own work creates an environment relative to one’s own position that is analogous to how Hernando de Soto describes poor economies with no property rights where everything is fragile and negotiable and tiny mincing steps and inertia are the rule not the exception. when the management of a large org degrades into a Comintern following five-year plans, then the org is too big. similarly, when individuals construct complex niches which will collapse if they’re let go or fired, then the org is unable to negotiate solutions that respect the value of human capital. one has to realize a large org isn’t an org, fundamentally. it’s an economy. and there are many economies that bring substandard performance.

  9. Agree about all the people costs.

    For the IT costs, in what way do these cost allocations change then? In the AWS world you still buy capacity: server cycles, storage, bandwidth, and I imagine they will still get cost-allocated in the time-honored ways, no? For sure the CAPEX cost goes away and there is a more direct link to actual costs, but I am sure “margin” will be added before the actual user/purchaser of the service receives it. In the world of PaaS it is obfuscated from the user; is it to that world you are referring when referencing cloud?

    Be careful on the private cloud assertions, it depends on the fluff you consume from marketing departments. Private elastic clouds exist and are very useful [not as useful as public, but still useful]. A regular storage, network and server stack re-branded as private cloud is fluff.

  10. Love the cloud and it is where people need to go, but having sensitive organizational data in a public cloud is not an option for folks in regulated industries or certain countries. Everything has levels of complexities and anyone championing a cloud-only solution is also oversimplifying in a way that’s not grounded in reality.

  11. Mark, I come to San Francisco every now and then, and when I leave for the airport, I get to travel in the high-occupancy lane. Coming from Calcutta, my first shock was that 2 people in a car constituted “high occupancy”…. but the bigger shock was seeing the number of single-occupancy cars alongside me. Now when someone says “private cloud” I think “single occupancy car”. Not shared with anyone. Period. And it doesn’t matter who’s marketing to whom, a single-occupancy vehicle is a single-occupancy vehicle. Not shared-occupancy. Not cloud.

  12. thanks Oliver, still mulling over whether I write a follow-up. The comments here and elsewhere have been stimulating, both in agreement and in argument.

  13. There will be reasons for not-cloud. Sometimes it’s regulation. Sometimes it’s latency. Sometimes it’s the cost of transformation, especially if the intended life of the application is short. So I’m not suggesting cloud-only. But cloud-mainly for sure. And when it’s not cloud, let’s have real reasons, not the lies that incumbents spread.

  14. “And it doesn’t matter who’s marketing to whom, a single-occupancy vehicle is a single-occupancy vehicle. Not shared-occupancy. Not cloud.”

    Ah, now, come on, JP. Whilst I agree in general with your assertions about the “lies” of “private cloud” (and sometimes anger Warlords in my organisation by doing so), the line I quote here goes a small step too far. In huge multi-nationals, different business units / regional organisations are tenants (“occupants”), and moving to a model that allows the IT department to put them on shared (as opposed to dedicated) stuff is new — and a vast improvement. And if that IT department uses something like an internal deployment of OpenStack to do that, and the entire organisation is suddenly profiting for the very first time from a degree of shared occupancy, that’s when I find the quoted assertion to be too strongly made. In fact, if I might — that’s the real genius of your analogy to commuter lanes: as a manifestation of a small amount of shared occupancy, they’re an incremental improvement over single occupancy cars. So, properly done private clouds. But they still don’t begin to approach the efficiencies of the much higher degree of shared occupancy that proper public transportation (stroke cloud) provides.

  15. As someone currently struggling with our infrastructure department and their interpretation of private cloud, this is a reminder that sane(ish) people understand :)

  16. An ex-Verizon IT friend authored an application for who-reported-to-whom-and-when, for commissions and cost allocations. It’s remarkable that enterprises don’t have open-source apps for something this essential… (Funny thing is, he still gets calls to help with settings to relocate the app in different regions.)

  17. Mark, I think I will have to write another post to focus on why “private cloud” makes no sense to me. I’ve obviously not done a good enough job here.

  18. Good to hear from you, Steve. Looks like the football’s going well for you as well! Wait till you see my next post on “private cloud”

  19. Very interesting post, thanks a lot. 10 years ago I became the CIO of Bouygues Telecom with a similar mandate to reduce cost (although less strikingly) while growing capabilities and performance. Understanding costs was the beginning of a multi-year journey… one of the things that struck me most at that time was the systemic nature, the inertia, and the memory of all decisions made in the past, something that you emphasize rightly. Since then I have tried to evangelize about the systemic nature (and shared responsibilities) of IT costs in many forms: teaching at the university, writing books, talking outside the company (no one is a prophet in their own land). More recently, I have been looking at sustainability… including in a rather formal way – cf. http://informationsystemsbiology.blogspot.com/2009/06/sustainable-it-budget-in-equation.html
    This is much less subtle than the comments you shared (which I’ll reuse with my students), but still demonstrates a few obvious things ….

  20. Thanks Yves, not just for sharing your post with me, but also the comments on your post, and the references they provided.

  21. A bit late to the party here as I have just read this. This has really opened my eyes to what a modern CIO has to face. It makes my job of running a tech company sound easy in terms of budgets. I would be interested in your views of how IT vendors and resellers best present ROI and TCO models to corporate IT in light of what you say. Or is that an impossible task?
