The best way to predict the future is to prevent it

So said Alan Kay, satirising something he said maybe three decades ago. (While at Xerox PARC he is remembered as saying “The best way to predict the future is to invent it.”) He was speaking at CIO 08: The Year Ahead, a conference I was at last week at the Hotel Del Coronado in San Diego. In some ways his talk was an updated version of this one given by him twenty years ago; read it and see what you think, it will give you a flavour of how he thinks.

Some of the quotes make interesting reading, particularly this trio from Marshall McLuhan:

I don’t know who discovered water, but it wasn’t a fish.

Innovation for holders of conventional wisdom is not novelty but annihilation.

We’re driving faster and faster into the future, trying to steer by using only the rear-view mirror.

There were a number of interesting sessions at the conference; I was pleasantly surprised, given my predilection for somewhat less “formal” conferences. I had the opportunity to spend some time talking to Alan later, and there were a number of things he said that are worth thinking about.

He spent some time working through what he meant by “preventing” the future, how corporations now have people without the domain knowledge to make the decisions they are otherwise empowered to make. Interesting stuff, grist to the mill for a future post. For now, I’d like to share something else. Three things he said really stuck with me.

The first was an assertion that innovation happens as a result of bringing together knowledge, IQ and point of view; that over the last three decades our society has tended to treat IQ as more important than knowledge or point of view; that as a result we have not really created very much, nothing really sustainable; instead, we have given in to pop cultures and pop processes, and so we build things badly, without really understanding scale.

[Hard-hitting stuff, uncomfortable stuff, but definitely worth thinking about. I was less convinced about his seemingly extending the arguments to opensource and to folksonomies. But then maybe I misinterpreted him. One way or the other, he was a challenging speaker.]

The second assertion was something along the lines of “Don’t worry about whether something is right or wrong, just try to find out what is going on”. The way I understood him, he was saying that we spend too much time analysing and “judging” what we see and hear and experience, and that as a result we don’t really understand what it is that we’re experiencing. That the process of judging happens too quickly, that we should try and detach ourselves from the judging process and instead just try to understand the “what”.

[It’s probably my anchors and frames and bias, but I thought he was saying something that resonated with what I think. For some time now I’ve been asserting that we should “filter on the way out, not on the way in”. And I guess he’s said it better than I could. Don’t decide whether something is good or bad, just try and experience that something, just try and figure out what it is. If enterprises took that stance towards opensource, towards social software, towards social networks, they might actually learn something. Instead they create arguments about just how many social networks can dance on the end of a pin…]

His third assertion was positively frightening. He asked something very simple:

How come there isn’t a Moore’s Law for software?

That felt good, just writing it. So I’ll repeat it. How come there isn’t a Moore’s Law for software?  The way Alan asked it, there was an underlying innuendo. That we were wrong about many things we’ve done in the past thirty years, in terms of networks, operating systems, programming languages, hardware, applications, the lot. That the way we built them was wrong, and that we continue to compound the error.

[This was a hard one for me. Was it time to tear everything up and start all over again? If we didn’t do it, would someone else come and do it for us? I began to wonder. Could an entire industry have a variation of the Innovator’s Dilemma?  Could I be in that industry right now?]

One thing was certain. We were not seeing a Moore’s Law operating in the world of software. What we were seeing was something quite the reverse, something possibly quite ugly.

All in all I had a really interesting time. I feel privileged, privileged to have met Alan, privileged to be in a job where I get the opportunity to think about things like this, and even the opportunity to do something about what I’m thinking about.

I’m particularly taken with his challenge on scale, his accusation that we don’t design things that really scale. I am reminded of my favourite definition of innovation, the one by Peter Drucker: “Innovation is a change that creates a new dimension of performance.” By that yardstick, just how much innovation has happened in the last decade?

I didn’t agree with everything Alan said. That’s not the point.

The point is that he knew things I didn’t know, that he’d learnt things I hadn’t learnt, and that he was willing to share them with people who bothered to ask. So thank you Alan.

24 thoughts on “The best way to predict the future is to prevent it”

  1. Regarding the lack of a Moore’s Law for software, I often consider the state of play in the 1970s, when one large computer could serve hundreds or even thousands of terminals with what we would today regard as minuscule resources, e.g. 16MB of RAM. And in comparing this to the current situation I wonder where we are heading. Has the luxury afforded via Moore’s Law for hardware made us lazy with our software, as we binge on seemingly endless resources? I’m undecided, and perhaps this is simply misplaced nostalgia fuelled by an infatuation with novel architectures from before my time.

    As for tearing everything up and having to start all over again: at a talk I attended earlier this year, Iann Barron, the founder of Inmos, suggested that sooner or later we will have to do exactly that. So whilst, in our endless quest for processing power, we have got round the limit on CPU clock speeds with dual-core, quad-core and similar CPUs, we will soon hit another limit: that of effective scaling in SMP systems (a rough sketch of this limit follows at the end of this comment).

    Inmos gave birth to the transputer in the 1980s, a novel CPU architecture designed from the start to scale. However, porting applications to such an architecture would be a major undertaking and would in some cases require a complete rewrite. The transputer ultimately failed, though largely not for technical reasons. It is ironic to think that such an architecture could have been our saviour. And it is likely that we will have to adopt a similar approach sooner or later, and at that point go through the pain of moving to a new model.

    I’ve just realised that I’ve dragged this a little off-topic, as the processing-power limit is about hardware. But it is closely linked with software architecture, and it is an example of where we may have taken a wrong turn or, to be blunt, taken the easy, or at least the familiar and comfortable, option.
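
    To make the SMP scaling point above a little more concrete, here is a minimal sketch of Amdahl’s law (my framing; the comment itself does not name it): however many cores you add, the speedup is capped by the serial fraction of the program.

        # Amdahl's law: speedup when `parallel_fraction` of the work spreads
        # across `cores` while the rest stays serial.
        def amdahl_speedup(parallel_fraction, cores):
            serial = 1.0 - parallel_fraction
            return 1.0 / (serial + parallel_fraction / cores)

        # Even with 95% of the work parallelisable, piling on cores flattens out.
        for cores in (2, 4, 8, 64, 1024):
            print(cores, "cores:", round(amdahl_speedup(0.95, cores), 1), "x speedup")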

  2. Andrew, I fear your view of the 1970s is a bit shy on context. Remember what was happening on all of those terminals. In most business situations the terminals did little more than display forms to be filled. In more “sophisticated” environments, like R&D shops, all interaction was through command lines, often in codes so obscure that every user needed a “cheat sheet” next to the terminal. Word processing, as we know it, did not exist; and text editing (with very few exceptions) was line-oriented (going back to the punched-card mentality). Most of the software being developed was not interactive, so the whole mode of use was still very much batch-oriented. The point of all this is that it is really easy to support “thousands of terminals” with “minuscule resources” if those terminals don’t do very much!

    I do not think that the richer opportunities for interaction have made us lazier. On the other hand those opportunities have allowed us to develop better “toys,” some of which are “real toys,” like games, and others just opportunities for play. Along with this we now have workplaces where there is less conscientiousness about the distinction between work and play, for a variety of reasons that have been discussed in this forum. My personal feeling is that our laziness has grown out of the fact that our feeling about work has grown so fuzzy. Today’s IT may have contributed to that, but it is only one of many factors. Another factor, appropriate to this particular post, is the growth of shoot-from-the-hip evangelists, whose blather can easily be reduced to nonsense by a few moments of sober reflection that no one seems to want to invest (possibly because of their preoccupation with their toys)!

  3. JP, I like the “We’re driving faster and faster into the future, trying to steer by using only the rear-view mirror” imagery.
    I have a favourite phrase myself to describe what’s required – and it is about looking ahead, and it’s gleaned from advanced motorcycle riding technique!
    “The further ahead you look, the faster you can go.”
    It avoids target fixation – staring at the problem right in front of you means you steer towards it instead of to where you want to end up.

  4. I think Alan is right about software, but not everyone is looking in the rear-view mirror. For example, check out Joe Armstrong’s thoughts (among others).

    For a sample of his thinking, listen to parts 1 and 2 of a chat here:
    http://channel9.msdn.com/ShowPost.aspx?PostID=351659

    This isn’t comprehensive but it does point to a place in the past where things forked a little. I also think Erlang as a language/paradigm offers some good insights into the future of programming.

    regards
    Al

  5. There *is* a Moore’s law on the size of software needed to do a function. In 1976 a word processor was around 20kB; in 1984, 2MB; in 1996, 500MB (1 CD); in 2000, 2.4GB (4 CDs for Office 2000). Pretty close to the hardware Moore’s law doubling every three years.
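
    As a rough check of the figures quoted above (the sizes are as quoted; the back-of-the-envelope arithmetic is only a sketch):

        import math

        # Word processor sizes in bytes, taken from the comment above.
        sizes = {1976: 20e3, 1984: 2e6, 1996: 500e6, 2000: 2.4e9}
        growth = sizes[2000] / sizes[1976]   # overall growth factor, about 120,000x
        doublings = math.log2(growth)        # about 17 doublings over 24 years
        print("implied doubling time:", round((2000 - 1976) / doublings, 1), "years")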

  6. Andrew, good point, though I think the spirit of Moore’s Law is more about capacity than about consumption.

    JP, this has me thinking a bit about what a “Moore’s Law” for software would really mean. Really, what does it mean to double software capacity?

    Anyhow, I put up a first post here: http://www.appistry.com/blogs/bob/architecture/a-moores-law-for-software-part-one-2/ and will continue a bit next week.

    Thanks for starting the discussion.

  7. The impact of Moore’s law on computer hardware is tremendous; devices (built using integrated circuits) are more feature-rich, as you can have more functional logic built into a single IC, and at a cheaper price. Integrated circuits are also more standardised: there is a data sheet that specifies exactly what each pin will do. I would say there is a very strong degree of standardisation and componentisation. This is probably due to constant investment and effort in research and development, and to developments in solid-state fabrication technology. A device designer will use a number of such ICs to design a hardware device.

    In software we probably lack the level of standardisation and componentisation that has been achieved in the hardware industry. The emergence of development tools has reduced the time it takes to develop software. However, there are multiple ways in which such components can be integrated to develop a custom application. A simple example is how to interact with a database: some tools provide custom-built utilities that you can use to interact with a database, whereas in some cases you can write lower-level code to interact with the database faster. What is the best way? The debate is open! (A sketch of both routes follows at the end of this comment.)

    In software there has been research on developing various tools and methodologies, but the focus on standardisation is perhaps less, so it is always an issue of integration and of making the software work faster and in a consistent way. A designer probably has wider design choices in the software industry because of the abundance of options and ways a particular thing can be done… this introduces subjectivity and makes software design and development more interesting, but it probably produces something that might not scale in future…

    SOA promises to address this, as we will move from custom application development to being consumers and providers of web services with standardised interfaces. A service provider will then have to ensure that it can scale, or else it will lose revenue. We have to wait and watch…
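
    Picking up the database example above, here is a minimal sketch of the two routes, using Python’s standard sqlite3 module purely for illustration; the insert_row helper is a hypothetical stand-in for a tool-generated utility, not any particular product’s API.

        import sqlite3

        # A hypothetical "tool-provided" convenience wrapper.
        def insert_row(conn, table, **fields):
            cols = ", ".join(fields)
            marks = ", ".join("?" for _ in fields)
            conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})",
                         tuple(fields.values()))

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

        # Route 1: the convenience wrapper, with less code but less visibility.
        insert_row(conn, "orders", id=1, amount=9.99)

        # Route 2: hand-written SQL, more typing but you see exactly what runs.
        conn.execute("INSERT INTO orders (id, amount) VALUES (?, ?)", (2, 19.99))

        print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # prints 2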

  8. JP, if you take the long view I’d argue that there is an equivalent to Moore’s Law in software. This law is that each year it becomes slightly easier to create a service that does something for a customer. Progress is slow – not exponential – and is driven by improvements in software design methods, increased modularity, the introduction of internet standards like XML, the development of the techniques behind Web 2.0 and the sharing of functionality; in the future it is likely to be driven by improvements in semantic technologies and AI.

    It’s easy to miss the improvement unless you stand back. But slowly, year by year, developing new services for customers is getting easier. A new start up can now create software for a service using a few people in weeks. When I first started in this industry we had huge teams working for years to provide pretty poor levels of functionality.

    And yes, I’d argue that this change is highly disruptive to much of our existing industry. We’re seeing traditional players like Microsoft worrying about the newer and more flexible companies that have emerged from the Web 1.0 era. And we’ll probably see some of the survivors from Web 2.0 challenging the Web 1.0 giants.

  9. David-
    I agree with your line of thinking, and think that the real metric for software is whether or not it’s able to utilize the capacity provided by the infrastructure. At any rate, I’m proposing this sort of thing as “Blaise’s Premise” here: http://www.appistry.com/blogs/bob/architecture/a-moores-law-for-software-part-two/

    I’d be real curious as to what you (and others in this thread) think of this notion. I tend to think that naming something, calling it out (so to speak), helps us focus on that as a goal.

  10. Pingback: ETSI 2.0 | Someone

  11. The question “Why is there no Moore’s Law for software?” might lead us to misinterpret what Moore’s Law means. Moore’s Law says nothing about the speed or capability of a CPU, only about the number of transistors available to hardware designers, and some hardware designers choose not to utilise those to the maximum degree in order to reduce cost. It says nothing about the productivity of those designers either; in fact the trend is for hardware design groups to grow in size in order to use those transistors, and to move from low-level design methods that give the fastest, most energy-efficient result to higher-level design methods that increase the speed of development at the cost of transistor density, CPU energy efficiency and CPU speed.

    A Moore’s Law for software would mean the amount of RAM or computing resources usable by a software product. I think the closest equivalent to Moore’s Law would be the growth in the size of the Microsoft operating system in lines of code. Like Moore’s Law, it says nothing about programmer productivity or the usefulness of the end product, only about the sheer size of the end product. The growth in the number of software designers working on a product, and the trend of moving to higher-level programming languages for more complex products, parallels what happened with Moore’s Law in the hardware industry.

  12. Pingback: Bad Simplicity

  13. There most certainly is a Moore’s law for software: the speed improvement in algorithms has outpaced the speed improvement in CPUs. See this article. (A toy illustration follows at the end of this comment.)

    You may be asking why there is no exponential speed-up in software development resource requirements (time/people). But then again, is there a speed-up in hardware development resource requirements?
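
    A toy illustration of the point about algorithms (my own example, not taken from the article referenced above): the same question answered by two algorithms, where the gap widens with input size no matter how fast the CPU is.

        import time

        # O(n^2): compare every pair of elements.
        def has_duplicate_quadratic(xs):
            return any(xs[i] == xs[j]
                       for i in range(len(xs)) for j in range(i + 1, len(xs)))

        # O(n): remember what has been seen so far.
        def has_duplicate_linear(xs):
            seen = set()
            for x in xs:
                if x in seen:
                    return True
                seen.add(x)
            return False

        data = list(range(3000))  # no duplicates, so both do their full work
        for fn in (has_duplicate_quadratic, has_duplicate_linear):
            start = time.perf_counter()
            fn(data)
            print(fn.__name__, round(time.perf_counter() - start, 4), "seconds")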

  14. Programming is communicating: from the programmer’s mind to the development software, then on to the CPU or a device on the motherboard. Since all of those targets are constantly changing, it is impossible for software to grow exponentially the way the hardware does. The hardware just has to grow the size and number of its computing elements, a simpler problem which scales well. The programmer’s ability to learn depends on extending his knowledge, and that is hampered by having to learn new skill sets constantly, so his skill set cannot scale exponentially. This constant learning of new programming skills creates the bottleneck. Remember, programming is a language-acquisition process, and it takes time to become truly proficient at each new language you learn. Until there is a static target (a final language) and learning can scale, the bottleneck will persist as the slowest link in the chain of exponential growth of programming skill sets.

  15. Pingback: INDEX mb
