Thinking lazily about notifications and alerts: Part 2

This is the second in a series on notifications and alerts, building on what I started sharing earlier today, as promised.

First, a musical interlude.

Someone’s knocking at the door, somebody’s ringing the bell / Do me a favour / Open the door / And let them in.

Mum, the kettle’s boiling / Daddy, what’s the time / Sis, look what you’re doing / Can’t you see / The baby’s crying

We get used to receiving and processing notifications while we are still children. Doorbells ringing. Kettles boiling. Hearing footsteps approach, learning which ones are friendly, recognising the patterns made by parents and siblings.

And yet the one I remember most vividly is to do with what’s exemplified in the photo below:


[My thanks to VikalpSangam whose photo of Mohan, above, helps make my point].

It’s a very strong childhood memory, one that is almost as strong as sensing the presence or return of a parent. You see, I wasn’t much of a sleeper. [Never was. And now that I’m approaching 60, it looks like I never will be. The power of habit]. As I lay awake, tossing and turning in the sweltering heat of a Calcutta night, I’d hear a strange sound. A sound I came to treat as a friend. The sound made when a stout stick gently hits a lamppost. A strange sound indeed.

We used to live on a street called Hindustan Park in the 1960s, in Ballygunge in Calcutta. I don’t know the precise history behind the phenomenon, but what I can remember is this. The local darwans, who performed roles of building manager, maintenance man and security guard, used to take turns to walk around the neighbourhood while their colleagues dozed. The one doing the walking would signal his presence and doing of the rounds by twanging the occasional lamppost with his danda, his lathi.

It seemed to me that his lamppost-striking action achieved many outcomes: it alerted his colleagues that he was doing his rounds; it probably alerted would-be burglars as well, but more of that in another post; most importantly, it made me feel that God was in his Heaven and that All was Well with the World. It wasn’t just about presence being signalled over distance, it was about the sense of security implied by that presence.

Which brings me to the first class of notification: All is Well.

There’s a rhythm, a pulse, a cadence, to the All is Well notification. It’s a repeating signal. It’s like hearing the sounds a baby makes while asleep. It’s like seeing an ECG at a hospital bedside. There is no need for alarm, no warning threshold has been breached, no action needs to be taken. But its absence is often a signal for action, for investigation.
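For readers of a software bent, this is the heartbeat, or watchdog, pattern: the signal itself never alarms, and it is silence that calls for investigation. A minimal sketch (all names and thresholds here are hypothetical, purely to illustrate the idea):

```python
import time


class HeartbeatMonitor:
    """Watches a periodic 'All is Well' signal.

    The signal never breaches a threshold; only its absence matters.
    Illustrative sketch: names and grace factor are hypothetical.
    """

    def __init__(self, expected_interval_s: float, grace_factor: float = 3.0):
        self.expected_interval_s = expected_interval_s
        self.grace_factor = grace_factor
        self.last_beat = time.monotonic()

    def beat(self) -> None:
        # Called each time the periodic signal arrives --
        # the twang of the lathi on the lamppost.
        self.last_beat = time.monotonic()

    def needs_investigation(self) -> bool:
        # Quiet for too long? That, not the sound, is the call to action.
        silence = time.monotonic() - self.last_beat
        return silence > self.expected_interval_s * self.grace_factor
```

The design point is that the monitor carries no notion of "bad" values at all; its only job is to notice when the comforting rhythm stops.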

As the number of sensors continues to grow exponentially, and as we get better at joining the data collected and creating value via insights, we will learn to build baselines for many such things. Initially these baselines may well pertain to single senses, but as we learn and adapt we will build multisensory baselines as well. We will describe whole environments in multisensory ways: a child’s bedroom; a person working out in the gym; a restaurant kitchen in full flow; a factory floor full of robots; a street with a mixture of driven and driverless vehicles. For each of these, we will have established when All is Well: the temperature, the energy consumption, the heart rate, the breathing sounds, the ambient noise, whatever.
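One way to picture such a baseline in code: learn a normal band per sensor from past readings, then declare the whole environment "well" only when every sense sits inside its band. This is a deliberately naive sketch (real baselines would be time-aware and multivariate, and every name below is hypothetical):

```python
from statistics import mean, stdev


class Baseline:
    """Learns a per-sensor normal range, then checks new readings against it.

    Illustrative only: a toy stand-in for the multisensory baselines
    discussed in the text.
    """

    def __init__(self, k: float = 3.0):
        self.k = k  # how many standard deviations still count as 'well'
        self.samples: dict[str, list[float]] = {}

    def observe(self, sensor: str, value: float) -> None:
        # Accumulate history during the learning phase.
        self.samples.setdefault(sensor, []).append(value)

    def all_is_well(self, readings: dict[str, float]) -> bool:
        # Every sense must sit within its learned band for the whole
        # environment (bedroom, kitchen, factory floor) to read as well.
        for sensor, value in readings.items():
            history = self.samples.get(sensor, [])
            if len(history) < 2:
                return False  # no baseline established yet
            mu, sigma = mean(history), stdev(history)
            band = self.k * max(sigma, 1e-9)  # avoid a zero-width band
            if abs(value - mu) > band:
                return False
        return True
```

Note the asymmetry: a single sense out of band is enough to withdraw the All is Well, which matches the intuition that wellbeing is a property of the whole environment, not of any one reading.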

It’s important to distinguish the All is Well notification from all others; I think it would be a mistake to assume that people only want a variant of “management by exception” reporting. It’s like the mother wanting to check that the child is still alive, asleep, relaxed. There’s a wellbeing signal, a Linus’ security blanket involved, and this should not be confused with exception management.

Right now it may not be obvious why we should concern ourselves with the All is Well notification in the context of enterprise software. But I think it’s only a matter of time. One of the implications of hyperconnectedness appears to be that by the time we find out something’s wrong, it’s too late to avoid damage. Our early warning systems will have to become more sophisticated in order to deal with the problems of connectedness.

It’s worth taking a leaf out of David Agus’s book in this context. Marc Benioff introduced me to David some years ago, and I found his line of thinking very instructive, concentrating on wellness rather than illness. The human body is just one great example of a complex adaptive system in operation, and there is much we can learn from people like David as a result; it is then up to us to adapt that learning to the enterprise context. With the Second Machine Age of Brynjolfsson and McAfee now upon us, we have to get a move on in understanding how to keep notified of a state of wellness at work and at play, as collectives and as individuals.

So there’s a lot of work to be done in fleshing out the All is Well notification. How to form the new baselines. How those baselines move from being single-sense to multisensory. The role of time series in all this. The increasing march of robotics, of augmented reality, of hybrid operating environments. The likely arrays of unintended consequences we will face as we go through the learning: the world-ruled-by-algorithm issues identified by people like Kevin Slavin, the problems caused by poorly designed filters as described by people like Eli Pariser, modern versions of Asimov’s Three Laws. We have all that and more to face and to adapt around.

That’s only the beginning, as we then learn more about which notifications to receive on which devices, and when; how those notifications will announce themselves when they arrive; when the receipt of a notification has legal standing.

It’s only the beginning.

In my next post I shall be dealing with the next two classes of notification: the Houston, We Have a Problem class and the I Am Here class. After that I shall try and wrap up the remaining classes of notification quickly, so that I can concentrate on the filtering/subscription processes.

Some of you have suggested that I should hold these thoughts and write a book around them. I’d rather share and learn via places like this one here, even if the conversations are with just a small number of people. It’s not that I don’t want to write a book. I do and I will. Sometime. But not about this. More to the point, I want things like this to be discussed openly so that we can all learn. Maybe I have the most learning to do. I will find out soon enough, even if only in private.

Feel free to engage via Twitter or LinkedIn or Facebook or even here, if it’s not too retro for you. Use e-mail only if you absolutely must, we haven’t had that spirit here since 1969.


3 thoughts on “Thinking lazily about notifications and alerts: Part 2”

  1. It is always enjoyable to read everything you write, but for me it was even more enjoyable to read your thoughts about our senses and how their synergy (when in action), or the awareness of that synergy, creates an “All is Well” atmosphere. For me, it’s like you are able to translate (in a much better and wiser way, of course) what I (we, in general) sometimes feel but can’t put into words. (Which might be a lack of ‘teamwork’ between my senses..)
    Can’t wait to read the next two classes of notification!
    Thank you!

  2. JP – another great post, thanks.

    With today’s ability to track and profile ‘normal’ behavior, it seems the ‘market’ for AIW is growing. AIW can do more harm than good if inaccurate, but we’re improving accuracy significantly by bringing together disparate data sources. Interesting times.

    I couldn’t help but correlate AIW to risk. AIW seems most valuable in systems where more risk is present due to either high impact or high likelihood of failure (flight systems), and systems where remediation tactics are less effective, and there is need for early detection (cancer screenings). Impact of failure being the qualifier for implementation, and likelihood determining the alert’s value over time.

    I’ve experienced this w/connected smoke detectors. When I first installed them, I’d look for the AIW signal every night. That faded a few months in when “trust” set in. After a few malfunctions, I started checking again. No reminder to check them. I just do. Eventually I’ll stop checking, when the perceived likelihood of failure drops below my brain’s risk threshold once again. I expect the ‘stop checking’ decision will be as subconscious as the ‘do check’ decision. Is the brain an example of the evolved “filter” you describe in Part 1?

Let me know what you think