I was in conversation with an old colleague, Sean Park, a few days ago; with a little bit of luck, we’ll be able to spend a little time together next week in San Francisco, at Supernova. During the conversation, this post by Chris Skinner came up.
First, a few disclaimers.
One, I am not against cyberlibertarians. I count many cyberlibertarians as my friends. In fact I’d even let my daughter marry one. Some people think I am a cyberlibertarian. And I don’t argue with them.
Two, despite all that, I signed up with the UK Border Agency IRIS scheme as soon as I could, use it regularly, and will probably sign up with its equivalent for the US and Europe as soon as I can. So I am not against the technology.
Three, I like what Bruce Schneier has to say about many things, and particularly about things to do with security. This liking predates (by a long way), and is completely unconnected with, our becoming colleagues much later. [Incidentally, we have never met, either as colleagues or before then, although we’ve been in the same room quite a few times. Maybe this will change; we’re both at Supernova.]
Having said all that.
There’s identity and there’s identity. “Identity” covers things I assert about myself, things that only I can assert about myself. It covers things that others assert about me, things that only others can assert about me. It also covers things that I assert, but where my assertion is weak unless it is backed up by someone or something else.
When I say that I like Grateful Dead or Traffic or Crosby Stills Nash and Young or John Mayall or Jim Croce, I am asserting something about myself. A last.fm audioscrobbler attached to iTunes can see whether my listening habits match my stated likes, but it cannot say what I like. That is for me to say. When a bank says that I have a credit rating of X, they are asserting something about me that I cannot assert about myself. When a government gives me a token to help me assert who I am (such as a passport or a driving licence), the government is doing something I couldn’t do as well.
So there’s identity and there’s identity. It’s all the rage, it’s the happening thing, there are now more people working in the identity space than in call centres worldwide. [Doesn’t it feel like that to you?]
And, as Chris Skinner says, it looks like biometrics will become more important, more dominant, more pervasive. Shivers down spine. Collywobbles. Paroxysms of sweat. I begin to get a teensy weensy bit concerned.
Why? Not because I think someone’s going to gouge my eye out and re-use it. Not because I think that someone’s going to chop my finger off. [Yes, there are times and there are places where this can and probably will happen, but in this conversation I consider the Chopping Off argument to be a Red Herring.]
I’ve been concerned about the use of biometrics in everyday life for a few decades now. Nearly 30 years ago, when I worked for Burroughs Corporation, we had a division that manufactured ATMs. And I remember seeing a presentation where people queued up to a hole in the wall to draw money, presented their eyeballs to an even smaller hole in the wall, had their retinas scanned before the hole vomited out cash. And I thought to myself, who designs these things? Who imagines that someone would actually do this? Did they talk to anyone, any would-be customers?
If you want to understand the pros and cons of biometrics, you must read this article in ACM by Bruce. So what if it’s almost a decade old, the points he makes still hold true. It’s an expansion and improvement on another note by him, written a year earlier in his Crypto-Gram newsletter.
Here are some excerpts:
- [B]iometrics work well only if the verifier can verify two things: one, that the biometric came from the person at the time of verification, and two, that the biometric matches the master biometric on file. If the system can’t do that, it can’t work.
- Biometrics are unique identifiers, but they are not secrets. You leave your fingerprints on everything you touch, and your iris patterns can be observed anywhere you look.
- Once someone steals your biometric, it remains stolen for life; there’s no getting back to a secure situation.
- Biometrics are powerful and useful, but they are not keys. They are not useful when you need the characteristics of a key: secrecy, randomness, the ability to update or destroy.
- [B]iometrics are necessarily common across different functions. Just as you should never use the same password on two different systems, the same encryption key should not be used for two different applications. If my fingerprint is used to start my car, unlock my medical records, and read my electronic mail, then it’s not hard to imagine some very insecure situations arising.
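The contrast Bruce draws between secrets and identifiers can be made concrete. A minimal sketch, in Python, with entirely made-up values: a password hash can be rotated after a leak, while a biometric template is derived from your body, matched fuzzily, and cannot be replaced.

```python
import hashlib
import secrets

# A password is a secret you can replace: if the stored hash leaks,
# you pick a new password and the stored credential changes entirely.
def store_password(pw: str):
    salt = secrets.token_bytes(16)
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000)

old_credential = store_password("correct horse")
new_credential = store_password("battery staple")  # rotation after a breach
assert old_credential != new_credential

# A biometric is an identifier, not a secret. This integer stands in
# for an enrolled iris template; re-enrolling yields (roughly) the same
# value, because it comes from the same eye.
enrolled_iris = 0b1011_0110_1100_0011

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two templates.
    return bin(a ^ b).count("1")

# Matching is fuzzy: a fresh scan never matches exactly, so the system
# accepts anything within a few bits of the enrolled template. There is
# no equivalent of "rotating" the value once it leaks.
fresh_scan = enrolled_iris ^ 0b0000_0100  # sensor noise flips one bit
assert hamming(enrolled_iris, fresh_scan) <= 3
```

The fuzzy-match step is also why a stolen template stays useful to an attacker: anything close enough passes, and "close enough to your iris" is a property you cannot change.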
As a frequent traveller, I am happy to use biometrics-based processes when it means my immigration and security queues are shortened significantly. IRIS has been a boon for me.
But if my bank asked me to start using iris recognition based schemes, I would probably change banks.
Now you must be used to people tritely asking you “So what does good look like to you?” [What an appalling question. Why can’t they ask you what you want? I’m old and patient now, so I forgo the temptation to “throw them under that question”, a la that other appalling phrase “throw them under the bus”. Who thinks this unadulterated crap up anyway?]
So humour me for a second and allow me to use the phrase “What does bad look like?” When I use IRIS, “bad” means that someone has managed (a) to get a copy of my iris as stored in some humongous central database somewhere and (b) to convince some hardware and software in a booth that he/she is me returning to the UK. Depending on my actual travelling status, that may throw up some conflicts and errors, and the worst that could happen is that I spend some time sorting out the mess when I next pass through. But the facts will be on my side, and I don’t live in a police state. People may be appalled by CCTV Britain, by Guantanamo Bay, by 42 day detentions, but none of that is as scary as The Emergency was to me in 1975-77. Not even close.
It’s not as if someone can leave my iris behind at a crime scene. If someone finds my eyeball rolling around alongside a corpse, the chances are the corpse is me. If someone leaves a photograph of my iris behind as a calling card, not even the Keystone Cops will assume that I’m the likely perpetrator.
So bad doesn’t look too bad in many of these cases.
When it comes to banking, it’s a different story. Bad can look bad. If you’d like a humorous way of finding out why, listen to this clip by Mitchell and Webb. [Oh the humanity. Worth listening to for that line alone.]
We already use biometrics for banking: the common-or-garden signature is a biometric, particularly if you start analysing pressure and time and emphasis and all that jazz. People have tried to forge signatures, and if electronic signatures become more common, then I am sure that people will try even harder to forge signatures.
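The “pressure and time and emphasis” point is what dynamic signature verification systems actually measure. A toy sketch of the idea, with feature names and thresholds invented purely for illustration: a captured signature is reduced to a small feature vector and compared against an enrolled template by distance.

```python
import math

# Invented "dynamic signature" features for one signer:
# (total signing time in seconds, mean pen pressure 0..1, pen lifts).
enrolled_template = (2.1, 0.63, 3)

def distance(a, b) -> float:
    # Plain Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(sample, template=enrolled_template, threshold=0.5) -> bool:
    # Accept the sample only if its dynamics sit close to the template.
    return distance(sample, template) < threshold

genuine = (2.0, 0.60, 3)  # same signer: small natural variation
forgery = (3.5, 0.40, 2)  # right shape on paper, wrong dynamics
assert verify(genuine)
assert not verify(forgery)
```

A forger copying the visible shape of a signature still has to reproduce its timing and pressure profile, which is why the dynamic variant is harder to fake than ink on paper; real systems use far richer features, but the accept-if-close-enough structure is the same.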
I try and adapt to changes in the environment around me. For example I think about where I want to use my credit or debit cards so as to minimise the risk of cloning, and avoid the places where I think the risk is high. If my bank said I could use iris recognition in order to withdraw cash, I wouldn’t sign up. I would use other ways. If they said that it was the only way, I would use other banks. Simple as that.
It doesn’t mean that I am against the use of biometrics. Rather, I am against the use of biometrics in environments where the weaknesses of biometrics overwhelm the strengths. As stated before, I use biometrics to enter the UK. And I would be happy to use biometric locks in my front door, as Xeni Jardin refers to here.
As Bruce says in that article, if someone wanted access to my house, they could make a surreptitious copy of my key or throw a rock through my window. They don’t have to cut my finger off.
Biometrics aren’t bad. Biometric banking is already here, as in the use of signatures. But we need to think hard about allowing increased use of biometrics in banking. Because bad could then look very bad.