Over the last few months, I’ve been hearing this term quite a bit: “search fatigue”. Since I had absolutely no idea what it meant, I thought I’d check. So I Googled it.
Before I saw what came up, had I been asked to guess what it meant, I’d have tried some variant on information overload or infoglut. Instead, what I got as search result number one was this article, suitably headlined Search Fatigue.
I read it. And I didn’t understand it. I quote from the article (which itself quotes from an article by Jeffrey Beall in a magazine called American Libraries):
Search fatigue, according to Beall, is a feeling of dissatisfaction when search results do not return the desired information.
“The root cause of search fatigue,” Beall told me, “is a lack of rich metadata and a system that can exploit the metadata.”
I read that, and re-read it. So search fatigue is an academic-sounding term for Google rage. Okay, I got that. And the next sentence was also fine; the richer the metadata, the richer one can make the search experience. So far so good. But soon after that I got confused. Beall went on:
For example, metadata-enabled searching, as you find in a library when it searches through its databases for resources, “allows for precise author, title, and subject searches,” Beall says. In other words, it looks only in the fields you request, rather than searching through the entire document. If you name the author, it looks only in the author field of each document, thus returning only relevant hits.
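To make the distinction concrete, here’s a toy sketch of what Beall is describing (the records, field names and function names are my own invention, not his): a fielded lookup inspects only the field you name, while a full-text scan rummages through everything.

```python
# Toy illustration of deterministic, fielded search versus a naive
# full-text scan. All records and field names here are hypothetical.

records = [
    {"author": "Jeffrey Beall", "title": "Search Fatigue",
     "body": "On metadata and search."},
    {"author": "A. N. Other", "title": "On Cataloguing",
     "body": "Beall argues that rich metadata matters."},
]

def fielded_search(records, field, term):
    """Look only in the named field, as a library catalogue does."""
    return [r for r in records if term.lower() in r[field].lower()]

def full_text_search(records, term):
    """Scan every field of every record, as a web engine might."""
    return [r for r in records
            if any(term.lower() in str(v).lower() for v in r.values())]

# The fielded author search returns only the record Beall actually
# wrote; the full-text scan also returns the one that merely mentions
# him. That is the "only relevant hits" claim in miniature.
print(len(fielded_search(records, "author", "Beall")))  # 1
print(len(full_text_search(records, "Beall")))          # 2
```

Precise, yes; but note that the fielded lookup only works if the author field was filled in correctly in the first place, which is where my doubts begin.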
If Beall is being quoted correctly, he is asserting that deterministic search actually improves the search experience.
Everything I have learnt about search points the other way. Formal data structures, key fields, primary keys: these are all the ways we lost information in the first place. In fact, tree structures were probably more responsible for losing things than everything else put together; have you ever tried looking for archived mail, or for files you Saved rather than Saved As, on your PC?
I thought we were moving away from deterministic search to probabilistic models. I thought people at Google and Technorati and wherever else were finding ways to raise the relevance of amorphous poorly-filed information, using a variety of artifices to reflect the value of links and references. I thought we would be heading towards the next generation of collaborative filtering, where I can “pass” my search bias to someone else, or for that matter perform Boolean operations on search bias.
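The probabilistic idea can be sketched in a few lines. This is a toy of my own devising, not anyone’s actual algorithm: rather than a yes/no field match, every document gets a graded relevance score blending how often the query term appears with how many other documents link to it, the weights being entirely invented for illustration.

```python
# Toy probabilistic-style ranker: relevance is a graded score, not a
# binary field match. Data, scores and weights are all hypothetical.

docs = {
    "a": {"text": "search search fatigue", "inlinks": 1},
    "b": {"text": "a note on search",      "inlinks": 5},
    "c": {"text": "gardening tips",        "inlinks": 9},
}

def score(doc, term, link_weight=0.1):
    """Blend term frequency with a crude measure of link value."""
    term_freq = doc["text"].lower().split().count(term.lower())
    return term_freq + link_weight * doc["inlinks"]

def rank(docs, term):
    """Return document ids ordered by descending relevance score."""
    return sorted(docs, key=lambda d: score(docs[d], term), reverse=True)

# Even document "c", which never mentions the term, gets a non-zero
# score from its links; nothing is silently excluded, merely ranked.
print(rank(docs, "search"))  # ['a', 'b', 'c']
```

The point of the toy is the shape, not the numbers: amorphous, poorly-filed material can still surface, which a strict field lookup would never allow.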
The problem that needs solving is not to do with finding things that have been well labelled and well filed. That we have always been able to do. What we haven’t been able to do is to find the messy stuff, partially named, partially remembered, often misfiled, often misclassified.