Posts Tagged 'collaborative filtering'

The Sound of Music Problem

Recommender and collaborative filtering systems all try to recommend new possibilities to each user, based on the global structure of what all users like and dislike. Underlying such systems is an assumption that we are all alike, just in different ways.

One of the thorny problems of recommender systems is the Sound of Music Problem: there are some objects that are liked by almost everybody, although not necessarily very strongly. Few people hate the film “The Sound of Music” (TSOM), but for most it’s more a tradition than a strong favourite. Objects like this, seen or bought by many and rated positively but not strongly, cause problems in recommender systems because they create similarities between many other objects that are stronger than ‘they ought to be’. If A likes an object X and TSOM, and B likes Y and TSOM, then TSOM makes A and B seem more similar than they would otherwise be; and, as a consequence, it also makes X and Y seem more similar. When the common object is really common, this effect seriously distorts the quality of the recommendations. For example, Netflix’s list of recommendations for each user has substantial overlap with its ‘Trending Now’ list; in other words, Netflix is recommending, to almost everyone, the same set of shows that almost everyone is watching. Nor is this problem specific to Netflix: Amazon makes similarly globally popular recommendations. Less obviously, Google’s page ranking algorithm produces a global ranking of all pages (more or less), from which the ones containing the given search terms are selected.
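
To make the distortion concrete, here is a toy sketch (mine, not from any production system; the users, items, and ratings are made up) of how a single near-universally rated item can manufacture similarity between two users who otherwise share nothing:

    # Toy illustration (hypothetical data): one near-universal item ("TSOM")
    # inflates the cosine similarity between two users who share nothing else.
    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two rating vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Rating vectors over the items [X, Y, TSOM]; 0 means "not rated".
    a = np.array([5.0, 0.0, 3.0])   # A likes X, mildly likes TSOM
    b = np.array([0.0, 5.0, 3.0])   # B likes Y, mildly likes TSOM

    print(cosine(a, b))          # ~0.26: A and B look somewhat similar...
    print(cosine(a[:2], b[:2]))  # 0.0: ...but with TSOM removed they share nothing

Item-based methods suffer in the same way: X and Y get pulled together solely because both co-occur with TSOM in many users’ histories.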

The property of being a TSOM-like object is related to global popularity, but it isn’t quite the same. Such an object must have been seen or bought by many people, but it is its place in the network of objects that causes the trouble, not simply the high frequency with which it has been rated.

Solutions to this problem have been known for a while (for example, Matt Brand’s paper http://www.merl.com/reports/docs/TR2005-050.pdf outlines a scalable approach that works well in practice; this work deserves to be better known). It’s a bit of a mystery why such solutions haven’t been adopted by the businesses that do recommendation, and why we live with recommendations that are so poor.
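
For a flavour of what a mitigation can look like (a generic inverse-user-frequency style down-weighting, not necessarily the method in Brand’s paper, and again with made-up data), one can simply damp items in proportion to how ubiquitous they are:

    # Sketch of one standard mitigation (inverse user frequency weighting),
    # not Brand's specific approach: items rated by almost everyone get a
    # weight close to zero, so they stop linking otherwise-unrelated users.
    import numpy as np

    def iuf_weights(ratings):
        # log(n_users / n_users_who_rated_item), computed per item column.
        n_users = ratings.shape[0]
        rated = np.count_nonzero(ratings, axis=0)
        return np.log(n_users / np.maximum(rated, 1))

    # Toy matrix (hypothetical): rows = users, columns = [X, Y, TSOM].
    R = np.array([[5.0, 0.0, 3.0],
                  [0.0, 5.0, 3.0],
                  [0.0, 0.0, 4.0],
                  [0.0, 0.0, 3.0]])

    w = iuf_weights(R)   # TSOM is rated by all 4 users, so its weight is log(1) = 0
    print(R * w)         # TSOM's column vanishes; X and Y keep their weight

After weighting, the TSOM column no longer contributes to user-user or item-item similarities, so the spurious link between A and B (and between X and Y) disappears.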


Making group judgements better than individual judgements

One of my former students, Tracy Jenkin, worked on the way novel knowledge is found and processed by organisations (her thesis here). As part of her work, she carried out an experiment comparing several different ways in which decisions are made about which novel ideas to pursue. One of the most interesting results was that an automated, collaborative-filtering-like tool gave a high rank to ideas that the group of humans, using a group decision-making tool, had let drop off their radar. In other words, as they anonymously ranked and commented on ideas, and then had an open discussion about them, they ‘forgot’ an idea that they had all considered to be of high quality. It wasn’t that they had second thoughts and discounted the idea as not as good as they had (all) initially thought; they just didn’t get around to revisiting it.

This suggests that automated tools for rating ideas might be an important supplement to group decision making. This applies especially to intelligence analysts, who are almost always working with concepts that are novel, and where the evidence for the quality of each idea is nebulous or worse.

There’s a difficult tradeoff between the wisdom of crowds on the one hand, and groupthink (and the resulting blind spots) on the other. Adding an automated tool to group decision making may be a helpful way to add some objectivity into the mix without losing any of the benefits of the collective brainpower of a group of humans.

