For quite a while now Jeremy and I have been characterizing collaborative exploratory search and distinguishing it from other kinds of search. So far the exercise has been largely theoretical, and it has sometimes been met with skepticism. But in Tuesday’s Washington Post, I found a passage that illustrates exactly the kinds of activities we are talking about.
A week or so ago, we wrote a post on Social Search, and how (we believe) it is different from Collaborative Search. We have also begun laying out a taxonomy of the various factors or dimensions that characterize information seeking behaviors involving more than one person. So far, we have listed two dimensions: Intent and Synchronization. We will continue with two additional dimensions over the next few weeks: Depth and Location.
But in the meantime, we note that Intent and Synchronization already give us enough material to draw descriptive and discriminatory lines between various types of multi-user search.
This is the third post by Jeremy and me in a series on collaborative information seeking. The first was an introduction to the space, and the second dealt with the topic of collaborative intent. This post deals with synchronization of data that underlies the collaboration. While it is possible to collaborate in searching for information without tool support by exchanging URLs or documents directly, more interesting interactions are possible when they are mediated by the search system.
Recently, a new class of search applications that support collaborative information seeking has emerged. In these systems, users work in small groups with a shared information need, rather than relying on large numbers of anonymous users with potentially diverging information needs. One clear way to distinguish different social search activities has been proposed by Colum Foley. In his PhD thesis, he characterizes search systems on two dimensions, “Sharing of Knowledge” and “Division of Labor.” Sharing of knowledge separates all social search systems from traditional single-user approaches, while division of labor separates social search from collaborative search.
This is Part 2 of (at least) 5 in a series of posts about Collaborative Information Seeking. Part 1 is found here.
The most important dimension for distinguishing between various types of collaboration is the intent with which a system’s users approach one another to collaborate.
In his recent post, Daniel Tunkelang issued a call for renewed interest in recall as a measure of performance of information retrieval systems, particularly for exploratory search tasks. It is interesting to note that there are several possible ways to measure recall and precision for interactive tasks, and which measure you should use depends on what aspect of the entire human-computer system you are interested in.
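To make the distinction concrete, here is a minimal sketch (with hypothetical document IDs and relevance judgments) of two ways recall can be computed for an interactive session: over everything the retrieval engine returned, versus over only the documents the user actually examined. The first evaluates the retrieval component in isolation; the second evaluates the entire human-computer system.

```python
def recall(found, relevant):
    """Fraction of the relevant documents that appear in `found`."""
    if not relevant:
        return 0.0
    return len(set(found) & set(relevant)) / len(relevant)

# Hypothetical session data, for illustration only.
relevant = {"d1", "d2", "d3", "d4"}   # ground-truth relevant documents
retrieved = ["d1", "d2", "d3", "d9"]  # everything the engine returned
examined = ["d1", "d9"]               # what the user actually looked at

system_recall = recall(retrieved, relevant)  # recall of the retrieval engine: 0.75
user_recall = recall(examined, relevant)     # recall of the whole interactive system: 0.25
```

For an exploratory task, the gap between the two numbers is itself informative: a system can retrieve most of the relevant material while the interface still fails to get it in front of the user.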
Daniel Tunkelang’s recent post on Twitter search got me thinking about what an HCIR geek would do, which produced the following random thoughts.
First, we should start with tasks. What kinds of information do people want to find in tweet streams? Do they want to find a document that’s been referenced? Do they want information about an event? Are they interested in finding a community of interest? What other useful tasks are there with respect to this stream?
In an earlier post, I described Waterworth and Chignell’s model of information exploration, and distinguished it from other theories and models of information seeking in that it tried to address some aspects of interaction. The main problem with the model is its structural responsibility dimension. Structural responsibility models “who [system or user] is concerned with the structure [of the data],” but since the user can only interact with the constructs the system exposes through the interface, and the same structures can be expressed in many different ways, this dimension fails to capture a distinction that is useful for design.
There are many different ways to characterize collaborative information seeking, many dimensions on which collaborative search systems can be categorized.
For the past few years Jeremy Pickens and I have been thinking that our model of collaborative exploratory search needs some further explication. Or maybe we’re just trying to understand it better ourselves. We have found that to explain what our model is, we have to simultaneously explain what our model is not. This has led to numerous discussions not only about the various dimensions of collaboration, but also about the relative importance among those dimensions for distinguishing between systems.
Recently, I’ve been involved in a lot of discussions about exploratory search on this blog and in comments on The Noisy Channel. One way to look at exploratory search (and there are many others!) is to separate issues of interaction from issues of retrieval. The two are complementary: for example, recently Daniel Tunkelang posted about using sets rather than ranked lists as a way of representing search results. This has implications on one hand for how the retrieval engine identifies promising documents, and on the other for how results are to be communicated to the user, and how the user should interact with them.
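The sets-versus-ranked-lists distinction can be sketched in a few lines. This is a toy illustration with made-up scores, not anyone’s actual proposal: a ranked list commits to a total order even where score differences are negligible, while a set-based view only commits to coarse groupings.

```python
# Hypothetical retrieval scores for four documents.
scored = {"d1": 0.91, "d2": 0.90, "d3": 0.55, "d4": 0.12}

# Ranked list: a total order over all documents, including an arbitrary
# distinction between d1 and d2, whose scores are nearly identical.
ranked = sorted(scored, key=scored.get, reverse=True)

# Set representation: group documents into score bands, asserting no
# order within a band (thresholds are illustrative).
bands = {"high": set(), "medium": set(), "low": set()}
for doc, score in scored.items():
    if score >= 0.8:
        bands["high"].add(doc)
    elif score >= 0.4:
        bands["medium"].add(doc)
    else:
        bands["low"].add(doc)

print(ranked)         # ['d1', 'd2', 'd3', 'd4']
print(bands["high"])  # {'d1', 'd2'} -- no order asserted within the band
```

The choice of representation then constrains both sides of the system: the engine must decide how to form the groups, and the interface must decide how to let the user move among them.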