Blog Archive: 2011

Obviously wrong

So Microsoft is suing Barnes & Noble for patent infringement. Well, that’s what patents are for: the right to sue. And that’s what licenses are for: the right to avoid getting sued. The only thing is, if you’re going to sue someone with half a brain, you should at least make sure your patent is reasonably solid.

With that said, one of the patents Microsoft claims the Nook infringes deals with annotating documents, an area I know a bit about. The patent, filed in December 1999, claims a system and method for associating annotations with a non-modifiable document. The idea is that file positions in the document, associated with user-selected objects, are used to retrieve annotations from some other location and to display those annotations for the user.
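To show how little there is to the claimed idea, here is a rough sketch in Python based on my reading of it: the document stays read-only, annotations live in a separate store keyed by file position, and selecting a span looks up and displays whatever notes overlap it. The names and structure are my own illustration, not taken from the patent.

```python
# Sketch of the claimed scheme: annotations for a read-only document are kept
# in a separate store, keyed by file position, and looked up when the user
# selects a span of text. Names are illustrative, not from the patent.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: int   # file position where the annotated span begins
    end: int     # file position where it ends
    note: str    # the user's annotation text

@dataclass
class AnnotationStore:
    # Annotations live outside the document, so the document itself is never modified.
    annotations: list[Annotation] = field(default_factory=list)

    def add(self, start: int, end: int, note: str) -> None:
        self.annotations.append(Annotation(start, end, note))

    def lookup(self, start: int, end: int) -> list[Annotation]:
        # Retrieve every annotation whose span overlaps the user's selection.
        return [a for a in self.annotations if a.start < end and start < a.end]

document = "This e-book text is treated as non-modifiable."
store = AnnotationStore()
store.add(5, 11, "interesting phrase")

for a in store.lookup(0, 20):
    print(f"{document[a.start:a.end]!r}: {a.note}")
```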

Sounds obvious, no? So obvious, in fact, that when we built such a system in 1997, we didn’t bother patenting this.

Eye tracking on a laptop

Tobii and Lenovo presented a laptop with a built-in eye tracker at CeBIT last week. The eye tracker lets the user control the laptop, for instance selecting files to open and picking the active window from an Exposé-like view. Engadget has a video of a demonstration of the eye control on the laptop here. I wish I could get my hands on it for some testing. A laptop with a built-in eye tracker certainly has potential, from making eye tracking easier and more flexible for disabled users, to making eye-tracking-based usability testing more flexible by letting usability specialists move from their labs into the field.

Recommendations needed

In one of our research projects, we are trying to compare some alternative algorithms for generating recommendations based on content similarity. As you might expect, we have some data we’re playing with, but the data is noisy and sometimes it’s hard to make sense of the variability: is it due to noise in the data, or is the algorithm trying to tell us something?

So my thought was to break the problem into two parts: first evaluate our algorithms on data with known characteristics, and then apply what we learn to the new, noisy data to see what's there. My purpose in writing this post is to solicit suggestions about which publicly available data we should be using.
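For concreteness, here is the flavor of content-similarity baseline I have in mind, a minimal sketch using TF-IDF and cosine similarity. The choice of scikit-learn and of this particular similarity measure is mine, for illustration only; the post doesn't say which algorithms or libraries we're actually comparing.

```python
# Minimal content-similarity recommender: represent each item by its text,
# then recommend the items whose TF-IDF vectors are closest to a query item.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = [
    "eye tracking study on laptop usability",
    "collaborative search over long sessions",
    "patent dispute over document annotation",
    "usability testing with eye tracking in the field",
]

vectors = TfidfVectorizer().fit_transform(items)   # one row per item
similarity = cosine_similarity(vectors)            # item-by-item similarity matrix

def recommend(item_index: int, k: int = 2) -> list[int]:
    # Rank all other items by similarity to the given one and return the top k.
    ranked = similarity[item_index].argsort()[::-1]
    return [i for i in ranked if i != item_index][:k]

print(recommend(0))   # indices of the items most similar to item 0
```

Running something like this against a dataset with known ground truth would let us check whether the recommendations behave sensibly before trusting the same algorithms on our noisier data.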

Looking for an HCIR intern

It’s intern time again! I am looking for someone to help me run an exploratory study of a collaborative, session-based search tool that I’ve been building over the last few months. Session-based search frames information seeking as an ongoing activity, consisting of many queries on a particular topic, with searches conducted over the course of hours, days, or even longer. Collaborative search describes how people can coordinate their information-seeking activities in pursuit of a common goal.
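To make that framing concrete, here is one way such a session might be modeled: a shared topic, a set of collaborators, and timestamped queries accumulating over days. This is purely an illustrative sketch of the concept, not the data model of the tool described above.

```python
# Illustrative model of collaborative, session-based search: one shared
# session collects timestamped queries from several people working on the
# same topic over an extended period. Not the actual tool's data model.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Query:
    user: str
    text: str
    issued_at: datetime

@dataclass
class SearchSession:
    topic: str
    collaborators: set[str] = field(default_factory=set)
    queries: list[Query] = field(default_factory=list)

    def add_query(self, user: str, text: str) -> None:
        # Each query is attributed to a collaborator and timestamped,
        # so the session's history can span hours or days.
        self.collaborators.add(user)
        self.queries.append(Query(user, text, datetime.now()))

session = SearchSession(topic="exploratory study design")
session.add_query("alice", "session-based search evaluation methods")
session.add_query("bob", "collaborative search CSCW studies")
print(len(session.queries), sorted(session.collaborators))
```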

The intern for this project will help frame a set of research questions around collaborative, session-based search, and then take the lead on an experiment to gain insight into this rich space and to help us understand how to improve our search tool. The intern will also participate in writing up this work for publication at a major conference such as CHI, CSCW, or JCDL.
