Blog Category: Research

Turk vs. TREC


We’ve been dabbling in the world of Mechanical Turk, looking for ways to collect relevance judgments for TREC documents. TREC raters’ coverage is spotty, since it is based on pooled (and sometimes sampled) documents identified by a small number of systems that participated in a particular workshop. When evaluating our research systems against TREC data from prior years, we found that many of the retrieved documents had not received any judgments (relevant or non-relevant) from TREC assessors. Thus we turned to Mechanical Turk for answers.
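The coverage gap is easy to quantify from the standard TREC files themselves. Here is a minimal sketch (not our actual pipeline) that measures what fraction of a run's retrieved documents received no judgment at all, assuming the usual qrels format (`topic iter docid relevance`) and run format (`topic Q0 docid rank score tag`); the sample data below is invented for illustration.

```python
def load_qrels(lines):
    """Map topic -> set of judged docids (judged at all, relevant or not)."""
    judged = {}
    for line in lines:
        topic, _iter, docid, _rel = line.split()
        judged.setdefault(topic, set()).add(docid)
    return judged

def unjudged_fraction(judged, run_lines):
    """Fraction of retrieved (topic, docid) pairs that have no judgment."""
    total = missing = 0
    for line in run_lines:
        topic, _q0, docid = line.split()[:3]
        total += 1
        if docid not in judged.get(topic, set()):
            missing += 1
    return missing / total if total else 0.0

# Toy data: one topic, two judged docs, a run retrieving one unjudged doc.
qrels = ["301 0 FT911-1 1", "301 0 FT911-2 0"]
run = ["301 Q0 FT911-1 1 9.5 sys", "301 Q0 FT911-3 2 8.1 sys"]
print(unjudged_fraction(load_qrels(qrels), run))  # 0.5
```

Documents falling into that unjudged bucket are exactly the ones we sent to Mechanical Turk workers.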


The mystery of the Nook


December 10. Could not open box. Tried several times.

December 11. Co-worker took it to get charged. When she brought it back, Walt Whitman’s picture replaced the lady who was displayed earlier.

December 12. Could not turn on device. Accidentally discovered cable and plug in the packaging. Charged overnight through my laptop.


mobile. very mobile.


Developers have built applications for mobile phones to support a wide swath of activities, but I would argue that there is no better use for a mobile phone than for those tasks that are fundamentally mobile. And what is more mobile than running? While there have been a variety of research projects (such as UbiFit) designed to encourage exercise, I am more interested here in those applications that support folks who’ve already bought in. For us, smart phones that make it easy to track pace, distance, and even elevation (such as RunKeeper, SportsTracker, and MotionXGPS) have been killer apps. Research projects (such as TripleBeat) are also exploring how to increase competition using past personal results as well as results from other users. Other work has explored using shared audio spaces to allow runners to compete over distances.

How else might we use mobile technologies to improve the running experience?

Marti Hearst: Google Tech Talk on Search User Interfaces


Marti Hearst recently gave a talk at Google related to the themes in her book. She does a good job of explaining the challenges and opportunities of interactive information seeking, including design, evaluation, query reformulation, integrating navigation and search, information visualization as it relates to search, and future trends. While most of this is music to the ears of HCIR types, her discussion of collaborative search (around minute 46) is particularly “relevant”: Marti spends a good deal of time on our paper on collaborative search, describing the various models of collaboration and showing some figures from our paper. The talk is on YouTube; the paper is on the web. Questions and comments are very welcome.

ps: Marti’s mention of Diane “Green” in minute 24 actually refers to Diane Kelly, whose well-received paper on query suggestion was presented at SIGIR 2009.

DarwinTunes: a social experiment


DarwinTunes bills itself as a “test tube for cultural evolution.” It’s an online experiment being run by researchers at Imperial College London. We often talk about the evolution of social media or cultural memes – but is that just a metaphor, or is it really evolution?


Generating 3D models from webcams


One highly inconvenient thing about working with virtual worlds or 3D content in general is: where do your 3D models come from (especially if you’re on a budget)? A talented but (inevitably) overworked 3D artist? An online catalog of variable quality and cost? Messing around yourself with tools like SketchUp or Blender? What if you want something very specific, very quickly? The MIR (Mixed and Immersive Realities) team here at FXPAL is very interested in these questions and has done some work in this area. Others are working on it too: here’s an elegant demo from Qi Pan at the University of Cambridge, showing the construction of a model with textures from a webcam image:

ARdevcamp: Augmented Reality unconference Dec. 5 in Mountain View, New York, Sydney…


We’re looking forward to participating in ARdevcamp the first weekend in December. It’s being organized in part by Damon Hernandez of the Web3D Consortium, Gene Becker of Lightning Labs, and Mike Liebhold of the Institute for the Future (among others – it’s an unconference, so come help organize!) So far, there are ~60 people signed up; I’m not sure what capacity will be, but I’d sign up soon if you’re interested. You can add your name on the interest list here.

From the wiki:

The first Augmented Reality Development Camp (AR DevCamp) will be held in the SF Bay Area December 5, 2009.

After nearly 20 years in the research labs, Augmented Reality is taking shape as one of the next major waves of Internet innovation, overlaying and infusing the physical world with digital media, information and experiences. We believe AR must be fundamentally open, interoperable, extensible, and accessible to all, so that it can create the kinds of opportunities for expressiveness, communication, business and social good that we enjoy on the web and Internet today. As one step toward this goal of an Open AR web, we are organizing AR DevCamp 1.0, a full day of technical sessions and hacking opportunities in an open format, unconference style.

AR DevCamp: a gathering of the mobile AR, 3D graphics and geospatial web tribes; an unconference:
- Timing: December 5th, 2009
- Location: Hacker Dojo in Mountain View, CA

Looks like there will be some simultaneous ARdevcamp events elsewhere as well – New York and Manchester events are confirmed; Sydney, Seoul, Brisbane, and New Zealand events possible but unconfirmed.

Google Scholar is now legal


In 2001, when we were thinking about how to use e-books for legal research, we partnered with Lexis Nexis to study a moot court class in a law school. Without access to the documents that we obtained through Lexis, we would not have been able to engage the students and to explore potential designs for such devices.

But that was eight years ago. Today, we could resort to Google Scholar: A couple of days ago, Google announced on its blog that it will be including full text legal opinions from U.S. federal and state district, appellate and supreme courts in results returned by Google Scholar. In addition to each case, Google also returns citations of that case in other opinions. This service is unlikely to put West Publishers or Lexis Nexis out of business, but it does make it considerably easier for the average person (or researcher) to find these cases.


Choose Your Own Adventure


Historically, the Hypertext research community is an intertwingling (a Ted Nelson-logism) of three distinct strands — structural computing, interaction, and HT literature, which could be mapped, roughly, onto the engineers, the HCI folk, and the humanists. While engineering and HCI aspects were somewhat necessary for HT literature, the focus, by definition, has been on exploring the boundaries of electronic literature. In the end, I think, it’s good writing that makes hypertext literature interesting much more so than clever interaction. In fact, the electronic component is often not necessary at all: see If On a Winter’s Night a Traveler, for example.

But there is room for beauty in interaction as well. Thanks to Mark Bernstein of Eastgate, I came across a beautiful set of visualizations of the narrative structure of CYOA, a series of hypertext books for children. Through a variety of charts and graphs like the one shown here, the author of these diagrams conveys the many alternate paths through each story in the collection, and uses these visuals to compare, analyze, and appreciate the books. And don’t forget the animations, accessible through a link near the top of the page.
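The structure those diagrams expose is just a directed graph of pages and choices. A toy sketch (a hypothetical five-page book, not one of the actual CYOA titles) shows how the alternate paths the visualizations depict can be enumerated:

```python
def all_paths(graph, start):
    """Enumerate every path from the first page to an ending.

    graph maps a page number to the pages its choices lead to;
    a page with no outgoing choices is an ending.
    """
    stack = [(start, [start])]
    paths = []
    while stack:
        node, path = stack.pop()
        nexts = graph.get(node, [])
        if not nexts:  # no further choices: this path has reached an ending
            paths.append(path)
        for nxt in nexts:
            stack.append((nxt, path + [nxt]))
    return paths

# A toy book: page 1 branches, and the branches partly reconverge.
book = {1: [2, 3], 2: [4], 3: [4, 5], 4: [5]}
for p in all_paths(book, 1):
    print(" -> ".join(map(str, p)))
```

This little book has three distinct reading paths; the charts in the collection do the same counting visually, at the scale of a whole book.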

My retelling won’t do it justice; take a look for yourself, and think about these designs next time you’re building a slide deck.

Finally, since these stories are now available as Kindle editions, in principle, it would be possible to collect actual reading paths that readers take through the works, and subject them to the same analyses. What sorts of hypotheses about reading, personality, and interaction could we answer with such data?

Preliminary TOC for the IP&M Special Issue on Collaborative Info Seeking


We are nearing the end of editing the Special Issue of Information Processing & Management, and are proud to announce the papers that will be in the issue. The Special Issue was the result of the 1st collaborative search workshop we organized at JCDL 2008; the next workshop is coming up soon! We had many submissions on a variety of related topics, including field work and other reporting that characterized instances of collaboration in information seeking, evaluation and models of collaborative episodes, and a number of system and algorithm papers.
