Blog Archive: 2010

Intended to deceive

The ‘sphere is a-twitter about BP’s buying keywords (e.g., “oil spill”, “BP”, “gulf disaster”) to place links to its version of the story at the top of the search results. ABC News writes:

According to Kevin Ryan, the CEO of California-based Motivity Marketing, research shows that most people can’t tell the difference between paid results pages, like the ones BP has, and actual news pages.

So we have two issues: one related to BP, and one related to the search engines.

Continue Reading

Searching for a Houzz

Miles Efron and I have written about micro-IR in the past (see here, here, and here), and I recently came across another interesting example in the form of the Houzz App for the iPad. Houzz is an interface that fronts a collection of photographs of house interiors, the kind of material you might find in magazines and interior design/decoration books. It provides an (imperfect) browsing and search interface for finding images by geographic area, by room function, etc. It also has a mode that brings together sets of images on a theme, curated by a designer with a blog. Each set of images comes with an introduction by the blogger, a bit of background on that person, commentary on each image, and even blog-like discussions among readers and designers associated with each theme.
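
To make the micro-IR flavor of this concrete, here is a minimal sketch of faceted filtering over a toy photo collection; the facet names (region, room) are my own hypothetical stand-ins, not Houzz’s actual schema.

    from dataclasses import dataclass

    @dataclass
    class Photo:
        title: str
        region: str  # hypothetical facet: geographic area
        room: str    # hypothetical facet: room function

    # A toy collection standing in for Houzz's photo database.
    photos = [
        Photo("Rustic kitchen", "Southwest", "kitchen"),
        Photo("Modern loft bath", "Northeast", "bathroom"),
        Photo("Coastal kitchen", "Northeast", "kitchen"),
    ]

    def facet_filter(collection, **facets):
        """Return the photos matching every requested facet value."""
        return [p for p in collection
                if all(getattr(p, k) == v for k, v in facets.items())]

    # Browse by room function, then narrow by geographic area.
    for p in facet_filter(photos, room="kitchen", region="Northeast"):
        print(p.title)  # -> Coastal kitchen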

Continue Reading

Is Computer Science so different?

There was an interesting article in CACM discussing an idiosyncrasy of computer science I’ve never totally wrapped my head around: conferences are widely considered higher-quality publication venues than journals. Citation statistics in the article bear this perception out. My bias towards journals reflects my background in electrical engineering, but I still find the preference curious, having now spent more time as an author and reviewer for both ACM conferences and journals.

I think that journals should contain higher-quality work. In the general case there is no submission deadline, and page limits are less restrictive, so authors can submit their work when they feel it is ready and can presumably detail and validate it with greater completeness. The review process is also usually more relaxed. When I review for conferences, I am dealing with several papers in batch mode; for journals, papers are usually reviewed in isolation. And when the conference PC meets, the standards become relative: the best N papers get in, with N often predetermined, regardless of whether the paper ranked N really deserved acceptance or the paper ranked N+1 really deserved rejection.
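
A toy illustration of the difference, under the simplifying assumption that reviews reduce to comparable numeric scores: a journal-style process accepts whatever clears an absolute quality bar, while a conference-style process accepts the top N wherever the cutoff happens to fall.

    scores = {"A": 8.5, "B": 7.9, "C": 7.8, "D": 6.0, "E": 5.2}

    def journal_accept(scores, bar=7.0):
        # Absolute standard: accept whatever clears the bar, however many.
        return [p for p, s in scores.items() if s >= bar]

    def conference_accept(scores, n=2):
        # Relative standard: the best n get in, even when paper n+1
        # is nearly indistinguishable from paper n.
        ranked = sorted(scores, key=scores.get, reverse=True)
        return ranked[:n]

    print(journal_accept(scores))     # ['A', 'B', 'C']
    print(conference_accept(scores))  # ['A', 'B'] -- C misses a fixed cut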

Is this a good thing? Is CS that different from (all?) other fields that value journals more? On the positive side, there’s immense value in getting work out faster (journals’ downside being their publication lag) and in directly engaging the research community in person. No journal can stand in front of an audience to plead its case to be read (with PowerPoint animations, no less). And this may better suit a rapidly changing research landscape. On the other hand, we may be settling for less complete work. If conferences become the preferred publication venue, then the eight-to-ten-page version could be the last word on some of the best research. Or it may simply favor quantity at the expense of quality. Long ago (i.e., when I was in grad school), a post-doc in the lab told me that if I produced one good paper a year, I should be satisfied with my work. I’m not sure that would pass for productivity in CS research circles today.

And this dovetails with characterizations of the most selective conferences in the article and elsewhere. Many of the most selective conferences are perceived to prefer incremental advances over less conventional but potentially more innovative approaches. The analysis reveals that conferences with acceptance rates of 10-15% have less impact than those with rates of 15-20%. So if this is the model we are going to adopt, it still needs some hacking…

Kindle’s fate

Last week I made a handshake bet that Amazon will stop selling the Kindle device within a year. Today I am putting it in writing. Amazon will stop selling its devices for several reasons: because the margins are higher on books, because ultimately people won’t want to own multiple specialized devices with significantly overlapping functions, and because the devices themselves are quite limited.
Continue Reading

Unintended consequences

On Thursday I saw Genevieve Bell’s entertaining PARC Forum talk titled “Feral Technologies: An ethnographic account of the future.” I learned all about animals–camels in Australia, rabbits in Australia, cane toads in Australia–each imported for specific reasons, each going feral and causing various kinds of trouble. Apparently there were also goats, donkeys, foxes, and other species, but she didn’t talk about those.

It was a good talk, following on her CHI 2010 keynote address. My problem with it was that the notion of unintended consequences of technology deployments (animal, mineral, or vegetable) is not particularly new.

Continue Reading

To Link and Link Not

Nick Carr wrote a post a couple of days ago about the distracting effects of hypertext anchors when reading text. He referred to the increased cognitive effort that in-line anchors impose on readers, but as Mark Bernstein points out, the article making that cognitive-effort claim was published in the 1980s, and its claims were not supported by subsequent hypertext research.

Patricia Wright’s work on cognitive prostheses suggests that hiding information behind links makes it less likely that people will use that information, compared with showing it directly. Her argument (presented as a keynote address at Hypertext ’91) is that the cognitive overhead of link following makes people less likely to follow links, not that the presence of link anchors is distracting. Of course, the implication is that the further from their context you move the anchors, the less likely people are to follow them. This is the point that Daniel Tunkelang makes in his response to Nick’s post.

Continue Reading

How far to generalize?

The importance of understanding people’s activity to inform design is one of the central tenets of HCI. When design is grounded in actual work practice, it is much more likely to produce artifacts that fit the way people work and the way they think. One key challenge when studying people to inform design is understanding which aspects of existing work practice are essential to the work, and which are side effects of existing technology (or the lack thereof) and thus fair game for innovation.

While HCIR research often relies on recall and precision measures to compare systems, qualitative methods are used as well. For example, Vakkari and his colleagues studied several students performing research for their Master’s theses. The researchers used a variety of techniques, including diary entries and interviews, to assess the evolution of the searchers’ behavior over the course of a few months. Their findings led them to fill in some of the details of Kuhlthau’s model of information seeking.
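
For reference, the recall and precision measures mentioned above reduce to simple set ratios; here is a minimal computation over made-up document IDs.

    def precision_recall(retrieved, relevant):
        """Standard set-based IR measures:
        precision = |retrieved & relevant| / |retrieved|
        recall    = |retrieved & relevant| / |relevant|
        """
        retrieved, relevant = set(retrieved), set(relevant)
        hits = len(retrieved & relevant)
        return hits / len(retrieved), hits / len(relevant)

    # Made-up document IDs, purely for illustration.
    p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5])
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67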

Continue Reading

Virtual Factory at IEEE ICME 2010, Singapore

Happy to note that our overview paper on the Virtual Factory work, “The Virtual Chocolate Factory: Building a mixed-reality system for industry,” has been accepted at IEEE’s ICME 2010. The conference is in Singapore in July; I’ll be there, co-chairing a session that focuses on workplace use of virtual reality, augmented reality, and telepresence. You can see more on the Virtual Factory work here.

Finding vs. retrieving

Having stumbled onto the IR Museum on the SIGIR web site, I decided to investigate why I had not come across it before. I missed the SIGIR announcement about the museum when it was made in 2008, and since that item was no longer on the first page of the web site, I didn’t find it by browsing. Site search might have helped, but there wasn’t any.

So I tried searching the web.
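
In the absence of site search, a site-restricted web query is the usual workaround. A small sketch of constructing one (the site: operator is supported by the major search engines; the URL pattern below is Google’s):

    from urllib.parse import urlencode

    def site_search_url(site, terms):
        """Build a site-restricted Google query URL, as a fallback
        for sites (like sigir.org here) that offer no search box.
        """
        query = f"site:{site} {terms}"
        return "https://www.google.com/search?" + urlencode({"q": query})

    print(site_search_url("sigir.org", "IR Museum"))
    # https://www.google.com/search?q=site%3Asigir.org+IR+Museum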

Continue Reading