Modeling social media

Marti Hearst gave an interesting talk at JHU on Social Media in which she described some important dimensions through which we can understand the variety of phenomena that are tagged with that label. She examined expertise, the degree to which data are shared (synchronized!) among the people engaged in some activity, and the degree to which participants are working toward an explicitly shared goal (even if they approach it with different personal motivations).

Continue Reading

Metrics don’t come easy

Comments (2)

Daniel Tunkelang wrote about Herb Simon’s attention economy and ways to measure how people allocate attention. His example of attention-switching and interruptions with e-mail made me think about individual differences. People differ in their willingness to engage in an activity, and self-interruption is a common practice. You can measure time on task, but for complex cognitive tasks it is not clear that time is a good predictor of performance. The problem of measurement is more complex than simply aggregating times or counting switches.

In HCI, we have the notion of the fallacy of the average user: if you design for characteristics averaged over a large number of people, there may not exist a single person for whom the design is ideal. This is because some phenomena have bimodal distributions rather than a single central tendency. For example, Hudson et al. found that individual preferences for interruptibility followed a bimodal distribution.
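As a rough illustration (a minimal sketch with invented numbers, not data from Hudson et al.’s study), the following Python snippet shows how a design tuned to the mean of a bimodal preference distribution can end up suiting almost nobody:

```python
import random

# Hypothetical preferred "minutes between interruptions" for two groups of users.
# The numbers are invented purely to illustrate a bimodal distribution, not taken
# from Hudson et al.'s study.
random.seed(0)
tolerant = [random.gauss(5, 1.5) for _ in range(500)]   # happy with frequent interruptions
focused = [random.gauss(45, 5.0) for _ in range(500)]   # want long uninterrupted stretches
prefs = tolerant + focused

mean_pref = sum(prefs) / len(prefs)  # the "average user" design target, roughly 25 minutes

# How many people are actually within 5 minutes of that average design target?
near_mean = sum(1 for p in prefs if abs(p - mean_pref) <= 5)
print(f"Design target: {mean_pref:.1f} minutes between interruptions")
print(f"Users within 5 minutes of the target: {near_mean} of {len(prefs)}")
```

With two well-separated modes, the count near the mean comes out at or near zero: the “average” setting matches neither group.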

Continue Reading

Social computing consumerism

Comments (1)

Social computing is the future of interaction, explains Michael Bernstein, and he has a point. Leveraging the work of others rather than recreating it is the way civilizations are built. But that is not the whole story. There are instances when leveraging the work of others is the right thing to do, but there are also many situations where it is undesirable for moral, aesthetic, and practical reasons. The moral side is obvious — the undesirability of appropriating others’ work without their permission isn’t that controversial — but the aesthetic and practical aspects of reusing others’ content bear some additional scrutiny.

Continue Reading

Ask not what Twitter can do for Yahoo!…

Yesterday Yahoo! announced that it had reached an agreement with Twitter to incorporate the Twitter feed into its properties in a variety of ways, including surfacing tweets related to particular topics, returning more tweets in search results, and allowing users to read their tweets and tweet directly from their Yahoo! pages. The move is interesting more as another vote for the importance of Twitter as a communication channel than for the value it adds to people’s interactions with Yahoo!

Continue Reading

Peter Ingwersen’s Turn

Comments (1)

Yesterday I had the pleasure of attending a lecture at UCLA given by Peter Ingwersen, Professor of Information Retrieval and Seeking, Royal School of Library and Information Science, Denmark. Peter was the 2009/2010 recipient of the Contribution to Information Science & Technology Award from the Los Angeles chapter of ASIS&T, the 21st person to be so honored. He gave an interesting talk on frameworks in information seeking that explored the philosophical foundations of the “Cranfield paradigm” and proposed ways of extending the approach to incorporate the behaviors of (gasp!) real users!

Drawing on material from his book (co-written with Kalervo Järvelin), “The Turn: Integration of Information Seeking and Retrieval in Context”, he described his “Spaceship” model:

A general analytical model of information seeking and retrieval (from Information Research Vol. 10 No. 1, October 2004)

Continue Reading

Learning from eBooks

Comments (3)

Some time ago I wrote about reports of books being replaced by electronic devices for academic reading. My take was that this kind of techno-utopianism will not improve the quality of education because the current crop of devices is not designed for active reading. This hypothesis was put to the test recently at Princeton and four other universities. At Princeton, 53 students in three courses participated in an experiment in which they were asked to use a Kindle device for coursework-related reading. Results reported by the Daily Princetonian indicate that while the amount of in-course printing dropped by about 50%, students complained about a variety of limitations of these devices for course work. Not surprisingly,

…users said they often found its design ill-suited for class readings. Students and faculty participating in the program said it was difficult to highlight and annotate PDF files and to use the folder structure intended to organize documents, according to University surveys. The inability to quickly navigate between documents and view two or more documents at the same time also frustrated users.

Continue Reading

Glossy pictures and diagrams

Valentine, love, hate: Twitter Venn

In the spirit of Many Eyes, Jeff Clark has been developing visualizations of various kinds, including several built from Twitter collections. For example, his Twitter Venn diagram looks at the intersections of tweets containing three user-specified terms to help understand something about the way different concepts co-occur. Other visualizations look at word distributions associated with pairs of terms, and at term-use timelines.
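For a sense of the computation behind such a diagram (my own sketch, not Jeff Clark’s code), a Twitter-Venn-style breakdown simply checks which of three user-specified terms each tweet contains and tallies the resulting overlap regions; the tweets below are made up for illustration:

```python
def venn_counts(tweets, terms):
    """Count tweets in each region of a three-term Venn diagram.

    `tweets` is an iterable of strings; `terms` is a sequence of three search
    terms. Returns a dict mapping a frozenset of matched terms (a Venn region)
    to the number of tweets in that region.
    """
    counts = {}
    for tweet in tweets:
        text = tweet.lower()
        matched = frozenset(t for t in terms if t.lower() in text)
        if matched:  # ignore tweets that match none of the terms
            counts[matched] = counts.get(matched, 0) + 1
    return counts

# Toy example using the terms from the "Valentine, love, hate" Venn diagram.
tweets = [
    "I love valentine chocolates",
    "love love love",
    "I hate mondays",
    "valentine cards: love them, hate the price",
]
for region, n in sorted(venn_counts(tweets, ["valentine", "love", "hate"]).items(),
                        key=lambda kv: -kv[1]):
    print(f"{' & '.join(sorted(region)):25s} {n}")
```

The sizes of the seven regions (each term alone, each pair, and all three) are what the diagram’s circles and overlaps encode.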

The graphs are pretty and, perhaps, informative. His goal is to visualize complex data that don’t lend themselves to standard bar and pie charts. When these visualizations are effective, they can reveal insight that textual representations fail to convey, but the trick is to understand what is effective when. Tufte’s design guidelines are a start, but one based on a rather static notion of data visualization. Apparently Bertin was more attuned to interaction, but was still trapped in a static medium.

Continue Reading

Picking conferences

Comments (7)

Selecting a venue for publishing your research is often a non-trivial decision that involves assessing the appropriateness of venues for the work, the quality of reviewing, and the impact that the published work will have. One of the metrics that has been used widely to estimate the quality of a conference is the acceptance rate. It is certainly one of the metrics that we at FXPAL use to decide where to submit papers.

But judging a conference by its acceptance rate alone is not always appropriate: acceptance rates are only loosely correlated with paper quality, and they vary considerably among conferences. For example, a marginal paper at a smaller conference such as JCDL is typically of the same quality as a marginal paper at a larger conference such as CHI, but the acceptance rates differ by about a factor of two. Acceptance rates for small or emerging conferences are often higher because the work is done by a smaller community and does not attract as many opportunistic submissions as the better-known conferences do.

So is there a more robust way to assess the quality of a conference?

Continue Reading

What’s private on the Web?

Comments (4)

Hilary Mason of bit.ly wrote a nice summary of some key issues raised in the recent Search in Social Media 2010 workshop. (For other commentary, see Daniel Tunkelang’s post and our pre-workshop comments.) She asked several important questions that break out into two main topics: what we can compute from social data and how, on the one hand, and what the implications of those computations are, on the other. Questions such as how to compute relevance, how to architect social search engines, and how to represent users’ information needs in appropriate ways all fall into the what-and-how category. We can be sure that adequate engineering solutions will be found for these problems.

The second topic, however, is more problematic because it deals with the impact that technology has on the individual and on society, rather than with technology per se.

Continue Reading