Blog Category: scientific publishing

Does the CHI PC meeting matter?

Comments (1)

Jofish reports some interesting numbers regarding the role that the associate chairs play in the outcome of CHI paper reviews. He analyzes the CHI 2010 data to reach the following conclusions:

  • Of the 302 submissions accepted, 57 or so were affected by decisions made at the meeting
  • The 1AC (the primary meta-reviewer) was instrumental in getting a paper rejected 31 times, but was not able to prevent rejection 111 times, and represented reviewers’ consensus 1199 times.
  • He also provides some more ammunition for the desk-reject debate.

It would be great to repeat this analysis on other years to see how reliable the patterns are.
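
For a rough sense of scale, here is a quick back-of-the-envelope look at the proportions implied by the numbers above. The figures are taken straight from Jofish's post; the category labels are my paraphrase of them, not official CHI statistics.

```python
# Back-of-the-envelope proportions from the CHI 2010 numbers quoted above.
# The figures come from the post; the labels are my paraphrase of them.

accepted = 302
changed_at_meeting = 57   # accepted papers whose fate turned on the PC meeting

print(f"Acceptances affected by the meeting: {changed_at_meeting / accepted:.0%}")

onac_outcomes = {
    "1AC instrumental in a rejection": 31,
    "1AC unable to prevent rejection": 111,
    "1AC represented reviewer consensus": 1199,
}
total = sum(onac_outcomes.values())   # 1341 submissions accounted for
for label, count in onac_outcomes.items():
    print(f"{label}: {count}/{total} ({count / total:.0%})")
```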

An open question is whether the 57 or so papers whose fates were determined at the PC meeting deserved the outcome they received. (Obviously, the authors of the rejected papers would argue against this process.) It is also worth noting that the CHI PC process cannot simply be replaced by a rule based on average scores, because both reviewers and ACs might then try to game the system by assigning extreme scores to marginal papers.
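
To make the gaming concern concrete, here is a minimal sketch of a fixed average-score rule. The threshold and the individual scores are illustrative assumptions, not actual CHI data.

```python
# A toy acceptance rule based on the mean review score. The threshold and the
# scores below are made up for illustration; they are not real CHI numbers.

def accept_by_average(scores, threshold=3.0):
    return sum(scores) / len(scores) >= threshold

marginal = [3.0, 3.0, 2.5]
print(accept_by_average(marginal))   # False: mean ~2.83 misses the bar

# A single reviewer (or AC) who wants the paper in only has to assign an
# extreme score to drag the mean over the threshold.
gamed = [5.0, 3.0, 2.5]
print(accept_by_average(gamed))      # True: mean 3.5 clears the bar
```

The point is simply that a fixed formula has no way to discount a strategically extreme score, whereas a discussion at the PC meeting can.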

Public access to federally-funded research results?

Comments (5)

The implications of federal funding for access to research results came up again recently in this Federal Register notice. The Office of Science and Technology Policy is seeking public comment on a range of issues related to access to academic publications whose underlying research was funded by Federal grants. The notice mentions the NIH model:

One potential model, implemented by the National Institutes of Health (NIH) pursuant to Division G, Title II, Section 218 of Pub. L. 110-161 (http://publicaccess.nih.gov/) requires that all investigators funded by the NIH submit an electronic version of their final, peer-reviewed manuscript upon acceptance for publication no later than 12 months after the official date of publication.

and seeks comments on how to structure this broader policy, how to make articles available, how to ensure compliance, and so on. This notice seems broader than the NSF-specific discussion I wrote about earlier because it appears to apply to all Federal agencies that fund research.

Continue Reading

Extremely Unofficial CHI 2010 review survey

Comments (9)

Yesterday, Lennart Nacke expressed a desire to act on a suggestion from an earlier blog post of mine: review the reviewers. So why not? I would like to see whether we can collect some data to inform the debate about obtaining quality reviews for conferences such as CHI. The goal is to find out whether authors’ ratings of the reviews their papers receive can be used to improve reviewer selection and to give reviewers direct feedback.
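
As a sketch of what such data could feed, here is one way authors’ ratings of the reviews they received might be aggregated per reviewer. The data layout, names, and 1–5 scale are hypothetical; they are not the survey’s actual format.

```python
# Hypothetical aggregation of authors' ratings of the reviews they received.
# Each entry is (reviewer, rating on a 1-5 scale); none of this is real data.

from collections import defaultdict
from statistics import mean

ratings = [
    ("reviewer_A", 5), ("reviewer_A", 4),
    ("reviewer_B", 2), ("reviewer_B", 3),
    ("reviewer_C", 4),
]

per_reviewer = defaultdict(list)
for reviewer, score in ratings:
    per_reviewer[reviewer].append(score)

# Average rating per reviewer: direct feedback for the reviewer, and one
# possible input to future reviewer selection.
for reviewer, scores in sorted(per_reviewer.items(),
                               key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{reviewer}: {mean(scores):.1f} average over {len(scores)} ratings")
```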

Continue Reading

Comments on the CHI reviewing process

Comments (8)

In the aftermath of the CHI 2010 PC meeting, we had an interesting discussion of issues related to reviewing and managing the CHI conference submission process. Several approaches to improving the outcomes were discussed, including reinstating mentoring, rating reviews, and adding a desk-reject option for some papers. The overall goal is to improve the quality of both submissions and reviews; simplifying the process was also brought up several times.

Continue Reading

On the shoulders of giants

Comments (3)

I have used the phrases “publish -> filter” and “filter -> publish” in a number of recent blog posts related to scientific publishing, but had been unable to find a proper attribution for them with a casual search. While reading Kathleen Fitzpatrick’s draft of Planned Obsolescence, I came across the phrase “filter-then-publish,” which she attributed to Clay Shirky’s “Here Comes Everybody.” I am adding that book to the top of my reading list right now.

Continue Reading

Can open source improve open reviewing?

Comments (8)

Naboj is an overlay on arXiv.org that allows people to comment on articles, to rate articles, and (unlike SciRate.com) to rate the reviews as well. Unfortunately, the rather minimal interface does not make it easy to organize the display by highly-rated reviewers or by thoroughly-reviewed papers (i.e., papers with reviews that others found useful), or to restrict searches to particular domains.

These limitations are not inherent in the design of the review process or the data collected on the site, but rather are probably indicative of an under-resourced effort. I wonder if an open-source approach to the design of these kinds of tools would result in a more usable (and thus more useful) way of managing an open peer review process. Is open source the way to open reviewing? I would certainly consider contributing to it.
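
To illustrate the point that the data Naboj already collects could support these views, here is a toy ranking over (paper, reviewer, review-rating) triples. The schema, identifiers, and rating scale are hypothetical, not Naboj’s actual data model.

```python
# Toy ranking of reviewers and papers from review-rating data. The triples and
# field names below are invented for illustration; this is not Naboj's schema.

from collections import defaultdict
from statistics import mean

# (paper_id, reviewer, rating that readers gave the review, on a 1-5 scale)
review_ratings = [
    ("arXiv:0901.0001", "alice", 5),
    ("arXiv:0901.0001", "bob",   2),
    ("arXiv:0902.0042", "alice", 4),
    ("arXiv:0902.0042", "carol", 4),
]

by_reviewer, by_paper = defaultdict(list), defaultdict(list)
for paper, reviewer, rating in review_ratings:
    by_reviewer[reviewer].append(rating)
    by_paper[paper].append(rating)

# Highly-rated reviewers first.
print(sorted(by_reviewer, key=lambda r: mean(by_reviewer[r]), reverse=True))

# "Thoroughly reviewed" papers: those whose reviews other readers found useful.
print(sorted(by_paper,
             key=lambda p: (mean(by_paper[p]), len(by_paper[p])),
             reverse=True))
```

Restricting results to particular domains would just be a filter on paper metadata before the same ranking; nothing here requires changes to how the reviews are collected.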

Reviewing the reviewers

Comments (11)

I’ve written about some alternatives to the current review process, and I believe one of the ways in which it can be improved is formal recognition of reviewers’ efforts. While many conferences and journals acknowledge reviewers by publishing their names, this does not reflect the quality of the effort that individual reviewers put in. More lasting and public recognition of high-quality reviewers may be one way to improve this volunteer effort.

Interestingly, the APS recently instituted a policy of recognizing referees who review the articles submitted to the various APS journals.

The basis for choosing the honorees was the quality, number, and timeliness of their reports, without regard for membership in the APS, country of origin, or field of research. Individuals with current or very recent direct connections to the journals, such as editors and editorial board members, were excluded.
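
The APS has not published a selection formula, so purely as an illustration of how “quality, number, and timeliness” might be combined into a single score, here is one hypothetical sketch; the weights, scales, and example inputs are my assumptions, not APS practice.

```python
# Hypothetical composite referee score built from the three criteria the APS
# names: quality, number, and timeliness of reports. Weights, scales, and the
# example inputs are invented; APS has not published an actual formula.

def referee_score(avg_quality, n_reports, avg_days_to_report,
                  w_quality=0.6, w_volume=0.25, w_timeliness=0.15):
    """Combine editor-rated quality (0-1), report count, and turnaround time."""
    volume = min(n_reports / 20, 1.0)                     # saturates at 20 reports
    timeliness = max(0.0, 1.0 - avg_days_to_report / 60)  # 0 days best, 60+ worst
    return w_quality * avg_quality + w_volume * volume + w_timeliness * timeliness

print(referee_score(avg_quality=0.9, n_reports=12, avg_days_to_report=14))  # ~0.80
```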

Continue Reading

Print media and augmented reality


December’s issue of Esquire features augmented reality not only on its cover but also in a couple of places inside. This is not the first instance of AR in print media, of course, but it’s nicely done. I’d love to see this sort of thing make its way into scientific publishing eventually, for 3D and animated illustrations and for data visualization. Right now authors can put digital content related to their work on the web, but it’s an altogether different subjective experience when it’s integrated into the printed object (book, journal, etc.).

Here’s a video tour of the AR in the Esquire issue:

And comments from Mashable:

“Print might be in trouble, but Esquire magazine won’t be going gently into that good night. The December issue of the magazine will feature augmented reality pages that will come alive when displayed in front of a webcam.

Augmented reality is a trend and phenomenon we’re starting to see more and more uses of across the web. In March, GE played with augmented reality while showing off its Smart Grid technology. Earlier this month, musician John Mayer released an augmented reality enhanced music video. The Disney.com iPhone app that was released earlier this week also utilizes some AR features.”

How to give up on reviewing

Comments (28)

Angst turns to anger, then to acceptance (of your lot, if not of your paper). Yes, it’s the CHI 2010 rebuttal period. A short few days to try to address the reviewers’ misreading of your paper before the program committee throws it into the reject pile, requiring you to rewrite it for another conference. While it is easy to find fault with a process that puts one or more person-years of work into the hands of “three angry men” who may or may not be in a position to judge the work fairly, it is not clear how to improve it. James Landay recently wrote about the frustrations of getting systems papers accepted, and in a comment on that post, jofish pointed out that the concerns apply more widely because CHI consists of many small communities with different methodological expectations that are not necessarily appreciated by reviewers.

Continue Reading

Academic papers want to be free

Comments (8)

There is an interesting discussion on Panos Ipeirotis’s blog about open-access publishing and the ACM. He argues that the ACM should grant open access to its digital library, since ACM’s stated goal is “Advancing Computing as a Science and a Profession” and open access would be an effective way to advance it. I’ve always thought that the ACM digital library fees were unnecessary. Like Panos, I don’t know what ACM’s expenses are, but I do know that conferences are profit centers, and that too many non-profitable years can lead to trouble for the sponsoring SIGs. Given that

  • conferences make money from attendees,
  • all typesetting costs are borne by authors these days,
  • conferences are starting to abandon print proceedings (or to charge extra for them)

what is the rationale for charging for subsequent access to these papers?

Continue Reading