According to a press release from Johns Hopkins (via @pentcheff), its library “received a $300,000 grant from NSF to study the feasibility of developing, operating and sustaining an open access repository of articles from NSF-sponsored research.” This grant was inspired by NLM’s PubMed Central repository of NIH-funded medical research. This is interesting from the perspective of HCIR because if the precedent holds, this collection will be publicly searchable and downloadable, making it a good candidate as a research collection.
Another interesting–and more controversial–implication is the issue of copyright: a large chunk of NSF-funded research is published by the ACM, which currently claims copyright over it. Will the NSF require these papers to be included? Will the ACM release the publications, or will it just provide metadata and keep the full papers hidden in the ACM DL? It will be interesting to see how JHU goes about identifying the stakeholders for this activity.
Laurent and I recently published an article (SeeReader: An (Almost) Eyes-Free Mobile Rich Document Viewer) in the special issue on Pervasive Computing in the International Journal of Computer Science Issues (IJCSI). The IJCSI is open-access, meaning that the content is not hidden behind a paywall. Open-access journals are still seen as dubious by many, and perhaps rightly so. These journals are generally new and tend to have less prestige, and lower perceived quality, than mainstream journals. In return, though, they offer fast turn-around times and wide indexing.
Well, almost. JoDI, the Journal of Digital Information, founded by Wendy Hall and Gary Marchionini, has been publishing papers online since 1997 with Cliff McKnight as the Editor-in-Chief. JoDI is a peer-reviewed online journal organized into several themes, including digital libraries, hypermedia systems, hypertext criticism, information discovery, information management, social issues of digital information, and usability of digital information.
I’ve been going on and on in blog posts and in comments about the business of reviewing papers as a socially useful activity (given the right incentives) and how the reviews themselves should be rated to identify effective reviewers. The idea behind this is not new—Amazon implemented something like it a long time ago—but it is useful to understand it better. This article by Jared Spool offers a good account of the history, the mechanics, and the effects.
The ACM Digital Library is a great resource for our community, and ACM continues to improve the services it offers through the Portal, recently adding an Endeca-built guided browsing interface. The digital library offering is lacking, however, in important ways. Its interface is stuck in the 20th century in that it provides access to materials, but does not support information sharing and collaboration among the people using it. I don’t mean (for once!) collaboration in the sense of collaborative search; I mean that it is not possible to comment on articles or to rate them. The only feedback one can provide is to choose to download a paper, or to cite it in one’s own publications. Both offer some evidence of an article’s impact, but the measures are coarse and anonymous, and a lack of downloads or citations may not reflect the merits of the work.
I’ve been using SciRate for two and a half years. I began using it with certain expectations, but my actual use has differed from those expectations.
The simplest and most used feature of SciRate is its “Scites” button. With one click, SciRate members can vote for a paper. Initially I wasn’t sure how I should use this feature. What did my vote mean? Should I only vote for a paper I had read? Did it mean I could vouch for its correctness? Eventually my selfishness kicked in. SciRate made it easy for me to see what papers I had scited. And sciting a paper was so light-weight that it became the easiest way for me to mark papers that I wanted to come back to later. I don’t always come back to those papers, but I frequently use my list on SciRate to find a paper whose abstract I vaguely remember reading, or to find a set of papers it would be fun to read over the weekend.
Continuing the “rant first, do research later” tradition of blogging, I had initially written about the publish->filter model of academic publishing not having heard of arXiv.org or of SciRate. (Technically, I had heard of xxx.lanl.org, which became arXiv.org, but had never had the opportunity to use it.) Having been informed of the existence of these tools, I then related my experience with arXiv.org (here and here), and am now tackling SciRate.com. SciRate.com offers a mechanism on top of arXiv.org for registering comments on papers. Interestingly, arXiv.org provides links to a variety of bookmark aggregators, including CiteULike, Connotea, BibSonomy, del.icio.us, Digg, and Reddit, but not to SciRate. I wonder what politics drove that decision.
After some confusion and frustration, I have created an entry for the JCDL 2008 workshop on Collaborative Information Seeking on arXiv.org. The entry includes an overview page with a summary of the workshop and a link to an HTML page that contains links to the papers that were presented. Some of the papers made the ingestion cutoff today; the rest should appear on Friday.
Having written about reforming the academic publication process, and having suggested that arXiv.org be used to archive workshop papers for HCIR’09 (and ’07 and ’08), I decided to upload (with the authors’ permissions) papers from the JCDL 2008 workshop on collaborative information seeking that I co-organized last year with Jeremy Pickens and Merrie Morris. I read the info on the arXiv.org site and decided to give it a shot. It turned out to be less straightforward than one might imagine.
The debate over scholarly publishing continues to percolate along, with an article in CACM by Lance Fortnow, a recent blog post by Daniel Tunkelang on The Noisy Channel, and the subsequent comments. The issue in question is whether the established peer-review process is effective and efficient at identifying good work, or whether the peer-reviewed journal or conference is an artifact of a time when the costs of publication and distribution were high. The argument for online publication is certainly compelling; there are many free online journals, and book publishers such as Morgan & Claypool are already publishing primarily or exclusively digitally.