The Economist used a series of blog posts (see here and the following posts) by Cambridge mathematician Timothy Gowers as a reason to ask once more about the future of academic publishing. The article, titled “The price of information”, makes the usual case: publicly paid researchers provide a large part of the production chain of an article for free, from the research itself, through the writing, to the work of journal editors and referees. The journals provide the final typesetting and some basic infrastructure such as websites, printing and assistance with editorial handling. Nevertheless, after all this is done, they retain the copyright of the articles, which enables them to make substantial profits by restricting access to this information.
The arguments made in the article and the posts are somewhat more refined, relating not only to profit maximization but also to allegedly dubious business practices, but I believe the fundamental case remains the same: prices, contracts and access options are set by commercial publishers so as to maximize their profits, which, in the current system, runs contrary to the interest of scientists and the public, who would like to maximize access to information.
And the profits are remarkable: Elsevier, the largest academic publishing house, manages to achieve an impressive profit margin of 36% (substantially higher than the 28% of Apple, which many view as insane). In a recent post, Mike Taylor makes the point that the profits of commercial publishers alone would be sufficient to publish all academic articles worldwide in an open-access system such as PLOS ONE, which charges around $1350 per article. And even this is a lot: one has to ask whether typesetting alone really provides a societal benefit of that magnitude. If one is willing to drop typesetting and editorial services as well, one could publish at far lower cost. The arXiv’s FAQs, for example, state that their costs per article are less than $7.
It seems that by now most people (including funding bodies and even some commercial publishers) agree on the need to revise the current publishing model in some way. Alas, getting there seems to be awfully difficult. One may think that the only thing needed is to set up the same service that is currently provided as a paid-for open-access model such as the PLOS journals. Yet, despite the success of PLOS, their progress towards competitively excluding closed-access journals is veeeery slow. And new open-access journals face not only the problem of gaining reputation and dealing with black sheep in their own ranks, but also the problem of research budgets which, for the time being, seldom allocate money for open-access publishing.
A much faster process of change would probably be triggered if open-access models (even if commercial) could provide something that the closed, printed system can’t. Nature and PLOS have experimented quite a lot with web 2.0 features such as blogs and comments, and other publishers have taken similar steps. Yet the current publishing model has proven remarkably immune to change; despite scientists being early adopters of the Internet, more recent developments such as web 2.0, social media etc. have had hardly any impact on publishing and distributing scientific information (do you know anyone who clicks on the “like” button next to journal articles?). So, I don’t know where this leaves us … if someone could develop the science killer app, a Facebook for geeks, something that everyone wants to publish and comment in and that is open – that would be a game changer. A few articles and websites that I found worth reading on this matter are this slide show by Ian Mulvany, this post by Nikolaus Kriegeskorte, and also the blog by Michael Nielsen, whose new book I haven’t (yet) read, though. As long as this does not happen, or a major top-down intervention occurs, it seems we’re stuck with taking very small steps towards open access.