Online journals, and especially their backfiles, may lead to more recent citations and a narrowing of the diversity of articles cited, a new study suggests. The study and an accompanying news summary appear in the July 18th issue of Science.

According to the author, James Evans, professor of sociology at the University of Chicago, for each year of a journal's availability online, there are 14% fewer distinct articles cited. This does not mean that total citations are dropping, only that fewer articles receive a larger share of them. These findings run counter to the common belief that online access increases the breadth of materials consulted.

While it is tempting to posit a spurious causal connection between backfile access and a reduction in citation diversity, it is not the additional online access itself that is causing the change in citation behavior but the tools that accompany the online access — tools that allow readers to link to related articles, rank results by relevance, sort by times cited, and so on. It is these tools that signal to the reader what is important and should be read. The result of these signals is to create herding behavior among scientists, or what Evans describes as consensus building.

By enabling scientists to quickly reach and converge with prevailing opinion, electronic journals hasten scientific consensus. But haste may cost more than the subscription to an online archive: Findings and ideas that do not become consensus quickly will be forgotten quickly.

A highly efficient publication system can come with unanticipated consequences — the loss of serendipity. In an earlier blog post, we discussed how the Internet is changing reading behavior in general, reducing the depth of inquiry. In another post, we discussed how signaling can help readers save time.

Print-based browsing and the limited literature indexing of the past created great inefficiency in the discovery process. While it is tempting to lament the end of scholarship, this article may signify that the dissemination of science is working more efficiently than ever. The institution of science values the progress of discovery followed by consensus and the closure of debate. That more of the literature is effectively being ignored may not necessarily signal a bad thing (although it may concern those who are not read). As the historian of science Derek de Solla Price wrote in 1965:

I am tempted to conclude that a very large fraction of the alleged 35,000 journals now current must be reckoned as merely a distant background noise, and as far from central or strategic in any of the knitted strips from which the cloth of science is woven.

References:

Price, D. J. S. (1965). Networks of Scientific Papers. Science, 149(3683), 510-515. http://dx.doi.org/10.1126/science.149.3683.510

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

22 Thoughts on "The Paradox of Online Journals"

Phil noted in his 2003 article on student citations a study at Iowa State that was pretty direct: “A survey of 543 college students at Iowa State University reported that 63 percent of respondents ranked most highly those resources that were easy to use and to find, whether those sources were available from the library or the Internet.” The study reported is Vicki Tolar Burton and Scott A. Chadwick, “Investigating the Practices of Student Researchers: Patterns of Use and Criteria for Use of Internet and Library Sources,” Computers and Composition 17, no. 3 (2000): 309-28; the quote is from Phil Davis, “Effect of the Web on Undergraduate Citation Behavior: Guiding Student Scholarship in a Networked Age,” portal: Libraries and the Academy 3, no. 1 (2003): 42.

Doesn’t a power curve represent herding behavior? The power curve, which is the statistical basis of Chris Anderson’s Long Tail analysis (http://www.longtail.com), illustrates a distribution that is highly concentrated in the “head” of the curve. Sales (or citations) are very high for the hits in the head of the curve, then drop off rapidly and extend down a long tail.
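
To make that concentration concrete, here is a minimal sketch (my own illustration, with an assumed article count and exponent, not figures from Evans or Anderson) of how a Zipf-like power curve piles citations into the head:

```python
# Illustrative only: a Zipf-like power curve with assumed parameters.
n_articles = 10_000
s = 1.5  # steepness of the power curve (assumed)

# The citation share of the article at each rank falls off as 1 / rank**s.
weights = [1 / rank**s for rank in range(1, n_articles + 1)]
total = sum(weights)

# Fraction of all citations captured by the top 1% of articles (the "head").
head_share = sum(weights[: n_articles // 100]) / total
print(f"Top 1% of articles capture {head_share:.0%} of citations")
```

Under these assumed numbers, the top 1% of articles capture roughly nine-tenths of the citations, while the remaining 99% share what is left down the long tail.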

Chris’s theory that sales of items in the long tail, facilitated by easy Web access, would increase relative to sales of the items in the head has been disputed in a recent HBR article. See Erick Schonfeld’s post on TechCrunch (http://www.techcrunch.com/2008/07/02/poking-holes-in-the-long-tail-theory/) for excellent commentary (and a link to the full-text HBR article) on the realities of long tail behavior. An excerpt:

“Just because the Internet makes it possible to offer a near-infinite inventory of goods for sale does not mean that consumers will start wanting more obscure items in any great numbers. That is the conclusion Harvard Business School associate professor Anita Elberse comes to in a recent article in the Harvard Business Review that takes on some of the sacred cows of the Long Tail theory.”

It looks as though Evans’s article provides evidence that citations of the scientific literature follow a pattern similar to the one Anita Elberse found at HBS in her research on sales of entertainment products. So the hits that everyone is herding toward in the head of the curve may simply represent the star-quality items that far outshine the rest.

Thank you James, Chuck and Janice for the comments.

A power law distribution is the outcome of a social process in which individuals are sensitive to the decisions of others. When a critical number of individuals have chosen a certain outcome, an information cascade can occur: individuals discount their own personal information and go with the group. This herding behavior is responsible for events such as stock market bubbles and crashes. It is also the reason why fashion, music, movie, and book trends are so unpredictable.
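
As a rough sketch of that process (my own illustration, not a model from the Evans paper), the simulation below mixes a small share of independent judgment with citations that copy the crowd, where an article's chance of being cited is proportional to the citations it already has; every parameter is an assumption chosen for illustration:

```python
import random

random.seed(1)  # reproducible illustration
n_articles, n_citations, indep = 1_000, 20_000, 0.1
counts = [1] * n_articles  # seed each article with one citation

for _ in range(n_citations):
    if random.random() < indep:
        # Independent judgment: cite an article chosen at random.
        pick = random.randrange(n_articles)
    else:
        # Herding: cite in proportion to citations already received.
        pick = random.choices(range(n_articles), weights=counts)[0]
    counts[pick] += 1

counts.sort(reverse=True)
top_share = sum(counts[:10]) / sum(counts)
print(f"Top 10 of {n_articles} articles hold {top_share:.0%} of all citations")
```

Raising `indep` toward 1 spreads citations evenly; lowering it concentrates them on a few early winners even though every article starts out identical, which is why such cascades are so hard to predict.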

While the Evans study is controversial, it is consistent with social network theory, and especially the economics of attention. Scientists, like everyone else, are sensitive to what their peers are doing, what they are reading, and what they are citing. The current publishing environment is rife with signals alerting readers to what they should attend to, and we should expect this to result in different searching and reading behavior. That these results are framed as a “narrowing” of science, or a “reduction” in diversity, seems somewhat pejorative and harks back to a golden yesteryear when indexes were in print and scholars spent much of their time wandering library stacks and searching photocopy tables for missing journal issues.

As a last thought, I am drawn to a quote from Herbert Simon that dates from 1971:

“…in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it” (pp. 40-41)

Simon, H. A. (1971). Designing organizations for an information-rich world. In M. Greenberger (Ed.), Computers, Communications, and the Public Interest (pp. 37-72). Baltimore: Johns Hopkins Press.

That more of the literature is effectively being ignored may not necessarily signal a bad thing (although it may concern those who are not read).

It depends, but I am inclined to believe that, with this “herding behavior,” ignoring more of the literature has negative effects. First, it tends to create “super authorities” on a subject. That is fine if these authorities really are authorities, but if the basis is only the number of people citing them, then a problem arises.

Second, this limits the debate, as only the “super authorities” are read. Though that could expedite the resolution of issues or questions, we will then face a problem with the quality of arguments. Unless, of course, the “super authorities” have digested the topics and treated them in depth.

Just a thought.

J.A. Carizo makes an excellent point. Ideally, a scientific contribution should be valued on its merit alone, and not simply because it is popular (highly cited). This is one of the consequences of the “Matthew effect” in science, described by the sociologist Robert K. Merton:

“When the Matthew effect is thus transformed into an idol of authority, it violates the norm of universalism embodied in the institution of science and curbs the advancement of knowledge.”

Merton, R. K. (1968). The Matthew Effect in Science. Science, 159(3810), 56-63. http://dx.doi.org/10.1126/science.159.3810.56

An interesting post. My observation, from a researcher’s point of view, is that access to online journals has increased the number of citations in my papers. However, I have also noticed that while I previously may have cited a paper simply because everybody else was doing so, now, with online journals, I take the time to actually read an article before including it in my bibliography.
