Monday, January 2. 2012
[Part 2: see also Part 1]
QUESTION 8 (8a, 8b):
(Question 8a) “What is the appropriate embargo period after publication before the public is granted free access to the full content of peer-reviewed scholarly publications resulting from federally funded research?”
There is no real reason any would-be user should ever be denied access to publicly funded research journal articles. Over 60% of journals (and virtually all the top journals) already endorse immediate green OA to the author’s final draft.
But if federal funding agencies wish to accommodate the <40% of journals that do not yet endorse immediate green OA, an embargo period (preferably no longer than 6 months) could be allowed.
The crucial thing, however, is that the embargo should not apply to the date at which deposit of the author’s final, peer-reviewed draft in the author’s institutional repository is required. That deposit should be done immediately upon acceptance for publication, for all articles, without exception.
The allowable OA embargo should apply only to whether access to the immediate-deposit is made OA immediately, or access is instead set as “Closed Access” during the allowable embargo period.
Harnad, S. (2006) The Immediate-Deposit/Optional-Access (ID/OA) Mandate: Rationale and Model. Open Access Archivangelism.
SUMMARY: Universities and research funders are both invited to use this document. Note that this recommended "Immediate-Deposit & Optional-Access" (ID/OA) policy model (also called the "Dual Deposit/Release Strategy") has been specifically formulated to be immune from any delays or embargoes (based on publisher policy or copyright restrictions): The deposit -- of the author's final, peer-reviewed draft of all journal articles, in the author's own Institutional Repository (IR) -- is required immediately upon acceptance for publication, with no delays or exceptions. But whether access to that deposit is immediately set to Open Access or provisionally set to Closed Access (with only the metadata, but not the full-text, accessible webwide) is left up to the author, with only a strong recommendation to set access as Open Access as soon as possible (immediately wherever possible, and otherwise preferably with a maximal embargo cap of 6 months).
(Question 8b) “Please describe the empirical basis for the recommended embargo period.”
Empirical studies have repeatedly demonstrated – in every research field tested – a research impact advantage (in terms of both downloads and citations) for journal articles that have been made (green) OA, compared to articles in the same journal and year that have not been made OA. These studies have also found that the OA impact advantage is greater (and, of course, comes earlier) the earlier the article is made OA. The advantage of early OA extends also to preprints made OA even before peer review. Delayed access means not only delayed impact but also lost impact, in areas of research where it is important to strike while the iron is hot. See especially the findings of the Harvard astrophysicist Michael Kurtz in:
Gentil-Beccot A, Mele S, Brooks T.C. (2010) Citing and reading behaviours in high-energy physics. Scientometrics 84(2):345–55
EXTRA QUESTIONS (X1, X2, X3):
Question X1. “Analyses that weigh public and private benefits and account for external market factors, such as competition, price changes, library budgets, and other factors, will be particularly useful.”
Please see the careful comparative economic analyses of John Houghton and co-workers (Figure 1):
Houghton, J.W. & Oppenheim, C. (2009) The Economic Implications of Alternative Publishing Models. Prometheus 26(1): 41-54
Question X2. “Are there evidence-based arguments that can be made that the delay period should be different for specific disciplines or types of publications?”
The optimal OA delay period is zero: the research reported in peer-reviewed journal/conference articles should be accessible to all potential users immediately upon acceptance for publication, in all disciplines. There is no real reason any would-be user should ever be denied access to publicly funded research journal articles. Over 60% of journals (and virtually all the top journals) already endorse immediate green OA to the author’s final draft.
The answer is the same as for Question 8a above: if federal funding agencies wish to accommodate the <40% of journals that do not yet endorse immediate green OA, an embargo period (preferably no longer than 6 months) could be allowed -- but the embargo should apply only to when access to the deposit is set to OA, never to the date of the required immediate deposit itself.
Question X3. “Please identify any other items the Task Force might consider for Federal policies related to public access to peer-reviewed scholarly publications resulting from federally supported research.”
If Federal funding agencies mandate green OA self-archiving of the fundee’s final draft of all peer-reviewed journal articles resulting from federally funded research, deposited in the fundee’s institutional repository immediately upon acceptance for publication (the ID/OA mandate), this will not only generate 100% OA for all US federally funded research. It will also inspire funders, universities and research institutions worldwide to follow the US’s model, reciprocating with OA mandates of their own, thereby ushering in the era of open access to all research, worldwide, in all fields, funded and unfunded (see the mandate growth curve from ROARMAP (Registry of Open Access Mandatory Archiving Policies), Figure 2).
Monday, December 19. 2011
The straw poll on whether or not to continue the American Scientist Open Access (AmSci) Forum (and if so, who should be the new moderator) is complete (the full results are reproduced at the end of this message).
The vote is for (1) continuing the Forum, under (2) the moderatorship of Richard Poynder.
The AmSci list has now been migrated to http://mailman.ecs.soton.ac.uk/mailman/listinfo/goal where the BOAI list is also being hosted.
AmSci Forum members need not re-subscribe. All subscriptions have been automatically transferred to the new host site.
The name of the list has been changed to the Global Open Access List (GOAL) to reflect the fact that Open Access is no longer just an American or a Scientific matter. It has become a global movement.
The old AmSci Forum Archives (1998-2011) will stay up at the Sigma Xi site (indefinitely, I hope -- though we do have copies of the entire archive).
The new GOAL archive is at: http://mailman.ecs.soton.ac.uk/pipermail/goal/
Below are the complete results of the Straw Poll on whether to continue the Forum, and on who should be the new moderator:
AGAINST CONTINUING AMSCI:
ARIF JINHA: I believe it would be better to have one forum, the BOAI. This forum has developed a doctrinal bias defined by the values and personality of its leadership. Though the leadership is to be commended for its credibility and vigour, it is not without its blind spots. It has not always been OPEN to a diversity of perspectives. AMSCI is driven by assertive and competitive advocacy for mandates over Gold OA publishing. The rush to conclusion on the right path is premature and overly authoritative in its expression, therefore it is alienating. In truth, we have only really got started with the web in the last 10 years and authority is completely flattened by the learning curve. The BOAI is much wider in its representation of Open Access alternatives; it is therefore more neutral as well as having a wider reach for the promotion of Green OA. It means less duplication and less work devoted to instant communication, giving more time to develop a rigorous and scientific approach to meta-scholarship in the digital age.
FOR CONTINUING AMSCI:
DANA ROTH: I would disagree with Arif Jinha, in that it is the 'assertive and competitive advocacy for mandates over Gold OA publishing' that makes AMSCI such an interesting listserv.
Wednesday, February 23. 2011
The following quotes are from:
McCabe, MJ (2011) Online access versus open access. Inside Higher Ed. February 10, 2011.
McCabe, MJ & Snyder, CM (2011) Did Online Access to Journals Change the Economics Literature?
MCCABE: …I thought it would be appropriate to address the issue that is generating some heat here, namely whether our results can be extrapolated to the OA environment…
If "selection bias" refers to authors' bias toward selectively making their better (hence more citeable) articles OA, then this was controlled for in the comparison of self-selected vs. mandated OA, by Gargouri et al (2010) (uncited in the McCabe & Snyder (2011) [M & S] preprint, but known to the authors -- indeed the first author requested, and received, the entire dataset for further analysis: we are all eager to hear the results).
If "selection bias" refers to the selection of the journals for analysis, I cannot speak for studies that compare OA journals with non-OA journals, since we only compare OA articles with non-OA articles within the same journal. And it is only a few studies, like Evans and Reimer's, that compare citation rates for journals before and after they are made accessible online (or, in some cases, freely accessible online). Our principal interest is in the effects of immediate OA rather than delayed or embargoed OA (although the latter may be of interest to the publishing community).
MCCABE: 2. There are at least 2 “flavors” found in this literature: 1. papers that use cross-section type data or a single observation for each article (see for example, Lawrence (2001), Harnad and Brody (2004), Gargouri, et al. (2010)) and 2. papers that use panel data or multiple observations over time for each article (e.g. Evans (2008), Evans and Reimer (2009)).
We cannot detect any mention or analysis of the Gargouri et al. paper in the M & S paper…
MCCABE: 3. In our paper we reproduce the results for both of these approaches and then, using panel data and a robust econometric specification (that accounts for selection bias, important secular trends in the data, etc.), we show that these results vanish.
We do not see our results cited or reproduced. Does "reproduced" mean "simulated according to an econometric model"? If so, that is regrettably too far from actual empirical findings to be anything but speculation about what would be found if one were actually to do the empirical studies.
MCCABE: 4. Yes, we “only” test online versus print, and not OA versus online for example, but the empirical flaws in the online versus print and the OA versus online literatures are fundamentally the same: the failure to properly account for selection bias. So, using the same technique in both cases should produce similar results.
Unfortunately this is not very convincing. Flaws there may well be in the methodology of studies comparing citation counts before and after the year in which a journal goes online. But these are not the flaws of studies comparing citation counts of articles that are and are not made OA within the same journal and year.
Nor is the vague attribution of "failure to properly account for selection bias" very convincing, particularly when the most recent study controlling for selection bias (by comparing self-selected OA with mandated OA) has not even been taken into consideration.
Conceptually, the reason the question of whether online access increases citations over offline access is entirely different from the question of whether OA increases citations over non-OA is that (as the authors note), the online/offline effect concerns ease of access: Institutional users have either offline access or online access, and, according to M & S's results, in economics, the increased ease of accessing articles online does not increase citations.
This could be true (although the growth, across those same years, of the tendency in economics to make prepublication preprints OA through author self-archiving [harvested by RepEc] -- much as physicists had started doing a decade earlier in Arxiv, and computer scientists even earlier [later harvested by Citeseerx] -- could be producing a huge background effect not taken into account at all in M & S's painstaking temporal analysis, which itself appears as an OA preprint in SSRN!).
But any way one looks at it, there is an enormous difference between comparing easy vs. hard access (online vs. offline) and comparing access with no access. For when we compare OA vs non-OA, we are taking into account all those potential users that are at institutions that cannot afford subscriptions (whether offline or online) to the journal in which an article appears. The barrier, in other words (though one should hardly have to point this out to economists), is not an ease barrier but a price barrier: For users at nonsubscribing institutions, non-OA articles are not just harder to access: They are impossible to access -- unless a price is paid.
(I certainly hope that M & S will not reply with "let them use interlibrary loan (ILL)"! A study analogous to M & S's online/offline study comparing citations for offline vs. online vs. ILL access in the click-through age would not only strain belief if it too found no difference, but it too would fail to address OA, since OA is about access when one has reached the limits of one's institution's subscription/license/pay-per-view budget. Hence it would again miss all the usage and citations that an article would have gained if it had been accessible to all its potential users and not just to those whose institutions could afford access, by whatever means.)
It is ironic that M & S draw their conclusions about OA (predictably, as their interest is in modelling publication economics) in terms of the cost/benefits, for an author, of paying to publish in an OA journal, concluding that since they have shown it will not generate more citations, it is not worth the money.
But the most compelling findings on the OA citation advantage come from OA author self-archiving (of articles published in non-OA journals), not from OA journal publishing. Those are the studies that show the OA citation advantage, and the advantage does not cost the author a penny! (The benefits, moreover, accrue not only to authors and users, but to their institutions too, as the economic analysis of Houghton et al shows.)
And the extra citations resulting from OA are almost certainly coming from users for whom access to the article would otherwise have been financially prohibitive. (Perhaps it's time for more econometric modeling from the user's point of view too…)
I recommend that M & S look at the studies of Michael Kurtz in astrophysics. Those, too, included sophisticated long-term studies of the effect of the wholesale switch from offline to online, and Kurtz found that total citations were in fact slightly reduced, overall, when journals became accessible online! But astrophysics, too, is a field in which OA self-archiving is widespread. Hence whether and when journals go online is moot, insofar as citations are concerned.
(The likely hypothesis for the reduced citations -- compatible also with our own findings in Gargouri et al -- is that OA levels the playing field for users: OA articles are accessible to all their potential users, not just to those whose institutions can afford toll access. As a result, users can self-selectively decide to cite only the best and most relevant articles of all, rather than having to make do with a selection among only the articles to which their institutions can afford toll access. One corollary of this [though probably also a spinoff of the Seglen/Pareto effect] is that the biggest beneficiaries of the OA citation advantage will be the best articles. This is a user-end -- rather than an author-end -- selection effect...)
MCCABE: 5. At least in the case of economics and business titles, it is not even possible to properly test for an independent OA effect by specifically looking at OA journals in these fields since there are almost no titles that switched from print/online to OA (I can think of only one such title in our sample that actually permitted backfiles to be placed in an OA repository). Indeed, almost all of the OA titles in econ/business have always been OA and so no statistically meaningful before and after comparisons can be performed.
The multiple conflation here is so flagrant that it is almost laughable. Online ≠ OA and OA ≠ OA journal.
First, the method of comparing the effect on citations before vs. after the offline/online switch will have to make do with its limitations. (We don't think it's of much use for studying OA effects at all.) The method of comparing the effect on citations of OA vs. non-OA within the same (economics/business, toll-access) journals can certainly proceed apace in those disciplines, the studies have been done, and the results are much the same as in other disciplines.
M & S have our latest dataset: Perhaps they would care to test whether the economics/business subset of it is an exception to our finding that (a) there is a significant OA advantage in all disciplines, and (b) it's just as big when the OA is mandated as when it is self-selected.
MCCABE: 6. One alternative, in the case of cross-section type data, is to construct field experiments in which articles are randomly assigned OA status (e.g. Davis (2008) employs this approach and reports no OA benefit).
And another one -- based on an incomparably larger N, across far more fields -- is the Gargouri et al study that M & S fail to mention in their article, in which articles are mandatorily assigned OA status, and for which they have the full dataset in hand, as requested.
MCCABE: 7. Another option is to examine articles before and after they were placed in OA repositories, so that the likely selection bias effects, important secular trends, etc. can be accounted for (or in economics jargon, “differenced out”). Evans and Reimer attempt to do this in their 2009 paper but only meet part of the econometric challenge.
M & S are rather too wedded to their before/after method and thinking! The sensible time for authors to self-archive their papers is immediately upon acceptance for publication. That's before the published version has even appeared. Otherwise one is not studying OA but OA embargo effects. (But let me agree on one point: Unlike journal publication dates, OA self-archiving dates are not always known or taken into account; so there may be some drift there, depending on when the author self-archives. The solution is not to study the before/after watershed, but to focus on the articles that are self-archived immediately rather than later.)
Gargouri, Y., Hajjem, C., Lariviere, V., Gingras, Y., Brody, T., Carr, L. and Harnad, S. (2010) Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research. PLOS ONE 5 (10). e13636
Harnad, S. (2010) The Immediate Practical Implication of the Houghton Report: Provide Green Open Access Now. Prometheus 28 (1): 55-59.
Wednesday, October 20. 2010
Jennifer Howard ("Is there an Open-Access Advantage?," Chronicle of Higher Education, October 19 2010) seems to have missed the point of our article. It is undisputed that study after study has found that Open Access (OA) is correlated with higher probability of citation. The question our study addressed was whether making an article OA causes the higher probability of citation, or the higher probability causes the article to be made OA.
The latter is the "author self-selection bias" hypothesis, according to which the only reason OA articles are cited more is that authors do not make all articles OA: only the better ones, the ones that are also more likely to be cited.
The Davis et al study tested this by making articles -- 247 articles, from 11 biology journals -- OA randomly, instead of letting the authors choose whether or not to do it, self-selectively, and they found no increased citation for the OA articles one year after publication (although they did find increased downloads).
But almost no one finds that OA articles are cited more a year after publication. The OA citation advantage only becomes statistically detectable after citations have accumulated for 2-3 years.
Even more important, Davis et al. did not test the obvious and essential control condition in their randomized OA experiment: They did not test whether there was a statistically detectable OA advantage for self-selected OA in the same journals and time-window. You cannot show that an effect is an artifact of self-selection unless you show that with self-selection the effect is there, whereas with randomization it is not. All Davis et al showed was that there is no detectable OA advantage at all in their one-year sample (247 articles from 11 Biology journals); randomness and self-selection have nothing to do with it.
Davis et al released their results prematurely. We are waiting*,** to hear what Davis finds after 2-3 years, when he completes his doctoral dissertation. But if all he reports is that he has found no OA advantage at all in that sample of 11 biology journals, and that interval, rather than an OA advantage for the self-selected subset and no OA advantage for the randomized subset, then again, all we will have is a failure to replicate the positive effect that has now been reported by many other investigators, in field after field, often with far larger samples than Davis et al's.
Meanwhile, our study was similar to that of Davis et al's, except that it was a much bigger sample, across many fields, and a much larger time window -- and, most important, we did have a self-selective matched-control subset, which did show the usual OA advantage. Instead of comparing self-selective OA with randomized OA, however, we compared it with mandated OA -- which amounts to much the same thing, because the point of the self-selection hypothesis is that the author picks and chooses what to make OA, whereas if the OA is mandatory (required), the author is not picking and choosing, just as the author is not picking and choosing when the OA is imposed randomly.
Davis's results are welcome and interesting, and include some good theoretical insights, but insofar as the OA Citation Advantage is concerned, the empirical findings turn out to be just a failure to replicate the OA Citation Advantage in that particular sample and time-span -- exactly as predicted above. The original 2008 sample of 247 OA and 1372 non-OA articles in 11 journals one year after publication has now been extended to 712 OA and 2533 non-OA articles in 36 journals two years after publication. The result is a significant download advantage for OA articles but no significant citation advantage.
And our finding is that the mandated OA advantage is just as big as the self-selective OA advantage.
As we discussed in our article, if someone really clings to the self-selection hypothesis, there are some remaining points of uncertainty in our study that self-selectionists can still hope will eventually bear them out: Compliance with the mandates was not 100%, but 60-70%. So the self-selection hypothesis has a chance of being resurrected if one argues that now it is no longer a case of positive selection for the stronger articles, but a refusal to comply with the mandate for the weaker ones. One would have expected, however, that if this were true, the OA advantage would at least be weaker for mandated OA than for unmandated OA, since the percentage of total output that is self-archived under a mandate is almost three times the 5-25% that is self-archived self-selectively. Yet the OA advantage is undiminished with 60-70% mandate compliance in 2002-2006. We have since extended the window by three more years, to 2009; the compliance rate rises by another 10%, but the mandated OA advantage remains undiminished. Self-selectionists don't have to concede till the percentage is 100%, but their hypothesis gets more and more far-fetched...
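The inferential structure of this argument can be sketched in a toy simulation (entirely hypothetical numbers, chosen only to make the logic explicit): if the OA citation advantage were purely an author self-selection artifact, it should appear when the better articles are self-selectively made OA but vanish when OA status is mandated independently of article quality; if the effect is causal, it survives mandating.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
quality = rng.normal(size=n)  # latent article quality (hypothetical)

def oa_advantage(oa_mask, causal_boost):
    """Ratio of mean citations for OA vs. non-OA articles.

    Citations grow with quality; causal_boost is a (log-scale)
    citation benefit conferred by OA itself (0 = no causal effect).
    """
    cites = np.exp(quality + causal_boost * oa_mask)
    return cites[oa_mask].mean() / cites[~oa_mask].mean()

# Self-selected OA: authors preferentially expose their better articles.
self_selected = (quality + rng.normal(size=n)) > 1.0

# Mandated OA: ~65% compliance, unrelated to article quality.
mandated = rng.random(n) < 0.65

# Pure self-selection world (no causal effect): an apparent advantage
# appears only where the better articles were chosen.
print(oa_advantage(self_selected, 0.0))  # well above 1
print(oa_advantage(mandated, 0.0))       # close to 1: no advantage

# Causal-OA world: mandated OA shows an advantage too.
print(oa_advantage(mandated, 0.3))       # clearly above 1
```

The empirical finding reported above (mandated OA advantage as large as self-selected) corresponds to the third case, not the second.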
The other way of saving the self-selection hypothesis despite our findings is to argue that there was a "self-selection" bias in terms of which institutions do and do not mandate OA: Maybe it's the better ones that self-select to do so. There may be a plausible case to be made that one of our four mandated institutions -- CERN -- is an elite institution. (It is also physics-only.) But, as we reported, we re-did our analysis removing CERN, and we got the same outcome. Even if the objection of eliteness is extended to Southampton ECS, removing that second institution did not change the outcome either. We leave it to the reader to decide whether it is plausible to count our remaining two mandating institutions -- University of Minho in Portugal and Queensland University of Technology in Australia -- as elite institutions, compared to other universities. It is a historical fact, however, that these four institutions were the first in the world to elect to mandate OA.
One can only speculate on the reasons why some might still wish to cling to the self-selection bias hypothesis in the face of all the evidence to date. It seems almost a matter of common sense that making articles more accessible to users also makes them more usable and citable -- especially in a world where most researchers are familiar with the frustration of arriving at a link to an article that they would like to read (but their institution does not subscribe), so they are asked to drop it into the shopping cart and pay $30 at the check-out counter. The straightforward causal relationship is the default hypothesis, based on both plausibility and the cumulative weight of the evidence. Hence the burden of providing counter-evidence to refute it is now on the advocates of the alternative.
Davis, PM, Lewenstein, BV, Simon, DH, Booth, JG, & Connolly, MJL (2008) Open access publishing, article downloads, and citations: randomised controlled trial. British Medical Journal 337: a568
Gargouri, Y., Hajjem, C., Lariviere, V., Gingras, Y., Brody, T., Carr, L. and Harnad, S. (2010) Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research. PLOS ONE 5(10): e13636
Harnad, S. (2008) Davis et al's 1-year Study of Self-Selection Bias: No Self-Archiving Control, No OA Effect, No Conclusion. Open Access Archivangelism July 31 2008
Wednesday, November 19. 2008
Tenopir & King's confirmation of the finding (by Kurtz and others) -- that as more articles become accessible, more articles are indeed accessed (and read), but fewer articles are cited (and those are cited more) -- is best explained by the increased selectivity made possible by that increased accessibility:
The Seglen "skewness" effect is that the top 20% of articles receive 80% of all citations. It is probably safe to say that although there are no doubt some bandwagon and copycat effects contributing to the Seglen effect, overall the 20/80 rule probably reflects the fact that the best work gets cited most (skewing citations toward the top of the quality distribution).
So when more researchers have access to more (or, conversely, are denied access to less), they are more likely to access the best work, and the best work thereby increases its likelihood of being cited, whereas the rest correspondingly decreases its likelihood of being cited. Another way to put it is that there is a levelling of the playing field: Any advantage that the lower 80% had enjoyed from mere accessibility in the toll-access lottery is eliminated, and with it any handicap the top 20% suffered from inaccessibility in the toll-access lottery is eliminated too. Open Access (OA) allows all the cream to rise to the top; accessibility is no longer a constraint on what to cite, one way or the other.
(I would like to point out also that this "quality selectivity" on the part of users -- rather than self-selection on the part of authors -- is likely to be the main contributor to the citation advantage of Open Access articles over Toll Access articles. It follows from the 20/80 rule that whatever quality-selectivity there is on the part of users will be enjoyed mostly by the top 20% of articles. There is no doubt at all that the top authors are more likely to make their articles OA, and that the top articles are more likely to be made OA, but one should ask oneself why that should be the case, if there were no benefits [or the only benefit were more readers, but fewer citations!]: One of the reasons the top articles are more likely to be made OA is precisely that they are also the most likely to be used, applied and cited more if they are made OA!)
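The 20/80 skewness claim above can be illustrated with a tiny simulation (a sketch with an assumed heavy-tailed distribution, not Seglen's empirical data): draw synthetic per-article citation counts and measure what share of all citations the top 20% of articles receive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-article "citation counts" drawn from a heavy-tailed
# Pareto-type distribution (numpy's pareto is the Lomax form; the
# shape parameter below is an illustrative assumption chosen to put
# the tail roughly in the 80/20 regime).
alpha = 1.16
citations = np.sort(rng.pareto(alpha, size=100_000))

# Share of all citations received by the top 20% of articles.
top20 = citations[int(0.8 * len(citations)):]
share = top20.sum() / citations.sum()
print(f"top 20% of articles receive {share:.0%} of all citations")
```

With a heavy tail like this the top 20% capture the large majority of citations, whereas a symmetric distribution of the same size would give them only a modest excess over 20% -- which is the sense in which citations are "skewed toward the top of the quality distribution".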
American Scientist Open Access Forum
Monday, August 25. 2008
Confirmation Bias and the Open Access Advantage: Some Methodological Suggestions for Davis's Citation Study
SUMMARY: Davis (2008) analyzes citations from 2004-2007 in 11 biomedical journals. For 1,600 of the 11,000 articles (15%), their authors paid the publisher to make them Open Access (OA). The outcome, confirming previous studies (on both paid and unpaid OA), is a significant OA citation advantage, but a small one (21%, 4% of it correlated with other article variables such as number of authors, references and pages). The author infers that the size of the OA advantage in this biomedical sample has been shrinking annually from 2004-2007, but the data suggest the opposite. In order to draw valid conclusions from these data, the following five further analyses are necessary:
(1) The current analysis is based only on author-choice (paid) OA. Free OA self-archiving needs to be taken into account too, for the same journals and years, rather than being counted as non-OA, as in the current analysis.
Davis proposes that an author self-selection bias for providing OA to higher-quality articles (the Quality Bias, QB) is the primary cause of the observed OA advantage, but this study does not test or show anything at all about the causal role of QB (or of any of the other potential causal factors, such as Accessibility Advantage, AA, Competitive Advantage, CA, Download Advantage, DA, Early Advantage, EA, and Quality Advantage, QA). The author also suggests that paid OA is not worth the cost, per extra citation. This is probably true, but with OA self-archiving, both the OA and the extra citations are free.
The Davis (2008) preprint is an analysis of the citations from years c. 2004-2007 in 11 biomedical journals: c. 11,000 articles, of which c. 1,600 (15%) were made Open Access (OA) through “Author Choice” (AC-OA: the author chooses to pay the publisher for OA). Author self-archiving (SA-OA) of articles from the same journals was not measured.
Comments on: Davis, P.M. (2008) Author-choice open access publishing in the biological and medical literature: a citation analysis. Journal of the American Society for Information Science and Technology (JASIST) (in press) http://arxiv.org/pdf/0808.2428v1
The result was a significant OA citation advantage (21%) over time, of which 4% was correlated with variables other than OA and time (number of authors, pages, references; whether article is a Review and has a US co-author).
This outcome confirms the findings of numerous previous studies (some of them based on far larger samples of fields, journals, articles and time-intervals) of an OA citation advantage (ranging from 25%-250%) in all fields, across a 10-year range (Hitchcock 2008).
The preprint also states that the size of the OA advantage in this biomedical sample diminishes annually from 2004-2007. But the data seem to show the opposite: that as an article gets older, and its cumulative citations grow, its absolute and relative OA advantage grow too.
The preprint concludes, based on its estimate of the size of the OA citation Advantage, that AC-OA is not worth the cost, per extra citation. This is probably true -- but with SA-OA the OA and the extra citations can be had at no cost at all.
The paper has been accepted for publication in JASIST. It is not clear whether the linked text is the unrefereed preprint, or the refereed, revised postprint. On the assumption that it is the unrefereed preprint, what follows is an extended peer commentary with recommendations on what should be done in revising it for publication.
(It is very possible, however, that some or all of these revisions were also recommended by the JASIST referees and that some of the changes have already been made in the published version.)
As it stands currently, this study (i) confirms a significant OA citation Advantage, (ii) shows that it grows cumulatively with article age and (iii) shows that it is correlated with several other variables that are correlated with citation counts.
Although the author argues that an author self-selection bias for preferentially providing OA to higher-quality articles (the Quality Bias, QB) is the primary causal factor underlying the observed OA Advantage, in fact this study does not test or show anything at all about the causal role of QB (or of any of the other potential causal factors underlying the OA Advantage, such as Accessibility Advantage, AA, Competitive Advantage, CA, Download Advantage, DA, Early Advantage, EA, and Quality Advantage, QA; Hajjem & Harnad 2007b).
The following five further analyses of the data are necessary. The size and pattern of the observed results, as well as their interpretations, could all be significantly altered (as well as deepened) by the outcomes:
(1) The current analysis is based only on author-choice (paid) OA. Free author self-archiving OA needs to be taken into account too, for the same journals and years, rather than being counted as non-OA, as in the current analysis.

Commentary on the text of the preprint:
“ABSTRACT… there is strong evidence to suggest that the open access advantage is declining by about 7% per year, from 32% in 2004 to 11% in 2007”

It is not clearly explained how these figures and their interpretation are derived, nor is it reported how many OA articles there were in each of these years. The figures appear to be based on a statistical interaction between OA and article-age in a multiple regression analysis for 9 of the 11 journals in the sample. (a) The data from PNAS, the largest and highest-impact journal, are excluded from this analysis. (b) The many variables included in the (full) multiple regression equation (across journals) omit one of the most obvious ones: journal impact factor. (c) OA articles that are self-archived rather than paid author-choice are not identified and included as OA, hence their citations are counted as being non-OA. (d) The OA/age interaction is not based on yearly citations after a fixed interval for each year, but on cumulative retrospective citations in June 2008.
The natural interpretation of Figure 1 accordingly seems to be the exact opposite of the one the author makes: Not that the size of the OA Advantage shrinks from 2004-2007, but that the size of the OA Advantage grows from 2007-2004 (as articles get older and their citations grow). Not only do cumulative citations grow for both OA and non-OA articles from year 2007 articles to year 2004 articles, but the cumulative OA advantage increases (by about 7% per year, even on the basis of this study’s rather slim and selective data and analyses).
This is quite natural, as not only do citations grow with time, but the OA Advantage -- barely detectable in the first year, being then based on the smallest sample and the fewest citations -- emerges with time.
“See Craig et al.  for a critical review of the literature [on the OA citation advantage]”

Craig et al.’s rather slanted 2007 review is the only reference to previous findings on the OA Advantage cited by the Davis preprint (Harnad 2007a). Craig et al. had attempted to reinterpret the many-times-replicated positive finding of an OA citation advantage, on the basis of 4 negative findings (Davis & Fromerth, 2007; Kurtz et al., 2005; Kurtz & Henneken, 2007; Moed, 2007), in maths, astronomy and condensed matter physics. Apart from Davis’s own prior study, these studies were based mainly on preprints that were made OA well before publication. The observed OA advantage consisted mostly of an Early Access Advantage for the OA prepublication preprint, plus an inferred Quality Bias (QB) on the part of authors towards preferentially providing OA to higher quality preprints (Harnad 2007b).
The Davis preprint does not cite any of the considerably larger number of studies that have reported large and consistent OA advantages for postprints, covering many more fields, some of them based on far larger samples and longer time intervals (Hitchcock 2008). Instead, Davis focuses rather single-mindedly on the hypothesis that most or all of the OA Advantage is the result of the self-selection bias (QB) toward preferentially making higher-quality (hence more citeable) articles OA:
“authors selectively choose which articles to promote freely… [and] highly cited authors disproportionately choose open access venues”

It is undoubtedly true that better authors are more likely to make their articles OA, and that authors in general are more likely to make their better articles OA. This Quality or “Self-Selection” Bias (QB) is one of the probable causes of the OA Advantage.
However, no study has shown that QB is the only cause of the OA Advantage, nor even that it is the biggest cause. Three of the studies cited (Kurtz et al., 2005; Kurtz & Henneken, 2007; Moed, 2007) showed that another causal factor is Early Access (EA: providing OA earlier results in more citations).
There are several other candidate causal factors in the OA Advantage, besides QB and EA (Hajjem & Harnad 2007b):

There is the Download (or Usage) Advantage (DA): OA articles are downloaded significantly more, and this early DA has also been shown to be predictive of a later citation advantage in Physics (Brody et al. 2006).
There is a Competitive Advantage (CA): OA articles are in competition with non-OA articles, and to the extent that OA articles are relatively more accessible than non-OA articles, they can be used and cited more. Both QB and CA, however, are temporary components of the OA advantage that will necessarily shrink to zero and disappear once all research is OA. EA and DA, in contrast, will continue to contribute to the OA advantage even after universal OA is reached, when all postprints are being made OA immediately upon publication, compared to pre-OA days (as Kurtz has shown for Astronomy, which has already reached universal post-publication OA).
There is an Accessibility Advantage (AA) for those users whose institutions do not have subscription access to the journal in which the article appeared. AA too (unlike CA) persists even after universal OA is reached: all articles then have AA's full benefit.
And there is at least one more important causal component in the OA Advantage, apart from AA, CA, DA and QB, and that is a Quality Advantage (QA), which has often been erroneously conflated with QB (Quality Bias):
Ever since Lawrence’s original study in 2001, the OA Advantage can be estimated in two different ways: (1) by comparing the average citations for OA and non-OA articles (log citation ratios within the same journal and year, or regression analyses like Davis’s) and (2) by comparing the proportion of OA articles in different “citation brackets” (0, 1, 2, 3-4, 5-8, 9-16, 17+ citations).
In method (2), the OA Advantage is observed in the form of an increase in the proportion of OA articles in the higher citation brackets. But this correlation can be explained in two ways. One is QB, which is that authors are more likely to make higher-quality articles OA. But it is also at least as plausible that higher-quality articles benefit more from OA! It is already known that the top c. 10-20% of articles receive c. 80-90% of all citations (Seglen’s 1992 “skewness of science”). It stands to reason, then, that when all articles are made OA, it is the top 20% of articles that are most likely to be cited more: Not all OA articles benefit from OA equally, because not all articles are of equally citable quality.
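The bracket comparison in method (2) can be sketched as follows; the data, the bracket boundaries, and the helper names here are all hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical sample: (citations, is_oa) pairs. In the real analysis
# these would come from the ISI citation counts for each article.
articles = [(0, False), (1, True), (3, False), (6, True), (12, True),
            (2, False), (18, True), (0, False), (9, False), (4, True)]

# Citation brackets as used in Lawrence-style analyses.
brackets = [(0, 0), (1, 1), (2, 2), (3, 4), (5, 8), (9, 16), (17, float("inf"))]

def bracket_label(c):
    # Return the (lo, hi) bracket containing citation count c.
    for lo, hi in brackets:
        if lo <= c <= hi:
            return (lo, hi)

# Proportion of OA articles within each bracket: an OA Advantage shows up
# as a higher OA proportion in the higher citation brackets.
totals, oa_counts = Counter(), Counter()
for cites, is_oa in articles:
    b = bracket_label(cites)
    totals[b] += 1
    oa_counts[b] += is_oa

for b in brackets:
    if totals[b]:
        print(b, oa_counts[b] / totals[b])
```

Whether a rising OA proportion across brackets reflects QB (better articles get made OA) or QA (OA helps better articles more) is exactly what this tabulation alone cannot decide.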
Hence both QB and QA are likely to be causal components in the OA Advantage, and the only way to tease them apart and estimate their individual contributions is to control for the QB effect by imposing the OA instead of allowing it to be determined by self-selection. We (Gargouri, Hajjem, Gingras, Carr & Harnad, in prep.) are completing such a study now, comparing mandated and unmandated OA; and Davis et al. (2008) have just published another study on randomized OA for 11 journals:
“In the first controlled trial of open access publishing where articles were randomly assigned to either open access or subscription-access status, we recently reported that no citation advantage could be attributed to access status (Davis, Lewenstein, Simon, Booth, & Connolly, 2008)”

This randomized OA study by Davis et al. was very welcome and timely, but it had originally been announced to cover a 4-year period, from 2007-2010, whereas it was instead prematurely published in 2008, after only one year. No OA advantage at all was observed in that 1-year interval, and this too agrees with the many existing studies on the OA Advantage, some based on far larger samples of journals, articles and fields: Most of those studies (none of them randomized) likewise detected no OA citation advantage at all in the first year: It is simply too early. In most fields, citations take longer than a year to be made, published, ISI-indexed and measured, and to make any further differentials (such as the OA Advantage) measurable. (This is evident in Davis’s present preprint too, where the OA advantage is barely visible in the first year (2007).)
The only way the absence of a significant OA advantage in a sample with randomized OA can be used to demonstrate that the OA Advantage is only or mostly just a self-selection bias (QB) is by also demonstrating the presence of a significant OA advantage in the same (or comparable) sample with nonrandomized (i.e., self-selected) OA.
But Davis et al. did not do this control comparison (Harnad 2008b). Finding no OA Advantage with randomized OA after one year merely confirms the (widely observed) finding that one year is usually too early to detect any OA Advantage; but it shows nothing whatsoever about self-selection QB.
“we examine the citation performance of author-choice open access”

It is quite useful and interesting to examine citations for OA and non-OA articles where the OA is provided through (self-selected) “Author-Choice” (i.e., authors paying the publisher to make the article OA on the publisher’s website).
Most prior studies of the OA citation Advantage, however, are based on free self-archiving by authors on their personal, institutional or central websites. In the bigger studies, a robot trawls the web using ISI bibliographic metadata to find which articles are freely available on the web (Hajjem et al. 2005).
Hence a natural (indeed essential) control test that has been omitted from Davis’s current author-choice study – a test very much like the control test omitted from the Davis et al. randomized OA study – is to identify the articles in the same sample that were made OA through author self-archiving. If those articles are identified and counted, that not only provides an estimate of the relative uptake of author-choice OA vs OA self-archiving in the same sample interval, but it allows a comparison of their respective OA Advantages. More important, it corrects the estimate of an OA Advantage based on author-choice OA alone: For, as Davis has currently done the analysis, any OA Advantage from OA self-archiving in this sample would in fact reduce the estimate of the OA Advantage based on author-choice OA (mistakenly counting as non-OA the articles and citation-counts for self-archived OA articles).
“METHODS… The uptake of the open access author-choice programs for these journals ranged from 5% to 22% over the dates analyzed”

Davis’s preprint does not seem to provide the data – either for individual journals or for the combined totals – on the percentage of author-choice OA (henceforth AC-OA) by year, nor on the relation between the proportion uptake of AC-OA and the size of the OA Advantage, by year.
As Davis has been careful to do multiple regression analyses on many of the article-variables that might correlate with citations and OA (article age, number of authors, number of references, etc.), it seems odd not to take into account the relation between the size of the AC-OA Advantage and the degree of uptake of AC-OA, by year. The other missing information is the corresponding data for self-archiving OA (henceforth SA-OA).
“[For] All of the journals… all articles roll into free access after an initial period [restricted to subscription access only for 12 months (8 journals), 6 months (2 journals) or 24 months (1 journal)]”

(This is important in relation to the Early Access (EA) Advantage, which is the biggest contributor to the OA Advantage in the two cited studies by Kurtz on Astronomy. Astronomy has free access to the postprints of all articles in all astronomy journals immediately upon publication. Hence Astronomy has scope for an OA Advantage only through an EA Advantage, arising from the early posting of preprints before publication. The size of the OA Advantage in other fields -- in which (unlike in Astronomy) access to the postprint is restricted to subscribers-only for 6, 12, or 24 months -- would then be the equivalent of an estimate of an “EA Advantage” for those potential users who lack subscription access – i.e., the Accessibility Advantage, AA.)
“Cumulative article citations were retrieved on June 1, 2008. The age of the articles ranged from 18 to 57 months”

Most of the 11 journals were sampled till December 2007. That would mean that the 2007 OA Advantage was based on even less than one year from publication.
“STATISTICAL ANALYSIS… Because citation distributions are known to be heavily skewed (Seglen, 1992) and because some of the articles were not yet cited in our dataset, we followed the common practice of adding one citation to every article and then taking the natural log”

(How well did that correct the skewness? If it still was not normal, then citations might have to be dichotomized as a 0/1 variable, comparing, by citation-bracket slices, (1) 0 citations vs 1 or more citations, (2) 0 or 1 vs more than 1, (3) 2 or fewer vs. more than 2, (4) 3 or fewer vs. more than 3… etc.)
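The log-transform and the suggested dichotomization fallback can be sketched as follows; the citation counts and the moment-based skewness check below are illustrative assumptions, not the study's actual data or method:

```python
import math
import statistics

# Hypothetical, heavily skewed citation counts (real data would come from ISI).
citations = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40, 120]

# The common practice quoted above: add 1 to every count, take the natural log.
logged = [math.log(c + 1) for c in citations]

def skewness(xs):
    # Simple moment-based skewness: 0 for a symmetric distribution.
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# If the logged counts are still far from symmetric, dichotomizing by
# citation-bracket slices (0 vs 1+, 0-1 vs 2+, etc.) may be safer.
def dichotomize(xs, threshold):
    # 1 if the article exceeds the citation threshold, else 0.
    return [1 if x > threshold else 0 for x in xs]

print("raw skew:", round(skewness(citations), 2))
print("log skew:", round(skewness(logged), 2))
for t in (0, 1, 2, 3):
    print(t, dichotomize(citations, t))
```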
“For each journal, we ran a reduced [2 predictor] model [article age and OA] and a full [7 predictor] regression model [age, OA; log no. of authors, references, pages; Review; US author]”

Both analyses are, of course, a good idea to do, but why was Journal Impact Factor (JIF) not tested as one of the predictor variables in the cross-journal analyses (Hajjem & Harnad 2007a)? Surely JIF, too, correlates with citations: Indeed, the Davis study assumes as much, as it later uses JIF as the multiplier factor in calculating the cost per extra citation for author-choice OA (see below).
Analyses by journal JIF citation-bracket, for example, can provide estimates of QA (Quality Advantage) if the OA Advantage is bigger in the higher journal citation-brackets. (Davis’s study is preoccupied with the self-selection QB bias, which it does not and cannot test, but it fails to test other candidate contributors to the OA Advantage that it can test.)
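A minimal sketch of such a JIF-slice comparison, with entirely hypothetical records and helper names (the real analysis would use JIF quartiles and the ISI data):

```python
import math
import statistics

# Hypothetical records: (journal_impact_factor, citations, is_oa).
records = [
    (1.2, 2, False), (1.2, 3, True), (2.5, 4, False), (2.5, 6, True),
    (5.0, 8, False), (5.0, 13, True), (9.8, 15, False), (9.8, 30, True),
]

def mean_log_citations(rows):
    return statistics.mean(math.log(c + 1) for _, c, _ in rows)

def oa_gap(rows):
    # Difference in mean log citations between OA and non-OA articles.
    oa = [r for r in rows if r[2]]
    non_oa = [r for r in rows if not r[2]]
    return mean_log_citations(oa) - mean_log_citations(non_oa)

# Split journals into low- and high-JIF halves (quartiles in a real analysis).
jifs = sorted({jif for jif, _, _ in records})
median_jif = jifs[len(jifs) // 2]
low = [r for r in records if r[0] < median_jif]
high = [r for r in records if r[0] >= median_jif]

# If the OA/non-OA gap is larger in the high-JIF slice, that favors
# QA (Quality Advantage) over pure self-selection (QB).
print("low-JIF gap:", round(oa_gap(low), 3))
print("high-JIF gap:", round(oa_gap(high), 3))
```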
(An important and often overlooked logical point should also be noted about the correlates of citations and the direction of causation: The many predictor variables in the multiple regression equations predict not only the OA citation Advantage; they also predict citation counts themselves. It does not necessarily follow from the fact that, say, longer articles are more likely to be cited that article length is therefore an artifact that must be factored out of citation counts in order to get a more valid estimate of how accurately citations measure quality. One possibility is that length is indeed an artifact. But the other possibility is that length is a valid causal factor in quality! If length is indeed an artifact, then longer articles are being cited more just because they are longer, rather than because they are better, and this length bias needs to be subtracted out of citation counts as measures of quality. But if the extra length is a causal contributor to what makes the better articles better, then subtracting out the length effect simply serves to make citation counts a blunter, not a sharper instrument for measuring quality. The same reasoning applies to some of the other correlates of citation counts, as well as their relation to the OA citation Advantage. Systematically removing them all, even if they are not artifactual, systematically divests citation counts of their potential power to predict quality. This is another reason why citation counts need to be systematically validated against other evaluative measures [Harnad 2008a].)
“Because we may lack the statistical power to detect small significant differences for individual journals, we also analyze our data on an aggregate level”

It is a reasonable, valid strategy to analyze across journals. Yet this study still persists in drawing individual-journal-level conclusions, despite having indicated (correctly) that its sample may be too small to have the power to detect individual-journal-level differences (see below).
(On the other hand, it is not clear whether all the OA/non-OA citation comparisons were always within-journal, within-year, as they ought to be; no data are presented for the percentage of OA articles per year, per journal. OA/non-OA comparisons must always be within-journal/year comparisons, to be sure to compare like with like.)
“The first model includes all 11 journals, and the second omits the Proceedings of the National Academy of Sciences (PNAS), considering that it contributed nearly one-third (32%) of all articles in our dataset”

Is this a justification for excluding PNAS? Not only was the analysis done with and without PNAS, but, unlike all the other journals, whose data were all included, for the entire time-span, PNAS data were only included from the first and last six months.
Why? PNAS is a very high impact factor journal, with highly cited articles. A study of PNAS alone, with its much bigger sample size, would be instructive in itself – and would almost certainly yield a bigger OA Advantage than the one derived from averaging across all 11 journals (and reducing the PNAS sample size, or excluding PNAS altogether).
There can be a QB difference between PNAS and non-PNAS articles (and authors), to be sure, because PNAS publishes articles of higher quality. But a within-PNAS year-by-year comparison of OA and non-OA that yielded a bigger OA Advantage than a within-journal OA/non-OA comparison for lower-quality journals would also reflect the contribution of QA. (With these data in hand, the author should not be so focused on confirming his hypotheses: take the opportunity to falsify them too!)
“we are able to control for variables that are well-known to predict future citations [but] we cannot control for the quality of an article”

This is correct. One cannot control for the quality of an article; but in comparing within a journal/year, one can compare the size of the OA Advantage for higher and lower impact journals; if the advantage is higher for higher-impact journals, that favors QA over QB.
One can also take target OA and non-OA articles (within each citation bracket), and match the title words of each target article with other articles (in the same journal/year):
If one examines N-citation OA articles and N-citation non-OA articles, are their title-word-matched (non-OA) control articles equally likely to have N or more citations? Or are the word-matched control articles for N-citation OA articles less likely to have N or more citations than the controls for N-citation non-OA articles (which would imply that the OA has raised the OA article’s citation bracket)? And would this effect be greater in the higher citation brackets than in the lower ones (N = 1 to N = >16)?
If one is resourceful, there are ways to control, or at least triangulate on quality indirectly.
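The title-word-matching control proposed above might be sketched like this; the articles, the overlap threshold, and the helper names are all hypothetical:

```python
# Hypothetical records: (title, citations, is_oa).
articles = [
    ("gene expression in yeast", 12, True),
    ("gene expression profiling methods", 9, False),
    ("protein folding dynamics", 3, False),
    ("protein folding in membranes", 2, False),
    ("yeast cell cycle regulation", 11, False),
]

def title_words(title):
    return set(title.lower().split())

def matched_controls(target, pool, min_overlap=2):
    # Non-OA articles sharing at least min_overlap title words with target.
    tw = title_words(target[0])
    return [a for a in pool
            if a is not target and not a[2]
            and len(tw & title_words(a[0])) >= min_overlap]

# For an N-citation OA target, ask: do its word-matched controls reach N or
# more citations as often as the controls of N-citation non-OA targets do?
target = articles[0]  # a 12-citation OA article
controls = matched_controls(target, articles)
share_at_or_above = (sum(c[1] >= target[1] for c in controls) / len(controls)
                     if controls else None)
print(controls, share_at_or_above)
```

If the controls for OA targets systematically lag their targets more than the controls for non-OA targets do, that would suggest OA has raised the OA articles' citation bracket, and the question is whether the effect grows in the higher brackets.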
“spending a fee to make one’s article freely available from a publisher’s website may indicate there is something qualitatively different [about that article]”

Yes, but one could probably tell a Just-So story either way about the direction of that difference: paying for OA because one thinks one's article is better, or paying for OA because one thinks one's article is worse! Moreover, this is AC-OA, which costs money; the stakes are different with SA-OA, which only costs a few keystrokes. But this analysis omitted to identify or measure SA-OA.
“RESULTS…The difference in citations between open access and subscription-based articles is small and non-significant for the majority of the journals under investigation”

(1) Compare the above with what is stated earlier: “Because we may lack the statistical power to detect small significant differences for individual journals, we also analyze our data on an aggregate level.”
(2) Davis found an OA Advantage across the entire sample of 11 journals, whereas the individual journal samples were too small. Why state this as if it were some sort of an empirical effect?
“where only time and open access status are the model predictors, five of the eleven journals show positive and significant open access effects.”

(That does not sound too bad, considering that the individual journal samples were small and hence lacked the statistical power to detect small significant differences, and that the PNAS sample was made deliberately small!)
“Analyzing all journals together, we report a small but significant increase in article citations of 21%.”

Whether that OA Advantage is small or big remains to be seen. The bigger published OA Advantages have been reported on the basis of bigger samples.
“Much of this citation increase can be explained by the influence of one journal, PNAS. When this journal is removed from the analysis, the citation difference reduces to 14%.”

This reasoning can appeal only if one has a confirmation bias: PNAS is also the journal with the biggest sample (of which only a fraction was used); and it is also the highest impact journal of the 11 sampled, hence the most likely to show benefits from a Quality Advantage (QA) that generates more citations for higher citation-bracket articles. If the objective had not been to demonstrate that there is little or no OA Advantage (and that what little there is is just due to QB), PNAS would have been analyzed more closely and fully, rather than being minimized and excluded.
“When other explanatory predictors of citations (number of authors, pages, section, etc.) are included in the full model, only two of the eleven journals show positive and significant open access effects. Analyzing all journals together, we estimate a 17% citation advantage, which reduces to 11% if we exclude PNAS.”

In other words, partialling out 5 more correlated variables from this sample reduces the residual OA Advantage by 4%. And excluding the biggest, highest-quality journal’s data reduces it still further.
If there were not this strong confirmation bent on the author’s part, the data would be treated in a rather different way: The fact that a journal with a bigger sample enhances the OA Advantage would be treated as a plus rather than a minus, suggesting that still bigger samples might have the power to detect still bigger OA Advantages. And the fact that PNAS is a higher quality journal would also be the basis for looking more closely at the role of the Quality Advantage (QA). (With less of a confirmation bent, OA Self-archiving, too, would have been controlled for, instead of being credited to non-OA.)
Instead, the awkward persistence of a significant OA Advantage even after partialling out the effects of so many correlated variables, despite restricting the size of the PNAS sample, and even after removing PNAS entirely from the analysis, has to be further explained away:
“The modest citation advantage for author-choice open access articles also appears to weaken over time. Figure 1 plots the predicted number of citations for the average article in our dataset. This difference is most pronounced for articles published in 2004 (a 32% advantage), and decreases by about 7% per year (Supplementary Table S2) until 2007 where we estimate only an 11% citation advantage.”

(The methodology is not clearly described. We are not shown the percent OA per journal per year, nor what the dates of the citing articles were, for each cited-article year. What is certain is that a 1-year-old 2007 article differs from a 4-year-old 2004 article not just in its total cumulative citations in June 2008, but in that the estimate of its citations per year is based on a much smaller sample, again reducing the power of the statistic: This analysis is not based on 2005 citations to 2004 articles, plus 2006 citations to 2005 articles, plus 2007 citations to 2006 articles, etc. It is based on cumulative 2004-2008 citations to 2004, 2005, 2006 etc. articles, reckoned in June 2008. 2007 articles are not only younger: they are also more recent. Hence it is not clear what the Age/OA interaction in Table S2 really means: Has (1) the OA advantage for articles really been shrinking across those 4 years, or are citation rates for younger articles simply noisier, because based on smaller citation spans, hence (2) the OA Advantage grows more detectable as articles get older?)
From what is described and depicted in Figure 1, the natural interpretation of the Age/OA interaction seems to be the latter: As we move from one-year-old articles (2007) toward four-year-old articles, three things are happening: non-OA citations are growing with time, OA citations are growing with time, and the OA/non-OA Advantage is emerging with time.
“[To] calculate… the estimated cost per citation [$400 - $9000]… we multiply the open access citation advantage for each journal (a multiplicative effect) by the impact factor of the journal… Considering [the] strong evidence of a decline of the citation advantage over time, the cost…would be much higher…”

Although these costs are probably overestimated (because the OA Advantage is underestimated, and there is no decline but rather an increase) the thrust of these figures is reasonable: It is not worth paying for AC-OA for the sake of the AC-OA Advantage: It makes far more sense to get the OA Advantage for free, through OA Self-Archiving.
Note, however, that the potentially informative journal impact factor (JIF) was omitted from the full-model multiple regression equation across journals (#6). It should be tested. So should the percentage OA for each journal/year. And after that the analysis should be redone separately for, say, the four successive JIF quartiles. If adding the JIF to the equation reduces the OA Advantage further, whereas without JIF the OA Advantage increases in each successive quartile, then that implies that a big factor in the OA Advantage is the Quality Advantage (QA).
“that we were able to explain some of the citation advantage by controlling for differences in article characteristics… strengthens the evidence that self-selection – not access – is the explanation for the citation advantage… more citable articles have a higher probability of being made freely accessible”

Self-selection (QB) is undoubtedly one of the factors in the OA Advantage, but this analysis has not estimated the size of its contribution, relative to many other factors (AA, CA, DA, EA, QA). It has simply shown that some of the same factors that influence citation counts, influence the OA citation Advantage too.
By failing to test and control for the Quality Advantage in particular (by not testing JIFs in the full regression equation, by not taking percentage OA per journal/year into account, by restricting the sample-size for the highest impact, largest-sample journal, PNAS, by overlooking OA self-archiving and crediting it to non-OA, by not testing citation-brackets of JIF quartiles), the article needlessly misses the opportunity to analyze the factors contributing to the OA Advantage far more rigorously.
“earlier studies [on the OA Advantage] may be showing an early-adopter effect…”

This is probably true. And early adopters also have a Competitive Advantage (CA). But with only about 20% OA being provided overall today, the CA is still there, unless it can be demonstrated – as Davis certainly has not demonstrated – that the c. 20% of articles that are being made OA today correspond sufficiently closely to that top 20% of articles that receive 80% of all citations. (Then the OA Advantage would indeed be largely QB.)
“authors who deposited their manuscripts in the arXiv tended to be more highly-cited than those who did not”

There is some circularity in this, but it is correct to say that this correlation is compatible with both QB and QA, and probably both are contributing factors. But none of the prior studies nor this one actually estimate their relative contributions (nor those of AA, CA, DA and EA).
“any relative citation advantage that was enjoyed by early adopters would disappear over time”

It is not that CA (Competitive Advantage) disappears simply because time elapses: CA only disappears if the competitors provide OA too! The same is true of QB (Quality Bias), which also disappears once everyone is providing OA. But at 20%, we are nowhere near 100% OA yet, hence there is plenty of scope for a competitive edge.
“If a citation advantage is the key motivation of authors to pay open access fees, then the cost/benefit of this decision can be quite expensive for some journals.”

This is certainly true, and would be true even if the OA citation Advantage were astronomically big – but the reason it is true is that authors need not pay AC-OA fees for OA at all: they can self-archive for free (and indeed are being increasingly mandated by their funders and institutions to do so).
“Randomized controlled trials provide a more rigorous methodology for measuring the effect of access independently of other confounding effects (Davis et al., 2008)… the differences we report in our study… have more likely explained the effect of self-selection (or self-promotion) than of open access per se.”

The syntax here makes it a little difficult to interpret, but if what is meant is that Davis et al.’s prior study has shown that the OA Advantage found in the present study was more likely to be a result of QB than of QA, AA, CA, DA, or EA, then it has to be replied that that prior study showed nothing of the sort (Harnad 2008b). All it showed was that one cannot detect a significant OA Advantage at all one year after publication when OA is randomized. (The same is true when OA is not randomized.)
However, the prior Davis et al. study did find a significant DA (Download Advantage) for OA articles in the first year. And other studies have reported a significant correlation between early downloads and later citations (Brody et al. 2006).
So the prior Davis et al. study (1) confirmed the familiar failure to detect the OA Advantage in the first year, and (2) found a significant DA in the first year (probably predictive of a later OA citation Advantage). The present Davis study found (i) a significant OA Advantage, (ii) smallest in the first year (2007), much bigger by the fourth (2004).
“Retrospective analysis… our analysis is based on cumulative citations to articles taken at one point in time. Had we tracked the performance of our articles over time – a prospective approach – we would have stronger evidence to bolster our claim that the citation advantage is in decline. Still, we feel that cumulative citation data provides us with adequate inference.”

Actually, it would be possible, with a fuller analysis using the ISI database, to calculate not only the citation counts for each cited article, but the dates of the citing articles. So a “prospective” analysis can be done in retrospect. Without performing that analysis, however, the present study does not provide evidence of a decline in the OA Advantage with time, just evidence of an improved signal/noise ratio for measuring the OA Advantage with time. A “prospective” analysis, taking citing dates as well as cited dates into account, would be welcome (and is far more likely to show that the size of the OA Advantage is, if anything, growing, rather than confirming the author’s interpretation, unwarranted on the present data, that it is shrinking).
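The difference between the retrospective tallies the study used and the recommended fixed-window "prospective" tallies can be sketched with hypothetical citation events (cited-year, citing-year pairs; a real analysis would pull citing-article dates from the ISI database):

```python
from collections import defaultdict

# Hypothetical citation events: (cited_year, citing_year).
citation_events = [
    (2004, 2005), (2004, 2005), (2004, 2006), (2004, 2007),
    (2005, 2006), (2005, 2007), (2006, 2007), (2007, 2008),
]

# Retrospective (what the study did): cumulative citations up to mid-2008,
# so older cohorts have had far longer citation spans than younger ones.
cumulative = defaultdict(int)
for cited, citing in citation_events:
    if citing <= 2008:
        cumulative[cited] += 1

# "Prospective in retrospect": citations within a fixed one-year window
# after publication, so every cohort is measured over an equal span.
fixed_window = defaultdict(int)
for cited, citing in citation_events:
    if citing == cited + 1:
        fixed_window[cited] += 1

print("cumulative:", dict(cumulative))
print("fixed one-year window:", dict(fixed_window))
```

Only the fixed-window counts allow a like-with-like comparison of the OA Advantage across publication years; the cumulative counts conflate article age with any real change over time.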
“all of the journals under investigation make their articles freely available after an initial period of time [hence] any [OA Advantage] would be during these initial months in which there exists an access differential between open access and subscription-access articles. We would expect therefore that the effect of open access would therefore be strongest in the earlier years of the life of the article and decline over time. In other words, we would expect our trend (Figure 1) to operate in the reverse direction.”

The reasoning here is a bit hard to follow, but the Kurtz studies that Davis cites show that in Astronomy, making preprints OA in the year or so before publication (after which all Astronomy postprints are OA) results in both “a strong EA effect and a strong [QB] effect.” But even in a fast-moving field like Astronomy, the effect is not immediate! There is no way to predict from the data for Astronomy how quickly an EA effect for nonsubscribers during the embargo year in Biomedicine should make itself felt in citations, but it is a safe bet that, as with citation latency itself, and the latency of the OA citation Advantage, the “EmA” (“Embargo Access”) counterpart of the EA effect in access-embargoed Biomedical journals will need a latency of a few years to become detectable. And since Davis’s age/OA interaction, based on static, cumulative, retrospective data, is just as readily interpretable as indicating that OA Advantages require time and sample-size growth in order to occur and be detected, the two patterns are perfectly compatible.
“we are at a loss to come up with alternative explanations to explain the monotonic decline in the citation advantage”

There is no monotonic decline to explain. Just (a) low power in initial years, (b) cumulative data not analysed to equate citing/cited year spans, (c) the failure to test for QA citation-bracket effects, and (d) the failure to reckon self-archiving OA into the OA Advantage (treating it instead as non-OA).
If this had been a JASIST referee report, I would have recommended performing several further analyses taking into account:

(1) self-archiving OA

and making the interpretation of the resultant findings more even-handed, rather than slanting toward the author’s preferred hypothesis that the OA Advantage is due solely or mostly to QB.
Björk, Bo-Christer; Roos, Annikki; Lauri, Mari (2008) Global annual volume of peer reviewed scholarly articles and the share available via different open access options. In: Chan, L. & Mornati, S. (Eds) ELPUB2008. Open Scholarship: Authority, Community, and Sustainability in the Age of Web 2.0 - Proceedings of the 12th International Conference on Electronic Publishing, Toronto, Canada, 25-27 June 2008: pp. 178-186
Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8): 1060-1072
Craig, I. D., Plume, A. M., McVeigh, M. E., Pringle, J., & Amin, M. (2007). Do Open Access Articles Have Greater Citation Impact? A critical review of the literature. Journal of Informetrics 1(3): 239-248
Davis, P.M. (2008) Author-choice open access publishing in the biological and medical literature: a citation analysis. Journal of the American Society for Information Science and Technology (JASIST) (in press)
Davis, P. M., & Fromerth, M. J. (2007). Does the arXiv lead to higher citations and reduced publisher downloads for mathematics articles? Scientometrics 71(2): 203-215.
Davis, P. M., Lewenstein, B. V., Simon, D. H., Booth, J. G., & Connolly, M. J. L. (2008). Open access publishing, article downloads and citations: randomised controlled trial. British Medical Journal 337: a568
Hajjem, C. and Harnad, S. (2007a) Citation Advantage For OA Self-Archiving Is Independent of Journal Impact Factor, Article Age, and Number of Co-Authors. Technical Report, Electronics and Computer Science, University of Southampton.
Hajjem, C. and Harnad, S. (2007b) The Open Access Citation Advantage: Quality Advantage Or Quality Bias? Technical Report, Electronics and Computer Science, University of Southampton
Hajjem, C., Harnad, S. and Gingras, Y. (2005) Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. IEEE Data Engineering Bulletin 28(4) 39-47.
Harnad, S. (2007a) Craig et al.'s Review of Studies on the OA Citation Advantage. Open Access Archivangelism 248.
Harnad, S. (2007b) Where There's No Access Problem There's No Open Access Advantage Open Access Archivangelism 389
Harnad, S. (2008a) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8 (11)
Harnad, S. (2008b) Davis et al's 1-year Study of Self-Selection Bias: No Self-Archiving Control, No OA Effect, No Conclusion British Medical Journal: Rapid Responses 337 (a568): 199775
Hitchcock, S. (2008) The effect of open access and downloads ('hits') on citation impact: a bibliography of studies
Kurtz, M. J., Eichhorn, G., Accomazzi, A., Grant, C., Demleitner, M., Henneken, E., et al. (2005). The effect of use and access on citations. Information Processing and Management 41: 1395-1402
Kurtz, M. J., & Henneken, E. A. (2007). Open Access does not increase citations for research articles from The Astrophysical Journal. Harvard-Smithsonian Center for Astrophysics preprint.
Lawrence, S. (2001) Free online availability substantially increases a paper's impact. Nature, 31 May 2001
Moed, H. F. (2007). The effect of 'Open Access' upon citation impact: An analysis of ArXiv's Condensed Matter Section. Journal of the American Society for Information Science and Technology 58(13): 2047-2054
Seglen, P. O. (1992). The Skewness of Science. Journal of the American Society for Information Science 43(9): 628-638
American Scientist Open Access Forum
Monday, August 4. 2008
Evans, James A. (2008) Electronic Publication and the Narrowing of Science and Scholarship. Science 321(5887): 395-399 DOI:10.1126/science.1150473

Evans found that as more and more journal issues are becoming accessible online (mostly only the older back-issues for free), journals are not being cited less overall, but citations are narrowing down to fewer articles, cited more.

Excerpt: "[Based on] a database of 34 million articles, their citations (1945 to 2005), and online availability (1998 to 2005),... as more journal issues came online, the articles [cited] tended to be more recent, fewer journals and articles were cited, and more of those citations were to fewer journals and articles... [B]rowsing of print archives may have [led] scientists and scholars to [use more] past and present scholarship. Searching online... may accelerate consensus and narrow the range of findings and ideas built upon."
In one of the few fields where this can be and has been analyzed thoroughly, astrophysics, which effectively has 100% Open Access (OA) (free online access) already, Michael Kurtz too found that with free online access to everything, reference lists became (a little) shorter, not longer, i.e., people are citing (somewhat) fewer papers, not more, when everything is accessible to them free online.
The following seems a plausible explanation:
Before OA, researchers cited what they could afford to access, and that was not necessarily all the best work, so they could not be optimally selective for quality, importance and relevance. (Sometimes -- dare one say it? -- they may even have resorted to citing "blind," going by just the title and abstract, which they could afford, but not the full text, to which they had no subscription.)
In contrast, when everything becomes accessible, researchers can be more selective and can cite only what is most relevant, important and of high quality. (It has been true all along that about 80-90% of citations go to the top 10-20% of articles.) Now that the top 10-20% (along with everything else in astrophysics) is accessible to everyone, everyone can cite it, and cull out the less relevant or important 80-90%.
This is not to say that OA does not also generate some extra citations for lesser articles too; but the OA citation advantage is bigger, the better the article -- the "quality advantage" -- (and perhaps most articles are not that good!). Since the majority of published articles are uncited (or only self-cited), there is probably a lot published that no amount of exposure and access can render worth citing!
(I think there may also exist some studies [independent of OA] on "like citing like" -- i.e., articles tending to be cited more at their own "quality" level rather than a higher one. [Simplistically, this means within their own citation bracket, rather than a higher one.] If true, this too could probably be analyzed from an OA standpoint.)
But the trouble is that apart from astrophysics and high energy physics, no other field has anywhere near 100% OA: It's closer to 15% in other fields. So aside from a (slightly negative) global correlation (between the growth of OA and the average length of the reference list), the effect of OA cannot be very deeply analyzed in most fields yet.
In addition, insofar as OA is concerned, much of the Evans effect seems to be based on "legacy OA," in which it is the older literature that is gradually being made accessible online or freely accessible online, after a long non-online, non-free interval. Fields differ in their speed of uptake and their citation latencies. In physics, which has a rapid turnaround time, there is already a tendency to cite recent work more, and OA is making the turnaround time even faster. In longer-latency fields, the picture may differ. For the legacy-OA effect especially, it is important to sort fields by their citation turnaround times; otherwise there can be biases (e.g. if short- or long-latency fields differ in the degree to which they do legacy OA archiving).
If I had to choose between the explanation of the Evans effect as a recency/bandwagon effect, as Evans interprets it, or as an increased overall quality/selectivity effect, I'd choose the latter (though I don't doubt there is a bandwagon effect too). And that is even without going on to point out that Tenopir & King, Gingras and others have shown that -- with or without OA -- there is still a good deal of usage and citation of the legacy literature (though it differs from field to field).
I wouldn't set much store by "skimming serendipity" (the discovery of adjacent work while skimming through print issues), since online search and retrieval has at least as much scope for serendipity. (And one would expect more likelihood of a bandwagon effect without OA, where authors may tend to cite already cited but inaccessible references "cite unseen.")
Are online and free online access broadening or narrowing research? They are broadening it by making all of it accessible to all researchers, focusing it on the best rather than merely the accessible, and accelerating it.
Thursday, July 31. 2008
Davis et al's 1-year Study of Self-Selection Bias: No Self-Archiving Control, No OA Effect, No Conclusion
The following is an expanded, hyperlinked version of a BMJ critique of:
Davis, PM, Lewenstein, BV, Simon, DH, Booth, JG, & Connolly, MJL (2008) Open access publishing, article downloads, and citations: randomised controlled trial. British Medical Journal 337: a568
Overview (by SH):
Davis et al.'s study was designed to test whether the "Open Access (OA) Advantage" (i.e., more citations to OA articles than to non-OA articles in the same journal and year) is an artifact of a "self-selection bias" (i.e., better authors are more likely to self-archive or better articles are more likely to be self-archived by their authors).
The control for self-selection bias was to select randomly which articles were made OA, rather than having the author choose. The result was that a year after publication the OA articles were not cited significantly more than the non-OA articles (although they were downloaded more).
The authors write:
"To control for self selection we carried out a randomised controlled experiment in which articles from a journal publisher’s websites were assigned to open access status or subscription access only"

The authors conclude:
"No evidence was found of a citation advantage for open access articles in the first year after publication. The citation advantage from open access reported widely in the literature may be an artefact of other causes."

Commentary:
To show that the OA advantage is an artefact of self-selection bias (or of any other factor), you first have to produce the OA advantage and then show that it is eliminated by eliminating self-selection bias (or any other artefact).
This is not what Davis et al. did. They simply showed that they could detect no OA advantage one year after publication in their sample. This is not surprising, since most other studies, some based on hundreds of thousands of articles, don't detect an OA advantage one year after publication either. It is too early.
To draw any conclusions at all from such a 1-year study, the authors would have had to do a control condition, in which they managed to find a sufficient number of self-selected, self-archived OA articles (from the same journals, for the same year) that do show the OA advantage, whereas their randomized OA articles do not. In the absence of that control condition, the finding that no OA advantage is detected in the first year for this particular sample of 247 out of 1619 articles in 11 physiological journals is completely uninformative.
The authors did find a download advantage within the first year, as other studies have found. This early download advantage for OA articles has also been found to be correlated with a citation advantage 18 months or more later. The authors try to argue that this correlation would not hold in their case, but they give no evidence (because they hurried to publish their study, originally intended to run four years, three years too early).
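The early-downloads/later-citations relationship reported by Brody et al. (2006) is, at bottom, a simple correlation between two per-article counts. A minimal sketch, with invented illustrative figures (not data from any of the studies discussed):

```python
from statistics import mean

# Invented per-article figures: downloads in the first 6 months after
# publication, and citations accumulated 18+ months later.
downloads_6mo = [120, 45, 300, 80, 15, 210, 60, 150]
citations_later = [8, 2, 20, 5, 0, 14, 3, 9]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, stdlib only."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(downloads_6mo, citations_later)
print(f"download-citation correlation: r = {r:.2f}")
```

A strongly positive r on such data is what licenses the inference that a first-year download advantage foreshadows a citation advantage that only becomes measurable in later years.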
(1) The Davis study was originally proposed (in December 2006) as intended to cover 4 years:
Davis, PM (2006) Randomized controlled study of OA publishing (see comment)

It has instead been released after a year.
(2) The Open Access (OA) Advantage (i.e., significantly more citations for OA articles, always comparing OA and non-OA articles in the same journal and year) has been reported in all fields tested so far, for example:
Hajjem, C., Harnad, S. and Gingras, Y. (2005) Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. IEEE Data Engineering Bulletin 28(4) pp. 39-47.

(3) There is always the logical possibility that the OA advantage is not a causal one, but merely an effect of self-selection: The better authors may be more likely to self-archive their articles and/or the better articles may be more likely to be self-archived; those better articles would be the ones that get more cited anyway.
(4) So it is a very good idea to try to control methodologically for this self-selection bias: The way to control it is exactly as Davis et al. have done, which is to select articles at random for being made OA, rather than having the authors self-select.
(5) Then, if it turns out that the citation advantage for randomized OA articles is significantly smaller than the citation advantage for self-selected-OA articles, the hypothesis that the OA advantage is all or mostly just a self-selection bias is supported.
(6) But that is not at all what Davis et al. did.
(7) All Davis et al. did was to find that their randomized OA articles had significantly higher downloads than non-OA articles, but no significant difference in citations.
(8) This was based on the first year after publication, when most of the prior studies on the OA advantage likewise find no significant OA advantage, because it is simply too early: the early results are too noisy! The OA advantage shows up in later years (1-4).
(9) If Davis et al. had been more self-critical, seeking to test and perhaps falsify their own hypothesis, rather than just to confirm it, they would have done the obvious control study, which is to test whether articles that were made OA through self-selected self-archiving by their authors (in the very same year, in the very same journals) show an OA advantage in that same interval. For if they do not, then of course the interval was too short, the results were released prematurely, and the study so far shows nothing at all: It is not until you have actually demonstrated an OA advantage that you can estimate how much of that advantage might in reality be due to a self-selection artefact!
(10) The study shows almost nothing at all, but not quite nothing, because one would expect (based on our own previous study, which showed that early downloads, at 6 months, predict enhanced citations at a year and a half or later) that Davis's increased downloads too would translate into increased citations, once given enough time.
Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Society for Information Science and Technology (JASIST) 57(8) pp. 1060-1072.

(11) The findings of Michael Kurtz and collaborators are also relevant in this regard. They looked only at astrophysics, which is special, in that (a) it is a field with only about a dozen journals, to which every research-active astronomer has subscription access -- these days they also have free online access via ADS -- and (b) it is a field in which most authors self-archive their preprints very early in arxiv -- much earlier than the date of publication.
Kurtz, M. J. and Henneken, E. A. (2007) Open Access does not increase citations for research articles from The Astrophysical Journal. Preprint deposited in arXiv September 6, 2007.

(12) Kurtz & Henneken, too, found the usual self-archiving advantage in astrophysics (i.e., about twice as many citations for OA papers than non-OA), but when they analyzed its cause, they found that most of the cause was the Early Advantage of access to the preprint, as much as a year before publication of the (OA) postprint. In addition, they found a self-selection bias (for prepublication preprints -- which is all that were involved here, because, as noted, in astrophysics, after publication, everything is OA): The better articles by the better authors were more likely to have been self-archived as preprints.
(13) Kurtz's results do not generalize to all fields, because it is not true of other fields either that (a) they already have 100% OA for their published postprints, or that (b) many authors tend to self-archive preprints before publication.
(14) However, the fact that early preprint self-archiving (in a field that is 100% OA as of postprint publication) is sufficient to double citations is very likely to translate into a similar effect, in a non-OA, non-preprint-archiving field, if one reckons on the basis of the one-year access embargo that many publishers are imposing on the postprint. (The yearlong "No-Embargo" advantage provided by postprint OA in other fields might not turn out to be so big as to double citations, as the preprint Early Advantage in astrophysics does, because any potential prepublication advantage is lost, and after publication there is at least the subscription access to the postprint; but the postpublication counterpart of the Early Advantage for postprints that are either not self-archived or embargoed is likely to be there too.)
(15) Moreover, the preprint OA advantage is primarily Early Advantage, and only secondarily Self-Selection.
(16) The size of the postprint self-selection bias would have been what Davis et al. tested -- if they had done the proper control, and waited long enough to get an actual OA effect to compare against. (Their regression analyses simply show that exactly as they detected no citation advantage in their sample and interval for the random OA articles, they likewise detected no citation advantage for the self-selected self-archived OA articles in their sample and interval: this hardly constitutes evidence that the (undetected) OA advantage is in reality a self-selection artefact!)
(17) We had reported in an unpublished 2007 pilot study that there was no statistically significant difference between the size of the OA advantage for mandated (i.e., obligatory) and unmandated (i.e., self-selected) self-archiving:
Hajjem, C & Harnad, S. (2007) The Open Access Citation Advantage: Quality Advantage Or Quality Bias? Preprint deposited in arXiv January 22, 2007.

(18) We will soon be reporting the results of a 4-year study on the OA advantage in mandated and unmandated self-archiving that confirms these earlier findings: Mandated self-archiving is like Davis et al.'s randomized OA, but we find that it does not reduce the OA advantage at all -- once enough time has elapsed for there to be an OA Advantage at all.
Saturday, September 22. 2007
Sandy Thatcher, President, Association of American University Presses (AAUP) wrote:
ST: "You make it all sound so simple, Stevan, but there is nothing simple about a transition from Green OA to Gold OA, including the redirection of savings from journal subscriptions to funding Gold OA journals, because as many wise people like Jim O'Donnell have pointed out on this list [liblicense], universities don't work that way."

I make no wishes, wise or unwise. And I make no conjectures ("Hypotheses non Fingo") -- except perhaps (if forced) as counter-conjectures, to counter others' unforced conjectures.
The actual empirical evidence (neither wish nor conjecture) is that OA self-archiving (Green OA) is (1) feasible, (2) being done, (3) beneficial, and (4) being mandated. Whether and when it ever goes on to generate cancellations and transitions and redirections is all pure speculation, based on no empirical evidence one way or the other (except that it hasn't happened yet, even in fields that reached 100% Green OA years ago). But if you insist on asking a hypothetical "what if?" question just the same, I respond with an equally hypothetical "then..." answer.
The factual part is fact. If wise men have privileged access to the future, so be it. I have none. I have only the available evidence, and logic. (And logic tells me, platitudinously, that necessity is the mother of invention, and where there's a will, there's a way, especially if/when the hypothetical cancellation windfall savings that no one has yet seen should ever materialize. Till then, I'll just go with the evidence-based four -- OA self-archiving (2), OA self-archiving mandates (4), and their already demonstrated feasibility (1) and benefits (3) -- leaving the speculation to those who prefer that sort of thing.)
ST: "Wishing it were so does not make it so. And by talking about peer review only, you oversimplify what is involved in journal publishing, which requires skills that go beyond simply conducting peer review and that are not most economically carried out by faculty, who are not trained for such tasks and whose dedication of time to them detracts from the exercise of their main talents as researchers."

Well, I could invoke my quarter century as founder and editor in chief of a major peer-reviewed journal as evidence that I may know what I am talking about...
But I'd rather just point out that the conjecture about journal-publication downsizing to just peer-review service-provision is part of the hypothetical conditional that I only invoke if someone insists on playing the speculation game. It is neither a wish nor a whim. I am perfectly content with 100% Green OA. Full stop.
Apart from that, I'll stick with the empirical facts -- reminder: self-archiving, self-archiving mandates, their demonstrated feasibility and their demonstrated benefits -- and abstain from the hypothesizing.
ST: "You are also wrong in interpreting PRISM as just another repetition of the same old tired anti-OA rhetoric. As a member of the publishing community whose press is a member of the PSP (but not an endorser of PRISM), I can tell you that this is not just more of the same."

If PRISM is making any new points -- empirical or logical -- I would be very grateful if Sandy (or anyone) would point out to me exactly what those new points are. For all I have seen has been a repetition of the very few and very familiar old points I and others have rebutted so many, many times before...
(Sandy seems to have overlooked the linked list of 21 references I included as evidence that these points have all been voiced, and rebutted, repeatedly, in bygone days. If anyone sends me a list of new points, it would be very helpful if they first checked that list to see whether those points are indeed new, rather than dated, discredited duplicates.)
ST: "Whether we are getting close to a "tipping point" is of course a matter of conjecture, but then so is the overall benefit from Green OA, which you always state as though it were an established fact rather than a hypothesis with some evidence in support of it yet hardly overwhelming evidence at this point in time."

First, since we are talking about wishful thinking, I know full well that the OA self-archiving advantage -- in terms of citations and downloads -- is something that the anti-OA publishers dearly wish were nonexistent, or merely a methodological artifact of some kind.
Second, I and others are quite happy to continue conducting actual empirical studies and analyses confirming the OA advantage, and demonstrating that it is not just an artifact (of either early access or self-selection bias for quality). That interesting ongoing question is at least substantive and empirical, hence new (especially when the challenges come from those -- such as Kurtz and Moed -- who have no vested interests in the outcome one way or the other). The doomsday prophecies and the hype about government control and censorship are not.
Friday, September 7. 2007
Kurtz & Henneken (2007) report a very interesting new result:
"We demonstrate conclusively that there is no 'Open Access Advantage' for papers from the Astrophysical Journal. The two to one citation advantage enjoyed by papers deposited in the arXiv e-print server is due entirely to the nature and timing of the deposited papers. This may have implications for other disciplines."

Earlier, Kurtz et al. (2005) had shown that the lion's share of the citation advantage of astrophysics papers self-archived as preprints in Arxiv was caused by (1) Early Advantage (EA) (earlier citations for papers self-archived earlier) and (2) Quality Bias (QB) (a self-selection bias toward self-archiving higher quality papers) and not by (3) Open Access (OA) itself (being freely accessible online to those who cannot afford subscription-toll access).
Kurtz et al. explained their finding by suggesting that:
"in a well funded field like astrophysics essentially everyone who is in a position to write research articles has full access to the literature."

This seems like a perfectly reasonable explanation for their findings. Where there is no access problem, OA cannot be the cause of whatever higher citation count is observed for self-archived articles. Moed (2007) has recently reported a similar result in Condensed Matter Physics, and so have Davis & Fromerth (2007) in 4 mathematics journals.
Kurtz & Henneken's latest study confirms and strengthens their prior finding: They compared citation counts for articles published in two successive years of the Astrophysical Journal. For one of the years, the journal was freely accessible to everyone; for the other it was only accessible to subscribers. The citation counts for the self-archived articles, as expected, were twice as high as for the non-self-archived articles. They then compared the citation-counts for non-self-archived articles in the free-access year and in the toll-access year, and found no difference. They concluded, again, that OA does not cause increased citations.
But of course K&H's prior explanation -- which is that there is no access problem in astrophysics -- applies here too: It means that in a field where there is no access problem, whatever citation advantage comes from making an article OA by self-archiving cannot be an OA effect.
K&H conclude that "[t]his may have implications for other disciplines."
It should be evident, however, that the degree to which this has implications for other disciplines depends largely on the degree to which it is true in other disciplines that "essentially everyone who is in a position to write research articles has full access to the literature."
We (Hajjem & Harnad 2007) have conducted (and are currently replicating) a similar study, but across the full spectrum of disciplines, measuring the citation advantage for mandated and unmandated self-archiving for articles from 4 Institutional Repositories that have self-archiving mandates (three universities plus CERN), each compared to articles in the very same journal and year by authors from other institutions (on the assumption that mandated self-archiving should have less of a self-selection quality bias than unmandated self-archiving).
Figure 1. Self-Selected Self-Archiving vs. Mandated Self-Archiving: Within-Journal Citation Ratios (for 2004, 4 mandating institutions, all fields). S = citation counts for articles self-archived at institutions with (Sm) and without (Sn) a self-archiving mandate. N = citation counts for non-archived articles at institutions with (Nm) and without (Nn) mandate (i.e., Nm = articles not yet compliant with mandate). Grand average of (log) S/N ratios (106,203 articles; 279 journals) is the OA advantage (18%); this is about the same as for Sn/Nn (27972 articles, 48 journals, 18%) and Sn/N (17%); ratio is higher for Sm/N (34%), higher still for Sm/Nm (57%, 541 articles, 20 journals); and Sm/Sn = 27%, so self-selected self-archiving does not yield more citations than mandated (if anything, it is rather the reverse). (All six within-pair differences are significant: correlated sample t-tests.)

We again confirmed the citation advantage for self-archiving, and found no difference in the size of that advantage for mandated and unmandated self-archiving. (The finding of an equally large self-archiving advantage for mandated and unmandated self-archiving was also confirmed for CERN, whose articles are all in physics -- although one could perhaps argue that CERN articles enjoy a quality advantage over articles from other institutions.)
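The grand average of per-journal log S/N ratios used in this kind of analysis can be sketched in a few lines; a minimal illustration with invented toy per-journal citation means, not the study's actual data:

```python
from math import exp, log
from statistics import mean

# Toy per-journal mean citation counts: (self-archived S, non-archived N)
# for articles in the same journal and year. Invented figures.
journal_means = {
    "J1": (6.0, 5.0),
    "J2": (3.5, 3.0),
    "J3": (10.2, 8.6),
}

def oa_advantage(journal_means):
    """Grand average of per-journal log(S/N) citation ratios, expressed
    as a percentage citation advantage for self-archived articles.
    Averaging log ratios within journals controls for between-journal
    differences in citation rates."""
    log_ratios = [log(s / n) for s, n in journal_means.values()]
    return (exp(mean(log_ratios)) - 1) * 100

print(f"OA advantage: {oa_advantage(journal_means):.1f}%")
```

The same function applied separately to the Sm/Nm, Sn/Nn, etc. pairings would yield the six ratios compared in the figure caption.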
A few closing points:
(1) It is likely that the size of the access problem differs from field to field, and with it the size of the OA citation advantage. Evidence suggests that most fields are not nearly as well-heeled as astrophysics. According to a JISC survey, 48% of researchers overall (biomedical sciences 53%, physical/engineering sciences 42%, social sciences 47%, language/linguistics 48% and arts/humanities 53%) have difficulty in gaining access to the resources they need to do their research. (The ARL statistics on US university serials holdings are consistent with this.) The overall access difficulty is roughly congruent with the reported OA access advantage.

Stevan Harnad
Brody, T., Harnad, S. and Carr, L. (2006) Earlier Web Usage Statistics as Predictors of Later Citation Impact. Journal of the American Association for Information Science and Technology (JASIST) 57(8) pp. 1060-1072.
Davis, P. M. and Fromerth, M. J. (2007) Does the arXiv lead to higher citations and reduced publisher downloads for mathematics articles? Scientometrics 71(2): 203-215.
Hajjem, C., Harnad, S. and Gingras, Y. (2005) Ten-Year Cross-Disciplinary Comparison of the Growth of Open Access and How it Increases Research Citation Impact. IEEE Data Engineering Bulletin 28(4) pp. 39-47.
Hajjem, C. and Harnad, S. (2007) The Open Access Citation Advantage: Quality Advantage Or Quality Bias? Technical Report, Electronics and Computer Science, University of Southampton.
Kurtz, M. J. and Henneken, E. A. (2007) Open Access does not increase citations for research articles from The Astrophysical Journal. Preprint deposited in arXiv September 6, 2007.
Kurtz, M. J., Eichhorn, G., Accomazzi, A., Grant, C. S., Demleitner, M., Murray, S. S. (2005) The Effect of Use and Access on Citations. Information Processing and Management 41: 1395-1402.
Moed, H. F. (2007) The effect of 'open access' on citation impact: An analysis of ArXiv's condensed matter section, Journal of the American Society for Information Science and Technology (JASIST) , August 30, 2007.
Seglen, P. O. (1992) The skewness of science. Journal of the American Society for Information Science 43:628-38
The American Scientist Open Access Forum has been chronicling and often directing the course of progress in providing Open Access to Universities' Peer-Reviewed Research Articles since its inception in the US in 1998 by the American Scientist, published by the Sigma Xi Society.
The Forum is largely for policy-makers at universities, research institutions and research funding agencies worldwide who are interested in institutional Open Access Provision policy. (It is not a general discussion group for serials, pricing or publishing issues: it is specifically focussed on institutional Open Access policy.)