Monday, June 30. 2008
Guenther Eysenbach suggests that what should be mandated (by institutions and funders) is not Green OA self-archiving but Gold OA publishing, because it would be more efficient and less costly. I explain below why neither of these is the case now (though Green OA mandates may indeed pave the way for an eventual transition to Gold OA that will be less costly and more efficient).
(1) There is no parallel publishing with Green OA self-archiving.
(2) Publishing (whether Gold OA or non-OA) is publishing.
(3) Green OA is access provision (to published articles), not publishing (of already-published articles).
(4) A posted, unpublished document is not listed in an academic CV as "published," and a published article that is also posted (or emailed, or photocopied) is not listed as two publications.
(5) Users cite the published article, not the access-source; where available, they also list the URL(s) for access purposes.
Society pays twice?
(1) Who pays twice for what?
(2) Tax-payers pay to fund research.
(3) Institutions pay to subscribe to the journals in which the research is published.
(4) No institution can afford to subscribe to all (refereed) journals; most can afford only a few.
(5) So OA is needed so that all users can access all articles.
(6) Green OA self-archiving supplements what is currently accessible to subscribers, by making it accessible to all would-be users, webwide.
(7) So who is paying for what, twice, with Green OA?
(8) Even with Gold OA today, no one would be paying twice.
(9) But hybrid "open choice" (optional gold) publishers would be paid twice -- once from institutional subscription money, and once from research grant money.
(10) If and when universal Green OA causes universal subscription cancellations, all journals can downsize to Gold OA, paid for by redirecting part of the windfall institutional cancellation savings rather than by redirecting scarce research funds from research.
Mandate gold OA and "pay once"?
(1) No one is paying twice.
(2) Institutions and funders cannot mandate publishers' choice of cost-recovery model.
(3) Institutions and funders cannot mandate authors' choice of journals.
(4) Until and unless subscriptions are cancelled, Gold OA requires extra funds, usually diverted from research.
(5) Green OA can provide immediate 100% OA.
(6) It just needs to be mandated by institutions and funders.
Pay for Gold OA by "Topping up" research grants?
(1) Topping up from what funds?
(2) Research is already underfunded and research funds are scarce.
(3) Redirecting research funds to pay needlessly for Gold OA publishing today just makes research funds scarcer, needlessly, because providing Green OA costs nothing, and subscriptions are still paying the costs of publishing.
Canada's CIHR Mandate without "infrastructure"?
(1) What infrastructure?
(2) Researchers whose institutions already have an OAI-compliant Institutional Repository (IR) can deposit there.
(3) For researchers who do not yet have an IR, there are many back-up central repositories in which they can deposit, such as DEPOT, CogPrints, or arXiv, all OAI-interoperable.
(4) What is lacking is deposits, not repositories to deposit in -- and the mandates will cure that.
(5) There is no need for a "Canada PubMed Central" as a locus of direct deposit.
(6) Central repositories can harvest from Institutional Repositories, through OAI interoperability.
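Point (6) can be made concrete: OAI interoperability means every compliant repository exposes its metadata through the same protocol (OAI-PMH), so a central service can aggregate from any IR. Below is a minimal, illustrative sketch; the endpoint URL, identifier and record contents are invented for the example, and a real harvester would fetch the XML over HTTP rather than from an inline string.

```python
# Minimal sketch of OAI-PMH harvesting: a central repository issues a
# ListRecords request to an institutional repository's OAI endpoint and
# parses the Dublin Core metadata out of the XML response. The sample
# response below is illustrative; a real harvester would fetch it with
# e.g. urllib.request from a URL like
#   https://ir.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <header><identifier>oai:ir.example.edu:123</identifier></header>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Sample Self-Archived Postprint</dc:title>
          <dc:creator>Doe, J.</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest(xml_text):
    """Extract (identifier, title) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    records = []
    for rec in root.findall(".//oai:record", NS):
        ident = rec.find("oai:header/oai:identifier", NS).text
        title = rec.find(".//dc:title", NS).text
        records.append((ident, title))
    return records

print(harvest(SAMPLE_RESPONSE))
```

Because every IR speaks the same protocol, the same few lines serve for harvesting from any of them; that is why a dedicated national central repository is not needed as a locus of direct deposit.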
Self-archiving is not free?
(1) Not free to whom?
(2) And to whom does it cost what?
(3) Institutions create repositories (for many reasons: record-keeping, performance assessment, access-provision, visibility).
(4) The cost to the institution per paper deposited is a few dollars.
(5) The cost to the researcher per paper deposited is a few keystrokes.
Can subscription savings pay for repository costs?
(1) Institutional repository costs per paper deposited are negligibly small.
(2) Central repository costs are up to those who think central repositories are needed, over and above OAI-interoperable institutional repositories.
(3) None of this has anything to do with publication costs or Gold OA fees.
(4) Today Gold OA fees per paper are not negligibly small, and they divert scarce funds from research.
(5) If and when Green OA causes subscriptions to become unsustainable, journals will cut costs by abandoning the paper edition and PDF-generation, offloading access-provision and archiving onto the distributed institutional repository network, downsizing to peer review alone, and converting to the Gold OA cost-recovery model -- paid for, per paper, by institutions' annual windfall savings from having cancelled journal subscriptions.
How much does it cost to run a Repository?
(1) Per deposited paper, next to nothing.
(2) And institutions derive many benefits from their IRs, having nothing to do with journal subscription costs one way or the other.
American Scientist Open Access Forum
Stan Katz has blogged a summary of the OA session at the Association of American University Presses (AAUP) Meeting in Montreal in the Chronicle of Higher Education Review. Here are six little clarifications:
(1) SK: "Open Access [OA] [means that] all scholarly articles ought to be mounted on free public access websites maintained by their universities." This brief definition is fine as a first cut.
(2) SK: "[T]he obligation [i.e., recent mandates like Harvard's] to “publish” by mounting articles on free websites will make it impossible for nonprofit presses and learned societies to sustain themselves."
The mandates are for authors to mount (i.e., self-archive) the final, refereed draft of their published article, not to "publish" it by mounting it. This self-archived "Green OA" draft is a supplement, provided for all users whose institutions cannot afford access to the publisher's version. It is not itself another publication.
The publisher might be nonprofit or commercial; that is not the relevant question. The relevant question is whether or not the supplementary OA draft causes cancellations, rendering journal subscriptions unsustainable for covering costs. So far, in the few fields where OA self-archiving has been taking place the longest (15 years) and most extensively (100%) -- e.g., high-energy physics, published by the American Physical Society (APS), the Institute of Physics (IOP) and Reed-Elsevier -- the publishers report that they find no detectable subscription cancellations associated with self-archiving.
However, if and when OA self-archiving ever does cause catastrophic cancellations, making subscriptions unsustainable, then, and only then, journals can (a) offload all their former access-provision and archiving functions, along with their costs, onto the distributed network of institutional repositories, (b) downsize to peer review alone, and (c) convert to the "Gold OA" publication-cost-recovery model, charging the author-institution, per outgoing paper, for peer review and certification instead of charging the user-institution, per incoming journal, for access. The institutions will (on the very same hypothesis, of catastrophic cancellations) have more than enough annual windfall subscription cancellation savings out of which to cover those charges.
(3) SK: "Harnad’s suggestion is that the universities transfer the payments they are currently making to their academic presses to subsidize peer review and archiving of their faculty scholarly output."
No, my suggestion is only that universities should mandate self-archiving. Then, if and when the resulting universal OA should ever make subscriptions unsustainable, the universities' subscription savings will, by the same token, be freed to pay for the university's peer-review costs. That is no subsidy: it is direct payment for a service, out of the very same funds formerly used to purchase a product.
(4) SK: "[U]niversities increasingly expect their presses to be self-sustaining economically, and are unlikely to put up the necessary funding"
Extra funding by whom, for whom, for what? While universities are subscribing to journals, the subscriptions pay for peer review. If and when journal subscriptions collapse, the universities' savings will pay for the peer review. (And again, this has nothing to do with university presses in particular.)
(5) SK: "[N]ot all scholars (and nonprofit publishers) are connected to universities. Who will subvene their publications?"
Unaffiliated scholars are rare enough, and the per-paper costs of peer review alone are low enough, that a small surcharge on the charges of the affiliated authors (the vast majority) will take care of these outliers.
And, as noted, whether publishers are nonprofit or commercial, and whether they are university or learned-society publishers, is irrelevant to any of this.
(6) SK: "[OA has] different... implications... for the humanities and social sciences"
OA pertains to refereed journal-article publication in all disciplines. The humanities and social sciences are not exceptions in any way. All research, in all scholarly and scientific disciplines, benefits from maximizing its uptake, usage, applications and impact by eliminating the access barriers that the advent of the PostGutenberg Galaxy has made obsolete and unnecessary.
Wednesday, June 25. 2008
Richard Poynder asks (in the American Scientist Open Access Forum):
Is it true that a Gold OA article-processing-charge model will create a situation in which "publishers are operating in a genuinely competitive market to offer a service that is good value for money"?
Excellent question(s)!
i. The answer would be "Yes" if all or most (refereed) journals today were Gold OA. But the vast majority of journals are non-OA. Hence the competition is only among a minority of journals (about 10-15%, and mostly not the top 10-15%).
ii. Meanwhile, without universal Green OA, the functions of access-provision and archiving -- and their costs -- remain part of journal publishing, both Gold OA and non-OA, with all journals still providing the PDF (and bearing its costs) as well. (I leave the issue of the print edition to those who specialize in pondering Escher impossible-figures.) I just point out that this is a long way from providing peer review alone. Nor does there appear to be a transition scenario, in the absence of Green OA and a distributed network of Green OA Institutional Repositories to take over the functions of access-provision and archiving.
iii. The answer hence being "No," the conditional question becomes moot.
iv. There is a known, tried, tested way of scaling to 100% OA, and it has been demonstrated to work: Green OA self-archiving and Green OA self-archiving mandates.
v. Unlike Gold OA, which not only faces substantial scaling problems but is not in the hands of the research community, Green OA is entirely in the hands of the research community and can be (and has been) mandated (and the mandates work).
vi. So what are we waiting for?
Tuesday, June 24. 2008
If I live to be a hundred, never will I understand why we prefer providing funds for our OA publications rather than providing OA for all our publications (at the cost of only a few keystrokes)...
It is splendid that University of Calgary has found $100,000 to fund OA for those U of C authors who wish to publish their articles in OA journals, but why does U of C not first mandate that those U of C authors who publish in any journal whatsoever provide OA to all their articles, by self-archiving them in their own OA institutional repository (as, for example, Harvard and 21 other universities have already done)?
Monday, June 23. 2008
What follows is a note of caution about the generalizability of Mark Rowse's consortial "flip" model for pre-emptive conversion to Gold Open Access publishing and its current implementation by SCOAP3 (Sponsoring Consortium for Open Access Publishing in Particle Physics), as promoted by CERN and the other participating institutions.
First, the important and unproblematic points of agreement:
1. CERN mandates Green OA self-archiving for all of its research output (and most particle physicists self-archive spontaneously in any case).
If all other research institutions and universities, in all disciplines, worldwide, already did the same as CERN, then we would already have universal Green OA and there would be no problem with the SCOAP3 experiment, which would be risk-free, regardless of whether it proved scalable or sustainable. But as it is, only 44 other institutions and funders have so far done as CERN has done and only about 10-15% of annual research article output is being self-archived spontaneously.
2. CERN is very probably right that the eventual future of peer-reviewed journals will be that publishing costs are no longer recovered from user-institution subscription fees but from author-institution publishing fees.
(Note that this is not "author pays" but "author-institution pays," as it should be, and does not entail diverting scarce research funds toward paying publishing fees.)
However, if all other research institutions and universities, in all disciplines, worldwide, were, like CERN, already mandating the self-archiving of 100% of their research output today, hence if the archiving and access-provision service were already being offloaded onto the distributed network of OA Institutional Repositories instead of having to be provided by journals, then the essential cost of journal publishing would shrink to just the cost of providing peer review alone, and the asking price for that service alone would be far less than what is being charged currently for Gold OA and all the other products and services that are being co-bundled into Gold OA today (print edition, PDF, archiving, access-provision). Moreover, each author-institution's annual savings from its user-institution subscription cancellations would then be available to pay that much reduced asking price for peer review alone, per institutional paper published. (That would then be a natural, non-Rowsean "flip" that worked, thanks to universal Green OA.)
But there are several problems with the SCOAP3 approach at this time, and they arise from an incoherence at the heart of the "flip" model (in the pre-emptive form proposed by Mark Rowse, former CEO of the journal aggregator, INGENTA, in a proposal not unlike an even earlier one made by Arnoud De Kemp, then at Springer, for a "click-through oligopoly"):
3.1. The current asking price for Gold OA is vastly inflated. Because there is nowhere near universal Green OA self-archiving yet, paying for Gold OA at all today is not only premature and unnecessary (if, that is, our goal is OA itself, rather than something else [like lowering journal prices, maybe?]), but the current Gold OA asking price is unrealistically high.
Green OA needs to come first, before conversion to Gold OA publishing. Then, if and when universal Green OA induces subscription cancellations, which in turn drive cost-cutting and downsizing to the true essentials of OA publishing (with the print edition terminated and OA repositories taking over the burden of access-provision and archiving), peer review can be paid for by author-institutions, per outgoing paper published, instead of being paid for, as now, by user-institutions, per incoming journal purchased. At the present time, however, there exist (a) neither the institutional need to pay to publish in order to provide OA, nor (b) the institutional funds to pay to publish (because those funds are currently tied up in paying journal subscriptions, which are in turn covering the costs of publishing indirectly); in addition, (c) the price of publishing as it is currently done today, with everything that is still being co-bundled into it, is still far too high.
3.2. The Rowsean "flip" model is globally incoherent and unscalable. SCOAP3 consortial sponsorship/membership is not only based on an arbitrarily inflated asking price today, with inessentials (like the paper edition or the publisher's PDF) gratuitously co-bundled into it, but the consortial payment model itself is incoherent and unscalable, for two reasons:
(i) Why should institutions that can access all journal articles for free (Green) pay for Gold until/unless they have to (in order to get their own research output published)?
And (ii) on what basis are institutions to negotiate in advance with each individual journal (there are 25,000 in all!) how many of their researchers' papers will be accepted and published per year, as if that too were some sort of annual subscription quota? Publishing is individual paper- and journal-based, not consortial bundled subscription-based.
4. Consortial institutional payment for co-bundled incoming journals does not translate into consortial payment for the peer review of institutions' individual outgoing articles. Consortial payment for publication gives the illusion of making sense only if one considers it locally, as CERN is doing, for one field (particle physics, a field that already has Green OA), with a set of collaborating institutions, ready and willing to "flip" to paying the same journals jointly for publishing, much as they had been paying jointly for subscribing. But this Rowsean "flip" model stops making sense as one scales up globally across fields, institutions, publishers and journals -- and particularly to that overwhelming majority of fields that do not yet have Green OA.
We are meant to imagine all institutions, pre-emptively paying all journals -- co-bundled, subscription/license-style -- in advance, for an annual "quota" (again subscription-style) of accepted publications. This is rather like paying for all meals, for all consumers, by all vendors, through advance annual "institutional" meal-plans, each consumer specifying to his institution, each year, what meals he intends to consume, from which vendor. And that still leaves out the crucial factor, which is that each vendor needs to "peer review" each individual meal-request to determine whether it is worthy of fulfilling at all (by that vendor)! It also leaves out the question of the price per meal, which today includes extras such as styrofoam containers, mustard, relish, salad bar, home delivery and a child's toy prize co-bundled into it, none of which the consumers may need or want any longer, once their institutional repositories can supply it all on their own...
5. Conclusions. In sum, the problem is not only that a Rowsean "flip" is profligate and premature at today's asking prices in fields where universal Green OA self-archiving has not yet downsized publishing and its costs to their post-OA essentials. Even apart from that, the Rowsean consortial "sponsorship/flip" model simply does not scale up to all journals, across all fields, researchers and institutions, because it is based on the institutional co-bundled license/subscription model. That in turn involves an institutional library budget (1) pre-paying (i.e., subscribing to) a specific yearly quota of incoming journals (consisting of articles published by other institutions), per annual incoming journal (bundle), rather than (2) paying for the peer review of each institution's own individual outgoing articles, per individual outgoing article.
Collective, bundled annual institutional subscriptions (for that is exactly what they would be!) are simply the wrong model for paying for individual, per-article peer review services. Twenty-five thousand peer-reviewed journals (publishing 2.5 million articles annually) cannot each agree in advance to accept an annual quota of N(i) articles from each of N (c. 10,000) institutions worldwide (and vice versa), even if many, most or all of the journals are "bundled" into a collective, bundled omni-publisher "Big Deal." Authors choose journals, journals compete for articles, and referees (not consortial subscribers) decide what gets accepted, where. (This could conceivably all be done in bulk for bulk publishers, on an annual pro-rated basis, based on last year's institutional publications, but then that would hardly be different from -- and certainly not simpler or more accurate than -- just paying each journal by the article.)
The Rowsean instant flip model has not been thought through beyond the confines of the special case of CERN, SCOAP3 and a long-self-archiving (hence Green) field (particle physics). As a consequence, instead of advancing universal Green self-archiving across all fields -- and thereby helping to set the stage for universal OA, and possibly eventual global cancellations, publication cost-cutting, downsizing, and conversion to per-article peer review services, paid for out of the institutional cancellation savings, via Gold OA (a gradual, global, leveraged, non-Rowsean "flip," driven by Green OA) -- the SCOAP3 consortium institutions are pushing through a (literally) pre-emptive solution for their library budget problems, in one special subfield (particle physics):
This would be fine (vive la liberté!), except that it also keeps being portrayed and perceived (by some, not everyone) as a scalable solution for achieving OA, which it most definitely is not. The prospect of an instant Rowsean "flip" is a local illusion, and hence a source of distraction and confusion for other fields and institutions worldwide, of which only 44 have as yet even made the successful transition to Green OA that particle physicists in general, and CERN in particular, have already made.
6. Recommendation. What are needed are prominent caveats and disclaimers clearly explaining the current unsuitability of the SCOAP3 "flip" model for the rest of the research world, along with the prominent injunction that the rest of the world's institutions and disciplines should first go Green, as CERN did, before contemplating any "flip-flops"...
Harnad, S. (2007) The Green Road to Open Access: A Leveraged Transition. In: Anna Gacs, The Culture of Periodicals from the Perspective of the Electronic Age. L'Harmattan, pp. 99-106.
Stevan Harnad
Friday, June 20. 2008
Another superb OA Interview by Richard Poynder, this time of Leslie Chan of the University of Toronto, one of the original co-drafters and co-signatories of the Budapest OA Initiative, a formidable and relentless (yet gentle and patient) force in propelling the world -- and especially the developing world -- toward OA. Leslie will be hosting ELPUB 08 in Toronto next week where he will also be launching the OSI-supported OA sourcebook OASIS (Open Access Scholarly Information Sourcebook).
Wednesday, June 18. 2008
ELPUB 2008, with the theme of Open Scholarship, meets in Toronto, Canada, June 25-27 2008.
Leslie Chan (U. Toronto) is Conference Chair.
Les Carr of U. Southampton/EPrints will conduct a workshop on
Repositories that Support Research Management
John Willinsky (UBC & Stanford) will deliver the opening keynote:
The Quality of Open Scholarship: What Follows from Open?
Stevan Harnad (UQAM & Southampton) will deliver the closing keynote:
Filling OA Space At Long Last: Integrating University and Funder Mandates and Metrics.
The Association of American University Presses (AAUP) meets in Montreal 26-28 June.
Open Access: From the Budapest Open to Harvard's Addendum
Moderated by Sandy Thatcher (Director, Penn State University Press) with presentations by Stevan Harnad (UQAM & Southampton) and Stanley Katz (Princeton).
Monday, June 16. 2008
Steve Inchcoombe, managing director of Nature Publishing Group, writes of Nature:
"We also support and encourage self-archiving of the author’s final version of accepted articles."
But if you look in the Romeo directory of publisher self-archiving policies, you will find that whereas Nature is indeed among the 92% of journals that have endorsed the immediate self-archiving of the author's unrefereed first draft (the preprint), Nature is not among the 63% of journals that have endorsed the immediate self-archiving of the author's peer-reviewed final draft (the postprint) -- the one that is the real target of OA, and indispensable for research usage and progress.
Nature used to be "green" on the immediate self-archiving of both preprints and postprints, but, electing to take half of NIH's maximal allowable access embargo as its own minimum, Nature became one of the few journals that back-slid in 2005 to impose a 6-month embargo on open access to the peer-reviewed final draft.
It doesn't make much difference, because Institutional Repositories still have the almost-OA email eprint request-a-copy Button to tide over research usage needs during the embargo, but let it not be thought that Nature is still on the "side of the angels" insofar as OA is concerned...
Maxine Clarke, Publishing Executive Editor, Nature, replied: "Don't forget that people can always read the article in the journal, Stevan, as soon as it is published! The vast majority of scientists are either at an institution with a site license or can access the journal free via OARE, AGORA or HINARI, so they don't even have to take out a subscription."
But what about those would-be users worldwide who are "[n]either at an institution with a site license [n]or can access the journal free via OARE, AGORA or HINARI"? Is there any reason whatsoever why they should all be denied access for six months if they (or their institutions) do not "have [the funds] to take out a subscription"?
Because that, Maxine, is what OA is really all about.
Sunday, June 15. 2008
Numbers with a number of problems
Charles Oppenheim wrote (in the American Scientist Open Access Forum):
CHARLES OPPENHEIM: "I've now read the whole report. Yes, it tilts at the usual windmills, and rightly dismissed the use of Impact factors for anything but crude comparisons, but it fails to address the fundamental issue, which is: citation and other metrics correlate superbly with subjective peer review. Both methods have their faults, but they are clearly measuring the same (or closely related) things. Ergo, if you have [to] evaluate research in some way, there is no reason NOT to use them! It also keeps referring to examples from the field of maths, which is a very strange subject citation-wise."
I have now read the IMU report too, and agree with Charles that it makes many valid points, but it misunderstands the one fundamental point concerning the question at hand: can and should metrics be used in place of peer-panel-based rankings in the UK Research Assessment Exercise (RAE) and its successors and homologues elsewhere? And there the answer is a definite Yes.
The IMU critique points out that research metrics in particular and statistics in general are often misused, and this is certainly true. It also points out that metrics are often used without validation. This too is correct. There is also a simplistic tendency to try to use one single metric, rather than multiple metrics that can complement and correct one another. There too, a practical and methodological error is correctly pointed out. It is also true that the "journal impact factor" has many flaws, and should on no account be used to rank individual papers or researchers, and especially not alone, as a single metric.
But what all this valuable, valid cautionary discussion overlooks is not only the possibility but the empirically demonstrated fact that there exist metrics that are highly correlated with human expert rankings. It follows that, to the degree that such metrics account for the same variance, they can substitute for the human rankings. The substitution is desirable, because expert rankings are extremely costly in terms of expert time and resources. Moreover, a metric that can be shown to be highly correlated with an already-validated predictor variable (such as expert rankings) thereby itself becomes a validated predictor variable. And this is why the answer to the basic question -- whether the RAE's decision to convert to metrics was a sound one -- is: Yes.
Nevertheless, the IMU's cautions are welcome: Metrics do need to be validated; they do need to be multiple, rather than a single, unidimensional index; they do have to be separately validated for each discipline, and the weights on the multiple metrics need to be calibrated and adjusted both for the discipline being assessed and for the properties on which it is being ranked. The RAE 2008 database provides the ideal opportunity to do all this discipline-specific validation and calibration, because it is providing parallel data from both peer panel rankings and metrics. The metrics, however, should be as rich and diverse as possible, to capitalize on this unique opportunity for joint validation.
Here are some comments on particular points in the IMU report. (All quotes are from the report):
"The meaning of a citation can be even more subjective than peer review."
True. But if there is a non-metric criterion measure -- such as peer review -- on which we already rely, then metrics can be cross-validated against that criterion measure, and this is exactly what the RAE 2008 database makes it possible to do, for all disciplines, at the level of an entire sizeable nation's total research output...
"The sole reliance on citation data provides at best an incomplete and often shallow understanding of research -- an understanding that is valid only when reinforced by other judgments."
This is correct. But the empirical fact has turned out to be that a department's total article/author citation counts are highly correlated with its peer rankings in the RAE in every discipline tested. This does not mean that citation counts are the only metric that should be used, or that they account for 100% of the variance in peer rankings. But it is strong evidence that citation counts should be among the metrics used, and it constitutes a (pairwise) validation.
"Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear."
As noted, this is a foolish error if the journal impact factor is used alone, but it may enhance predictivity and hence validity if added to a battery of jointly validated metrics.
"The validity of statistics such as the impact factor and h-index is neither well understood nor well studied."
The h-index (and its variants) were created ad hoc, without validation. They turn out to be highly correlated with citation counts (for obvious reasons, since they are in part based on them). Again, they are all welcome in a battery of metrics to be jointly cross-validated against peer rankings or other already-validated or face-valid metrics.
"[C]itation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused."
It is certainly true that there are many more potential metrics of research performance, productivity, impact and quality than just citation metrics (e.g., download counts, student counts, research funding, etc.). They should all be jointly validated, discipline by discipline, and each metric should be weighted according to what percentage of the criterion variance (e.g., RAE 2008 peer rankings) it predicts.
"[R]elying primarily on metrics (statistics) derived from citation data rather than a variety of methods, including judgments by scientists themselves..."
The whole point is to cross-validate the metrics against the peer judgments, and then use the weighted metrics in place of the peer judgments, in accordance with their validated predictive power.
"[B]ibliometrics (using counts of journal articles and their citations) will be a central quality index in this system [RAE]"
Yes, but the successor of the RAE is not yet clear on which metrics it will use, and whether and how it will validate them. There is still some risk that a small number of metrics will simply be picked a priori, without systematic validation. It is to be hoped that the IMU critique, along with other critiques and recommendations, will result in the use of the 2008 parallel metric/peer data for a systematic and exhaustive cross-validation exercise, separately for each discipline. Future assessments can then use the metric battery, with initialized weights (specific to each discipline), and can calibrate and optimize them across the years, as more data accumulate -- including spot-checks cross-validating periodically against "light-touch" peer rankings and other validated or face-valid measures.
"sole reliance on citation-based metrics replaces one kind of judgment with another. Instead of subjective peer review one has the subjective interpretation of a citation's meaning."

Correct. This is why multiple metrics are needed, and why they need to be systematically cross-validated against already-validated or face-valid criteria (such as peer judgment).
"Research usually has multiple goals, both short-term and long, and it is therefore reasonable that its value must be judged by multiple criteria."

Yes, and this means multiple, validated metrics. (Time-course parameters, such as growth and decay rates of download, citation and other metrics, are themselves metrics.)
"many things, both real and abstract, that cannot be simply ordered, in the sense that each two can be compared"

Yes, we should not compare the incomparable and incommensurable. But whatever we are already comparing, by other means, can be used to cross-validate metrics. (And of course it should be done discipline by discipline, and sometimes even by sub-discipline, rather than by treating all research as if it were of the same kind, with the same metrics and weights.)
"plea to use multiple methods to assess the quality of research"

A valid plea, but the multiple "methods" means multiple metrics, to be tested for reliability and validity against already-validated methods.
"Measures of esteem such as invitations, membership on editorial boards, and awards often measure quality. In some disciplines and in some countries, grant funding can play a role. And peer review -- the judgment of fellow scientists -- is an important component of assessment."

These are all sensible candidate metrics to be included, alongside citation and other candidate metrics, in the multiple regression equation to be cross-validated jointly against already-validated criteria, such as peer rankings (especially in RAE 2008).
"lure of a simple process and simple numbers (preferably a single number) seems to overcome common sense and good judgment."

Validation should definitely be done with multiple metrics, jointly, using multiple regression analysis, not with a single metric, and not one at a time.
"special citation culture of mathematics, with low citation counts for journals, papers, and authors, makes it especially vulnerable to the abuse of citation statistics."

Metric validation and weighting should be done separately, field by field.
"For some fields, such as bio-medical sciences, this is appropriate because most published articles receive most of their citations soon after publication. In other fields, such as mathematics, most citations occur beyond the two-year period."

Chronometrics -- growth and decay rates and other time-based parameters for downloads, citations and other cumulative measures -- should be among the battery of candidate metrics for validation.
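A minimal illustration of a chronometric: the decay rate of a paper's annual citation counts, estimated by a log-linear least-squares fit, is itself a candidate metric that distinguishes fast-fading from slow-burning papers. The citation series below are invented:

```python
import math

def decay_rate(annual_counts):
    """Estimate k in c(t) ~ c0 * exp(-k t) by least squares on log counts."""
    ts = list(range(len(annual_counts)))
    logs = [math.log(c) for c in annual_counts]
    n = len(ts)
    mt, ml = sum(ts) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(ts, logs))
             / sum((t - mt) ** 2 for t in ts))
    return -slope

# Invented series: citations per year after publication, for two papers
fast_fading = [64, 32, 16, 8, 4]   # halves every year
slow_burn   = [10, 9, 11, 10, 12]  # roughly flat

print(decay_rate(fast_fading))  # ln 2, about 0.69 per year
print(decay_rate(slow_burn))    # near zero
```

A biomedical paper and a mathematics paper with identical lifetime citation totals could have very different decay rates, which is exactly the kind of field difference a validated battery should capture.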
"The impact factor varies considerably among disciplines... The impact factor can vary considerably from year to year, and the variation tends to be larger for smaller journals."

All true. Hence the journal impact factor -- perhaps with various time constants -- should be part of the battery of candidate metrics, not simply used a priori.
"The most important criticism of the impact factor is that its meaning is not well understood. When using the impact factor to compare two journals, there is no a priori model that defines what it means to be 'better'. The only model derives from the impact factor itself -- a larger impact factor means a better journal... How does the impact factor measure quality? Is it the best statistic to measure quality? What precisely does it measure? Remarkably little is known..."

And this is because the journal impact factor (like most other metrics) has not been cross-validated against face-valid criteria, such as peer rankings.
"employing other criteria to refine the ranking and verify that the groups make sense"

In other words, systematic cross-validation is needed.
"impact factor cannot be used to compare journals across disciplines"

All metrics should be independently validated for each discipline.
"impact factor may not accurately reflect the full range of citation activity in some disciplines, both because not all journals are indexed and because the time period is too short. Other statistics based on longer periods of time and more journals may be better indicators of quality. Finally, citations are only one way to judge journals, and should be supplemented with other information"

Chronometrics again, and multiple metrics.
"The impact factor and similar citation-based statistics can be misused when ranking journals, but there is a more fundamental and more insidious misuse: Using the impact factor to compare individual papers, people, programs, or even disciplines"

For individuals and papers, use individual citation counts and other article-level metrics: multiple metrics, jointly validated.
"the distribution of citation counts for individual papers in a journal is highly skewed, approximating a so-called power law... highly skewed distribution and the narrow window of time used to compute the impact factor"

To the extent that distributions are pertinent, they too can be parametrized and taken into account in validating metrics. Comparing like with like (e.g., discipline by discipline) should also help maximize comparability.
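A small numeric illustration of why a skewed distribution undermines the impact factor: the impact factor is a mean, and for a heavily right-skewed citation distribution the mean can sit an order of magnitude above the median, so it describes almost no individual paper. The counts below are invented:

```python
from statistics import mean, median

# Invented journal: citation counts per paper, heavily right-skewed
citations = [0, 0, 1, 1, 1, 2, 2, 3, 5, 120]

print(mean(citations))    # 13.5 -- pulled up by one blockbuster paper
print(median(citations))  # 1.5  -- what a typical paper actually receives
```

Distributional parameters like this skew can themselves enter the candidate-metric battery.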
"using the impact factor as a proxy for actual citation counts for individual papers"

No need to use one metric as a proxy for another. Jointly validate them all.
"if you want to rank a person's papers using only citations to measure the quality of a particular paper, you must begin by counting that paper's citations. The impact factor of the journal in which the paper appears is not a reliable substitute."

Correct, but this obvious truth does not need to be repeated so many times; it is an argument against single metrics in general, and against the journal impact factor as a single metric in particular. But there is nothing wrong with including it in a battery of metrics for validation.
On the h-index: "Hirsch extols the virtues of the h-index by claiming that 'h is preferable to other single-number criteria commonly used to evaluate scientific output of a researcher...' [Hirsch 2005, p. 1], but he neither defines 'preferable' nor explains why one wants to find 'single-number criteria.'... Much of the analysis consists of showing 'convergent validity,' that is, the h-index correlates well with other publication/citation metrics, such as the number of published papers or the total number of citations. This correlation is unremarkable, since all these variables are functions of the same basic phenomenon..."

The h-index is again a single metric. And cross-validation only works against either an already-validated or a face-valid criterion, not just another unvalidated metric. And the only way multiple metrics, all inter-correlated, can be partitioned and weighted is with multiple regression analysis -- and once again against a criterion, such as peer rankings.
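For concreteness, the h-index itself is trivial to compute from a citation list, which is part of its appeal as a "single number" (toy data below; note that because it is a function of the same citation counts as the other citation metrics, its correlation with them demonstrates nothing about validity):

```python
def h_index(citation_counts):
    """Largest h such that h of the papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 2, 0]))  # 3: three papers with >= 3 citations each
```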
"Some might argue that the meaning of citations is immaterial because citation-based statistics are highly correlated with some other measure of research quality (such as peer review)."

Not only might some say it: many have said it, and they are quite right. That means citation counts have been validated against peer review, pairwise. Now it is time to cross-validate an entire spectrum of candidate metrics, so each can be weighted for its predictive contribution.
"The conclusion seems to be that citation-based statistics, regardless of their precise meaning, should replace other methods of assessment, because they often agree with them. Aside from the circularity of this argument, the fallacy of such reasoning is easy to see."

The argument is circular only if unvalidated metrics are being cross-correlated with other unvalidated metrics. Then it's a skyhook. But when they are cross-validated against a criterion like peer rankings, which have been the predominant basis for the RAE for 20 years, they are being cross-validated against a face-valid criterion -- for which they can indeed be subsequently substituted, if the correlation turns out to be high enough.
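As a sketch of what validating a metric against an external criterion looks like: a Spearman rank correlation between departmental citation totals and peer-review scores quantifies how well the metric reproduces the peer ordering. All numbers below are invented, and the no-ties shortcut formula is used only because the toy data have none:

```python
def ranks(xs):
    """Rank values, 1 = largest (ties not handled; fine for toy data)."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented data: departmental citation totals vs. peer-review scores
citations  = [410, 250, 380, 90, 150]
peer_score = [5, 4, 3, 1, 2]   # higher = better peer rating

print(spearman(citations, peer_score))  # 0.9 on this toy data
```

The same computation performed between two unvalidated metrics would be the "skyhook" above; performed against a face-valid criterion, it is validation.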
"Damned lies and statistics"

Yes, one can lie with unvalidated metrics and statistics. But we are talking here about validating metrics against validated or face-valid criteria. In that case, the metrics lie no more (or less) than the criteria did, before the substitution.
"Several groups have pushed the idea of using Google Scholar to implement citation-based statistics, such as the h-index, but the data contained in Google Scholar is often inaccurate (since things like author names are automatically extracted from web postings)..."

This is correct. But Google Scholar's accuracy is growing daily, with growing content, and there are ways to triangulate author identity from such data even before the (inevitable) unique author identifier is adopted.
"Citation statistics for individual scientists are sometimes difficult to obtain because authors are not uniquely identified..."

True, but a good approximation is -- or will soon be -- possible (not for arbitrary search on the works of "Lee," but, for example, for all the works of all the authors in the UK university LDAPs).
"Citation counts seem to be correlated with quality, and there is an intuitive understanding that high-quality articles are highly-cited."

The intuition is replaced by objective data once the predictor metric's correlation with peer rankings of quality is demonstrated -- and it is replaced in proportion to the proportion of the criterion variance the metric accounts for.
"But as explained above, some articles, especially in some disciplines, are highly-cited for reasons other than high quality, and it does not follow that highly-cited articles are necessarily high quality."

This is why validation and weighting of metrics must be done separately, discipline by discipline, and why citation metrics alone are not enough: multiple metrics are needed to take into account multiple influences on quality and impact, and to weight them accordingly.
"The precise interpretation of rankings based on citation statistics needs to be better understood."

Once a sufficiently broad and predictive battery of metrics is validated and its weights initialized (e.g., in RAE 2008), further interpretation and fine-tuning can follow.
"In addition, if citation statistics play a central role in research assessment, it is clear that authors, editors, and even publishers will find ways to manipulate the system to their advantage."

True, but inasmuch as the new metric batteries will be Open Access, there will also be multiple metrics for detecting metric anomalies, inconsistency and manipulation, and for naming and shaming the manipulators, which will serve to control the temptation.
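One simple anomaly check of the kind an open battery of metrics makes possible: flag entities whose rank on one metric diverges sharply from their rank on another, since genuine impact tends to move the whole battery together. The data and the divergence threshold below are invented; a real screen would use many metrics and calibrated thresholds:

```python
def ranks(xs):
    """Rank values, 1 = largest (ties not handled; fine for toy data)."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Invented battery: entry 3 has far more citations than its downloads
# (or anything else) would lead one to expect
citations = [30, 45, 25, 400, 35]
downloads = [300, 380, 250, 280, 350]

rc, rd = ranks(citations), ranks(downloads)
for i, (a, b) in enumerate(zip(rc, rd)):
    if abs(a - b) >= 3:
        print(f"entry {i}: citation rank {a} vs download rank {b} -- flag for scrutiny")
```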
Harnad, S. (2001) Research access, impact and assessment. Times Higher Education Supplement 1487: p. 16.
Loet Leydesdorff wrote in the ASIS&T Special Interest Group on Metrics:
LL: "It seems to me that it is difficult to generalize from one setting in which human experts and certain ranks coincided to the existence of such correlations across the board. Much may depend on how the experts are selected. I did some research in which referee reports did not correlate with citation and publication measures."

Much may depend on how the experts are selected, but that was just as true during the 20 years in which rankings by experts were the sole criterion for the rankings in the UK Research Assessment Exercise (RAE). (In validating predictive metrics one must not endeavor to be holier than the Pope: your predictor can at best hope to be as good as, but not better than, your criterion.)
That said: All correlations to date between total departmental author citation counts (not journal impact factors!) and RAE peer rankings have been positive, sizable, and statistically significant for the RAE, in all disciplines and all years tested. Variance there will be, always, but a good-sized component from citations alone seems to be well-established. Please see the studies of Professor Oppenheim and others, for example as cited in:
Harnad, S., Carr, L., Brody, T. & Oppenheim, C. (2003) Mandated online RAE CVs Linked to University Eprint Archives: Improving the UK Research Assessment Exercise whilst making it cheaper and easier. Ariadne 35.
LL: "Human experts are necessarily selected from a population of experts, and it is often difficult to delineate between fields of expertise."

Correct. And the RAE rankings are done separately, discipline by discipline; the validation of the metrics should be done that way too.
Perhaps there is sometimes a case for separate rankings even at sub-disciplinary level. I expect the departments will be able to sort that out. (And note that the RAE correlations do not constitute a validation of metrics for evaluating individuals: I am confident that that too will be possible, but it will require many more metrics and much more validation.)
LL: "Similarly, we know from quite some research that citation and publication practices are field-specific and that fields are not so easy to delineate. Results may be very sensitive to choices made, for example, in terms of citation windows."

As noted, some of the variance in peer judgments will depend on the sample of peers chosen; that is unavoidable. That is also why "light touch" peer re-validation, spot-checks, updates and optimizations on the initialized metric weights are also a good idea, across the years.
As to the need to evaluate sub-disciplines independently: that question exceeds the scope of metrics and metric validation.
LL: "Thus, I am a bit doubtful about your claims of an 'empirically demonstrated fact'."

Within the scope mentioned -- the RAE peer rankings, for disciplines such as they have been partitioned for the past two decades -- there are ample grounds for confidence in the empirical results to date.
(And please note that this has nothing to do with journal impact factors, journal field classification, or journal rankings. It is about the RAE and the ranking of university departments by peer panels, as correlated with citation counts.)
American Scientist Open Access Forum
The American Scientist Open Access Forum has been chronicling and often directing the course of progress in providing Open Access to Universities' Peer-Reviewed Research Articles since its inception in the US in 1998 by the American Scientist, published by the Sigma Xi Society.
The Forum is largely for policy-makers at universities, research institutions and research funding agencies worldwide who are interested in institutional Open Access provision policy. (It is not a general discussion group for serials, pricing or publishing issues: it is specifically focussed on institutional Open Access policy.)
You can sign on to the Forum here.