Assessing Journal Quality
Librarians are frequently asked to provide rankings of journals in particular disciplines. Though there are many supposedly authoritative lists that rank journals, it might be wise to heed the motto caveat lector. The problem is: who is the “authority” behind the “authoritative”? And how are they qualified to be authoritative? Some rankings are essentially based on people’s opinions, e.g. faculty are asked to rank journals in their fields. Such surveys share common problems: the resulting lists often ignore journals focusing on more out-of-the-way disciplinary areas, frequently over-represent American journals, and often give too little attention to newer journals. Even when journal ranking lists utilize bibliometrics, one should bear in mind that no single metric can address all relevant variables, since each metric has its own focus and bias. Moreover, a metric that is useful for one subject area might be quite inappropriate for another. An interesting discussion of three major journal ranking lists, and the criticism leveled at all three, is available in a 2010 article, “The Controversial Policies of Journal Rankings: Evaluating Social Sciences and Humanities.”
Quality of Specific Journals
Sometimes the question asked of librarians is “How can I tell if this journal is of good scholarly worth?” To provide a definitive answer is usually difficult without first agreeing on a set of quite precise evaluative criteria. Obviously, when such criteria change, the answer often changes.
Some Criteria for Evaluating Journals:
a) Impact Factor
A journal’s Impact Factor (IF) is often used to judge its quality. One may use the database Journal Citation Reports (JCR) to find the IFs of roughly 11,000 Institute for Scientific Information (ISI) journals. The IF measures the frequency with which articles a journal published in the past two years have been cited in a particular year: it is calculated by dividing the number of current-year citations to those articles by the total number of articles published in the two previous years. An IF of 2.0 signifies that, on average, the articles published one or two years ago have each been cited twice in the current year.
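To make the arithmetic concrete, here is a minimal sketch of the two-year IF calculation just described. The journal figures below are invented for illustration, not real JCR data.

```python
# Toy illustration of the two-year Impact Factor described above.
# All figures are invented for the example, not real journal data.

def impact_factor(current_year_citations, articles_prev_two_years):
    """Citations received this year by items the journal published in
    the previous two years, divided by the number of items published
    in those two years."""
    return current_year_citations / articles_prev_two_years

# Suppose a journal published 120 articles in 2009 and 130 in 2010,
# and those 250 articles were cited 500 times during 2011:
print(impact_factor(500, 120 + 130))  # 2.0
```

So an IF of 2.0 arises when the citation count exactly doubles the article count, matching the interpretation given above.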
There are numerous caveats associated with IFs. Only a small number of journals have them, i.e. only those indexed by JCR (over 8,000 journals in the sciences and 2,700 in the social sciences); humanities journals are not represented at all. There is a heavy emphasis on North American titles, and, with the exception of British and Dutch titles, journals from other countries are not well represented; English-language journals predominate. Journals that publish longer articles with more citations tend to have higher IFs. Journals that publish survey or review articles are often more heavily cited than journals that do not include such documents. Similarly, journals carrying editorials, correspondence, and reports of meetings, all of which may be cited, can have much higher IFs than journals that do not. There is no correction for self-citations, which are often numerous, and citations in books are not counted. Most journals contain articles with a wide range of citation counts: some may be heavily cited, some a little, some not at all. Thus it may be quite misleading to judge an article by the IF of the journal in which it is published. In addition, whatever value IFs have, they can only be used to compare journals within the same discipline; comparing the IFs of journals in different subject areas may be valueless. For example, according to Journal Citation Reports, the medical journal with the highest IF for 2010 is the New England Journal of Medicine (IF 53.486), whereas the highest-ranked journal in veterinary studies, Veterinary Research, has an IF of only 3.765. Clearly it can be meaningless to compare the IFs of journals in different fields.
Another problem is using a journal’s IF to assess the scholarly worth of an author who has published in it. One cannot adequately assess an author’s work on the basis of a single metric. As David Tempest, Associate Director of Research and Academic Relations for Elsevier, stated, “The papers that an individual published could be zero-cited in a journal with an Impact Factor of 50. Taking the journal’s position as a proxy for individual quality can be misleading.” Proper assessment of an author’s scholarship should rest on a thorough examination of that scholarship by experts in the subject area, not on a metric attached to the venues where the author has published.
A portal that complements the metrics of Journal Citation Reports, and that may be useful both for ranking journals and for assessing a journal’s quality, is SCImago Journal & Country Rank. This platform shows the visibility of the journals contained in the Scopus® database from 1996 onward (BC Libraries presently do not provide access to Scopus).
b) Google Scholar Metrics
Another tool, the recently introduced Google Scholar Metrics (GSM), offers potentially strong competition to JCR’s Impact Factor. GSM’s citation metrics can be used to gauge the visibility and influence of recent articles in scholarly journals. Particularly interesting is GSM’s listing of the top 100 publications in several languages, ordered by their five-year h-index and h-median metrics. Information about GSM’s h-index, h-median, and other bibliometrics is available in GSM’s own documentation.
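The h-index underlying GSM’s rankings can be computed in a few lines. The sketch below follows Hirsch’s standard definition (a set of papers has index h if h of them have at least h citations each) and uses made-up citation counts for illustration.

```python
def h_index(citation_counts):
    """Largest h such that h of the items have at least h citations
    each (Hirsch's definition; GSM's five-year h-index applies this
    to a journal's articles from the last five complete years)."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five articles cited 10, 8, 5, 4 and 3 times: four of them have at
# least four citations each, but not five with at least five, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

GSM’s h-median is then simply the median citation count of the h articles that make up this “h-core.”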
c) Eigenfactor Score and Article Influence Score
Like the Impact Factor, the Eigenfactor score and Article Influence score use citation data to evaluate the influence of a journal in relation to other journals. Eigenfactor uses data gathered over five years to calculate how often articles from the journal have been cited. It takes account of which journals have cited the journal in question, so that highly cited journals influence the network more than less-cited journals. There is a check on journal self-citation: references from one article to another article in the same journal are removed. “The Article Influence determines the average influence of a journal’s articles over the first five years after publication. It is calculated by dividing a journal’s Eigenfactor Score by the number of articles in the journal, normalized as a fraction of all articles in all publications. This measure is roughly analogous to the 5-Year Journal Impact Factor in that it is a ratio of a journal’s citation influence to the size of the journal’s article contribution over a period of five years. The mean Article Influence Score is 1.00. A score greater than 1.00 indicates that each article in the journal has above-average influence. A score less than 1.00 indicates that each article in the journal has below-average influence.” The database JCR provides the Eigenfactor and Article Influence scores for its journals.
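Following the quoted description, the core of the Article Influence calculation can be sketched as below. The numbers are invented, and the published JCR scores are additionally rescaled so that the mean is 1.00, a step this sketch omits.

```python
def article_influence(eigenfactor, journal_articles, total_articles):
    """Eigenfactor score divided by the journal's article count
    expressed as a fraction of all articles in all journals, per the
    description quoted above.  (Published JCR scores are further
    normalized so that the mean Article Influence is 1.00; that
    rescaling is omitted in this sketch.)"""
    article_share = journal_articles / total_articles
    return eigenfactor / article_share

# An invented journal contributing 1,000 of 1,000,000 indexed articles
# (a 0.1% article share) with an Eigenfactor score of 0.002:
print(article_influence(0.002, 1_000, 1_000_000))  # 2.0
```

The intuition matches the quoted text: a journal whose share of total citation influence exceeds its share of total articles scores above 1.00, and vice versa.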
d) Who’s the publisher?
A hint about journal quality may be provided by the society, association, or organization publishing it. Prestigious organizations such as the American Psychological Association, the Institute of Electrical and Electronics Engineers, and the American Medical Association publish a number of journals that tend to be very well respected. Still, smaller, less well-known scholarly bodies may publish highly regarded journals; though these may be read far less often than some of the better-known titles, that does not, of course, necessarily detract from their scholarly value.
e) Editorial Board
The scholarly reputation of the members of the editorial board may provide clues about the quality of the journal. However, though useful, this strategy is clearly open to strong elements of subjectivity.
f) Where Indexed
Where a journal is indexed may give a clue as to its quality. The database UlrichsWeb Global Serials Directory provides detailed indexing information on over 300,000 journals, both academic and popular. Of course, wide indexing does not in itself guarantee that a journal is of higher quality.
g) Journal Acceptance/Rejection Rates
Methods for determining acceptance/rejection rates may differ from journal to journal. Journal X may calculate the acceptance rate as the number of articles accepted out of all articles submitted. Journal Y may calculate it as the number of articles accepted out of only those articles sent out for peer review. In the latter case, Journal Y will report a higher acceptance rate, since desk-rejected manuscripts are excluded from the denominator. Another factor is the disciplinary area: a subject area for which few scholars write articles may have a higher acceptance rate than more popular subject areas. There are strategies for locating acceptance/rejection rates. Sometimes they may be found in the information for authors or submission guidelines on a journal’s website. Another strategy, useful for some subject areas, is to consult the Library database Cabell’s Directories of Publishing Opportunities: Business Directories (Accounting, Economics & Finance, Management, Marketing); Educational Directories (Educational Curriculum & Methods, Educational Psychology & Administration, Educational Technology & Library Science). These Cabell directories frequently include journal acceptance/rejection rates.
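The effect of the two denominators is easy to see with invented figures: the same number of accepted manuscripts yields very different rates depending on whether all submissions, or only those sent for peer review, are counted.

```python
def acceptance_rate(accepted, denominator):
    """Accepted manuscripts as a fraction of the chosen denominator."""
    return accepted / denominator

# Invented figures for one hypothetical journal in one year:
submitted = 400        # all manuscripts received
peer_reviewed = 150    # those actually sent out for peer review
accepted = 60

print(acceptance_rate(accepted, submitted))      # 0.15 ("Journal X" method)
print(acceptance_rate(accepted, peer_reviewed))  # 0.4  ("Journal Y" method)
```

A 15% versus 40% figure for the very same journal illustrates why published acceptance rates should not be compared without knowing how each journal defines the denominator.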
h) Is the journal peer reviewed?
A traditional criterion for evaluating the quality of a journal is to ascertain whether or not it is peer reviewed (refereed). However, the challenge here is that there is often a very wide range of quality among peer-reviewed journals. A journal declaring itself to be peer reviewed does not necessarily mean it possesses high scholarly quality and prestige. One may consult the database UlrichsWeb Global Serials Directory to determine whether a journal is peer reviewed.
i) Are publication fees required?
Many open access journals require authors to pay publication fees. This is not necessarily a red flag: numerous quality OA journals, some very prestigious, use an “author pays” model. However, there is a swiftly growing problem of sham journals whose sole rationale is to make a profit. Such journals, often bearing very credible scholarly names, charge authors high publication fees and publish most articles submitted. They are clearly fake journals, and it is likely that more and more serious academics are being caught in the trap. A useful resource for identifying some of these spurious journals is Jeffrey Beall’s List of Predatory, Open-Access Publishers.
There are other strategies and metrics by which journals might be ranked and/or evaluated. However, like those discussed above, they invariably contain an element, often a strong one, of subjectivity and arbitrariness. Feel free to talk to librarians for further details about these issues.
Collection Development Librarian