Evaluation based on scientific publishing: Evaluating articles

When evaluating an article, attention should be paid to the identity of the author and to the publisher. A good starting point is the number of citations the article has received. In addition, the objectivity of the text and the references used should be evaluated. Citation analysis is discussed in more detail below.

The openness of articles is also taken into account.

Another approach to evaluating articles is to explore the attention an article has received in social media and social networking tools, regardless of the journal in which the article is published. This is done using article-level metrics, known as altmetrics.

Before publication, the manuscript is peer reviewed.

Peer Reviews

Journals in which submitted manuscripts are peer reviewed by experts are usually regarded as being of high quality. Peer review helps to ensure that researchers uphold the codes of conduct expected within their discipline, and it prevents groundless claims and false interpretations.

Different journals carry out peer review in different ways. You can find out whether a journal uses peer review from Ulrichsweb Global Serials Directory; also check the journal's home page.

Usually peer review is conducted anonymously, so that the author of the article does not know the identity of the reviewer, but the reviewer knows who the author is. Even though peer review is generally seen as useful, it is not without problems and has been criticised. The problems are often related to the slowness of the publishing process and to bias in the evaluation: a reviewer can, for example, discriminate against women authors or researchers from small universities, or favour interpretations that support their own opinions. Serious mistakes and deficiencies often go unnoticed in peer review and only emerge once the article has already been published.

In addition, the evaluation is subjective, and because it is done anonymously, accountability is missing from the process. Peer review can also enable plagiarism, since a reviewer may misuse unpublished material from the manuscript. Such abuses are fairly rare, however, and a more central question is how peer review affects the quality of articles. Its influence has not been well researched, and the research that does exist has mainly been done on biomedical journals.

In order to avoid these problems, some journals, mainly new ones, have started to use alternative peer review methods.

  • In a "double blind" peer review, neither the authors nor the reviewers know each other's identities. This method is mainly used as a test arrangement when trying to establish how well peer review functions and to chart the possible problems related to it.
  • An open peer review differs from traditional peer review in that it is not conducted anonymously. Its aim is to avoid the traditional abuses and problems of peer review; on the other hand, it also generates academic merit for the reviewer. Different journals conduct open peer review in different ways, but the basic principle is that the author of the text knows the identity of the reviewer. Some journals only inform the author about the identity of the reviewers, while others also publish the names of the reviewers in the journal. Some journals even publish the reviewer's report, giving readers insight into the whole review procedure. A problem with open peer review is that not all researchers are willing to act as reviewers when the reviewing is not anonymous. Biology Direct is an example of a journal using open peer review.
  • In post-publication refereeing, manuscripts are first published electronically in an archive where anyone is free to comment on them. The author then revises the manuscript based on the comments, after which it is peer reviewed in the normal way and finally published in an academic journal. This model is used in physics and related subjects such as mathematics and information technology. This sort of peer review can work if distinguished researchers take part in commenting and the comments are honest and candid. It cannot, however, replace traditional peer review, but it can be used as an addition to it. arXiv.org is an example of this model.

Some journals allow everyone to comment on articles after they have been published, in online discussions in which the authors of the articles can participate; this is known as open peer commentary.

Databases containing citation information

The most important multidisciplinary databases containing citation information are Web of Science (WoS) by Thomson Reuters and Scopus by Elsevier. Citations can also be retrieved from Google Scholar (GS), keeping in mind the limitations of that database: GS contains a lot of non-scientific citations. The number of citations found can vary considerably between Web of Science, Scopus and, especially, GS.

In addition, there are some field-specific databases containing citation information, such as Chemical Abstracts (SciFinder), CiteSeerX and MathSciNet.

Comparison of Web of Science, Scopus and Google Scholar

Availability
  • Web of Science: subscription based
  • Scopus: subscription based
  • Google Scholar: freely accessible

Number of journals
  • Web of Science: 22 000 peer-reviewed journals
  • Scopus: 23 500 peer-reviewed journals
  • Google Scholar: information is not publicly available

Other contents
  • Web of Science: conference proceedings, books
  • Scopus: conference proceedings, professional magazines, patents and book series
  • Google Scholar: books, pre-prints, theses and dissertations, and webpages

Main disciplines
  • Web of Science: Natural Sciences, Technology, Social Sciences, Fine Arts and Humanities
  • Scopus: Physics, Technology, Health Sciences, Biosciences, Fine Arts and Humanities, Social Sciences
  • Google Scholar: information is not publicly available

Time span
  • Web of Science: from 1900 (Science), 1956 (Social Sciences) and 1975 (Arts and Humanities); accessible coverage depends on the subscription
  • Scopus: records back to 1788
  • Google Scholar: information is not publicly available

Updates
  • Web of Science: weekly
  • Scopus: daily
  • Google Scholar: information is not publicly available, but more or less monthly

Collection policy
  • Web of Science: public
  • Scopus: public
  • Google Scholar: information is not publicly available; contracts with the most significant publishing houses

Citation analysis
  • Web of Science: Citation Report tool
  • Scopus: View citation overview tool
  • Google Scholar: search results with a 'Cited by' link, listing all publications that cite the publication in question

Time span of citation information
  • Web of Science: from 1900 (Science), 1956 (Social Sciences) and 1975 (Arts and Humanities); citation statistics available at Oulu University Library for the whole period, but the citing articles only from 1975
  • Scopus: cited references dating back to 1970
  • Google Scholar: information is not publicly available

Indicators
  • Web of Science: Journal Citation Reports indicators: Article Influence (AI), Eigenfactor, H-index, Immediacy Index, Impact Factor (IF)
  • Scopus: H-index, Raw impact per publication (RIP), SCImago Journal Rank (SJR), Source normalized impact per paper (SNIP), Field-Weighted Citation Impact
  • Google Scholar: H-index
(A sketch of how the h-index is computed is given after this comparison.)

Tools
  • Web of Science: Journal Citation Reports, Eigenfactor, ScienceWatch
  • Scopus: SciVal, SCImago Journal and Country Rank, CWTS Journal Indicators
  • Google Scholar: Publish or Perish

University rankings
  • Web of Science: Shanghai Ranking, i.e. Academic Ranking of World Universities (ARWU); National Taiwan University Ranking (NTU); University Ranking by Academic Performance (URAP); U.S. News & World Report's Best Global Universities Rankings; CWTS Leiden Ranking; U-Multirank; review of the state of scientific research in Finland by the Academy of Finland
  • Scopus: Times Higher Education World University Rankings; QS World University Rankings; Webometrics
  • Google Scholar: Webometrics

Researcher profile
  • Web of Science: ResearcherID
  • Scopus: Scopus Author Identifier (also Scopus Affiliation Identifier)
  • Google Scholar: Google Scholar Profile
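The h-index listed above for all three services is defined as the largest number h such that an author (or journal) has h publications with at least h citations each. The Python sketch below only illustrates that definition; the function name and the sample citation counts are invented for the example, and the sketch is not tied to any of the databases.

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    # Rank publications from most to least cited.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(ranked, start=1):
        if citations >= rank:
            h = rank   # this publication still supports a larger h
        else:
            break      # the remaining publications are cited even less
    return h

# Hypothetical citation counts of one author's publications:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3 (three papers with at least 3 citations each)
```

Because the databases cover different sets of publications and citations, the same author can have a different h-index in Web of Science, Scopus and Google Scholar.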


From the perspective of citation analysis, what matters most is the number of records enhanced by cited references and the total number of cited references included in the database used. In addition to contributing citation data, citation-enhanced databases may serve as platforms providing analytical tools for bibliometric analysis. Both of these contributions are beset with several methodological and technical difficulties, including limited coverage of the scholarly literature, inconsistent and inaccurate data, and limited facilities for browsing, searching and analyzing data. Most of these difficulties arise because bibliographic databases are primarily designed for information retrieval, and bibliometric analysis is only a secondary use of these systems.

Both Web of Science and Scopus have master records with cited references, and they show the bibliographic and reference details of the citing records. Web of Science is available in many different versions regarding years of coverage and citation indexes included in the subscription of the user's institution. Web of Science always includes all the cited references for every record created, irrespective of the publication year and subscription. Scopus includes cited reference information for records of papers published from 1970 onward.

Google Scholar's coverage is unknown and may be uneven across different fields of science. Additionally, older publications are poorly covered. Furthermore, non-scholarly literature and citations are included.

Field-Weighted Citation Impact (FWCI)

Field-Weighted Citation Impact (FWCI) is the ratio of the total citations actually received by a set of publications to the total citations that would be expected based on the average for the subject field. In the Scopus database, it is sourced directly from SciVal.
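As a rough illustration of the ratio (the numbers below are hypothetical, and the actual expected-citation value is calculated by Scopus/SciVal itself):

\[
\mathrm{FWCI} = \frac{C_{\text{received}}}{C_{\text{expected}}},
\qquad \text{e.g.} \quad \mathrm{FWCI} = \frac{12}{8} = 1.5
\]

Here C_received is the number of citations the publications have actually received and C_expected is the number of citations that would be expected on the basis of the subject-field average. An FWCI of 1.5 thus means the output has been cited 50 % more than expected, while an FWCI of 1.00 corresponds to the field average.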

Examples of top-cited article listings

CiteSeerX: most cited articles in the field of computer science

ScienceWatch: hot papers