Quantitative evaluation of journals is based either on readership and other usage statistics, or on assessing a journal's impact through indicators calculated from citation data.
The strengths of quantitative methods in journal evaluation are their objectivity and the ready availability of data. The methods are not problem-free, however. The results are often difficult to interpret because they do not directly measure quality. It is sometimes not easy to find out how different indicators have been calculated, or what citation data they are based on. Citation data often contains mistakes, and it can be intentionally manipulated. Comparing journals across disciplines is also not possible, because publishing and citing norms differ.
A large circulation and a wide readership testify to a journal's quality and its importance in its subject area.
Subscription numbers can reflect more than quality alone, because some journals are distributed to all members of a scientific society, and because subscription costs and availability also affect the number of subscribers a journal has. Information on subscriber numbers can be found in Ulrich's International Periodicals Directory and on publishers' websites.
Academic journals are read not only to meet the demands of one's own research, but also out of pure interest and a desire to follow scientific developments more broadly. People who do not themselves publish in academic journals nevertheless read them, for example doctors who do not conduct research. Hence, examining a journal's readership could give different insights into its status than looking at the citations it has received. However, it is rather difficult to assess the reading patterns of printed journals accurately, and the relationship between reading and citations has not been widely researched. Research on medical journals comparing reading patterns with citation data has found that frequently cited journals are also frequently read.
Journals that are offered a large number of manuscripts and have a high rejection rate are seen as prestigious. Journals often report their acceptance and rejection rates on their websites or in their printed issues. These numbers are, however, fairly unreliable for comparing journals, as the methods of calculation differ from journal to journal. Journals also reject manuscripts for reasons other than scientific weaknesses; for example, an article may be rejected because it does not suit the profile of the journal. Rejection percentages are also not stable and can change rather quickly. Journals may even claim higher rejection percentages than is actually the case in order to attract quality manuscripts.
Research on management journals in Economics shows that rejection rates correlate only weakly with other criteria for evaluating quality. Among journals that had been rated as high quality by other measures, the rejection rate varied greatly, between 10 and 75 percent.
There has been research into the reasons for rejections and into rejection rates. Rejection percentages vary by discipline, and also among journals within the same discipline. For example, in Physics the rejection rate has fluctuated between 19-35%, in History between 60-90%, in Medicine between 48-67%, and in Sociology between 59-87%. Numerous reasons for rejection have been identified: the manuscript is not deemed suitable for the journal, offers no new information, does not address an important issue, is theoretically and conceptually weak, has methodological weaknesses, interprets findings and results loosely, has an inadequate literature review, or contains errors in statistical procedures.
Whether a journal is indexed in the major indexing/abstracting services in its field is another criterion that can be used to assess its quality. Ulrichsweb lists the indexing/abstracting services that cover each journal.
Inclusion in Thomson Reuters' database is generally regarded as one criterion of quality. Whether or not a journal belongs to the Thomson Reuters database can greatly affect its visibility, and through that its prestige. It has been observed that a journal's Impact Factor increases substantially a few years after it has been added to the Thomson Reuters database. This reflects not only the fact that Thomson selects journals it has evaluated as high quality, but also that a journal gains more attention once it belongs to the Thomson database, and thus attracts better-quality submissions.
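For readers unfamiliar with the indicator mentioned above: the classic two-year Impact Factor divides the citations a journal receives in a given year to its articles from the two preceding years by the number of citable items it published in those two years. A minimal sketch, with invented figures for a hypothetical journal:

```python
def impact_factor(citations_in_year, citable_items):
    """Two-year Impact Factor for year Y: citations received in Y
    to items published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return citations_in_year / citable_items

# Hypothetical example: 420 citations received in 2010 to articles
# published in 2008-2009, which together contained 150 citable items.
print(round(impact_factor(420, 150), 2))  # 2.8
```

The figures and the function name here are illustrative only; actual Impact Factors are computed and published from Thomson Reuters' own citation database.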
E-journals can be evaluated in ways printed journals cannot: one can examine how many times they have been linked to from other websites, the number of visitors to their website, and the number of downloads. On the other hand, not all the evaluation data available for printed journals can be obtained for e-journals. For open access journals, for example, information on subscribers is not available.
Typically e-journals are evaluated by examining the electronic links they receive from other e-journals and web pages; these are called web citations.
More precise usage information is available for e-journals than for printed journals, whose evaluation is often focused on the citations they receive, so that reading patterns and other forms of usage go unnoticed. E-journal websites can record how many users have visited the site, as well as article-specific access counts.
Download numbers are also available for e-journals offered in PDF and PostScript formats; these numbers help separate casual browsers and random visitors from readers genuinely interested in an article. For electronic journals, future evaluation is likely to focus more on individual articles than on whole journals.