Clarivate listed nursing journals in 2020: what they publish and how they measure use of social media

Frontiers of Nursing, 2021, No. 4

Roger Watson*, Ahtisham Younas, Salma Aul Rehman, Parveen Azam Ali

a Shaqra University, Shaqra, Shaqra 11961, Kingdom of Saudi Arabia

b Faculty of Nursing, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador 4200, Canada

c School of Nursing, University of Hull, Hull HU6 7RX, UK

d School of Nursing & Midwifery, University of Sheffield, Sheffield S10 2TN, UK

Abstract: Objectives: To investigate the most common types of articles that nursing journals purport to publish and what they actually publish, and to investigate the extent to which academic nursing journals listed by Clarivate track alternative metrics.

Keywords: academic publishing • alternative metrics • citations • journal content

1. Introduction

The number of academic nursing journals included in the journal impact factor (JIF) league table created and curated by Clarivate (formerly part of Thomson Reuters) shows a net increase annually. Journals are added and removed, and since 2011 the list has grown by six journals. The list is always controversial, and the annual release is eagerly awaited by editors and publishers. We do not intend to debate the controversies around the JIF in this article; suffice to say that the formula used to calculate the JIF, first used by Eugene Garfield in 1956,1 is arbitrary and has no meaning outside of its own context. It is, however, an attempt to relate the number of citations a journal receives to its size over a defined period. Many alternatives to the JIF have been developed, but none are given as much attention and most are, in any case, highly correlated with the JIF.2 However, whatever controversies surround the use and misuse of the JIF, what remains is the importance for journals of being included, or indexed, on the list. Along with other listings such as PubMed, Clarivate listing is in itself a mark of prestige and, before listing a journal, Clarivate examines a range of factors, including how international the journal is in terms of authorship and editorship and the extent to which it follows international academic publication standards and publication ethics. Journals must not excessively self-cite their own material or encourage this in any way, or they risk – as has happened3 – being removed from the listing.

Our aim in this study was to examine the academic nursing journals listed in 2020 in the Clarivate listing of those currently awarded a JIF. We were interested to investigate, generally, the scope of what these journals purport to publish and to compare it with the scope of their actual output. In addition, while these journals all have a JIF and the Clarivate list contains other citation-based metrics, we were also interested to discover how much use was made of alternative metrics, which measure the extent to which the contents of these journals are referred to in a range of social media sites.

1.1. Background

The publishers and editors of academic journals pay close attention to the performance of their journals, as do potential authors.4,5 Publishers and editors pay this attention in order to draw comparisons with their competitors and to consider how to improve performance. Authors pay attention because it is generally considered better to be published in higher-performing journals. This is most commonly due to pressure from academic employers – universities and research bodies – to publish more6 and to be seen to publish in the most prestigious places and, while the extent to which this pressure is applied varies across the world, it is unusual anywhere for authors not to take some measure of journal performance into consideration.

Naturally, the most commonly cited measure of journal performance is the JIF, referred to in the Introduction above. The appeal of the JIF lies mainly in its simplicity and the fact that it is widely used. On the other hand, it is widely misunderstood and, while most (but by no means all) journal editors can explain its calculation and what it means, few authors can do this precisely.7 First, contrary to widespread misunderstanding, it is not a measure of any kind of impact on research or practice. It is purely a measure of citation activity for a single journal and, arbitrarily, it is the number of citations received in 1 year to the articles published in the previous 2 years. It comes as a surprise to many authors that citations within the year of reporting the JIF do not contribute to it.8 The failings of the JIF are well known, and these include the fact that only a small percentage of the articles published in a journal contributes toward the JIF; there is always a "tail" of articles in any journal that never gets cited. Thus, the JIF tells us nothing about the performance of most articles in a journal. The debate about the uses and abuses of the JIF has been rehearsed elsewhere and a consideration of the JIF is not the focus of this article.

Citation-based alternatives to the JIF have almost universally failed to make inroads into the debate about journal performance.9 One attractive feature of the JIF is the relative ease with which it can be calculated: the number of citations in the reporting year to the articles published in the previous 2 years, divided by the number of articles published in the previous 2 years. The alternatives, such as the Eigenfactor score or the SCImago ranking, attempt to account not only for the number of citations but also for where those citations take place. Thus, they provide weightings for more prestigious journals, but the problem with the formulae for calculating these alternative indices is that they are inordinately complicated. Another feature which tends to render them redundant is that these measures, being citation-based, all correlate highly with the JIF and therefore do not provide much additional information about the relative performance of journals.2
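For concreteness, the 2-year JIF described above can be written as a simple ratio; the notation and the worked numbers that follow are ours and purely illustrative:

\[
\mathrm{JIF}_{2020} = \frac{\text{citations received in 2020 to items published in 2018 and 2019}}{\text{number of citable items published in 2018 and 2019}}
\]

For example, a hypothetical journal that published 100 citable items across 2018 and 2019 and whose content attracted 250 citations in 2020 would have a 2020 JIF of 250/100 = 2.5.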

While the primacy of the JIF remains, although frequently discussed and questioned, and alternative citation-based metrics continue to be considered, a phenomenon has taken place which has led to the possibility of an alternative type of metric; one which has the potential to be more meaningful and which is growing in importance. The phenomenon is social media, and the new measures can be collectively described as alternative metrics. There is already a growing interest in how journals are mentioned in social media5 and guidance is available on how best to make use of this.10 There is a range of alternative metrics, with two being much more common than the others. The original alternative metric was the Altmetricⓒ score, which was used by a range of publishers and was considered to be the main measure until the publisher Elsevier introduced the PlumXⓒ metric. Both the Altmetric score and the PlumX score are run as commercial concerns and they are, demonstrably, very similar.11 Both systems use mentions in a range of social media sites, such as online newspapers, Twitterⓒ, and blogs, to generate their scores. Not all social media sites are considered equal; for example, under the Altmetric system a mention in an online newspaper generates a higher score than a mention on Twitter. The PlumX metric includes mentions in Mendeley, which is not used in the Altmetric score, and it is notable that Elsevier – the owner of PlumX – also owns Mendeley. These alternative metrics are updated in real time, a clear advantage over citation-based metrics, which are updated annually. They are article specific, and an alternative metric score is not awarded to a journal. A particular feature of both the Altmetric and the PlumX systems is that they use attractive and recognizable logos, which appear on the landing page for an article with a number attached indicating the score. With both logos it is possible to obtain more detail than the score alone. With the Altmetric logo, the reader can hover over the logo with the mouse cursor and see a pop-up with a basic indication of the social media sites where the article has been mentioned. Clicking on "Further information" provides a more detailed breakdown, including a world map showing where the mentions have taken place and the type of citation, for example, "Members of the public" or "Scientists". Next to the PlumX logo, clicking on "Further information" provides a breakdown of where mentions have taken place. The inclusion of Mendeley allows the reader to see in which other articles the article has been cited.
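To illustrate the general idea of source-weighted scoring described above, a score of this kind can be computed by summing mentions multiplied by a per-source weight. This is a minimal sketch in Python; the weights and source names are hypothetical placeholders, not the actual (proprietary) Altmetric or PlumX weightings.

# Illustrative sketch of a source-weighted attention score.
# Weights are hypothetical; they are NOT Altmetric's or PlumX's real values.
SOURCE_WEIGHTS = {
    "online_news": 8.0,  # assumed: a news mention counts for more...
    "blog": 5.0,
    "twitter": 1.0,      # ...than an individual tweet
}

def attention_score(mentions: dict) -> float:
    """Sum the mentions per source, each multiplied by that source's weight."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# Example: 2 news stories, 1 blog post, and 30 tweets -> 2*8 + 1*5 + 30*1 = 51.0
print(attention_score({"online_news": 2, "blog": 1, "twitter": 30}))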

A cursory inspection of the Clarivate list of academic nursing journals shows that other metrics are used by a few journals, specifically Metrics and Dimensions. We were unable to obtain any independent information about Metrics but, where it is used, Metrics provides a table of where the article has been mentioned and, to our knowledge, these include Crossref, Google Scholar, and Scopus. Dimensions is also metrics based but additionally includes references in patents, and it has an attractive logo that serves a similar function to the Altmetric logo.

Journal metrics of whatever kind are related to what the journals publish: the journal content. The content of journals varies, and all authors know this as they target specific journals for particular kinds of manuscripts. Moreover, the content of journals evolves as publishers and editors review their contents, usually for performance in terms of metrics, hits on their web pages, and downloads. For example, some academic nursing journals will publish concept analyses, while others eschew them. It is also known that some kinds of articles attract more citations than others; for example, it is widely known that review articles tend to be more highly cited than other kinds of articles.12 This is one of the only identifiable patterns in academic publishing and, while other articles can be highly cited, usually on their own merits, there is usually no discernible pattern. Therefore, it should be of interest to those who use academic nursing journals to compare and contrast the contents of these journals. It should also be of interest to catalog the most common types of articles that journals purport to publish and what they actually publish.

1.2. Research questions

To investigate the above phenomena, the research questions guiding this study were:

1. What do academic nursing journals listed by Clarivate purport to publish?

2. What do academic nursing journals listed by Clarivate actually publish?

3. To what extent do academic nursing journals listed by Clarivate track alternative metrics?

2. Methods

We used Clarivate Analytics’ 2020 Journal Citation Report (JCR), based on data for 2019, to access information about nursing journals indexed in Web of Science with a JIF. Specifically, all journals included in the JCR journal category described as nursing were identified and considered suitable for inclusion in the analysis. The JCR includes various indicators of journal performance (such as the JIF) and ranks journals in two broad categories of science: the Science Citation Index Expanded (SCIE) and the Social Science Citation Index (SSCI). There were 123 journals in the nursing category of this JCR.

To identify what journals say they publish, the instructions for authors were reviewed online and mention of each type of article was identified. The instructions were then downloaded and each of these documents was carefully read and the types of articles permitted were extracted to a spreadsheet. To explore what the journals published, the tables of contents of each issue of each journal published during 2019 were examined and the types of articles published were extracted to a spreadsheet. Likewise, the use of alternative metrics by each journal was extracted to a spreadsheet. The data were then enumerated and recorded under each type of article and each type of alternative metric and tabulated. Any anomalies were checked by two authors.

In terms of article types, for ease of presentation and interpretation, some categories of papers were collapsed in recording and reporting what journals permit and publish. For example, meta-analyses and other systematic reviews are all reported as "Systematic reviews", and the category "Original research" included qualitative, quantitative, and mixed-methods studies. Methodological papers, which 14 journals permitted, are included under discussion papers. Data were entered into SPSS version 27.0 for analysis using Pearson’s and Spearman’s correlation.
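As a sketch of the analysis only (the study itself used SPSS 27.0), the Pearson and Spearman correlations between the number of journals permitting each article type and the number of articles of that type actually published could be computed in Python with SciPy as below; the counts shown are hypothetical placeholders, not the study data reported in Tables 1 and 2, and SciPy is assumed to be installed.

# Minimal sketch, not the authors' SPSS workflow: Pearson's and Spearman's
# correlations across article-type categories. All counts below are hypothetical.
from scipy.stats import pearsonr, spearmanr

# One entry per article-type category (hypothetical counts)
n_journals_permitting = [120, 115, 90, 60, 35, 20, 12, 6, 4, 3]
n_articles_published = [7000, 1300, 1200, 800, 750, 500, 100, 60, 70, 10]

r, p_r = pearsonr(n_journals_permitting, n_articles_published)        # raw counts
rho, p_rho = spearmanr(n_journals_permitting, n_articles_published)   # rank order

print(f"Pearson r = {r:.2f}, P = {p_r:.3f}")
print(f"Spearman rho = {rho:.2f}, P = {p_rho:.3f}")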

3. Results

In the 2020 JCR, 123 journals were listed. Table 1 shows, in order of frequency, what journals purport to publish. We report 15 types of articles that journals said they permitted as submissions. The most common article type permitted was original research (n = 119), followed by review papers (n = 116) and discussion papers (n = 92).

We report these 15 categories of manuscripts published in journals in Table 2. The top three categories mirrored the permitted types of manuscripts as follows: original research (n = 7045), review papers (n = 1268), and discussion papers (n = 1225). Editorials (n = 793) and commentaries (n = 776) were the next most commonly published categories of articles. Editorials were ranked fifth among the types of permitted manuscripts (Table 1; n = 36) but fourth among the types of articles published (n = 793), followed by commentaries (n = 776). It should be noted that, while two journals reported that they permitted poetry, we found no examples in the period examined. Figure 1 represents the relative percentages of the types of articles in a funnel plot; we used this type of plot because the percentages across article types sum to more than 100%.

Table 3 shows the frequency with which journals reported the use of alternative metrics. Of the 123 journals examined, 108 (87.8%) tracked mentions on social media. By far the most common method of tracking social media use was the Altmetric (n = 75), followed by the Elsevier journals’ use of PlumX (n = 29). Only two journals each used Metrics and Dimensions. Figure 2 represents the relative percentages (total = 100%) in a pie chart.

Pearson’s correlation between the number of articles permitted and published (r = 0.73; P = 0.002) and Spearman’s correlation for the rankings of the permitted and published articles (ρ = 0.86; P < 0.001) were both strong.

Table 1. Categories of articles permitted in order of frequency.

Table 2. Categories of articles published in order of frequency.

4. Discussion

Figure 1. Categories of articles permitted in order of frequency (%)*. Note: *NB: the total percentage is >100 as journals publish more than one type of article.

Table 3. Alternative metric measures used by journals.

We set out to investigate what academic nursing journals say they publish and what they actually publish. We also examined the use of methods of tracking mentions of journal content on social media. The most obvious conclusion from our results is that not all journals publish the same range of articles. Also, there is an obvious relationship between the most frequently permitted article types and those published, especially for the most frequent categories of both. There is remarkable congruence between the top three categories of articles permitted and published: original articles, review papers, and discussion papers, respectively. These three types of articles are, clearly, the backbone of academic publishing in nursing, with original articles vastly outweighing review and discussion papers. This is hardly surprising given the amount of original research taking place as an outcome of funded research and of master’s and doctoral projects carried out in hospitals and universities. Reviews are dependent on original research and seek, periodically, to synthesize it to provide the best evidence, identify gaps, and formulate novel research questions.13 To some extent, discussion papers are also – but not completely – dependent on original research; some seek to take current issues or theoretical and methodological developments and set an agenda for further research and discussion.

Editorials are the next most commonly published type of article, and the importance of editorials to nursing journals has been considered elsewhere.14 Editorials serve a range of purposes, such as allowing the editors of a journal to promote its content or their own views. But they are also used to indicate when changes have taken place in journal content,15 to discuss issues related to journal content,16 and to request particular types of content.17 Also, while editorials are not included in the denominator for the calculation of the JIF, citations to editorials within the two-year window for calculating the JIF are added to the numerator, and these constitute "free cites". As such, they can be very valuable to editors and publishers as a way of increasing their JIFs. In this light, it can be very valuable to permit critical discussion of published articles, whether positive or negative.18 In a similar light, correspondence and commentaries can also contribute free cites to the journal, although these will frequently – but not always – be published alongside or within the same year as the articles to which they refer and, thereby, may not contribute to the calculation of the JIF. Correspondence was the fourth most common category of permitted articles but was only seventh, behind editorials, commentaries, and pilot studies, in terms of frequency of publication.
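A minimal worked illustration of free cites, with hypothetical numbers: a journal with 100 citable items in the 2-year window and 250 citations to them has a JIF of 250/100 = 2.5; a further 10 citations to its editorials in the same window are added to the numerator only, giving

\[
\mathrm{JIF} = \frac{250 + 10}{100} = 2.6,
\]

so free cites raise the JIF without enlarging the denominator.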

Figure 2. Categories of articles published in order of frequency (%).

Concept analysis articles were permitted by only four journals. There has been a move away from publishing concept analysis articles by at least one journal in the list – the Journal of Advanced Nursing – and this led to some controversy.19,20 It appears that, despite the low number of journals permitting concept analysis articles, there is still a desire to publish them, as 70 such articles were published in the period of the study. This suggests that these articles are clustered in a very small number of journals. The same could be said of protocol articles, which were permitted by only six journals but of which 58 were published. The publishing of protocols has only relatively recently been encouraged by journals21 and has arisen from the increasing requirement to register clinical trials and to specify study protocols, as required of academic journals whose publishers are signatories of the AllTrials campaign.22

Increasingly, academic nursing journals have ceased to publish book reviews, which were once very common,23 and this is reflected in the fact that only three journals permit them and, concomitantly, publish them. It is also apparent that expert interviews play a very small part in publishing in academic nursing journals. The above review of content is not comprehensive but highlights some main points of interest.

5. Conclusions

It is hardly surprising that both the raw and the ranked correlations between articles permitted and published were strong. However, a baseline has now been established against which future similar studies, in nursing and in other subjects, can measure and compare trends.

5.1. Alternative metrics

Most Clarivate-listed journals now use some method of tracking alternative metrics, which is an indication of how seriously publishers take their social media profiles, as represented by references to the content of their journals on a range of social media sites. While there is some evidence for a relationship between social media mentions and citations,5,24,25 the evidence is not strong and is confounded by the fact that highly cited articles may, subsequently, be the ones that receive greater attention on social media. Thus, it is hard to discern cause from effect. However, the use of alternative metrics is not solely concerned with their relationship to, and potential for increasing, citations and the impact factor; alternative metrics offer a wider insight into the impact of the work published in a journal, especially beyond the scientific community, although this aspect of their use has, with one exception,5 not been widely or systematically studied.

5.2. Limitations and recommendations

Our study was restricted to nursing journals and it would be useful to compare the situation investigated here with that in other cognate subjects, such as medicine and allied health. Our study was purely descriptive and did not formally investigate relationships between variables; as such, it aimed only to provide baseline information for further investigations. We already know what factors lead to greater citations in academic journals, for example, publishing review articles and methodological articles.26 We strongly recommend that future studies investigate how the range of what is published in academic journals relates to the use of, and success with, alternative metrics.

For convenience, we combined some categories of articles. As such, we made no distinction between, for example, qualitative and quantitative studies, or between systematic reviews with and without meta-analysis. Future studies could achieve greater granularity and could also investigate trends in the publication of different types of articles and how this relates to different trends in research and the measurement of metrics.

Ethical approval

Ethical issues are not involved in this paper.

Conflicts of interest

All contributing authors declare no conflicts of interest.