Thursday, December 23, 2021

Beall's List of potentially predatory publishers and journals

https://www.sciencedirect.com/science/article/pii/S0099133321001750


Abstract

Although there are at least six dimensions of journal quality, Beall's List identifies predatory Open Access journals based almost entirely on their adherence to procedural norms. The journals identified as predatory by one standard may be regarded as legitimate by other standards. This study examines the scholarly impact of the 58 accounting journals on Beall's List, calculating citations per article and estimating CiteScore percentile using Google Scholar data for more than 13,000 articles published from 2015 through 2018. Most Beall's List accounting journals have only modest citation impact, with an average estimated CiteScore in the 11th percentile among Scopus accounting journals. Some have a substantially greater impact, however. Six journals have estimated CiteScores at or above the 25th percentile, and two have scores at or above the 30th percentile. Moreover, there is considerable variation in citation impact among the articles within each journal, and high-impact articles (cited up to several hundred times) have appeared even in some of the Beall's List accounting journals with low citation rates. Further research is needed to determine how well the citing journals are integrated into the disciplinary citation network—whether the citing journals are themselves reputable or not.

Introduction

Beall's List (Beall et al., 2020) is the oldest and best known list of potentially predatory Open Access (OA) journals. It identifies more than 1300 publishers and nearly 1500 additional journals that authors are encouraged to avoid. (If a publisher appears on the list, all its journals are considered suspect.) The List was intended as a means of identifying predatory journals and publishers—those that pose as peer-reviewed outlets but accept nearly all submissions in order to maximize revenue from the article processing charges (APCs) paid by authors, their institutions, and their funding agencies. However, publishers' intentions, predatory or otherwise, are difficult to gauge, and the compilers of the List have relied on a set of subjective criteria that represent departures from the established norms of scholarly communication. For instance, the 54 warning signs identified by Beall (2015) include unusually rapid peer review, failure to identify editors and board members, boastful language, misleading claims about index coverage or citation impact, lack of transparency in editorial operations, absence of a correction/retraction policy, poor copyediting, assignment of copyright to the publisher rather than the author despite the journal's OA status, and the use of spam e-mail to solicit authors or board members.

The use of these criteria is not unreasonable, given the number of OA journals and the difficulty of evaluating scholarly content or peer review practices across a wide range of academic disciplines. Arguably, however, a comprehensive assessment would require the consideration of at least six distinct dimensions of journal quality:

1. Editors' and publishers' intentions, legitimate or predatory. Intentions are difficult to evaluate, however, since organizations with limited resources and little publishing experience may be earnest in their efforts to provide for rapid publication in a high-growth field, to support a small but distinctive research community, or to address topics of interest to populations whose needs may be overlooked by the larger commercial publishers, scholarly societies, and university presses.

2. Adherence to established norms of peer review such as anonymous review, use of expert reviewers, absence of bias in reviewer selection, adequate time for review, a reasonable acceptance rate, and reviews intended to improve the paper through revision. This standard refers to the integrity of the process rather than its outcomes.

3. Adherence to norms of scholarly publishing other than peer review: e.g., transparency, economic sustainability, provisions for long-term preservation of content, reasonable fees, professionalism in presentation and web design, and a web interface that facilitates discovery and access.

4. Scholarly quality, as assessed by expert evaluators. The evaluators' assessments may account for factors such as clarity of research questions and goals, data quality, appropriateness of methods, statistical rigor, empirical grounding of interpretations and conclusions, extent to which the results can be generalized to other contexts, uniqueness, innovation, and importance within the framework of previous research.

5. Impact on subsequent scholarship: e.g., number of times cited, outlets in which the journal is cited, rate at which citations accrue, multidisciplinary impact, and extent to which the theories, perspectives, and methods introduced in the journal are incorporated into later work. Although scholarly impact is related to quality, it is also influenced by other factors (Bornmann et al., 2012; Bornmann & Leydesdorff, 2015; Tahamtan et al., 2016).

6. Impact on teaching and practice, as shown by citations in textbooks, citations in students' papers, inclusion of articles in course syllabi and reading lists, and influence on professional norms and standards.

The criteria used to evaluate journals for Beall's List focus almost exclusively on dimensions 2 and 3, neither of which directly represents the scholarly quality of the papers published in the journals. For instance, sending spam e-mail to potential authors is unproductive and likely to generate a negative reaction (Kozak et al., 2016; Lund & Wang, 2020), but it does not tell us anything about the journal's quality or impact. Moreover, even journals established solely to generate revenue may publish work that is legitimate and innovative. That is, a publication outlet may be useful in functional terms even if the publishers' intentions are predatory.

This study uses Google Scholar (GS) data to evaluate the scholarly impact of the 58 accounting journals on Beall's List as well as a comparison group of 61 presumably non-predatory accounting journals indexed in Scopus. Citations per article and CiteScore percentile are calculated or estimated for each journal based on data for more than 13,000 articles published from 2015 through 2018. The results focus on four research questions:

1. Is inclusion in Beall's List necessarily associated with low citation impact? Are quality dimensions 2 and 3 (the criteria used by Beall) good indicators of a journal's status with regard to dimension 5, which is widely accepted in other contexts as an indicator of scholarly merit?

2. Where would each of the Beall's List accounting journals fall within the hierarchy of Scopus accounting journals if they were included in Scopus? Do some have substantially higher citation rates than others?

3. How extensive is the variation in citation impact among the articles within the Beall's List journals? Are the more highly cited articles concentrated in the more highly cited journals, or can they be found in the less cited journals as well?

4. Do Google Scholar citation data provide an effective means of gauging the citation impact of journals not included in Web of Science or Scopus? For the journals included in both GS and Scopus, is citations per article closely related to CiteScore?
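The two journal-level measures named above, citations per article and an estimated CiteScore percentile relative to the Scopus accounting journals, can be sketched in a few lines. This is a minimal illustration only: the function names, sample data, and the "percent of reference journals at or below this score" percentile convention are assumptions for demonstration, not the authors' actual procedure.

```python
def citations_per_article(citation_counts):
    """Mean number of citations across a journal's articles."""
    return sum(citation_counts) / len(citation_counts)

def estimated_percentile(journal_score, reference_scores):
    """Percent of reference-set journals scoring at or below this journal.

    In the study's setting, reference_scores would be the CiteScore-style
    values of the Scopus accounting journals.
    """
    at_or_below = sum(1 for s in reference_scores if s <= journal_score)
    return 100.0 * at_or_below / len(reference_scores)

# Illustrative (invented) data: per-article GS citation counts for one
# journal, and scores for a small reference set of ten Scopus journals.
journal_articles = [0, 1, 0, 3, 2, 0, 5, 1]
scopus_reference = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.5, 8.0]

cpa = citations_per_article(journal_articles)      # 1.5
pct = estimated_percentile(cpa, scopus_reference)  # 30.0
```

With this convention, a journal averaging 1.5 citations per article would sit at roughly the 30th percentile of the ten-journal reference set, the same kind of placement the study reports (e.g., an average estimated CiteScore in the 11th percentile).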

Context and previous research

The methods and results of this study are informed by research on (1) predatory journals and journal lists, (2) the impact and quality of predatory journals, and (3) journal ratings and rankings in accounting and related fields.

Predatory journals and journal lists

Beall's definition of predatory publishers is grounded in the publisher's motives (Beall, 2012; Cobey et al., 2018; Krawczyk & Kulczycki, 2021). Motives are not always easy to discern, however, and some legitimate journals display idiosyncrasies that might lead potential authors to question their legitimacy (Eriksson & Helgesson, 2018; Grudniewicz et al., 2019; Siler, 2020). For instance, Olivarez et al. (2018) discovered that several well-regarded library and information science (LIS) journals display at least some predatory characteristics, and Shamseer et al. (2017) found that the most distinctive characteristics of predatory journals are attributes unrelated to the content of the published articles, such as poor web site design.

Beall's List and its closest equivalents, such as Cabell's Predatory Reports, are valuable tools in the fight against predatory journals. Nonetheless, these lists have been criticized for imprecise evaluation criteria, inconsistent application of those criteria, lack of transparency, infrequent updates, bias against publishers in developing countries, and the absence of systematic mechanisms for the re-evaluation of publishers and journals (Berger & Cirasella, 2015; Chen, 2019; Davis, 2013; Dony et al., 2020; Esposito, 2013; Kakamad et al., 2020; Kscien Organization, 2021). Moreover, nearly all such lists focus on just one or two dimensions of journal quality. Of the 15 predatory journal lists identified by Koerber et al. (2020) and Strinzel et al. (2019), none evaluate journals based on a systematic examination of their scholarly quality or impact. This is perhaps not surprising, since content-based reviews require the detailed evaluation of individual articles.

Impact and quality of predatory journals

Three recent investigations have examined the citation impact of the articles published in predatory journals. The most thorough is that of Björk et al. (2020), who compared the five-year GS citation counts of articles in 250 predatory journals (from Cabell's watchlist) and 1000 presumably non-predatory journals (from Scopus)—one article from each journal. Both sets of journals covered a range of subject areas. The watchlist articles had an average of 2.6 citations, and 56% were uncited after five years. In contrast, the Scopus articles had an average of 18.1 citations, and only 9% were uncited after five years. Investigating further, Björk and associates found that just 10 predatory journals (10 articles) accounted for nearly half the citations, and that at least 4 of the 10 would not be deemed predatory based on their current characteristics. They concluded that although most predatory journals have “little scientific impact,” some do include articles that could have been placed in much better journals if the authors hadn't “opted for the fast track and easier option of a predatory journal” (Björk et al., 2020: 1, 8). Similar findings have been reported by Bagues et al. (2019), who reviewed the papers published by Italian academics in Beall's List journals from 2002 to 2012, and by Nwagwu and Ojemeni (2015), who compiled information on the papers published in 32 predatory biomedical journals based in Nigeria.

The meaning assigned to these citation statistics is not always straightforward, however. Not all citations have the same purpose, and not all reflect well on the cited work. A paper may be cited to highlight an important finding, to honor the groundbreaking work of early investigators, to point out the flaws in a previous study, to draw support from the dominant perspectives of a field or subfield, to critique a poor research design, or simply to acknowledge the existence of a research project or report (Beed & Beed, 1996; Ha et al., 2015; Nisonger, 2004). Likewise, a predatory journal's relatively high citation rate may be interpreted in multiple ways.

The current analysis was undertaken based on the assumption that relatively high citation counts would reflect favorably on the Beall's List journals—that a higher citation rate for a “predatory” journal demonstrates that it contributes to the literature despite the factors working against it. In terms of evaluators' perceptions, the deck is stacked against Open Access journals and against the developing countries where many Beall's List journals operate (Albu et al., 2015; Berger & Cirasella, 2015; Frandsen, 2017; Nwagwu & Ojemeni, 2015; Shen & Björk, 2015; Xia et al., 2015; Yan & Li, 2018). Articles in Beall's List journals are generally excluded from the foremost bibliographic databases, so they are presumably harder to find, and it is reasonable to assume that authors will be biased against citing any journal that has been publicly labeled as predatory. Within this framework, citations to an article in a Beall's List journal indicate that subsequent authors have identified it as a genuine contribution despite the biases that make them inclined not to cite it.

There is an alternative interpretation, however. Several authors have argued that citations to the papers in predatory journals do not necessarily indicate that the articles are legitimate (Akça & Akbulut, 2021; Anderson, 2019; Frandsen, 2017; Nelson & Huffman, 2015). In fact, they regard these citations as potentially harmful—as indicators that the scholarly literature has been polluted with flawed methods and potentially false results. Viewed from this alternative perspective, the more highly cited Beall's List journals have been successful at claiming legitimacy for papers that may be inaccurate, biased, or otherwise misleading.

The actual scholarly quality of the papers in predatory journals is therefore central to understanding the implications of high or low citation rates. However, just two studies have directly addressed this issue. McCutcheon et al. (2016) compared 25 articles published in Beall's List psychology journals with 25 published in Scopus journals of intermediate citation impact. Each article was blinded, then reviewed by the research team on the basis of five criteria. Although the team found more than four times as many statistical and grammatical errors in the Beall's List articles, the score differentials for the other three criteria (literature review, research methods, and overall contribution to science) were not nearly as pronounced. Overall, McCutcheon and associates were struck by the variations in quality among the Beall's List articles. Some were of uniformly low quality while others received high marks in every area. Likewise, a review of 358 articles in Beall's List nursing journals revealed that 48% of the papers were poor, 48% average, and only 4% excellent, and that many had numerous errors in writing or presentation. Nonetheless, only 5% reported findings “potentially harmful to patients or others” (Oermann et al., 2018: 8). That analysis may have been biased, however, since all the assessors knew in advance that the papers had been published in predatory journals.

Journal ratings and rankings in accounting and related fields

A clear distinction can be made between the quality of a paper and the quality of the journal in which it appears. The evaluation of every paper is not always feasible, however, so journal ratings and rankings remain important to researchers, evaluation committees, universities, policymakers, and funding agencies.

Citation-based rankings of accounting journals can be found within each of the three large, multidisciplinary citation databases: Scopus, Journal Citation Reports (part of Web of Science), and Google Scholar (Walters, 2017). Additional ratings or rankings of accounting journals have appeared in the scholarly literature. These include at least five investigations based on actual behaviors such as publishing, indexing, and citing (Beattie & Goodacre, 2006; Chan et al., 2009; Chan et al., 2012; Chang & McAleer, 2016; Guthrie et al., 2012) as well as several based on surveys that ask respondents about their opinions, choices, or hypothetical behaviors (Australian Business Deans Council, 2019; Bonner et al., 2006; Chartered Association of Business Schools, 2018; Harzing, 2020, June 24; Lamp, 2010). For scholars interested in the relative standing of the journals on Beall's List, all these information sources have a major disadvantage: they seldom include the journals at the lower end of the prestige hierarchy. Of the 58 active accounting journals on Beall's List, just three are included in Scopus and none are included in Journal Citation Reports. No more than three are included in the journal ratings of the Australian Business Deans Council (2019), the Australian Research Council (Lamp, 2010), or the Chartered Association of Business Schools (2018), and none are included in any of the other publications mentioned here.

Fortunately, Google Scholar does cover most of the journals ranked lower in the hierarchy. It therefore allows us to compare the accounting journals on Beall's List with the presumably non-predatory accounting journals indexed by Scopus. Although GS has been criticized for bibliographic errors that have limited its effectiveness as a citation database (Bar-Ilan, 2006; Bauer & Bakkalbasi, 2005; Jacsó, 2005a; Jacsó, 2005b; Jacsó, 2006), these errors have become less common over time (Doğan et al., 2016). Recent studies have shown that Google Scholar's coverage of the scholarly literature is comprehensive and unbiased (Chen, 2010; Delgado López-Cózar et al., 2019; Harzing, 2013; Harzing, 2014; Martín-Martín et al., 2017; Walters, 2007), and that its citation counts are consistent with those obtained from Scopus and Web of Science (Harzing, 2013; Harzing & Alakangas, 2016; Harzing & van der Wal, 2009; Martín-Martín et al., 2018; Prins et al., 2016). Within the field of accounting, Rosenstreich and Wooliscroft (2009) and Solomon and Eddy (2019) recommend GS for citation analysis due to its comprehensive coverage.

READ THE REST HERE

