1 Background
Over the past ten years, JCOM has seen a consistent increase in manuscript submissions, with submissions of refereed article types doubling over the last four years. We welcome this growth as a signal of the field’s vitality and an affirmation of JCOM’s relevance as a scholarly home for researchers and practitioners worldwide. Most of the manuscripts we receive reflect genuine scholarly effort, careful thinking, and a real commitment to advancing knowledge.
At the same time, like other scholarly journals, we are receiving some manuscripts that do not meet the standards of rigour and intellectual honesty that define credible scholarship. In particular, we have observed a growing number of submissions that bear the unmistakable traits of irresponsible use of generative artificial intelligence (AI). The use of AI tools in research and publication is not, in itself, unethical or irresponsible. Most major academic publishing houses provide guidance on when authors are required to report AI use in research, what they must report, and where they must disclose it. Problems arise when these tools are used without appropriate care, verification, and transparency.
2 A growing problem with fabricated references
An earlier JCOM special issue explored the influence of AI on the ecosystems of science communication and its impact on research methods and theory in our field [Kessler et al., 2025]. An emerging and troubling trend we have observed is a rise in so-called “ghost references” fabricated by AI tools. The authors’ names sound real; the journal title may exist; the article titles and publication years are often plausible for the field. Yet some or all of these elements are mismatched with other publications (Frankenstein references) or entirely fabricated. In some cases, the digital object identifiers (DOIs) match a relevant journal but not an actual paper, lead nowhere, or (even worse) resolve to real but entirely unrelated papers.
Fabricated references can arise when authors carelessly use large language model (LLM) chatbots to search for or summarise literature, draft academic text, or format references. LLM chatbots generate text that appears to be a reference list by predicting what a plausible reference might look like based on patterns in their training data. With web browsing enabled, LLM chatbots can also retrieve references from publicly accessible webpages, but they still cannot access paywalled databases, unless users upload the text (which many academic publishers prohibit). The results can be remarkably convincing, but often they are entirely wrong. They are, in short, AI hallucinations.
Several scholars have raised concerns about the threat that these fabricated references pose to scientific publishing [see, for example, Jamaluddin et al., 2023; Lee et al., 2023; McKenna, 2024]. Because this is a rapidly emerging phenomenon, much of the empirical research on fabricated citations currently appears as preprints. For example, Xu et al. [2026] evaluated 13 state-of-the-art LLMs and found that between 14% and 92% of 375,440 AI-generated citations were fabricated to some degree.
Illustrative case studies further demonstrate how easily fabricated references can pass undetected through existing publication systems. Spinellis [2025] describes being alerted to a fabricated article attributed to him, prompting an investigation that revealed broader weaknesses in editorial safeguards against AI-generated content. Moore [2025] similarly analysed a paper published in a reputable journal and found that the majority of its references were fabricated, ultimately leading to the article’s retraction.
3 Generative AI and responsible authorship
At JCOM, we recognise the potential value of generative AI tools for supporting scholarly writing, including language editing, translation, and structural organisation. Used carefully and transparently, such tools may form part of an academic author’s toolkit. However, JCOM’s editorial policy is explicit: authors are required to declare, in detail, any use of generative AI in the preparation of their manuscript, whether in the methods or acknowledgements section. This policy exists because generative AI outputs are known to be unreliable in key aspects of scholarly writing, including factual accuracy, correct attribution, and faithful representation of sources. Disclosure is therefore not a bureaucratic formality, but a matter of intellectual honesty, integrity, and essential quality control. Authors must take full responsibility for the content they submit for publication, fully acknowledge the work of others, and clearly and thoroughly disclose their sources in their manuscripts.
4 Our commitment to quality and integrity
JCOM maintains quality control at multiple stages of the publication process. Submitted manuscripts are assessed by an Editor-in-Charge for field relevance and initial quality, and those that proceed undergo double-anonymous peer review by at least two expert reviewers. Reviewers evaluate rigour, validity, originality, methodology, and contribution to the field.
Peer reviewers are a crucial line of defence against fabricated content, but peer review is not designed to function as a forensic audit of reference lists. Fabricated references are particularly problematic because they often appear plausible. We therefore ask our reviewers to remain especially vigilant about the accuracy of references, the plausibility of cited claims, and whether cited sources genuinely support the arguments being made. We also verify references during our editorial and production processes. Nevertheless, even rigorous checks cannot fully compensate for failures of authorship integrity. A growing number of automated systems extract citation metadata from references and validate them against sources such as CrossRef or Semantic Scholar, but these systems are still in development.
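The kind of automated validation described above can be illustrated with a minimal sketch. The code below assumes the public CrossRef REST API (metadata for a DOI is available at `https://api.crossref.org/works/{doi}`); the function names and the word-overlap heuristic for comparing a cited title against the registered title are our own illustrative choices, not any particular production system. The network call itself is left to the caller; only the URL construction and the title comparison are shown.

```python
import re
from urllib.parse import quote

# Public CrossRef REST endpoint for per-DOI metadata (no key required).
CROSSREF_API = "https://api.crossref.org/works/"


def doi_lookup_url(doi: str) -> str:
    """Build the CrossRef metadata URL for a DOI.

    Strips any resolver prefix (https://doi.org/...) and percent-encodes
    the bare DOI so it is safe in a URL path segment.
    """
    bare = re.sub(r"^https?://(dx\.)?doi\.org/", "", doi.strip())
    return CROSSREF_API + quote(bare, safe="")


def titles_match(cited: str, registered: str, threshold: float = 0.8) -> bool:
    """Rough check that a cited title matches the registered title.

    Compares lowercase word sets via Jaccard similarity: a fabricated or
    'Frankenstein' reference whose DOI resolves to an unrelated paper will
    typically score near zero, while minor punctuation/casing differences
    still pass.
    """
    a = set(re.findall(r"\w+", cited.lower()))
    b = set(re.findall(r"\w+", registered.lower()))
    if not a or not b:
        return False
    return len(a & b) / len(a | b) >= threshold
```

A checker along these lines would fetch `doi_lookup_url(doi)`, read the registered title from the JSON response, and flag the reference when `titles_match` fails; real systems layer on author and year checks, but this captures the core idea of validating citation metadata against an authoritative registry.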
5 Our call to authors, and our commitment to ethical publishing
The Committee on Publication Ethics (COPE) states that the minimum requirements for authorship include substantial contribution to the work and accountability for its content and presentation [COPE Council, 2019]. A published paper carries each author’s name and professional reputation. Authorship, therefore, entails ownership, and ownership entails responsibility. Quality control mechanisms cannot replace authors’ ethical obligations.
The responsibility to submit accurate, honest, and carefully prepared manuscripts rests ultimately with those whose names appear on them. We therefore call on all authors submitting to JCOM to take these responsibilities seriously: to verify the accuracy and validity of every research claim and reference they cite, and to disclose fully and transparently if and how generative AI tools were used.
Our commitment to ethical publication practices is grounded in safeguarding JCOM’s standards and recognising that science communication, as a field of research and practice, has a particular stake in the integrity of knowledge production and sharing. In line with this commitment, we will reject, withdraw, or retract manuscripts that contain fabricated references as soon as they are identified — before, during, or after peer review. These actions are not punitive, but necessary to protect the credibility of the scholarly record.
We look forward to continuing to work with our authors, reviewers, and readers to uphold JCOM’s quality, integrity, and trustworthiness.
References
COPE Council. (2019). COPE Discussion Document: Authorship. https://doi.org/10.24318/cope.2019.3.3
Jamaluddin, J., Abd Gaffar, N., & Din, N. S. S. (2023). Hallucination: A key challenge to Artificial Intelligence-Generated writing. Malaysian Family Physician, 18, 68. https://doi.org/10.51866/lte.527
Kessler, S. H., Mahl, D., Schäfer, M. S., & Volk, S. C. (2025). Science Communication in the Age of Artificial Intelligence. JCOM, 24(2). https://doi.org/10.22323/2.24020501
Lee, P. Y., Salim, H., Abdullah, A., & Teo, C. H. (2023). Use of ChatGPT in medical research and scientific writing. Malaysian Family Physician, 18, 58. https://doi.org/10.51866/cm0006
McKenna, J. (2024). Artificial intelligence in scientific publishing. https://blog.mdpi.com/2024/04/03/ai-in-scientific-publishing/
Moore, E. (2025). The case of the fake references in an ethics journal. https://retractionwatch.com/2025/12/02/fake-references-chatgpt-journal-academic-ethics-springer-nature-whistleblowing
Spinellis, D. (2025). False authorship: an explorative case study around an AI-generated article published under my name. Research Integrity and Peer Review, 10(1). https://doi.org/10.1186/s41073-025-00165-z
Xu, Z., Qiu, Y., Sun, L., Miao, F., Wu, F., Wang, X., Li, X., Lu, H., Zhang, Z., Hu, Y., Li, J., Luo, J., Zhang, F., Luo, R., Liu, X., Li, Y., & Liu, J. (2026). GhostCite: A large-scale analysis of citation validity in the age of large language models. arXiv. https://doi.org/10.48550/arXiv.2602.06718
About the authors
Marina Joubert is based at the Centre for Research on Evaluation, Science and Technology (CREST) and the DSI-NRF Centre of Excellence in Scientometrics and Science, Technology and Innovation Policy (SciSTIP), Stellenbosch University.
E-mail: marinajoubert@sun.ac.za Bluesky: @marinajoubert
Michelle Riedlinger is a Chief Investigator in the Digital Media Research Centre at the Queensland University of Technology. Her research interests include emerging environmental, agricultural and health research communication practices, roles for “alternative” science communicators, online fact checking, platformised engagement with scientific research, and, most recently, generative authenticity.
E-mail: michelle.riedlinger@qut.edu.au Bluesky: @riedlinm