Guest Editorial
Introducing C&RL’s Generative AI Policy
As the use of generative artificial intelligence (Gen AI) tools expands, journals, editors, reviewers, and authors continue to evaluate their practical and ethical applications within the broader research landscape. College & Research Libraries (C&RL) is no exception. Developing the journal's new policy took several months of discussion and planning. Once the C&RL editorial board decided a policy was needed to navigate expectations of Gen AI's use in author submissions and editorial work, a small working group of three editorial board members formed to assess the current climate and conversations within academic publishing.
The group, composed of this editorial's authors (C&RL Editor Designate Michelle Demeter, C&RL Book Reviews Editor Melissa Lockaby, and C&RL Editorial Board Member Adrian Ho), spent the summer and fall of 2024 conducting a literature review across academic journals and publishers to examine authorship guidelines, data collection and analysis procedures, and journal editorial responsibilities related to Gen AI. The journals in our review included the Journal of Academic Librarianship, Journal of Librarianship and Scholarly Communication, and Science, and we also closely examined the Committee on Publication Ethics' (COPE) Position on Authorship and AI Tools. Our review and subsequent analysis led to the development of a small set of recommendations regarding C&RL's guidance for authors, reviewers, and editors, which was codified into a policy and approved by the C&RL Editorial Board (15 votes "yes"; 1 abstention) in March 2025.
While Gen AI tools (such as ChatGPT, Microsoft Copilot, and Claude) may make writing easier, their use complicates the reviewing and editing processes. Reviewers and editors have always needed to be alert to "traditional" scholarly integrity concerns such as plagiarism, but Gen AI's enhanced ability to "create" text cobbled together from online newspapers, articles, books, journals, and other information resources has complicated scholarly publishing in unexpected ways. What has not changed is the requirement for academic publications to uphold the rigors of scholarly research and writing, which is contingent upon the trust between writers, reviewers, and editors.
With these issues in mind, C&RL’s working group set out to answer a number of questions regarding the journal’s editorial process and ethics. What is C&RL’s stance on authors submitting manuscripts generated by AI tools? Should authors be permitted to use AI tools for editing purposes, including checking spelling, grammar, and/or performing translation? Is AI-generated data acceptable? Should peer reviewers or editors employ Gen AI tools when evaluating a submission or perhaps when proofreading their reviews? What other concerns do we have as an editorial board? How have other academic journals addressed these concerns?
To this end, the working group focused on two specific recommendations:
- Authorship of all submitted manuscripts (including all content and data collection/analysis) should be human.
- Peer reviewers and editors should not use Gen AI tools when evaluating a manuscript or writing a review.
This editorial provides an overview of the questions we considered and the process by which we arrived at these conclusions. Our goal is that prospective authors, as well as current reviewers and editors, will find clarity and transparency in how the C&RL Generative AI policy was crafted, with the understanding that it will be revisited as Gen AI technology and its applications evolve.
Authors
Our readers expect quality content from professional experts within their respective areas across academic and research librarianship. As such, any submitted article should meet a number of standards: the writing must be the author's original work, free of plagiarism, properly cited, of practical or theoretical value to the understanding or advancement of academic librarianship, and of high quality (admittedly a subjective criterion, but one that falls within the purview of the editors and reviewers to determine). As noted in our Author Guidelines, C&RL has maintained these standards, along with the understanding that authors are responsible for the accuracy of any statements or data within their submitted and/or published work.
Because generative AI does not "create" original work, it cannot be considered an author of a manuscript. And because C&RL accepts only "original publications," the use of generative AI blurs the distinction between original work and aggregated information produced as Gen AI output. Additionally, there are no guarantees of data accuracy, integrity, or sound research design, because generative AI tools cannot be held morally responsible for their output.
Chief among the queries raised by the editorial board was the looming issue of how to enforce a policy for a tool whose usage cannot be confirmed and which cannot be held responsible for its output. Ultimately, the editorial board determined that it has always been incumbent upon authors to take responsibility for the work they submit to a journal, and C&RL's stance on this issue has not wavered. If an author chooses to use a generative AI tool in a support role, such as during the prewriting phase (e.g., the ideation process) or as an aid to improve the author's original writing (e.g., translation, editing, revision), its use does not need to be noted. The reasoning behind this decision is analogous to a situation wherein an author might hire a copy editor or translator, or employ a spelling or grammar checker in Microsoft Word. In those situations, we do not ask authors to note the application of such tools or assistance, and the use of generative AI tools in comparable instances did not seem to warrant notation. However, we emphasize again that authors are responsible for any mistakes or errors found in any manuscript submission.
There was notable discussion among the editorial board members regarding whether generative AI should be permitted when crafting a literature review. It was agreed that authors may identify sources to evaluate through an AI query, but any source so identified should be rigorously checked to ensure it exists, is applicable to the developing study, and, of course, is cited correctly. However, that initial query is where the line should be drawn; Gen AI tools should not be used to craft the text of the literature review, which is an essential (though often disparaged or dreaded) component of scholarly articles.
The literature review often requires patience and focus to effectively evaluate what may be an extensive range of established scholarship, providing foundational grounding for one's research while simultaneously identifying support and gaps relevant to one's current project. Some authors may worry that they have forgotten a critical study, or may struggle with how best to synthesize a massive amount of information on their topic. But it is precisely this wrestling with the previous literature, and with the impact of its legacy of scholarship, that makes the literature review important. Relying on AI tools thus diminishes active engagement with the literature, which in turn prevents the author and reader from fully understanding or appreciating the article's contributions within the larger scholarly conversation.
An exception regarding the use of Gen AI, discussed in more detail in the next section on Data, is for manuscripts studying the implications or applications of generative AI or AI-generated data as a topic within academic librarianship. As the applications of Gen AI expand within database search functions and summaries, as well as within library reference services, collection development, and access services, to name a few, we at C&RL anticipate a number of burgeoning areas of study examining how the use of these technologies is impacting user behaviors and our own workflows as librarians.
Data
Many of the large publishers (Sage, Taylor & Francis, Wiley) and university presses (Oxford, Cambridge, MIT) already have policies or statements in place addressing generative AI usage that authors are required to follow. Some guidelines are brief, just a paragraph; others run to the equivalent of two or more pages. Although artificial or synthetic datasets may have advantages, such as eliminating the need to seek IRB approval or to anonymize collected data, their use suffers from a lack of accountability and, potentially, accuracy. None of the publisher policies we reviewed considered the benefits of manufactured data to outweigh its ethical implications or lack of veracity. While only two or three of the publishers the working group examined explicitly barred AI-generated data or its manipulation, at least five** stated that authors were ultimately responsible for their articles and that Gen AI could not be held accountable for any part of the research or paper.
The editorial board took this to mean that Gen AI cannot conduct empirical research since its output is not based on observation and Gen AI tools are not able to produce a human value-laden analysis or interpretation. Additionally, the potential to “hallucinate” or fabricate data raises issues of accuracy, reliability, and applicability. Can results stemming from fabricated data be generalized or offer tangible evidence of what is occurring in libraries, classrooms, or real-world settings?
Ultimately, applying what we found in the published policies we studied, we decided to adopt the majority practice: AI-generated data is barred unless the technology itself is the subject under study. For example, examinations of AI and the user experience, the accuracy of AI content, AI algorithms, or other such research inquiries would be valid as long as the researchers themselves evaluated the findings. Researchers have an ethical responsibility to gather data through legitimate, human-initiated methodologies (e.g., observations or surveys) and to provide a human interpretation of the results. Fabricated data cannot reliably substantiate or confirm a theory, nor can a study based on it be replicated.
Reviewers and Editors
As the use of Gen AI has become more common in scholarly communication, there have been discussions about applying Gen AI to peer reviews for academic publications. For example, it has been suggested that journal editors and/or peer reviewers use Gen AI to refine and/or restructure their comments to make their feedback more constructive, or to maintain a positive and professional tone. Such use of Gen AI is thought to be especially helpful to editors and peer reviewers whose first language is not English. Additionally, it is argued that Gen AI can relieve peer reviewers of having to evaluate the quality of writing in terms of grammar, spelling, and clarity of meaning. Thus, peer reviewers would be able to focus more on examining the validity and reliability of the research documented in the manuscript.
Although Gen AI can provide assistance in the peer review process, its application to scholarly publishing can be contentious and problematic. Traditionally, peer reviews are performed by members of an academic or professional community who possess the expertise and critical thinking skills required to evaluate the quality of a submitted manuscript. While Gen AI can swiftly supply information on various topics, it cannot replicate a peer reviewer's cognitive prowess and/or moral capacity, and thus is unable to evaluate manuscripts in a professional or insightful manner. Moreover, the output of Gen AI is dependent on its training data. If a Gen AI tool were trained on biased, incorrect, and/or outdated data gathered only from certain geographical locations, the tool would not be able to review manuscripts objectively, and its feedback may include misinformation and even perpetuate biases. It is possible that the tool would not be able to discern the novelty or significance of the study being reviewed.
It has been demonstrated that Gen AI tools may provide different responses to the exact same prompt at different times or for different people. This lack of consistency raises concerns about Gen AI's reliability as a potential reviewer. Furthermore, one well-known problem with Gen AI tools is their proclivity for generating "hallucinations," or false output. Their tendency to fabricate facts and bogus citations has been widely reported in the professional literature and mass media. If an AI-generated review includes false information about a manuscript, it will not do the author justice and may negatively affect the editor's evaluation of the manuscript.
In addition to the above-mentioned shortcomings of Gen AI, there are other issues that warrant careful consideration before a journal uses Gen AI for peer review. One significant ethical concern is the consequences of uploading an unpublished manuscript into an AI tool for review without an author’s consent. Doing so may allow the AI tool to train itself on the uploaded content while also making it immediately available to any related query prior to its formal acceptance for publication by the journal. This unethical and unauthorized sharing of an author’s unpublished work would result in a serious breach of confidentiality.
From the research community’s perspective, using Gen AI to review manuscripts can ease the demand for finding sufficient peer reviewers. However, it would deprive colleagues of an opportunity to participate in their own profession’s scholarly and editorial process. As a consequence, it may make it harder for author librarians to develop an understanding of academic journal publishing and to sharpen the skills required for evaluating research within the field. It will also reduce opportunities for service to the profession as an editor or reviewer. Meanwhile, some journal publishers might rely on Gen AI for peer reviews in order to reduce the turnaround time with a sacrifice in the quality of the reviews. This might also result in allowing journals to misleadingly promote themselves as “efficient” publishers in an attempt to attract more submissions and to publish more articles. Overall, such practices would harm the academic library research community and the scholarly communication ecosystem in the long run.
Thus, it is recommended that peer reviewers not employ Gen AI tools for their reviews because peer review requires prudent decision-making based on the reviewers’ expertise, experiences, and critical thinking. While the technology can assist peer reviewers in some aspects of checking their reviews for grammar or spelling, the review process should be driven by human reviewers who are able to be held responsible for the evaluation of manuscripts.
Conclusion
Overall, the C&RL Gen AI policy seeks to clarify ethical and practical author expectations surrounding the use of Gen AI while emphasizing transparency and accuracy in the authorship and editorial process. Because we created C&RL's policy later than many other journals, the editorial board benefitted from having more time to observe how generative AI has been employed in research while examining the policies of other journals. Even though this broader understanding of how authors and editors have been applying these tools made it easier for the C&RL editorial board to develop its policy, the process still resulted in several insightful discussions and revisions. While this policy has been added to the journal's Author Guidelines, it is crucial to note that it is not set in stone. It is expected that, as advances are made in Gen AI technology, the editorial board will revisit the policy as needed to maintain clarity and expectations. Finally, if you are a potential author and are uncertain about whether your submission complies with this policy, or you have other inquiries regarding the use of AI tools or applications in your research or manuscript, please contact C&RL's Editor directly. Bear in mind that much of the landscape is still evolving, and editorial guidance may be fluid, but it is within those conversations that we all learn and ultimately support the quality of academic and research librarianship.
** Other publisher Gen AI policies considered: American Association for the Advancement of Science, American Chemical Society, Elsevier, Springer, Journal of the American Medical Association.

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.