Artificial Intelligence Policy
Sage recognises the value of artificial intelligence (AI) and its potential to help authors in the research and writing process. Sage welcomes developments in this area to enhance opportunities for generating ideas, accelerating research discovery, synthesising, or analysing findings, polishing language, or structuring a submission.
Large language models (LLMs) and other forms of generative AI offer opportunities to accelerate research and its dissemination. While these opportunities can be transformative, such tools cannot replicate human creative and critical thinking. Sage’s policy on the use of AI technology has been developed to help authors, reviewers and editors make good judgements about the ethical use of such technology.
For authors
AI assistance
We recognise that AI-assisted writing has become more common as the technology becomes more accessible. AI tools that make suggestions to improve or enhance your own work, such as tools to improve language, grammar or structure, are considered assistive AI tools and do not require disclosure by authors or reviewers. However, authors remain responsible for ensuring their submission is accurate and meets the standards of rigorous scholarship.
Generative AI
The use of AI tools that can produce content, such as generating references, text, images or any other form of content, must be disclosed when used by authors or reviewers. Authors should cite the original sources in their references rather than citing generative AI tools as primary sources. If your submission was primarily or partially generated using AI, this must be disclosed upon submission so the Editorial team can evaluate the generated content.
Authors are required to follow Sage guidelines, and in particular to:
- Clearly indicate the use of language models in the manuscript, including which model was used and for what purpose. Please use the methods or acknowledgements section, as appropriate.
- Verify the accuracy, validity, and appropriateness of the content and any citations generated by language models and correct any errors, biases or inconsistencies.
- Be conscious of the potential for plagiarism where the LLM may have reproduced substantial text from other sources. Check the original sources to be sure you are not plagiarising someone else’s work.
- Be conscious of the potential for fabrication where the LLM may have generated false content, including getting facts wrong, or generating citations that don’t exist. Ensure you have verified all claims in your article prior to submission.
- Please note that AI tools such as ChatGPT must not be listed as an author on your submission.
While submissions will not be rejected because of the disclosed use of generative AI, if the Editor becomes aware that generative AI was used inappropriately in the preparation of a submission without disclosure, the Editor reserves the right to reject the submission at any point in the publishing process. Inappropriate use of generative AI includes the generation of incorrect text or content, plagiarism, or improper attribution to prior sources.
Further information
- Using AI in peer review and publishing
- Assistive and Generative AI Guidelines for Authors
- New white paper launch: Generative AI in Scholarly Communications
- Committee on Publication Ethics (COPE)’s position statement on Authorship and AI tools
- World Association of Medical Editors (WAME) recommendations on chat bots, ChatGPT and scholarly manuscripts