Generative AI (artificial intelligence) content must be cited like any other resource in scholarly work. Unlike many other resources, however, generative AI is an evolving technology. The outcomes of pending legal cases may carry implications for scholars (and society at large) and may affect both how AI-generated content is used in research and how citation guidelines develop.
For now, researchers should confirm they have permission to use AI-generated content in their work by first checking with their publishers, journals, editors, or professors. If its use is acceptable, it is a good idea to confirm the current AI citation guidance each time you cite such content. Given how rapidly AI is developing, the style manuals may be updated before this guide is.
There is no single standard for citing AI-generated content, but this guide provides an overview of guidance from several style manuals and shares AI policies from different publishers and journals.
When you use or encounter a secondary source during research, you should always locate and verify the original source, whether it is AI-generated or not. Content generated by ChatGPT and other AI tools, however, is not always trustworthy (for example, see “When AI Chatbots Hallucinate”), so you should always confirm a citation is accurate before submitting it.
In addition to confirming with your professors and instructors that you are allowed to use AI-generated content in their course-specific assignments, all students at the University at Albany should review and be familiar with the University’s Undergraduate Academic Integrity Policy and the Graduate Standards of Academic Integrity.
Example from the APA Style blog:
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Source: McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style Blog. https://apastyle.apa.org/blog/how-to-cite-chatgpt
Example of a works-cited-list entry from the MLA Style Center:
“Title of source” prompt. Name of AI Tool, version, Company, Date content was generated, General web address of tool.
“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
Source: How do I cite generative AI in MLA style? (2023, March 17). MLA Style Center. https://style.mla.org/citing-generative-ai/
Examples from The Chicago Manual of Style Online Q&A:
1. Text generated by ChatGPT, March 7, 2023, OpenAI, https://chat.openai.com/chat.
1. ChatGPT, response to “Explain how to make pizza dough from common household ingredients,” March 7, 2023, OpenAI.
Source: Citation, documentation of sources. (n.d.). The Chicago Manual of Style Online Q&A. https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html and https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0423.html
IEEE (per a personal communication between Jason Coleman and IEEE's Support Center): "AI-generated material is not considered a valid reference and should not be cited nor included as a reference."
Nature (and all journals published by Springer Nature, as of January 24, 2023)
"First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.
Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM."
Science Editorial Policies (as of January 26, 2023)
"Artificial intelligence (AI) policy: Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors. In addition, an AI program cannot be an author of a Science journal paper. A violation of this policy constitutes scientific misconduct."
Sources:
Nature. (2023, January 24). Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature, 613, 612. https://doi.org/10.1038/d41586-023-00191-1
AAAS. (n.d.). Science journals: Editorial policies. Science. https://www.science.org/content/page/science-journals-editorial-policies