Using Gen AI as a research tool may be helpful; however, critically evaluating the output of any Gen AI prompt is essential. Verify the accuracy and objectivity of the information provided, and always read the articles that Gen AI cites. Here are a few things to look out for:
Gen AI may generate outputs that include accurate-looking but false or non-existent references [opens a new tab]. It is always important to verify the existence of the sources and the accuracy of the information gathered from them (UWaterloo Libraries, 2023).
Special care and attention should be given to information generated by AI. As of this guide’s creation, ChatGPT, for example, is trained on texts and data produced before 2021 (UWaterloo Libraries, 2023). Fact-checking and consulting additional, authoritative information sources are important. Do not take Gen AI outputs at face value. Check out this example from UWaterloo Libraries [opens a new tab] about ChatGPT providing inaccurate and outdated medical information to a user.
Gen AI is trained on data and information gathered from the internet without regard for the accuracy of that information. The internet contains excellent sources of information as well as poor ones. Because Gen AI learns from almost everything the internet has to offer, its outputs need to be reviewed so that biased information (implicit or explicit) can be identified and weeded out.
Some potential use cases include helping build research outlines, enriching data retrieval, and gathering sources for literature reviews (UNESCO, 2023, p. 29). The better the prompt, the better the resulting output will be.
For example, you might consider using Gen AI to generate some research question ideas. A clear prompt could be: “Write 10 potential research questions for [topic x] and rank them in importance for [the field of research y]” (UNESCO, 2023, p. 30).
Using Gen AI as a data explorer and literature reviewer has its advantages as well. It can quickly gather information, explore a wide range of data, and automate parts of data interpretation. Both uses impose requirements on the user: a basic understanding of the research topic, the ability to verify the information and its sources, and, for more complicated requests like literature reviews, a robust knowledge of methodologies and techniques for analyzing data (UNESCO, 2023, p. 30).
To evaluate AI tools and information, the LibrAIry has developed the ROBOT evaluation test. If you are looking for tools to evaluate articles and other information sources cited or linked from the AI output, check out SIFT and CARS.
For a text-based version: Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry.
SIFT stands for stopping, investigating the source, finding better coverage, and tracing claims to their original context.
SIFT: The Four Moves: Stop, Investigate the source, Find better coverage, Trace claims to the original context. CC BY.
Learn more by exploring our playlist of 4 videos (11 min total runtime).
CARS is primarily for scholarly research. It stands for credibility, accuracy, reasonableness, and support. It’s a process designed to help you determine the reliability of a source by highlighting bias or a lack of evidentiary support.