
Artificial Intelligence (AI)

A guide to Artificial Intelligence and academic integrity for students.

Assessing AI-Generated Content

Accuracy

Generative AI tools like ChatGPT can produce many kinds of content, from quick answers to a question to cover letters, poems, short stories, outlines, essays, and reports. However, that content often contains errors, false claims, or "plausible sounding, but completely incorrect or nonsensical answers" known as hallucinations or confabulations. Take the time to verify the generated content so you catch these problems.

Generative AI can also create fake images and videos so convincing that they are increasingly difficult to detect. Be careful which images and videos you trust, as they may have been created to spread disinformation.

Bias

Generative AI relies on information gathered from the internet to create new output. Because that information is often biased, the newly generated content may contain similar biases. Examples of potential bias include gender bias, racial bias, cultural bias, political bias, religious bias, and so on. Closely scrutinize AI-generated content to check for inherent biases.

Comprehensiveness

Even when AI-generated content is accurate, it may still be selective, depending on the algorithm used to create the responses. Although AI chatbots draw on a huge amount of information from the internet, they may not be able to access subscription-based information secured behind paywalls (like most peer-reviewed research). Content may also lack depth, be vague rather than specific, and be full of clichés, repetitions, gaps, and even contradictions.

Currency

AI tools may not always use the most current information in the content they create. Some tools have cutoffs in their training data, which means they don't "know" anything past a certain date. Others may simply fail to recognize when currency matters. In many disciplines, it is crucial to have the most recent information available, so always check the publication dates of any sources used in AI-generated texts.

Sources

False sources are a form of AI hallucination that is particularly relevant to research. AI tools may provide citations attributed to an author who usually writes about your topic, or even name a relevant, well-known journal, while the title, page numbers, dates, and sometimes the authors are completely fictional.

Not crediting sources of information and creating fake citations are both cases of plagiarism, and therefore breaches of academic integrity. You are responsible for the work you turn in, even if a fake source was generated by an AI tool. Check the VSCS Libraries Discovery Search and/or Google Scholar to verify whether the sources are correct or even exist. You can also get help from a librarian.

Copyright

Generative AI tools rely on the vast repository of existing works they were trained on to create new output, and a new work may infringe on copyright if it draws on copyrighted material.

There have already been several lawsuits against tech companies that used images found on the internet to train their AI tools. For example, Getty Images is suing Stability AI, the company behind Stable Diffusion, for using millions of pictures from Getty's library to train its AI tool, claiming damages of $1.8 trillion.

There is much debate about who owns the copyright to a product created by AI. Is it the person who wrote the code for the AI tool, the person who came up with the prompt, or the AI tool itself? So far in the U.S., AI-generated works are not protected by copyright, although that could change.

Related Information

Frequently Asked Questions