Generative AI tools can indeed make life and work a lot easier! Like any tool, however, generative AI raises issues you should weigh before using it. You are responsible for educating yourself on the ethics of any AI tool you use and for understanding the risks when you are permitted to use it.
Since Gen AI is trained on real-world data, text, and media from the internet, the content it provides may be misleading, factually inaccurate, or outright misinformation (deepfakes, for example). Its output may also be implicitly or explicitly biased, outdated, or a “hallucination.”
AI hallucination: when Gen AI fabricates information or sources even though it is meant to be trained on real-world data (Gonzales, 2023). IBM (n.d.) examines the various causes of AI hallucinations, noting that common factors include “overfitting, training data bias/inaccuracy and high model complexity.”
You must examine output critically. For more information on evaluating Gen AI output, visit the Gen AI and Research section.
Staying safe online does not just apply to social media and password protection. Generative AI is trained on shared data and inputs gathered from users and around the internet. Read more for examples of how various Gen AI tools collect and use your data.
This section is adapted from Teaching with Generative AI LibGuide by BCIT Library Services, licensed CC BY-NC. Adaptations include rewording and condensing.
One may think that technology is objective and neutral. Generative AI, however, is trained on real-world data and information, such as images and text scraped from the internet. This information is rife with human biases.
AI Bias: "also referred to as machine learning bias or algorithm bias, refers to AI systems that produce biased results that reflect and perpetuate human biases within a society" (IBM Data and AI Team, 2023). Some common biases include gender stereotypes and racial discrimination.
Poet Joy Buolamwini shares “AI, Ain't I A Woman,” a spoken word piece that highlights how artificial intelligence can misinterpret the images of iconic Black women: Oprah, Serena Williams, Michelle Obama, Sojourner Truth, Ida B. Wells, and Shirley Chisholm.
This spoken word piece was inspired by Gender Shades, a research investigation that uncovered gender and skin-type bias in facial analysis technology from leading tech companies.
Read more on MIT's Black History Archive.
Attribution:
"Like all of us, AI makes mistakes" from Artificial Intelligence Guide by Bronte Chiang, reused under a CC BY 4.0 International License.
Get hands-on experience with algorithmic bias in this quick activity by The Artefact Group. Who will win the awards at Millennium Middle School? Will your predictions align with those of the algorithm the Most Likely Machine uses to pick the winners?