According to Lance Eliot, there are several crucial ways that generative AI can go awry:
Generated AI Errors: Generative AI emits content telling you that two plus two equals five, seemingly making an error in the calculation.
Generated AI Falsehoods: Generative AI emits content telling you that President Abraham Lincoln lived from 1948 to 2010, a falsehood since he really lived from 1809 to 1865.
Generated AI Biases: Generative AI tells you that an old dog cannot learn new tricks, essentially parroting a bias or discriminatory precept that potentially was picked up during the data training stage.
Generated AI Glitches: Generative AI starts to emit a plausible answer and then switches into an oddball verbatim quote of irrelevant content that seems to be from some prior source used during the data training stage.
Lance Eliot's Summary
Generative AI is an incredible tool.
The sage wisdom goes that you’ve got to know your limitations, including the limitations of the tools that you choose to use. Most tools require that you bring awareness to the table when using the tool. What can the tool do? What can’t it do? What are good ways to use the tool? What are gotchas or pitfalls about the tool that I should know about? And so on.
Excerpts from: Eliot, L. (2023, May 29). Lawyers getting tripped up by generative AI such as ChatGPT, but who really is to blame, asks AI ethics and AI law. Forbes. https://www.forbes.com/sites/lanceeliot/2023/05/29/lawyers-getting-tripped-up-by-generative-ai-such-as-chatgpt-but-who-really-is-to-blame-asks-ai-ethics-and-ai-law/?sh=1872dbc03212
Author: Dr. Lance B. Eliot combines practical industry experience with academic research. Previously a professor at USC and UCLA, and head of a pioneering AI lab, he frequently speaks at major AI industry events.
AI and Academia: The End of the Essay?
Video by Dan Lametti, Associate Professor of Psychology at Acadia University and a Senior Advisor to the technology company OneReach.
A session hosted by Maple League of Universities.
Lametti reviews the strengths and weaknesses of large language models like ChatGPT to demonstrate that they do a poor job of completing most university assignments without a knowledgeable human hand to guide them. He argues that these models are not a threat to higher education but a useful pedagogical tool that can help students learn how to write better papers and can facilitate more meaningful real-world interactions between students and professors. He also offers suggestions for how post-secondary educators might use AI to improve the student experience in and outside of the classroom. In the future, language will be the mechanism by which humans interact with computers, and we should prepare students for this change.