What it means to be an ethical and responsible user of AI
At UNO, we encourage faculty and students to explore how AI can be beneficial in education, while making sure AI is used in ethical and responsible ways. Using generative AI comes with real responsibility: ethical considerations, fact-checking, proper attribution, and transparency are all vital to being an ethical consumer and user of generative AI.
Below are some basic guidelines for generative AI use that align with our guiding principles: Integrity, Transparency, Equity, and AI Literacy.
AI is a tool in your toolbox.
While AI is a tool in your toolbox, it should not be the only one. AI is not a replacement for completing work or for learning. It can be a great supplement, study buddy, or collaborative partner, but it should never take the place of you completing an assignment, quiz, or task that requires your own thinking.
If you're using AI to help in your studies or to refine your work, always make sure the final submission represents your own thoughts and skills. When you do use AI, be clear about where and how you used it.

Always cite when and if you used AI.
Just as you cite authors when incorporating their work into a paper, all work completed with the help of AI should be cited. Failing to give proper attribution can give others a false perception of how something was created and may be considered a violation of academic integrity.
In many classes, instructors will set expectations for how AI should be cited, whether in APA, MLA, or a more informal attribution style. As a student, you are responsible for following the citation guidelines your instructor has asked of you. For a brief overview of citing generative AI, check out the AI Prompt Book page on citing.

Ensure you are using AI ethically.
By design, generative AI is trained on data and on our interactions with the system. Used incorrectly, it can have unintended consequences that put you and/or others at risk.
When using generative AI, it is critical that you do not feed private or confidential information into the AI tool or use AI in a way that violates intellectual property rights or course policies.
This means not entering any identifying information (address, Social Security number, etc.) and not uploading specific course materials or readings into the AI tool unless your instructor has approved it. Lawsuits, identity theft, and privacy violations are all potential consequences of entering this kind of information.
Do not accept AI output as fact.
Generative AI models are known to provide false, misleading, or biased responses. When using AI, do not accept its answers at face value; instead, think critically about the output you receive. When using AI, you should always:
- Keep an eye out for biases and inaccuracies: Assume there are biases and inaccuracies in the responses. Consider what needs to change to remove them, or how you might engineer your prompt differently.
- Evaluate results: Compare the responses to the knowledge you've gained through readings and class lectures. What makes sense, and what seems questionable?
- Fact check: See if you can find the same article, book, author, or information through a Google or library search.
Learn how to be a good prompt engineer.
A common phrase in prompt engineering is "garbage in, garbage out." When you give the AI tool vague instructions, you are more likely to get inaccurate results that don't work for you. Good prompt engineering is an important part of using AI effectively.
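To illustrate the difference, here is a hypothetical pair of prompts (the topic and details are invented for the example):

```
Vague prompt:
"Tell me about the Civil War."

Clearer prompt:
"I'm a first-year history student preparing for a class discussion.
In 3-4 bullet points, summarize the main economic causes of the
American Civil War, and suggest one question I could ask my classmates."
```

The clearer prompt specifies a role, a format, a scope, and a goal, which also makes the output easier to evaluate against what you have learned in class.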
Good prompts include clear directions and details about what you want the AI tool to do. For more information on how to become a good prompt engineer, check out the module on prompt engineering in the AI Prompt Book for Students.

You are responsible for your assignment submissions.
While you may be allowed to explore AI as a tool for learning and productivity, you are ultimately responsible for the accuracy, integrity, and originality of the work you submit. If you submit AI-generated work that is biased, inaccurate, improperly cited, or otherwise in violation of academic integrity policies, you may face the consequences.
As a student at UNO, it is your responsibility to critically evaluate AI-generated content, ensure proper attribution of AI use, and uphold academic standards in all submissions.

To help you identify appropriate and inappropriate uses of AI in your studies, a stoplight approach works well: green means go, yellow means proceed with caution, and red means no.
The examples below are categorized by whether they are great use cases, potentially good use cases, or not good use cases. Review them and consider if and when AI might be helpful to you in your experience at UNO:
Green = Great use cases
- Personal proofreader: Use AI to evaluate and give feedback on grammar, spelling, and punctuation before you submit your writing.
- Brainstorming: AI can be great for brainstorming topics or ideas if you're stuck or struggling to get started.
- Organizing thoughts and ideas: Ask AI to take the notes you took in class and organize them into the main ideas from the lecture.
- Ask for examples: Ask an AI tool to provide examples relevant to your interests so you can apply course content to subjects you understand.
- Study partner: Provide an AI tool with vocabulary and definitions, and ask it to quiz you on the material.
Yellow = Potentially good use cases (with some caveats)
- Receive feedback on papers before submitting: Ask AI to grade your assignment against a rubric or formatting instructions. Caveat: AI is not your instructor and may grade more leniently (or more strictly) than your instructor does. Your instructor's grade is the final grade.
- Interactive learning opportunity: Ask an AI tool to explain concepts, give examples, or quiz you on the content. Caveat: AI can occasionally provide incorrect information; cross-check the results with what you've learned in class, or use only class-provided custom GPTs.
- Rephrasing content: While studying, if you don't understand a difficult concept, ask an AI tool to clarify it or rephrase it in a different way. Caveat: When doing this, make sure you are breaking down content, not dumbing it down.
Red = Not good use cases (violations of academic integrity)
- Writing a paper: Asking AI to write your paper and submitting the output as your own. If you are allowed to use AI to create an initial draft, ensure the final product represents your own thoughts and work.
- Using AI to get direct answers for homework problems, discussion posts, or essays: Asking AI for answers to homework questions or asking it to write a discussion post replaces learning with AI. This creates over-reliance on AI and prevents you from developing necessary skills (e.g., critical thinking and problem-solving).
- Using AI on exams: Using AI on exams or summative assessments prevents instructors from measuring your mastery of the content. Summative assessments and exams are not an appropriate place to use AI.
- Using AI without giving proper attribution: Using AI to write or complete assignments without giving proper attribution for how you used the AI tool is a violation of UNO's academic integrity policy.
The suggestions above are designed to help you see some of the possibilities and limitations of using AI in your studies. They are not an exhaustive list of use cases or limitations, but they should help you think through when and whether AI can be beneficial to your studies.
It is also important to note that while we offer these suggestions, what matters most is that you follow the guidance of your instructor. A syllabus statement or instructor guidance overrides the examples above.
For more opportunities to learn about AI and examples of how to use it, check out UNO's AI Prompt Book for Students.