Tutorial: Responsible use of large language models

The following policy is quoted from the course syllabus:

AI tools, especially Large Language Models (LLMs), can be powerful aids in learning. They can outline, summarize, explain, and write code. However, these tools also pose risks, including the potential for plagiarism and for students to rely on them without developing their own understanding.

The rapid evolution of AI tools raises important questions about learning, assessment, and the skills you need for the future.

For this course, the goal of learning is articulated in the course learning objectives. Rather than focusing on preventing or punishing “cheating,” we recognize that students will use LLMs on take-home and written material, and should learn to do so responsibly. Assessment will focus on approaches—including tests and in-class activities—that are not readily amenable to LLM use (note: the use of any hypothetical wearable AI device is prohibited). You are encouraged to explore the resources linked on the LLM resources page and elsewhere.

On this page you will find resources to help you make your own judgments about whether, how, where, and when to use LLMs in your work. Additional suggestions are welcome!