Tutorial: Responsible use of large language models
The following policy is quoted from the course syllabus:
AI tools, especially Large Language Models (LLMs), can be powerful aids in learning. They can outline, summarize, explain, and write code. However, these tools also pose risks, including the potential for plagiarism and for students to rely on them without developing their own understanding.
The rapid evolution of AI tools raises important questions about learning, assessment, and the skills you need for the future.
For this course, the goal of learning is articulated in the course learning objectives. Rather than focusing on preventing or punishing “cheating,” we recognize that students will use LLMs on take-home and written material, and should learn to do so responsibly. Assessment will focus on approaches—including tests and in-class activities—that are not readily amenable to LLM use (note: the use of any hypothetical wearable AI device is prohibited). You are encouraged to explore the resources linked on the LLM resources page and elsewhere.
On this page you will find resources to help you decide whether, how, where, and when to use LLMs in your work. Additional suggestions are welcome!
Links and resources
- GitHub Copilot is an extension for VS Code that can provide suggestions for code completion and editing. It is free for students and educators.
- Blog: “Bob Carpenter thinks GPT-4 is awesome” This post shows GPT-4 writing a program in Stan, a statistical programming language, along with the mistakes it makes. Finding and correcting those mistakes requires knowing the Stan language and having a deep understanding of the statistical model, but someone with that expertise could use GPT-4 to accelerate their coding workflow. The comments are also interesting and insightful.
- AI Snake Oil is a blog that seeks to dispel hype, correct misconceptions, and clarify the limits of AI. The authors are computer scientists at Princeton University.
- Everyone is Cheating Their Way Through College is a long-form article that discusses some of the ways students are using LLMs to cheat and underscores the need to rethink the goals of education and how we measure whether those goals are being met.