AI Taskforce Mission
Artificial Intelligence (AI) has the potential to significantly enhance teaching, learning and administrative operations in higher education.
Despite its promising potential, the misuse of AI in education poses significant challenges. Concerns include threats to academic integrity, data privacy and security, and the potential for algorithmic bias.
The reputational damage associated with the misuse of AI in education cannot be overstated. When cases of AI misuse or data breaches become public, they can erode trust in the university's ability to protect and serve its students. This can lead to a loss of credibility and prestige, affecting not only current students and staff but also prospective students, donors and partners. Negative publicity can have long-lasting repercussions, tarnishing an institution's reputation and undermining its mission and values.
To safeguard against these risks, universities must establish comprehensive AI policies and guidelines. These should encompass ethical considerations, data privacy and security protocols, mechanisms for bias detection and mitigation and clear accountability structures. By doing so, universities can ensure that AI tools are used responsibly and ethically, aligning with their commitment to provide a high-quality educational experience.
Universities that adeptly integrate AI into their operations stand to significantly enhance their rankings and reputation. By harnessing the power of AI to improve personalized learning experiences, automate administrative tasks and offer innovative educational resources, these institutions demonstrate a commitment to modernizing education and addressing the evolving needs of students. This forward-thinking approach not only attracts prospective students and faculty but also garners recognition from academic ranking bodies, thereby elevating a university's status on both national and global stages.
Conversely, institutions that either ignore the potential of AI or impose overly restrictive measures on its adoption risk falling behind. Such reluctance to adopt new technologies can lead to outdated teaching methods, inefficiencies in administrative processes, and failure to provide students with a competitive, cutting-edge education. In a rapidly advancing technological landscape, these shortcomings can tarnish a university's reputation, making it less appealing to potential students, faculty and partners.
To further support this initiative, we have generated a statement of proposed policy changes, general guidelines and best practices within the FSU College of Arts and Sciences focusing on teaching. This document provides detailed guidelines on ethical AI usage, data privacy and mechanisms for bias mitigation, ensuring that our implementation of AI aligns with our institutional values and commitment to excellence. It will be updated regularly as issues arise in the adoption of AI tools in the college.
FSU College of Arts and Sciences Best Practices for AI and Teaching
Goals of Teaching: All instructors seek to build students’ ability to move through levels of learning, from basic building blocks to complex constructions. Bloom’s Taxonomy (Remembering, Understanding, Applying, Analyzing, Evaluating and Creating) usefully articulates student learning outcomes and makes transparent the scaffolding we use in the classroom. Our collective goal is to help students develop dexterity in problem-solving, the ability to complete complex tasks, and sharpened critical thinking. In addition, adaptability is a proficiency that transcends all disciplines at the university.
AI (Artificial Intelligence): AI can be used by instructors and students in the classroom and can enhance or hinder student learning depending on the specific application. For many disciplines, employers are looking for ways to leverage AI in the workplace, and they expect universities to prepare students to use it. What “it” is, however, isn’t entirely clear because AI is not static. AI’s mutability sets it apart from most new technologies. Educators should follow best practices that support their goals for student learning outcomes.
Within that flexibility, there are some general best practices faculty can implement in their classrooms today.
- Classroom policies: We encourage the university and faculty senate to require, in the future, a syllabus statement in which faculty post a clear policy on AI in their courses. This statement should specify when it is acceptable for students to use generative AI text and/or other AI software or applications. We do not think it would be appropriate for the university and faculty senate to craft statements for all courses, because there is no one-size-fits-all for this dynamic technology. Until such a policy is in place, we believe the best practice is for syllabi to include language on AI usage in the classroom and in graded coursework.
- Communication: Set aside time to discuss the AI policy after add/drop, when students are in place for the semester. Engage students with questions about the policy so that they understand the reasons behind it, how it will be enforced and what its implications are.
- Suspecting a violation of your classroom policy: The use of AI is not always easy to detect. AI detectors are unreliable and yield false positives. Be sure to follow the processes we currently use in any scenario of suspected violation of the academic honor policy. Faculty can consult the flowchart if they are unsure of the appropriate decision-making process. Central to the academic honor policy violation process is that faculty discuss the matter with the student. These discussions are critical learning opportunities for our students. We encourage departments to develop AI training for instructors and to update that training as appropriate.
- Electronic textbooks and quiz/test materials: Many faculty use electronic textbooks and their accompanying learning and testing platforms in their courses. Textbook publishers use AI in developing questions, especially questions generated as a student moves through adaptive quizzes, exams and homework assignments in an online learning platform. Some publishers do not allow faculty to see all the questions being created before a student accesses them, because the tool and/or the material is proprietary. If faculty cannot review the questions their students will encounter and select the material on which students are tested, Florida law specifies that they must not use that material. Faculty are responsible for all the materials they assign and use to assess student learning.
- Scaffolding assignments: Looking to adopt an assignment that makes use of AI in your class? For what purpose are you doing so? How does the assignment relate to the student learning outcomes of your course? What skill or skills do you wish students to develop? As with all teaching questions, there is support on campus to help you achieve your pedagogical goals. See the many resources offered by the Center for the Advancement of Teaching.
- Departmental guidance: Departments should have discussions about AI in their curriculum. The university is encouraged to develop a repository of resources for faculty related to AI in educational environments.
AI Taskforce Members
Chair Gary Tyson, Department of Computer Science
Gordon Erlebacher, Department of Scientific Computing and Program in Interdisciplinary Data Science
Meegan Kennedy, Department of English
Jennifer Koslow, Department of History
Karen McGinnis, College of Arts and Sciences and Department of Biological Science
J. Piers Rawling, Department of Philosophy
Brad Schmidt, Department of Psychology
Faculty members with questions should contact Gary Tyson at tyson@cs.fsu.edu.