FSU College of Arts and Sciences AI Taskforce

AI Taskforce Mission

Artificial Intelligence (AI) has the potential to significantly enhance the educational experience of university students. AI-powered educational tools can provide personalized learning experiences, adapting to individual student needs and learning styles. This personalization can lead to improved engagement, comprehension and retention of information, ultimately fostering a more effective learning environment. AI-driven platforms can provide students with 24/7 access to learning resources and support, breaking down traditional barriers of time and place. This aspect is particularly advantageous for students in remote or underserved areas, offering them opportunities for quality education that were previously inaccessible. Additionally, AI can support educators in designing and delivering more interactive and immersive learning experiences using virtual reality (VR) and augmented reality (AR), making complex concepts easier to grasp.

Despite its promising potential, the misuse of AI in education poses significant challenges. Concerns about data privacy and security must be addressed. AI tools often rely on vast amounts of data to function effectively, raising concerns about how student information is collected, stored and used, as well as what data sources are used in the training process. Without stringent policies and guidelines, there is a risk of data breaches, unauthorized access and misuse of sensitive information, which can have severe implications for students' privacy and security. Another concern is the potential for bias in AI algorithms. If not properly monitored and regulated, AI systems can perpetuate and even exacerbate existing biases that undermine the principles educational institutions strive to uphold. Moreover, students may turn to AI or be required to integrate AI in assignments before they’ve mastered basic skills, short-circuiting their learning process. It is important for students to learn how to use AI, like any other tool, but faculty should ensure that its use does not hinder students’ acquisition of a basic skillset in their chosen discipline.

The reputational damage associated with the misuse of AI in education cannot be overstated. When cases of AI misuse or data breaches become public, they can erode trust in the university's ability to protect and serve its students. This can lead to a loss of credibility and prestige, affecting not only current students and staff but also prospective students, donors and partners. Negative publicity can have long-lasting repercussions, tarnishing an institution's reputation and undermining its mission and values.

To safeguard against these risks, universities must establish comprehensive AI policies and guidelines. These should encompass ethical considerations, data privacy and security protocols, mechanisms for bias detection and mitigation and clear accountability structures. By doing so, universities can ensure that AI tools are used responsibly and ethically, aligning with their commitment to provide a high-quality educational experience.

Universities that adeptly integrate AI into their operations stand to significantly enhance their rankings and reputation. By harnessing the power of AI to improve personalized learning experiences, automate administrative tasks and offer innovative educational resources, these institutions demonstrate a commitment to modernizing education and addressing the evolving needs of students. This forward-thinking approach not only attracts prospective students and faculty but also garners recognition from academic ranking bodies, thereby elevating a university's status on both national and global stages.

Conversely, institutions that either ignore the potential of AI or impose overly restrictive measures on its adoption risk falling behind. Such reluctance to adopt new technologies can lead to outdated teaching methods, inefficiencies in administrative processes, and failure to provide students with a competitive, cutting-edge education. In a rapidly advancing technological landscape, these shortcomings can tarnish a university's reputation, making it less appealing to potential students, faculty and partners.

Therefore, it is imperative for universities to strike a balance between leveraging AI's capabilities and establishing robust ethical guidelines with the flexibility to adapt to a changing technological landscape. By doing so, they can ensure responsible AI use enhances educational outcomes, fosters innovation, and strengthens their reputation as leaders in higher education. Those who achieve this balance will not only improve their rankings but also set a benchmark for excellence that others will aspire to follow.

To further support this initiative, we have generated a statement of proposed policy changes, general guidelines and best practices within the FSU College of Arts and Sciences focusing on teaching. This document provides detailed guidelines on ethical AI usage, data privacy and mechanisms for bias mitigation, ensuring that our implementation of AI aligns with our institutional values and commitment to excellence. It will be updated regularly as issues arise in the adoption of AI tools in the college.

FSU College of Arts and Sciences Best Practices for AI and Teaching

Goals of Teaching: All instructors seek to build students’ ability to move through levels of learning, from basic building blocks to complex constructions. Bloom’s Taxonomy (Remembering, Understanding, Applying, Analyzing, Evaluating and Creating) usefully articulates student learning outcomes and makes transparent the scaffolding we use in the classroom. Our collective goal is to help students develop dexterity in problem-solving, the ability to complete complex tasks, and honed critical-thinking skills. In addition, adaptability is a proficiency that transcends all disciplines at the university.

AI (Artificial Intelligence): AI can be used by instructors and students in the classroom and can enhance or hinder student learning depending on the specific application. For many disciplines, employers are looking for ways to leverage AI in the workplace, and they expect universities to prepare students to use it. What “it” is, however, isn’t entirely clear because AI is not static. This mutability sets AI apart from most new technologies. Educators should follow best practices that support their goals for student learning outcomes.

Within that flexibility, there are some general best practices faculty can implement in their classrooms today.

  • Classroom policies: We encourage the university and faculty senate to require in the future that faculty post a clear AI policy in their syllabi. This statement should specify when it is acceptable for students to use AI-generated text and/or other AI software or applications. We do not think it would be appropriate for the university and faculty senate to craft statements for all courses because there is no one-size-fits-all approach to this dynamic technology. Until such a policy is adopted, we believe the best practice is for syllabi to include language on AI usage in the classroom and in graded coursework.
  • Communication: Set aside time to discuss the AI policy after add/drop, when students are in place for the semester. Engage the students with questions about the policy so that they understand the reasons behind the policy, policy enforcement protocols and the policy implications.
  • Suspecting a violation of your classroom policy: The use of AI is not always easy to detect, and AI detectors are unreliable and yield false positives. Be sure to follow the processes we currently use in any scenario of suspected violation of the academic honor policy. Faculty can consult the flowchart if they are unsure of the appropriate decision-making process. Central to the academic honor policy violation process is that faculty discuss the matter with the student. These discussions are critical learning opportunities for our students. We encourage departments to develop AI training for instructors and to update that training as appropriate.
  • Electronic textbooks and quiz/test materials: Many faculty use electronic textbooks and their accompanying learning and testing platforms in their courses. Textbook publishers use AI in developing questions, especially questions generated as a student moves through adaptive quizzes, exams and homework assignments in an online learning platform. Some publishers do not allow faculty to see all the questions being created before a student accesses them, because the tool and/or the material is proprietary. Under Florida law, faculty must not use such material if they cannot review the questions their students will encounter and choose what to assess. Faculty are responsible for all the materials they assign and use to assess student learning.
  • Scaffolding assignments: Looking to adopt an assignment making use of AI in your class? For what purpose are you doing so? How does that AI assignment relate to the student learning outcomes of your class? What skill or skills do you wish students to develop? As with all teaching questions, there is support on campus for helping you achieve your pedagogical goals. See the many resources offered by the Center for the Advancement of Teaching.
  • Departmental guidance: Departments should have discussions about AI in their curriculum. The university is encouraged to develop a repository of resources for faculty related to AI in educational environments.

AI Taskforce Members

Chair Gary Tyson, Department of Computer Science

Gordon Erlebacher, Department of Scientific Computing and Program in Interdisciplinary Data Science

Meegan Kennedy, Department of English

Jennifer Koslow, Department of History

Karen McGinnis, College of Arts and Sciences and Department of Biological Science

J. Piers Rawling, Department of Philosophy

Brad Schmidt, Department of Psychology

Faculty members with questions should contact Gary Tyson at tyson@cs.fsu.edu.