Since late 2022, we have been tracking the development of generative AI tools, like ChatGPT, Bard, and DALL-E, and the way they are being used in higher education nationally. Generative AI tools can produce new and unique outputs, including human-like text. They assist in tasks such as writing, research, computer programming, and language translation. As the availability of these tools increases, so does their use by students in educational contexts. While many of these uses are positive, there is growing concern that higher education institutions may see a rise in students using these tools to complete work without proper citation or attribution, a form of academic misconduct.
In the fall, the Academic Technology & Instructional Spaces Subcommittee (ATISS), with support from CTET, conducted a review of AI content detectors as part of a larger pilot to evaluate whether our campus should keep Turnitin or switch to another academic integrity platform. The pilot also included several surveys of faculty and student perspectives on the use of AI in the classroom. ATISS has recommended:
- That an “AI Content Detection” service should not be officially enabled on campus (e.g., within Turnitin or a similar academic integrity tool) until shared governance develops guidelines on its appropriate uses and limitations. The pilot study observed instances where human-created text was falsely flagged as AI-generated, an error that could lead to students being wrongly accused of misconduct.
- That SSU retain Turnitin as its academic integrity platform, following a comprehensive pilot comparing it against a competitor.
- That the results of the opinion surveys of campus faculty and students regarding academic integrity and generative AI be considered. Among faculty respondents (n=58), 67% consider plagiarism a major problem on campus and 59% believe students have submitted AI-generated work in their courses. Faculty attitudes toward AI in education are split overall (26% positive, 30% negative, 43% neutral, 1% prefer not to answer).
- Among student respondents (n=163), 49% indicated liking AI tools and 47% indicated having used AI in their courses at SSU. When asked how they have used AI, very few reported using it to complete work on their behalf. Most said they used the tools to organize, structure, or proofread their work, and many reported using them for brainstorming, generating ideas, and clarifying complex course material.
ATISS and CTET expect to release a more detailed analysis of the survey results in the coming weeks.
More work is needed as we continue to investigate these topics and as shared governance begins to develop guidelines on the use of AI in coursework. We continue to stress, however, the importance of faculty establishing their own positions on the use of generative AI in their classes. Learning more about the strengths and limitations of generative AI can empower faculty to make informed decisions consistent with their comfort level.
Faculty, in particular, are encouraged to:
- Determine your own course policies. This is your decision; three sample syllabus statements are available on the CTET website for you to consider and modify as needed.
- Use the resources available. If you are concerned about your course design and assignments, CTET will offer spring workshops on reducing the potential for AI misuse among students and on addressing suspected cases of academic misconduct. You can also email CTET to schedule a one-on-one consultation, if you prefer (email@example.com).
- Direct policy questions to your representatives. Concerns around academic dishonesty are being addressed through faculty governance (with support from CTET); any questions about that policy should go to your school representatives in faculty governance.
Finally, CTET is conducting a “Generative AI in Education” Faculty Learning Community this semester, in which 12 faculty participants are working proactively to develop instructional approaches that incorporate generative AI in their courses. They will share the results of their work with the campus community at a Showcase event on Friday, May 3, from 12 to 2 p.m. CTET will provide more details later this semester.
Adapting to new technology is not new in higher education, but new technology always raises questions about teaching and learning. As I mentioned in my Fall 2023 memo, higher education faced similar questions in the 1990s with the development of the World Wide Web, and we still face them today. Until we have greater clarity (through the research and data we continue to collect) on the risks and value of these tools, we will refrain from establishing blanket policies on the use of generative AI (we will not ban AI at Sonoma State, for example). Instead, we choose to preserve academic freedom, allowing faculty to explore these tools in safe, analytical, and ethical ways while managing their own concerns about academic integrity. We ask only that faculty remember that even the best tools claiming to detect AI-generated content are not 100% reliable, as the ATISS pilot demonstrated.
I encourage you to continue to consider these topics, work with your representatives on shared governance, and help to ensure we develop thoughtful guidance on the usage of generative AI and how it overlaps with academic misconduct.
If you have any further questions, please see CTET’s AI in Education Initiative page for more information.