The pros and cons of an ever-evolving technology
Jasmine Garcha (she/they) // Contributor
Mesh Devkota (he/him) // Illustrator
Generative artificial intelligence refers to computer systems that respond to prompts by producing content such as text or images. These tools are open for public use through sites like ChatGPT, and their recent rise in popularity has sparked debate about the ethics behind their use. If ChatGPT can produce paragraphs on any prompt fed to it, what does that mean for academics?
Currently, Capilano University doesn’t have a general AI policy; professors are required to write individual policies for their classes. Should this tool be regulated at a higher level, or should professors remain in control?
We can all assume the downsides of using generative AI in school mostly involve plagiarism and a lack of creativity or free thinking, but what about the benefits? Friends with ADHD often tell me that they find it difficult to read through dense documents for classes, so they ask ChatGPT for summaries instead. This usually comes up when I mention that I like to select the text and click “speak” so my computer reads aloud while I follow along, which helps me understand what I’m reading.
Higher education wasn’t made for neurodivergent people, but education is a right, and it should therefore be made as accessible as possible. A lot of professors attach readings as photos of printed documents, which means I can’t select the text. Using ChatGPT to summarize billion-year-old theories when I can’t find an online version of the text saves me five days of brain fog… but I digress.
In my search for CapU professors’ AI policies, I found only one syllabus that mentioned AI tools, and it clearly prohibited their use.
The president’s past statements have made it clear that CapU is generally against AI usage. But a general stance that merely leaves room for professors to elaborate if they so choose means that few professors will. In my opinion, this isn’t the best course of action. Since the stone tablet, technology has been ever-evolving; there’s no use in pretending it isn’t happening.
Stanford’s policy is brilliant, comparing AI to getting assistance from another person: you can have a friend or tutor explain the material to you, but you can’t have them fill in the answers for you. This policy lays it out nice and easy. CapU should take notes.
Harvard University’s student handbook includes very specific instructions: students must seek permission from several offices before using generative AI, and they must cite information they receive from it as they would any other source.
Most Canadian universities, it seems, don’t take a direct general stance one way or the other on the usage of generative AI. However, these same universities take precautions against rule-breaking, like requiring the use of Turnitin (itself an AI tool), which can detect plagiarized or AI-generated text.
Using Turnitin and setting a maximum similarity percentage is smart (quoting from a source gets flagged as plagiarism, so a small allowance is necessary). Banning it? Downright idiotic. Go ahead and put Adam and Eve in the garden and tell them they can’t eat the apple…
I believe it’s easier to regulate something than to ban and prevent it. As for who should create the policy, CapU’s website should host a page stating a general policy (something similar to Stanford’s, with clear, simple guidelines), and each professor should be required to give a more specific stance in their syllabus.