The human cost of the digital revolution
Matt Shipley (he/him) // Co-Editor-In-Chief
You’ve been stressing over a final term paper for two months. Sitting in front of a blank Word doc at 1 a.m. for the umpteenth time, you scroll languidly through Instagram and suddenly spot an alternative: for a small fee, or even for free if you’re frugal, you could have an AI write that dreaded paper for you in less than ten minutes.
Tempting, isn’t it?
AI has, in recent years, become a persistent problem in nearly every field. It has become difficult for an untrained eye to differentiate between a piece of AI art and a human creation. With some editing, AI writing is nearly indistinguishable from passable high-school-level writing.
And that’s really scary.
Not only for me as a writer (hell, I’ll get over that) but for me as an editor, to whom it’s ethically and morally repulsive. With the advent of mainstream AI writing apps, my job has ballooned to include everything from rigorous fact-checking to testing every AI writing app I can find in hopes of learning to spot their output in submitted articles. Suddenly, every time I sit down at my desk, I feel a moral ball and chain holding me down: what if I’m about to pay someone good money for a piece they didn’t write?
Any “revolutionary” new tool, anything that promises to make the life of a creator or business ten times easier, is destined to be misused. Any computer program built to write marketing copy can be used to write articles, papers, essays, and more. And, worst of all, no AI will ever report or complain about plagiarism. They won’t bash a company for using their work without attribution, or call out a student for ripping off their work, word for word, and presenting it as their own.
They’re so easy.
Considering my current ability level, I could write maybe four or five opinion articles every day before running out of stamina. In Courier terms, that’s $80–100 a day: not bad if I’m willing to sell my soul to the gods of the keyboard. If I used AI assistance and broadened my scope to any Canadian newspaper, I could write a hundred articles a day without breaking a sweat.
With that, I would be raking in three thousand dollars (in Courier terms) a day, and none of the work I submitted would even be my own. I could market myself as a freelance writer and “write” marketing copy for any number of companies, driving my arbitrary $3,000-a-day revenue way up. And still, I would have done absolutely zero work. I would have researched papers and publications that would take my work (as any writer would) and marketed myself as a freelancer (as any writer would).
But I would not be a writer.
Pardon my sincerity, but I would be a despicable fraud.
AI writing apps leave no digital signature. There is no tried-and-true way to detect their work, especially in a university environment where (let’s be real) most of us write at a middle- to high-school level. And, in turn, we risk dismissing genuine human work as that of an AI, creating the same conundrum that has plagued the creative community since AI rose to prominence:
How can we, morally, pass judgment on whether or not something is written by a human?
And yet, how can we, morally, stand by and let it happen?