What to do about ChatGPT in higher education?
As generative AI has been mainstreamed and adopted by countless industries, many existential and practical questions have arisen, always circling the fundamental question of where the human ends and the machine begins. One of the more fiery examples has been in education, where debates have raged over what role it should, will, or already does play in the classroom.
I last taught in the Winter term (January to April), and generative AI wasn’t on my mind very much, in part because it was a smaller group of mostly upper-year undergraduates taking a course about labour; it only came up insofar as I mentioned it briefly once in a discussion about the future of work. This coming January, I’ll be teaching again, this time a core curriculum course with many first-years new to university life, and the missives from the university about ChatGPT and other tools have been coming for months now.
Some universities, like the Université de Montréal, have banned the use of ChatGPT unless specifically approved by the professor. Others, like my own, have begun putting out general guidelines which tend to boil down to saying something the tool itself could easily spit out: generative AI is a world of contrasts. In other words, it is dangerous and can easily be misused, but it’s also a powerful resource we should embrace, so please let us know what happens!!!
I am, frankly, less interested in how particular schools or instructors choose to talk about or incorporate this technology. As far as I can tell, we have already missed the one real opportunity here, which would be to critically assess the existing and longstanding weaknesses of higher education — namely the generic nothingness of the typical undergraduate essay — and start moving in new directions for knowledge production and dissemination. On the periphery, some are talking about other experimental assignments or teaching methods (a colleague, for instance, is planning to have his students track their process in detail, from research to outline to writing), but generally the conversation has stalled around the best ways to actively incorporate it, to get ahead of the storm, to avoid looking out of touch or unprepared.
What we have here, then, is a whole lot of free advertising for OpenAI and other companies invested in generative AI, companies who have teased us all by initially allowing us to play around with their tech but who are increasingly putting its best features behind paywalls. All this talk only adds to the power and mystique around generative AI, of course, the hype cycle that, for once, has (some) real substance behind it, which only makes it more corrosive to public discussion.
Universities are in a troubling position, precisely because they have become so corporatized, largely operating today as a way to churn out productive citizens for capital. As a result, the real threat posed by ChatGPT and its ilk is that it could reveal just how meaningless so much of what happens inside hallowed college walls now is, the rote checking of boxes that already treats the joyous act of learning like a machine would. Any real argument in defense of higher education must acknowledge this and focus instead on using this moment, if we can, to evaluate what the university should do if it doesn’t solely cater to the whims of capital.
In reality, I don’t think most teachers realize that they will most likely have no idea whether ChatGPT was used for an assignment. Increasingly, students are sharing best-practice workflows to have the tool do as much of the heavy lifting as possible, and how-to prompt-writing (I guess we’re calling it prompt engineering) workshops are ubiquitous, for students and for faculty. Certainly, at least in the next couple of years, most schools will see a hybrid, middle-ground situation, as many classes include the tech directly and try to distinguish between assignments for which it will be helpful and those for which it will not. The latter case is where the real work has to be done, as pedagogy — the art, method, and practice of teaching — will have to reorient itself back to more live, in-class assessments. Higher ed needs to go back to basics, is what I’m saying: it needs to go Socrates mode.
Whatever. The best teachers have always found inventive or spontaneous ways to encourage critical thinking, and the corporate university’s reliance on the generalized take-home essay as the default way to assess it is not a new problem. In describing generated images, Rob Horning explains: “Generative models explicitly want to save us the work of imagination; they reinforce the pleasure that comes from strictly recognizing what ideas an image is trying to sell us…They deplete images of their utopian potential and make them rote and one-dimensional.” It seems obvious that this applies to generated text, as well. What ChatGPT really does is emphasize extant deficiencies in education that have been propped up by mega donors and corporate administrators, and the hand-wringing reflects this very practical reality just as much as, if not more than, any existential or philosophical musings on the nature of teaching and learning.
I haven’t quite decided what to say to my students in January about all this. I could start, in typical fashion, by pointing out the structural element: the university under capitalism has hopelessly warped the meaning of words like “value,” and the production of knowledge becomes further enmeshed in discourses about convenience and thinking smarter, whereby thinking smarter means getting a large language model to generate a satisfactory output quicker than you ever could if you had to actually stop and think. Which there’s no time for, obviously. Students using ChatGPT are understandably responding to the environment they’ve found themselves in, one defined by the accrual of statistical achievements that signify your ability to efficiently perform your deliverables. If that sounds robotic, that’s because it is, but it is also the strategic positioning of one’s self within a deskilled society that no longer understands, or rather has no interest in understanding, how value should operate between people and between activities.

I am reminded of how videos like this one, depicting a so-called ‘content factory’ in Indonesia, often go viral: partly out of a casually racist reaction to ‘how they do things over there,’ but also out of a rather outmoded reaction to people taking advantage of the shamelessness of platform-based capitalism. We now accept without hesitation the practices of exchange that take place on these platforms, and we are certainly in no position to judge someone doing what they can to earn a living using the tools they’re given, the ones they’ve been learning to appease and navigate for their entire lives.
To me, over-worked undergraduate students trying to finish the same essay prompt a professor has used for over a decade before heading out to one of their part-time jobs, and so turning to ChatGPT for some help, are in the same boat. We should hardly be focusing our attention on AI-detection tools (which won’t reliably identify the software’s use anyway) just to try and get these kids expelled for plagiarism or whatever, when we could be coming up with better ways to communicate to students that, ideally, there’s room for efficiency and convenience just as there’s room, when offered, to stop and think.
Ephemera
404 Media, from former Motherboard reporters and editors, has launched, and looks like a cool new publication covering tech. Probably their biggest story so far is this one on how generative AI tools are being used, of course, to make non-consensual porn drawn from images of countless real people.
If you somehow missed it, Ronan Farrow’s New Yorker article on Elon Musk is worth reading, even if there isn’t much that’s new for (sad, pathetic) people (like me) who have followed his career.
Tech Policy Press has a fantastic overview of generative AI’s impact on artists, recalling the history of the Creative Commons and open-source ideals and the challenges posed now: “A new framework will enable greater control over the use of artists’ data when artists share their work online, whether it is made with pens, pencils, or keyboards. It would incentivize greater restraint on platforms that scrape, train, or sell datasets containing these works without consent. It would motivate the platforms building these tools to develop new ways to rigorously audit and defend against the nonconsensual inclusion of artists’ data, and perhaps to welcome open models that invite greater public scrutiny.”
Song Recommendation: “itch” by quinnie