A student working on our local college paper interviewed me about ChatGPT. I gave him a longer writeup, and looking it over, it seemed I might as well put it out to the world. What better place than Substack, even though this is off-topic of the usual fare here.
However, I know that ChatGPT is affecting movie criticism and academic writing on movies even as I write this. How many movie reviews have you read that were started using AI-generators? How many were completely AI-generated?
Probably a lot more than you’d wish to know about.
Here’s the written interview, with my responses. Thanks as always for reading.
How do you personally feel about ChatGPT?
I have no opinion or feeling on it. It was probably inevitable given the advances in computer science in my lifetime, and even since the 1950s.
What are ChatGPT’s positives and negatives in education?
The obvious negative is that it is, in effect, an instant text generator: anybody with a writing assignment can put that assignment into it, and it outputs a finished draft in no time. Anybody can become ChatGPT’s tool, in other words.
Moreover, because ChatGPT is a machine-learning system, it should keep improving at responding to our queries, especially as it is upgraded from version to version. The consequence could be that overall human writing ability gets much worse, as people rely on these machine-learning systems to do human tasks for them. I don’t think it’s a problem that most people don’t know long division and rely on calculators for that. It is a potential problem, however, when few people know how to put together coherent, grammatically correct sentences. Descending further toward illiteracy is probably not a net good.
The obvious positive is the same as the negative. You can save a ton of time in doing a longer piece of work. It’s analogous to a graphing calculator in calculus: you can use ChatGPT to “solve the equation” once you know how to set up the equation. Do you need a business plan or Substack essay? You can start it with ChatGPT, and there’s no need to type a first draft now.
If you know what you are doing, ChatGPT is an awesome tool for any professional with writing skills.
The ultimate point is whether we are the masters of this technology or its tools, and unfortunately too many people have decided to use ChatGPT as if they are the tool and it’s the master. In our case, students are depending on it and trusting its output. That they would put total trust in it is strange to me, yet maybe it’s no different from my own desire for a self-driving car, in which I’d place complete trust of my life in a computer system. (I certainly have always longed for a robo-butler who will do my dishes and clean my garage!)
The difficulty in education now is that so many classes might have to shift from teaching writing concepts and practicing them, to teaching “how to use ChatGPT” as a major part of a course. This is true of any college course that involves writing, including lab reports in science and literature reviews in the social sciences. Anybody giving a summary assignment is just asking for students to cheat — because they can put a text into it and tell it to summarize — and I say that as a teacher who thinks summary-writing is the best way to get students to learn the basics of academic writing. ChatGPT is forcing me to change from teaching how to write summaries to teaching how to use ChatGPT as a way to output summaries.
The significance of ChatGPT and other machine-learning systems is that they affect every aspect of our institution, no matter the class or major. They also affect every part of a class, especially professor-student relationships. At this point, everybody suspects everyone else, at least a little bit. Paranoia has gone up, trust has gone down. Not good for academic integrity in any way.
Have you received any ChatGPT written papers on assignments? If so, how many?
Yes, several, but who knows how many really?
A few weeks ago, Turnitin [our institution’s plagiarism-checking software] added an “AI Checker” feature, which has caused so much confusion that nobody knows anything anymore. It will tell you, supposedly, how much of a paper has been AI-generated. But when it says that 30%, or 15%, or 5% of a paper is AI-generated, we don’t know how serious that is. I must say that the 5% papers do look AI-generated, but I’m not sure. As a consequence, people have found both false positives and false negatives. Everybody is suspecting everybody else of using it, and students are paranoid that they will get caught when they haven’t used it.
How is our university planning to handle ChatGPT going forward?
I have no idea and I’ve heard nothing about it. I’ve been trying to tell people since early last fall that this would seriously affect everything here. But no word so far on what to do or how to respond. At the least, they need to address AI-writing in their academic integrity policies. We all need clarification on whether students can use this on their assignments and how they might attribute their work to it. You can’t have some professors banning it and others teaching students how to use it. This is completely different from some professors banning the use of the pronoun “I” and others encouraging it.
Is there any question I should have asked, and what would have been your answer to it?
Just note that this isn’t just about writing. Music, art, and video created via AI are coming, or already here, and they are going to create similar problems. Very soon, nobody will be able to tell whether a video/image/essay was generated by AI or by a human. (Actually, there is a possible solution: tie authenticity to cryptographic proof via a blockchain. But I don’t know how an educational institution would do that now.)
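To make that blockchain aside a little more concrete: the core ingredient would be a cryptographic fingerprint of the work, computed at creation time, which could later be anchored on some immutable ledger. The sketch below is only an illustration of that first step, using Python’s standard library; the function name and workflow are my own assumptions, not any existing system.

```python
import hashlib

def content_fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes.

    If this digest were published to an immutable ledger when the
    work was created, anyone could later verify that the file has
    not been altered since: any change to the bytes changes the digest.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video/audio files don't fill memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

This proves only integrity (the bytes are unchanged), not authorship; tying the digest to a person or institution would take the signing-and-ledger machinery I admit I don’t know how a school would run today.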
I therefore strongly recommend reconfiguring your own thinking to say, “that video I’m seeing may or may not be real.” You have to do this with every video you see. The consequences of this new way of thinking for something like the legal system are beyond titanic.
This AI-generating technology is moving at lightning speed compared to the tortoise speed of institutions everywhere. The problem is that institutions, moving at that pace, aren’t adapting to the fact that AI-generated material, produced nearly instantly by any user, affects basically everything in education.
In a way, the integrity of all educational institutions is at stake. You will be able to tell the vision and values of an institution by how they respond to ChatGPT and other AI-generators. Are these institutions just credit-producers, as most now are at the administrative level — i.e., the main thing that matters is whether student-credits are generated, which means we don’t care as much about teachers, curriculum, and honest student assessments — or are institutions ultimately shaping humans into better humans by teaching wisdom and essential skills? Anybody who cares about the latter will be thinking very hard about how to deal with ChatGPT to enhance education itself.
One final point: this is the first time in human history when something besides a human can write for humans. Before this, writing was a powerful tool and trade that humans specialized in, and it’s arguably one of the main things that starkly separate us from the animal kingdom. With ChatGPT, humans now have a choice: let the tool take over that work, and in the process lose the knack for this ancient human task; or master the tool and even improve the discipline of writing.
I would like to be gloomy and say that our current fraudulent age can’t and won’t handle these new tools well, yet in the long run we will probably adapt to them and even thrive from them.