As the use of artificial intelligence becomes more common in professional communications, it brings with it a host of thorny ethical issues, according to one expert.
Crystal Chokshi is an assistant professor at MRU and an expert in critical media studies. Chokshi says the rise of large language models such as ChatGPT is bad news for the environment, for workers, for the accuracy of information and for human connection.
In this Q&A, Chokshi shares her thoughts and critiques of AI’s role in shaping our industry — and humanity.
Halluma Seklani: In your opinion, how does AI impact emerging or current communication professionals?
Crystal Chokshi: What we’ve been sold, especially by generative AI companies, the ChatGPTs and Copilots of the world, is that these technologies are meant to make our work better and faster by eliminating the grunt work, the brainstorming, the tasks we don’t typically enjoy. And of course, in an industry like communications or public relations, where we have to produce a lot, these technologies look quite attractive because they’re meant to speed up our tasks. That’s the way big tech has very deliberately sold them.
There are other ways to look at these technologies. The first is that by jumping aboard technologies that make life faster, we are always doing just that: making life faster. These technologies don’t take away work; they just give us more. So even if we’re eliminating the grunt work, we are replacing that work with something else. The promise technologies have sold us for years, of making life better, doesn’t really hold. They just make us work in different ways, faster. I think of that as a culture of acceleration that we aren’t questioning. Let’s just do more, faster, all the time.

I think as communicators we need to be mindful of that, in terms of our own sanity. I also think that we, as communicators especially, have to be mindful of human-to-human connection. Generative AI doesn’t write like humans at all, of course. So, again, if we are preoccupied with always doing more, faster, and leaning on generative AI, and therefore becoming less human in what is a human-centred practice, I’m not sure we are holding on to the heart of our practice anymore. To put that more simply, communications and public relations are about forging relationships. There is nothing relational about machine-generated language. To me there’s a huge paradox at the heart of deploying generative AI in our field.
Yes, exactly, my professor talked about this. There’s this idea that the use of AI will take work off your plate, but that’s not the case at all. For example, if a journalist can produce 10 stories instead of their usual five, their editor will only assign them more work. Because we live in a capitalist, rat-race society, the workload is in fact going to increase, and I think that is why generative AI has been marketed the way it has.
I think your thinking there is bang on. What we’re moving towards, and I don’t think this is a dystopian or unbelievable take, is a world in which a large part of our information is going to be machine-generated, and then the consumption or the reading of that information is going to be done by machines that spit out summaries so we can move faster.
So we’re moving toward a world where the production and consumption of all information and knowledge is going to be machine-generated. But to what end? I’m talking specifically about generative AI for uses like ours as communication professionals: blog posts, marketing content like advertisements, newsletters and so on. These are things that aren’t, let’s say, essential. At the heart of it, we’re not saving lives; we’re not surgeons who need the latest surgical technique. So all this acceleration, it’s for what? To what end?
What are certain ethical downsides of AI that we are not discussing enough?
There are two major topics. The first is our commitments to EDI. For example, at MRU, the university as a whole purportedly values equity, diversity and inclusion, indigenization and the uplifting of minoritized communities. Yet generative AI is largely trained by folks in the global south and by minoritized populations, in poor geopolitical spaces, in order to make our lives more convenient. These workers have to go through reams and reams of graphic sexual abuse materials, graphic language, racially motivated hate speech and so on in order to train artificial intelligence.
And companies like Meta and Google outsource these tasks to poorer, largely non-white countries. So one of the questions I want to ask of people in the communications profession, and of my colleagues at MRU, is: how far do our commitments to EDI extend? Is it just to the people in our local community, or are we willing to think about relationality across the globe? The other topic is the climate crisis. Generative AI relies on immense amounts of electricity and water. And for some reason, over and over, we choose acceleration, convenience, optimization and efficiency over the health of our planet. These, to me, are major considerations in using generative AI today that we aren’t talking about enough.
“Communications and public relations are about forging relationships. There is nothing relational about machine-generated language.”
Crystal Chokshi
There is a process called the ‘stuffed Oreo,’ which has been described as the ideal way to use AI. Essentially, 20 per cent is AI brainstorming, 60 per cent is the human work, and the final 20 per cent is AI editing and reviewing the work. Would you say that is a responsible use of AI?
No. My stance here originates in critical media studies, which examines the power relationships built into media and technologies. I don’t think it’s ethical for practitioners to rely on technologies that make our lives easier, more efficient or more optimized when those technologies depend on the exploitation of resources and peoples. We have to think a lot more carefully about who benefits from technologies like this and who is harmed. And again, if we can’t see those people, do they matter? The implicit answer is no, because we keep doing it. So, no, I don’t think it’s ethical.
With AI integrating itself into our lives so quickly (compared to the internet, for example, which had a much longer rollout), the question is no longer whether we should use it but how. Some say the days of staring at a blank page are long behind us. What are your thoughts on that?
People are very compelled by the argument that has appeared across business publications for the last several years: if you don’t use AI, you’re obsolete. If you don’t use it, you’re going to get left behind. Or you’re going to fail to educate students if you don’t teach them how to use it. This kind of argument is very persuasive to people, and I don’t buy into it. What’s the problem with starting with a blank page? What is it about that that’s uncomfortable? There are assumptions baked into that: that it’s unnecessary or wasteful to sit in front of a blank page, and that we need to go faster.
That’s what I’m saying about this culture of acceleration: we feel we are wasting time if we sit before a blank page rather than, three seconds later, have ChatGPT generate the intro to our essay or our blog post. Again, I don’t buy into this idea of constant acceleration. I don’t think it’s going to lead to healthy communities or healthy societies. A lot of generative things happen when you’re staring at a blank page. There’s something genuinely problematic about skipping the blank-page step, about letting something machine-generated tell you where to begin your thinking. I’d rather your thinking begin with you, even if that takes a few extra hours.
Why do you think AI use raises red flags in educational spaces and institutions, like media outlets, which already face public trust issues?
If we’re talking about trust, then we already know that generative AI output isn’t always factual. We know that it hallucinates. We know that it has racist, sexist, homophobic and transphobic assumptions baked into it. Think about the American election and the fake content artificial intelligence has produced to sway results one way or another; that is a very real example of the destabilization of democracy. That is a big problem right now, and one whose fallout we may live to see. So we know that generative AI and trust are not two peas in a pod. Going back to the blank page idea, one reason generative AI raises red flags in schools and universities is that it becomes a crutch, standing in for learning how to think. I know that can sound a little bit alarmist, or harken back to moral panics about previous technologies, but writing is about thinking, and generation is about thinking, and so I’m a wholehearted advocate for sitting with the blank page and thinking.
