The first instalment of the quarterly Media Futures seminar series was hosted earlier this year at Stellenbosch University by the Department of Journalism (in partnership with the Faculty of Arts and Social Sciences).
The first seminar, titled “The future of intelligence: ChatGPT and its implications”, was chaired by Herman Wasserman (professor of journalism and chair of the Department of Journalism at Stellenbosch University).
Two reports about this seminar, written by BA Honours journalism students, were selected in partnership with Stellenbosch University’s Department of Journalism for publication on LitNet.
Herewith Talia Kincaid’s report:
ChatGPT, an artificial intelligence- (AI-) driven chatbot, has sparked fears of exacerbated plagiarism and blurred ethical lines among academics at Stellenbosch University (SU).
An expert panel kicked off the first instalment of the Media Futures seminar series at the Neelsie Cinema at SU on 27 February with a discussion of the implications of AI. “We want to take stock of the rapid evolution of the media and its growing influence on all spheres of society,” said Herman Wasserman, chair of the Department of Journalism at SU, in his opening speech.
The Department of Journalism, in partnership with the Faculty of Arts and Social Sciences, was eager to explore the complex ethical and practical challenges ChatGPT poses for journalism and academia. Wasserman’s introduction was followed by a lecture by Wim de Villiers, rector and vice-chancellor of SU, who spoke about the rising concerns facing academic circles.
The future of intelligence, but how does it work?
“People can now basically have a conversation with the internet, and that is a very weird and new feeling for most people,” said Fanie van Rooyen, editor of Quest magazine. ChatGPT is a conversational AI system that uses an “artificial neural network” to connect the meanings and associations of words to their surrounding context, explained Bruce Watson, chair of information science and computational thinking at SU.
“The version that’s embedded within ChatGPT is what’s known as a transformer,” said Watson. The model is customisable and tailored to the individual user, he explained. He likened the AI system to self-driving cars, but noted that the difference lies in the desired output the algorithms are built to generate.
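Watson’s description can be made concrete with a toy example. The Python sketch below uses invented three-dimensional word vectors (real models learn vectors with thousands of dimensions) to show the attention step at the heart of a transformer: each word’s vector is recomputed as a weighted blend of the vectors around it, which is how the model ties a word’s meaning to its surrounding context.

```python
import numpy as np

# Invented three-dimensional vectors for three words; real models
# learn vectors with thousands of dimensions.
words = ["the", "bank", "flooded"]
vectors = np.array([
    [0.1, 0.0, 0.2],   # "the"
    [0.7, 0.3, 0.1],   # "bank"
    [0.2, 0.9, 0.4],   # "flooded"
])

# Score every word against every other word (dot products), then
# turn the scores into weights that sum to 1 (a softmax).
scores = vectors @ vectors.T
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Each word's new vector is a context-weighted mix of all the vectors:
# "bank" is pulled towards "flooded", nudging it towards "riverbank".
contextualised = weights @ vectors
print(dict(zip(words, contextualised.round(2))))
```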
“‘ChatGPT’, I think, will become one of those words like ‘Uber’ and ‘Kleenex’, a sign which becomes completely generic,” stated Watson. “Unlike many search engines, we just have to make one query that comes back with some answers, and … it will tailor the follow-up questions according to how the conversation has proceeded so far,” he continued.
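That tailoring works because the whole exchange is resent with every request. Below is a minimal sketch, assuming the openai Python client (v1 interface) and an API key in the environment; the model name and prompts are illustrative, not from the seminar.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Every request carries the full conversation so far, which is how the
# model tailors each follow-up to what has already been said.
history = [{"role": "user", "content": "Explain what a transformer model is."}]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# A vague follow-up still lands, because the context travels with it.
history.append({"role": "user",
                "content": "And how is that different from a search engine?"})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(reply.choices[0].message.content)
```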
Ethical considerations?
“How will [AI] shape our journalism, our education and our public sphere?” asked Wasserman, a question that echoed during the seminar.
“We have to navigate, now, what has been referred to as an information disorder,” said De Villiers. The emergence of technologies such as ChatGPT raises questions about governance and corporate gatekeeping, but also introduces an additional layer to the threatening amounts of “plagiarism, … fake news, disinformation, hate speech and conspiracy theories … circulating online,” De Villiers continued.
There is a concern that overdependence on [ChatGPT] will “incentivise task automation” and “weaken journalistic standards”, said Van Rooyen. The ethical debate, he continued, is whether ChatGPT serves as a gateway to a democratic society or becomes a platform that facilitates “nefarious agents and sources”; he cautioned against the possibility of a “copy-paste culture”.
“Serious journalists will also have to become serious ethicists,” said Van Rooyen. AI can be used to hold journalists accountable by combatting sensationalism and bias; however, there is the risk that it could make discerning the truth more difficult, he warned.
Corporate gatekeeping and social marginalisation
“Consider how futurists [can use ChatGPT] to provide cheap legal, educational and medical advice for the poor,” said Scott Timcke, senior research associate at Research ICT Africa. Alongside Zara Schroeder, researcher and communications coordinator at Research ICT Africa, Timcke focused on AI’s potential to marginalise.
“It continues [to exclude the marginalised] from equitable participation in design and implementation decisions about these technologies,” said Timcke. Corporates can use AI to shape public discourse through language choices and discriminatory training data. “ChatGPT is currently trained to respond in English, [so it can] amplify system inequalities [and] biases,” Schroeder explained.
Academia: fight or flight
“We need to … move away from the discussion about the tool … and really reflect on what makes us human,” said Antoinette van der Merwe, senior director of learning and teaching enhancement at SU, who explained that ChatGPT should be integrated into learning models to complement the human ability to think critically.
“We should encourage students to use ChatGPT for peer learning and feedback. It’s a new art form,” said Van der Merwe, who believes students should be encouraged to work with the technology to apply their knowledge, problem-solving skills and moral compasses. Doing so could open up new ways to test knowledge.
“You actually generate the essay through ChatGPT and then ask students to critique it,” Van der Merwe explained.
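One way such an exercise might be scripted, again assuming the openai Python client; the topic, word count and prompt wording are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical classroom workflow: generate a first draft with ChatGPT,
# then hand it to students with the question "what did it get wrong?".
topic = "The ethics of AI in journalism"
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Write a 300-word argumentative essay on: {topic}",
    }],
)
print(response.choices[0].message.content)  # the essay students will critique
```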
What does regulation look like?
“We need [to] apply and judge our critical thinking,” said Van der Merwe.
Effective information governance is the only way to manage the risks that accompany the technological revolution, said Schroeder, who questioned how futurists might exploit ChatGPT to spread misinformation, given that it cannot distinguish between fact and fiction.
“[Regulation] looks like an effective democratic state that is willing and able to intervene in business operations … and … provide social protections,” said Schroeder. She explained that the discriminatory behaviour exhibited by ChatGPT, associated with an amplification of toxic content, racism and bigotry, only furthers the need for a regulatory framework.
The biases AI exhibits are consistent with human bias, which is “something that will have to be developed out”, said Van Rooyen. “[AI] is a tool, so it’s not good or bad.”