Innovation

Harnessing the Tide, Not Stemming It: AI, HE and Academic Publishing

April 26, 2023
[Image: Engraving of a king in robes, with his retinue, ordering the waves back without success]
Generative artificial intelligence in higher education is here and it is not going away, so the future lies in maximizing its benefits and minimizing its harms, not in imagining it can be banished.

This weekend, I was in Hamburg for the Association for Computing Machinery’s annual Conference on Human Factors in Computing Systems (CHI 2023), where I was invited to join a panel on AI-assisted writing tools. This post is a summary of my contributions to the panel, which focused on the top use cases, benefits and limitations of AI-assisted writing tools in higher education, and on the implications for the wider higher ed and academic publishing ecosystems.

As part of my work as vice president of books and social science innovation at Sage, I’ve been thinking deeply about the potential impacts of AI on HE. I led the work to create Sage’s guidelines for book authors on using GPT-4 and other generative AI tools in their writing, and I’ve recently set up an editorial working group on AI at Sage to help us think about the opportunities and impacts on our business. To inform this work, I’ve been closely tracking the news on the response to ChatGPT by higher education in the United Kingdom, watching the latest developments in the field, and spending lots and lots of time experimenting with GPT-4, AutoGPT, Bing AI and other AI tools in my own work. 

Who will use AI-assisted writing tools — and what will they use them for?

The short answer is everyone and for almost every task that involves typing. Once Microsoft Copilot launches, AI-assisted writing will become the norm and we should assume that everything that’s written using a computer or a phone will be helped along by AI, in the same way we assume most things have been spell-checked or autocorrected. 

So what does this mean for academics, students and academic publishers?

In HE, the initial focus has been on students using generative AI for written assessment and what universities can and should do about it. In the UK, there have been lots of news stories about students using ChatGPT to complete university work, and we should expect to see many more. The launch of ChatGPT prompted some universities to ban the use of generative AI in assessment outright, deeming it plagiarism. Universities have been busy updating their academic integrity guidelines, developing short-term plans and convening or joining conversations on what comes next. The growing consensus is that banning is not a viable long-term solution. At a recent Wonkhe event I attended, the word was that universities in the UK are not planning to use Turnitin’s new AI-detection tool, because of fears that false positives would create enormous amounts of work for universities and risk falsely accusing students of cheating, as well as an acknowledgment that an approach that treats the use of AI as cheating and tries to catch students out is not sustainable.

So instead, universities will need to rethink assessment strategies. Over the summer, we’ll see organizations like the Quality Assurance Agency for Higher Education supporting universities as they pivot. Next year, we can expect to see universities experimenting with new approaches to assessment. 

So how might assessment change? 

Right now, broadly two camps are forming – those advocating for a return to pen-and-paper or oral exams where generative AI can’t be used, and those advocating for universities to embrace an AI future and either embed the use of these tools into assessments or take this opportunity to move to new forms of authentic assessment. So far, practical examples of exactly what these authentic assessments would look like are pretty thin on the ground, but I expect we’ll start to see some really cool examples being shared over the summer and implemented in the next academic year. Most of the academics I’ve spoken to who teach courses assessed by coursework or take-home exams acknowledge that it’s simply too late to do anything for this year’s cohort and that there will be plenty of students who submit work written by ChatGPT and don’t get caught. Again, expect lots of news stories this summer from students admitting to using it in ways they know they’re probably not supposed to and getting away with it.

What about faculty? Are they using generative AI too? If so, what for?

While much of the conversation right now is focused on how to respond to students’ use of ChatGPT, faculty, of course, are also experimenting with how it can be used in their own work, both in their research and in their teaching. On the research side, these tools are particularly beneficial for academics with English as a second language, as they can assist with language editing – a use case that we at Sage wholeheartedly support. Beyond that, one can imagine a future where most journal abstracts are written by AI. Sage has established guidelines for authors regarding the use of AI in their journal submissions, and we remind authors that AI cannot be listed as an author because it cannot take responsibility for the content it generates. For books, our guidelines encourage the use of AI for brainstorming, overcoming writer’s block, and language editing, while reminding authors of their responsibilities to watch out for inaccuracies, defamation, or plagiarism.

In both sets of guidelines, we currently ask authors to disclose their use of generative AI tools, but as AI integration becomes more commonplace, the idea of disclosing the use of these tools will likely become obsolete. 

On the teaching side, Ethan Mollick stands out as someone leading the way in suggesting ways to embrace AI in education, using it to streamline the work of educators and to enhance students’ learning, for example through personalized learning tutors.

How will academic publishers use AI writing tools? 

Publishers, like all knowledge workers, will be thinking about ways that AI writing assistants can streamline our work. I’ve personally experienced the benefits of using GPT-4 to enhance my productivity and have been exploring effective prompts for the kind of work I do. So far, I’ve found GPT-4 most useful when I use it didactically: asking it to take on the role of a business consultant, for example, having it ask me prompting questions to guide my thinking, and then getting it to summarize the conversation into a report for me at the end.
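For anyone who wants to try this pattern outside the chat interface, here is a minimal sketch of what it might look like against the OpenAI API, using the openai Python package (pre-1.0 chat API). The consultant role and prompt wording are my own illustration, not a recommendation, and the sketch assumes an API key is set in an OPENAI_API_KEY environment variable:

```python
# A minimal sketch of the "consultant who interviews you" pattern described
# above. Assumes the openai Python package (pre-1.0 chat API) and an
# OPENAI_API_KEY environment variable; the prompt wording is illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = [
    {
        "role": "system",
        "content": (
            "You are a business consultant. Do not give advice directly. "
            "Instead, interview me one question at a time to help me think "
            "through my problem. When I say 'summarize', write a short "
            "report of my answers and the conclusions we reached."
        ),
    },
    {"role": "user", "content": "Help me think through a new product idea."},
]

# Each call returns the model's next prompting question; append its reply
# and your answer to `messages` and call again to continue the interview.
response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

The point of the pattern is that the model does the asking and you do the thinking; the final ‘summarize’ turn converts your own answers into the report.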

Now, through our editorial working group at Sage, we want to think about how AI can help us with other parts of our publishing workflow – whether that’s copy editing tasks, improving the accessibility of our content, or expediting translation of our texts. However, we need to think carefully about the risks and challenges, too – it’s clear that publishers need to protect our authors’ IP and not expose copyrighted content to open models like ChatGPT.

And we need to be careful before publishing content generated by AI, since the copyright situation is currently quite murky. Longer term, these tools pose more fundamental questions for publishers – if our authors are submitting AI-generated content, and we’re using AI to read, assess and correct it so that students can use an AI to summarize rather than read it, we probably need to start asking ourselves some deeper questions about what it’s all for!

In the meantime… 

The future is uncertain and things are moving fast, so the best thing to do is to engage with the challenges and opportunities. Some of my favorite resources for getting and keeping up to speed are below.

AI news summary from Emergent Mind 

Stephen Wolfram on how it actually works (long and technical, but really good)

On writing good prompts

On the other thing AI needs besides reasoning – knowledge: Dan Shipper at Chain of Thought

Eight things to know about large language models from NYU Courant

Ethan Mollick’s substack One Useful Thing

On how ChatGPT’s impact on HE is being covered in the media, from Simon Fraser University’s Journal of Applied Learning and Teaching

Katie Metzler is vice president of books and social science innovation at Sage. She has more than 15 years of experience in the publishing industry and has led cross-functional teams to bring new digital products from discovery to delivery.
