Innovation

Should ChatGPT Be Listed as an Academic Author?

March 7, 2023
Or is this not the kind of writing we’re talking about? (Photo: Mirko Tobias Schaefer/CC BY 2.0/Wikimedia Commons)

Unless you’ve spent your summer on a digital detox, you’ve probably heard of ChatGPT: the latest AI chatbot taking the world by storm.

Recent discussion about ChatGPT has focused on the risk of students using it to cheat, and whether it should be allowed in schools and universities.

But ChatGPT has thrown up yet another question: could it be considered an academic author?

It might seem far-fetched, but several papers published recently have listed ChatGPT as an author, including an editorial published in the journal Nurse Education in Practice.

Last year, some researchers also tried to list GPT-3 as an author on a paper it wrote about itself – but they struggled with listing the “author’s” telephone number and email, and had to ask GPT-3 if it had conflicts of interest.

The issue of AI authorship is now clearly on the minds of commercial academic publishers. Last week, both the Science and Nature journals declared their positions on the use of ChatGPT to generate articles.

Science is updating its license and editorial policies to “specify that text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools.”

Similarly, Nature has formulated the following principles:

  1. “No LLM (large language model) tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility”
  2. “Researchers using LLM tools should document this use in the methods or acknowledgments sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.”

These are drastic steps which highlight a fast-moving issue. But why does it matter whether or not ChatGPT can author an academic paper?

This article by Danny Kingsley originally appeared on The Conversation, a Social Science Space partner site, under the title “Major publishers are banning ChatGPT from being listed as an academic author. What’s the big deal?”

Authorship: the currency of the academic realm

To understand this, it’s important to first understand that authorship in academia isn’t the same as authorship of, say, a newspaper article.

That’s because researchers are not paid to publish papers. They’re rewarded through successful grant applications, or through promotion, for the number of times they’re listed as an author on an academic paper (and especially if the paper is published in a prestigious journal).

In the academic world, authorship doesn’t necessarily mean having actually “written” the paper – but it should, ideally, reflect genuine involvement in the research process.

It also conveys responsibility for the contents of the paper. The 2018 Australian Code for the Responsible Conduct of Research includes a guide on authorship which states:

All listed authors are collectively accountable for the whole research output. An individual author is directly responsible for the accuracy and integrity of their contribution to the output.

This raises the question: can an AI tool be held “responsible” for the content it produces? As an extreme example, if ChatGPT’s “contribution” to a paper included an error that led to people dying, who would be held accountable?

There’s also author order to consider. In most areas of research, the first-listed author is considered the lead author. Other disciplines have their own acknowledgment systems, which can include alphabetical listing.

But ChatGPT doesn’t derive any career benefit from authorship, so where would that contribution sit within the relevant author order?

Copyright issues

Then there is the issue of copyright. Commercial academic publishing is a hugely profitable business that relies on authors signing over copyright to the publisher.

This is a commercial arrangement. The author retains their moral right to be listed as an author and to take responsibility for their work, while the publisher charges for access to it.

The question of whether an AI program can “own” copyright is being debated. Copyright differs across the world, but traditionally has required a human to generate the work.

There are echoes here of a U.S. case that debated whether a monkey who took a “selfie” could own copyright in the image. The court decided it could not.

Brave new world

There’s clearly a great deal of work that will need to happen to understand how AI tools will exist in our lives in the future.

ChatGPT isn’t going anywhere. Even if it’s banned from being acknowledged as an academic author, there’s nothing to stop researchers using it in their research process. The academic community will need guidelines on how to manage this.

There are interesting parallels here with the open-access movement. Many discussions about ChatGPT in educational settings point to a need to move away from the traditional essay as assessment, and instead concentrate on marking students for “showing their work”.

We could see something similar in academia, where each aspect of the research is made openly available, with acknowledgment of the originator, including ChatGPT. Not only would this increase transparency, it would also reduce the over-reliance on authorship as a primary mechanism for rewarding researchers.

Where authorship is failing

Because of the value of having one’s name on a paper, there has long been a concept of “gift” or “honorary” authorship.

This is where a person’s name is added to the author list even if they didn’t contribute to the paper. They may have been the person who obtained the research grant, or may have simply been added because they have a high profile and could increase the chances of the paper being published.

Two recent studies, one in Europe and one in Australia, reveal the level of pressure PhD and early-career researchers are under to provide gift authorship. This supervisory pressure reflects what’s happening at a larger scale.

There have also been alarming revelations about payment being exchanged for authorship, with prices depending on where the work will be published and the research area. Investigations into this are leading to a spate of retractions.

There are clearly significant issues around academic authorship worldwide. Perhaps the arrival of ChatGPT is a wake-up call; maybe it will be enough for the academic community to take a closer look at how things could be better.

What About SAGE?
SAGE Publishing is the parent of Social Science Space, and for the time being it has decided not to list ChatGPT and its kin as academic authors. As Louise Skelding Tattle, chair of SAGE’s Research Integrity Group, explains:
“We may be having different discussions in a few years but at the moment we feel that AIs shouldn’t be included on the byline since they are tools used in the production of the article and cannot fulfil all the criteria for authorship as laid out by the ICMJE (International Committee of Medical Journal Editors). This view is also shared by WAME in their Recommendations on ChatGPT.
“And how to treat content generated by ChatGPT? The Research Integrity Group at SAGE discussed whether we would be comfortable in publishing work generated by ChatGPT where ChatGPT is not the subject of the article. Ultimately, we didn’t want to restrict authors in this way given that ChatGPT and its uses will evolve over time. We have therefore introduced our first – but not likely to be our last – generative AI policy for our authors, which stipulates that the use of generative AI must be made clear in the text and also acknowledged in the Acknowledgements section. This policy can be found on our Journal Gateway here.”
