Is there a crisis of trust in digital scholarship?
Given academe’s headlong dive into the online pool, if there is a crisis, the stakes could hardly be higher.
This fundamental concern lies at the heart of the recently released report from a research project sponsored by the Alfred P. Sloan Foundation and known as Trust and Authority in Scholarly Communications in the Light of the Digital Transition. Conducted by teams from the University of Tennessee’s Center for Information and Communication Studies and CIBER Research Ltd. in the United Kingdom, the project focused on academic quality control. (SAGE, the parent of Social Science Space, participated in the project.)
It echoes a 2007 study conducted for the British Library and JISC (formerly the Joint Information Systems Committee) on how researchers who were digital natives might challenge established practices, “especially in regard to trustworthiness.” This latest effort, begun in 2012, examines how that inevitable transition has fared.
The more things change, it could have summarized, the more they stay the same.
“Researchers have moved from a print-based system to a digital system, but it has not significantly changed the way they decide what to trust. The digital transition has not led to a digital transformation. Traditional peer review and the journal still hold sway.”
Here are 10 key takeaways from the report:
1. Social media has its uses but has not established itself as fully trusted …
“Personal networks and circles of trust were central to formal scholarly communication and were made much easier to maintain by emails mainly, but also other social media. Young researchers were more likely to use social media in their research work, as were US researchers and social scientists, but almost all researchers had a low level of trust in content received or discovered by this route; good enough to use material sourced this way, but not good enough to cite or publish in.”
2. … yet.
“[W]hile the [2010 Research Information Network] study researchers did not trust social media and had concerns regarding the quality of the information communicated in this way, just a year later a CIBER study finds differently: the issue of quality was not high on the agenda. This, for three reasons: first, participants believed they had sufficient evaluative skills to establish the trust and authority of sources; secondly, one of the main benefits of social media is that the community filters out rubbish, and rubbish as self-defined by the community; thirdly, there were different types and models of authority – not just the traditional peer-review model.”
3. But it’s an uphill fight for those born into the digital age.
“As [Diane] Harley et al. (2010) report, the advice given to pre-tenure scholars is consistent across all fields: focus on publishing in the right venues and avoid spending too much time on public engagement, committee work, writing op-ed pieces, developing websites, blogging, and other non-traditional forms of electronic dissemination. Indeed, according to [Ross] Housewright et al. (2013) only about a third of their respondents make their research results available via blogs.”
4. “It was completely out of the question to cite social media”
5. Open access isn’t ‘there’ yet, but peer-reviewed open access is closer than non-peer reviewed. Nonetheless, the community can clearly read the OA tea leaves.
“Some of the distrust, or dislike, of open access from an author and reader perspective that was clearly evident can be put down to misunderstandings and unfamiliarity. However, there were genuine worries and these concerned ethical issues that arise from paying to get published and the quality of peer review, a real trust touchstone for open access publications. Despite the criticism and confusions, most researchers felt that open access was the wave of the future. Mandates were making a difference and would nudge more researchers towards open access authorship.”
6. Altmetrics remain, well, ‘alt.’
“Most researchers knew little about them and those who did know something regarded them as dubious popularity indices that had no bearing on research activities. Social media mentions were thought to be even less an indicator of quality and credibility than usage metrics. However, researchers from less developed countries were more positive in their opinions towards altmetrics, perhaps because it was more difficult for them to excel in regard to the citation indices.”
7. The old ways are best.
“The results, then, of this long, large and robust investigation confirms what some commentators had suspected, but had little in the way of hard evidence to support their suspicions that the idea, methods and activities associated with trustworthiness in the scholarly environment have not changed fundamentally.”
8. That specifically means peer review, especially for assessing overall quality and readability and for sussing out errors, although less so for detecting outright fraud or plagiarism.
“The biggest finding has to be that peer reviewed journals retain and, if anything, have increased their lead as the preferred and trusted vehicle for formal research communication.”
9. The Google Generation does expect some things to be easy, and gets frustrated when they’re not.
“[T]hey: a) were true to their stereotype in that they expended less effort to obtain information, so they were more likely to compromise on quality; b) viewed open access more positively as it offered them more choices and helped them to establish their reputations more quickly; c) compensated for their lack of experience by relying more heavily on trust markers, such as impact factors; d) used all the outlets available to them in order to improve the chances of getting their work published and, in this respect, made the most use of the new and innovative digital services with which they were more familiar; e) were more willing to adopt all types of citation practice to boost the chances of getting their paper accepted; f) were more pessimistic about scholarly standards and the quality of research.”
10. Scholarly communication is … people.
“The biggest surprise, perhaps, was that nobody talked about information overload. The explanation lies in the fact that researchers cope with the increase in information by utilising and maximising their personal networks.”
The conclusions are based on a literature search, meetings with 14 focus groups (eight in Britain and six in the United States), critical incident interviews and an online survey of academics. In the focus groups, for example, half the participants were from the social sciences and the rest from the life or physical sciences; a little over half were female, and most participants were between the ages of 30 and 60.
Demographics mattered, but weren’t destiny. There were differences between older researchers and the under-30 crowd, and researchers from the developing world were more accepting of technologies that gave them greater reach or clout.
Lastly, the perception that there’s a looming trust deficit turns out not to be so widespread.
CIBER’s David Nicholas and six co-authors explained in the journal Learned Publishing that most researchers queried for the project did not cite trustworthiness in scholarly communication as a major concern. Why? Because they feel their discernment is sufficient to separate the wheat from the chaff.
“[W]hile nearly all researchers thought it was an important issue, nobody really said trustworthiness was a big or pressing issue, even in the wake of a massive digital transition and considerable market disruption. This was because researchers had developed methods over time for determining what was good and not good in the digital environment. They used metrics, abstracts, and/or journal or author reputation to judge the quality of content. They admitted that these were not perfect measures and did not really like the fact that their citing or publishing decisions were based on tenure or university policy pressures rather than their perception of the quality of the source. But that was the world they now inhabited.”