Social, Behavioral Scientists Eligible to Apply for NSF S-STEM Grants
Applications are now being sought for the National Science Foundation’s Scholarships in Science, Technology, Engineering, and Mathematics program, and in an unheralded […]
David Canter considers some implications of ChatGPT and what it tells us about real intelligence, whether general, artificial or otherwise.
The role of AI in the production of research papers is rapidly moving from futuristic vision to everyday reality, a shift with significant consequences for research integrity and the detection of fraudulent research. Rebecca Lawrence and Sabina Alam argue that for publishers, collaboration and open research workflows are key to ensuring the reliability of the scholarly record.
ChatGPT is by no means a perfect accessory for the modern academic – but it might just get there.
The authors have identified a convergence among architectures, reflecting a combination of neural, behavioral and computational studies, and have begun a community-wide effort to capture this convergence.
According to NIST’s Reva Schwartz, bias manifests itself not only in artificial intelligence algorithms and the data used to train them, but also in the societal context in which AI systems are used.
David Canter follows up his concern that psychologists are losing contact with people by considering how computers are presented as replacements for human ‘intelligence’. This framing ignores the importance of in-person, face-to-face contact, which the COVID pandemic showed to be so crucial for people.
At the moment, teachers and educational leaders have little guidance, policy or oversight available regarding technology, AI and academic integrity.
As I write this, I am using text-to-speech technology, a nifty online feature that enables the reader to listen to, rather than […]