Report Offers Guidelines for Ethics of Technology Design

December 14, 2017

Not all the ethical ramifications of new technology revolve around weaponization, but that is certainly a genuinely important area needing firm guidelines. (Photo: Campaign to Stop Killer Robots/CC BY 2.0/Wikimedia Commons)

If kids spend hours a day speaking to digital personal assistant Alexa, how will this affect the way they connect to real people? When a self-driving car runs over a pedestrian, who do you take to court? Is it OK to manipulate people’s emotions if it’s making them happier?

Together with an international team of researchers in fields as diverse as philosophy, engineering and anthropology, we set out to tackle these questions. The result is a new set of guidelines focused on the ethical and social implications of autonomous and intelligent systems. That includes everything from big data and social media algorithms to autonomous weapons.

This article by Rafael A Calvo and Dorian Peters originally appeared at The Conversation, a Social Science Space partner site, under the title “Engineers, philosophers and sociologists release ethical design guidelines for future technology.”

The report, Ethically Aligned Design, was released Tuesday, December 12, by the Institute of Electrical and Electronics Engineers (IEEE). It is the culmination of a year’s work by 250 world leaders in technology, law, social science, business and government spanning six continents.

IEEE is the world’s largest technical professional organization. With more than 420,000 members in 160 countries, it’s the global authority for professional standards related to technology. The latest report proposes a set of recommendations that are open to public feedback.

Once adopted, the guidelines in the report will be implemented by professional organizations, accreditation boards and educational institutions to ensure the next generation of engineers incorporate ethical considerations into their work.

Guiding principles
The big questions posed by our digital future sit at the intersection of technology and ethics. This is complex territory that requires input from experts in many different fields if we are to navigate it successfully.

To prepare the report, economists and sociologists researched the effect of technology on disempowered groups. Lawyers considered the future of privacy and justice. Doctors and psychologists examined impacts on physical and mental health. Philosophers unpacked hidden biases and moral questions.

The report suggests all technologies should be guided by five general principles:

  • protecting human rights
  • prioritizing and employing established metrics for measuring well-being
  • ensuring designers and operators of new technologies are accountable
  • making processes transparent
  • minimizing the risks of misuse.

Sticky questions
The report spans the spectrum from practical to more abstract concerns, touching on personal data ownership, autonomous weapons, job displacement and questions like “can decisions made by amoral systems have moral consequences?”

One section deals with a “lack of ownership or responsibility from the tech community.” It points to a divide between how the technology community sees its ethical responsibilities and the broader social concerns raised by public, legal, and professional communities.

Each issue tackled includes background discussion and a set of candidate recommendations. For example, the section on autonomous weapons recommends measures to ensure meaningful human control. The section on employment recommends the creation of an independent body to track the impact of robotics on jobs and economic growth.

A section on affective computing – an area that studies how computers can detect, express and even “feel” emotions – raises concerns about how long-term interaction with computers could change the way people interact with each other.

This brings us back to our question: if kids spend hours a day speaking to Siri or Alexa, how will these interactions change them?

The report makes two recommendations on this point:

1) Acknowledge how much we don’t know (we need to learn much more before these systems become widely used).

2) Ensure that humans who witness negative impacts – parents, social workers, governments – learn to detect them and have ways to address them, or even shut technologies down. Experience shows this is not always easy – try forbidding your child from watching YouTube and see how well that flies.

Clearly, affective computing is an area in which we particularly lack evidence about human impact.

Consultation and feedback
IEEE standards are developed iteratively, and the organization will use the findings in this report to build a definitive set of guidelines over time.

Feedback on an earlier version of the report highlighted its Western-centric bias. As a result, a larger and more diverse panel was recruited, and a number of new sections were added, including those on affective computing, policy, classical ethics, mixed reality (including augmented reality technologies like Google Glass) and well-being.

Over the next year, the final version will be released as a handbook with recommendations that technologists and policy makers can turn to, and be held accountable for, as our technological future unfolds.

This is an important step toward breaking the protective wall of specialization that allows technologists to separate themselves from the impact of their work on society at large. It will demand that future tech leaders take responsibility for ensuring that the technology we build as humans genuinely benefits us and our planet.


Rafael Calvo is a professor at the University of Sydney, ARC Future Fellow and Director of the Wellbeing Technologies Lab. Dorian Peters is a designer, author, and specialist in user experience for learning and wellbeing. She is currently creative leader at the Positive Computing Lab at the Faculty of Engineering, University of Sydney and is also a member of the Centre for Research on Computer Supported Learning & Cognition.
