
New SSRC Project Aims to Develop AI Principles for Private Sector

July 19, 2024

According to the newest program of the New York-based Social Science Research Council, aligning artificial intelligence products with society’s objectives is impossible without corporate disclosure and auditing of the potentially substantial risks associated with AI.

Given this, the new program, the AI Disclosures Project, seeks to create structures that recognize the commercial enticements of AI while ensuring that issues of safety and equity are front and center in the decisions private actors make about AI deployment. The project is led by technologist Tim O’Reilly, known for popularizing terms such as “open source” and “Web 2.0,” and economist Ilan Strauss.

In announcing the program, the SSRC laid out the terrain in which this ‘first, do no harm’ concept will lie:

Current AI governance frameworks focus on risks that are inherent in the capabilities of the models themselves, on limiting the ability of various bad actors to use them for harm, and on the security of various high-risk domains. These frameworks don’t adequately consider how AI risks may also originate in how companies compete for market share and profits. Companies may “move fast and break things” in order to gain scale while the market is still young, and they may exploit their market power once AI markets mature. In addition, they may successfully identify AI risks but under-invest in countering them. Furthermore, as the power of AI models grows, the risk profile may change, and current safety practices may be insufficient.
Tim O’Reilly

“If we want prosocial outcomes,” O’Reilly has written, “we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved.”

Noting that companies have adopted frameworks in other areas, such as the Generally Accepted Accounting Principles or the International Financial Reporting Standards, O’Reilly added that the issue isn’t the lack of AI principles so much as it is their lack of specificity and general milquetoastery. “Today, when disclosures happen, they are haphazard and inconsistent, sometimes appearing in research papers, sometimes in earnings calls, and sometimes from whistleblowers. It is almost impossible to compare what is being done now with what was done in the past or what might be done in the future. … This is unacceptable.”

Through high-quality research, collaboration, and policy engagement, the SSRC project would develop a systematic disclosure and auditing framework that can become the basis for a set of “Generally Accepted AI Management Principles.” Given the business and economics orientation of the program leaders, the program will draw from the business community and its best practices and metrics instead of imposing a top-down set of principles. “Our goal,” according to SSRC, “is to learn from companies that are acting responsibly, and to use their best practices to shape disclosure standards for AI auditing and regulation that are informed by the commercial realities of AI markets.”

Ilan Strauss

O’Reilly, the principal investigator and co-director of the project, is the founder, CEO, and chairman of O’Reilly Media, and a visiting professor of practice at the UCL Institute for Innovation and Public Purpose (IIPP). At the institute, he and Mariana Mazzucato oversaw a multi-year research project sponsored by the Omidyar Network that investigated Big Tech’s use of algorithmic allocations to extract rents from their ecosystems.

Strauss, the program director of the AI Disclosures Project, is an honorary senior fellow at IIPP, where he was head of digital economy research on a multi-year Omidyar Network-funded research project. He is also a visiting associate professor at the University of Johannesburg. Strauss was the joint recipient of an Economic Security Project grant investigating Big Tech’s acquisitions of technological capabilities.
