
New SSRC Project Aims to Develop AI Principles for Private Sector

July 19, 2024

Aligning artificial intelligence products with society’s objectives is impossible without corporate disclosure and auditing of the potentially substantial risks associated with AI, according to the newest program of the New York-based Social Science Research Council.

Given this, the new program, the AI Disclosures Project, seeks to create structures that recognize the commercial incentives of AI while ensuring that issues of safety and equity are front and center in the decisions private actors make about AI deployment. The project is led by technologist Tim O’Reilly, known for popularizing terms such as “open source” and “Web 2.0,” and economist Ilan Strauss.

In announcing the program, the SSRC laid out the terrain in which this ‘first, do no harm’ concept will lie:

Current AI governance frameworks focus on risks that are inherent in the capabilities of the models themselves, on limiting the ability of various bad actors to use them for harm, and on the security of various high-risk domains. These frameworks don’t adequately consider how AI risks may also originate in how companies compete for market share and profits. Companies may “move fast and break things” in order to gain scale while the market is still young, and they may exploit their market power once AI markets mature. In addition, they may successfully identify AI risks but under-invest in countering them. Furthermore, as the power of AI models grows, the risk profile may change, and current safety practices may be insufficient.

“If we want prosocial outcomes,” O’Reilly has written, “we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved.”

Noting that companies have adopted frameworks in other areas, such as the Generally Accepted Accounting Principles or the International Financial Reporting Standards, O’Reilly added that the issue isn’t the lack of AI principles so much as it is their lack of specificity and general milquetoastery. “Today, when disclosures happen, they are haphazard and inconsistent, sometimes appearing in research papers, sometimes in earnings calls, and sometimes from whistleblowers. It is almost impossible to compare what is being done now with what was done in the past or what might be done in the future. … This is unacceptable.”

Through high-quality research, collaboration, and policy engagement, the SSRC project will develop a systematic disclosure and auditing framework that can become the basis for a set of “Generally Accepted AI Management Principles.” Given the business and economics orientation of the program leaders, the program will draw from the business community and its best practices and metrics instead of imposing a top-down set of principles. “Our goal,” according to SSRC, “is to learn from companies that are acting responsibly, and to use their best practices to shape disclosure standards for AI auditing and regulation that are informed by the commercial realities of AI markets.”


O’Reilly, the principal investigator and co-director of the project, is the founder, CEO, and chairman of O’Reilly Media, and a visiting professor of practice at the UCL Institute for Innovation and Public Purpose (IIPP). At the institute, he and Mariana Mazzucato oversaw a multi-year research project sponsored by the Omidyar Network that investigated Big Tech’s use of algorithmic allocations to extract rents from their ecosystems.

Strauss, the program director of the AI Disclosures Project, is an honorary senior fellow at IIPP, where he was head of digital economy research on a multi-year Omidyar Network-funded research project. He is also a visiting associate professor at the University of Johannesburg. Strauss was the joint recipient of an Economic Security Project grant investigating Big Tech’s acquisitions of technological capabilities.
