OpenAI announces team to build ‘crowdsourced’ governance ideas into its models

OpenAI has said it intends to integrate public input into shaping the behavior of its future AI models, with the goal of aligning them with human values. The AI startup is forming a Collective Alignment team of researchers and engineers to build a system for collecting public input on model behavior and encoding it into OpenAI’s products and services.

In a recent blog post, OpenAI said the team will work with external advisors and grant teams, and will run pilots to incorporate prototypes into steering its models. The company is actively recruiting research engineers from diverse technical backgrounds to join the effort.

The Collective Alignment team grows out of a public program OpenAI launched in May, which awarded grants for experiments in establishing a “democratic process” for deciding what rules AI systems should follow. The program was designed to fund individuals, teams, and organizations developing proof-of-concepts that address questions about AI guardrails and governance.

OpenAI highlighted the diverse range of projects undertaken by grant recipients, spanning video chat interfaces, platforms for crowdsourced AI model audits, and approaches to mapping beliefs onto dimensions that can be used to fine-tune model behavior. The company has made the code from the grant projects public, along with brief summaries and key takeaways.

Despite OpenAI’s efforts to present the program as separate from its commercial interests, critics find that stance hard to accept, particularly in light of CEO Sam Altman’s criticism of EU regulation and his argument that the rapid pace of AI innovation makes crowdsourced governance necessary.
