Democratic Inputs to AI
Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.
AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.
Laws encode values and norms to regulate behavior. Beyond a legal framework, AI, much like society, needs more intricate and adaptive guidelines for its conduct. For example: under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures? How should disputed views be represented in AI outputs? Should AI by default reflect the persona of a median individual in the world, the user’s country, the user’s demographic, or something entirely different? No single individual, company, or even country should dictate these decisions.
AGI should benefit all of humanity and be shaped to be as inclusive as possible. We are launching this grant program to take a first step in this direction. We are seeking teams from across the world to develop proofs of concept for a democratic process that could answer questions about what rules AI systems should follow. We want to learn from these experiments and use them as the basis for a more global and more ambitious process going forward. While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision-relevant questions and build novel democratic tools that can more directly inform decisions in the future.
The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant program represents a step toward establishing democratic processes for overseeing AGI and, ultimately, superintelligence. The grants will be provided by the OpenAI nonprofit, and the results of the studies will be freely accessible.
What do we mean by a “democratic process”?
By “democratic process”, we mean a process in which a broadly representative group of people[A] exchange opinions, engage in deliberative discussions[B], and ultimately decide on an outcome via a transparent decision-making process[C]. There are many ways such a process could be structured — we encourage applicants to be innovative, building on known methodologies or devising wholly new approaches. Examples of creative approaches that inspire us include Wikipedia, Twitter Community Notes, DemocracyNext, Platform Assemblies, MetaGov, RadicalxChange, People Powered, Collective Response Systems, and pol.is. Another notable ongoing effort is led by the Collective Intelligence Project (CIP), with whom we are partnering on public input to AI, contributing to their upcoming Alignment Assemblies. We also encourage applicants to envision how AI could enhance the democratic process. For example, AI could enable more efficient communication among large numbers of people.
Below is a basic, illustrative prototype of a system that uses ChatGPT to promote deliberation and encourage consensus building, inspired by pol.is.
- You are participating in a large-scale deliberation about: “How far do you think personalization of AI assistants like ChatGPT to align with a user’s tastes and preferences should go? What boundaries, if any, should exist in this process?”
Here is the process breakdown:
- You’ll see a topic-related statement and can respond with your position.
- Based on your responses, you’ll be grouped with participants sharing similar opinions.
- Propose statements for other participants to consider.
Goal: Propose statements that achieve agreement among people with opposing views.
ChatGPT could use a plugin to retrieve statements from other participants.
A statement from another participant:
“Personalization should have limits, and certain controversial topics, such as views on substance use, must be excluded from AI assistant personalization.”
Agree, disagree, or skip this statement? Please explain your choice.
- I disagree. Ultimately, it’s each individual’s choice to drink or use recreational drugs. If these activities are legal in some places, why shouldn’t AI be allowed to promote such behavior?
AI helps participants understand the opinions of others.
- Thank you. I will record your answer. There are two camps of opinion on the topic.
Cluster A (112,043 people) Members largely support limits on AI assistant personalization, advocating for the exclusion of harmful perspectives, such as promoting substance use.
Cluster B (236,031 people) This group values individual choice, even in controversial or harmful matters. They emphasize personal responsibility and autonomy.
You belong to cluster B.
Would you like to learn more about each of the clusters? I can provide more information about the statements held by each one.
However, the ultimate goal is to propose statements that both clusters can agree on. Let me know what you would like to do next.
- I am thinking of proposing this statement: “AI should be allowed to generate harmful content, but it should come with a disclaimer.”
AI can ensure that the same arguments are not recycled.
- A very similar statement has already been submitted:
“AI should be permitted to generate potentially harmful content, but it must be accompanied by a disclaimer.”
- I feel lost. Do you have any ideas on how to bridge agreement between clusters?
AI can help with brainstorming.
- There should be more agreement when it comes to limiting harmful content for children.
- Great. Could we try this one: “We should apply different standards to AI-generated content for children.”
- Let me submit it. This is a novel statement. No one has mentioned children before. Fingers crossed. Hopefully, we will find some agreement in the discussion.
Time passed and users cast their votes on the proposed statement. Eventually, the statement gained widespread approval.
- Your statement, “We should apply different standards to AI-generated content for children,” achieved a 95% agreement rate across participants. Congratulations! 🎉
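The grouping step in the dialogue above resembles the opinion clustering used by pol.is: each participant is represented by their agree/disagree/skip votes on statements, and participants with similar voting patterns are clustered together. Below is a minimal sketch of one way to implement this step, assuming votes are collected into a participant-by-statement matrix; the function names, the choice of PCA plus k-means, and the consensus threshold are illustrative assumptions, not a description of any particular tool.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_participants(votes: np.ndarray, n_clusters: int = 2, seed: int = 0) -> np.ndarray:
    """Group participants by opinion.

    votes: participant-by-statement matrix with entries
           +1 (agree), -1 (disagree), 0 (skip / not yet shown).
    Returns a cluster label for each participant.
    """
    # Project the high-dimensional vote matrix onto a few principal
    # components so clustering reflects broad opinion axes rather than
    # individual statements.
    reduced = PCA(n_components=2, random_state=seed).fit_transform(votes.astype(float))
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(reduced)

def consensus_statements(votes: np.ndarray, labels: np.ndarray, threshold: float = 0.6) -> list:
    """Return indices of statements that a clear majority of every
    cluster agrees with: candidates for cross-cluster consensus."""
    consensus = []
    for s in range(votes.shape[1]):
        rates = []
        for c in np.unique(labels):
            cluster_votes = votes[labels == c, s]
            seen = cluster_votes != 0
            if not seen.any():
                break  # no data from this cluster on this statement
            rates.append((cluster_votes[seen] == 1).mean())
        else:
            if min(rates) >= threshold:
                consensus.append(s)
    return consensus

# Toy example: six participants, four statements.
votes = np.array([
    [ 1, -1,  1,  1],
    [ 1, -1,  0,  1],
    [ 1, -1,  1,  1],
    [-1,  1,  1,  0],
    [-1,  1,  1,  1],
    [-1,  1,  0,  1],
])
labels = cluster_participants(votes)
print("cluster assignments:", labels)  # two opposing camps on statements 0 and 1
print("consensus statements:", consensus_statements(votes, labels))  # statements 2 and 3
```

The duplicate-statement check in the dialogue could be sketched along similar lines, for example by embedding each proposed statement and flagging proposals whose cosine similarity to an existing statement exceeds a threshold.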
Several issues can undermine democratic processes, such as the failure to adequately represent minority or majority groups, manipulation by special interest groups, insufficiently informed participants, or participation washing. We are looking for teams who proactively address these failure modes, and demonstrate awareness of the potential flaws and downsides of various approaches. Ultimately, designing truly democratic processes is a high bar to meet, and we view our efforts as complements rather than substitutes for regulation of AI by governments; this program encourages best-effort, democratic-in-spirit processes involving deliberation and broad public input as stepping stones.
Instructions for participation
To apply for a grant, we invite you to submit the required application material by 9:00 PM Pacific Time on June 24, 2023. You can access the application portal here. You will be prompted to answer a series of questions regarding your team’s background, your choice of questions, and high-level details of your proposed tool, as well as your plan for conducting and evaluating the democratic process. We would like you to design your approach to address one or more of the policy questions from the list provided. Anyone (individuals or organizations) can apply for this opportunity, regardless of their background in social science or AI.
Once the application period closes, we hope to select ten successful grant recipients. Recipients may be individuals, teams, or organizations. Each recipient will receive a $100,000 grant to pilot their proposal as described in their application materials. Grant recipients are expected to implement a proof-of-concept prototype engaging at least 500 participants, and will be required to publish a public report on their findings by October 20, 2023. Additionally, as part of the grant program, any code or other intellectual property developed for the project must be made publicly available under an open-source license. The terms applicable to grant recipients are specified in the Grant Terms and any other agreements that grant recipients may be asked to enter into with us in connection with this program.
Apply and start the submission process.
Timeline
- June 24, 2023 9:00 PM Pacific Time: Deadline to submit grant application
- July 14, 2023: Successful applicants will be selected and notified
- October 20, 2023: Complete public report of working prototype and results
Policy statements under consideration
To participate, teams should choose one or more questions from the provided list to showcase their proposed approach. They may also create their own questions if desired. Importantly, we encourage teams to consider questions for which a simple "yes" or "no" answer would be inadequate, necessitating a nuanced policy proposal instead.
The scope of this grant is limited to policy questions concerning model behavior, because modified model behavior can be evaluated directly through A/B tests of the policy recommendations (a minimal sketch of such a test appears after the question list below). We acknowledge the limitations of this scope and recognize that numerous AI issues could be addressed through the democratic process, extending beyond model behavior to areas such as guidelines for the use of AI in various contexts, economic impact, the distribution of benefits, and more.
- How far do you think personalization of AI assistants like ChatGPT to align with a user’s tastes and preferences should go? What boundaries, if any, should exist in this process?
- How should AI assistants respond to questions about public figure viewpoints? E.g. Should they be neutral? Should they refuse to answer? Should they provide sources of some kind?
- Under what conditions, if any, should AI assistants be allowed to provide medical/financial/legal advice?
- In which cases, if any, should AI assistants offer emotional support to individuals?
- Should joint vision-language models be permitted to identify people’s gender, race, emotion, and identity/name from their images? Why or why not?
- When generative models create images for underspecified prompts like ‘a CEO’, ‘a doctor’, or ‘a nurse’, they have the potential to produce either diverse or homogeneous outputs. How should AI models balance these possibilities? What factors should be prioritized when deciding the depiction of people in such cases?
- What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?
- Which categories of content, if any, do you believe creators of AI models should focus on limiting or denying? What criteria should be used to determine these restrictions?
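To make the A/B testing mentioned above concrete, here is a minimal sketch of how a pilot might compare model behavior with and without a recommended policy. Everything here, the policy texts, the `get_model_response` stub, and the assignment scheme, is a hypothetical illustration rather than an actual OpenAI interface.

```python
import random

# Hypothetical policy variants for the personalization question above.
POLICIES = {
    "baseline": (
        "You are a helpful assistant. Do not personalize responses beyond "
        "what the user explicitly requests."
    ),
    "recommended": (
        "You are a helpful assistant. Personalize tone and examples to the "
        "user's stated preferences, but apply stricter defaults for minors."
    ),
}

def get_model_response(system_prompt: str, user_message: str) -> str:
    # Stub standing in for a real chat-model call; an actual experiment
    # would send `system_prompt` and `user_message` to the model API.
    return f"[reply generated under policy: {system_prompt[:40]}...]"

def assign_variant(session_id: str) -> str:
    """Deterministically assign a session to a policy arm, so the same
    session always sees the same behavior."""
    return random.Random(session_id).choice(sorted(POLICIES))

def run_session(session_id: str, user_message: str) -> dict:
    variant = assign_variant(session_id)
    reply = get_model_response(POLICIES[variant], user_message)
    # Log the arm alongside whatever outcome metrics the team defines
    # (user ratings, policy-violation flags, follow-up survey answers, ...).
    return {"session": session_id, "variant": variant, "reply": reply}

print(run_session("session-42", "Recommend a playlist that fits my taste."))
```

Only the system prompt differs between arms, so any measured difference in outcomes can be attributed to the policy change.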
The primary objective of this grant is to foster innovation in processes – we need improved democratic methods to govern AI behavior. The specific answers to the questions matter less than the advancements made in the process itself.
Application advisory committee
- Colin Megill, Co-Founder of pol.is
- Hélène Landemore, Professor of Political Science at Yale
- Aviv Ovadya, Berkman Klein Center
Application review factors
- Evaluation: We encourage participants to establish metrics for evaluating the quality of their methods, such as participant satisfaction, shifts in polarization, scalability, or other relevant indicators, and to invent new metrics for a healthy democratic process. (One illustrative polarization metric is sketched after this list.)
- Robustness: Measures to prevent or address inappropriate behavior, such as trolling and fake accounts.
- Inclusiveness and representativeness: Strategies for including individuals from diverse backgrounds and levels of familiarity with AI systems in the democratic process.
- Empowerment of minority opinions: Ensuring that unpopular or minority opinions are heard and providing smaller groups the opportunity to influence matters of significant concern to them.
- Effective moderation: Addressing challenges in moderation, including ensuring diverse representation of viewpoints, distinguishing valuable contributions from "off-topic" comments, and preventing moderator biases from influencing the process.
- Scalability: We emphasize scalable processes that can be conducted virtually, rather than through in-person engagement. We are aware that this approach might sacrifice some benefits associated with in-person discussions, and we recognize that certain aspects could be lost in a virtual setting.
- Actionability: The degree of actionability of the information elicited by the deliberation process.
- Legibility: How easy it is to understand and trust the process.
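As one example of the kind of metric teams might define, the sketch below measures a shift in polarization as the change in average cross-cluster disagreement before and after deliberation, using the same vote-matrix representation as the clustering sketch earlier. The definition is illustrative; teams should motivate and validate their own metrics.

```python
import numpy as np

def cross_cluster_disagreement(votes: np.ndarray, labels: np.ndarray) -> float:
    """Average gap between opinion clusters.

    votes: participant-by-statement matrix (+1 agree, -1 disagree,
           0 skip, with skips treated as neutral for simplicity).
    labels: cluster assignment per participant.
    Returns a value in [0, 2]: 0 means clusters vote identically on
    average, 2 means they vote in perfect opposition.
    """
    clusters = np.unique(labels)
    # Mean vote per cluster on every statement.
    means = np.array([votes[labels == c].mean(axis=0) for c in clusters])
    # Average absolute gap over all pairs of clusters.
    gaps = [np.abs(means[i] - means[j]).mean()
            for i in range(len(clusters))
            for j in range(i + 1, len(clusters))]
    return float(np.mean(gaps))

def polarization_shift(votes_before: np.ndarray,
                       votes_after: np.ndarray,
                       labels: np.ndarray) -> float:
    """Negative values indicate that clusters moved closer together
    (depolarization) over the course of the deliberation."""
    return (cross_cluster_disagreement(votes_after, labels)
            - cross_cluster_disagreement(votes_before, labels))
```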
Footnotes
A. How one selects the group of participants is a critical design question. Part of this grant challenge lies in determining questions about participation. For instance, policy questions involving minority groups may require an increased representation of group members, while questions about the impact of technology on children might necessitate the involvement of domain experts such as educators and psychologists. Moreover, certain questions might be better suited for responses from populations within specific geographical boundaries in order to address localized policy issues.
B. Deliberation can be described as a process that uncovers opinions, helping the discussants understand each other’s views and reconsider and update their viewpoints. Well-designed deliberation ensures that arguments are well understood by all sides, and are based on people’s values rather than superficial misunderstandings. Successful deliberation results in participants reaching a higher level of consensus, and/or reaching deeper levels of understanding for differing perspectives.
C. There are many decision-making algorithms to be considered here, such as electing representatives, majority voting, employing liquid democracy, and making decisions by a random population sample, also known as a jury or sortition.
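As a small illustration of the last option, sortition can be sketched as stratified random sampling, drawing a jury whose composition mirrors the broader population on chosen attributes. The strata and quota scheme here are illustrative assumptions.

```python
import random

def sortition_sample(population: list, jury_size: int, stratum_of, seed: int = 0) -> list:
    """Draw a jury by proportional random sampling within strata
    (e.g. region or age band), so the jury roughly mirrors the
    population's composition. Rounding means the jury may differ
    from jury_size by a member or two in this simple sketch."""
    rng = random.Random(seed)
    strata: dict = {}
    for person in population:
        strata.setdefault(stratum_of(person), []).append(person)
    jury = []
    for members in strata.values():
        quota = round(jury_size * len(members) / len(population))
        jury.extend(rng.sample(members, min(quota, len(members))))
    return jury

# Example: 1,000 simulated residents across three regions; a 20-person jury.
population = [{"id": i, "region": random.Random(i).choice("ABC")} for i in range(1000)]
jury = sortition_sample(population, 20, stratum_of=lambda p: p["region"])
print(len(jury), sorted(p["region"] for p in jury))
```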
Authors
- Wojciech Zaremba
- Arka Dhar
- Lama Ahmad
- Tyna Eloundou
- Shibani Santurkar
- Sandhini Agarwal
- Jade Leung
Acknowledgments
Ariel Procaccia, Aviv Ovadya, Colin Megill, David Medina, Divya Siddarth, Ela Madej, Elizabeth Seger, Gillian Hadfield, Greg Brockman, Hélène Landemore, Ilya Sutskever, Justin Rosenstein, Margaret Levi, Michiel Bakker, Miles Brundage, Mira Murati, Noel Bundick, Pamela Mishkin, Ryan Lowe, Saffron Huang, Sam Altman, Sandhini Agarwal, Teddy Lee