Inclusive.AI

Engaging Underserved Populations in Democratic Decision-Making on AI

Inclusive.AI is a platform equipped with decentralized governance mechanisms to empower underserved groups in decision-making processes related to AI.


We conducted this project in collaboration with OpenAI as part of their Democratic Inputs to AI grant program.

Related Publications

Tanusree Sharma, Jongwon Park, Yujin Kwon, Yiren Liu, Yun Huang, Sunny Liu, Dawn Song, Jeff Hancock, Yang Wang. Inclusive.AI: Engaging Underserved Populations in Democratic Decision-Making on AI. OpenAI Grant Interim Report; manuscript in preparation for publication.

Tanusree Sharma, Yujin Kwon, Kornrapat Pongmala, Henry Wang, Andrew Miller, Dawn Song, and Yang Wang. Unpacking How Decentralized Autonomous Organizations (DAOs) Work in Practice. Preprint; under review at IEEE ICBC.

Inclusive.AI Blog Post (10-minute read): Inclusive.AI: Engaging Underserved Populations in Democratic Decision-Making on AI.

Introduction

A major criticism of past AI development is the absence of thorough documentation and traceability in design and decision-making, leading to adverse outcomes such as discrimination, lack of inclusivity and representation, and breaches of legal regulations. Underserved populations, in particular, are disproportionately affected by these design decisions. Conventional law- and policy-making methods are constrained in the digital age, and traditional methods for understanding user needs and expectations, such as interviews, surveys, and focus groups, have inherent limitations, including a lack of consensus-building and of regular, ongoing insights.

In this project, we aim to utilize Decentralized Autonomous Organization (DAO) mechanisms to empower underserved groups, such as people with disabilities, in decision-making processes related to AI. We tested different DAO mechanisms and configurations to facilitate democratic decision-making. We developed a collaborative decision-making platform, named Inclusive.AI, that allows diverse parties to engage in discussions, proposals, and voting on AI-related issues. We conducted a series of randomized online experiments with 235 users, including people with disabilities and individuals from the Global South, using a 2×2 experimental design in which we manipulated the voting method (ranked voting vs. quadratic voting) and the voting token distribution (equal distribution vs. differential 20/80 distribution). Our goal was to establish a consensus on critical issues related to AI model behaviors, particularly in the context of addressing stereotypical bias in text-to-image models.
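To make the quadratic-voting condition concrete, the sketch below shows how such a tally could work: casting n votes on an option costs n² voting credits, so a participant's credits convert to √credits effective votes, which dampens the influence of any single large holder. This is an illustrative sketch only; the option names and credit budgets are hypothetical and do not come from the study.

```python
import math

def quadratic_votes(credits_spent):
    """Under quadratic voting, n votes cost n^2 credits, so a given
    credit spend buys sqrt(credits) effective votes."""
    return math.sqrt(credits_spent)

def tally(ballots):
    """Aggregate ballots, where each ballot maps option -> credits spent.
    Each participant's credits on an option convert to sqrt(credits) votes."""
    totals = {}
    for ballot in ballots:
        for option, credits in ballot.items():
            totals[option] = totals.get(option, 0.0) + quadratic_votes(credits)
    return totals

# Equal-distribution condition: every participant gets the same budget (100).
# Option names ("diverse", "status_quo") are hypothetical placeholders.
ballots = [
    {"diverse": 100},                    # all-in on one option -> 10 votes
    {"diverse": 64, "status_quo": 36},   # split budget -> 8 + 6 votes
]
print(tally(ballots))  # {'diverse': 18.0, 'status_quo': 6.0}
```

Note how the second participant, by splitting their budget, buys 14 total votes versus the first participant's 10: quadratic costs make expressing intensity on a single option progressively more expensive.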

High Level Questions

How do different voting mechanisms affect people's voting experiences as part of the decision-making process for AI? Under different voting mechanisms, does the voting outcome reflect the value perceptions of the majority of participants?

When generative models create images for underspecified prompts like "a CEO", "a doctor", or "a nurse", they have the potential to produce either diverse or homogeneous outputs. What are people's perceptions in terms of how AI models should balance different options? What factors do people consider important when deciding the depiction of individuals in such cases?

We chose to focus on these questions mainly because they have direct, foreseeable impact on our target populations (i.e., teenagers, people with disabilities, people of color, and people from the Global South). We have worked closely with our target populations in past/ongoing research. We aim to include diverse voices in the decision-making process and ensure that AI rules are inclusive and equitable.

Our target populations are marginalized groups that could be disproportionately affected by these rules. Our broader goal is to inform AI developers, researchers, and practitioners on how to navigate these thorny questions by considering input from these marginalized groups. Our expected results could also change how AI tools are developed, or at least configured, so that our target populations are not further marginalized by AI.

Motivation

In our previous work, we conducted an empirical analysis of a diverse set of DAOs (100+) across various categories and smart contracts, leveraging on-chain data (e.g., voting results) and off-chain data (e.g., community discussions), as well as our interviews with practitioners. Specifically, we defined metrics to characterize key aspects of DAOs, such as their degrees of decentralization and autonomy, to inform future DAOs and related governance systems. Building on the insights about DAO governance from this systematic analysis, we plan to develop mechanisms that can seamlessly integrate with AI (e.g., ChatGPT), allowing users to actively participate in democratic governance decision-making. In particular, these DAO-like mechanisms will inform the development of AI systems.
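One widely used measure of governance decentralization of the kind described above is the Nakamoto coefficient: the smallest number of holders whose combined voting power exceeds half of the total. The sketch below is illustrative only and is not necessarily the exact metric defined in our analysis; the token balances are hypothetical.

```python
def nakamoto_coefficient(balances):
    """Smallest number of holders whose combined voting power exceeds
    50% of the total -- a common proxy for governance (de)centralization.
    A value of 1 means a single holder can pass any simple-majority vote."""
    total = sum(balances)
    running, count = 0, 0
    for balance in sorted(balances, reverse=True):
        running += balance
        count += 1
        if running > total / 2:
            return count
    return count

# Concentrated distribution: one "whale" controls a majority outright.
print(nakamoto_coefficient([60, 10, 10, 10, 10]))  # 1
# Even distribution: several holders must cooperate to reach a majority.
print(nakamoto_coefficient([20, 20, 20, 20, 20]))  # 3
```

A higher coefficient indicates that control is spread across more participants, which is one way to compare, for example, an equal token distribution against a differential 20/80 one.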


Team

Yang Wang

Associate Professor at UIUC. Dr. Wang's interests focus on privacy, security, and related public policy issues. His current projects include teaching high school students about cybersecurity and AI ethics.

Yun Huang

Associate Professor at UIUC. Dr. Huang's interests include crowdsourcing systems, HCI, and mobile applications and systems. One of her current projects involves advancing STEM online learning by augmenting accessibility with AI.

Tanusree Sharma

PhD Candidate at UIUC. She previously worked at Google and the Max Planck Institute on topics related to privacy/security risk assessment tooling. Her current work involves defining and evaluating decentralized governance metrics, particularly in DAOs.

Dawn Song

Professor at UC Berkeley and Faculty co-Director of the UC Berkeley Center on Responsible Decentralized Intelligence (RDI). Her work involves designing and developing new techniques and tools for responsible AI.

Sunny Liu

Associate Director of the Stanford Social Media Lab. Her research involves designing and testing digital literacy interventions for older adults, adolescents, and rural residents.

Jeff Hancock

Harry and Norman Chandler Professor of Communication and founding director of the Stanford Social Media Lab, Stanford University. His work involves designing novel frameworks for AI-mediated communication.