Artificial Intelligence (AI) has been a transformative force across various sectors, and its potential impact on democracy is a topic of intense debate. While there are valid concerns that AI systems like ChatGPT and GPT-4 could harm democracy by overwhelming public discourse with autogenerated arguments, there is also a promising alternative perspective: AI, if harnessed correctly, could serve the public good and strengthen democracy rather than undermine it.

The key lies in developing AI systems that are not controlled by large tech monopolies but are instead developed by government entities and made accessible to all citizens. This public AI option could be specifically designed for use cases where technology can best aid democracy. It could educate citizens, facilitate deliberation, summarize public opinion, and identify potential areas of agreement. Politicians could use large language models (LLMs) like GPT-4 to better understand their constituents’ needs and desires.

Currently, state-of-the-art AI systems are controlled by tech giants like Google, Meta, and OpenAI in collaboration with Microsoft. These companies dictate how we interact with their AI systems and what access we have. They can shape these AI systems to align with their corporate interests. However, the ideal scenario would be to have AI options that are public goods and are directed toward the public good.

Existing LLMs are trained on material gathered from the internet, which can reflect biases and hate. Companies attempt to filter these datasets, fine-tune LLMs, and tweak their outputs to remove bias and toxicity. However, there are concerns that these companies are rushing to market with half-baked products in a race to establish their own monopoly.

These companies make decisions with significant implications for democracy but with little democratic oversight. We don’t hear about the political trade-offs they are making. Do LLM-powered chatbots and search engines favor some viewpoints over others? Do they avoid controversial topics altogether? Currently, we have to trust these companies to tell us the truth about the trade-offs they face.

A public option LLM would provide a crucial independent source of information and a testing ground for technological choices with significant democratic implications. This could work much like public option healthcare plans, which increase access to health services while also providing more transparency into operations in the sector and exerting productive pressure on the pricing and features of private products. It would also allow us to understand the limits of LLMs and direct their applications accordingly.

We know that LLMs often “hallucinate,” confidently asserting facts that aren’t real. It isn’t clear whether this is an unavoidable flaw in how they work or whether it can be corrected. Democracy could be undermined if citizens trust technologies that randomly fabricate information, and the companies trying to sell these technologies can’t be trusted to admit their flaws.

However, a public option AI could do more than just check the honesty of technology companies. It could test new applications that could support democracy rather than undermine it.

Most notably, LLMs could help us formulate and express our perspectives and policy positions, making political arguments more cogent and informed. This doesn’t mean that AI will replace humans in the political debate, but rather that it can assist us in expressing ourselves. If you’ve ever used a Hallmark greeting card or signed a petition, you’ve already demonstrated that you’re okay with accepting help to articulate your personal sentiments or political beliefs. AI could make it easier to generate first drafts, provide editing help, and suggest alternative phrasings.
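To make this concrete, here is a minimal sketch of what such drafting assistance might look like in code. It uses the OpenAI Python SDK purely as a stand-in for whatever interface a public-option model would expose; the model name, system prompt, and sample notes are illustrative assumptions, not a real product.

```python
# A minimal sketch of LLM-assisted drafting, using the OpenAI Python SDK
# (openai >= 1.0) as a stand-in for a future public-option model.
# The model name, system prompt, and notes below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

notes = (
    "oppose the highway expansion; worried about noise and air quality "
    "near the elementary school; would rather fund more bus service"
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder for any capable instruction-following model
    messages=[
        {
            "role": "system",
            "content": (
                "Turn the user's rough notes into a short, civil first draft "
                "of a letter to their city council member. Do not introduce "
                "positions the notes do not contain."
            ),
        },
        {"role": "user", "content": notes},
    ],
)

# The citizen reviews and edits the draft; the AI assists rather than replaces.
print(response.choices[0].message.content)
```

The important design choice here is the constraint in the system prompt: the model is asked only to articulate positions the citizen already holds, which is exactly the greeting-card-and-petition role described above.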

If the hallucination problem can be solved, LLMs could also become explainers and educators. Imagine citizens being able to query an LLM that has expert-level knowledge of a policy issue, or that understands the positions of a particular candidate or party. Instead of having to parse bland and evasive statements calibrated for a mass audience, individual citizens could gain real political understanding through question-and-answer sessions with LLMs that could be unfailingly available and endlessly patient in ways that no human could ever be.

Moreover, AI could facilitate radical democracy at scale. AI could manage massive political conversations in chat rooms, on social networking sites, and elsewhere, identifying common positions and summarizing them, surfacing unusual arguments that seem compelling to those who have heard them, and keeping attacks and insults to a minimum.

AI chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of AI-moderated civic debate could also be a dynamic alternative to opinion polling. Politicians turn to opinion surveys to capture snapshots of popular opinion because they can hear directly from only a small number of voters, yet they still need to understand where voters agree or disagree.
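As a rough illustration of the summarization step, the sketch below feeds a batch of comments to an LLM and asks for common positions. The SDK, model name, prompt, and sample comments are all assumptions, and a real system would need clustering and batching to handle input at civic scale.

```python
# A minimal sketch of AI-assisted summarization of town-hall input, again
# using the OpenAI Python SDK as a placeholder for a public-option model.
# Everything below (model name, prompt, comments) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

comments = [
    "The new zoning plan will price longtime residents out.",
    "We need denser housing near transit, not more parking.",
    "Whatever gets built, protect the creekside park.",
    # ...in practice, thousands more
]

prompt = (
    "Below are public comments from a town hall. Identify the main positions, "
    "note where participants agree and disagree, and quote one representative "
    "comment per position. Set aside attacks and insults.\n\n"
    + "\n".join(f"- {c}" for c in comments)
)

summary = client.chat.completions.create(
    model="gpt-4",  # placeholder for any capable model
    messages=[{"role": "user", "content": prompt}],
)

print(summary.choices[0].message.content)
```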

Looking further into the future, these technologies could help groups reach consensus and make decisions. Early experiments by the AI company DeepMind suggest that LLMs can build bridges between people who disagree, helping bring them to a consensus.

This future requires an AI public option. Building one, through a government-directed model development and deployment program, would require a lot of effort, and the greatest challenges in developing public AI systems would be political.

Some of the necessary technological tools are already publicly available. Tech giants like Google and Meta have made many of their latest and greatest AI tools freely available for years, in cooperation with the academic community. Although OpenAI has not made the source code and trained weights of its latest models public, competitors such as Hugging Face have done so for similar systems.

For further reading on this topic, I recommend checking out these articles:

  1. “Bing’s A.I. Chat: ‘I Want to Be Alive.’” on nytimes.com.
  2. “The Security Hole at the Heart of ChatGPT and Bing” on wired.com.