Key Takeaways
- AI’s impact on democracy depends less on the technology itself and more on how people choose to apply it.
- Schneier and Sanders argue that governments and citizens must demand responsible uses of AI that enhance speed, fairness, and accessibility in public systems.
- Without strong, activity-based regulation and public alternatives, AI risks concentrating political power and accelerating authoritarian tendencies.
You’d be forgiven for thinking AI represents a classic Faustian bargain, as every reported blessing seems tied to a sinister curse.
AI will help us navigate the immense amounts of information and data created every day in the modern world, but it will also make it easier for bad actors to swamp the infosphere with disinformation. AI can enable real-time translations to spread ideas seamlessly across language barriers, but it may also make the marketplace of ideas less pluralistic by concentrating power in a few individuals. AI will make all of our jobs easier or straight-up replace us.
If everything goes well, AI will usher in a techno-utopia of unprecedented wealth, leisure, and productivity — and the monkey paw curls.
But this framing doesn’t match reality. In a recent interview, Bruce Schneier, a cryptographer and computer security professional, reminded me that AI isn’t a monolithic force — one meta-technology to rule them all. AI instead represents “a hundred different things” that can be used in “a hundred different ways.”
“It really shows the promise and the peril of the technology,” he says. “There are so many things that it can do, so many ways it changes existing things we’re doing.”
Schneier and Nathan Sanders, a data scientist focusing on policymaking, recently co-wrote Rewiring Democracy, which explores how AI may alter democratic systems and bureaucracies at every level. While a high-level takeaway of their argument may again seem Faustian — AI can make democracies more representative if it doesn’t make them less — the devil’s in the details.
As we discuss, AI’s power does not lie solely with the technology itself or even the corporations controlling the space. It lies in how we choose to implement it and the problems we enlist it to solve. And because citizens have a say in how democratic societies run, the possibilities are far more nuanced than the forecasts of your standard doomers and boomers.
(This interview has been edited for length and clarity.)
Big Think: In your book, you described democracy as an “information system.” Why did you choose that framework?
Schneier: I came to think of democracy as basically an information system in which people decide what to do — [one] that processes individual preferences to produce a singular group output. What should the tax rate be? What should the unemployment policy be?
This is valuable when looking at information technologies, including AI. AIs will process information differently, either better or faster, so [this framework] helps us think about how we collectively, as a society, process information.
Big Think: How is AI different or similar to past information technologies, such as radio or the telegraph?
Schneier: Radio, television, the internet, social media: what AIs do that none of those technologies do is mirror, in some way, human cognition. Whether that makes it qualitatively different depends a lot on the application.
Sanders: In the book, we talk about four different ways AI will impact democracy. We talk about speed — AI can make decisions faster. [There’s] scale — AI can perform more tasks at the same time — and scope — it can perform more types of tasks. Finally, sophistication. As humans, we’re limited to keeping five, maybe seven, ideas in our heads at once. AI can plausibly take into account a much larger set of factors when analyzing a situation.
We think those are the capabilities that point to the specific places where AI will impact democratic processes.
Big Think: One example from your book that stood out to me is the tax system. AI can make personal tax filings easier and detect fraud at scale. It could also help the chronically underfunded and understaffed IRS sift through reams of data.
Schneier: This is a good example because it concerns the power of AI. It can find tax fraud; it could also commit tax fraud, and the payoff scales with your income. The rich get more out of it.
Also, the IRS’s chronic underfunding is a political decision. Congress has decided to underfund [the service] because tax cheats give them a lot of money in their campaigns. Here, the technology can do both good and bad, but it can’t solve the fundamental problem because it’s not a technology problem.
Big Think: It’s a problem of application and norms. What’s another example of how AI can potentially improve or degrade a democratic process?
Sanders: Administrative decisions are a huge part of what democracies do. They support people. They provide benefits such as health insurance, Social Security, and disability insurance.
[But] the U.S. bureaucracy is big and complex. It requires a lot of scale, but one thing it hasn’t historically had is speed. Tens of thousands of people in the U.S. die every year waiting for a payout on disability benefits [because] of how slow our bureaucratic process is. There’s an opportunity [for AI] to do good — to speed up the process by taking in information and making decisions faster than the number of humans we’re willing to pay for ever could.
Of course, there are opportunities to misuse AI here, too. It could instead automate inequality by encoding systematic biases into the process and implementing policies at scale that deny benefits to people who deserve them.
Big Think: So, in addition to things like speed and scale, AI also has the potential to cement bias, concentrate power, and erode trust. How do we begin to tackle this problem?
Schneier: Take everything you said, remove the word AI, and add people. We manage [by focusing] on the people, not the technology. It’s the people using the technology for good or bad, for this or that, for things we decide are moral or immoral.
Now, differences can happen at an enormous scale. AI propaganda already happens at that scale. Nobody has enough humans to pump out that much propaganda. The question is when those differences of degree become differences in kind. This is very application-specific.
Sanders: In the book, we discuss recommendations for what governments and citizens can do to change the circumstances that we’re in.
For the government, especially thinking about the U.S., we call for more action on regulation. [But] the government has other ways to shape the AI ecosystem besides just controlling private action. The government can take action, too. We advocate for the development of public AI systems that set a standard for AI development. [These systems] can provide a competitive baseline that private companies would need to meet to be successful — a baseline in terms of cost, availability, and upholding ethical principles such as transparency.
For citizens, I would summarize two major recommendations. First, citizens should resist inappropriate uses of AI. When AI is being used to automate inequity, people should call that out, resist, and protest those applications. Citizens should also demand responsible use of AI.
Big Think: Do you think there is enough political will for people to make those demands? I think about the many alarms raised over social media, yet they never seem to translate into political action.
Schneier: It depends on the jurisdiction. In the U.S., no, but that’s because the U.S. government isn’t functional in so many ways.
[However,] California has a considerable number of AI regulations. The E.U. has the AI Act, which also intersects with the Digital Markets Act, the Digital Services Act, and GDPR [General Data Protection Regulation]. We can complain about some of those regulations, but at least they’re trying.
Big Think: It can feel like AI is so large and moving so fast that it’s difficult for anyone to keep up. What are your thoughts?
Schneier: The speed of tech outpacing the speed of regulation is a problem, especially in a world where tech has such an influence and mistakes are more costly. Again, this is a bigger problem than AI, certainly one that we need to figure out. Building our society for the near-term benefit of a bunch of tech billionaires is a dumb way to organize things.
Sanders: In the book, we discuss the dichotomy between entity-based regulation — meaning lawmakers keep up with the technology and say a model can have so many parameters or be trained with so many FLOPs, etc. — and activity-based regulation, which says what people are allowed to do in general. We come down very much on the side of the latter.
Examples like DeepSeek show that we can do a lot with far fewer resources, and I don’t think it’s necessary, or should be expected, that policymakers keep up with those advancements. Instead, they need to clearly say what people are allowed to do and what they aren’t, regardless of which technology makes it possible.
Big Think: If you had a legislator’s ear, what kind of regulation or mandate would you like to try?
Sanders: I’ll go back to the example I gave earlier: We should set expectations around the government decision-making process so the benefits people rely on happen faster. If there’s a process that takes months for people to get an answer — especially on something that is routine and where a large percentage of people are ultimately approved — there’s no reason people should have to wait that long. We should mandate that those timelines shrink.
We should not require the government to use AI to accomplish that, but it should force the question. Maybe we need to hire more people, or maybe we should leverage technology. In either case, we should demand that the government becomes more responsive and faster.
Big Think: Much of the talk surrounding AI today sounds like a 21st-century arms race. What are your thoughts on that part of the discussion?
Schneier: U.S. companies love the metaphor because it means: don’t regulate us, give us lots of money, and look the other way, because we’re going to do all sorts of damage to win this arms race. It’s self-serving to use that framing.
But that’s not the way this tech works. Think about DeepSeek. China comes up with a fundamental advance in how to build a model more cheaply and quickly than OpenAI. What do they do? They publish the paper. Google engineers publish papers. That is the way it works. This is not 1960s tech.
Big Think: Your point reminds me of how Sam Altman often uses apocalyptic language when discussing the potential of AI — while his company works to build more advanced AI.
Schneier: Fund me or you will all die.
Big Think: Exactly.
Sanders: I’m glad that you raised Sam Altman and OpenAI here.
At the start of the [second] Trump term, the administration had a comment process where they invited companies and others to influence its AI policy. OpenAI submitted a letter with their recommended plan. And you can see that what they asked for showed up in the administration’s policy verbatim.
One example is the proposed moratorium on state-level AI regulation in the name of innovation. This was something that OpenAI asked for within the frame of an arms race. If you shackle us, they wrote, the U.S. won’t be able to keep up with China. You can see a direct line from that framing to the U.S. administration acting on it.
Big Think: Speaking of China, authoritarian and totalitarian states also have this technology. How do you see AI affecting those governmental information flows?
Sanders: It was a conscious choice that we focused on democratic systems [in our book], but we recognize there are a million things to be said about AI. There is a broad range of democracies, some more authoritarian and some more pluralistic. We think there are serious dangers for democracies to veer toward authoritarian systems by leveraging AI.
One example we wrote about happened with the second Trump administration and DOGE: the use of AI to concentrate power within bureaucracies.
One way we have sheltered the American democratic system against authoritarian impulses — even when they have existed in the executive branch before — has been the layers of human bureaucracy necessary to implement policy. It takes time to develop new policies and filter them through that bureaucracy. To the extent that we further automate decision-making with AI, it creates a central button or lever for controlling and implementing policy across many layers of bureaucracy.
When the folks involved in DOGE said they wanted an AI-first policy for decision-making, it illustrated the potential for that kind of danger.
Schneier: More generally, think of AI as a power-enhancing technology. In a democracy, it could enhance democracy; in the hands of an authoritarian, it could enhance authoritarianism.
[Another] thing we are worried about is AI-enhanced spying. It used to be that governments hired people to watch other people. With computer and phone technologies, they can know where someone is without following them, but they still need people to listen in on conversations. AI has the potential to automate spying, allowing for mass spying in the same way that the previous technologies allowed for mass surveillance.
In the hands of an authoritarian, it is a nightmare.
Big Think: You mention that citizens should resist inappropriate uses of AI and demand responsible applications. How might we do that?
Sanders: I’ll give you a straightforward answer and then an example. The straightforward answer is that people and organizations should look to build their power. That’s how democracy works. They should always do that, and they should recognize the potential AI has to help them do that.
In our circles, Bruce and I interact with many people who are skeptical of AI. We understand those reasons, but we worry that if entire factions of people systematically choose not to use or engage with the technology, it creates a power imbalance in which one side of a political argument has capabilities that are not available to the other.
We’re excited about innovations in union organizing, labor organizing, and political campaigning — especially at the state and local levels, where people are finding effective ways to use the technology to amplify their power.
I [recently] represented a project called the Massachusetts Platform for Legislative Engagement [at the State House]. The project looks at the pain points that advocates have when they try to have a voice in the legislative process. For example, in Massachusetts, we file about 8,000 pieces of legislation every year. If you want to influence policy, you’re on your own to figure out what bills are relevant, what they do, what the legalese means, and which bills have a chance of passing.
We’ve tried to use AI to gather all that information with added context to explain the legislation and that process. I think that’s how you empower groups with AI.
Schneier: It’s inspiring. There are examples like this in other countries as well. Taiwan uses AI [for] collective decision-making and consensus-building. The group Make.org, based in Europe, is building tools for large-scale citizen assemblies. People use tools like these to increase their power, whether they’re highly organized or not.
Big Think: To bring this full circle, because AI is a hundred different technologies, not just chatbots, we can use it to potentially fix a hundred different problems.
Sanders: We don’t want to fall into the trap of thinking that technology is the solution to all problems. We don’t believe it is, but people sometimes think of technology too narrowly, and they don’t recognize all the aspects of democracy that are already mediated by technologies.
We open the book by talking about the ancient Greek technologies that were used to establish fair (for the time) ways of distributing power in a democracy by identifying who would fill official government positions. That technology has evolved over time into representative democracy and modern voting systems. It’s all part of a continuum.
Likewise, AI is not only large language models. There’s a continuum of technologies, and we shouldn’t look to one specific type of AI to solve these problems. We should recognize that democracy is an information system, and technology plays a role in making it legitimate.
Big Think: Is there anything you’d like to add?
Schneier: These technologies are changing incredibly fast. Whoever you are, whatever you know about the technology was either wrong three months ago or will be wrong three months from now. You need to pay attention to these technologies and not assume something is true because it was true a year ago.
New things are going to become possible, [and] there will be things that won’t be possible anytime soon. It’s all changing. You have to keep an open mind.