keiferski 27 minutes ago

Call me optimistic or naive, but I don’t worry too much about AI having a major effect on democratic elections, primarily because all of the things it is replacing or augmenting are edge-case scenarios where a tiny fraction of votes ends up deciding an election. There is already a massive amount of machinery and money aimed at that sliver, and all AI will probably do is make that operation more efficient and marginally more effective.

For the vast majority of people voting, though, I think a) they already know who they’re voting for because of their identity group membership (“I’m an X person so I only vote for Y party”) or b) their voting is based on fundamental issues like the economy, a particularly weak candidate, etc., and therefore isn’t going to be swayed by these marginal mechanisms.

In fact I think AI might have the opposite effect, in that people will find candidates more appealing if they are on less formal podcasts and in more real contexts - the kind of thing AI will have a harder time doing. The last US election definitely had an element of that.

So I guess the takeaway is: if elections are so close that a tiny number of voters sways them, the problem of polarization is already extensive enough that AI probably isn’t going to make it much worse than it already is.

  • higginsniggins 18 minutes ago

    Ok, but if elections are decided by a small swing group, wouldn't that mean a small targeted impact from AI could be *more* effective, not less? If all it needs to do is have a 1 percent impact, that makes a huge difference.

    • keiferski 15 minutes ago

      Yes, but I guess my point is that this is just another symptom of the polarization problem, and not some unique nightmare scenario where AI has mass influence over what people think and vote on.

      So it matters in the same way that the billions of dollars currently put toward this small sliver matter, just in a more efficient and effective way. That isn't something to ignore, but it's also not a doomsday scenario IMO.

      • ImPleadThe5th 4 minutes ago

        I think the concern is that it will become a leading contributor to polarization.

        Polarization is the symptom. The cause is rampant misinformation and engagement-based feeds on social media.

AustinDev 41 minutes ago

Trying to put on my optimist hat. I believe the most beneficial near-term impact of AI on U.S. politics isn’t persuasion; it’s comprehension.

I believe our real civic bottleneck is volume, not apathy. Omnibus bills and “manager’s amendments” routinely hit thousands of pages (the FY2023 omnibus was ~4,155 pages). Most voters and many lawmakers can’t digest that on deadline.

We could solve this with LLMs right now.
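
A rough sketch of what that could look like (assuming the openai Python package, an API key in the environment, and a plain-text copy of the bill; the model name, chunk size, and filename are all illustrative, not recommendations):

    # Sketch: chunk a long bill and ask an LLM for plain-English summaries.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def chunk_text(text, max_chars=12000):
        """Split the bill into roughly fixed-size chunks on line boundaries."""
        chunks, current, size = [], [], 0
        for line in text.splitlines(keepends=True):
            if size + len(line) > max_chars and current:
                chunks.append("".join(current))
                current, size = [], 0
            current.append(line)
            size += len(line)
        if current:
            chunks.append("".join(current))
        return chunks

    def summarize_bill(path):
        text = open(path, encoding="utf-8").read()
        parts = []
        for i, chunk in enumerate(chunk_text(text), 1):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model choice
                messages=[
                    {"role": "system",
                     "content": "Summarize this excerpt of a bill in plain English: "
                                "spending items, affected agencies, notable riders."},
                    {"role": "user", "content": chunk},
                ],
            )
            parts.append(f"Chunk {i}:\n{resp.choices[0].message.content}")
        return "\n\n".join(parts)

    print(summarize_bill("omnibus_fy2023.txt"))  # hypothetical filename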

  • observationist 21 minutes ago

    Processing everything that's already been passed as laws and regulations, identifying loopholes, bottlenecks, chokepoints, and blatant corruption, and systematically graphing the network of companies, donors, bureaucrats, and politicians responsible - with AI, the strategy of burying things in paperwork isn't feasible anymore, and accountability becomes technically achievable.

    We've already seen several pork inclusions get called out by the press, discovered only because of AI, but it will be a while before that really starts having an impact. Hopefully it breaks the back of the corruption permanently - the people currently in political positions tend not to be the most clever or capable, and to game the system again they'll need to be more clever than the best AI used to audit and hold them to account.

  • fridder 38 minutes ago

    Even though I am fairly pessimistic about AI's impact, this is a good positive to call out.

tabbott an hour ago

I feel like too little attention is given in this post to the problem of automated troll armies used to influence the public's perception of reality.

Peter Pomerantsev's books are eye-opening on the previous generation of this class of tactics, and it's easy to see how LLM technology + $$$ might be all you need to run a high-scale influence operation.

lm28469 an hour ago

> About ten million Americans have used the chatbot Resistbot to help draft and send messages to their elected leaders

Who's reading these messages? Other LLMs?

  • CharlesW 41 minutes ago

    Considering that politicians are generally very late adopters, I would wager "interns".

    • SoftTalker 12 minutes ago

      The interns are likely using LLMs then.

      LLMs tend to be very long-winded. One of my personal "tells" of an LLM-written blog post is that it's way too long relative to the actual information it contains.

      So if the interns are getting multi-page walls of text from their constituents, I would not be surprised if they are asking LLMs to summarize.

nluken an hour ago

> So far, the biggest way Americans have leveraged AI in politics is in self-expression.

Most people, in my experience, use LLMs to help them write stuff or just to ask questions. While it might be neat to see the little ways in which some political movements are using new tools to help them do what they were already doing, the real paradigm-shifting "use" of LLMs in politics will be generating content to bias the training sets the big companies use to create their models. If you could do that successfully, you would basically have free, 24/7 propaganda bots presenting your viewpoint to millions as a "neutral observer".

  • SoftTalker an hour ago

    The use to "ask questions" is where the vulnerability lies. Let's face it: outside of whatever expertise and direct experience we have, all we know is based on what we learned in school or have read or heard about. It's often said that history is written by the winners, but increasingly it's written by those who run the AI models. Very few of us know any history by direct experience. Very few of us are equipped to replicate scientific research or even critically evaluate scientific publications. We trust credible sources. The more accepting people become of what AI tells them when they "ask questions," the easier it will be for those who control the AI to rewrite history or push their own version of facts. How many of us are going to go to the library and pull a dusty book off a shelf to learn any differently?

avidiax an hour ago

The author didn't look at the structural side of this.

* There is continuing consolidation in traditional media, which is literally being bought up by moneyed interests.

* The AI companies are all jockeying for position and hemorrhaging money to do so, and their ownership and control again rests with moneyed interests.

* This administration looks to be willing to pick winners and losers.

I think this all implies that the way we see AI used in politics in the US is going to be, on net, in support of the super wealthy and of the current administration.

The other structural aspect is that AI can simulate grassroots support. We have already seen bot farms and the like pop up to try to drive public opinion at the level of forum and social media posts. AI will automate this process and make it 10x or 100x more effective.

So on both the high and low ends of discourse, we can expect AI to push something other than what is in the interests of the common person, at least insofar as the interests of billionaires and political elites fail to overlap with those of common people.

romaniv an hour ago

I liked Schneier much more when he was arguing against hyperbolic tech claims that were used as excuses for mass control and surveillance.

The notion that AI is reshaping American politics is a clear example of a made-up problem that is propped up to warrant a real "solution".
