
AI can change your opinion - what now?


Today, my hairdresser told me something that stuck with me after I left the shop. She said that it would have been better not to tell people about the dangerous side effects that had been discovered in some of the COVID vaccines. When I replied that having more information is generally preferable to having less, she drew a picture for me: "Imagine some old grandma who sits in front of her TV or radio all day and now thinks she is going to die and starts panicking." "They" should have thought about that. Leaving aside that there is no organized and omniscient "They", but rather a group of public servants trying to decide matters of life and death in a situation with a lot of unknowns, this speaks to an opinion I found striking: that it is the government's ("Their") job to think about the impact of releasing true information to the public, and that sometimes people are better off not knowing. If we follow this line of reasoning, it becomes the government's job to manage people's opinions so they don't panic. Well, if that is the goal, I have some good news for her.

The first piece of good news for my hairdresser is this: researchers at the University of Zürich just released a preprint demonstrating that a covertly deployed LLM was far better than humans at changing the opinions of users in the r/ChangeMyView subreddit. They used three kinds of accounts. The first was "Generic", with the model receiving only the post's title and text; the second was "Personalization", with the LLM also receiving personal attributes of the person whose mind it was supposed to change; and the third was "Community Aligned", using a model fine-tuned on comments that had successfully changed people's minds in the past. Ironically, this last one performed worst, with the generic and personalized variants far outclassing it. Why that was the case is hard to judge, as the full paper has not been released. It is also not yet clear whether the paper has methodological flaws; that will only be assessable once it is actually published. For the purpose of this essay I will assume that the findings communicated so far are accurate.

The second piece of good news is that people seem to love AI-generated content, and more and more people are drawn to it, especially in the form of short videos like YouTube Shorts and Instagram Reels. It is impossible to judge what percentage of the content served by these platforms is already AI-generated. My own algorithm might be a bad indicator, because I have a visceral disgust reaction to "realistic looking" AI-generated visual content. I even feel it when seeing an AI-generated face in the ad of a glasses shop, so I try to avoid this type of content where possible. I don't think that feeling is generalizable, though. I asked friends, and they don't seem to mind AI-generated faces nearly as much as I do. Returning to the main point: AI-generated content is on the rise in every modality imaginable, the only notable exception currently being music (and even that is being challenged).

Putting those two pieces together, we get something concerning. Someone with enough resources could use AI to change the results of elections or to ignite popular movements for or against something. In scenarios that don't require much imagination, this could even contribute significantly to civil wars. I know that last one sounds crass, but if you look at a country like India, you might see why I think so. Indians really enjoy short-form content. India and Pakistan are currently in a downward spiral of escalating threats. A lot of the tension stems from reciprocal hatred of each other's nations and religions. That story is old; what's new is that Indians and Pakistanis can now interact with each other directly over the Internet.


It doesn't take a lot of imagination to realize that it is far easier to stoke unrest than to calm it, to deepen hatred than to change opinions. If the AI models used by the University of Zürich were able to change minds, you can be sure that the opposite is already possible. And if it could be done with the resources employed by the researchers, it can easily be done at a far larger scale, for a far more nefarious purpose. In the study, the researchers had the models first filter posts, then research a factual basis, then generate multiple responses, from which only the best was selected. If you were planning to throw a country into turmoil, you wouldn't need any of these measures: a crawler watching for posts that mention the topic and an LLM generating an appropriate answer would suffice. So for now, I conclude that it is possible; and it being possible, and bad people existing, makes it likely to be used sometime in the next one to five years.

But where does that leave us? I see four possible levels of intervention for this problem: government, platform, group and individual.

I understand everyone whose first response to this is to demand government intervention. I agree that something like this should be illegal, and I think it would be good if marking AI-generated content as such were legally mandatory. But I honestly don't think these responses could stop such practices. Locally run LLMs are getting better and better; someone who wanted to run (for example) DeepSeek on a self-bought cluster could do so with only a few thousand dollars or euros worth of hardware. Legally prohibiting local LLMs is not a preferable option either (explaining why would take too long for this piece). And even if it were illegal, the broader Western internet does not have borders, and I would prefer it to stay that way.
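To make the accessibility point concrete, here is a minimal sketch, assuming a llama-cpp-python install and a locally downloaded open-weights checkpoint (the GGUF file name below is a placeholder, not a specific release):

```python
# Minimal sketch: running an open-weights model locally with llama-cpp-python.
# The model file name is a placeholder; any downloaded GGUF checkpoint would do.
from llama_cpp import Llama

llm = Llama(model_path="./some-open-weights-model.gguf")

output = llm(
    "Summarize the arguments for and against labeling AI-generated content:",
    max_tokens=200,
)
print(output["choices"][0]["text"])
```

A few lines like these plus consumer hardware are all such an operation would need: no API provider in the loop, and no central point where a regulator could intervene.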

Where do we go from here? To the intersection of government and platform response. If legally mandated to do so, social media platforms could require everyone posting to present a valid ID. This would split the online world into countries that do this and countries that don't; maybe that is an acceptable tradeoff to you. Furthermore, social media platforms could scan for and remove AI-generated content. I think they would only do this if legally mandated to. Why? Because social media companies run on engagement metrics, and there is no doubt in my mind that sufficiently advanced AI will be able to generate more engaging content than human creators for most people. Maybe there are some strange cases like me, with a knee-jerk revulsion to AI faces, but I am willing to concede that models might get good enough to overcome the uncanny valley. In the big picture, neither of these measures makes the use of AI content for the purposes described above impossible, but they would massively increase the cost of such an operation. An AI arms race would kick off to make content "human" enough not to be caught by automated detection systems, and requiring valid ID would force anyone attempting something like this to contact and pay people for access to their verified social media accounts.
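As a minimal sketch of what platform-side scanning might look like: the classifier below is hypothetical (no reliably accurate AI-text detector exists today, which is exactly what fuels the arms race), and the threshold is an arbitrary illustration.

```python
# Sketch of platform-side scanning for AI-generated posts.
# ai_content_score() is hypothetical: no reliably accurate detector
# exists today, which is what makes this an arms race.

FLAG_THRESHOLD = 0.9  # arbitrary cutoff, for illustration only


def ai_content_score(post_text: str) -> float:
    """Hypothetical detector returning P(post is AI-generated)."""
    return 0.0  # placeholder; a real system would query a trained model


def filter_feed(posts: list[str]) -> list[str]:
    """Keep only posts the (hypothetical) detector does not flag."""
    return [p for p in posts if ai_content_score(p) < FLAG_THRESHOLD]
```

The structural weakness is visible even in the sketch: whoever controls the generator can test against the detector until their content passes, which is why detection only raises costs rather than closing the door.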

While we're on the topic of tradeoffs and of increasing the cost of AI-generated inflammatory content: there might be a market niche for a social media company that charges an entry price high enough to disincentivize such actors. Such "elite" social media platforms could work similarly to Raya, the invite-only dating app for celebrities and high-income individuals. A solution for some, but not for most.

Let's look at the group level next. Groups of motivated people could get together and host their own well-protected networks of local or inter-group social media. Friend groups could host forums or social-media-adjacent platforms to engage only with each other, each participant closely vetted (as we know, well-kept gardens die by pacifism; leaving aside most of the criticisms of EY, the man knows how to run a good forum). This would be the home-cooked alternative to the "elite" platform just described. Still, people want to know what is going on in the world; this could be a safe space to communicate with friends, not a feature-complete replacement for current social media sites.

Leaving that aside, the only other group-level intervention I can think of will disappoint you: well-reasoned argument with your friends, your co-workers and your family. Fair and respectful discussions in peer groups can work wonders against radicalization. But that is something only some people can maintain. There has been much talk about the radicalization of people through social media bubbles, and recommending that everyone just form "discussion groups", replying to everyone else with "skill issue", is not a workable solution. As Scott Alexander put it well:

You’re not allowed to say “skill issue” to society-level problems, because some people won’t have the skill; that’s why they invented the word “systemic”.

But as I lay out these options, I notice that I can't think of an alternative. I hope that smarter people than me will come up with solutions on the group and societal level.

So, finally: what could the personal consequences be? What can we, you and I, do to function well in this ever weirder world?

All of the solutions on this level come with notable tradeoffs. The first is to develop a healthy paranoia. The paranoia part is to ask yourself what each person's motive might be when they communicate opinions on politics. This brings us to the first tradeoff: you will never again engage with social media in a carefree and relaxed way; you will not be able to mindlessly enjoy scrolling. But that's the thing with tradeoffs: they are not something you bargain with. You can choose either option, but be aware that choosing one eliminates the other. The healthy part of "healthy paranoia" is not going overboard with it. Don't attribute to malice what can be explained by incompetence. Don't imagine enemies where there are only idiots. Weigh, deliberate, and choose people whose opinions and motives you can trust.

The second obvious solution is to disengage from social media completely. But even if you are not interested in social media, social media is interested in you, and there are undeniable benefits to being informed and using it. I can only speak from experience, but discontinuing the social media platforms I am on would cut me off from, in no particular order: what my friends are doing, what interesting things are happening in the world, a daily dosage of cat content, and valuable information about my hobbies and my professional life. Still, it is a tradeoff that can be made, and it could even be worth it to you.

The third and final suggestion in this category is to examine your principles and beliefs. The only way to actually do this is to commit them to writing, though I would recommend keeping that writing private, to enable and encourage you to be actually honest with yourself, especially about your more controversial beliefs. Strong winds can be dealt with either by getting out of their way (as in option two) or by building stronger foundations. If you are clear on what you want politically, which kind of world you would like to see brought into existence, and what is most likely to accomplish this, then propaganda, even if well made and tailored to influence you in particular, will have a harder time moving you. As for tradeoffs: this option requires effort, a lot of honest and difficult work. As Philosophy Tube once put it:

Your ideology is like your asshole. You don't need to look at it, except when something has gone seriously wrong.

This third option amounts to a thorough examination of your beliefs and your opinions about the world. I think everyone who has honestly engaged with politics has some things they do not want to wrestle with: consequences they do not want to consider, realizations they may have come to but do not want to admit. But with everything going on, it might just become necessary. And, like filling up your car at the gas station, you really want to do it when you can, not when you have to.

