Filling social media with indistinguishable AI bots is illegal under the EU AI Act
Meta envisions a future where AI-generated bots, avatars, or artificial accounts seamlessly integrate into its platforms, filling networks with artificial agents that mimic human users. I must stress: according to the (so far vague) remarks, these bots would supposedly interact with people, generate content, and have profiles indistinguishable from real accounts.
Are we heading toward even more useless social media filled with non-information or even disinformation? What would be the value of that?
Let's set such philosophical questions aside and instead ask: is it even legal? Well, it turns out that no, it isn't. Such bots would have to be clearly marked as artificial agents. Let me explain why. The EU AI Act mandates transparency in AI systems and prohibits making them indistinguishable from humans. Specifically:
Article 50(1): AI systems must inform users they are interacting with AI unless it is "obvious to a reasonably well-informed, observant, and circumspect person." Meta's vision risks violating this rule unless clear disclosures are ensured. Deploying such bots in ways indistinguishable from humans is impossible, at least in the European Union.
Article 50(2): AI-generated content must be marked in a machine-readable and detectable format.
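To make the machine-readable marking requirement concrete, here is a minimal sketch of one way a platform could attach a detectable AI-disclosure field to a post's metadata. The field names and schema below are my own illustrative assumptions; the AI Act requires that the marking be machine-readable and detectable but does not prescribe any particular format.

```python
import json


def mark_ai_generated(post: dict, generator: str) -> dict:
    """Return a copy of a post payload with a machine-readable AI disclosure.

    The 'ai_disclosure' schema is purely illustrative, not a format
    mandated by the EU AI Act.
    """
    marked = dict(post)
    marked["ai_disclosure"] = {
        "ai_generated": True,      # detectable flag for crawlers and filters
        "generator": generator,    # which system produced the content
        "schema": "example-v0",    # hypothetical schema version
    }
    return marked


post = {"author": "assistant-bot-01", "text": "Hello, here is my reply."}
marked = mark_ai_generated(post, generator="example-llm")
print(json.dumps(marked, indent=2))
```

A real deployment would more likely rely on standardized provenance metadata (such as content-credential schemes) plus a human-visible label, but the point stands: the disclosure must be present in a form software can reliably detect, not only in fine print.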
Furthermore, transparency measures must account for the needs of vulnerable users, such as younger audiences, who may struggle to distinguish AI from human interaction. Of special note is the requirement that such information and notifications "be provided in accessible formats for persons with disabilities". For example, vision-impaired persons would have to have a way to learn that a reply they received came from an artificial bot. Hearing-impaired persons would have to be notified that a text, video, or audio creation targeted at them was produced by an automaton. This must be easy to understand.
Yes, bots can be useful
Meta’s vision highlights generative AI’s potential but challenges trust and authenticity. AI bots may offer practical benefits, like automating tasks or generating creative content. For example, even today some Twitter/X bot accounts can compile threaded content into a website, and this is useful. But the idea of holding conversations with artificial agents raises concerns.
Information Operations and Business Model for AI?
Digital platforms have long fought against botnets and troll farms exploiting their networks for disinformation. Interestingly, they now appear to consider adopting similar elements as part of a business model to monetize AI-generated interactions.
This isn’t necessarily unambiguously bad or impossible, but it’s something that demands careful consideration.
I am looking forward to new opportunities - if you could use expertise in global cybersecurity, risk assessment, privacy regulations/GDPR & standards, I’m open to new projects. Feel free to reach out at me@lukaszolejnik.com.