Nürnberg - The web and social media are increasingly flooded with low-quality AI-generated content. Why this digital junk is becoming more common - and what consequences it brings.

Scrolling through social media feeds or clicking through new video suggestions on YouTube, users increasingly encounter content that feels unsettlingly polished yet lacks real depth: images that don’t quite look authentic, articles that read like they were written by a machine (because they were), or artificial-looking videos narrated by voices somewhere between an audiobook narrator and an answering machine. Behind these encounters lies a phenomenon that, within a few years, has evolved from a niche issue into a widespread problem. The technical term for it is AI slop.

The term refers to low-quality digital content, typically produced in large volumes by artificial intelligence. It is not the product of a creative process but of a calculated strategy: generate as much content as possible in order to keep up in the ever-accelerating race for attention. One example of what this looks like - and of the absurdly high reach it can achieve - was posted on the evening of January 24 and has already been shared more than 2,000 times.

From “zombie football” to “cat soap operas”

A 2025 analysis by The Guardian revealed how saturated the platforms already are with AI slop: more than 20 percent of the videos suggested to new YouTube users consisted of such content. The study examined 15,000 top channels and identified 278 that were made up entirely of AI-generated content; together they amassed 63 billion views and 221 million subscribers and generate around 117 million dollars in annual revenue.

The same study also showed that nine of the 100 fastest-growing YouTube channels were pure AI productions, featuring content ranging from “zombie football” to “cat soap operas.”

AI slop is also spreading rapidly across social networks. AI-generated images and videos can be churned out in massive quantities within seconds, making them ideal fodder for platforms like Facebook and TikTok, whose algorithms reward sheer activity rather than care or creativity.

Why is that a problem?

The motivation behind AI slop is straightforward: whoever produces the most content gets the most reach. For some players, generative AI has therefore become a tool for flooding the internet with as much material as possible at minimal effort. The result is a deluge of fast, cheap, easily shareable content that drags down the overall quality of what circulates online and makes it harder for serious work to remain visible - after all, users first have to wade through ever more subpar material to find it.

But that’s not all: AI slop can also become a political weapon. When Hurricane Helene struck the U.S. mainland at the end of September 2024, killing at least 247 people, an obviously AI-generated image - shared millions of times - was used by the American right to blame the disaster on the Biden administration. Donald Trump, too, is fond of such content - whether to portray himself as the pope or to show himself, wearing a crown, dropping feces from a fighter jet onto protesters.

However, the growing volume of low-quality content also poses a problem for AI itself: AI-generated material increasingly ends up in the data used to train new models. The more inferior content spreads, the more likely it becomes that models inadvertently learn from texts that were themselves machine-produced - and researchers warn that models trained repeatedly on such synthetic output tend to degrade over successive generations. In short: AI is consuming its own garbage.