Dead Internet Theory

The dead internet theory has been around for a few years now. If you’ve yet to hear about it, now’s your chance to understand what it is and why it could be becoming more relevant than ever. As AI begins to have a greater influence over the digital landscape and online communities, the question of what the internet is and who it’s for is one that more people are beginning to ask. While the dead internet theory might not be entirely true, it’s possible there’s more truth to it than many first thought.

What Is the Dead Internet Theory, Anyway?

The dead internet theory emerged sometime around the end of the 2010s or the early 2020s. It concerns the state of the internet, positing that the majority of online activity and content is generated by bots or automated systems rather than by people. The theory has been discussed by everyone from high-profile YouTubers to major publications such as The Atlantic.

Although the term “dead internet theory” is relatively new, people have been discussing similar ideas for longer. There have certainly been concerns about bot traffic and how easy it can be to “game the system” to get certain content seen by more people.

That Sounds a Bit Like a Conspiracy Theory…

Sure, at its most extreme, it definitely is. The claim that the entire internet is fake is easy to dismiss: if you’re using it, then it’s guaranteed that other real people are using it too. But even though the most extreme version of the theory sounds ridiculous, there could still be a kernel of truth to it.

With the rise of AI, it’s actually a theory that could be becoming even more relevant. Just how much of the content on the internet is currently generated by artificial intelligence? Of the content generated by AI, how much of it also benefits from a human touch? And is that going to change any time in the near future?

Is There Any Truth to the Dead Internet Theory?

Detecting fake content online can be tough. AI-generated content and scarily convincing deepfakes make it hard to tell fact from fiction, especially when you’re more likely to be scrolling (and reacting) quickly, rather than taking the time to thoroughly research everything you see.

SEO: Bowing to the Almighty Algorithm

Why is all of this happening? It’s hard to deny that publishing content online has become a race to see who can be seen first and most often, and that search engines such as Google have played a role in this. To get to the first page of search results, you have to do what the search engine algorithm calls for. While Google says it does its best to ensure content is helpful and made for humans, it’s clear that many content creators choose to put the algorithm first.

Search engine demands have always affected how online content is produced. Previous iterations of the Google algorithm created problems such as keyword stuffing, in which people crammed pages with repeated, often barely relevant keywords to try to get their content seen first. Google has continued to refine its algorithm to make search results more relevant and helpful, and to discourage unethical behaviour. But these changes merely mean that content creators have to keep following the algorithm, whatever it requires of them.
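To make the idea concrete, here’s a minimal sketch of the kind of crude keyword-density check that keyword stuffing was designed to exploit. The threshold and the simple word-splitting are illustrative assumptions for this example, not any real search engine’s rules.

```python
# Illustrative sketch only: a toy keyword-density heuristic. The 5% threshold
# and whitespace tokenisation are assumptions, not how any real search engine
# actually ranks or flags pages.

def keyword_density(text: str, keyword: str) -> float:
    """Fraction of words in `text` that match `keyword` (case-insensitive)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

def looks_stuffed(text: str, keyword: str, threshold: float = 0.05) -> bool:
    """Flag pages where one keyword makes up an implausible share of the text."""
    return keyword_density(text, keyword) > threshold

stuffed = "cheap flights cheap flights book cheap flights now cheap flights"
natural = "we compare airline prices so you can find an affordable ticket"
print(looks_stuffed(stuffed, "cheap"))  # True  (4 of 10 words are "cheap")
print(looks_stuffed(natural, "cheap"))  # False
```

A heuristic this naive is exactly why stuffing worked for a while, and why it stopped working once ranking signals grew more sophisticated.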

Digital marketers have spent years building websites that cater to the all-powerful algorithm. Not only that, but they have to watch for major changes, in case a new update suddenly means their websites are no longer favoured by the search engine.

Google claims that its focus is on content that meets its E-E-A-T criteria: content that demonstrates experience, expertise, authoritativeness, and trustworthiness. In theory, this sounds great, and Google also says it carries out rigorous testing to make sure searches deliver high-quality results. However, this requires a set of rules for determining what content to show. That means digital marketers can follow the rules for SEO without really providing anything of great value to users, even if Google tries to design its algorithm to weed out poor-quality content. All marketers and developers need to do is tick the right boxes to get search engines to see their websites favourably.

How Has AI Contributed to This Problem?

It’s not just search engine algorithms that make a difference to the online landscape. The rise of AI technology is also having a major impact on the origin and quality of digital content. It’s now easy to generate all kinds of content, including text, images, and even video, with the touch of a button.

Of course, Google doesn’t actively discourage all AI content. Why would they when they have their own AI tools that they want to promote? However, Google does want to discourage poor-quality AI content, saying that the aim is to reward “high-quality content, however it is produced”.

There are some damning statistics and predictions already out there about the future of the internet under AI. One 2023 study claimed that nearly 50 news websites were AI-generated while another paper suggested that over half of the sentences on the internet may have been created and translated into other languages using large language models (LLMs). Experts are even predicting that by 2026, less than two years away, 90% of online content will be AI-generated.

Some people might see these numbers and ask what the big deal is. Does it matter if content is AI-generated, if AI can do the same job as a person? Many people are already turning to AI tools for their own content generation and even for research and fact-checking. But the big question is whether AI actually can do the same job as a person. And the answer is that, at least for now, it’s nowhere near as good. AI might be able to carry out a range of tasks, and it’s certainly evolving, but it still makes mistakes.

A 2023 study by researchers at Stanford and UC Berkeley showed that ChatGPT’s accuracy was worsening. Although the tool can produce material that is often indistinguishable from human-written text in tone and format, there is plenty of evidence that its accuracy is lacking. A study from NYU revealed that people had a limited ability to tell the difference between medical advice written by doctors and by chatbots, while another study suggested ChatGPT was 72% accurate in clinical decision-making. Some might be impressed by that figure, but it also means that, in this specific area, the tool is wrong nearly 30% of the time.

So What Can We Do About It?

This all might sound a little scary, and to a certain extent it should. The internet hasn’t been completely replaced with bots and fake content, but that doesn’t mean there’s nothing to worry about. Even if you think the dead internet theory is a bit extreme, it’s hard to deny that bots, AI-generated content, and the pressure to cater to search engine algorithms all risk shifting focus away from the quality of online content.

AI might not have taken over just yet, but we’re seeing it start to grow. Dead internet theory might have been a conspiracy theory when it was first created, but it could now be more real and relevant than it has ever been. So what can we do about it? How can we ensure the internet remains human and still offers a way for people to connect with each other?

Some of the steps we can take include making sure tools are available to detect false information, deepfakes, and other problematic content created using AI. One current problem is that while AI is still developing, so are the tools meant to detect it. A detection tool can miss AI content and label it human-made (a false negative), especially as many people find it hard to tell the difference themselves. It can also produce false positives, flagging human-created content as AI. These tools should become more accurate over time, but for now their judgements need to be treated with caution.
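The two error types described above are usually summarised as false positive and false negative rates. Here’s a minimal sketch of how they’re computed; the labels and predictions below are made-up illustrative data, not results from any real AI-detection tool.

```python
# Illustrative sketch: measuring a detector's two error types against
# ground-truth labels. The data below is invented for demonstration.

def error_rates(labels, predictions):
    """labels/predictions: lists of 'ai' or 'human' for the same documents.
    Returns (false_positive_rate, false_negative_rate), where a false
    positive is human-written text wrongly flagged as AI."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == "human" and p == "ai")
    fn = sum(1 for y, p in zip(labels, predictions) if y == "ai" and p == "human")
    humans = labels.count("human") or 1  # avoid division by zero
    ais = labels.count("ai") or 1
    return fp / humans, fn / ais

labels      = ["human", "human", "human", "ai", "ai", "ai", "ai", "ai"]
predictions = ["ai",    "human", "human", "ai", "human", "ai", "ai", "human"]
fpr, fnr = error_rates(labels, predictions)
# 1 of 3 human texts flagged as AI; 2 of 5 AI texts missed
print(f"false positive rate: {fpr:.2f}, false negative rate: {fnr:.2f}")
```

The trade-off between the two rates is the crux: tuning a detector to miss less AI content tends to mean flagging more genuine human writing, which is why both numbers matter when judging these tools.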

Education is also important, especially in showing internet users how to distinguish real, quality content from fake content. Already we can see how easily many people fall for AI-generated art, video, or text. Some people have started using ChatGPT and similar tools as if they were search engines or reference libraries that always deliver accurate answers. It’s important that people of all generations learn how to spot fake information and AI-generated content. Additionally, there have been calls for requirements to clearly label AI-generated material, particularly in advertising.

As content creators, it’s also important to model what we want the future of the internet to be. AI tools can be useful, but quality content also needs the human touch.
