Spam has been a persistent problem since the early days of the internet. Before the advent of ChatGPT, spam accounted for only about two percent of search results; with the rise of AI-generated spam, that figure has climbed to a staggering ten percent.
AI-generated spam is not limited to clickbait; it has proliferated because it is cheap to produce and can slip past multiple layers of filtering. In response, Google has resorted to manually delisting offending websites, which carries its own cost: lost ad revenue for Google itself. One might wonder why Google would spend its own money to fight these sites, but it ultimately comes down to protecting that revenue.
The flood of cheap AI-generated content poses a serious threat to Google. Users were already weary of pre-AI clickbait in sponsored search results, and more of it may drive them away. The internet was once a place for discovering lesser-known voices, but AI spam has become a menace to the entire web, and filtering it out is hard.
One potential solution is to automate the identification and removal of AI spam and its producers. That approach has its own problems. First, AI is continuously evolving, and its output is becoming sophisticated enough to be hard to detect reliably. Second, the cost of fighting AI spam is substantial: it is like gambling at a high-stakes table while drunk, where you may end up alone on the moon. Even Bill Gates, for all his billions, couldn't eradicate email spam by his promised deadline.
The situation is dire for Google and anyone else trying to build AI into search results and user interfaces. The models would have to be trained on contaminated data, and revenue distribution is a major open question: it is unclear how ads would fit into such a system, and any cut in the revenue share would hurt content providers. It's a lose-lose situation for all parties involved.
Google originally established itself with robust ranking algorithms and a clean results page. It understood what made a website good and rewarded such sites with exposure and revenue. But playing a fair game is never easy, and the system has attracted countless attempts at manipulation. Since ad revenue is shared, both the cheaters and Google profit when manipulation succeeds, giving Google and similar platforms a way to gain from the system themselves, while honest sites are left at an unfair disadvantage.
So what options remain if AI manages to cheat Google? Websites already display upfront privacy notices for cookies and the like; they may soon need quality-assurance statements as well, such as flags indicating AI-generated content, marketing material, or ownership. Whether the tech giants will sincerely follow their own rules is another question. They must prioritize quality content, enforce their policies, penalize cheaters, and keep users from being exposed to them again.
The original article can be found here.