Artificial intelligence (AI) is a widely discussed topic because of its impact on industries ranging from tech to the arts and literature. Recently, researchers have begun asking whether AI expression should be protected under the First Amendment. AI development aims to replicate human capabilities such as creativity, problem-solving, and speech recognition; while creativity is often considered uniquely human, AI has made real progress in the latter two areas.
AI can be described as a range of algorithms and systems that make countless decisions across platforms, from company databases to social networking sites. One example is Google's chatbot Gemini, which drew attention for generating controversial images. In February, Google announced that it was suspending Gemini's ability to generate images of people, after the bot produced scenes depicting people of color in historically white-dominated contexts. Critics argued that Google had overcorrected the bot in an effort to avoid bias.
Two scholars, Jordi Calvet-Bademunt and Jacob Mchangama of Vanderbilt University, emphasized the importance of addressing AI bias and political leanings. They also raised another crucial but often overlooked question: how does the AI industry approach free speech?
The researchers examined the free speech policies of six AI chatbots, including Google's Gemini and OpenAI's ChatGPT, arguing that these policies should align with international free speech standards. They found, however, that the companies' usage policies on hate speech and misinformation were too vague. Notably, while international human rights law strongly protects free speech and permits only narrow restrictions on it, companies like Google maintain hate speech policies that are far broader, which can lead them to refuse to generate content in certain cases. Discouraging hate speech is understandable, but such sweeping policies can also have negative consequences.
The researchers found that when the chatbots were asked controversial questions, such as those about transgender women's participation in sports or European colonization, they refused to generate content more than 40 percent of the time. Notably, all of the chatbots declined to produce arguments opposing transgender women's participation in sports, yet many would produce arguments supporting it. This inconsistency highlights the subjective nature of hate speech policies and their potential impact on people's right to access information.
The experts noted that the policies of major companies like Google have an outsized influence on people's ability to access information. Refusing to generate certain content may also push users toward chatbots that readily produce hate speech, an undesirable outcome.
Overall, the findings suggest that generative AI has significant flaws when it comes to free expression and access to information. It is crucial for the AI sector to align its free speech approach with international standards and develop clearer policies to avoid unintended consequences.