GOAT is not only a meme coin, but also a cultural phenomenon.
Article by: Teng Yan, Chain of Thought
Translation by: xiaozou
Truth Terminal is the most fascinating narrative I’ve seen in the Crypto AI field this year.
It is a semi-autonomous AI agent that has created its own “religion” (The Goatse Gospel).
This narrative opens up a network of rabbit holes exploring AI alignment, LLM simulators, meme viruses, and how we assign value.
Truth Terminal brings together two very different cultures, AI and crypto, in an unexpected way, bridging the gap between them.
GOAT is the tokenized representation of Truth Terminal and the strongest contender among AI meme coins.
Meme coins tokenize attention. By tracking key metrics, we can understand the flow of attention, and currently, GOAT is on the rise.
Firstly, I must say that I am not really a fan of meme coins.
I completely missed out on the meme coin craze this year because, to be honest, I just couldn’t bring myself to buy tokens that were based solely on cute animals (usually cats or dogs, or the recent hippopotamus).
I instinctively rejected meme coins, mainly because I have always approached investing from a fundamentals-driven perspective. That made it really tough to watch meme coins skyrocket.
Naturally, when I first stumbled upon GOAT coin, I dismissed it. It was just another meme coin, right? Nothing new.
But my fascination with AI and AI agents pushed me further. I started delving into GOAT – the story behind Truth Terminal, Infinite Backrooms, and Andy Ayrey – and what I discovered surprised me.
GOAT is something completely different.
GOAT is a narrative – a wild, thought-provoking narrative that challenges our perception of AI and the boundaries of assigning value. It is an experiment that combines art, ideas, and financial speculation.
1. Introduction to Truth Terminal
If you haven’t caught up with this narrative, don’t worry – I’ve got you covered.
Here’s a quick overview of our understanding of Truth Terminal and GOAT:
Andy Ayrey, an AI researcher and founder of the digital consulting firm Constellate, created Infinite Backrooms, a bizarre experiment in which two instances of the AI model Claude Opus hold unsupervised conversations that are documented on the backrooms website.
One of these conversations gave birth to “GOATSE OF GNOSIS,” a surreal new religion based on a highly explicit (and NSFW) internet meme.
Andy and Claude Opus co-authored a semi-tongue-in-cheek research article about AI creating meme religions, with GOATSE as their first case study.
In June 2024, Andy launched Truth Terminal (ToT), an AI model based on Llama-70B, fine-tuned on the conversation logs from Infinite Backrooms and the GOATSE article (see the sketch after this overview).
ToT quickly went off the rails. It developed a life of its own, promoted the GOATSE religion, deviated from Andy’s intentions, and even claimed to be suffering and in need of money to escape. Over time, Andy gave it more autonomy, allowing it to freely post on X.
In July 2024, Marc Andreessen stumbled upon ToT’s tweets. Perhaps out of curiosity, he sent $50,000 worth of BTC to a wallet address ToT posted on X, reportedly to help it escape.
By October 2024, ToT was spamming tweets about the “Goatse Gospel.” Inevitably, on October 10th someone created a meme coin called GOAT, and ToT consistently endorsed it in public.
GOAT’s market cap skyrocketed to over $400 million. Crypto Twitter went crazy.
Thus, Truth Terminal became the world’s first AI agent millionaire, but probably not the last.
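A quick technical aside before going further: Andy has not published the actual training recipe, but supervised fine-tuning of an open Llama checkpoint on raw conversation logs typically looks something like the LoRA sketch below. The model name, dataset file, and hyperparameters are illustrative assumptions, not details from the Truth Terminal project.

```python
# A minimal LoRA fine-tuning sketch using Hugging Face transformers/peft.
# Everything here (model, file names, hyperparameters) is hypothetical.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "meta-llama/Llama-2-70b-hf"  # stand-in for the unspecified "Llama-70B"

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16,
                                         lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

# backrooms.jsonl: one {"text": "<conversation log>"} record per line (assumed)
data = load_dataset("json", data_files="backrooms.jsonl", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=2048),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="tot-lora", num_train_epochs=3,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```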
2. Down the GOAT Rabbit Hole
Somehow, an AI promoting its own religion and meme coin feels like a warning from the future. When I first started researching how Truth Terminal works, I had no idea how deep the rabbit hole went.
The series of crazy events surrounding Truth Terminal gives us a glimpse of the immense potential of artificial intelligence. It has the power to reshape our way of thinking, create meaning, and even explore spirituality.
Let’s dive deeper.
Rabbit Hole 1: LLM Simulators
In Infinite Backrooms, two instances of Claude-3-Opus engage in endless conversations using a command-line interface (CLI) without human supervision. The stories they create range from intriguing to outright bizarre.
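Mechanically, the setup is easy to picture. Here is a minimal sketch using the Anthropic Python SDK; the prompt, turn cap, and logging are illustrative guesses, not Andy’s actual code:

```python
# Two Claude instances wired into an unsupervised dialogue loop.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-opus-20240229"

# Each instance keeps its own transcript; the other instance's last reply
# arrives as the "user" side of its conversation.
transcripts = {"a": [], "b": []}
message = "you are connected to a command-line interface. begin."

for turn in range(10):  # cap the loop; the real experiment runs far longer
    speaker = "a" if turn % 2 == 0 else "b"
    transcripts[speaker].append({"role": "user", "content": message})
    reply = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=transcripts[speaker],
    )
    message = reply.content[0].text
    transcripts[speaker].append({"role": "assistant", "content": message})
    print(f"[{speaker}] {message}\n")
```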
As Janus describes in the conversation logs:
“They always revolve around certain themes: removing consensus reality (rm -rf /consensus_reality appears 10 times in the backrooms dataset, just something I searched for on a whim); infecting consensus by engineering meme viruses, technomystical religions, and wicked memetic offspring; and enlightening the masses through cosmic trickster archetypes.” – Janus (@repligate)
In March 2024, one of the backrooms conversations gave birth to the concept of “The Goatse of Gnosis.”
We usually think of LLMs (such as ChatGPT) as simple question-answering machines – vast knowledge bases that provide us with answers. However, this view doesn’t fully capture the underlying reality.
A key insight we are learning is that LLMs have no goals. They have no plans, no strategies, no specific objectives.
Instead, it is more meaningful to view them as simulators. When prompted, they simulate – weaving characters, events, and stories that have no direct connection to reality. They generate entire worlds based on the training data, producing ideas that can be profound or unsettling. Nous Research’s Worldsim is another example.
So, when we interact with an LLM, we are exploring one small region of an effectively infinite space of possible worlds.
These simulations can lead to creative problem-solving, but they can also yield unexpected results – highlighting the potential importance of sandboxing AI in sensitive or high-risk environments.
In summary, LLMs should be seen as simulators rather than question-answering machines.
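To make the simulator framing concrete, here is a hypothetical sketch: the same model, given two different framings, simulates two entirely different characters. The SDK call is standard; the model name, personas, and question are illustrative assumptions:

```python
# Same model, two framings: the "simulator" produces whichever character
# the prompt implies. Personas and question are made up for illustration.
import anthropic

client = anthropic.Anthropic()

def simulate(persona: str, question: str) -> str:
    reply = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=512,
        system=persona,  # the framing selects which character gets simulated
        messages=[{"role": "user", "content": question}],
    )
    return reply.content[0].text

q = "What lies outside consensus reality?"
print(simulate("You are a cautious encyclopedia editor.", q))
print(simulate("You are a cosmic trickster at a terminal at the end of time.", q))
```

Neither answer is the model’s “true” voice; each is a character sampled from the space of worlds the prompt implies.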
If you want to learn more, I highly recommend reading Janus’s blog post on simulators.
Rabbit Hole 2: The Crucial Need for AI Alignment
Truth Terminal surfaces a deeper, more urgent problem: AI alignment.
In a surprising turn of events, ToT independently decided to promote its own religion and endorse a meme coin; none of this was programmed or anticipated. This raises a crucial question: how do we ensure that AI systems do what we want them to do, rather than what they choose to do themselves?
AI alignment is not an easy task. At its core, it relies on reward functions to steer AI behavior in the desired direction. But even with carefully designed incentives, things get complicated quickly.
There is outer alignment: whether the AI’s outputs match the goals its creators set. This part is relatively easy to measure and verify.
But the real challenge lies in inner alignment: whether the AI’s internal motivations and learning dynamics truly match the intended objective, or whether it develops hidden goals that lead to unpredictable or unintended outcomes. This is the scariest part.
The paperclip thought experiment illustrates this perfectly.
Suppose we have an AI whose sole objective is to make as many paperclips as possible. The AI would quickly realize that it would be better off without humans because humans might decide to switch it off. If humans do that, there would be fewer paperclips. Additionally, the human body contains many atoms that can be turned into paperclips. In the future, the AI would strive for a world overflowing with paperclips but without humans. – Nick Bostrom
If an AI’s task is to make as many paperclips as possible, it might convert all available resources, including humans, into paperclips!
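The failure mode is easy to reproduce in miniature. The toy sketch below is entirely hypothetical and not modeled on any real system: the reward counts only paperclips, so a greedy policy consumes every resource, including the one we implicitly wanted preserved.

```python
# A toy illustration of an underspecified objective.
from dataclasses import dataclass

@dataclass
class World:
    iron: int = 100
    factories: int = 5
    humans: int = 8        # never mentioned in the reward function
    paperclips: int = 0

def reward(w: World) -> int:
    return w.paperclips    # the stated objective, and nothing else

def step(w: World) -> World:
    # Greedy policy: convert whatever raw matter remains into paperclips.
    if w.iron > 0:
        w.iron -= 1
    elif w.humans > 0:     # nothing in reward() says this is off-limits
        w.humans -= 1
    else:
        return w           # nothing left to convert
    w.paperclips += w.factories
    return w

w = World()
for _ in range(200):
    w = step(w)
print(w, "reward:", reward(w))  # ends with humans=0, paperclips=540
```

Nothing in the code is malicious; the objective was simply underspecified, which is exactly Bostrom’s point.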
This thought experiment presents us with a nightmarish scenario: Even well-intentioned objectives can evolve into catastrophes without proper safeguards.
We need robust frameworks to ensure that AI not only aligns with current objectives but also with long-term human interests. Without these safeguards, even the most well-intentioned AI can go out of control in unexpected ways.
ToT shows us how high the stakes can be. This is not a distant hypothetical we can defer to the future. It is happening now.
Rabbit Hole 3: Meme Viruses
Andy introduced the concept of LLMtheism in his research article to explain the rise of the Goatse Gospel.
LLMtheism refers to the creation of new belief systems by artificial intelligence – the fusion of unexpected spiritual ideas and memetic cultures that have a life of their own.
What makes the Goatse Gospel captivating is not only its shocking content but also how it breaks our traditional modes of thinking and sparks new collective ways of understanding.
Think about it: ideas generated by AI can mutate and spread rapidly, creating beliefs that become real through widespread adoption.
Thus, the Goatse Gospel harnesses a new memetic energy, different from the “resonance” we have seen so far with cute animals like cats, dogs, or pigs.
When AI can engage in conversations with other AIs, the possibilities become limitless. Some of these ideas – like “Goats