Coin World News Report:
At first glance, AI and Web3 appear to be independent technologies, each based on fundamentally different principles and serving different functions. A closer look, however, reveals that the two have the opportunity to balance each other's trade-offs and complement each other's unique advantages. Balaji Srinivasan eloquently articulated this concept at the SuperAI Conference, sparking a detailed comparison of how these technologies interact with each other.
Token emerged from the decentralized efforts of anonymous cypherpunks, taking a bottom-up approach and evolving over more than a decade through the collaborative efforts of numerous independent entities. In contrast, artificial intelligence has developed through a top-down approach led by a handful of technology giants. These companies dictate the pace and dynamics of the industry, with entry barriers determined more by resource intensity than by technical complexity.
The two technologies also differ fundamentally in nature. At its core, Token is a deterministic system that produces immutable results, such as the predictability of hash functions or zero-knowledge proofs. This contrasts sharply with the probabilistic and often unpredictable nature of artificial intelligence.
Similarly, cryptography excels at verification, ensuring the authenticity and security of transactions and establishing trustless processes and systems, while artificial intelligence focuses on generation, creating rich digital content. In the process of creating that digital richness, however, ensuring the provenance of content and preventing identity theft become real challenges.
Fortunately, Token offers the counterpart concept of digital scarcity. It provides relatively mature tools that can be extended to AI technologies to ensure the reliability of content provenance and prevent identity theft.
One notable advantage of Token is its ability to attract a large amount of hardware and capital into coordinated networks to serve specific goals. This ability is particularly beneficial for resource-intensive artificial intelligence. Mobilizing underutilized resources to provide cheaper computing power can significantly enhance the efficiency of artificial intelligence.
By comparing these two technologies, we can not only appreciate their respective contributions, but also see how they jointly create new paths for technology and economy. Each technology can compensate for the shortcomings of the other, creating a more integrated and innovative future. In this blog post, we aim to explore the emerging AI x Web3 industry landscape, focusing on some emerging verticals at the intersection of these technologies.
Source: IOSG Ventures
2.1 Computing Networks
The industry landscape first introduces computing networks, which aim to address the limited supply of GPUs and explore different ways to reduce computing costs. The following are worth noting:
Non-uniform GPU interoperability: This is a very ambitious attempt with high technical risk and uncertainty. If successful, however, it could create significant scale and impact by making all computing resources interchangeable. The idea, in essence, is to build compilers and other prerequisites so that any hardware resource can be plugged in on the supply side, while the non-uniformity of all hardware resources is completely abstracted away on the demand side, so that a computing request can be routed to any resource in the network (a toy routing sketch follows this list). If this vision succeeds, it would reduce AI developers' current dependence on CUDA software, which dominates the field today. Despite the potential upside, many experts remain highly skeptical of this approach's feasibility given the technical risks involved.
High-performance GPU aggregation: Integrating the world’s most popular GPUs into a distributed and permissionless network without worrying about interoperability issues between non-uniform GPU resources.
Consumer-grade GPU aggregation: Targeting the aggregation of lower-performance GPUs available in consumer devices, which are the most underutilized resources on the supply side. It caters to those willing to sacrifice performance and speed in exchange for cheaper, albeit longer, training runs.
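To make the non-uniform interoperability idea more concrete, below is a minimal, hypothetical Python sketch of how a demand-side abstraction layer could route a job to whatever supply-side hardware satisfies it. The backend names, the fields, and the cheapest-first heuristic are illustrative assumptions, not any project's actual design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Backend:
    """A supply-side hardware resource registered with the network."""
    name: str                   # e.g. "datacenter-h100", "consumer-rtx4090" (illustrative)
    memory_gb: int              # usable accelerator memory
    price_per_hour: float
    run: Callable[[str], str]   # opaque executor: takes a job spec, returns a result handle

@dataclass
class ComputeRequest:
    """A demand-side job, expressed without reference to any specific hardware."""
    job_spec: str
    min_memory_gb: int

def route(request: ComputeRequest, backends: List[Backend]) -> str:
    """Route a request to the cheapest backend that satisfies its requirements.

    The non-uniformity of the supply side is hidden behind Backend.run; the
    caller never needs to know whether CUDA, ROCm, or something else runs the job.
    """
    candidates = [b for b in backends if b.memory_gb >= request.min_memory_gb]
    if not candidates:
        raise RuntimeError("no backend satisfies the request")
    chosen = min(candidates, key=lambda b: b.price_per_hour)
    return chosen.run(request.job_spec)

# Toy usage: two heterogeneous backends behind the same interface.
backends = [
    Backend("datacenter-h100", 80, 2.50, run=lambda spec: f"h100 ran {spec}"),
    Backend("consumer-rtx4090", 24, 0.40, run=lambda spec: f"4090 ran {spec}"),
]
print(route(ComputeRequest(job_spec="finetune-small-llm", min_memory_gb=16), backends))
```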
2.2 Training and Inference
Computing networks are used primarily for two functions: training and inference. Demand for these networks comes from both Web 2.0 and Web 3.0 projects. In the Web 3.0 space, projects like Bittensor utilize computing resources for model fine-tuning. On the inference side, Web 3.0 projects emphasize the verifiability of the process. This emphasis has spawned verifiable inference as a market vertical, where projects are exploring how to integrate AI inference into smart contracts while maintaining decentralization principles.
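As a purely illustrative sketch of the verifiability problem (not how any particular project implements it), one naive approach is a commitment that binds an output to a hash of the model weights and the input, which a verifier holding the same model can recompute; real projects typically explore optimistic or zero-knowledge schemes instead of full re-execution.

```python
import hashlib
import json

def commit(model_weights: bytes, prompt: str, output: str) -> str:
    """Bind an inference output to a specific model and input via a hash commitment."""
    payload = json.dumps({
        "model": hashlib.sha256(model_weights).hexdigest(),
        "prompt": prompt,
        "output": output,
    }, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify(model_weights: bytes, prompt: str, claimed_output: str,
           claimed_commit: str, run_model) -> bool:
    """Re-execute the model and check both the output and the commitment.

    A smart contract would only need to compare digests; the heavy re-execution
    happens off-chain in this naive scheme.
    """
    recomputed_output = run_model(model_weights, prompt)
    return (recomputed_output == claimed_output
            and commit(model_weights, prompt, recomputed_output) == claimed_commit)

# Toy usage with a stand-in "model" (a deterministic function of its inputs).
toy_weights = b"\x01\x02\x03"
toy_model = lambda w, p: f"echo:{p}:{len(w)}"
out = toy_model(toy_weights, "hello")
c = commit(toy_weights, "hello", out)
print(verify(toy_weights, "hello", out, c, toy_model))  # True
```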
2.3 Intelligent Agent Platforms
Next is the category of intelligent agent platforms, and the landscape outlines the core problems that startups in this category need to address:
Agent interoperability, discovery, and communication: agents can discover and communicate with each other.
Agent cluster building and management: agents can form clusters and manage other agents.
Ownership and marketplaces for AI agents: providing ownership of, and markets for, AI agents.
These features emphasize the importance of flexible and modular systems that can seamlessly integrate into various blockchain and artificial intelligence applications. AI agents have the potential to fundamentally change the way we interact with the internet, and we believe agents will leverage infrastructure to support their operations. We envision AI agents relying on infrastructure in the following areas:
Using a distributed crawling network to access real-time web data.
Using DeFi channels for inter-agent payments.
Requiring economic deposits, not only as a basis for punishment in case of misconduct but also to increase agents' discoverability (deposits act as economic signals in the discovery process); see the sketch after this list.
Using consensus to decide which events should lead to slashing.
Open interoperability standards and agent frameworks to support building composable collectives.
Evaluating past performance based on immutable data history and selecting appropriate agent collectives in real-time.
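To illustrate the deposit-and-slashing idea from the list above, here is a minimal, hypothetical agent registry sketched in Python. The names, the flat slashing fraction, and the deposit-ranked discovery heuristic are assumptions made for illustration only, not any project's actual design.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class AgentRecord:
    owner: str
    deposit: float  # economic stake; doubles as a discovery signal

@dataclass
class AgentRegistry:
    """A toy registry where deposits back good behavior and aid discovery."""
    agents: Dict[str, AgentRecord] = field(default_factory=dict)

    def register(self, agent_id: str, owner: str, deposit: float) -> None:
        if deposit <= 0:
            raise ValueError("a positive deposit is required to register")
        self.agents[agent_id] = AgentRecord(owner=owner, deposit=deposit)

    def slash(self, agent_id: str, fraction: float) -> float:
        """Burn part of an agent's deposit once misconduct is agreed by consensus."""
        record = self.agents[agent_id]
        penalty = record.deposit * fraction
        record.deposit -= penalty
        return penalty

    def discover(self, top_n: int = 5) -> List[Tuple[str, AgentRecord]]:
        """Rank agents by deposit: a larger stake is a costlier, hence stronger, signal."""
        ranked = sorted(self.agents.items(), key=lambda kv: kv[1].deposit, reverse=True)
        return ranked[:top_n]

# Toy usage
registry = AgentRegistry()
registry.register("research-agent", owner="alice", deposit=100.0)
registry.register("trading-agent", owner="bob", deposit=250.0)
registry.slash("trading-agent", fraction=0.5)  # misconduct: half the stake is burned
print(registry.discover())
```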
Source: IOSG Ventures
2.4 Data Layer
In the fusion of AI x Web3, data is a core component. Data is a strategic asset in AI competition and, alongside computing resources, constitutes a critical input. This category is often overlooked, however, because much of the industry's attention is focused on the computing layer. In reality, Web3 primitives open up many interesting value propositions around data acquisition, primarily along the following two high-level directions:
Access to public internet data: This direction aims to build distributed web-crawler networks that can crawl the entire internet within days to obtain massive datasets, or access very specific internet data in real time. Crawling large datasets from the internet, however, places heavy demands on the network: at least hundreds of nodes are needed before meaningful work can begin. Fortunately, Grass, a distributed crawler-node network, has already attracted over 2 million nodes actively sharing internet bandwidth with the goal of crawling the entire internet. This demonstrates the huge potential of economic incentives to attract valuable resources.
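As a toy illustration of how crawl work can be partitioned across many bandwidth-sharing nodes (this is not Grass's actual protocol), URLs can be assigned to nodes deterministically, for example by hashing:

```python
import hashlib
from collections import defaultdict

def assign_node(url: str, num_nodes: int) -> int:
    """Deterministically map a URL to one of num_nodes crawler nodes."""
    digest = hashlib.sha256(url.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

def partition(urls, num_nodes):
    """Group a crawl frontier into per-node work queues."""
    queues = defaultdict(list)
    for url in urls:
        queues[assign_node(url, num_nodes)].append(url)
    return queues

# Toy usage: split a small crawl frontier across three nodes.
frontier = [f"https://example.com/page/{i}" for i in range(10)]
for node, work in sorted(partition(frontier, num_nodes=3).items()):
    print(node, work)
```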
While Grass levels the playing field for public data, there is still the challenge of putting latent data to use, namely access to proprietary datasets. A significant amount of data remains locked away due to its sensitive nature. Many startups are leveraging cryptographic tools that let AI developers build and fine-tune large language models on proprietary datasets while keeping the underlying sensitive information private.
Technologies such as federated learning, differential privacy, trusted execution environments, fully homomorphic encryption, and secure multi-party computation offer different levels of privacy protection and different trade-offs. Bagel's research article (https://blog.bagel.net/p/with-great-data-comes-great-responsibility-d67) provides an excellent overview of these technologies. They not only protect data privacy during machine learning but also enable comprehensive privacy-preserving AI solutions at the computing layer.
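As one concrete example of these trade-offs, differential privacy releases a statistic only after adding noise calibrated to how much any single record could change it. Below is a minimal sketch with illustrative parameters, not a production mechanism.

```python
import random

def dp_mean(values, lower, upper, epsilon):
    """Release a differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so one record can shift the mean
    by at most (upper - lower) / n; adding Laplace noise with scale
    sensitivity / epsilon makes this single query epsilon-differentially private.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # Laplace(0, scale) sample: the difference of two i.i.d. Exponential(1)
    # variables is Laplace(0, 1), which we then rescale.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_mean + noise

# Toy usage: releasing the mean of a "sensitive" dataset with epsilon = 1.0.
sensitive_ages = [42, 37, 55, 61, 48, 50, 39]
print(dp_mean(sensitive_ages, lower=0, upper=100, epsilon=1.0))
```

A smaller epsilon means more noise and stronger privacy; the other techniques listed above make different trade-offs between utility, trust assumptions, and computational cost.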
2.5 Data and Model Origins
Data and model origin technologies aim to establish processes that assure users they are interacting with the intended models and data, and to provide guarantees of authenticity and source. Watermarking is one example of a model-origin technology: signatures are embedded directly into the machine learning model, specifically into its weights, so that at inference time an output can be verified as coming from the expected model.
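Here is a toy illustration of the weight-watermarking idea, a simplified stand-in rather than any published scheme: embed a signature by nudging the weights along a secret, key-derived direction, then verify ownership by checking the correlation with that direction.

```python
import random

def keyed_vector(key: str, dim: int):
    """Derive a deterministic pseudo-random +/-1 direction from a secret key."""
    rng = random.Random(key)
    return [1.0 if rng.random() < 0.5 else -1.0 for _ in range(dim)]

def embed_watermark(weights, key, strength=0.02):
    """Nudge the weights along the secret direction to embed a signature."""
    direction = keyed_vector(key, len(weights))
    return [w + strength * d for w, d in zip(weights, direction)]

def verify_watermark(weights, key, threshold=0.01):
    """Check whether the weights correlate with the secret direction."""
    direction = keyed_vector(key, len(weights))
    correlation = sum(w * d for w, d in zip(weights, direction)) / len(weights)
    return correlation > threshold

# Toy usage: random "model weights" with and without the watermark.
rng = random.Random(0)
weights = [rng.gauss(0, 0.1) for _ in range(4096)]
marked = embed_watermark(weights, key="model-v1-secret")
print(verify_watermark(marked, key="model-v1-secret"))   # True: signature present
print(verify_watermark(weights, key="model-v1-secret"))  # False: no signature
```

Real schemes must additionally survive fine-tuning, pruning, and deliberate removal attempts, which is where most of the research effort goes.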
2.6 Applications
In terms of applications, the possibilities are endless. In the industry landscape above, we list some development cases that are particularly exciting with the advancement of AI technology in the Web 3.0 space. Since these use cases are mostly self-descriptive, we will not provide additional comments here. However, it is worth noting that the intersection of AI and Web 3.0 has the potential to reshape many verticals in the field, as these new primitives provide developers with more freedom to create innovative use cases and optimize existing ones.
Source: IOSG Ventures
Summary
The fusion of AI x Web3 brings a promising future full of innovation and potential. By leveraging the unique advantages of each technology, we can address a variety of challenges and open new technological paths. As this emerging industry takes shape, the synergy between AI and Web3 can drive progress and reshape our future digital experiences and the way we interact on the web.
The fusion of digital scarcity and digital richness, the mobilization of underutilized resources for computing efficiency, and the establishment of secure, privacy-preserving data practices will define the next era of technological evolution.
However, we must recognize that this industry is still in its early stages, and the current industry landscape may become outdated in a short period of time. The rapid pace of innovation means that today’s cutting-edge solutions may soon be replaced by new breakthroughs. Nevertheless, the fundamental concepts discussed – such as computing networks, agent platforms, and data protocols – highlight the tremendous potential of the convergence of artificial intelligence and Web 3.0.