AI and DePIN Intersection: Rise of Distributed GPU Networks and Industry Landscape Analysis
Since 2023, AI and DePIN have been two of the hottest trends in Web3, with the former reaching a market capitalization of roughly $30 billion and the latter about $23 billion. This article focuses on the intersection of the two and examines how protocols in this space are developing.
In the AI technology stack, DePIN networks provide utility to AI by supplying computing resources. The expansion of large technology companies has created a GPU shortage, leaving other developers unable to obtain enough GPUs for AI model computation. Developers are often pushed toward centralized cloud providers, but the inflexible, long-term contracts required for high-performance hardware make this inefficient.
DePIN offers a more flexible and cost-effective alternative, using token rewards to incentivize resource contributions that align with network goals. In AI, DePIN crowdsources GPU resources from suppliers ranging from individual owners to data centers, creating a unified pool for users who need hardware. These networks give developers who need compute power customizable, on-demand access while providing GPU owners with additional income.
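To make the crowdsourcing model concrete, here is a minimal, purely illustrative sketch of a DePIN-style matching loop between GPU offers and compute requests. All names, the reward formula, and the fee logic are assumptions for illustration; no specific protocol works exactly this way.

```python
from dataclasses import dataclass

# Illustrative sketch of a DePIN-style GPU marketplace: providers list idle
# hardware, a request is matched to the cheapest offer that meets the spec,
# and the provider earns a token incentive on top of the fee.
# The reward formula and all names here are assumptions, not a real protocol.

@dataclass
class GpuOffer:
    provider: str
    model: str            # e.g. "A100"
    price_per_hour: float

@dataclass
class ComputeRequest:
    user: str
    model: str
    hours: float

def match(offers: list[GpuOffer], request: ComputeRequest) -> GpuOffer | None:
    """Pick the cheapest offer that satisfies the requested GPU model."""
    candidates = [o for o in offers if o.model == request.model]
    return min(candidates, key=lambda o: o.price_per_hour, default=None)

def settle(offer: GpuOffer, request: ComputeRequest, reward_rate: float = 0.1):
    """Charge the user and credit the provider, plus an assumed token reward."""
    fee = offer.price_per_hour * request.hours
    return {"provider": offer.provider, "fee": fee, "tokens": fee * reward_rate}

offers = [GpuOffer("alice", "A100", 1.50), GpuOffer("bob", "A100", 1.37)]
req = ComputeRequest("dev1", "A100", hours=4)
best = match(offers, req)
if best:
    print(settle(best, req))  # bob wins at $1.37/h -> fee 5.48, tokens 0.548
```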
Overview of AI DePIN Network
Render is a pioneer among P2P networks providing GPU computing power. It initially focused on rendering graphics for content creation and later expanded its scope to AI computing tasks ranging from neural radiance fields (NeRF) to generative AI.
Akash positions itself as a "supercloud" alternative to traditional platforms, supporting storage, GPU, and CPU computing. Using developer-friendly tools such as container platforms and Kubernetes-managed compute nodes, it deploys software seamlessly across environments and can therefore run any cloud-native application.
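As a rough illustration of what running a cloud-native GPU workload on Kubernetes-managed nodes looks like, the sketch below uses the standard Kubernetes Python client to launch a GPU-backed container. This is a generic example with placeholder image, command, and namespace names, not Akash's own deployment flow (Akash deployments are typically described in its SDL manifest format).

```python
from kubernetes import client, config

# Generic Kubernetes example of launching a GPU container -- illustrative only.
# The image, command, and namespace are placeholder assumptions.
config.load_kube_config()  # reads the local kubeconfig

container = client.V1Container(
    name="gpu-inference",
    image="pytorch/pytorch:latest",  # placeholder image
    command=["python", "-c", "import torch; print(torch.cuda.is_available())"],
    resources=client.V1ResourceRequirements(
        limits={"nvidia.com/gpu": "1"}  # request one GPU from the node
    ),
)
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-demo"),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```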
io.net provides access to distributed GPU cloud clusters, which are specifically designed for AI and ML use cases. It aggregates GPUs from data centers, crypto miners, and other decentralized networks.
Gensyn offers GPU computing power focused on machine learning and deep learning. It claims to achieve a more efficient verification mechanism by combining concepts such as proof of learning, a graph-based pinpointing protocol, and incentive games built on staking and slashing of compute providers.
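The staking-and-slashing incentive game can be sketched in a few lines: a compute provider locks a stake, a verifier re-checks the work, and an incorrect result forfeits the stake to the verifier. This is a conceptual toy model, not Gensyn's actual proof-of-learning protocol; the stake size and reward split are assumed.

```python
# Toy model of a staking-and-slashing incentive game for compute providers.
# Conceptual only: the real protocol verifies work via proof-of-learning and
# on-chain arbitration, not a boolean flag.

STAKE = 100        # tokens a provider must lock to accept a job (assumed)
JOB_PAYMENT = 20   # payment for an honestly completed job (assumed)

def settle_job(provider_balance: float, verifier_balance: float,
               result_is_correct: bool) -> tuple[float, float]:
    """Return updated (provider, verifier) balances after verification."""
    if result_is_correct:
        # Honest work: the stake is returned and the provider earns the payment.
        return provider_balance + JOB_PAYMENT, verifier_balance
    # Fraud detected: the stake is slashed and awarded to the verifier.
    return provider_balance - STAKE, verifier_balance + STAKE

print(settle_job(0, 0, True))    # (20, 0)     honest work is profitable
print(settle_job(0, 0, False))   # (-100, 100) cheating costs more than it pays
```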
Aethir's network is built around enterprise-grade GPUs and focuses on compute-intensive fields, primarily artificial intelligence, machine learning, and cloud gaming. Containers in its network act as virtual endpoints for executing cloud-based applications, shifting workloads from local devices into the containers for a low-latency experience.
Phala Network serves as the execution layer for Web3 AI solutions. Its blockchain is a trustless cloud-computing solution designed to address privacy concerns through a Trusted Execution Environment (TEE). Its execution layer allows AI agents to be controlled by on-chain smart contracts.
Project Comparison
|                   | Render                    | Akash                             | io.net                     | Gensyn                | Aethir                                                       | Phala                             |
|-------------------|---------------------------|-----------------------------------|----------------------------|-----------------------|--------------------------------------------------------------|-----------------------------------|
| Hardware          | GPU & CPU                 | GPU & CPU                         | GPU & CPU                  | GPU                   | GPU                                                          | CPU                               |
| Business Focus    | Graphics Rendering and AI | Cloud Computing, Rendering and AI | AI                         | AI                    | Artificial Intelligence, Cloud Gaming and Telecommunications | On-chain AI Execution             |
| AI Task Type      | Inference                 | Both                              | Both                       | Training              | Training                                                     | Execution                         |
| Work Pricing      | Performance-Based Pricing | Reverse Auction                   | Market Pricing             | Market Pricing        | Bidding System                                               | Stake-Based Calculation           |
| Blockchain        | Solana                    | Cosmos                            | Solana                     | Gensyn                | Arbitrum                                                     | Polkadot                          |
| Data Privacy      | Encryption & Hashing      | mTLS Authentication               | Data Encryption            | Secure Mapping        | Encryption                                                   | TEE                               |
| Work Fees         | 0.5-5% per job            | 20% USDC, 4% AKT                  | 2% USDC, 0.25% reserve fee | Low fees              | 20% per session                                              | Proportional to the staked amount |
| Security          | Proof of Render           | Proof of Stake                    | Proof of Computation       | Proof of Stake        | Proof of Render Capacity                                     | Inherited from Relay Chain        |
| Completion Proof  | -                         | -                                 | Time Lock Proof            | Learning Proof        | Rendering Work Proof                                         | TEE Proof                         |
| Quality Assurance | Dispute                   | -                                 | -                          | Verifier and Reporter | Checker Node                                                 | Remote Attestation                |
| GPU Cluster       | No                        | Yes                               | Yes                        | Yes                   | Yes                                                          | No                                |
Importance
Availability of Clustering and Parallel Computing
Distributed computing frameworks have implemented GPU clustering, which provides more efficient training without compromising model accuracy while improving scalability. Training more complex AI models demands powerful computing capabilities, and this usually relies on distributed computing. Most of the key projects covered here have now integrated clusters for parallel computing.
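For reference, parallel computing across a GPU cluster typically takes the form of data-parallel training, as in this minimal PyTorch DistributedDataParallel sketch (launched with torchrun; the linear model and random data are placeholders standing in for a real workload):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # toy training loop on random data
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across GPUs here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run with, for example, `torchrun --nproc_per_node=4 train.py` to use four local GPUs; distributed GPU networks extend the same pattern across many machines.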
Data Privacy
Developing AI models requires large datasets, which may come from many sources and take different forms. Sensitive datasets, such as personal medical records and user financial data, risk being exposed to model providers. Robust data-privacy methods are therefore crucial for returning control of data to the parties who provide it.
Most of the projects covered use some form of data encryption to protect data privacy. io.net recently partnered with Mind Network to launch fully homomorphic encryption (FHE), which allows encrypted data to be processed without decrypting it first. Phala Network introduced a TEE, a secure area within the main processor of a connected device.
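To illustrate what "processing encrypted data without decrypting it" means, below is a toy additively homomorphic example based on the classic Paillier scheme. It uses tiny, insecure parameters and is far simpler than the fully homomorphic encryption offered through Mind Network, whose actual API is not shown here.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Illustration only:
# the parameters are tiny and insecure, and full FHE also supports
# multiplication on ciphertexts, which this scheme does not.
p, q = 293, 433                          # demo primes (never use in practice)
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)             # requires Python 3.9+

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)      # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:          # r must be invertible mod n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# The server multiplies ciphertexts without ever seeing the plaintexts,
# yet the product decrypts to the sum of the inputs.
c = (encrypt(17) * encrypt(25)) % n2
print(decrypt(c))    # 42
```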
Hardware Statistics
|                    | Render | Akash  | io.net | Gensyn            | Aethir            | Phala  |
|--------------------|--------|--------|--------|-------------------|-------------------|--------|
| Number of GPUs     | 5600   | 384    | 38177  | -                 | 40000+            | -      |
| Number of CPUs     | 114    | 14672  | 5433   | -                 | -                 | 30000+ |
| H100/A100 Quantity | -      | 157    | 2330   | -                 | 2000+             | -      |
| H100 Cost/Hour     | -      | $1.46  | $1.19  | -                 | -                 | -      |
| A100 Cost/Hour     | -      | $1.37  | $1.50  | $0.55 (estimated) | $0.33 (estimated) | -      |
Conclusion
The AI DePIN field is still relatively new and faces its own challenges. Even so, both the number of tasks executed on these decentralized GPU networks and the amount of hardware onboarded to them continue to grow significantly. The rising task volume highlights growing demand for alternatives to the hardware resources offered by Web2 cloud providers, while the surge in hardware providers underscores a supply that was previously underutilized.
Looking ahead, the trajectory of artificial intelligence points to a booming multi-trillion-dollar market. We believe these decentralized GPU networks will play a key role in providing developers with cost-effective computing alternatives. By continually bridging the gap between supply and demand, they will contribute significantly to the future landscape of AI and computing infrastructure.