The Case for Compute DePINs
Exploring the role compute DePINs can play in enabling decentralized GPU marketplaces, with comprehensive analysis and supplemental case studies provided.
Prelude
Compute is the New Oil
Compute Resources
GPUs
The Rise of AI
Understanding DePINs
What are DePINs?
How do DePINs work?
Compute DePINs
Fundamentals
Key Benefits of Compute DePINs
The State of Compute DePINs Today
Render
Io Net
Aethir
Nosana
Akash
Future Outlook - The Case for Compute DePINs
Key Takeaways
Compute resources have become increasingly sought-after with the onset of machine learning and now deep learning in generative AI development, both of which require numerous computationally-intensive workloads. However, due to major companies and governments mass-accumulating these resources, startups and independent developers face a shortage of GPUs in the market today, leading to excessive costs and/or lack of accessibility.
Compute DePINs enable decentralized marketplaces for compute resources such as GPUs, by allowing anyone in the world to provide their idle supply in exchange for a monetary reward. This aims to help underserved GPU consumers tap into a new stream of supply, obtaining the development resources needed for their workloads with reduced costs and overhead.
There are still a number of economic and technical challenges compute DePINs face in competing with traditional, centralized service providers today, some of which will solve themselves over time and some which will require new solutions and optimizations moving forward.
Compute Is The New Oil
Since the onset of the industrial revolution, technology has helped propel humanity forward at an unprecedented rate, with almost every facet of everyday life being influenced or entirely transformed. The computer ultimately emerged as the culmination of efforts across a collective of researchers, academics, and computer engineers. Originally designed to solve large-scale arithmetic tasks in support of advanced military operations, computers have evolved to become the backbone of modern life. As their impact on humanity continues to grow, the demand for these machines and the resources that power them grows as well, outpacing available supply. This in turn creates market dynamics in which a majority of developers and businesses are priced out of access to critical resources, leaving the development of machine learning and generative artificial intelligence, arguably the most transformative technologies of today, in the hands of a few well-funded players. Meanwhile, the abundant supply of idle compute resources presents a lucrative opportunity to help alleviate the imbalance between compute supply and demand, underscoring the need for sufficient coordination mechanisms between participants on both sides of the trade. As such, we believe decentralized systems enabled by blockchain technology and digital assets are critical to the proliferation of a broader, more democratic, and responsible development of generative artificial intelligence-based goods and services.
Compute Resources
Compute can be defined as the various activities, applications or workloads in which a computer emits a definitive output based on a given input. Ultimately it refers to the computation and processing power of a computer, which is the core utility of these machines powering much of the modern world today, generating up to $1.1T in revenue over the past year alone.
Compute resources refer to the various hardware and software components which enable computation and processing. These components have become increasingly important as the applications and features they enable grow in number and become ever more present in people’s everyday lives. This has led to a race among national powers and businesses to accumulate as many of these resources as possible, both as a competitive advantage and a means of survival. This is reflected in the market performance of companies providing these resources (e.g. Nvidia, whose market capitalization has grown over 3,000% in the past 5 years).
GPUs
Graphics Processing Units, or GPUs, are specialized hardware components which have become one of the most significant resources in modern high-performance computing. At their core, GPUs are specialized electronic circuits that employ parallel processing to accelerate computer graphics workloads. Originally serving the gaming and personal computer industries, GPUs have since evolved to serve many of the nascent technologies shaping the future of the world (e.g. mainframes and personal computers, mobile devices, cloud computing, the Internet of Things). However, demand for these resources is particularly exacerbated by the rise of machine learning and artificial intelligence - GPUs accelerate ML and AI operations by performing calculations in parallel, thereby enhancing the processing power and capabilities of the resulting technologies.
The Rise of AI
At its core, AI is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. An AI model functions as a neural network trained on large amounts of data. The model requires processing power to identify and learn the relationships within this data, which it then references when producing an output for a given input.
AI development and production is not new; in 1958 Frank Rosenblatt built the Mark I Perceptron, the first computer based on a neural network that "learned" through trial and error. Further, a great deal of the academic research that founded the development of AI as we know it today was published in the late 90s and early 2000s, and the industry has continued to develop since.
Beyond research and development efforts, ‘narrow’ AI models have been powering various robust applications in use today. Examples include social media algorithms, voice assistants such as Apple's Siri and Amazon's Alexa, custom-tailored product recommendations, and more. Notably, the rise of deep learning has transformed the development of generative artificial intelligence. Deep learning algorithms utilize larger, or ‘deeper’, neural networks than classical machine learning applications do, functioning as more scalable alternatives with more extensive performance capabilities. Generative AI models ‘encode a simplified representation of their training data and reference it to emit a new output that is similar, but not identical, to the original data.’
Deep learning has enabled developers to extend generative AI models to images, speech, and other complex data types, and milestone applications like ChatGPT, which has already set records for the fastest-growing userbase in modern times, are still just early iterations of what is possible with generative AI and deep learning.
With this in mind, it should be of little surprise that generative AI development entails multiple computationally-intensive workloads, which require exorbitant amounts of processing power and compute.
According to Triple Whammy of Deep Learning Application Demand, the development of AI applications is constrained by several key workloads:
Training - Models must process and analyze large datasets in order to learn how to respond to given inputs.
Tuning - Models undergo a series of repetitive processes, in which various hyperparameters are adjusted and optimized in order to refine performance and quality.
Simulations - Prior to deployment, certain models, e.g. reinforcement learning algorithms, undergo a series of simulations for testing purposes.
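The multiplicative effect of these workloads on compute demand can be illustrated with a back-of-the-envelope sketch. All figures and names below are hypothetical, chosen only to show how tuning and simulation compound the cost of a single training run:

```python
def total_gpu_hours(train_hours: float, tuning_trials: int,
                    sim_runs: int, sim_hours: float) -> float:
    """Rough model of how tuning and simulation multiply compute demand.

    Each tuning trial retrains the model (or a large part of it), and
    each simulation run adds further GPU time before deployment.
    All figures are illustrative assumptions, not measurements.
    """
    training = train_hours                # one full training run
    tuning = tuning_trials * train_hours  # repeated runs over hyperparameters
    simulations = sim_runs * sim_hours    # pre-deployment testing
    return training + tuning + simulations

# A 100-hour training job with 20 tuning trials and 50 ten-hour simulations
# needs 2,600 GPU-hours in total - 26x the single training run.
print(total_gpu_hours(100, 20, 50, 10))  # → 2600
```

Even under these toy numbers, the pre-deployment workloads dwarf the headline training cost, which is why access to GPUs, not just one-off training capacity, becomes the binding constraint.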
Compute Crunch: Demand > Supply
Over the past few decades, a myriad of technological advancements have fueled an unprecedented surge in demand for compute and processing power. As a result, the demand for compute resources such as GPUs far exceeds the available supply today, creating a bottleneck in AI development that will only continue to grow without efficient solutions.
The broader constraints in supply are further compounded by a large number of companies actively purchasing GPUs beyond their actual needs, both as a competitive advantage and a means of survival in the modern global economy. Compute providers often employ contract structures which require long-term capital commitments, granting customers supply far beyond their actual demand requirements.
Research by Epoch shows the overall number of compute-intensive AI model releases is rapidly growing, signaling that demand for the resources powering these technologies will continue to grow at a rapid pace.
As the complexity of AI models continues to grow, so will the computational and processing power demands of application developers. In turn, the performance of GPUs, and their availability, will play an increasingly significant role. This is already coming to fruition in the surging demand for high-end GPUs, such as those produced by Nvidia, which has hailed GPUs as the ‘rare earth metals’ or ‘gold’ of the AI industry.
This rapid commercialization of AI risks ceding control to a small group of tech giants, similar to the social media industry today, raising concerns around the ethical foundation of these models. A well-known example of this is the recent controversy with Google Gemini. While its many odd replies to various prompts did not pose any actual danger at the time, the incident exemplifies the inherent risk of having a small subset of companies dictate and dominate AI development.
Tech startups today face growing challenges in accessing compute resources to power their AI models. These applications perform numerous compute-intensive processes before a model may even be deployed. Accumulating massive amounts of GPUs is a largely unsustainable endeavor for smaller businesses and while traditional cloud computing services like AWS or Google Cloud offer a seamless and convenient developer experience, their limited capacity ultimately leads to high costs that price out many developers. Ultimately, not everyone can propose to raise $7T for their hardware costs either.
So what gives?
Nvidia has previously estimated that over 40K companies use GPUs for AI and accelerated computing, with a global community of over 4M developers. Looking ahead, the global AI market is projected to grow from $515 billion in 2023 to $2.74 trillion by 2032, with a CAGR of 20.4%. Concurrently, the GPU market is expected to reach $400 billion by 2032, growing at a CAGR of 25%.
However, a growing imbalance between the supply and demand of compute resources in the wake of the AI revolution stands to create a rather dystopian future, wherein a small concentrated set of well-funded megacorps dictate much of the development of transformative technologies. As such, we believe all roads lead to decentralized alternative solutions to help bridge the gap between AI developers’ demands and the available resources at their disposal.
The Role of DePINs
What are DePINs?
DePIN is a colloquial term coined by the Messari research team, standing for Decentralized Physical Infrastructure Networks. To break this down: decentralization comes from the lack of a single entity extracting rent and restricting access; physical infrastructure refers to the ‘real-life’ physical resources which are utilized; and a network refers to a set of actors working together in a coordinated manner to achieve a predetermined goal or set of goals. Today, the total market cap of DePINs is roughly $28.3B.
At their core, DePINs are global networks of nodes which connect physical infrastructure resources with blockchains to enable decentralized marketplaces connecting buyers and suppliers of said resource, wherein anyone can become a supplier and be compensated for their services and the value they add to the network. In this case, the central intermediary, who restricts access to the network through various legal and regulatory means and service fees, is replaced by a decentralized protocol made of smart contracts and code, governed by its respective token holders.
DePINs are valuable because they offer decentralized, accessible, low-cost, and scalable alternatives to traditional resource networks and service providers. They enable decentralized marketplaces that aim to serve a particular end-game: one where the costs of goods and services are determined by market dynamics that anyone can participate in at any time, naturally driving down unit costs over time through a greater number of suppliers and minimized profit margins.
Using blockchains enables DePINs to build cryptoeconomic incentive systems which help ensure that network participants are appropriately compensated for their services, turning key value providers into stakeholders. However, it is important to note that network effects, which are realized by transforming small individual networks into larger, productive systems, are critical in order to realize many of the benefits of DePINs. Furthermore, while token rewards have proven to be compelling bootstrapping mechanisms for networks, building sustainable incentives to aid in user retention and long-term adoption has proven to remain a critical challenge in the broader DePIN landscape.
How Do DePINs Work?
In order to better understand the value DePINs provide in enabling decentralized compute marketplaces, it’s important to recognize the different structural components involved and how they work together to form decentralized resource networks. Let’s consider the structure of a DePIN and the participants involved.
The Protocol
A decentralized protocol, i.e. a set of smart contracts built on top of an underlying ‘base layer’ blockchain network, serves to facilitate trustless interactions among network participants. In an ideal world, the protocol is governed by a diverse set of stakeholders who are actively committed to the long-term success of the network. These stakeholders then use their share of the protocol’s tokens to vote on proposed changes and developments to the DePIN. Given that successfully coordinating a distributed network is a great challenge in itself, the core team will often reserve the power to implement these changes initially, before transitioning that power to a decentralized autonomous organization (DAO).
Network Participants
The end-users of resource networks are their most valuable participants, and they can be categorized by their function.
Suppliers: Individuals or entities providing a resource to the network, in exchange for a monetary reward paid out in the DePIN’s native token. Suppliers ‘connect’ to the network through the blockchain-native protocol, which may enforce a whitelisted onboarding process or a permissionless one. By receiving tokens, suppliers receive a stake in the network, akin to stakeholders in an equity-ownership context, which enables them to vote on various proposals and developments of the network, such as ones they believe will help drive demand and value to the network, creating a higher token price over time. Of course, suppliers receiving tokens are just as likely to utilize DePINs as a form of passive income and sell their tokens upon receiving them.
Consumers: These are individuals or entities actively seeking the resource provided by the DePIN, such as AI startups seeking GPUs, representing the demand side of the economic equation. Consumers are compelled to use a DePIN when it offers tangible advantages over traditional alternatives, e.g. lower costs and overhead requirements, and they therefore represent the network’s organic demand. DePINs often require consumers to pay for resources in the native token as a means of generating value and maintaining a steady cash flow.
Resources
DePINs can serve different markets and employ different business models to distribute resources. Blockworks provides a great framework for this: custom hardware DePINs, which provide specialized proprietary hardware to suppliers to be distributed, and commodity hardware DePINs, which enable the distribution of existing, idle resources including but not limited to compute, storage, and bandwidth.
Economics
In an ideally-functioning DePIN, value accrues from the revenue generated by consumers paying for suppliers’ resources. Sustained demand for the network implies sustained demand for the native token, which aligns with the economic incentives of suppliers and token holders. Generating sustainable organic demand at an early stage poses a challenge for most startups, which is why DePINs provide inflationary token incentives to attract early suppliers and bootstrap the network’s supply, which in turn attracts demand and therefore more organic supply. This is fairly similar to VCs subsidizing Uber riders’ costs during the firm’s initial stages as a means of bootstrapping an initial customer base to further attract drivers and bolster its network effects.
It’s important for DePINs to manage token incentives as strategically as possible, as they play a critical role in the overall success of the network. When demand and network revenue rise, token emissions should decrease. Conversely, when demand and revenue fall, token emissions should be utilized to incentivize supply again.
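This emission logic can be sketched as a simple feedback rule. The function below is a hypothetical illustration of the principle, not any specific protocol's tokenomics; the names, the revenue target, and the `sensitivity` dampening parameter are all assumptions:

```python
def adjust_emissions(base_emission: float, target_revenue: float,
                     current_revenue: float, sensitivity: float = 0.5) -> float:
    """Scale token emissions inversely with network revenue.

    When revenue exceeds the target, organic demand is funding suppliers,
    so inflationary rewards taper off; when revenue falls short, emissions
    rise to keep suppliers incentivized. Purely illustrative - real DePINs
    encode this in protocol-specific emission schedules.
    """
    if target_revenue <= 0:
        return base_emission
    # Ratio > 1 means demand is strong; ratio < 1 means it is weak.
    ratio = current_revenue / target_revenue
    # Dampen the response so emissions change gradually, not abruptly.
    multiplier = max(0.0, 1.0 + sensitivity * (1.0 - ratio))
    return base_emission * multiplier

# Strong demand -> fewer emissions; weak demand -> more emissions.
print(adjust_emissions(1000, 100, 150))  # → 750.0
print(adjust_emissions(1000, 100, 50))   # → 1250.0
```

The key design choice is the damping: emissions respond to revenue but never swing to extremes, which is one way a protocol could avoid over-diluting token holders during demand spikes or starving suppliers during lulls.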
To further illustrate what a successful DePIN network would look like, consider the ‘DePIN Flywheel’, a positive reflexive cycle for bootstrapping DePINs. In summary:
The DePIN distributes inflationary token rewards to incentivize suppliers to provide resources to the network and establish a base level of supply available for consumption.
Assuming the number of suppliers begins to grow, competitive dynamics begin to form in the network, raising the overall quality of the goods and services offered to the point where the network provides a better offering than existing market solutions, gaining a competitive advantage in turn. This implies the decentralized system must surpass traditional, centralized service providers, which is no easy task.
Organic demand for the DePIN begins to form, providing legitimate cash-flows to suppliers. This poses a compelling opportunity for investors and suppliers alike, which continues to drive network demand and consequently token price.
Growth in token prices increases suppliers’ earnings, attracting more suppliers and resuming the flywheel.
This framework provides a compelling growth strategy, though it's important to note it is largely theoretical, and assumes the network is providing a resource at a competitive offering that continues to be relevant over a prolonged period of time.
Compute DePINs
Decentralized compute markets fall under the scope of a broader movement referred to as ‘The Sharing Economy’, a peer-to-peer economic system built on consumers sharing goods and services directly with other consumers through online platforms. This model was pioneered by the likes of eBay, is dominated today by companies like Airbnb and Uber, and is ultimately primed for disruption as the next generation of transformative technologies takes global markets by storm. Worth $150B in 2023, the sharing economy has been forecast to grow to nearly $800B in value worldwide by 2031, demonstrating a broader trend in consumer behavior which we believe DePINs will benefit from and play a critical role in enabling.
The Fundamentals
Compute DePINs are peer-to-peer networks which facilitate the distribution of computing resources by connecting suppliers and buyers through decentralized marketplaces. A key distinction of these networks is that they specialize in commodity hardware resources, which are already at the disposal of many people today. As we discussed, the emergence of deep learning and generative AI has created a surge in demand for processing power due to their resource-intensive workloads, creating a bottleneck in access to critical resources for AI development. Put simply, decentralized compute marketplaces aim to ease these bottlenecks by creating a new stream of supply - one that spans the globe and that anyone can participate in.
In a compute DePIN any individual or entity can lend their idle resources at a moment’s notice and be compensated for their services appropriately. Meanwhile, any individual or entity can access the necessary resources from a global permissionless network, at lower costs and with greater flexibility than existing market offerings. As such, we can frame the participants involved in compute DePINs through a simple economic framework:
Supply Side: Individuals or entities who own compute resources and are willing to lend or sell them in exchange for compensation.
Demand Side: Individuals or entities who need compute and are willing to pay a price for it.
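This two-sided framework can be illustrated with a minimal order-matching sketch. All names and prices below are hypothetical, and real compute DePINs layer reputation, hardware specs, and richer pricing on top of anything this simple:

```python
def match_orders(supply, demand):
    """Greedily match compute suppliers with consumers.

    supply: list of (supplier_id, ask_price_per_hour) for idle GPUs.
    demand: list of (consumer_id, bid_price_per_hour) for workloads.
    A trade clears whenever a consumer's bid meets a supplier's ask;
    cheapest asks are filled first, highest bids are served first.
    Illustrative only - not any specific protocol's matching engine.
    """
    asks = sorted(supply, key=lambda s: s[1])                 # cheapest first
    bids = sorted(demand, key=lambda d: d[1], reverse=True)   # highest first
    matches = []
    while asks and bids and bids[0][1] >= asks[0][1]:
        supplier, ask = asks.pop(0)
        consumer, bid = bids.pop(0)
        matches.append((consumer, supplier, ask))  # trade at supplier's ask
    return matches

supply = [("gpu_a", 0.50), ("gpu_b", 0.80), ("gpu_c", 2.00)]
demand = [("ai_startup", 1.00), ("researcher", 0.60)]
# The startup clears with the cheapest GPU; the researcher's bid
# is below the remaining asks, so it goes unfilled.
print(match_orders(supply, demand))
```

The point of the sketch is the permissionless meeting of the two sides: any GPU owner can post an ask, any developer can post a bid, and prices emerge from the overlap rather than from a central provider's rate card.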
Key Benefits of Compute DePINs
Compute DePINs offer a number of benefits that make them compelling alternatives to centralized service providers and marketplaces. For starters, enabling permissionless, cross-border market participation unlocks a new stream of supply, increasing the amount of key resources available for compute-intensive workloads. Compute DePINs specialize in resources that rely on hardware most people already own - anyone who owns a gaming PC already has a GPU they can rent out. This broadens the scope of developers and teams able to participate in building the next generation of goods and services across a wider range of markets and industries, thereby benefiting a greater number of people across the world.
Looking further, the blockchain infrastructure underpinning DePINs provides the highly efficient and scalable settlement rails for micropayments needed to facilitate peer-to-peer transactions. Crypto-native financial assets (tokens) provide a shared unit of value that participants on the demand side use to pay suppliers, aligning economic incentives through distribution mechanisms suited to today’s increasingly globalized economy. To reference our construct of the DePIN flywheel from earlier, strategically managing economic incentives can be highly beneficial to growing a DePIN’s network effects (on both the supply and demand side), which in turn increases competition among suppliers. This dynamic drives down unit costs while raising the quality of service offerings, creating a sustainable competitive advantage for the DePIN, which suppliers can benefit from as token holders and key value providers.
DePINs function similarly to cloud computing service providers in the flexible user experience they aim to deliver, where resources can be accessed and paid for on an on-demand basis. For reference, Grand View Research forecasts the global cloud computing market to grow at a CAGR of 21.2% to reach over $2.4T by 2030, demonstrating the viability of such business models in light of the predicted growth in future demand for computing resources. Modern cloud computing platforms utilize a central server to handle all communication between client devices and servers, creating a single point of failure in their operations. Building on top of blockchains, however, allows DePINs to offer stronger censorship-resistance and resilience than traditional service providers. While an attack on a single organization or entity (i.e. a central cloud service provider) jeopardizes the entire underlying resource network, DePINs are structured to be resistant to such incidents through their distributed nature. For starters, blockchains themselves are globally distributed networks of specialized nodes built to be resilient to centralized network authority. In addition, compute DePINs allow for permissionless network participation, bypassing legal and regulatory barriers. And depending on the nature of a token’s distribution, DePINs can employ fair voting processes for proposed changes and developments to the protocol, eliminating the possibility of a single entity suddenly shutting down the entire network.
The State of Compute DePINs Today
Render Network
Render Network is a compute DePIN connecting buyers and sellers of GPUs through a decentralized compute marketplace in which transactions are facilitated through its native token. There are two key parties involved in Render’s GPU marketplace - Creators, who seek to access processing power, and Node Operators, who rent out idle GPUs to Creators in exchange for compensation in native Render tokens. Node Operators are ranked on a reputation-based system, and Creators can select GPUs from a multi-tiered pricing system. The Proof-of-Render (POR) consensus algorithm coordinates operations: a Node Operator commits their computing resources (GPUs) to process a task, i.e. a graphics rendering job. Upon delivery of the job, the POR algorithm updates the Node Operator’s status, including changes to their reputation score based on the quality of the work delivered. Render’s blockchain infrastructure facilitates payments for jobs, providing transparent and efficient settlement rails for suppliers and buyers to transact through the network token.
Originally conceived in 2009 by Jules Urbach, the network went live in September 2020 on Ethereum (RNDR) before migrating to Solana roughly three years later (RENDER) for improved network performance and lower costs of operations.
At the time of writing, the Render Network has processed up to 33M jobs (as frames rendered) and has grown to 5,600 total nodes since inception. Just under 60K RENDER has been burned, a process which takes place during the distribution of work credits to node operators.
IO Net
Io Net is launching a decentralized GPU network built on top of Solana, which will serve as a coordination layer between a vast supply of idle compute resources and a growing number of individuals and entities in need of the processing power these resources provide. Io Net’s unique selling proposition is that rather than directly competing with other DePINs on the market, it aggregates GPUs from a variety of sources - including data centers, miners, and other DePINs such as Render Network and Filecoin - while utilizing the Internet-of-GPUs (IoG), a proprietary DePIN, to coordinate operations and align incentives among market participants. Io Net clients can customize clusters for their workloads on the IO Cloud by selecting processor type, location, communication speeds, compliance, and duration of services. Conversely, anyone who owns a supported GPU model (12 GB RAM, 256 GB SSD) can participate as an IO Worker by lending out their idle compute to the network. While payments for services are currently settled in fiat and USDC, the network will soon support payments in the native $IO token as well. Prices paid for resources are algorithmically determined by their supply and demand, as well as various GPU specs and configurations. Io Net’s end-goal is to become the go-to GPU marketplace by offering lower costs and better quality of service than modern cloud service providers.
The multi-layered IO architecture can be mapped out as follows:
UI Layer - Composed of the public website, customers area, and Workers area.
Security Layer - This layer is composed of a Firewall for network protection, an Authentication Service for user validation, and a Logging Service for tracking activities.
API Layer - This layer functions as a communication layer and is composed of a public API for the website, private APIs for Workers, and internal APIs for cluster management, analytics, and monitoring and reporting.
Backend Layer - The backend layer manages Workers, Cluster/GPU operations, customer interactions, billing and usage monitoring, analytics, and Autoscaling.
Database Layer - This layer is the system’s data repository, using Main storage for structured data and Caching for temporary data that can be frequently accessed.
Message Broker and Tasks Layer - This layer facilitates asynchronous communications and task management.
Infrastructure Layer - This layer houses the GPU Pool, orchestration tools, and manages task deployment.
Current Stats / Roadmap
At the time of writing:
Total Network Earnings - $1.08m
Total Compute Hours Earned - 837.6k hours
Total Cluster-Ready GPUs - 20.4K
Total Cluster-Ready CPUs - 5.6k
Total On-chain Transactions - 1.67m
Total Inferences - 335.7k
Total Clusters Created - 15.1k
Data sourced from the Io Net explorer
Aethir
Aethir is a cloud computing DePIN which facilitates the sharing of high-performance computational resources for compute-intensive domains and applications. It utilizes resource pooling to enable global GPU distribution at significantly reduced costs, and decentralized ownership through distributed resource possession. Aethir was designed with a distributed GPU framework specifically tailored for high-performance workloads in industries like gaming and AI model training and inference. By unifying GPU clusters into a single network, Aethir’s design aims to increase cluster sizes, thereby improving the overall performance and reliability of the services offered on its network.
The Aethir Network is a decentralized economy composed of miners, developers, users, token holders, and the Aethir DAO. Three key roles are involved in ensuring successful network operations - the Container, the Checker, and the Indexer. Containers function as the powerhouses of the network, working as specialized nodes which fulfill critical operations in maintaining the liveness of the network, including validating transactions and rendering digital content in real time. Checkers function as quality assurance workers, continuously monitoring the performance and service quality of the Containers to ensure reliable and efficient operations suitable for the requirements of GPU consumers. Indexers function as matchmakers, connecting users to the best available Container for their needs. Underpinning this entire structure is the Arbitrum Layer 2 blockchain, which provides a decentralized settlement layer to facilitate the payment of goods and services on the Aethir network in the native $ATH token.
Proof of Rendering
Nodes in the Aethir network serve two key functions - Proof of Rendering Capacity, where a group of these workers is randomly chosen to validate transactions every 15 minutes, and Proof of Rendering Work, which closely monitors network performance to ensure that users get the best possible service, adjusting resources based on their demand and geographic location. Mining rewards are distributed to participants running nodes on the Aethir network, for the value they provide in the compute resources they lend out, in the native $ATH token.
Nosana
Nosana is a decentralized GPU network built on top of Solana. Nosana allows anyone to contribute idle compute resources and earn rewards in the form of the $NOS token for doing so. The DePIN facilitates the distribution of affordable and efficient GPUs, which can be used for running complex AI workloads without the overhead of traditional cloud solutions. Anyone can run a Nosana node by lending out idle GPUs, earning token rewards proportional to the GPU power they provide to the network.
The network connects two parties involved in the distribution of computing resources: users seeking to access compute and node operators who provide the compute. Important protocol decisions and upgrades are voted on by NOS token holders and governed by the Nosana DAO.
Nosana has laid out an extensive roadmap for its future plans - Galactica (v1.0 - H1/H2 2024) will launch the main grid, release the CLI and SDK, and focus on network scaling with a Container Node for consumer GPUs. Triangulum (v1.X - H2 2024) will integrate major machine learning protocols and connectors for PyTorch, HuggingFace, and TensorFlow. Whirlpool (v1.X - H1 2025) will expand support to diverse GPUs from AMD, Intel, and Apple Silicon. Sombrero (v1.X - H2 2025) will add support for medium and large businesses, fiat currency ramping, billing, and team functionalities.
Akash
The Akash network is an open-source Proof-of-Stake network built on top of the Cosmos SDK, which enables a decentralized cloud compute marketplace that is permissionless to join and contribute to. The $AKT token is used to secure the network, facilitate payments for resources, and coordinate economically aligned behavior among network participants. The Akash network consists of several key components:
The Blockchain Layer, which provides consensus using Tendermint Core and the Cosmos SDK.
The Application Layer, which manages deployments and the allocation of resources.
The Provider Layer, which manages resources, bids, and user application deployments.
The User Layer, which enables users to interact with the Akash network, manage resources, and monitor application status using their CLI, Console, and Dashboard.
Initially focused on storage and CPU leasing services, the network has since expanded into renting and distributing GPUs through its AkashML platform, a response to the growing demand for GPUs driven by the processing requirements of AI training and inference workloads. AkashML utilizes a 'reverse auction' system: customers, known as Tenants, submit the price they are willing to pay for GPUs, and compute suppliers, known as Providers, compete to fill the order.
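The reverse-auction mechanic can be sketched in a few lines. This is a simplified model, not Akash's actual matching logic; the names (`Bid`, `run_reverse_auction`) are hypothetical, and real deployments involve richer bid attributes (hardware specs, region, reputation) beyond price alone:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bid:
    provider: str
    price: float  # offered price per GPU-hour

def run_reverse_auction(tenant_max_price, bids):
    """The Tenant states the most they will pay; Providers compete
    downward and the lowest qualifying bid wins. Returns None when
    no Provider beats the Tenant's price (illustrative sketch)."""
    qualifying = [b for b in bids if b.price <= tenant_max_price]
    return min(qualifying, key=lambda b: b.price, default=None)

bids = [Bid("provider-a", 1.20), Bid("provider-b", 0.85), Bid("provider-c", 0.95)]
winner = run_reverse_auction(tenant_max_price=1.00, bids=bids)
print(winner)  # Bid(provider='provider-b', price=0.85)
```

The design choice worth noting is that price discovery runs in the buyer's favor: competition pushes Providers toward their marginal cost, which is how the marketplace undercuts centralized cloud pricing.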
At the time of writing, the Akash blockchain has processed over 12.9 million total transactions, over $535k has been spent to access computing resources, and over 189k unique deployments have been leased out.
Honorable Mentions
The compute DePIN sector is still developing, with many teams competing to bring innovative and efficient solutions to market. Additional examples warranting further research: Hyperbolic is building a collaborative open-access platform for resource pooling for AI development, Exabits is building a distributed computing power network underpinned by computing miners, and Shaga is building a network that allows PC lending and monetization for server-side gaming on Solana.
Important Considerations & Future Outlook
Now that we’ve gone through the fundamentals of compute DePINs and reviewed several supplemental case studies in action today, it's important to consider the implications of these decentralized networks, including the good and the bad.
Challenges
Building out distributed networks at scale often requires tradeoffs between performance, security, and resiliency. For instance, training AI models on a network of globally distributed commodity hardware can be far less cost-effective and time-efficient than doing so on co-located, high-performance clusters. As we have alluded to previously, AI models and their workloads are becoming increasingly complex, requiring high-performance GPUs rather than commodity ones.
This is why large corporations hoard high-performance GPUs en masse, and it poses an inherent challenge to compute DePINs that aim to address the GPU shortage by establishing a permissionless market where anyone can lend idle supply (see this tweet for more on challenges with decentralized AI protocols). Protocols can address this in two key ways: by establishing baseline requirements for GPU providers looking to contribute to the network, and by pooling contributed compute resources into a greater aggregate whole. Nonetheless, this model is inherently difficult to build relative to centralized service providers, who can allocate more capital to strike direct deals with hardware manufacturers such as Nvidia. DePINs should take this into consideration moving forward: if a decentralized protocol holds a large enough treasury, its DAO can vote to allocate a portion of funds toward purchasing high-performance GPUs, which can be managed in a decentralized way and lent out at a higher rate than commodity GPUs.
Another challenge specific to compute DePINs is managing the right level of resource utilization. In their early stages, most compute DePINs face a lack of structural demand, much as many startups do today. More generally, DePINs face the challenge of bootstrapping enough supply early on to reach minimum viable product quality; without sufficient supply, the network can neither generate sustainable demand nor serve its customers during periods of peak demand. On the other side of this equation is the concern of excess supply: beyond a certain threshold, additional supply only helps when network utilization is at or near full capacity. Otherwise, the DePIN risks overpaying for supply, leading to underutilized resources and falling supplier earnings unless the protocol raises token emissions to keep providers around.
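The utilization-emissions tradeoff can be made concrete with a back-of-the-envelope model. This is our own illustrative framing, not any protocol's actual emissions schedule, and the names (`required_emissions`, `target_earnings`) are hypothetical:

```python
def required_emissions(target_earnings, utilization, rate_per_hour, hours):
    """Token emissions (in dollar terms) needed to top a supplier up to a
    target income when organic demand only fills `utilization` of their
    capacity. A rough sketch of the subsidy problem, not a real schedule."""
    organic_revenue = utilization * rate_per_hour * hours
    return max(0.0, target_earnings - organic_revenue)

# A supplier expecting $1,000 over 1,000 GPU-hours at $1/hour:
print(required_emissions(1000.0, 1.0, 1.0, 1000))  # 0.0  -> full utilization, no subsidy
print(required_emissions(1000.0, 0.4, 1.0, 1000))  # 600.0 -> protocol covers the 60% shortfall
```

The takeaway matches the text: every point of idle capacity translates directly into emissions the protocol must pay to retain suppliers, which is why oversupplying ahead of demand is expensive.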
In the same way a mobile communications network isn’t useful without wide geographic coverage, a DePIN isn’t useful if it must pay people to supply resources over a prolonged period of time. While centralized service providers can predict demand for resources and manage supply efficiently, compute DePINs lack a central authority to manage this utilization. Therefore, it is particularly important for DePINs to establish resource utilization as strategically as possible.
A bigger-picture concern for decentralized GPU marketplaces in particular is that the GPU shortage may be coming to an end. Mark Zuckerberg recently stated in an interview that he believes energy, not compute, will be the next bottleneck, as businesses will scramble to build out data centers en masse rather than hoarding compute resources as they do now. This implies potentially lower GPU costs as demand eases, but it also raises the question of how AI startups will compete with conglomerates on the performance and quality of their goods and services if proprietary data centers raise the overall standard of AI model performance to unprecedented levels.
The Case for Compute DePINs
To reiterate, there is a growing disparity between the processing and compute demands of increasingly complex AI models and the supply of high-performance GPUs and other computing resources available to meet them.
Compute DePINs stand to be innovative disruptors in the field of compute marketplaces, which are dominated by major hardware manufacturers and cloud computing service providers today, on the basis of several key capabilities:
1) Providing lower costs of goods and services.
2) Offering stronger censorship-resistance and network resiliency guarantees.
3) Benefiting from potential regulatory guidelines for AI which require that AI models are as open as possible to fine-tuning and training, and can be easily accessed by anyone anywhere.
The percentage of US households with computers and internet access has grown exponentially, approaching 100%, and it has also grown significantly across many parts of the world. This suggests a potential abundance of compute resource providers (GPU owners) willing to loan out idle supply, given sufficient monetary incentive and a seamless transactional process with minimal barriers to entry. This is, of course, a very rough proxy, but it speaks to the fact that the foundation for building out a sustainable sharing economy of compute resources may already be in place.
Thinking beyond AI, future demand for compute will come from many other industries as well, such as quantum computing. The quantum computing market size is projected to grow from $928.8 million in 2023 to $6,528.8 million by 2030, at a CAGR of 32.1%. The production of this industry will require different kinds of resources but it will be interesting to see if any quantum-computing DePINs launch and what those would look like.
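As a sanity check on the cited projection, the implied compound annual growth rate can be recomputed from the endpoint figures alone:

```python
# Quantum computing market: $928.8M (2023) -> $6,528.8M (2030), per the cited report.
start, end, years = 928.8, 6528.8, 7

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~32.1%, matching the reported figure
```

The recomputed rate agrees with the 32.1% CAGR stated in the source, so the two figures are internally consistent.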
“A strong ecosystem of open models running on consumer hardware are an important hedge to protect against a future where value captured by AI is hyper-concentrated and most human thought becomes read and mediated by a few central servers controlled by a few people. Such models are also much lower in terms of doom risk than both corporate megalomania and militaries.” - Vitalik Buterin
Major enterprises are not the target audience for DePINs, nor are they likely to be. Compute DePINs bring back the individual developer, the scrappy builder, and the startup with minimal capital and resources. They allow idle supply to be transformed into innovative ideas and solutions enabled by a greater abundance of compute. AI will no doubt change the lives of billions of people; rather than fearing it will replace everyone's jobs, we should embrace the idea that AI can empower individuals, self-entrepreneurs, startups, and the broader general public.
DePINs do not guarantee a future world with fair access to AI development, but they provide one of the best opportunities to start building one.
Works Cited
Akash. (n.d.). Akash Documentation. Akash. https://akash.network/docs/
Akash. (2024, June 11). Akash Network Mainnet Dashboard. Akash. https://stats.akash.network/
Four Pillars. (2024, April 24). Case Study for ‘Better’ Sharing Economy. 4pillars. https://4pillars.io/en/articles/case-study-for-better-sharing-economy-ionet/public
Fortune Business Insights. (2024, May 27). AI Market Size, Share & Industry Analysis. Fortune Business Insights. https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-market-100114
Fortune Business Insights. (2024, May 27). Quantum Computing. Fortune Business Insights. https://www.fortunebusinessinsights.com/quantum-computing-market-104855
Gala, S., & Kassab, S. (2024, January 5). State of DePIN 2023. Messari. https://messari.io/report/state-of-depin-2023
GMI. (2023 February). GPU Market Size. Gminsights. https://www.gminsights.com/industry-analysis/gpu-market
Grand View Research. (n.d.). Cloud Computing Market Size & Trends. Grand View Research. https://www.grandviewresearch.com/industry-analysis/cloud-computing-industry
IBM. (n.d.). What is artificial intelligence (AI)? IBM. https://www.ibm.com/topics/artificial-intelligence
Io.Net. (2024, June 11). Company Origins. Io Net. https://docs.io.net/docs/inception#domain-specific-hardware-is-not-enough
Kong, R. (2024, April 4). Decentralized Physical Infrastructure Networks: Embracing The Power of Token Incentives To Bootstrap Networks. Blockworks Research. https://app.blockworksresearch.com/unlocked/decentralized-physical-infrastructure-networks-embracing-the-power-of-token-incentives-to-bootstrap-networks
Merritt, R. (2023, December 4). Why GPUs Are Great for AI. Nvidia. https://blogs.nvidia.com/blog/why-gpus-are-great-for-ai/
Nosana. (n.d.). Nosana Documentation. Nosana. https://docs.nosana.io/
Nosana. (2023, October 18). Nosana’s New Direction: AI Inference. Medium. https://medium.com/@nosana/nosanas-new-direction-ai-inference-77b98d78ea06
Render. (n.d.). Render Dashboard. Render Foundation. https://stats.renderfoundation.com/
Research and Markets. (2024, May). Thematic Intelligence: Sharing Economy. Research and Markets. https://www.researchandmarkets.com/reports/5973270/thematic-intelligence-sharing-economy?utm_code=pk9p84&utm_exec=carimspi
Robert. (2024, May 29). Evaluating token economics for DePINs: cost estimation. Mirror. https://mirror.xyz/1kx.eth/eixy4sKhLVvexT-PmcBTUCGCR66ET_9iX4cAd-ShOuM
State of AI. (2024 February). State of AI Report Compute Index. State of AI. https://www.stateof.ai/compute
U.S. Census Bureau. Current Population Survey October 1984, 1989, 1993, 1997, 2000, 2001, 2003, 2007, 2009, 2010, 2011.
Not financial or tax advice. The purpose of this newsletter is purely educational and should not be considered as investment advice, legal advice, a request to buy or sell any assets, or a suggestion to make any financial decisions. It is not a substitute for tax advice. Please consult with your accountant and conduct your own research.
Disclosures. All posts are the author's own, not the views of their employer. This post has been sponsored by the Io Net and Aethir Network teams. Members of the team also own material positions in some of the projects shared. While Shoal Research has received funding for this initiative, sponsors do not influence the analytical content. At Shoal Research, we aim to ensure all content is objective and independent. Our internal review processes uphold the highest standards of integrity, and all potential conflicts of interest are disclosed and rigorously managed to maintain the credibility and impartiality of our research.