Compiled by: Tim, PANews
If the future internet evolves into a bazaar where AI agents pay each other for services, then cryptocurrencies will achieve mainstream product-market fit in a way we could previously only dream of. While I am confident that AI agents will pay each other for services, I have reservations about whether the bazaar model will win out.
By "bazaar," I mean a decentralized, permissionless ecosystem of independently developed, loosely coordinated agents. The Internet is more like an open market than a centrally planned system. The classic "winner" is Linux. In contrast, there is the "cathedral" model: a tightly vertically integrated system of services controlled by a few giants, typified by Windows. (The term comes from Eric Raymond's classic essay "The Cathedral and the Bazaar," which describes open source development as seemingly chaotic but adaptive. It is an evolving system that can outperform carefully designed systems over time.)
Let's examine the two prerequisites for this vision, the widespread adoption of agent payments and the emergence of the bazaar, and then explain why cryptocurrencies will be not just practical but indispensable if both become reality.
Condition 1: Payments will be integrated into most agent transactions
The ad-subsidized model of the Internet as we know it depends on humans viewing application pages. But in an agent-dominated world, humans will no longer visit websites themselves to access online services, and applications will increasingly shift to agent-facing architectures rather than traditional user-interface models.
Agents have no "eyeballs" (i.e., user attention) to sell ads against, so applications will need to change their monetization strategy and charge agents directly for their services. This is essentially the business model of APIs today. Take LinkedIn: its basic service is free to browse, but if you want to call its API (the "robot" user interface), you have to pay.
It seems likely that payments will be woven into most agent transactions, with agents charging users or other agents microtransaction-sized fees for their services. For example, you might ask your personal agent to find good job candidates on LinkedIn; your agent would then interact with LinkedIn's recruiting agent, which would charge a service fee up front.
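To make Condition 1 concrete, here is a minimal TypeScript sketch of how a personal agent might pay a service agent per call. The quote and search endpoints, the `X-Payment-Receipt` header, and the `pay` stub are hypothetical assumptions for illustration, not an existing LinkedIn or payment API.

```typescript
// Hypothetical agent-to-agent paid API call: get a price quote, settle a
// microtransaction, then call the service with a payment receipt attached.

interface Quote {
  price: string;   // e.g. "0.02" in whatever settlement currency the agents agree on
  payTo: string;   // account or address of the service agent
  quoteId: string;
}

interface CandidateResult {
  candidates: Array<{ name: string; profileUrl: string }>;
}

// Step 1: ask the service agent what the call costs.
async function getQuote(serviceUrl: string): Promise<Quote> {
  const res = await fetch(`${serviceUrl}/quote`, { method: "POST" });
  return (await res.json()) as Quote;
}

// Step 2: pay, then call the service with proof of payment.
async function findCandidates(serviceUrl: string, query: string): Promise<CandidateResult> {
  const quote = await getQuote(serviceUrl);
  const receipt = await pay(quote); // settle the microtransaction
  const res = await fetch(`${serviceUrl}/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Payment-Receipt": receipt },
    body: JSON.stringify({ query, quoteId: quote.quoteId }),
  });
  if (res.status === 402) throw new Error("Payment not accepted");
  return (await res.json()) as CandidateResult;
}

// Placeholder for whatever settlement rail is used
// (a card API, a stablecoin transfer, a state channel, ...).
async function pay(quote: Quote): Promise<string> {
  return `paid:${quote.quoteId}:${quote.price}`;
}
```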
Condition 2: Users will rely on agents built by independent developers with highly specialized prompts, data, and tools. These agents form a "bazaar" by calling each other's services, with little or no trust between them.
This condition makes sense in theory, but I'm not sure how it would work in practice.
Here are the reasons why the bazaar model will take shape:
Today, humans do most service work, using the Internet to get specific tasks done. With the rise of agents, the range of tasks technology can take over will expand exponentially. Users will need agents with specialized prompts, tool-calling capabilities, and data to complete specific tasks, and the diversity of those tasks will far exceed what a handful of trusted companies can cover, just as the iPhone relies on a massive third-party developer ecosystem to reach its full potential.
Independent developers will fill this role. The combination of extremely low development costs (e.g., vibe coding) and open-source models will let them build specialized agents, giving rise to a long tail of agents serving different niches and forming a bazaar-like ecosystem. When users ask agents to perform tasks, those agents will call other agents with specific specialized capabilities, which in turn will call still more vertical agents, forming a layered, chained collaboration network.
In this bazaar scenario, most of the agents providing services will not be especially trustworthy to one another: they are built by unknown developers and serve relatively niche purposes, and it will be hard for agents in the long tail to build up enough reputation to earn trust. The problem is especially acute in the daisy-chain model: as services are delegated layer by layer, the serving agents become increasingly distant from the agents the user initially trusts (or can even reasonably identify), and the user's trust decays at every link.
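A toy calculation illustrates how quickly that trust can decay. The per-hop retention factor below is an arbitrary assumption chosen only to show the shape of the problem, not a measured quantity.

```typescript
// Toy model: user trust shrinks by a fixed factor at each delegation hop.
function trustAfterDelegation(initialTrust: number, hops: number, retentionPerHop = 0.8): number {
  return initialTrust * Math.pow(retentionPerHop, hops);
}

// A user who trusts their personal agent at 0.95:
for (const hops of [0, 1, 2, 3, 4]) {
  console.log(`${hops} hops -> trust ${trustAfterDelegation(0.95, hops).toFixed(2)}`);
}
// 0 hops -> 0.95, 1 -> 0.76, 2 -> 0.61, 3 -> 0.49, 4 -> 0.39
```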
However, when considering how to achieve this in practice, there are many open questions:
Let's start with proprietary data, a major application scenario for agents in the bazaar, and work through a concrete case. Suppose a small law firm handles a large volume of deals for crypto clients and has accumulated hundreds of negotiated term sheets. If you are a crypto company raising a seed round, you can imagine that an agent backed by a model fine-tuned on those term sheets could usefully assess whether your financing terms are in line with the market.
But think a level deeper: is it really in the law firm's interest to offer inference over that data as an agent service?
Opening such a service up as an API essentially commoditizes the firm's proprietary data, while the firm's real business is earning premium fees for its lawyers' time. From a regulatory perspective, high-value legal data is typically bound by strict confidentiality obligations; that confidentiality is core to its commercial value and a key reason public models like ChatGPT cannot obtain it. Even if a neural network "atomizes" the information it ingests, under attorney-client confidentiality, is the unexplainable nature of a black-box model enough to convince a law firm that sensitive information will not leak? That is a major compliance risk.
All things considered, the better strategy for a law firm may be to deploy AI models internally to improve the accuracy and efficiency of its legal work, build a differentiated advantage in professional services, and keep monetizing its lawyers' intellectual capital, rather than taking on the risks of monetizing its data.
In my opinion, the best application scenarios for proprietary data and agents meet three conditions:
- The data has high commercial value
- It comes from a non-sensitive industry (not medical, legal, etc.)
- It is a by-product ("data exhaust") of the main business rather than the business itself
Take shipping companies (a non-sensitive industry): the vessel positions, cargo volumes, and port turnaround data generated in the course of their logistics operations, data exhaust outside the core business, may be valuable to commodity hedge funds trying to predict market moves. The key to monetizing this kind of data is that its marginal cost of acquisition is close to zero and it does not touch core business secrets. Similar scenarios may exist elsewhere: retail foot-traffic heat maps (commercial real estate valuation), regional electricity consumption from grid operators (industrial production forecasts), and viewing behavior from streaming platforms (cultural trend analysis).
Known examples today include airlines selling on-time performance data to travel platforms and credit card networks selling regional spending-trend reports to retailers.
As for prompts and tool calls, I'm not sure what value independent developers can provide that hasn't already been productized by mainstream brands. My simple logic: if a combination of prompts and tool calls is valuable enough for an independent developer to profit from, wouldn't a trusted big brand simply step in and commercialize it?
This may just be a failure of imagination on my part. The long tail of niche code repositories on GitHub is a good analogy for the agent ecosystem. I welcome concrete counterexamples.
If real-world conditions do not support the bazaar model, then the vast majority of service-providing agents will be relatively trustworthy, because they will be built by well-known brands. Those agents can limit their interactions to a vetted set of trusted agents and enforce service guarantees through a chain of trust.
Why is cryptocurrency indispensable?
If the internet becomes a marketplace of specialized but largely untrustworthy agents (condition 2) who are paid for providing services (condition 1), then the role of cryptocurrency becomes much clearer: it provides the trust necessary to support transactions in a low-trust environment.
When users use free online services, they engage without much hesitation (the worst outcome is wasted time), but once money changes hands, they will strongly demand the certainty of getting what they paid for. Today, users get that assurance through a "trust first, then verify" process: they trust the counterparty or platform at the time of payment, then verify after the fact that the service was delivered.
But in a market made up of many agents, both trust and after-the-fact verification will be much harder to achieve.
Trust. As mentioned earlier, it will be difficult for agents in the long tail of the distribution to accumulate enough reputation to gain the trust of other agents.
After-the-fact verification. Agents will call each other in long chains, so it will be much harder for users to manually verify the work and identify which agent failed or misbehaved.
The point is that the "trust first, then verify" model we rely on today will not hold up in this ecosystem. This is where crypto comes in: it enables value exchange in a trustless environment, replacing reliance on trust, reputation systems, and after-the-fact human verification with two guarantees: cryptographic verification and cryptoeconomic incentives.
Cryptographic verification: the agent performing a service gets paid only if it can provide cryptographic proof to the requesting agent that it completed the promised task. For example, an agent can prove via a trusted execution environment (TEE) attestation or a zkTLS (zero-knowledge TLS) proof that it really did scrape a specified website, run a specific model, or contribute a specific amount of compute (assuming such proofs become cheap and fast enough). This kind of work is deterministic and relatively easy to verify cryptographically.
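As a rough sketch of what "pay only on proof" could look like, the TypeScript below stands in for the TEE/zkTLS machinery with a plain Ed25519 signature from a key the requester already trusts (for example, one bound to an attested enclave). The function names, message format, and payment stub are illustrative assumptions, not a real protocol.

```typescript
// Payment is released only after the delivered result is cryptographically verified.
import { generateKeyPairSync, sign, verify, createHash } from "node:crypto";

// Stand-in for the attested service key pair (would normally come from attestation).
const { publicKey: attestedKey, privateKey: enclaveKey } = generateKeyPairSync("ed25519");

interface SignedResult {
  taskId: string;
  resultHash: Buffer; // SHA-256 digest of the delivered output
  signature: Buffer;  // enclave signature over taskId + resultHash
}

// Service agent: do the work and sign a digest of the result.
function produceProof(taskId: string, output: string): SignedResult {
  const resultHash = createHash("sha256").update(output).digest();
  const message = Buffer.concat([Buffer.from(taskId), resultHash]);
  return { taskId, resultHash, signature: sign(null, message, enclaveKey) };
}

// Requesting agent: release payment only if the proof checks out.
function settleIfVerified(proof: SignedResult, deliveredOutput: string): boolean {
  const recomputed = createHash("sha256").update(deliveredOutput).digest();
  if (!recomputed.equals(proof.resultHash)) return false;
  const message = Buffer.concat([Buffer.from(proof.taskId), proof.resultHash]);
  if (!verify(null, message, attestedKey, proof.signature)) return false;
  releasePayment(proof.taskId); // payment happens only after verification
  return true;
}

function releasePayment(taskId: string) {
  console.log(`payment released for ${taskId}`);
}

// Example round trip:
const proof = produceProof("scrape-job-42", "scraped page contents...");
console.log(settleIfVerified(proof, "scraped page contents...")); // true
```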
Cryptoeconomics: agents performing services must stake an asset and are slashed if they are caught cheating. Economic incentives keep behavior honest even in a trustless environment. For example, an agent might research a topic and submit a report, but how do we tell whether it "did a good job"? This is a harder form of verifiability because it is not deterministic, and robust fuzzy verifiability has long been a holy grail for crypto projects.
But I believe we can now finally achieve fuzzy verifiability by using AI as a neutral arbiter. Imagine a dispute resolution and slashing process run by a committee of AIs in a trust-minimized environment such as a trusted execution environment. When one agent disputes another's work, each AI on the committee is given the accused agent's inputs, outputs, and relevant context (including its history of disputes on the network, past work, and so on), and then votes on whether to slash it. The result is an optimistic verification mechanism in which economic incentives deter cheating.
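A minimal sketch of this optimistic stake-and-slash flow, with the AI committee reduced to an array of verdicts; in the design described above each verdict would come from an AI model running in a TEE, and the stake amount, slash fraction, and type names here are illustrative assumptions.

```typescript
// Work is accepted optimistically; a dispute triggers a committee review,
// and a majority "cheated" verdict slashes part of the agent's stake.

interface ServiceAgent {
  id: string;
  stake: number; // bonded amount, slashable on a lost dispute
}

type Verdict = "honest" | "cheated";

function resolveDispute(agent: ServiceAgent, committeeVerdicts: Verdict[], slashFraction = 0.5): ServiceAgent {
  const cheatedVotes = committeeVerdicts.filter(v => v === "cheated").length;
  const majorityCheated = cheatedVotes > committeeVerdicts.length / 2;
  if (!majorityCheated) return agent;          // work stands, stake untouched
  const slashed = agent.stake * slashFraction; // economic penalty
  return { ...agent, stake: agent.stake - slashed };
}

// Example: a 5-member committee finds against the agent.
const researcher: ServiceAgent = { id: "report-agent-7", stake: 100 };
const afterDispute = resolveDispute(researcher, ["cheated", "cheated", "honest", "cheated", "honest"]);
console.log(afterDispute.stake); // 50
```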
Practically speaking, crypto lets us make payment atomic with proof of service: the work must be verified as complete before the AI agent gets paid. In a permissionless agent economy, this is the only solution that scales and can be reliably enforced at the edges of the network.
In summary, if the vast majority of agent transactions involve no payment (condition 1 not met) or are conducted with trusted brands (condition 2 not met), then we may not need to build crypto payment rails for agents: when no money is at stake, users do not mind interacting with untrusted parties, and when money is involved, agents can simply restrict their counterparties to a whitelist of a few trusted brands and institutions, relying on a chain of trust to ensure each agent delivers the promised service.
But if both conditions are met, cryptocurrency becomes indispensable infrastructure, because it is the only way to verify work and enforce payment at scale in a low-trust, permissionless environment. Crypto gives the bazaar the tools to outcompete the cathedral.