Lyn Alden: Why Bitcoin is the best currency for artificial intelligence

**Author:** Lyn Alden **Compiler:** GaryMa, Wu Blockchain
Note: The following excerpts are from a newsletter on the topic of artificial intelligence published by Lyn Alden in August, specifically the parts related to Bitcoin. Due to space limitations or other reasons, some details or information may be omitted or only partially translated. We recommend that readers refer to the original text for more complete information.
**What is the best currency for AI?**
Which form of currency is best for AI agents? Based on what I’ve seen so far, I think Bitcoin is the main answer.
In a future world where payments occur in large volumes between humans and machines, open-source, digitally native currencies are a natural fit.
An AI agent can create a Bitcoin/cryptocurrency wallet, earn funds, and spend those funds.
The AI can start from scratch, be assigned a task, or simply “decide” to complete a task. As it evaluates the best way to accomplish the task, it determines that certain paid services would be useful or critical. So it generates a Bitcoin/Lightning wallet (unlike opening a bank account, this requires no permission, since the protocol is open source), then convinces its creator to give it funds, or earns money online some other way (perhaps it posts some clever content on Nostr and earns tips). Now, with funds in hand, it can pay for what it needs.
Efficient, open-source, digitally native currencies can be used faster and without permission, and can handle smaller transactions (micro-transactions) than existing fiat payment rails, opening up more possibilities for machine-to-machine and human-to-machine payments. This makes them a natural form of currency for AI agents.
This trend has continued. At the end of June, Bitcoin companies Fedi and Stak collaborated to organize an AI hackathon. Fedi and Stak are calling all developers, builders, and pioneers who believe in empowering people and communities with technology through Bitcoin, Lightning Network, and Fedimint: Join us and help democratize artificial intelligence with Bitcoin, Lightning Network, and Fedimint.
In early July, Lightning Labs (the main Lightning Network infrastructure provider) announced a new set of developer tools to support this emerging Bitcoin + AI intersection.
Briefly, Lightning Labs’ tools do two main things:
● Make it easier to build AI agents that can hold and use bitcoin to complete tasks;
● Use the Lightning Network together with the long-standing “HTTP 402 Payment Required” status code to make it easier to pay per use for resource-intensive computing.
To see the latter in action, you can watch this two-minute video shared by Lightning Labs CTO Olaoluwa Osuntokun.
Link:
In the video, a user (Kody Low, a developer and product support director at Fedi) has GPT-4 write code for him and pays per question using Lightning (about $0.01 per question). Ironically, the code he asked the AI to write was exactly what he was already demonstrating as a working service: the ability to use the Lightning Network to pay for API calls.
Specifically, Kody asked the AI: “How do I create a reverse proxy to pay for my API calls using Lightning Network? Use the LND backend and write the reverse proxy for me in golang.” He then paid the Lightning payment request. The AI understood the request, accepted the payment, and answered with well-organized, annotated code. Kody then asked it to rewrite the code in the Rust programming language instead of golang and paid again. The AI rewrote the code, explained in plain English what the various parts were doing, and noted that a fully implemented version would need more error handling and other refinements, but that this was the basic outline of the program. Kody then thanked it, said the AI and the Lightning Network were a perfect match, and paid the fee again. The AI replied that Kody was welcome and agreed that AI and the Lightning Network are both powerful technologies that together can create advanced solutions.
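To make this flow more concrete, below is a minimal, self-contained sketch in Go of the general “402 Payment Required” pattern described above. It is not Lightning Labs’ actual tooling or wire format: the header names, port numbers, and in-memory invoice bookkeeping are invented for illustration, and invoice creation is stubbed out where a real proxy would call into an LND node and return a BOLT11 payment request. What it does show is the key idea: in Lightning, paying an invoice reveals a preimage whose SHA-256 hash equals the invoice’s payment hash, so presenting that preimage is proof of payment.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

// issuedHashes remembers the payment hashes of invoices this proxy has handed
// out. A real implementation would create invoices through an LND node and
// could check settlement there rather than trusting a client-supplied preimage.
var (
	mu           sync.Mutex
	issuedHashes = map[string]bool{}
)

// newInvoice stands in for asking an LND node for a new invoice: it generates
// a random preimage, records its payment hash, and returns that hash. A real
// backend would also return the BOLT11 payment request for the client to pay;
// paying it is what reveals the preimage to the payer.
func newInvoice() string {
	preimage := make([]byte, 32)
	rand.Read(preimage)
	h := sha256.Sum256(preimage)
	hash := hex.EncodeToString(h[:])
	mu.Lock()
	issuedHashes[hash] = true
	mu.Unlock()
	return hash
}

// paid reports whether preimageHex hashes to a payment hash we issued,
// which in Lightning can only be learned by actually paying the invoice.
func paid(preimageHex string) bool {
	preimage, err := hex.DecodeString(preimageHex)
	if err != nil {
		return false
	}
	h := sha256.Sum256(preimage)
	mu.Lock()
	defer mu.Unlock()
	return issuedHashes[hex.EncodeToString(h[:])]
}

func main() {
	upstream, _ := url.Parse("http://localhost:8080") // the paid API being proxied
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if paid(r.Header.Get("X-Lightning-Preimage")) {
			proxy.ServeHTTP(w, r) // payment proven: forward the request upstream
			return
		}
		// No valid payment yet: hand back an invoice reference with a 402.
		w.Header().Set("X-Lightning-Payment-Hash", newInvoice())
		http.Error(w, "payment required", http.StatusPaymentRequired)
	})

	log.Fatal(http.ListenAndServe(":8402", nil))
}
```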
I have personally tested one of these programs: I can use ChatGPT on my phone without a subscription and pay for each question with Lightning, at the equivalent of one or two cents. The price fluctuates dynamically per question, presumably based on some kind of supply-and-demand algorithm running in the background.
Just for fun, I asked ChatGPT for some examples of situations where an AI agent might need to pay a human. Here is its answer:
Great question! AI agents may need to pay humans in the course of completing various tasks. Here are some potential scenarios:
The gig economy and microtasks: AI may be asked to find humans to complete small tasks, such as labeling images or transcribing audio, which often happens on gig platforms. Once the task is completed, the AI can send the agreed upon payment to the worker.
E-commerce transactions: If the AI manages an e-commerce platform or online marketplace, it may need to send payments to sellers or service providers. This may be in response to a product sale, refund, or service fee.
Content creators and online influencers: AI agents might manage platforms or channels where people post content or influence others. They can be programmed to distribute payments or rewards to these creators based on views, likes, shares or other metrics.
Crowdsourcing and idea generation: Companies can use AI to manage innovation or idea generation activities, encouraging people to submit ideas. Once an idea is selected, the AI can send payment or rewards to the person who submitted the idea.
Customer Refunds: If AI handles customer service for a company, it may be required to issue refunds for returned products, canceled services, or any inconvenience caused by the company.
Survey participation: AI may administer surveys for market research or other data collection purposes. Participants are compensated for their time and input.
Charitable Donations: AI can be used to manage payments from charitable organizations to beneficiaries.
Freelancer payment: AI may be hired to manage freelance or contract workers, paying them upon completion of their work.
Prizes and rewards: If the AI manages a contest or game, it may need to send prizes or rewards to the winner.
These are just examples of the many potential applications. The common thread is that any task that requires compensation to humans for providing service or value is likely to be managed by AI.
I then asked it for some examples of situations where an AI agent might need to pay another AI agent in the course of completing its assigned tasks. Its response:
There are indeed many potential situations where an AI agent might need to transact with another AI agent. This may occur where the system is designed to interact and trade resources or services autonomously, for example:
Self-driving vehicles: In a future of self-driving cars, the AI agent controlling one vehicle might pay other AI-controlled vehicles for right-of-way or to secure a parking space.
AI market: In the AI market, one AI may pay another AI a fee for specific services, such as data analysis, machine learning model training, or optimizing complex tasks.
Content or data acquisition: One AI may pay another AI for proprietary data or content, such as proprietary research, market analysis, or digital assets.
Cloud Computing Resources: An AI managing a server or cloud resource may pay another AI to use its excess computing power or storage space.
Digital advertising: An AI managing an ad campaign might pay another AI to place its ad in the best place or time on a digital platform.
AI-mediated auctions: In a real-time bidding system for online advertising, one AI agent may pay another AI agent for advertising space targeted at a specific audience group.
Gaming: In an online gaming scenario, one AI may need to pay another AI for virtual goods, services, or game progress.
Internet of Things (IoT): In a smart home or smart city, one IoT device (managed by an AI) might pay another device to prioritize its own tasks. For example, a smart refrigerator could pay a smart oven to postpone its energy-intensive cycle until the refrigerator completes its own energy-intensive cooling cycle.
These examples are speculative and depend on the complex infrastructure that allows secure AI-to-AI transactions. But as technology continues to evolve, we may see more systems designed to allow for this kind of autonomous interaction.
Now, a legitimate, skeptical question that readers of this article may be asking by this point is: “Why use Lightning Network payments for AI usage instead of just paying in USD?”
The answer is that the Bitcoin/Lightning Network is uniquely well suited for use with AI, for several reasons:
● Non-refundable: Unlike most software, AI is hardware-intensive on a per-use basis. When you use AI, you are using very power-hungry GPU hardware. Once developed, the software itself is cheap, but each use incurs a real power and hardware cost, unlike most types of software. If an API is left open for payment via conventional means (such as a credit card), the merchant becomes a target for fraud: a user could send tens of thousands of requests to the GPUs and then have the payment reversed, leaving the merchant with the power/hardware cost but without the payment. Lightning, by contrast, is final settlement of a bearer asset.
● Microtransactions: Most payment methods have minimum practical transaction sizes. It is not practical to accept a $0.03 payment with a Visa card, for example, and PayPal’s lowest fixed fee for merchants is $0.49 per transaction. The closest workaround is to pre-fund an account and then draw micropayments against it with each use. This gets more complicated globally, across borders and currencies: if someone in Malaysia wants to pay someone in California small AI royalties, what payment method should they use? Even within the subscription payment backend on my own website, my operations manager and I sometimes have issues accepting foreign credit cards, and in many countries people use mobile payment services instead of credit cards. The Lightning Network can process microtransactions on a global scale. Users can pay pennies per use from anywhere, with no subscription and no pre-authorized funds. This is especially relevant for machine-to-machine payments, where subscriptions may not be ideal and the machines simply need to interact with each other in real time.
● Permissionless and programmable: AI agents can be created to perform a wide variety of operations. For an entity without legal identity, it is easier to hold and use an open-source programmable currency than to open a bank account or use a bank payment product. AI agents may not have access to bank accounts, but they can run Lightning nodes or operate Lightning Network wallets. The programmable, real-time nature of the Lightning Network also allows for dynamic pricing: the price of each API call can be programmed to change every second depending on how many API requests the service is receiving at that moment (a toy sketch of this kind of load-based pricing follows below).
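As a toy illustration of that last point (not anything from Lightning Labs’ tools), the sketch below quotes a per-call price in satoshis that rises with the number of requests seen in the last second and falls back toward a base price when demand is low; the specific formula and numbers are arbitrary.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// DynamicPricer quotes a price per API call based on recent request volume:
// a base price plus a surcharge for every request seen within the last window.
type DynamicPricer struct {
	mu          sync.Mutex
	baseSats    int64
	perReqSats  int64
	window      time.Duration
	recentCalls []time.Time
}

func NewDynamicPricer(baseSats, perReqSats int64) *DynamicPricer {
	return &DynamicPricer{baseSats: baseSats, perReqSats: perReqSats, window: time.Second}
}

// Quote records one incoming request and returns the price to charge for it.
func (p *DynamicPricer) Quote() int64 {
	p.mu.Lock()
	defer p.mu.Unlock()
	now := time.Now()
	p.recentCalls = append(p.recentCalls, now)

	// Drop requests that have aged out of the pricing window.
	cutoff := now.Add(-p.window)
	i := 0
	for i < len(p.recentCalls) && p.recentCalls[i].Before(cutoff) {
		i++
	}
	p.recentCalls = p.recentCalls[i:]

	// Price = base + (requests seen in the last second) * surcharge.
	return p.baseSats + int64(len(p.recentCalls))*p.perReqSats
}

func main() {
	p := NewDynamicPricer(1, 1) // 1 sat base, +1 sat per request in the last second
	for i := 0; i < 5; i++ {
		fmt.Printf("request %d quoted at %d sats\n", i+1, p.Quote())
	}
}
```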
**Proof of work to prevent spam**
Nowadays, because artificial intelligence has reduced the cost of creating pictures, videos, text, programs, and so on to almost nothing, it is becoming harder and harder to distinguish which content is real and which is a high-quality fake or a virtual persona. How can we be sure that an image or video we see is the product of real footage and not a digital forgery generated by artificial intelligence, when it may matter from a corporate, social, political, or geopolitical perspective? Now imagine what this will be like in five or ten years as the technology improves.
Even more worrisome is how convincing the bots are becoming. Whenever I use Twitter (now called “X”), my posts get inundated with bots. Luckily, they are obviously low-quality, poorly programmed bots. However, there are also human impersonators who manually create an account similar to mine and start trying to phish or scam people. What if AI bots on social media were genuinely convincing? What if they could exist in uncountable numbers, constantly being created as fast as they can be identified and blocked?
In addition to bots spamming people as a form of deception, a sophisticated bot strategy could involve building an entire fake community to promote an idea and make it appear popular. Many different AI personalities could operate online and be able to convince many people. Imagine someone writes a program that creates an army of bots, each unique and a few years more advanced than today’s ChatGPT, that work in concert to promote a certain political candidate, a certain social viewpoint, or simply to grow an influencer’s audience. Now imagine that this program is open source, available to anyone who wants to use it. Pandora’s box has been opened.
Imagine interacting online in a world where:
Most entities that look human are not human;
It is difficult to determine which ones are truly human.
One way to handle this is for social media platforms to enforce identity verification upon registration: new entrants must provide sufficient evidence that they are a real person. Elon Musk’s Twitter/X seems to be moving in this direction, as do a number of other platforms. While this may indeed work for some platforms, the obvious disadvantage is that in many cases anonymity is an important component of safety and authenticity. People in certain professions do not have the freedom to be fully themselves online, and use anonymous accounts to socialize more freely. What’s more, in many countries certain types of speech are illegal: leaders cannot be criticized publicly, certain topics are prohibited from discussion, and so on. In fact, some of America’s Founding Fathers initially used pseudonyms for protection when they published revolutionary criticisms of the British monarchy. Totalitarians love mandatory identity verification for social media participation, and a serious bot problem would give them a good excuse to implement such measures. Moreover, they can easily force the various platforms to hand over the data.
Alternatively, social media could remain open and anonymous, but use some kind of persistent proof or payment check to prevent a flood of countless bots. Back in the 1990s, cryptographer Adam Back invented proof-of-work in the form of Hashcash to prevent various types of spam and denial-of-service attacks. Satoshi Nakamoto ultimately used it as a core part of the Bitcoin protocol, but it was never widely implemented in social media or other parts of the internet.
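To make the Hashcash idea concrete, here is a small sketch of the scheme (simplified relative to the original Hashcash stamp format): the sender must find a nonce such that the hash of the message plus the nonce has a required number of leading zero bits. Producing the proof costs real computation, while checking it is nearly free, which is exactly the asymmetry that deters bulk spam and that Bitcoin mining later scaled up.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// leadingZeroBits counts the leading zero bits of a SHA-256 digest.
func leadingZeroBits(h [32]byte) int {
	n := 0
	for _, b := range h {
		if b == 0 {
			n += 8
			continue
		}
		n += bits.LeadingZeros8(b)
		break
	}
	return n
}

// mint searches for a nonce that makes sha256(msg || nonce) start with at
// least `difficulty` zero bits. This is the expensive step the sender performs.
func mint(msg string, difficulty int) uint64 {
	buf := make([]byte, len(msg)+8)
	copy(buf, msg)
	for nonce := uint64(0); ; nonce++ {
		binary.BigEndian.PutUint64(buf[len(msg):], nonce)
		if leadingZeroBits(sha256.Sum256(buf)) >= difficulty {
			return nonce
		}
	}
}

// verify recomputes a single hash; this is the cheap side of the asymmetry.
func verify(msg string, nonce uint64, difficulty int) bool {
	buf := make([]byte, len(msg)+8)
	copy(buf, msg)
	binary.BigEndian.PutUint64(buf[len(msg):], nonce)
	return leadingZeroBits(sha256.Sum256(buf)) >= difficulty
}

func main() {
	msg := "post:hello world"
	nonce := mint(msg, 20) // ~one million hashes on average at 20 bits
	fmt.Println("nonce:", nonce, "valid:", verify(msg, nonce, 20))
}
```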
Bitcoin’s ledger has a history that is nearly impossible to falsify. Every block added to the blockchain requires energy, and the complete history of the ledger has a large amount of energy encoded into it, making it difficult to forge. In addition to serving as a monetary asset, the Bitcoin blockchain now exists as a global, open-source, immutable time capsule into which arbitrary data (documents, pictures, books, songs, or timestamps from other systems) can be inserted, with proof that the inserted data has not been modified since that date. No closed-source, opaque, or centralized server can replicate this function.
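As a small illustration of that commitment property (this is not a full timestamping client), the sketch below computes the SHA-256 digest of a file; it is this 32-byte digest, not the file itself, that would be anchored in a Bitcoin transaction, whether directly or through an aggregation service such as OpenTimestamps. Anyone holding the original file can later recompute the digest and confirm that the file has not changed since the digest was committed.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// digest returns the SHA-256 digest of a file; this commitment is what would
// be anchored in the Bitcoin blockchain, not the file's contents themselves.
func digest(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	h := sha256.Sum256(data)
	return hex.EncodeToString(h[:]), nil
}

func main() {
	// Usage: timestamp <file> [previously-committed-digest]
	if len(os.Args) < 2 {
		fmt.Println("usage: timestamp <file> [committed-digest]")
		return
	}
	d, err := digest(os.Args[1])
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("digest to commit:", d)
	if len(os.Args) > 2 {
		// Verification: the file is unmodified iff the digests match.
		fmt.Println("matches committed digest:", d == os.Args[2])
	}
}
```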
Could Bitcoin’s proof-of-work be used to protect other types of systems from spam, in the way Adam Back originally intended Hashcash to be used? Some avenues along these lines are already being explored, and they are likely to become increasingly important in the coming years.
● Payments: Bitcoin is currently used by Stacker News as a form of distribution incentive and spam prevention. Users have to pay a small amount of bitcoin to post and can tip each other with small amounts of bitcoin. The problem with traditional payment methods like Visa is that they identify the person, whereas Bitcoin provides proof of payment without requiring identity to be known, which is sufficient for preventing spam. This can be tuned in various ways: if Twitter/X implemented a similar feature, I could, for example, set my account to only allow replies from accounts that have some small cost associated with their identity, to prevent spam.
● Identity with costs: Someone could create a public/private key pair (as on Nostr), but then go further: embed the public key into the Bitcoin blockchain (incurring a real-world resource cost through transaction fees), verify to people in various ways that this key is indeed theirs, and then sign any digital content they create or authorize with the private key matching that public key. The same applies to any large news organization with known standards of integrity. Platforms can check which key pairs are or are not embedded. From that point on, people can distinguish fakes from authentic content (content created, at least in part, by that individual or organization and not modified since publication, though not necessarily objectively true), and the hundreds of imposters cannot use similar key pairs that are already embedded in the Bitcoin blockchain at real-world cost. A minimal sketch of the sign/verify step follows below.
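Here is that sketch of the sign-and-verify half of the idea. For brevity it uses Ed25519 from the Go standard library rather than the secp256k1 Schnorr keys that Nostr actually uses, and it leaves out the part that gives the identity a real-world cost: embedding the public key in the Bitcoin blockchain.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	// The creator generates a long-lived key pair once. The public key is
	// what would be published and (in the scheme above) embedded in the
	// Bitcoin blockchain at real cost; the private key never leaves them.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	fmt.Println("public key:", hex.EncodeToString(pub))

	// Every piece of content the creator authorizes is signed with the
	// private key and distributed alongside the signature.
	content := []byte("This article was written or authorized by me.")
	sig := ed25519.Sign(priv, content)

	// Anyone can verify against the published public key. An imposter's
	// content fails verification, and so does any post-publication edit.
	fmt.Println("authentic:", ed25519.Verify(pub, content, sig))

	tampered := []byte("This article was written or authorized by me!!")
	fmt.Println("tampered copy:", ed25519.Verify(pub, tampered, sig))
}
```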
While we haven’t reached this stage yet, in a world where artificial intelligence is advanced enough that imitation and forgery are virtually costless, we may learn to require proofs of continuity (cryptographic public/private key pairs) and proofs of work (proof that real-world resources were spent to give weight to a given public/private key pair or online identity) before taking certain types of digital content seriously.
In the potential future world I described earlier:
Most entities that look human are not human;
It is difficult to determine which ones are truly human. We will then need various ways to identify which entities have some kind of real-world tangibility or cost associated with them, in order to find signal in a near-infinite sea of noise.
Identity checks and real resource costs are the two main directions platforms can take.