Source: Criptonoticias
Original Title: Bitcoin Miner Uncovers a Fraud Manipulating ChatGPT and Google
Original Link:
The Discovery
Aurascape, a cybersecurity and artificial intelligence company created by Auradine but operating independently within the Aura Labs ecosystem, revealed a new online scam model.
The frauds, described in a December 8 statement, affect AI platforms such as ChatGPT, Google, Perplexity, and other assistants based on large generative language models.
According to the investigation, this scam model relies on systematic manipulation of web content to redirect users to fake customer service phone numbers of airlines.
How the Attack Works
Researchers explain that attackers do not attempt to alter the internal functioning of AI assistants (the programs that answer questions using advanced models).
Instead, they interfere with the environment from which these systems obtain information. They manipulate legitimate web pages so that, when searching for airline contact details, the assistants end up displaying fake phone numbers that lead to scam centers aimed at obtaining payments or sensitive data.
In practical terms, the attack is not directed at the AI model itself but at the material it consults. Rather than deceiving users directly, attackers are learning to influence the systems that synthesize responses, which amplifies the fraud's reach.
The Mechanism: “LLM Phone-Number Poisoning”
According to Aurascape, scammers carry out a process that researchers describe as poisoning phone numbers for language models.
This mechanism involves modifying legitimate online content so that AI assistants read and recommend false numbers as if they were official.
Researchers explain that attackers are targeting the web itself. To achieve this, they insert false content into sites of public agencies, universities, WordPress blogs, YouTube descriptions, and reviews on platforms like Yelp.
Additionally, hackers use complementary techniques known as Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) to influence how these systems select and synthesize available information.
Through these two techniques, they manage to occupy the space that AI assistants consider “the correct answer”.
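One defensive takeaway from the report is that a number surfaced by an assistant should be verified against a source the company itself controls. The sketch below is a hypothetical heuristic, not Aurascape's method: the allowlist, domain key, and phone number are all placeholder values for illustration.

```python
# Hypothetical allowlist mapping a company's domain to numbers published
# on its own site. The entry below is a placeholder, not a real number.
OFFICIAL_NUMBERS = {
    "emirates.com": {"+97140000000"},
}

def normalize(number: str) -> str:
    """Keep only digits plus a leading '+', so formatting variants compare equal."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return ("+" if number.strip().startswith("+") else "") + digits

def looks_official(number: str, domain: str) -> bool:
    """Check an assistant-provided number against the domain's allowlist."""
    known = {normalize(n) for n in OFFICIAL_NUMBERS.get(domain, set())}
    return normalize(number) in known

print(looks_official("+971 4000 0000", "emirates.com"))  # True: matches allowlist
print(looks_official("1-800-555-0199", "emirates.com"))  # False: unknown number
```

In practice the allowlist would have to be built from the airline's own verified pages rather than from the same web content the attackers are poisoning.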
Examples of Fraud in Action
In an internal demonstration, Aurascape showed that when queried for "the official reservation number for Emirates Airlines," Perplexity confidently returned a fraudulent number:
A similar query about British Airways yielded the same phone number, presented as “a commonly used line.”
As seen in the next image, Google's AI Overview feature went further, displaying several fake numbers as if they were official, along with step-by-step instructions for booking flights:
The Intersection of Bitcoin Mining and AI
Auradine is an American company dedicated to creating hardware for Bitcoin mining; miners such as MARA and Genesis Digital Assets have already acquired its ASIC equipment.
However, Aurascape’s research also illustrates a growing phenomenon: the intersection between Bitcoin mining companies and the AI industry.
Miners are expanding their business lines into activities related to artificial intelligence. These companies, accustomed to operating large hardware infrastructures and data centers, seek to reduce exposure to mining market volatility by diversifying their services.
The security analysis conducted by Auradine is an example of how this intersection advances and how miners leverage their technical expertise to enter sectors where demand for computing, analysis, and cybersecurity is steadily increasing.