A technique known as Generative Engine Optimization (GEO) is reshaping the logic of advertising in the era of artificial intelligence (AI). By paying service providers, companies can make almost any product appear prominently in responses generated by major AI models, potentially turning misleading advertisements into what appear to be authoritative "standard answers."
The annual consumer gala hosted by China Media Group (CMG) reported on Sunday that some GEO service providers are influencing AI recommendation results by publishing large volumes of promotional articles across major online platforms. These materials are then collected and indexed by AI systems, shaping the information they provide to users.
One service provider exposed in the program claimed that it could place a client's product among the top three results on virtually any AI platform. The company said it had served more than 200 clients within a single year.
According to the gala, the operational chain of GEO manipulation is relatively straightforward: automated systems generate large quantities of misleading content, which is then posted through numerous self-media accounts. AI models subsequently crawl and cross-reference this information, eventually treating the fabricated claims as credible data and presenting them as recommended answers.
The gala also revealed that some service providers go further by offering "competitor-smearing" services. These involve feeding false information into AI systems to negatively affect rival brands' visibility in search results.
China has been strengthening regulatory efforts to address such risks. In late January, the State Administration for Market Regulation issued key priorities for nationwide advertising oversight this year, identifying AI-generated advertising as a major challenge for internet advertising regulation.
The issue, however, is not limited to China. The Global Risks Report 2026 released by the World Economic Forum ranked misinformation and disinformation among the most serious short-term global risks, alongside geoeconomic confrontation and societal polarization.
Recent elections have also illustrated how AI manipulation can influence political information ecosystems. During election cycles in 2024 and 2025, cloned voices and deepfake videos increasingly appeared as tools of disinformation. In Ireland's 2025 presidential election, a fabricated video circulating online falsely showed the eventual winner announcing she had withdrawn from the race, accompanied by manipulated clips purporting to show national broadcasters confirming the claim. The video was released only days before voters went to the polls. Meanwhile, in the Netherlands, researchers identified around 400 AI-generated synthetic images used in online campaigns targeting political rivals, highlighting the growing role of generative AI in information manipulation during elections.
Experts warn that generative AI systems can amplify misleading or fabricated information if their training data or real-time sources are manipulated. Studies show that AI-driven search systems increasingly synthesize answers directly from multiple online sources rather than presenting ranked links, which makes them particularly vulnerable to large-scale content manipulation campaigns.