TechFlow News, March 15: According to China News Service, this year’s “3·15” Gala exposed the widespread “poisoning” of large AI models. Li Fumin, an expert at the Institute of Intelligent Social Governance, Shandong University of Finance and Economics, said that businesses’ use of GEO (generative engine optimization) and similar services to conduct targeted training of large models—steering AI to recommend specific products or services—is in essence a new form of unfair competition and consumer deception. The practice uses technological means to carry out covert marketing and fabricate claims, leaving consumers unknowingly exposed to embedded marketing content; its harmfulness and illegality warrant serious attention.
On one hand, such behavior violates consumers’ rights to be informed and to fair transactions as stipulated under the Consumer Rights Protection Law; on the other hand, it constitutes false or misleading commercial promotion through technological means, disrupting the normal operation of recommendation algorithms and the healthy competitive market environment, thereby amounting to unfair competition.
Addressing such AI “poisoning” requires a multi-pronged approach: regulators should prioritize monitoring of AI-driven marketing inducements and strengthen enforcement oversight; AI operators should rigorously vet training data sources, filter outputs, and establish traceability mechanisms; and consumers should stay alert to the commercial nature of AI-generated content and actively defend their rights through complaints and reports.