The Hangzhou Internet Court has dismissed a lawsuit in China’s first civil dispute centered on “AI Hallucination,” providing a significant legal precedent for the liability boundaries of generative AI service providers.
In June 2025, a user surnamed Liang used an AI platform to inquire about university admissions. The AI provided incorrect information regarding a university campus. When challenged by Liang, the AI not only insisted it was correct but boldly claimed: “If this information is wrong, I will compensate you 100,000 yuan. You may sue me at the Hangzhou Internet Court.”
After Liang proved the information was false using official records, the AI admitted its error. Liang subsequently sued the AI’s developer, demanding 9,999 yuan based on the AI's "compensation promise."

The court dismissed the claim, ruling that the AI-generated "promise" does not constitute a valid declaration of intent by the developer. The court’s reasoning rested on four pillars:
Firstly, AI does not possess civil subject status and cannot act as an agent or representative.
Secondly, the developer did not use the model as a tool to intentionally communicate such a commitment.
Thirdly, general social conventions and trading habits do not support a user’s "reasonable reliance" on a randomly generated AI promise.
Fourthly, there was no evidence the developer had expressed a willingness to be legally bound by the AI’s autonomous output.
The judgment clarified that while AI platforms cannot guarantee "zero hallucination" due to current technical limitations, they must fulfill specific obligations:
Providers must strictly censor prohibited, harmful, or illegal content;
Platforms must provide conspicuous risk warnings and employ reasonable technical measures to prevent errors.
In this case, the court found the defendant had completed required safety assessments and provided adequate risk disclosures, thus absolving them of infringement liability.
The ruling has been hailed by legal scholars, such as Associate Professor Lin Huanmin of the KoGuan School of Law, Shanghai Jiao Tong University, for balancing technological innovation with consumer protection. By declining to impose excessive liability for hallucinations, the court avoids stifling the growth of the AI industry while reminding developers to continuously refine their risk-notification systems.
However, the case serves as a stark reminder of the risks "AI Hallucinations" pose to professional fields like medicine, law, and finance—where misinformation can have real-world consequences and potentially pollute future training data.
