Dek: In its March 9 annual work report, China’s Supreme People’s Court cited a generative-AI case in which the developer was not found liable because it had fulfilled its duty of care and the plaintiff suffered no actual harm, while stressing that harmful AI misuse will still be regulated.
China’s Supreme People’s Court used its March 9 annual work report to send one of the clearest official signals yet on how it wants at least some AI disputes to be understood: not every model mistake should automatically become an infringement finding.
According to the report, in one cited case a generative-AI service produced an erroneous output, but the developer had fulfilled its duty of care and the plaintiff’s rights were not actually harmed. On that basis, the court found no infringement. In the same passage, however, the report said courts would resolutely regulate conduct that uses AI to infringe lawful rights or disrupt social order.
That combination is the real story. The message is not that China’s top court has created a blanket exemption for AI developers. It has not. The narrower and more defensible reading is that the court is signaling limited room for error when two conditions are met: the developer exercised due care, and no actual harm occurred.
What the work report actually said
The signal came in the Supreme People’s Court’s work report delivered on March 9 at the second plenary meeting of China’s National People’s Congress. In the section on promoting the sound development of artificial intelligence, the court said it would properly handle AI-related cases and “accurately grasp” the room for error in technological innovation.
The report then cited a specific example rather than announcing a new rule in the abstract. It said a generative-AI service made an error during service provision, but because the developer had fulfilled its duty of care and the plaintiff suffered no actual harm, the court found that the conduct did not constitute infringement.
That example matters because it gives global readers a more concrete way to understand the court’s framing. The top court is not saying AI systems must be error-free before they can exist. It is saying that, at least in the example it chose to highlight, liability did not attach simply because a generative-AI output went wrong.
Just as important is the sentence that followed. The same paragraph said courts would firmly regulate the use of AI to infringe other people’s lawful rights or to disrupt social order, and that this was part of promoting technology for good. That second half prevents the passage from being read as a pro-industry free pass.
Why the liability signal matters
For international readers, the significance is not that China suddenly published a full new AI liability doctrine. The significance is that the country’s highest court chose to elevate this exact framing in a national work report.
That suggests Chinese judicial messaging is trying to balance two goals at once.
First, it wants to avoid signaling that every AI mistake will be punished in the same way, especially when developers have already taken reasonable precautions. That matters in an industry where imperfect outputs are an inherent part of the technology’s risk profile.
Second, it wants to make equally clear that AI misuse remains punishable when it crosses into concrete rights violations or broader social harm. In other words, the court is not positioning innovation and regulation as opposites. It is trying to define a boundary between tolerable error and actionable abuse.
That is a notable choice at a moment when AI liability is becoming a more global policy question. Developers, regulators, and courts in many markets are all wrestling with some version of the same problem: how much legal exposure should attach to model failures, and under what conditions should fault, harm, and diligence matter most?
China’s March 9 court language does not answer those questions comprehensively. But it does indicate that the official judicial tone is moving toward a more conditional standard rather than a zero-error expectation.
This is not a blanket exemption
The limits of the story matter as much as the headline.
The court’s public materials do not present this as a dedicated judicial interpretation, a standalone AI guideline, or a newly published landmark judgment. Instead, the work report cited a case in summary form. The sources reviewed for this article did not include a case number, a full judgment text, or a more detailed explanation of how the court analyzed duty of care, causation, or harm.
That means downstream coverage needs to stay disciplined.
The safest wording is that the work report cited a generative-AI case in which no infringement was found because due care had been taken and no actual harm occurred. It is much less safe to write that China’s top court has created a general AI safe harbor, granted blanket immunity, or issued a formal nationwide rule shielding developers from liability.
This distinction is especially important because the same passage explicitly preserved enforcement against harmful AI use. The court’s message is not “AI developers are protected.” It is closer to “some errors may be tolerated when diligence was shown and no real injury resulted, but abuse will still be dealt with under the law.”
A broader message: encourage innovation, punish abuse
Placed in context, the court’s language fits a broader pattern in China’s recent AI messaging. The state has been signaling that it wants faster commercialization, wider deployment, and more confidence around AI adoption, while still keeping governance language close at hand.
That wider pattern is visible in earlier 1M Reviews coverage such as China Studies How AI Can Create Jobs and Upgrade Work, which showed policymakers framing AI as a labor-market and industrial-upgrading tool rather than only a disruption risk. It also connects with Pointer-CAD Shows China AI Entering 3D Design, where the story was about AI moving into engineering workflows, and China’s AI+Manufacturing Push Targets 1,000 Industrial Agents by 2027, which highlighted the push to take AI from demos into factory operations.
Against that backdrop, the Supreme People’s Court’s March 9 wording looks less like an isolated legal curiosity and more like part of a broader national signal: China wants AI developers and adopters to keep moving, but it does not want that momentum mistaken for a license to ignore harm.
That is why the phrase “room for error” matters. It is a judicial way of saying that innovation should have some operating space. But it is also why the court paired that phrase with a warning about regulating infringement and social disorder. The operating space is limited, conditional, and bounded by harm.
Bottom line
China’s top court did not unveil a sweeping AI liability rule on March 9. What it did do was more subtle and still important: in its annual work report, it highlighted a generative-AI case where an error did not lead to infringement because the developer had fulfilled its duty of care and no actual harm to the plaintiff was shown.
That points to a more nuanced official stance than either extreme reading. China is not signaling zero tolerance for every AI mistake, but it is also not offering blanket protection. The court’s message is a narrower one: there may be limited room for error in AI innovation as long as diligence was exercised and no real damage was done, while harmful misuse remains firmly within regulatory reach.
Sources
- Supreme People’s Court of the People’s Republic of China: https://www.court.gov.cn/zixun/xiangqing/492681.html
- China National Radio summary: https://news.cnr.cn/native/gd/20260309/t20260309_527546894.shtml