The burgeoning wave of generative AI presents a significant challenge for businesses: balancing value extraction with risk mitigation.

During the 2025 World Artificial Intelligence Conference, Cheng Zhong, Deloitte China Technology, Media & Telecommunications Industry Leader, emphasized in an exclusive interview with National Business Daily (NBD) that generative AI governance is not an option to be deferred; businesses must act swiftly. He advocated for a three-pronged approach: clarifying responsibilities and enhancing personnel literacy, integrating risk control and compliance across the entire lifecycle, and leveraging platforms to demystify the "black box" of AI. This synergistic approach, he believes, is crucial for gaining a competitive edge.

Overcoming Hallucinations and the Black Box: Investing in Trustworthy AI

NBD: After embedding AI into processes, how can enterprises' daily decisions avoid the negative impacts of large model "hallucinations"? Which type of hallucination do you believe should be prioritized for resolution? How does Deloitte help enterprises quantify the hidden costs caused by hallucinations?

Cheng Zhong: "Hallucinations" manifest as fluent but factually incorrect or logically inconsistent outputs, potentially undermining business judgment and compliance. Deloitte's "Trustworthy AI Framework" proposes that enterprises build multi-layered defenses, such as defining clear human review roles, establishing structured fact-checking procedures, and implementing logging and continuous learning mechanisms.

We believe that structural hallucinations should be prioritized for resolution. These often appear when AI outputs tables, charts, summaries, or data analysis results. They may look logically clear and neatly formatted on the surface, but the underlying data could be fabricated, or the reasoning chain broken. This can easily mislead decision-makers, posing a higher risk than open-ended text hallucinations.

As you correctly noted, the costs of hallucinations are often "hidden," yet they can lead to serious consequences such as misled customers, operational errors, redundant content creation, and even reputational or legal risks. To quantify these hidden costs, we can evaluate metrics such as output accuracy and duplication rates, model the potential impact using actual operational data, and calculate the time and labor lost to rework, forming a comparative ROI analysis.
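As a back-of-the-envelope illustration of such a quantification, consider the toy calculation below. Every figure (output volume, hallucination rate, rework time, hourly cost) is an invented assumption for demonstration, not a Deloitte benchmark.

```python
# Toy estimate of the hidden cost of hallucinations; all inputs are assumed.
outputs_per_month = 10_000
hallucination_rate = 0.03          # share of outputs with fabricated content
rework_hours_per_incident = 1.5    # time to detect and redo each bad output
hourly_cost = 60.0                 # fully loaded cost per staff hour

incidents = outputs_per_month * hallucination_rate          # 300 incidents
rework_cost = incidents * rework_hours_per_incident * hourly_cost
print(f"Estimated monthly rework cost: ${rework_cost:,.0f}")
# Estimated monthly rework cost: $27,000

# Compare against the cost of a mitigation, e.g. added review tooling.
mitigation_cost = 15_000.0
if rework_cost > mitigation_cost:
    print("Mitigation pays for itself on rework savings alone")
```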

NBD: In scenarios with near-zero tolerance for error, such as medical diagnostics, financial compliance, and legal documentation, how can AI risks be mitigated?

Cheng Zhong: We recommend building a systemic hallucination circuit-breaker mechanism to ensure AI outputs are never used directly in high-risk decisions. Technology choices must satisfy "certainty + controllability." It is advisable to prioritize a hybrid architecture of small models and expert rules, especially in clearly regulated areas such as law and compliance, where such systems are more reliable than black-box large models. Furthermore, fine-tuning models on industry-specific data and integrating proprietary knowledge bases can improve output accuracy and interpretability. When selecting technology, it is essential to ensure robust logging capabilities that support traceability of model outputs and accountability.
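A minimal sketch of such a circuit breaker, combining a deterministic rule engine with risk-based routing, might look like the following. The risk tiers, rule table, and routing logic are assumptions for illustration, not Deloitte's actual implementation.

```python
from enum import Enum

class Risk(Enum):
    LOW = 1
    HIGH = 2

def rule_engine(query: str) -> str | None:
    # Deterministic expert rules take precedence in regulated domains.
    # The rule below is a hypothetical example.
    rules = {"kyc threshold": "Transactions over $10,000 require a KYC report."}
    return rules.get(query.lower())

def answer(query: str, risk: Risk, llm_draft: str) -> str:
    ruled = rule_engine(query)
    if ruled is not None:
        return ruled                               # auditable rule-based path
    if risk is Risk.HIGH:
        return "BLOCKED: route to a human expert"  # circuit breaker trips
    return llm_draft                               # low risk: model output allowed

print(answer("KYC threshold", Risk.HIGH, "draft"))
print(answer("summarize this memo", Risk.LOW, "Memo summary..."))
```

The point of the design is that high-risk queries can never reach the model's raw output: they either match an expert rule or stop at the breaker.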

NBD: As the development of AI agents rises to a strategic level for enterprises, how should boards drive employees to better train and apply these agents? For media outlets deploying AI agents for basic writing tasks, what risk warnings and strategic approaches would you suggest?

Cheng Zhong: We advise boards to redefine the value boundaries between humans and machines, emphasizing that AI agents are "enhancement tools" rather than "replacements." Providing training and incentives is crucial. For example, in the media industry, allow editors to participate in prompt tuning, content review, and standard setting for AI agents. It's also important to establish "AI usage red lines," such as requiring human review for sensitive content.

Currently, AI-generated content still carries risks such as a lack of depth, factual inaccuracies, copyright disputes, and public controversy. Strategically, I recommend starting with basic scenarios like news summaries or data-driven reports. Then, train AI agents specifically, leveraging proprietary data to enhance their adaptability to the media's style and content requirements. Employ a "testing sandbox + feedback loop" mechanism, gradually expanding from low-risk scenarios and continuously optimizing during application, so that AI agents progressively become reliable collaborative partners for the team.
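One way to read the "testing sandbox + feedback loop" mechanism is as a promotion gate: an agent graduates from low-risk scenarios only after a sustained run of human approvals. The sketch below assumes a rolling review window and an approval bar; both parameters are invented for illustration.

```python
from collections import deque

class RolloutGate:
    """Promote an agent out of the sandbox only after sustained approval."""

    def __init__(self, window: int = 200, bar: float = 0.95):
        self.reviews = deque(maxlen=window)  # rolling editor verdicts
        self.bar = bar

    def record(self, approved: bool) -> None:
        self.reviews.append(approved)

    def ready_for_production(self) -> bool:
        if len(self.reviews) < self.reviews.maxlen:
            return False  # not enough evidence yet
        return sum(self.reviews) / len(self.reviews) >= self.bar

# Small demo with a 5-review window and an 80% bar.
gate = RolloutGate(window=5, bar=0.8)
for verdict in [True, True, False, True, True]:
    gate.record(verdict)
print(gate.ready_for_production())  # True: 4/5 approvals meets the bar
```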

Cheng Zhong Photo/Provided to NBD

Generative AI Enters the Crucial Value Creation Phase

NBD: If the board demands the complete elimination of hallucinations and the demystification of the "black box" problem, how would you advise them? What do you consider an effective AI governance paradigm?

Cheng Zhong: To eliminate hallucinations and demystify the black box, enterprises must not only clarify the boundaries of AI usage but also enhance technological transparency and interpretability, thereby operationalizing AI governance.

An effective AI governance paradigm should shift from reactive response to proactive management. For example: at the strategic level, formulate a clear AI development strategy and objectives, defining AI's position and role within the enterprise; at the organizational level, establish a dedicated AI governance team responsible for overseeing AI application compliance and risk management; at the technical level, adopt explainable AI technologies and data governance tools to ensure AI systems remain transparent and controllable; and at the cultural level, cultivate AI literacy among employees and foster a healthy attitude toward AI use.

NBD: Most enterprises still view AI security as a burden. How can this mindset be changed? What are the specific ways in which security investments directly drive revenue?

Cheng Zhong: Rather than adopting fragmented solutions that simply layer AI onto legacy systems, enterprises should first build a unified architecture that makes AI a core pillar and integrates governance models across all processes. Deloitte's blueprint can help enterprises effectively coordinate cybersecurity strategy with enterprise AI transformation. Businesses should also build AI-driven cybersecurity teams and strengthen their ability to collaborate with AI tools.

Ensuring data security can prevent customer churn due to data breaches, thereby maintaining or even increasing revenue. Meeting compliance requirements allows enterprises to avoid substantial fines and ensures business continuity. A secure and reliable AI system can boost customer satisfaction and loyalty, fostering repeat business and positive word-of-mouth, indirectly increasing revenue.

In practice, one financial services company integrated AI into its cybersecurity architecture, automating threat detection and reducing incident response time by 60%. Another manufacturing enterprise, leveraging Deloitte's governance framework, reduced downtime by 40%, with the direct increase in production efficiency driving revenue growth.