Sunzian versus Mohist Thought on Medical AI Openness

How classical Chinese philosophy frames the open-source versus closed-source debate for large language models in medicine.

Artificial intelligence (AI) has emerged within the past several years as one of the most influential and controversial technologies of our lifetimes. Recent advances position it to reshape social relations, accelerate scientific discovery, and alter existing political hierarchies. Notably, the development of this technology has split between proponents of closed-source and open-source models. In medicine, a 2024 npj Digital Medicine commentary titled "The path forward for large language models in medicine is open" argues that open, transparent models are essential for safe and accountable clinical AI (Riedemann et al. 2024). At the same time, the open-source versus closed-source debate reveals a deep tension: open systems promise wider access and greater innovation, but they also weaken centralized control (undermining existing hegemonies) and make safe use harder to enforce. With the stakes in medicine so high, we must choose carefully how to invest in the development of this technology. In this paper on medical AI openness, I first reconstruct Sunzi's rationale for secrecy and strategic control, then turn to Mozi's doctrine of impartial care as a challenge to that stance, and finally argue that responsible medical AI policy must negotiate between these two perspectives.

The case for openness

The aforementioned article provides a clear open-source position in the medical domain. Riedemann and co-authors argue that large language models (LLMs) in medicine should be transparent and controllable, and that openness is the best way to achieve this. In their view, open weights and code would allow hospital systems, regulators, and independent researchers to inspect how models behave, adapt them to local needs, and continuously monitor safety and quality. They also worry that closed models, controlled by a small number of corporations, make it harder to detect bias or error and to hold systems accountable when harms occur. In a context where medical AI is increasingly proposed for documentation support and even clinical decision support, they conclude that "the future is open"; i.e., that openness is an ethical and practical requirement for medical AI.

Sunzi: warfare, deception, and strategic control

Set against this claim, Sunzi's Art of War offers a powerful conceptual framework for arguing that some forms of technological secrecy are strategically necessary (Tzu 2020). Sunzi famously claims that "Warfare is the art of deception" (Tzu 2020, 43). In the modern political landscape, the development of artificial intelligence has taken on the character of an "AI arms race," perhaps analogous to earlier national projects such as the Manhattan Project or the Apollo program. In fact, the US government recently released a memo detailing the "Genesis Mission" for the development of US-first AI systems, drawing explicit parallels to the Manhattan Project (The White House 2025). Given this, we need to weigh the strategic implications of developing models in the open, where any nation-state could access the technology. The Sunzian counterargument to the article would likely run as follows: fully open-sourcing the development of AI systems would enable bad actors to misuse these technologies and shatter modern political hegemony and stability.

Another central theme in the Art of War is that of prior calculation. Sunzi writes that "if you've calculated the advantages, and so have come to see the wisdom of heeding my advice, then create a favorable strategic disposition which will, in turn, assist you with matters beyond it" (Tzu 2020, 43). These prior calculations refer, in this context, to accurate advance information about terrain, supply lines, alliances, and troop movements; i.e., knowledge that allows one to shape the battlefield before conflict even begins. Medical AI, especially predictive systems that can forecast disease risk or hospital capacity, can be seen as a form of foreknowledge: it allows health systems (and potentially governments) to anticipate outbreaks, allocate resources, and manage populations more effectively. Although the analogy is loose, these technologies clearly carry significant predictive power. From a Sunzian standpoint, any technology that grants such predictive power is inherently strategic. It would therefore be surprising if a Sunzian policymaker were comfortable simply releasing the full underlying models to the world. Guarding the means of foreknowledge (e.g., how the system works, what data it was trained on, how it can be adapted) would be part of maintaining an advantage relative to rival states or malicious actors.

Finally, Sunzi emphasizes the importance of maintaining a secret system of information. In his discussion of spies, he describes the importance and cost-effectiveness of building a network of agents whose coordinated activities are invisible to the enemy (Goldin 2020). The point is that information can circulate freely within a trusted inner circle while remaining opaque to outsiders. This image maps onto contemporary closed AI ecosystems, in which a company or government tightly controls access to model weights, training data, and evaluation procedures. Within that closed circle, engineers, regulators, and allied institutions might have substantial transparency; outside it, the system is experienced as a black box. If advanced medical LLMs can be repurposed for harmful ends (e.g., assisting in biological design, extracting private information, or automating sophisticated cyber-bioattacks), Sunzi's logic suggests that opacity toward potential adversaries is a core safety mechanism. Whether such a closed circle is truly feasible at scale is itself an open question, and a standing counterargument to the stability of opaque model development.

Taken as a whole, a Sunzian perspective drawn from the Art of War would evaluate the deployment of medical AI systems through a lens of strategic vulnerability and advantage. In this view, opacity is a powerful tool for enforcing safety and maintaining socioeconomic hegemony. The central question is not whether openness feels intuitively fair or just, but whether it increases or decreases a state's ability to safeguard its population and prevail in conflict. For states leading the charge in this deployment, Sunzi would likely err on the side of caution and opacity to preserve their strategic edge. It is important to note that a Sunzian assessment would also depend on other features of the political climate, such as strong alliances or shared standards.

Mozi: impartial care and the demand for access

From a Mohist standpoint, the potential benefits of medical AI are exactly the kinds of goods that matter most: preventing premature death, reducing illness, and making scarce medical expertise more widely available. While Mozi is suspicious of luxurious projects that consume resources without clear public benefit, he is strongly in favor of technical innovations that evidently save lives or reduce hardship. This is evident in Mozi's statement that "The business of a benevolent person is to promote what is beneficial to the world and eliminate what is harmful" (Van Norden and Ivanhoe 2023, 65). Conversely, in criticizing extravagant musical displays, he writes that such practices impose "three hardships upon the people": "those who are hungry are unable to get food, those who are cold are unable to obtain clothing, and those who toil are not afforded a chance to rest" (Van Norden and Ivanhoe 2023, 98). In this light of favoring projects with clear public benefit, the argument made by Riedemann et al. (that open medical LLMs enable developers to better control safety and quality, and allow healthcare professionals to hold systems accountable) would be highly attractive. A utilitarian argument follows naturally: if open models allow small hospitals, under-resourced clinics, or health systems in poorer countries to deploy and adapt powerful tools that would otherwise be inaccessible due to licensing costs or geopolitical barriers, Mohist impartial care would seem to favor openness.

Mozi is critical of partiality in the distribution of benefits, per his doctrine of impartial caring (Van Norden and Ivanhoe 2023). He asks his readers to imagine what would happen if people regarded the states and families of others as they regard their own. Would they still support aggressive war or tolerate policies that enrich their own states by imposing harm on others? Extended to medical AI, the question becomes: if developers and policymakers regarded the health systems of other countries as they regard their own, could they endorse a regime in which cutting-edge models remain proprietary, restricted to wealthy institutions in rich countries, while patients elsewhere rely on inferior tools? From a Mohist perspective, a world in which a child's likelihood of receiving an accurate diagnosis or timely treatment depends on whether their country can afford to license closed models is a clear case of harmful partiality, and one that model openness could remedy.

Synthesis: negotiating between security and justice

When considering Sunzi and Mozi together on the npj Digital Medicine article, there are noticeable disagreements and points of overlap. At the most fundamental level, they disagree about whose interests should govern technological policy. Sunzi is primarily concerned with the survival and victory of a particular state in a competitive environment. Mozi, by contrast, evaluates actions from the standpoint of all under heaven: partiality to one's own state at the expense of others is, for him, the root of much suffering. In the open versus closed debate, Sunzi therefore begins from national security and strategic competition, while Mozi begins from global health equity and aggregate human welfare. Yet while the two positions contrast at first glance, there is a real point of convergence: at bottom, both frameworks aim to prevent harm to populations.

One way to reconcile Sunzi's concern for strategic security with Mozi's demand for impartial benefit is to distinguish between subcomponents of openness. On one potential Sunzian-Mohist compromise view, the inner core of highly capable, general-purpose foundation models might be developed and evaluated within tightly controlled environments (e.g., national labs, regulated consortia, or accredited medical institutions) so that safety is thoroughly tested and misuse risk is minimized. At the same time, once these models have been validated and adapted for specific medical purposes, versions of them could be made widely available under strong governance frameworks: open weights or open APIs for health systems around the world, accompanied by open documentation, auditing tools, and safety protocols. This would attempt to merge Sunzi's emphasis on knowing and controlling one's own capabilities, which supports strict internal oversight, with Mozi's impartial care, which demands broad external access to the fruits of that oversight. It is worth noting that this compromise inherently favors a modern form of closed-source development: without access to the most cutting-edge AI technologies, contributions on a global scale will necessarily lag behind the optimal.

Limits of the classical frameworks

There are also important limits to how far these classical Chinese frameworks can be applied to the governance of modern medical AI. Neither Sunzi nor Mozi confronted technologies that can be reproduced at essentially zero marginal cost and disseminated globally at such scale. The scalability and opacity of contemporary machine learning models introduce forms of risk that differ significantly from the siege engines and crossbows of the Warring States. For Sunzi, the main danger of sharing military technology is that a rival might use it against you on the battlefield; in the AI case, the danger may be unanticipated emergent behaviors, cascading failures in socio-technical systems, or global-scale misuse. Mozi's proto-utilitarian test of promoting benefit and eliminating harm becomes harder to apply in a world where long-term, low-probability catastrophic harms are very difficult to estimate. On the whole, the sheer magnitude of the implications demands deeper and more intricate thought than anything Sunzi or Mozi had to confront during the Warring States period.

Conclusion

The most basic conclusion of this analysis is that the tension between the ideas of Mozi and Sunzi frames the space of responsible policy. Sunzi urges us to treat medical AI as an instrument whose distribution must be carefully controlled for strategic risk. Mozi, on the other hand, urges us to measure that control against its consequences for all people, not just for one's own state or a favored subpopulation. Together, they suggest that the right question is deeper than whether medical LLMs should be open or closed: it is how we can design institutions that combine strong safety governance with a serious commitment to impartial access. The referenced article attempts this in a preliminary fashion by breaking down the idea of openness into 14 subcomponents, but these partitions remain rather superficial and are not tied to the implementation risks this technology poses for human lives. For contemporary debates about medical AI openness, Sunzi and Mozi offer a warning against ignoring either security or justice, and an aspiration toward an arrangement in which our most powerful tools are neither recklessly abandoned to the world nor jealously guarded for the few, but deployed in ways that genuinely promote benefit and reduce harm for all.

References

Goldin, Paul. 2020. The Art of Chinese Philosophy. Princeton University Press.

Riedemann, Lars, Maxime Labonne, and Stephen Gilbert. 2024. "The Path Forward for Large Language Models in Medicine Is Open." npj Digital Medicine 7 (1): 339.

The White House. 2025. "Launching the Genesis Mission." November 24. https://www.whitehouse.gov/presidential-actions/2025/11/launching-the-genesis-mission/.

Tzu, Sun. 2020. The Art of War. Translated by Michael Nylan. W. W. Norton.

Van Norden, Bryan W., and Philip J. Ivanhoe, eds. 2023. Readings in Classical Chinese Philosophy. 3rd ed. Hackett Publishing.
