Convergence Fellowship Program
Toward ASI Stability
A Treaty Framework for US–China Cooperation on Artificial Superintelligence
By Marjia Siddik
September 12, 2025
This project explores the idea of a phased, enforceable treaty between the U.S. and China to help prevent a dangerous race toward Artificial Superintelligence (ASI).
Research program: International Security Fellowship
This project was conducted as part of SPAR Spring 2025, a research fellowship connecting rising talent with experts in AI safety or policy.
Abstract
As artificial superintelligence (ASI) becomes more technically viable, the risk of an arms race between the United States and China increases. This paper proposes a phased, enforceable treaty designed to reduce that risk by using mutual vulnerability to align incentives. Drawing on lessons from nuclear arms control, the treaty includes verifiable limits on compute and model training, telemetry-based inspections, and bilateral emergency protocols. Unlike frameworks based on voluntary norms, this model integrates oversight into national security infrastructure, making coordination possible even without trust. It addresses near-term safety risks and long-term shifts in global power, offering a strategy that reflects current geopolitical conditions. The treaty is built to function under rivalry, not consensus, and includes mechanisms for adaptability in the face of political change or external disruption. While focused on the U.S. and China, the structure could support future multilateral expansion as other actors approach ASI capabilities.
Executive Summary
Artificial superintelligence (ASI) could reshape global power in ways that current policy frameworks are not prepared to handle. If the United States and China develop it in isolation from one another, both risk entering a rapid and unstable race. That kind of competition increases the chance of mistakes, rushed deployment, and systems that may become uncontrollable.

ASI is likely to be built by a small number of labs with access to scarce computing resources. These labs will need large amounts of energy, advanced chips, and powerful cloud systems, and that activity leaves behind signals that governments can monitor. This gives policymakers a chance to act early, before these systems become too complex to oversee.

This paper proposes a treaty between the U.S. and China to reduce the risks tied to ASI development. The treaty would begin with small steps, such as improving transparency and opening communication channels for use during emergencies. Over time, it would expand to include inspections, shared safety procedures, and clear limits on certain kinds of training activity. The agreement would rely on tools such as chip tracking, cloud usage reports, and audit rights for large training runs, with a neutral commission overseeing the process and helping to resolve disputes.

The treaty is designed for conditions where trust is low. It focuses on practical steps that both countries could agree to, based on their shared interest in avoiding disaster. Historical arms control efforts have worked under similar circumstances, where each side accepted limits because the alternative was too risky.
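For illustration only, the sketch below shows how the kind of compute-reporting threshold mentioned above might be checked in software. The notification threshold, the reporting fields (chip count, peak chip throughput, utilization, duration), and the compute arithmetic are all assumptions made for this example; none of them are terms of the proposed treaty.

```python
# Illustrative sketch: flagging declared training runs whose estimated total
# compute exceeds a hypothetical notification threshold. All numbers and
# field names below are assumptions for illustration, not treaty terms.

from dataclasses import dataclass

# Hypothetical threshold (total training FLOP) above which a run would
# trigger notification, inspection, or audit rights.
NOTIFICATION_THRESHOLD_FLOP = 1e26


@dataclass
class TrainingRunReport:
    """A declared training run, as it might appear in a cloud usage report."""
    lab: str
    chip_count: int             # number of accelerators used
    peak_flops_per_chip: float  # peak FLOP/s per accelerator
    utilization: float          # average utilization, 0.0 to 1.0
    duration_seconds: float     # wall-clock training time

    def estimated_total_flop(self) -> float:
        # Total compute ~ chips x peak FLOP/s x utilization x time
        return (self.chip_count
                * self.peak_flops_per_chip
                * self.utilization
                * self.duration_seconds)


def flag_runs(reports: list[TrainingRunReport]) -> list[TrainingRunReport]:
    """Return the declared runs whose estimated compute crosses the threshold."""
    return [r for r in reports
            if r.estimated_total_flop() >= NOTIFICATION_THRESHOLD_FLOP]


if __name__ == "__main__":
    reports = [
        TrainingRunReport("Lab A", chip_count=50_000,
                          peak_flops_per_chip=1e15, utilization=0.4,
                          duration_seconds=90 * 24 * 3600),
        TrainingRunReport("Lab B", chip_count=2_000,
                          peak_flops_per_chip=1e15, utilization=0.4,
                          duration_seconds=30 * 24 * 3600),
    ]
    for run in flag_runs(reports):
        print(f"{run.lab}: ~{run.estimated_total_flop():.2e} FLOP exceeds threshold")
```

In practice, the inputs would come from verified telemetry and chip-tracking data rather than self-reports alone, and the threshold itself would be a negotiated treaty parameter subject to periodic review.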
To keep the treaty stable over time, it would include recurring safety meetings, crisis simulations, and review mechanisms. These steps would help hold the agreement in place through leadership changes and periods of tension.

AI development is moving quickly, while political systems move more slowly. If no agreement is in place before ASI becomes viable, it may be too late to impose rules on it. Voluntary pledges and general principles are unlikely to withstand the pressures of a high-stakes race. A treaty with clear stages and enforcement tools offers one of the few realistic ways to reduce those risks. This proposal does not seek to end competition; it aims to keep that competition within limits that both countries can live with.