scenario research program

Evaluating strategies across plausible futures for AI

A research program by Convergence that explores potential scenarios and evaluates strategies for controlling the trajectory of AI.

What is Scenario Planning?
Why is it important?

Scenario Planning

Scenario planning is an analytical tool used by policymakers, strategists, and academics to explore and prepare for the landscape of possible outcomes in domains defined by uncertainty.

In this context, an AI scenario refers to a possible pathway of AI development and deployment, encompassing both the technical and societal aspects of this evolution.

A highly specific scenario could, for example, describe a particular pathway to transformative AI, detailing what happens each year and at every key juncture of development. A more general scenario could be “transformative AI is reached through comprehensive AI services”.

AI systems are starting to revolutionize the way we work, accelerate technological progress in key research areas, and demonstrate specific capabilities that match or exceed human performance.

However, there is massive uncertainty about the future development of AI among experts, policymakers, and laypeople alike. There is no clear consensus on how rapidly transformative AI (TAI) might develop, whether TAI models are likely to be agentic, whether technically aligned AI models are possible, and many other questions.

There is also no clear consensus on how AI models may pose existential threats to society, from enabling biochemical weapons production, to destabilizing financial systems, to opening new cybersecurity risks.

As a result, scenario research is a primary tool for investigating questions like the following:

What are the possible paths of development for AI technology, and how likely is each path? What factors will determine which path of development ends up happening?

Which negative societal outcomes (threat models) are most likely to result? How can we avoid them? Which are neglected by the current discourse?

What types of strategies (theories of victory) might best prevent these global risks or mitigate their likelihood?

In conducting scenario planning, we must rely on a sociotechnical framework - considering not just the technical details of AI development, but also the ways in which those details affect the society in which AI is deployed. As a result, scenario planning is massively interdisciplinary, tapping into fields like technical alignment, political science, ethics, statistics, and interaction research.

What projects are we focusing on in our Scenario Research program?

The main defining feature of our research approach is our application of scenario planning to AI safety and AI governance.

Though there is no one standard methodology, scenario research can be seen as combining two major activities:

  1. Exploring scenarios. Within a specified domain, the landscape of relevant pathways that the future might take is systematically charted. Key parameters which shape or differentiate these future outcomes are identified. The evidentiary status and consequences of parametric assumptions are analyzed.

  2. Evaluating strategies. Strategies are developed to mitigate the potential negative consequences which have surfaced through the study, steering towards positive future outcomes. Key interventions are identified and evaluated for effectiveness.

In essence, scenario research provides a structured way to envision divergent futures. This allows for the preparation and implementation of more robust, flexible strategies that can adapt to a variety of potential future environments.
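As a toy illustration of these two activities combined, strategies can be scored across a small set of weighted scenarios and compared on both expected value and worst-case robustness. All scenario names, strategy names, probabilities, and scores below are invented for the sketch; they are not Convergence's actual models.

```python
# Toy sketch (all names and numbers hypothetical): score candidate
# strategies across a few weighted scenarios, then compare the
# expected-value pick against the most robust (worst-case) pick.

SCENARIOS = {            # scenario name -> assumed probability
    "slow takeoff": 0.5,
    "fast takeoff": 0.3,
    "plateau": 0.2,
}

# OUTCOMES[strategy][scenario] -> score (higher is better); values invented
OUTCOMES = {
    "strict regulation": {"slow takeoff": 7, "fast takeoff": 3, "plateau": 5},
    "lab self-governance": {"slow takeoff": 6, "fast takeoff": 2, "plateau": 7},
    "international treaty": {"slow takeoff": 5, "fast takeoff": 6, "plateau": 4},
}

def expected_score(strategy):
    """Probability-weighted score across all scenarios."""
    return sum(p * OUTCOMES[strategy][s] for s, p in SCENARIOS.items())

def worst_case(strategy):
    """Score under the least favourable scenario (robustness)."""
    return min(OUTCOMES[strategy].values())

best_expected = max(OUTCOMES, key=expected_score)
most_robust = max(OUTCOMES, key=worst_case)
```

Note that the two criteria can disagree: the strategy with the best probability-weighted score may fare poorly in its worst scenario, which is exactly the kind of tension robust, flexible strategy-making has to navigate.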

Scenario Research Initiative

How are we exploring AI scenarios?

In this area of research, we're exploring different pathways of AI development. We are considering which AI scenarios are plausible and highly consequential. We are explicating the parametric assumptions that predict or differentiate important scenarios, and the evidentiary status of those assumptions.

We will also highlight key uncertainties and point to areas where further research is required.

Our research will include the following components:

Collecting AI Scenarios

We are undertaking a thorough review of existing AI safety literature to understand the landscape of publicly-proposed AI scenarios and parameters of key interest.

By examining a wide range of sources, we're ensuring a well-rounded understanding of the current state of AI safety considerations.

Identifying Threat Models

We are mapping out a range of possible AI scenarios, identifying those with existential or catastrophic outcomes (which we call ‘threat models’).

Special focus will be given to exploring how threat models with short timelines might unfold.

Identifying Key Parameters

We are identifying the relevant set of parameters that differentiate important AI scenarios or otherwise significantly shape the trajectory of AI development.

These parameters may include technological advancements, ethical considerations, societal impact, and governance mechanisms.
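One simple way to chart a scenario space from such parameters is to enumerate every combination of a few key parameter values. The parameter names and values below are hypothetical placeholders, not the actual parameters used in this research:

```python
# Toy sketch (parameter names and values are hypothetical): a coarse
# scenario space as the Cartesian product of a few key parameters.
from itertools import product

PARAMETERS = {
    "takeoff speed": ["slow", "fast"],
    "agentic TAI": [True, False],
    "alignment solved": [True, False],
}

# Each combination of parameter values defines one coarse scenario.
scenarios = [dict(zip(PARAMETERS, values))
             for values in product(*PARAMETERS.values())]

# A simple filter can then flag combinations worth deeper study,
# e.g. fast-takeoff scenarios where alignment remains unsolved.
threat_candidates = [s for s in scenarios
                     if s["takeoff speed"] == "fast"
                     and not s["alignment solved"]]
```

In practice the parameter set is much larger and many combinations are implausible, so the enumeration is pruned by the plausibility and evidentiary analysis described below.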

Finally, we're evaluating the plausibility of different scenarios and examining the evidentiary basis for specific parametric assumptions. We will also describe and analyze the consequences associated with different pathways of AI development.

Particular focus will be given to evaluating threat models with short timelines.

Convergence will release a technical report in mid-2024 exploring transformative AI scenarios.

Scenario Research Initiative

How are we evaluating AI strategies?

In this initiative, we're developing and evaluating strategies for controlling the trajectory of AI. We aim to describe prudent decision-making across the range of plausible and consequential AI scenarios identified in the previous initiative. This involves examining proposed strategies, establishing which specific interventions are most effective, and identifying crucial points at which to intervene.

This research will include the following components:

Collecting AI Strategies

We will undertake a thorough review of existing AI safety and governance literature to understand the landscape of strategies and specific interventions that have already been proposed or implemented.

Strategy Development and Mitigation

We will develop strategies to mitigate the potential negative consequences associated with a range of important scenarios and parametric assumptions, as identified through our exploration of scenarios.

This involves assessing the efficacy of different interventions and tailoring strategies to the specific characteristics of each scenario.

Identification of Intervention Points

A crucial part of our work will be identifying high-impact intervention points. These are moments in or aspects of AI development where strategic actions can significantly shift the likelihood of threat models being realized.

In evaluating these strategies, we intend to produce a set of "Theories of Victory" - feasible paths of AI development towards a safe and flourishing future. We aim to provide actionable, concrete guidelines for key actors, to ensure effective and coherent decision-making in AI safety and governance.

Convergence will release a technical report in late 2024 developing and evaluating strategies for controlling the trajectory of AI.

Newsletter

Get research updates from Convergence

Leave us your contact info and we’ll share our latest research, partnerships, and projects as they're released.

You may opt out at any time. View our Privacy Policy.
