Science of Trustworthy AI RFP
Funder: Schmidt Sciences
Deadline: 17 May 2026
Schmidt Sciences invites proposals for the Science of Trustworthy AI program, which supports technical research that improves our ability to understand, predict, and control risks from frontier AI systems while enabling their trustworthy deployment.
This Request for Proposals is grounded in our Research Agenda, which defines the scientific scope and priorities. The questions in each subsection guide what we consider in scope; they are not an exhaustive checklist. Proposals need not match any question verbatim, but they should clearly advance the underlying scientific objectives of the agenda and explain how the work advances the science of trustworthy AI. We expect strong proposals, especially at funding Tier 2, to take a clear stand on a small number of core questions and pursue them deeply, rather than addressing many agenda items superficially.
The research agenda has three connected aims:
- Aim 1: Characterize and forecast misalignment in frontier AI systems: Understand why frontier AI training-and-deployment safety stacks still result in models learning effective goals that fail under distribution shift, pressure, or extended interaction.
- Aim 2: Develop generalizable measurements and interventions: Advance the science of evaluations with decision-relevant construct and predictive validity, and develop interventions that control what AI systems learn (not just what they say).
- Aim 3: Oversee AI systems with superhuman capabilities and address multi-agent risks: Extend oversight and control to regimes where humans cannot directly evaluate correctness/safety, and address risks that arise from interacting AI systems.