Open Philanthropy donations made (filtered to cause areas matching AI safety)

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The donations repository is being seeded with an initial collation by Issa Rice, who also continues to contribute (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if you share a link to this site or to any page on it, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

Country: United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions): GiveWell, Good Ventures
Best overview URL: https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username: openphilanthropy
Website: https://www.openphilanthropy.org/
Donations URL: https://www.openphilanthropy.org/giving/grants
Twitter username: open_phil
PredictionBook username: OpenPhilUnofficial
Page on philosophy informing donations: https://www.openphilanthropy.org/about/vision-and-values
Grant application process page: https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data: continuous updates
Regularity with which Donations List Website updates donations data (after donor update): continuous updates
Lag with which donor updates donations data: months
Lag with which Donations List Website updates donations data (after donor update): days
Data entry method on Donations List Website: Manual (no scripts used)
Org Watch page: https://orgwatch.issarice.com/?organization=Open+Philanthropy

Brief history: Open Philanthropy (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began making strong progress in 2013, and formally separated from GiveWell as the "Open Philanthropy Project" in June 2017. In 2020, it started going by "Open Philanthropy," dropping "Project" from its name.

Brief notes on broad donor philosophy and major focus areas: Open Philanthropy is focused on openness in two ways: openness to ideas about cause selection, and openness in explaining what it is doing. It has endorsed "hits-based giving" and works on AI risk, biosecurity and pandemic preparedness and other global catastrophic risks, criminal justice reform (United States), animal welfare, and some other areas.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" for which the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Philanthropy than it does for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, though the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but are approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment and the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). Lags are typically a few months between decision and grant, and a few more months between grant and announcement, largely due to time spent on grant writeup approval.
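
To make the three-date terminology concrete, here is a minimal sketch in Python of how a single grant's dates and the two lags described above could be represented. The class and field names are hypothetical illustrations, not taken from this site's codebase.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class GrantDates:
        """Hypothetical record of the three dates attached to a single grant."""
        decision_date: Optional[date]      # internal decision date; not published
        donation_date: date                # formal grant commitment; the published grant date
        announcement_date: Optional[date]  # date the grant page is made publicly visible

        def decision_to_grant_lag_days(self) -> Optional[int]:
            # Lag between internal decision and formal grant commitment.
            if self.decision_date is None:
                return None
            return (self.donation_date - self.decision_date).days

        def grant_to_announcement_lag_days(self) -> Optional[int]:
            # Lag between formal grant commitment and public announcement.
            if self.announcement_date is None:
                return None
            return (self.announcement_date - self.donation_date).days

    # Example with made-up dates: decided in January, committed in March, announced in June.
    g = GrantDates(date(2021, 1, 15), date(2021, 3, 1), date(2021, 6, 10))
    print(g.decision_to_grant_lag_days(), g.grant_to_announcement_lag_days())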

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant, https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york, was made by Cari Tuna personally. Although the majority of grants are financed by the Open Philanthropy Project Fund, the source of financing is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed source is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. Whether a grant is multi-year, and how its amount is distributed across years, is not always explicitly stated on the grant page; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Some grants to universities are labeled "gifts", but this is a donee-side classification reflecting the different levels of bureaucratic overhead and funder control associated with grants versus gifts; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information.

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy database are not listed on Donations List Website as being under Open Philanthropy. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here either; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation and because no amount was disclosed. All other grants publicly disclosed by Open Philanthropy that are not GiveWell Incubation Grants or grants to GiveWell top and standout charities should be included. Grants disclosed by grantees but not yet disclosed by Open Philanthropy are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding.

Donor donation statistics

Cause area Count Median Mean Minimum 10th percentile 20th percentile 30th percentile 40th percentile 50th percentile 60th percentile 70th percentile 80th percentile 90th percentile Maximum
Overall 202 250,000 1,641,695 370 25,000 66,000 101,187 166,500 250,000 364,893 575,000 1,430,000 2,728,319 55,000,000
AI safety 200 250,000 1,382,612 370 25,000 63,839 101,187 166,500 250,000 343,235 562,128 1,337,600 2,652,500 38,920,000
Global catastrophic risks 1 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000
Security 1 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000
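
For readers who want to reproduce a summary row like the ones above from a flat list of donation amounts, here is a minimal sketch in Python. The percentile convention and the sample amounts are assumptions for illustration, not taken from this site's code or data.

    import statistics

    def summarize(amounts):
        """Return count, median, mean, min/max, and decile values for a list of USD amounts."""
        amounts = sorted(amounts)
        n = len(amounts)

        def percentile(p):
            # Simple nearest-rank-style percentile; the exact convention used by
            # the site for the table above is not documented here.
            return amounts[min(n - 1, max(0, round(p / 100 * (n - 1))))]

        return {
            "count": n,
            "median": statistics.median(amounts),
            "mean": statistics.mean(amounts),
            "minimum": amounts[0],
            **{f"{p}th percentile": percentile(p) for p in range(10, 100, 10)},
            "maximum": amounts[-1],
        }

    # Placeholder amounts, not actual data from this page.
    print(summarize([370, 25_000, 101_187, 250_000, 575_000, 2_728_319, 55_000_000]))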

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area Number of donations Number of donees Total 2023 2022 2021 2020 2019 2018 2017 2016 2015
AI safety (filter this donor) 200 105 276,522,475.00 57,959,108.00 57,955,777.00 81,661,316.00 15,571,349.00 8,243,500.00 4,160,392.00 43,221,048.00 6,563,985.00 1,186,000.00
Security (filter this donor) 1 1 55,000,000.00 0.00 0.00 0.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
Global catastrophic risks (filter this donor) 1 1 100,000.00 0.00 0.00 0.00 0.00 0.00 0.00 100,000.00 0.00 0.00
Total 202 105 331,622,475.00 57,959,108.00 57,955,777.00 81,661,316.00 15,571,349.00 63,243,500.00 4,160,392.00 43,321,048.00 6,563,985.00 1,186,000.00
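
The cause area and year totals above are a pivot of the underlying donation records. Below is a minimal sketch of that aggregation in Python using pandas, assuming a hypothetical CSV export with columns donee, amount, donation_date, and cause_area; the real schema lives in the GitHub repository linked at the top of this page.

    import pandas as pd

    # Hypothetical file name and column names, for illustration only.
    donations = pd.read_csv("donations.csv", parse_dates=["donation_date"])
    donations["year"] = donations["donation_date"].dt.year

    by_cause_and_year = donations.pivot_table(
        index="cause_area",      # rows: cause area
        columns="year",          # columns: calendar year of the donation date
        values="amount",         # cell value: sum of donation amounts (USD)
        aggfunc="sum",
        fill_value=0,
        margins=True,            # adds a "Total" row and column
        margins_name="Total",
    )
    print(by_cause_and_year)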

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area Number of donations Number of donees Total 2023 2022 2021 2020 2019 2018 2017 2016 2015
AI safety/technical research 105 52 113,117,260.00 23,708,065.00 20,227,923.00 29,683,390.00 12,918,204.00 8,243,500.00 2,914,122.00 9,366,506.00 6,055,550.00 0.00
AI safety 24 22 85,470,118.00 600,959.00 723,796.00 47,882,116.00 0.00 0.00 746,270.00 33,854,542.00 476,435.00 1,186,000.00
Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety 1 1 55,000,000.00 0.00 0.00 0.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
AI safety/governance 26 11 33,999,836.00 18,412,276.00 10,255,800.00 3,425,686.00 1,506,074.00 0.00 400,000.00 0.00 0.00 0.00
AI safety/strategy 20 13 24,422,918.00 7,399,026.00 16,278,241.00 78,000.00 535,651.00 0.00 100,000.00 0.00 32,000.00 0.00
AI safety/technical research/movement growth 2 1 9,185,729.00 4,025,729.00 5,160,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI safety/technical research/talent pipeline 7 4 5,353,395.00 245,000.00 4,703,395.00 405,000.00 0.00 0.00 0.00 0.00 0.00 0.00
AI safety/technical research/governance 1 1 1,535,480.00 1,535,480.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI safety/technical research/strategy 1 1 1,433,000.00 1,433,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI safety/movement growth 9 9 1,020,195.00 330,573.00 606,622.00 66,000.00 17,000.00 0.00 0.00 0.00 0.00 0.00
AI safety/governance/talent pipeline 1 1 594,420.00 0.00 0.00 0.00 594,420.00 0.00 0.00 0.00 0.00 0.00
AI safety/strategy/forecasting 2 2 271,124.00 150,000.00 0.00 121,124.00 0.00 0.00 0.00 0.00 0.00 0.00
Global catastrophic risks/AI safety 1 1 100,000.00 0.00 0.00 0.00 0.00 0.00 0.00 100,000.00 0.00 0.00
AI safety/technical research 1 1 80,000.00 80,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI safety/governance/forecasting 1 1 39,000.00 39,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Classified total 202 105 331,622,475.00 57,959,108.00 57,955,777.00 81,661,316.00 15,571,349.00 63,243,500.00 4,160,392.00 43,321,048.00 6,563,985.00 1,186,000.00
Unclassified total 0 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Total 202 105 331,622,475.00 57,959,108.00 57,955,777.00 81,661,316.00 15,571,349.00 63,243,500.00 4,160,392.00 43,321,048.00 6,563,985.00 1,186,000.00

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee Cause area Metadata Total 2023 2022 2021 2020 2019 2018 2017 2016 2015
Center for Security and Emerging Technology (filter this donor) 101,920,000.00 0.00 0.00 46,920,000.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
OpenAI (filter this donor) AI safety FB Tw WP Site TW 30,000,000.00 0.00 0.00 0.00 0.00 0.00 0.00 30,000,000.00 0.00 0.00
Redwood Research (filter this donor) 25,420,000.00 5,300,000.00 10,700,000.00 9,420,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Center for Human-Compatible AI (filter this donor) AI safety WP Site TW 17,110,796.00 0.00 0.00 11,355,246.00 0.00 200,000.00 0.00 0.00 5,555,550.00 0.00
RAND Corporation (filter this donor) FB Tw WP Site 16,030,751.00 16,000,000.00 0.00 0.00 30,751.00 0.00 0.00 0.00 0.00 0.00
Massachusetts Institute of Technology (filter this donor) FB Tw WP Site 14,982,692.00 0.00 13,277,348.00 1,430,000.00 275,344.00 0.00 0.00 0.00 0.00 0.00
Machine Intelligence Research Institute (filter this donor) AI safety FB Tw WP Site CN GS TW 14,756,250.00 0.00 0.00 0.00 7,703,750.00 2,652,500.00 150,000.00 3,750,000.00 500,000.00 0.00
Center for AI Safety (filter this donor) 10,618,729.00 5,458,729.00 5,160,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Epoch (filter this donor) 9,071,123.00 7,111,123.00 1,960,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Open Phil AI Fellowship (filter this donor) 8,900,000.00 0.00 1,840,000.00 1,300,000.00 2,300,000.00 2,325,000.00 1,135,000.00 0.00 0.00 0.00
Berkeley Existential Risk Initiative (filter this donor) AI safety/other global catastrophic risks Site TW 6,750,495.00 175,000.00 4,661,605.00 405,000.00 150,000.00 955,000.00 0.00 403,890.00 0.00 0.00
OpenMined (filter this donor) 6,028,320.00 6,000,000.00 28,320.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Center for a New American Security (filter this donor) 5,058,991.00 0.00 4,816,710.00 101,187.00 141,094.00 0.00 0.00 0.00 0.00 0.00
National Science Foundation (filter this donor) WP Site 5,000,000.00 5,000,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of California, Berkeley (filter this donor) FB Tw WP Site 4,453,845.00 0.00 0.00 747,829.00 0.00 1,111,000.00 1,145,000.00 1,450,016.00 0.00 0.00
Centre for the Governance of AI (filter this donor) 4,057,332.00 1,000,000.00 69,732.00 2,537,600.00 450,000.00 0.00 0.00 0.00 0.00 0.00
Stanford University (filter this donor) FB Tw WP Site 3,869,275.00 0.00 153,820.00 2,239,584.00 6,500.00 0.00 106,771.00 1,362,600.00 0.00 0.00
Rethink Priorities (filter this donor) Cause prioritization Site 3,681,204.00 457,200.00 2,728,319.00 495,685.00 0.00 0.00 0.00 0.00 0.00 0.00
The Wilson Center (filter this donor) FB Tw WP Site 3,579,516.00 0.00 2,023,322.00 291,214.00 864,980.00 0.00 400,000.00 0.00 0.00 0.00
Ought (filter this donor) AI safety Site 3,118,333.00 0.00 0.00 0.00 1,593,333.00 1,000,000.00 525,000.00 0.00 0.00 0.00
Mila (filter this donor) AI capabilities/AI safety Site 2,687,931.00 50,000.00 0.00 237,931.00 0.00 0.00 0.00 2,400,000.00 0.00 0.00
Eleuther AI (filter this donor) 2,642,273.00 2,642,273.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
FAR AI (filter this donor) 2,620,493.00 1,006,500.00 1,188,193.00 425,800.00 0.00 0.00 0.00 0.00 0.00 0.00
AI Safety Support (filter this donor) 2,023,716.00 443,716.00 1,580,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Future of Humanity Institute (filter this donor) Global catastrophic risks/AI safety/Biosecurity and pandemic preparedness FB Tw WP Site TW 1,994,000.00 0.00 0.00 0.00 0.00 0.00 0.00 1,994,000.00 0.00 0.00
UCLA School of Law (filter this donor) Tw WP Site 1,536,222.00 0.00 0.00 0.00 0.00 0.00 0.00 1,536,222.00 0.00 0.00
Apollo Research (filter this donor) 1,535,480.00 1,535,480.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Alignment Research Center (filter this donor) 1,515,000.00 0.00 1,515,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Tübingen (filter this donor) 1,465,000.00 575,000.00 0.00 890,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Hofvarpnir Studios (filter this donor) 1,443,540.00 0.00 1,443,540.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Future of Life Institute (filter this donor) AI safety/other global catastrophic risks FB Tw WP Site 1,286,000.00 0.00 0.00 0.00 0.00 0.00 0.00 100,000.00 0.00 1,186,000.00
Longview Philanthropy (filter this donor) 770,076.00 770,076.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Guide Labs (filter this donor) 750,000.00 750,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Washington (filter this donor) FB Tw WP Site 730,000.00 0.00 0.00 730,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Conjecture (filter this donor) 702,380.00 245,000.00 457,380.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI Impacts (filter this donor) AI safety Site 696,893.00 150,000.00 364,893.00 0.00 50,000.00 0.00 100,000.00 0.00 32,000.00 0.00
Northeastern University (filter this donor) FB Tw WP Site 678,200.00 116,072.00 562,128.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Carnegie Mellon University (filter this donor) FB Tw WP Site 673,235.00 0.00 343,235.00 330,000.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Toronto (filter this donor) FB Tw WP Site 600,000.00 80,000.00 0.00 0.00 520,000.00 0.00 0.00 0.00 0.00 0.00
Carnegie Endowment for International Peace (filter this donor) FB Tw WP Site 597,717.00 0.00 597,717.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Study and Training Related to AI Policy Careers (filter this donor) 594,420.00 0.00 0.00 0.00 594,420.00 0.00 0.00 0.00 0.00 0.00
WestExec (filter this donor) 540,000.00 0.00 0.00 0.00 540,000.00 0.00 0.00 0.00 0.00 0.00
Stiftung Neue Verantwortung (filter this donor) 444,000.00 0.00 444,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Oxford (filter this donor) FB Tw WP Site 429,770.00 0.00 0.00 0.00 0.00 0.00 429,770.00 0.00 0.00 0.00
Modulo Research (filter this donor) 408,255.00 408,255.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of California, Santa Cruz (filter this donor) 379,000.00 114,000.00 0.00 265,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Daniel Dewey (filter this donor) 350,000.00 0.00 175,000.00 175,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Cornell University (filter this donor) FB Tw WP Site 342,645.00 342,645.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Southern California (filter this donor) FB Tw WP Site 320,000.00 0.00 0.00 320,000.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Maryland (filter this donor) FB Tw WP Site 312,959.00 312,959.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
National Academies of Sciences, Engineering, and Medicine (filter this donor) 309,441.00 0.00 309,441.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Yale University (filter this donor) FB Tw WP Site 299,320.00 0.00 0.00 0.00 0.00 0.00 0.00 299,320.00 0.00 0.00
AI Safety Hub (filter this donor) 298,839.00 0.00 298,839.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI Safety Communications Centre (filter this donor) 288,000.00 288,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
George Mason University (filter this donor) FB WP Site 277,435.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 277,435.00 0.00
University of Cambridge (filter this donor) FB Tw WP Site 250,000.00 0.00 0.00 250,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Centre for Effective Altruism (filter this donor) Effective altruism/movement growth FB Site 250,000.00 0.00 250,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Chicago (filter this donor) FB Tw WP Site 250,000.00 250,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Georgetown University (filter this donor) 246,564.00 0.00 0.00 246,564.00 0.00 0.00 0.00 0.00 0.00 0.00
Georgetown University (filter this donor) FB Tw WP Site 239,061.00 0.00 239,061.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Leap Labs (filter this donor) 230,000.00 230,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Université de Montréal (filter this donor) FB Tw WP Site 210,552.00 0.00 0.00 210,552.00 0.00 0.00 0.00 0.00 0.00 0.00
Electronic Frontier Foundation (filter this donor) FB Tw WP Site 199,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 199,000.00 0.00
University of Utah (filter this donor) FB Tw WP Site 171,773.00 171,773.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Purdue University (filter this donor) FB Tw WP Site 170,000.00 0.00 170,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI Scholarships (filter this donor) 159,000.00 0.00 0.00 0.00 0.00 0.00 159,000.00 0.00 0.00 0.00
Berryville Institute of Machine Learning (filter this donor) 150,000.00 0.00 0.00 150,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Forecasting Research Institute (filter this donor) 150,000.00 150,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Surge AI (filter this donor) 123,750.00 123,750.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Hypermind (filter this donor) 121,124.00 0.00 0.00 121,124.00 0.00 0.00 0.00 0.00 0.00 0.00
Center for Strategic and International Studies (filter this donor) 118,307.00 0.00 0.00 0.00 118,307.00 0.00 0.00 0.00 0.00 0.00
Mordechai Rorvig (filter this donor) 110,000.00 0.00 110,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jérémy Scheurer (filter this donor) 110,000.00 0.00 110,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Pennsylvania (filter this donor) FB Tw WP Site 110,000.00 110,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Brian Christian (filter this donor) 103,903.00 37,903.00 0.00 66,000.00 0.00 0.00 0.00 0.00 0.00 0.00
University of British Columbia (filter this donor) FB Tw WP Site 100,375.00 100,375.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jacob Steinhardt (filter this donor) 100,000.00 0.00 100,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Responsible AI Collaborative (filter this donor) 100,000.00 100,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Apart Research (filter this donor) 89,000.00 0.00 89,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Illinois (filter this donor) 80,000.00 80,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Legal Priorities Project (filter this donor) 75,000.00 75,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
AI Alignment Awards (filter this donor) 70,000.00 0.00 70,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Neel Nanda (filter this donor) 70,000.00 70,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Center for International Security and Cooperation (filter this donor) WP 67,000.00 0.00 0.00 0.00 67,000.00 0.00 0.00 0.00 0.00 0.00
Johns Hopkins University (filter this donor) FB Tw WP Site 55,000.00 0.00 0.00 0.00 55,000.00 0.00 0.00 0.00 0.00 0.00
Michael Page (filter this donor) 52,500.00 0.00 52,500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Swiss AI Safety Summer Camp (filter this donor) 51,248.00 51,248.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
World Economic Forum (filter this donor) FB Tw WP Site 50,000.00 0.00 0.00 0.00 50,000.00 0.00 0.00 0.00 0.00 0.00
California State University, San José (filter this donor) 39,000.00 39,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Rice, Hadley, Gates & Manuel LLC (filter this donor) 25,000.00 0.00 0.00 0.00 25,000.00 0.00 0.00 0.00 0.00 0.00
Distill (filter this donor) AI capabilities/AI safety Tw Site 25,000.00 0.00 0.00 0.00 0.00 0.00 0.00 25,000.00 0.00 0.00
Center for Long-Term Cybersecurity (filter this donor) 20,000.00 0.00 20,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Adam Jermyn (filter this donor) 19,231.00 19,231.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Alignment Research Engineer Accelerator (filter this donor) 18,800.00 18,800.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Press Shop (filter this donor) 17,000.00 0.00 0.00 0.00 17,000.00 0.00 0.00 0.00 0.00 0.00
Andrew Lohn (filter this donor) 15,000.00 0.00 0.00 0.00 15,000.00 0.00 0.00 0.00 0.00 0.00
Foundation Model Tracker (filter this donor) 15,000.00 0.00 15,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
OxAI Safety Hub (filter this donor) 11,622.00 0.00 11,622.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
GoalsRL (filter this donor) AI safety Site 7,500.00 0.00 0.00 0.00 0.00 0.00 7,500.00 0.00 0.00 0.00
Simon McGregor (filter this donor) 7,000.00 0.00 7,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Egor Krasheninnikov (filter this donor) 6,526.00 0.00 6,526.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Usman Anwar (filter this donor) 6,526.00 0.00 6,526.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
International Conference on Learning Representations (filter this donor) 3,500.00 0.00 0.00 0.00 3,500.00 0.00 0.00 0.00 0.00 0.00
Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai (filter this donor) 2,351.00 0.00 0.00 0.00 0.00 0.00 2,351.00 0.00 0.00 0.00
Smitha Milli (filter this donor) 370.00 0.00 0.00 0.00 370.00 0.00 0.00 0.00 0.00 0.00
Total -- -- 331,622,475.00 57,959,108.00 57,955,777.00 81,661,316.00 15,571,349.00 63,243,500.00 4,160,392.00 43,321,048.00 6,563,985.00 1,186,000.00

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Influencer Number of donations Number of donees Total 2023 2021 2020 2019 2018 2017
Luke Muehlhauser 24 16 124,730,450.00 16,000,000.00 50,603,554.00 2,726,896.00 55,000,000.00 400,000.00 0.00
Nick Beckstead 8 8 25,595,336.00 0.00 21,016,246.00 0.00 0.00 429,770.00 4,149,320.00
Daniel Dewey 27 16 13,640,498.00 0.00 1,550,000.00 77,370.00 5,591,000.00 3,180,622.00 3,241,506.00
Claire Zabel|Committee for Effective Altruism Support 2 1 10,356,250.00 0.00 0.00 7,703,750.00 2,652,500.00 0.00 0.00
Catherine Olsson|Daniel Dewey 5 4 4,540,000.00 0.00 2,240,000.00 2,300,000.00 0.00 0.00 0.00
Committee for Effective Altruism Support 2 2 2,043,333.00 0.00 0.00 2,043,333.00 0.00 0.00 0.00
Helen Toner 1 1 1,536,222.00 0.00 0.00 0.00 0.00 0.00 1,536,222.00
Catherine Olsson|Nick Beckstead 4 3 1,475,000.00 0.00 1,475,000.00 0.00 0.00 0.00 0.00
Catherine Olsson 3 2 991,584.00 0.00 991,584.00 0.00 0.00 0.00 0.00
Daniel Dewey|Catherine Olsson 1 1 520,000.00 0.00 0.00 520,000.00 0.00 0.00 0.00
Claire Zabel 3 2 510,000.00 0.00 210,000.00 150,000.00 0.00 150,000.00 0.00
Tom Davidson|Ajeya Cotra 1 1 50,000.00 0.00 0.00 50,000.00 0.00 0.00 0.00
Classified total 81 46 185,988,673.00 16,000,000.00 78,086,384.00 15,571,349.00 63,243,500.00 4,160,392.00 8,927,048.00
Unclassified total 121 80 145,633,802.00 41,959,108.00 3,574,932.00 0.00 0.00 0.00 34,394,000.00
Total 202 105 331,622,475.00 57,959,108.00 81,661,316.00 15,571,349.00 63,243,500.00 4,160,392.00 43,321,048.00

Graph of spending by influencer and year (incremental, not cumulative)


Graph of spending by influencer and year (cumulative)


Donation amounts by disclosures and year

If you hover over a cell for a given disclosure and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Disclosures Number of donations Number of donees Total 2017 2016 2015
Paul Christiano 2 2 30,500,000.00 30,000,000.00 500,000.00 0.00
Dario Amodei 1 1 30,000,000.00 30,000,000.00 0.00 0.00
Holden Karnofsky 1 1 30,000,000.00 30,000,000.00 0.00 0.00
Nick Beckstead 4 4 3,957,435.00 1,994,000.00 777,435.00 1,186,000.00
Daniel Dewey 3 3 2,771,435.00 1,994,000.00 777,435.00 0.00
Carl Shulman 1 1 1,994,000.00 1,994,000.00 0.00 0.00
Unknown, generic, or multiple 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Helen Toner 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Luke Muehlhauser 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Ben Hoffman 1 1 1,186,000.00 0.00 0.00 1,186,000.00
Jacob Steinhardt 1 1 500,000.00 0.00 500,000.00 0.00
Classified total 5 5 33,957,435.00 31,994,000.00 777,435.00 1,186,000.00
Unclassified total 197 102 297,665,040.00 11,327,048.00 5,786,550.00 0.00
Total 202 105 331,622,475.00 43,321,048.00 6,563,985.00 1,186,000.00

Graph of spending by disclosures and year (incremental, not cumulative)


Graph of spending by disclosures and year (cumulative)


Donation amounts by country and year

Sorry, we couldn't find any country information.

Full list of documents in reverse chronological order (28 documents)

Title (URL linked) Publication date Author Publisher Affected donors Affected donees Affected influencers Document scope Cause area Notes
Some thoughts on recent Effective Altruism funding announcements. It's been a big week in Effective Altruism2022-03-03James Ozden Open Philanthropy FTX Future Fund FTX Community Fund FTX Climate Fund Mercy For Animals Charity Entrepreneurship Miscellaneous commentaryLongtermism|Animal welfare|Global health and development|AI safety|Climate changeIn this blog post, cross-posted at https://forum.effectivealtruism.org/posts/Wpr5ssnNW5JPDDPvd/some-thoughts-on-recent-effective-altruism-funding (GW, IR) to the EA Forum, James Ozden discusses recent increases in funding by donors aligned with effective altruism (EA) and makes forecasts for the amount of annual money moved by 2025. Highlights of the post: 1. The entry of the FTX Future Fund is expected to increase the proportion of funds allocated to longtermist causes to increase to become more in line with what EA leaders think it should be (based on the data that https://80000hours.org/2021/08/effective-altruism-allocation-resources-cause-areas/ compiles). 2. Grantmaking capacity needs to be scaled up to match the increase in available funds. 3. The EA movement may need to shift from marginal thinking to coordination dynamics, as their funding amounts are no longer as marginal. 4. Entrepreneurs, founders, and incubators are needed. 6. We need to be more ambitious.
Our Progress in 2020 and Plans for 20212021-04-29Holden Karnofsky Open PhilanthropyOpen Philanthropy Broad donor strategyAI safety|Biosecurity and pandemic preparedness|Criminal justice reform|Animal welfare|Scientific research|Effective altruism|COVID-19The post compares progress made by Open Philanthropy in 2020 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2019-and-plans-2020 and then lays out plans for 2021. The post notes that grantmaking, including grants to GiveWell top charities, was over $200 million. The post reviews the following from 2020: continued grantmaking, worldview investigations, other cause prioritization work, hiring and other capacity building, impact evaluation, outreach to external donors, and plans for 2021.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similar to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
Our Progress in 2019 and Plans for 20202020-05-08Holden Karnofsky Open PhilanthropyOpen Philanthropy Broad donor strategyCriminal justice reform|Animal welfare|AI safety|Effective altruismThe post compares progress made by the Open Philanthropy Project in 2019 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2018-and-plans-2019 and then lays out plans for 2020. The post notes that grantmaking, including grants to GiveWell top charities, was over $200 million. The post reviews the following from 2019: continued grantmaking, growth of the operations team, impact evaluation (with good progress in evaluation of giving in criminal justice reform and animal welfare), worldview investigations (that was harder than anticipated, resulting in slower progress), other cause prioritization work, hiring and other capacity building, and outreach to external donors.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 2019 2019-12-18 Holden Karnofsky Open Philanthropy Chloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought Donation suggestion list Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safety Continuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support", i.e., organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents its views.
Co-funding Partnership with Ben Delo2019-11-11Holden Karnofsky Open PhilanthropyOpen Philanthropy Ben Delo PartnershipAI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Effective altruismBen Delo, co-founder of the cryptocurrency trading platform BitMEX, recently signed the Giving Pledge. He is entering into a partnership with the Open Philanthropy Project, providing funds, initially in the $5 million per year range, to support Open Phil's longtermist grantmaking, in areas including AI safety, biosecurity and pandemic preparedness, global catastrophic risks, and effective altruism. Later, the Machine Intelligence Research Institute (MIRI) would reveal at https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ that, of a $7.7 million grant from Open Phil, $1.46 million is coming from Ben Delo.
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR)2019-09-10Ryan Carey Effective Altruism ForumFounders Pledge Open Philanthropy OpenAI Machine Intelligence Research Institute Broad donor strategyAI safety|Global catastrophic risks|Scientific research|PoliticsRyan Carey replies to John Halstead's question on what Founders Pledge shoud research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low-mid prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals.
New grants from the Open Philanthropy Project and BERI2019-04-01Rob Bensinger Machine Intelligence Research InstituteOpen Philanthropy Berkeley Existential Risk Initiative Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces two grants to it: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (of a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/ The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/
Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security2019-03-20Tate Williams Inside PhilanthropyOpen Philanthropy Center for Security and Emerging Technology Third-party coverage of donor strategyAI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|SecurityThe article focuses on grantmaking by the Open Philanthropy Project in the areas of global catastrophic risks and security, particularly in AI safety and biosecurity and pandemic preparedness. It includes quotes from Luke Muehlhauser, Senior Research Analyst at the Open Philanthropy Project and the investigator for the $55 million grant https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology to the Center for Security and Emerging Technology (CSET). Muehlhauser was previously Executive Director at the Machine Intelligence Research Institute. It also includes a quote from Holden Karnofsky, who sees the early interest of effective altruists in AI safety as prescient. The CSET grant is discussed in the context of the Open Philanthropy Project's hits-based giving approach, as well as the interest in the policy space in better understanding of safety and governance issues related to technology and AI.
Committee for Effective Altruism Support2019-02-27Open PhilanthropyOpen Philanthropy Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute Broad donor strategyEffective altruism|AI safetyThe document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community" including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose size was determined by the community is the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
Occasional update July 5 2018 2018-07-05 Katja Grace AI Impacts Open Philanthropy Anonymous AI Impacts Donee periodic update AI safety Katja Grace gives an update on the situation at AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, it mentions a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation, as well as team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn.
The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.2018-02-27Robert Wiblin Kieran Harris Holden Karnofsky 80,000 HoursOpen Philanthropy Broad donor strategyAI safety|Global catastrophic risks|Biosecurity and pandemic preparedness|Global health and development|Animal welfare|Scientific researchThis interview, with full transcript, is an episode of the 80,000 Hours podcast. In the interview, Karnofsky provides an overview of the cause prioritization and grantmaking strategy of the Open Philanthropy Project, and also notes that the Open Philanthropy Project is hiring for a number of positions.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
The Open Philanthropy Project AI Fellows Program 2017-09-12 Open Philanthropy Open Philanthropy Broad donor strategy AI safety This announces an AI Fellows Program to support students doing Ph.D. work in AI-related fields who have an interest in AI safety. See https://www.facebook.com/vipulnaik.r/posts/10213116327718748 and https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussions.
A major grant from the Open Philanthropy Project2017-09-08Malo Bourgon Machine Intelligence Research InstituteOpen Philanthropy Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces that it has received a three-year grant at $1.25 million per year from the Open Philanthropy Project, and links to the announcement from Open Phil at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and notes "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."
My current thoughts on MIRI’s highly reliable agent design work (GW, IR)2017-07-07Daniel Dewey Effective Altruism ForumOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin.
Our Progress in 2016 and Plans for 2017 2017-03-14 Holden Karnofsky Open Philanthropy Open Philanthropy Broad donor strategy Scientific research|AI safety The blog post compares progress made by the Open Philanthropy Project in 2016 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2015-and-plans-2016 and then lays out plans for 2017. The post notes success in scaling up grantmaking, as hoped for in last year's plan. The spinoff from GiveWell is still not completed because it turned out to be more complex than expected, but it is expected to be finished in mid-2017. Open Phil highlights the hiring of three Scientific Advisors (Chris Somerville, Heather Youngs, and Daniel Martin-Alarcon) in mid-2016, as part of its scientific research work. The organization also plans to focus more on figuring out how to decide how much money to allocate between different cause areas, with Karnofsky's worldview diversification post https://www.openphilanthropy.org/blog/worldview-diversification also highlighted. There is no plan to scale up staff or grantmaking (unlike 2016, when the focus was to scale up hiring, and 2015, when the focus was to scale up staff).
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20162016-12-14Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policyOpen Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas.
Some Key Ways in Which I've Changed My Mind Over the Last Several Years2016-09-06Holden Karnofsky Open Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Reasoning supplementAI safetyIn this 16-page Google Doc, Holden Karnofsky, Executive Director of the Open Philanthropy Project, lists three issues he has changed his mind about: (1) AI safety (he considers it more important now), (2) effective altruism community (he takes it more seriously now), and (3) general properties of promising ideas and interventions (he considers feedback loops less necessary than he used to, and finding promising ideas through abstract reasoning more promising). The document is linked to and summarized in the blog post https://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF)2016-09-06Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyReviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06).
Machine Intelligence Research Institute — General Support2016-09-06Open Philanthropy Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyOpen Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email.
Here are the biggest things I got wrong in my attempts at effective altruism over the last ~3 years.2016-05-24Buck Shlegeris Buck Shlegeris Open Philanthropy Vegan Outreach Machine Intelligence Research Institute Broad donor strategyGlobal health|Animal welfare|AI safetyBuck Shlegeris, reflecting on his past three years as an effective altruist, identifies two mistakes he made in his past 3 years as an effective altruist: (1) "I thought leafleting about factory farming was more effective than GiveWell top charities. [...] I probably made this mistake because of emotional bias. I was frustrated by people who advocated for global poverty charities for dumb reasons. [...] I thought that if they really had that belief, they should either save their money just in case we found a great intervention for animals in the future, or donate it to the people who were trying to find effective animal right interventions. I think that this latter argument was correct, but I didn't make it exclusively." (2) "In 2014 and early 2015, I didn't pay as much attention to OpenPhil as I should have. [...] Being wrong about OpenPhil's values is forgivable, but what was really dumb is that I didn't realize how incredibly important it was to my life plan that I understand OpenPhil's values." (3) "I wish I'd thought seriously about donating to MIRI sooner. [...] Like my error #2, this is an example of failing to realize that when there's an unknown which is extremely important to my plans but I'm very unsure about it and haven't really seriously thought about it, I should probably try to learn more about it."
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity 2016-05-06 Holden Karnofsky Open Philanthropy Open Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Review of current state of cause area AI safety In this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why it is making funding in the area a priority.
Our Progress in 2015 and Plans for 20162016-04-29Holden Karnofsky Open PhilanthropyOpen Philanthropy Broad donor strategyScientific research|AI safetyThe blog post compares progress made by the Open Philanthropy Project in 2015 against plans laid out in https://www.openphilanthropy.org/blog/open-philanthropy-project-progress-2014-and-plans-2015 and then lays out plans for 2016. The post notes the following in relation to its 2015 plans: it succeeded in hiring and expanding the team, but had to scale back on its scientific research ambitions in mid-2015. For 2016, Open Phil plans to focus on scaling up its grantmaking and reducing its focus on hiring. AI safety is declared as an intended priority for 2016, with Daniel Dewey working on it full-time, and Nick Beckstead and Holden Karnofsky also devoting significant time to it. The post also notes plans to continue work on separating the Open Philanthropy Project from GiveWell.
Potential Global Catastrophic Risk Focus Areas2014-06-26Alexander Berger Open PhilanthropyOpen Philanthropy Broad donor strategyAI safety|Biosecurity and pandemic preparedness|Global catastrophic risksIn this blog post originally published at https://blog.givewell.org/2014/06/26/potential-global-catastrophic-risk-focus-areas/ Alexander Berger goes over a list of seven types of global catastrophic risks (GCRs) that the Open Philanthropy Project has considered. He details three promising areas that the Open Philanthropy Project is exploring more and may make grants in: (1) Biosecurity and pandemic preparedness, (2) Geoengineering research and governance, (3) AI safety. For the AI safety section, there is a note from Executive Director Holden Karnofsky saying that he sees AI safety as a more promising area than Berger does.
Thoughts on the Singularity Institute (SI) (GW, IR)2012-05-11Holden Karnofsky LessWrongOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute. This evaluation would be the starting point for the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI, but Karnofsky and the Open Philanthropy Project would later update in favor of AI safety in general and MIRI in particular; this evolution is described in https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit
Singularity Institute for Artificial Intelligence2011-04-30Holden Karnofsky GiveWellOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyIn this email thread on the GiveWell mailing list, Holden Karnofsky gives his views on the Singularity Institute for Artificial Intelligence (SIAI), the former name for the Machine Intelligence Research Institute (MIRI). The reply emails include a discussion of how much weight to give to, and what to learn from, the support for MIRI by Peter Thiel, a wealthy early MIRI backer. In the final email in the thread, Holden Karnofsky includes an audio recording with Jaan Tallinn, another wealthy early MIRI backer. This analysis likely influenced the review https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) published by Karnofsky the following year, as well as the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI.

Full list of donations in reverse chronological order (202 donations)

Graph of top 10 donees (for donations with known year of donation) by amount, showing the timeframe of donations

Graph of donations and their timeframes
DoneeAmount (current USD)Amount rank (out of 202)Donation dateCause areaURLInfluencerNotes
Eleuther AI (Earmark: Nora Belrose)2,642,273.00232023-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/eleuther-ai-interpretability-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support the work of Nora Belrose. Nora will conduct research on AI interpretability and hire other researchers to assist her in this work."

Other notes: Intended funding timeframe in months: 48.
Berkeley Existential Risk Initiative70,000.001572023-10AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-university-collaboration-program/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [BERI's] university collaboration program. Selected applicants become eligible for support and services from BERI that would be difficult or impossible to obtain through normal university channels. BERI will use these funds to increase the size of its 2024 cohort." The page https://existence.org/2023/07/27/trial-collaborations-2023.html is linked.

Other notes: Intended funding timeframe in months: 12.
RAND Corporation (Earmark: Jason Matheny)10,500,000.0072023-10AI safety/governancehttps://www.openphilanthropy.org/grants/rand-corporation-emerging-technology-initiatives/Luke Muehlhauser Donation process: This is a followup grant to the grant https://www.openphilanthropy.org/grants/rand-corporation-emerging-technology-fellowships-and-research/ to the same grantee for similar purposes.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page lists the following initiatives to be funded by the grant: "(1) A technology policy training program. (2) Support for the Pardee RAND Graduate School. (3) A new research center focused on China studies. (4) A research fund that will help to produce information for policymakers about emerging technology and security priorities."
Berkeley Existential Risk Initiative70,000.001572023-09AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-scalable-oversight-dataset/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant " to support the creation of a scalable oversight dataset. The purpose of the dataset is to collect questions that non-experts can’t answer even with the internet at their disposal; these kinds of questions can be used to test how well AI systems can lead humans to the right answers without misleading them."
FAR AI166,500.001222023-09AI safety/technical researchhttps://www.openphilanthropy.org/grants/far-ai-alignment-workshop/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a two-day alignment workshop in advance of NeurIPS 2023, a major machine learning and computational neuroscience conference."

Other notes: Intended funding timeframe in months: 1.
Northeastern University (Earmark: David Bau|Sam Marks)116,072.001372023-09AI safety/technical researchhttps://www.openphilanthropy.org/grants/northeastern-university-mechanistic-interpretability-research/-- Donation process: This is a followup grant to the grant https://www.openphilanthropy.org/grants/northeastern-university-large-language-model-interpretability-research/ to David Bau's lab.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a postdoctoral position for Sam Marks in Professor David Bau’s lab, where Sam will conduct research on mechanistic interpretability." The webpages https://baulab.info/ and https://www.neelnanda.io/mechanistic-interpretability/quickstart are linked.
OpenMined6,000,000.00122023-09AI safety/technical researchhttps://www.openphilanthropy.org/grants/openmined-software-for-ai-audits/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work on developing software that facilitates access to advanced AI systems for external researchers and auditors while preserving privacy, security, and intellectual property."
University of Pennsylvania (Earmark: Peter Conti-Brown)110,000.001392023-09AI safety/governancehttps://www.openphilanthropy.org/grants/university-of-pennsylvania-ai-governance-roundtables/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a series of roundtables led by Professor Peter Conti-Brown. At these events, experts will discuss how insights from financial regulation might inform emerging discussions on AI governance." The webpage https://lgst.wharton.upenn.edu/profile/petercb/ is linked.

Other notes: Intended funding timeframe in months: 24.
Surge AI (Earmark: Gabriel Recchia)123,750.001332023-09AI safety/technical researchhttps://www.openphilanthropy.org/grants/surge-ai-data-production-for-ai-safety-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Gabriel Recchia in producing data points for a research project on sandwiching experiments and capability evaluations of large language models." The webpage https://uk.linkedin.com/in/gabriel-recchia-38575b10 and the LessWrong post section https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models#Potential_near_future_projects___sandwiching_ (GW, IR) are linked.

Other notes: The grant https://www.openphilanthropy.org/grants/modulo-research-ai-safety-research/ to Modulo Research made around the same time also supports work by the same person (Gabriel Recchia) on a "research project on sandwiching experiments and capability evaluations of large language models." Intended funding timeframe in months: 24.
AI Impacts150,000.001262023-08AI safety/strategyhttps://www.openphilanthropy.org/grants/ai-impacts-expert-survey-on-progress-in-ai/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support an expert survey on progress in artificial intelligence. AI Impacts works to answer questions about the future of artificial intelligence."

Other notes: AI Impacts previously did expert surveys on the state of AI, including https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/ in 2016 and (a rerun) https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ in 2022. This survey is likely a followup/rerun of those surveys.
University of Utah (Earmark: Daniel Brown)31,773.001782023-08AI safety/movement growthhttps://www.openphilanthropy.org/grants/university-of-utah-course-on-human-ai-alignment/-- Donation process: The grant is part of Open Philanthropy Course Development Grants https://www.openphilanthropy.org/open-philanthropy-course-development-grants/ for which applications can be submitted online. The grant page https://www.openphilanthropy.org/grants/university-of-utah-course-on-human-ai-alignment/ says: "We sought applications for this funding to support the development of courses on a range of topics that are relevant to certain areas of Open Philanthropy’s grantmaking."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Daniel Brown in developing a course on human-AI alignment."

Donor reason for donating that amount (rather than a bigger or smaller amount): The online application form linked from https://www.openphilanthropy.org/open-philanthropy-course-development-grants/ requires applicants to include an estimated budget. The amount granted was likely based on the budget submitted by the applicant.

Donor reason for donating at this time (rather than earlier or later): The timing was likely determined by the timing of the grant application, as well as the academic year cycle. Details are not publicly available.

Donor thoughts on making further donations to the donee: The "Grantee expectations" section at https://www.openphilanthropy.org/open-philanthropy-course-development-grants/ does not specifically talk about followup grants. It says: "We would like grantees to continue teaching the developed course in the future (at least three times), but this is not a requirement of a grant. Grantees are required to provide us, after completion of the course, with a copy of the course syllabus, a copy of the final exam/final paper (if permitted of by the relevant university’s policies), enrollment statistics, student evaluations, and a brief summary (roughly half a page in length) describing their own experience teaching the course. We will strongly encourage grantees to make their syllabi available online, but we won’t require this." This suggests that the grantee is not expected to receive followup grants for teaching the same course, since the grant is for course *development*; further grants for different courses may be possible.

Other notes: On the Open Philanthropy website, the cause area is listed as global catastrophic risks rather than AI safety. We're using AI safety on the donations list website, so this may result in inconsistencies in some totals between the Open Philanthropy website and the donations list website.
Legal Priorities Project75,000.001562023-08AI safety/governancehttps://www.openphilanthropy.org/grants/legal-priorities-project-law-ai-summer-research-fellowship/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [grantee's] Summer Research Fellowship in Law & AI. Participants will work with researchers at LPP on projects at the intersection of law and risks from advanced AI."
Modulo Research (Earmark: Gabriel Recchia)408,255.00772023-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/modulo-research-ai-safety-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research — led by Gabriel Recchia — into large language model sandwiching experiments, dataset development, and capability evaluations." The webpage https://uk.linkedin.com/in/gabriel-recchia-38575b10 and the LessWrong blog post section https://www.lesswrong.com/posts/PZtsoaoSLpKjjbMqM/the-case-for-aligning-narrowly-superhuman-models#Potential_near_future_projects___sandwiching_ (GW, IR) are linked.

Other notes: The grant https://www.openphilanthropy.org/grants/surge-ai-data-production-for-ai-safety-research/ to Surge AI made around the same time also supports work by the same person (Gabriel Recchia) on a "research project on sandwiching experiments and capability evaluations of large language models." Intended funding timeframe in months: 24.
AI Safety Communications Centre288,000.00962023-08AI safetyhttps://www.openphilanthropy.org/grants/effective-ventures-foundation-ai-safety-communications-centre/-- Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "This project provides the AI safety community with communications support, and connects journalists to AI safety experts and resources." https://aiscc.org/ is the linked grantee website.

Other notes: Grant via the Effective Ventures Foundation.
Guide Labs750,000.00542023-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/guide-labs-open-access-interpretability-project/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a project developing and testing AI error diagnostics and model guiding tools. To support AI safety and alignment, these tools will be made freely available to the general public."

Other notes: Intended funding timeframe in months: 18.
Swiss AI Safety Summer Camp51,248.001662023-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/swiss-ai-safety-summer-camp-ai-safety-bootcamp/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Swiss AI Safety Summer Camp to support its 2023 bootcamp. The program offers a multidisciplinary learning experience through activities such as deep learning courses, paper readings, discussions, presentations, and lectures." Although not linked from the grant page, the summer camp webpage is at https://www.aisafetycamp.ch/ and gives a timeframe of 4th to 16th September 2023 for the camp.

Other notes: Currency info: donation given as 45,165.00 CHF (conversion done via donor calculation); intended funding timeframe in months: 1.
Berkeley Existential Risk Initiative (Earmark: Anca Dragan)35,000.001762023-07AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-lab-retreat/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a retreat for Anca Dragan’s BAIR lab group, where members will discuss potential risks from advanced artificial intelligence."

Other notes: Intended funding timeframe in months: 1.
FAR AI460,000.00702023-07AI safety/technical researchhttps://www.openphilanthropy.org/grants/far-ai-general-support-2023/-- Intended use of funds (category): Organizational general support
Redwood Research5,300,000.00152023-06AI safety/technical researchhttps://www.openphilanthropy.org/grants/redwood-research-general-support-2023/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. Redwood Research is a nonprofit research institution focused on aligning advanced AI with human interests."
Apollo Research1,535,480.00362023-06AI safety/technical research/governancehttps://www.openphilanthropy.org/grants/apollo-research-startup-funding/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for startup costs. Apollo Research is a new organization that will conduct research on how to evaluate whether AI models are aligned and safe, with a focus on interpretability and detecting whether models are deceptive. Apollo also plans to do research on AI governance."

Donor reason for donating at this time (rather than earlier or later): The timing is likely determined by the timing of the start of the funded organization (Apollo Research).
Centre for the Governance of AI1,000,000.00512023-05AI safety/governancehttps://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-general-support-2/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "to the Centre for the Governance of AI (GovAI) for general support. GovAI conducts research on AI governance and works to develop a talent pipeline for those interested in entering the field."
Mila (Earmark: Jacob Steinhardt)50,000.001682023-05AI safety/technical researchhttps://www.openphilanthropy.org/grants/mila-workshop-on-human-level-ai/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a workshop on human-level artificial intelligence, led by Professor Jacob Steinhardt, that will bring together experts on AI and AI alignment."

Other notes: Intended funding timeframe in months: 1.
University of Chicago (Earmark: Chenhao Tan)250,000.001022023-05AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-chicago-research-on-complementary-ai/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research, led by Professor Chenhao Tan, on how to train AI systems to complement human efforts." Chenhao Tan's website https://chenhaot.com/ is linnked from the grant page.
Rethink Priorities302,390.00922023-04AI safety/governancehttps://www.openphilanthropy.org/grants/rethink-priorities-ai-governance-workshop/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support an in-person workshop bringing together professionals working on AI governance."

Other notes: Intended funding timeframe in months: 1.
Rethink Priorities154,810.001242023-04AI safety/governancehttps://www.openphilanthropy.org/grants/rethink-priorities-ai-governance-research-2023/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research on AI governance, with a focus on hardware security features."

Other notes: As of 2023-10-14, the text of the grant page states the amount to be $154,801, but the metadata on top and on the grants list page states the amount to be $154,810.
Center for AI Safety4,025,729.00192023-04AI safety/technical research/movement growthhttps://www.openphilanthropy.org/grants/center-for-ai-safety-general-support-2023/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. The Center for AI Safety works on research, field-building, and advocacy to reduce existential risks from artificial intelligence."
Epoch6,922,565.00112023-04AI safety/strategyhttps://www.openphilanthropy.org/grants/epoch-general-support-2023/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. Epoch researches trends in machine learning to better understand the pace of progress in artificial intelligence, and to help forecast the development of advanced AI and its subsequent economic impacts."
Conjecture (Earmark: SERI-MATS program)245,000.001072023-04AI safety/technical research/talent pipelinehttps://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [Conjecture's] collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment. This grant will support a London-based extension of the MATS program’s third cohort, which we supported last year."

Donor reason for selecting the donee: This grant is a followup to the grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the original SERI-MATS cohort (the third cohort). Conjecture was likely selected for the grant due to its interest, willingness and ability to manage the logistics of the extension in London, and its success at running a similar extension for the second cohort; that extension was also funded by Open Philanthropy (see https://www.openphilanthropy.org/grants/conjecture-seri-mats-program-in-london/ for details).

Donor reason for donating that amount (rather than a bigger or smaller amount): While no reason is provided for the amount, the amount is a little over 10% of the amount for the SERI-MATS grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ and a little over half the amount of the previous extension grant. The previous extension grant had been about half the corresponding SERI-MATS cohort grant. The reason for the reduced amount of this grant is not clear.

Donor reason for donating at this time (rather than earlier or later): The grant is made six months after the grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the SERI-MATS cohort whose extension is being funded by this grant. This makes sense since the extension happens after the program, whose duration (including application steps) is about 6 months.
University of Utah (Earmark: Daniel Brown)140,000.001322023-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-utah-ai-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research led by Professor Daniel Brown on ways to verify the extent to which an AI system is aligned with human values."
AI Safety Support (Earmark: Owain Evans)443,716.00742023-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/ai-safety-support-situational-awareness-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Three grants "to support research led by Owain Evans to evaluate whether machine learning models have situational awareness. These grants were made to AI Safety Support, Effective Ventures Foundation USA, and the Berkeley Existential Risk Initiative, and will support salaries, office space, and compute for this research project."

Other notes: Both the Open Philanthropy website and the donations list website list the grantee as AI Safety Support, but this is actually a combination of three grants, one each to "AI Safety Support, Effective Ventures Foundation USA, and the Berkeley Existential Risk Initiative"; the single donee is for simplicity and due to system limitations.
RAND Corporation (Earmark: Jason Matheny)5,500,000.00142023-04AI safety/governancehttps://www.openphilanthropy.org/grants/rand-corporation-emerging-technology-fellowships-and-research/Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to be spent at RAND President Jason Matheny’s discretion. Matheny has designated this funding to launch two new initiatives: a technology policy training program, and a research fund to help produce information that policymakers need to make wise decisions about emerging technology and security priorities."

Donor reason for selecting the donee: The grant page says: "We have been impressed with Matheny’s past work on technology and security — at IARPA, at the Center for Security and Technology, and in the White House — and we believe RAND is well-positioned to use such funding to great impact."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/rand-corporation-emerging-technology-initiatives/ for similar purposes suggests continued satisfaction with the grantee.
University of Maryland312,959.00902023-04AI safetyhttps://www.openphilanthropy.org/grants/university-of-maryland-policy-fellowship-2023/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the University of Maryland to support a fellowship related to technology and national security."
National Science Foundation5,000,000.00172023-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/national-science-foundation-safe-learning-enabled-systems/-- Intended use of funds (category): Regranting

Intended use of funds: Grant "to support [National Science Foundation's] Safe Learning-Enabled Systems program, which will regrant the funds to foundational research projects aimed at finding ways to guarantee the safety of machine learning systems." https://new.nsf.gov/funding/opportunities/safe-learning-enabled-systems is the linked webpage for the Safe Learning-Enabled Systems program.
Leap Labs230,000.001112023-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/leap-labs-interpretability-research/-- Donation process: The grant page says: "This grant was made primarily based on the recommendation of an external technical advisor."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to Leap Labs to support research on AI interpretability, particularly model agnostic interpretability." The links are as follows: https://www.alignmentforum.org/s/Tp3ryR4AxY56ctGh2/p/CzZ6Fch4JSpwCpu6C for AI interpretability and https://www.lesswrong.com/posts/uXGLciramzNfb8Hvz/why-i-m-working-on-model-agnostic-interpretability (GW, IR) for model agnostic interpretability.

Donor reason for selecting the donee: The grant page says: "This grant was made primarily based on the recommendation of an external technical advisor."
FAR AI280,000.00972023-03AI safety/movement growthhttps://www.openphilanthropy.org/grants/far-ai-far-labs-office-space/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support FAR Labs, an office space in Berkeley for people working on AI safety and alignment."
FAR AI100,000.001442023-03AI safety/technical researchhttps://www.openphilanthropy.org/grants/far-ai-ai-interpretability-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a research project, led by Open Philanthropy AI Fellow Alex Tamkin, aimed at developing a neural network architecture that could serve as a more interpretable alternative to the transformer architecture used in leading language models."
California State University, San José (Earmark: Yan Zhang)39,000.001742023-03AI safety/governance/forecastinghttps://www.openphilanthropy.org/grants/san-jose-state-university-ai-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Yan Zhang on AI forecasting and AI governance." The webpage https://www.sjsu.edu/people/yan.zhang/ is linked.
Forecasting Research Institute150,000.001262023-03AI safety/strategy/forecastinghttps://www.openphilanthropy.org/grants/forecasting-research-institute-ai-forecasting-project/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a project that will bring together forecasters who disagree about the magnitude of AI existential risk to discuss and make predictions about AI, with the goal of identifying key views and arguments driving their forecasts and disagreements. The participants will include some “superforecasters” (people with a strong track record of making accurate predictions) and some AI subject-matter experts, among others."
University of Illinois (Earmark: Ben Levinstein)80,000.001532023-03AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-illinois-ai-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor Ben Levinstein’s research on AI alignment." Levinstein's website https://www.levinstein.org/ is linked.
Center for AI Safety1,433,000.00402023-02AI safety/technical research/strategyhttps://www.openphilanthropy.org/grants/center-for-ai-safety-philosophy-fellowship/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support the CAIS Philosophy Fellowship, which is a research fellowship that will support philosophers researching topics related to AI safety. This grant also supported a workshop on adversarial robustness, as well as prizes for safety-related competitions at the 2022 NeurIPS conference." Links: https://philosophy.safe.ai/ for CAIS Philosophy Fellowship, https://eccv22-arow.github.io/ for the workshop, and https://trojandetection.ai/ and https://neurips2022.mlsafety.org/ for the prizes.
Epoch188,558.001182023-02AI safety/strategyhttps://www.openphilanthropy.org/grants/epoch-ai-worldview-investigations/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [Epoch's] “worldview investigations” related to AI." The linked blog post section https://www.openphilanthropy.org/research/our-progress-in-2019-and-plans-for-2020/#worldview-investigations describes this in more detail, starting with: "Identify debatable views we hold that play a key role in our cause prioritization, such as the view that there’s a nontrivial likelihood of transformative artificial intelligence being developed by 2036."
Longview Philanthropy770,076.00532023-02AI safety/governancehttps://www.openphilanthropy.org/grants/longview-philanthropy-ai-policy-development-at-the-oecd/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to Longview Philanthropy to support their collaboration with the Organization for Economic Co-operation and Development (OECD) on a project to develop potential policies that could reduce existential risks from artificial intelligence."

Other notes: Currency info: donation given as 720,000.00 EUR (conversion done via donor calculation).
University of Tübingen (Earmark: Matthias Bethge)575,000.00612023-02AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-tuebingen-adversarial-robustness-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research led by Professor Matthias Bethge on adversarial robustness as a means to improve AI safety."
Brian Christian37,903.001752023-02AI safety/strategyhttps://www.openphilanthropy.org/grants/brian-christian-psychology-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a DPhil in psychology at the University of Oxford. His research will focus on human preferences, with the goal of informing efforts to align AI systems with human values."

Other notes: Currency info: donation given as 29,700.00 GBP (conversion done via donor calculation).
Responsible AI Collaborative100,000.001442023-02AI safety/strategyhttps://www.openphilanthropy.org/grants/responsible-ai-collaborative-ai-incident-database/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support its work maintaining the AI Incident Database, which is a database of incidents where AI systems have caused real-world harm." https://incidentdatabase.ai/ is the linked webpage for the AI Incident Database.
Alignment Research Engineer Accelerator18,800.001892023-02AI safety/movement growthhttps://www.openphilanthropy.org/grants/alignment-research-engineer-accelerator-ai-safety-technical-program/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "to support the Alignment Research Engineer Accelerator (ARENA), which is a program to help individuals interested in AI safety improve their technical skills in machine learning."
Cornell University (Earmark: Lionel Levine)342,645.00832023-02AI safety/technical researchhttps://www.openphilanthropy.org/grants/cornell-university-ai-safety-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor Lionel Levine’s research related to AI alignment and safety."
University of Toronto (Earmark: Toryn Klassen)80,000.001532023-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-toronto-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Toryn Klassen’s research on topics related to AI alignment."
University of California, Santa Cruz (Earmark: Cihang Xie)114,000.001382023-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-california-santa-cruz-adversarial-robustness-research-2023/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research, led by Professor Cihang Xie, on adversarial robustness in AI systems. This funding will support salaries and other costs for two graduate students in Professor Xie’s lab." The webpage https://cihangxie.github.io/ is linked.

Donor reason for donating at this time (rather than earlier or later): This grant is made two years after the previous grant, which was intended to be a three-year grant, so the new grant comes a year before the old grant's funding runs out. The reason for the timing is not explicitly specified.
University of British Columbia (Earmark: Jeff Clune)100,375.001432023-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-british-columbia-ai-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "over two years to the University of British Columbia to support research led by Professor Jeff Clune on AI alignment." The webpage https://www.cs.ubc.ca/people/jeff-clune is linked.

Other notes: Intended funding timeframe in months: 24.
Adam Jermyn19,231.001872023-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/adam-jermyn-independent-ai-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to Adam Jermyn to support his independent technical research on AI alignment."
Neel Nanda70,000.001572023-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/neel-nanda-interpretability-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to Neel Nanda to support his independent research on interpretability. His work is aimed at improving human understanding of neural networks and machine learning models."
FAR AI (Earmark: Alex Tamkin)50,000.001682022-12AI safety/technical researchhttps://www.openphilanthropy.org/grants/far-ai-interpretability-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [FAR AI's] research on machine learning interpretability, in collaboration with Open Philanthropy AI Fellow Alex Tamkin."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/far-ai-ai-interpretability-research/ for the same research area and same leader (Alex Tamkin), as well as several other grants to the organization, suggest continued satisfaction with the grantee.
FAR AI49,500.001722022-12AI safety/technical researchhttps://www.openphilanthropy.org/grants/far-ai-inverse-scaling-prize/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support their Inverse Scaling Prize, which is a contest that awards prizes to contestants who find examples of tasks where language models perform worse as they scale."

Donor reason for donating that amount (rather than a bigger or smaller amount): The announcement post https://www.alignmentforum.org/posts/eqxqgFxymP8hXDTt5/announcing-the-inverse-scaling-prize-usd250k-prize-pool (published six months prior to the grant) states that the total prize pool is $250,000. https://github.com/inverse-scaling/prize#prize-information says "2023/03/21 Update: The prize pool has been funded by Open Philanthropy" suggesting that the amount provided by Open Philanthropy closed the funding gap to the target amount of $250,000.

Donor reason for donating at this time (rather than earlier or later): The grant is made six months after the announcement https://www.alignmentforum.org/posts/eqxqgFxymP8hXDTt5/announcing-the-inverse-scaling-prize-usd250k-prize-pool of the $250,000 prize pool for the prize. https://github.com/inverse-scaling/prize#prize-information says "2023/03/21 Update: The prize pool has been funded by Open Philanthropy"; this suggests that Open Philanthropy made the grant in light of the already-running prize in order to fill the funding gap.

Other notes: See the announcement post https://www.alignmentforum.org/posts/eqxqgFxymP8hXDTt5/announcing-the-inverse-scaling-prize-usd250k-prize-pool and the GitHub repository https://github.com/inverse-scaling/prize for more details.
FAR AI625,000.00572022-12AI safety/technical researchhttps://www.openphilanthropy.org/grants/far-ai-general-support/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. FAR AI works to incubate and accelerate research agendas to ensure AI systems are more trustworthy and beneficial to society."

Donor retrospective of the donation: The followup general support grant https://www.openphilanthropy.org/grants/far-ai-general-support-2023/ as well as other followup grants to FAR AI suggest continued satisfaction with the grantee.
Georgetown University239,061.001082022-12AI safetyhttps://www.openphilanthropy.org/grants/georgetown-university-policy-fellowship-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a fellowship related to AI and cybersecurity policy."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount of the grant is very similar to the amount of the previous grant https://www.openphilanthropy.org/grants/georgetown-university-policy-fellowship-2021/ to the grantee for the same fellowship the previous year.

Donor reason for donating at this time (rather than earlier or later): The grant is made exactly one year after the previous grant https://www.openphilanthropy.org/grants/georgetown-university-policy-fellowship-2021/ to the grantee for the same fellowship, suggesting that this is an annual renewal grant as the fellowship is being run for a second year.
Apart Research89,000.001512022-12AI safetyhttps://www.openphilanthropy.org/grants/apart-research-ai-alignment-hackathons/-- Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "two grants totaling $130,050 to Apart Research to support their work hosting four “hackathons” where participants will work on small projects related to AI alignment."

Other notes: This is a total across two grants.
Jérémy Scheurer110,000.001392022-12AI safety/technical research/talent pipelinehttps://www.openphilanthropy.org/grants/jeremy-scheurer-independent-ai-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [grantee's] independent research on AI alignment." The Google Scholar citations page https://scholar.google.com/citations?user=_6nYXQYAAAAJ is linked from the grantee's name.

Donor reason for selecting the donee: The grant page says: "This is part of our strategy to grow the field of AI researchers who are focused on reducing potential risks from advanced artificial intelligence."
Simon McGregor7,000.001952022-12AI safety/technical researchhttps://www.openphilanthropy.org/grants/simon-mcgregor-ai-risk-workshop/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [grantee's] work to organize a workshop on AI risk."
Purdue University (Earmark: Xiangyu Zhang)170,000.001212022-12AI safety/technical researchhttps://www.openphilanthropy.org/grants/purdue-university-language-model-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research led by Professor Xiangyu Zhang on improving the robustness of language models against adversarial attacks." The webpage https://www.cs.purdue.edu/homes/xyzhang/ is linked.
Berkeley Existential Risk Initiative100,000.001442022-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support-2/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. BERI seeks to reduce existential risks to humanity by providing services and support to university-based research groups, including the Center for Human-Compatible AI at the University of California, Berkeley."
Berkeley Existential Risk Initiative (Earmark: SERI-MATS program)2,047,268.00282022-11AI safety/technical research/talent pipelinehttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support their collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community. This grant will support the MATS program’s third cohort."

Donor reason for donating at this time (rather than earlier or later): The grant is made in time for the third cohort of the SERI-MATS program; this is the cohort being funded by the grant.
Intended funding timeframe in months: 6

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ for the London-based extension suggests continued satisfaction with this funded program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year. See also the companion grants: https://www.openphilanthropy.org/grants/ai-safety-support-seri-mats-program/ to AI Safety Support and https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ to Conjecture for the London-based extension.
Alignment Research Center1,250,000.00442022-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/alignment-research-center-general-support-november-2022/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. The Alignment Research Center conducts research on how to align AI with human interests, with a focus on techniques that could be adopted in existing machine learning systems and effectively scale up to future systems."

Donor reason for selecting the donee: While no reason is specified in the grant page, it's worth noting that the founder of the donee organization, Paul Christiano, has previously been a technical advisor to Open Philanthropy, and has been affiliated with multiple organizations (Machine Intelligence Research Institute, OpenAI, and Ought) that have previously received funding from Open Philanthropy for AI safety. These past connections may have influenced the grant.

Donor reason for donating at this time (rather than earlier or later): The grant is made eight months after the previous $265,000 grant https://www.openphilanthropy.org/grants/alignment-research-center-general-support/ and likely reflects a renewal of that earlier funding once it had been used up.
Intended funding timeframe in months: 24
Center for AI Safety5,160,000.00162022-11AI safety/technical research/movement growthhttps://www.openphilanthropy.org/grants/center-for-ai-safety-general-support/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. The Center for AI Safety does technical research and field-building aimed at reducing catastrophic and existential risks from artificial intelligence."

Donor retrospective of the donation: The followup general support grant https://www.openphilanthropy.org/grants/center-for-ai-safety-general-support-2023/ in 2023 for a similar amount suggests continued satisfaction with the grantee.
AI Safety Support (Earmark: SERI-MATS program)1,538,000.00342022-11AI safety/technical research/talent pipelinehttps://www.openphilanthropy.org/grants/ai-safety-support-seri-mats-program/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [AI Safety Support's] collaboration with Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with in-person alignment research communities."

Other notes: See also the companion grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ to Berkeley Existential Risk Initiative and the grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ to Conjecture for the London-based extension.
AI Safety Hub63,839.001632022-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/ai-safety-hub-safety-labs/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [grantee's] Safety Labs program, which will match students with mentors while the students research questions related to AI safety."

Other notes: Currency info: donation given as 53,700.00 GBP (conversion done via donor calculation).
Northeastern University (Earmark: David Bau)562,128.00622022-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/northeastern-university-large-language-model-interpretability-research/-- Donation process: This is a followup grant to the grant to David Bau made as part of a collection of grants https://www.openphilanthropy.org/grants/funding-for-ai-alignment-projects-working-with-deep-learning-systems/ providing funding for projects working with deep learning systems. That previous grant had been made through grant applications sought at https://www.openphilanthropy.org/request-for-proposals-for-projects-in-ai-alignment-that-work-with-deep-learning-systems/ (a request for proposals).

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor David Bau’s research on interpreting large language models." The webpage https://baulab.info/ is linked.

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/northeastern-university-mechanistic-interpretability-research/ to Northeastern University for David Bau's lab suggests continued satisfaction with the grantee.

Other notes: Intended funding timeframe in months: 24.
Mordechai Rorvig110,000.001392022-11AI safety/movement growthhttps://www.openphilanthropy.org/grants/mordechai-rorvig-independent-ai-journalism/-- Donation process: The grantee's webpage gives context on why the grantee sought the grant: "In November 2022, I was awarded a grant from Open Philanthropy, a grantmaking organization, to provide one year’s worth of support for my independent journalism work in computer science and AI. This grant neither affects my editorial independence nor indicates an endorsement of my writing by the Open Philanthropy organization. I sought out grants after leaving Quanta in August 2022, and becoming increasingly informed about what I believe is a severe state of underfunding in science journalism, particularly for areas as important as computer science and AI." The Twitter thread https://twitter.com/mordecwhy/status/1559254697940336640 is linked from the webpage.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support his independent journalism on topics related to computer science, AI, and AI safety." The webpage https://mordechairorvig.com/ is linked.
Jacob Steinhardt100,000.001442022-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/jacob-steinhardt-ai-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to provide operational support to Steinhardt’s lab at the University of California Berkeley, which specializes in research on how to align machine learning systems." The webpage https://jsteinhardt.stat.berkeley.edu/ is linked.
Conjecture (Earmark: SERI-MATS program)457,380.00712022-10AI safety/technical researchhttps://www.openphilanthropy.org/grants/conjecture-seri-mats-program-in-london/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [Conjecture's] collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment. This grant will support a London-based extension for a MATS cohort that started in Berkeley. Conjecture will use this funding to provide office space in London and operations support."

Donor reason for selecting the donee: The grant is a followup to the grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ for the original SERI-MATS cohort (the second cohort). Conjecture was likely selected for the grant due to its interest, willingness and ability to manage the logistics of the extension in London.

Donor reason for donating that amount (rather than a bigger or smaller amount): While no reason is provided for the amount, the amount is a little under half the amount granted at https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ for the SERI-MATS cohort whose extension is being funded by this grant.

Donor reason for donating at this time (rather than earlier or later): The grant is made six months after the grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ for the SERI-MATS cohort whose extension is being funded by this grant. This makes sense since the extension happens after the program, whose duration (including application steps) is about 6 months.

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ for a similar London-based extension of the third SERI-MATS cohort suggests continued satisfaction with the program being funded.
Foundation Model Tracker (Earmark: Thomas Liao)15,000.001912022-10AI safety/strategyhttps://www.openphilanthropy.org/grants/thomas-liao-foundation-model-tracker/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to Thomas Liao to support his work on maintaining Foundation Model Tracker, a website that tracks the release of large AI models." The webpage https://foundationmodeltracker.com/ is linked, but does not work as of 2023-11-19. However, the webpage https://foundationmodeltracker.notion.site/foundationmodeltracker/Model-Tracker-v0-9-794ba77f74ec469186efdbdb87e9b8e6 that https://foundationmodeltracker.com/ used to redirect to still works.
OxAI Safety Hub (Earmark: Catherine Brewer)11,622.001932022-10AI safety/movement growthhttps://www.openphilanthropy.org/grants/catherine-brewer-oxai-safety-hub/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to Catherine Brewer to support the OxAI Safety Hub, which is a new Oxford-based group working on building the AI safety community."

Other notes: Currency info: donation given as 10,540.00 GBP (conversion done via donor calculation).
Centre for the Governance of AI50,532.001672022-09AI safety/governancehttps://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-compute-strategy-workshop/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a workshop bringing together compute experts from several subfields, such as large-model infrastructure, ASIC design, and governance, to discuss compute governance ideas that could reduce existential risk from artificial intelligence."
Centre for the Governance of AI19,200.001882022-09AI safety/governancehttps://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-research-assistant/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a new research assistant."
AI Safety Hub (Earmark: Julia Karbing)235,000.001102022-09AI safety/movement growthhttps://www.openphilanthropy.org/grants/ai-safety-hub-startup-costs/-- Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Open Philanthropy recommended two grants totaling $235,000 to the AI Safety Hub to support their initial development costs, and to hire several contractors to work on projects related to AI safety. The AI Safety Hub, directed by Century Fellow Julia Karbing, is a new organization that will work on movement building in the AI safety field."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/ai-safety-hub-safety-labs/ suggests satisfaction with the outcome of this grant.

Other notes: This is a total across two grants.
AI Alignment Awards70,000.001572022-09AI safety/technical researchhttps://www.openphilanthropy.org/grants/ai-alignment-awards-shutdown-problem-contest/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "over 1.5 years to AI Alignment Awards to support a contest asking participants to share ideas on how AI systems can be designed or trained to avoid the shutdown problem." The webpage https://www.alignmentawards.com/shutdown is linked.

Other notes: Intended funding timeframe in months: 18.
Redwood Research10,700,000.0062022-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/redwood-research-general-support-2/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. Redwood Research is a nonprofit research institution focused on aligning advanced AI with human interests."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/redwood-research-general-support-2023/ suggests continued satisfaction with the grantee.

Other notes: Intended funding timeframe in months: 18.
FAR AI (Earmark: Ethan Perez)463,693.00692022-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/fund-for-alignment-research-language-model-misalignment-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research projects, led by Ethan Perez, related to misalignment in language models."

Donor retrospective of the donation: Several followup grants, such as https://www.openphilanthropy.org/grants/far-ai-general-support/ for general support, suggest continued satisfaction with the grantee. However, as of mid-2023, there are no followup grants exclusively for the research area of this grant (language model misalignment).

Other notes: Intended funding timeframe in months: 18.
Daniel Dewey175,000.001192022-08AI safety/strategyhttps://www.openphilanthropy.org/grants/daniel-dewey-ai-alignment-projects-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support [Dewey's] work on AI alignment. Daniel will continue work on a website explaining how artificial intelligence poses a global risk, and continue work on proposals for experiments related to AI safety." The webpage https://www.danieldewey.net/risk/index.html is linked.
Centre for Effective Altruism250,000.001022022-08AI safety/movement growthhttps://www.openphilanthropy.org/grants/centre-for-effective-altruism-harvard-ai-safety-office/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to rent and refurbish a temporary office space for one year for the Harvard AI Safety Team." The webpage https://haist.ai/ is linked.
Center for a New American Security4,816,710.00182022-07AI safety/governancehttps://www.openphilanthropy.org/grants/center-for-a-new-american-security-work-on-ai-governance/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support work related to artificial intelligence policy and governance."

Other notes: Intended funding timeframe in months: 36.
Stanford University (Earmark: Clark Barrett|Scott Viteri)153,820.001252022-07AI safety/technical researchhttps://www.openphilanthropy.org/grants/stanford-university-ai-alignment-research-barrett-and-viteri/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research on AI alignment by Professor Clark Barrett and Stanford student Scott Viteri."
Carnegie Mellon University (Earmark: Aditi Raghunathan)343,235.00822022-07AI safetyhttps://www.openphilanthropy.org/grants/carnegie-mellon-university-research-on-adversarial-examples/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research led by Professor Aditi Raghunathan on adversarial examples (inputs optimized to cause machine learning models to make mistakes)." The webpages https://www.cs.cmu.edu/~aditirag/ and https://en.wikipedia.org/wiki/Adversarial_machine_learning#Adversarial_examples are linked.

Other notes: Aditi Raghunathan, whose work the grant funds, previously received money from Open Philanthropy as part of the Open Phil AI Fellowship https://www.openphilanthropy.org/grants/open-phil-ai-fellowship-2018-class/ and https://www.openphilanthropy.org/grants/uc-berkeley-adversarial-robustness-research-aditi-raghunathan/ while at UC Berkeley. Intended funding timeframe in months: 36.
Berkeley Existential Risk Initiative (Earmark: Samuel Bowman)30,000.001802022-06AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-language-model-alignment-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a project led by Professor Samuel Bowman of New York University to develop a dataset and accompanying methods for language model alignment research."

Other notes: Intended funding timeframe in months: 36.
AI Impacts364,893.00812022-06AI safety/strategyhttps://www.openphilanthropy.org/grants/ai-impacts-general-support/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. AI Impacts works on strategic questions related to advanced artificial intelligence."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/ai-impacts-expert-survey-on-progress-in-ai/ suggests continued satisfaction with the grantee.
Epoch1,960,000.00312022-06AI safety/strategyhttps://www.openphilanthropy.org/grants/epoch-general-support/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. Epoch is a research organization that works on investigating trends in machine learning and forecasting the development of transformative artificial intelligence."
Berkeley Existential Risk Initiative (Earmark: SERI-MATS program)1,008,127.00502022-04AI safety/technical research/talent pipelinehttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the second cohort of the SERI Machine Learning Alignment Theory Scholars (MATS) Program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community."

Donor reason for donating at this time (rather than earlier or later): The grant is timed to fund the second cohort of the SERI-MATS program.
Intended funding timeframe in months: 6

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the third cohort of the SERI-MATS program suggests the donor's continued satisfaction with the program. The grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-program-in-london/ for the London-based extension of this (second) cohort also suggests the donor's satisfaction with the program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year.
Berkeley Existential Risk Initiative (Earmark: Center for Long-Term Cybersecurity)210,000.001132022-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-ai-standards-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence."

Donor reason for donating at this time (rather than earlier or later): The grant is made at the same time as the companion grant https://www.openphilanthropy.org/grants/center-for-long-term-cybersecurity-ai-standards-2022/ to the Center for Long-Term Cybersecurity (CLTC), via the University of California, Berkeley.

Other notes: There is a companion grant https://www.openphilanthropy.org/grants/center-for-long-term-cybersecurity-ai-standards-2022/ to the Center for Long-Term Cybersecurity (CLTC), via the University of California, Berkeley.
Berkeley Existential Risk Initiative (Earmark: David Krueger)140,050.001312022-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-david-krueger-collaboration/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger."

Other notes: The grant page says: "The grant amount was updated in August 2023."
Open Phil AI Fellowship (Earmark: Adam Gleave|Cassidy Laidlaw|Cynthia Chen|Daniel Kunin|Erik Jenner|Johannes Treutlein|Lauro Langosco|Maksym Andriushchenko|Qian Huang|Usman Anwar|Zhijing Jin)1,840,000.00322022-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/open-phil-ai-fellowship-2022-class/-- Donation process: The Open Phil AI Fellowship is awarded annually based on an application process. https://www.openphilanthropy.org/potential-risks-advanced-artificial-intelligence-the-open-phil-ai-fellowship/ has more details on the application process.

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarships to eleven machine learning researchers over five years.

Donor reason for selecting the donee: According to the grant page: "These [eleven] fellows were selected for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. [...] We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it’s most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI. The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence."

Donor reason for donating that amount (rather than a bigger or smaller amount): Although the amount per researcher is lower than in previous years (at $1,840,000 across 11 researchers, it averages to around $170,000 per researcher, less than the $260,000 in the previous year), this reduced amount is partly explained by some of the grantees also receiving funding as Vitalik Buterin Postdoctoral Fellows (see https://futureoflife.org/team/fellowship-winners-2022/ for details); for these grantees, Open Phil and the Future of Life Institute split the cost equally. Also, regarding the amount, the grant page says: "This is an estimate because of uncertainty around future year tuition costs and currency exchange rates. This number may be updated as costs are finalized."
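
A quick back-of-the-envelope check of the per-researcher arithmetic (a minimal sketch, not part of the grant page; the $260,000 prior-year figure and the 50/50 split with the Future of Life Institute are taken from the notes here, and actual per-fellow allocations are not published):

    # Rough per-fellow arithmetic for the 2022 Open Phil AI Fellowship class.
    # Figures come from the notes above; actual per-fellow amounts are not
    # published, so this is only an order-of-magnitude check.
    total_2022 = 1_840_000        # USD, 2022 class, 11 fellows
    fellows_2022 = 11
    per_fellow_2021 = 260_000     # USD, 2021 class figure cited above

    avg_2022 = total_2022 / fellows_2022
    print(f"2022 average per fellow: ${avg_2022:,.0f}")      # ~$167,273 (~$170k)
    print(f"2021 figure per fellow:  ${per_fellow_2021:,}")  # $260,000

    # Five of the eleven fellows are co-funded by the Future of Life Institute
    # (Vitalik Buterin Postdoctoral Fellows), with costs split equally, which
    # partly explains the lower 2022 per-fellow average.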

Donor reason for donating at this time (rather than earlier or later): This is the fourth of annual sets of grants, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Other notes: Five of the eleven grantees (Cynthia Chen, Erik Jenner, Johannes Treutlein, Usman Anwar, and Zhijing Jin) also receive funding as Vitalik Buterin Postdoctoral Fellows (see https://futureoflife.org/team/fellowship-winners-2022/ for details); for these grantees, Open Phil and the Future of Life Institute split the money equally.
AI Safety Support (Earmark: Jaime Sevilla)42,000.001732022-04AI safety/strategyhttps://www.openphilanthropy.org/grants/ai-safety-support-research-on-trends-in-machine-learning/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to scale up a research group, led by Jaime Sevilla, which studies trends in machine learning."

Donor retrospective of the donation: Further grants https://www.openphilanthropy.org/grants/ai-safety-support-seri-mats-program/ and https://www.openphilanthropy.org/grants/ai-safety-support-situational-awareness-research/ from Open Philanthropy, though with slightly different goals, suggest continued satisfaction with the grantee.
OpenMined28,320.001812022-04AI safety/technical researchhttps://www.openphilanthropy.org/grants/openmined-research-on-privacy-enhancing-technologies-and-ai-safety/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research on the intersection between privacy-enhancing technologies and technical infrastructure for AI safety." The webpage https://en.wikipedia.org/wiki/Privacy-enhancing_technologies is linked.

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/openmined-software-for-ai-audits/ in September 2023 for a much larger amount suggests continued satisfaction with the grantee.
Center for Long-Term Cybersecurity20,000.001862022-04AI safety/governancehttps://www.openphilanthropy.org/grants/center-for-long-term-cybersecurity-ai-standards-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support work by CLTC’s AI Security Initiative on the development and implementation of AI standards."

Other notes: This grant is made via the University of California, Berkeley. A related, much larger ($210,000) grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-ai-standards-2022/ is made by Open Philanthropy to the Berkeley Existential Risk Initiative to support related work on AI safety standards.
Massachusetts Institute of Technology (Earmark: Neil Thompson)13,277,348.0042022-03AI safety/strategyhttps://www.openphilanthropy.org/grants/massachusetts-institute-of-technology-ai-trends-and-impacts-research-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research led by Neil Thompson on modeling the trends and impacts of AI and computing. Thompson will use this funding to hire new staff and expand his lab work."

Other notes: Intended funding timeframe in months: 48.
Alignment Research Center265,000.001002022-03AI safety/technical researchhttps://www.openphilanthropy.org/grants/alignment-research-center-general-support/-- Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. ARC focuses on developing strategies for AI alignment that can be adopted by industry today and scaled to future machine learning systems."

Donor reason for selecting the donee: While no reason is specified in the grant page, it's worth noting that the founder of the donee organization, Paul Christiano, has previously been a technical advisor to Open Philanthropy, and has been affiliated with multiple organizations (Machine Intelligence Research Institute, OpenAI, and Ought) that have previously received funding from Open Philanthropy for AI safety. These past connections may have influenced the grant.

Donor reason for donating at this time (rather than earlier or later): The grant is made shortly after the announcement by Alignment Research Center of its plans at https://www.alignment.org/blog/early-2022-hiring-round/ to hire beyond its current full-time staff of two. As grants are often announced some time after the internal decision to make them, it is possible that the funding for this grant was sought for the purpose of this hiring round and was factored into the hiring announcement.

Donor retrospective of the donation: The followup two-year grant https://www.openphilanthropy.org/grants/alignment-research-center-general-support-november-2022/ suggests continued satisfaction with the grantee.
Rethink Priorities2,728,319.00212022-03AI safety/governancehttps://www.openphilanthropy.org/grants/rethink-priorities-ai-governance-research-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to Rethink Priorities to expand its research on topics related to AI governance."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/rethink-priorities-ai-governance-research-2023/ suggests continued satisfaction with the grantee.

Other notes: Intended funding timeframe in months: 24.
Hofvarpnir Studios (Earmark: Jacob Steinhardt|Center for Human-Compatible Artificial Intelligence)1,443,540.00392022-03AI safety/technical researchhttps://www.openphilanthropy.org/grants/hofvarpnir-studios-compute-cluster-for-ai-safety-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "create and maintain a compute cluster for Jacob Steinhardt’s lab that will also be used by researchers at the Center for Human-Compatible Artificial Intelligence." The webpage https://jsteinhardt.stat.berkeley.edu/ is linked.

Other notes: Intended funding timeframe in months: 36.
Carnegie Endowment for International Peace (Earmark: Matt Sheehan)597,717.00582022-03AI safety/governancehttps://www.openphilanthropy.org/grants/carnegie-endowment-for-international-peace-ai-governance-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a research project on international AI governance led by Matt Sheehan."

Other notes: Intended funding timeframe in months: 24.
Stiftung Neue Verantwortung444,000.00732022-03AI safety/strategyhttps://www.openphilanthropy.org/grants/stiftung-neue-verantwortung-ai-policy-analysis/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support data-driven reports on AI-related talent flows and the global microchip supply chain."

Other notes: Currency info: donation given as 390,528.00 EUR (conversion done via donor calculation).
Egor Krasheninnikov (Earmark: David Krueger)6,526.001972022-03AI safety/technical researchhttps://www.openphilanthropy.org/grants/egor-krasheninnikov-research-collaboration-with-david-krueger/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support [grantee's] research on machine learning in collaboration with Professor David Krueger." The webpage https://www.davidscottkrueger.com/ is linked.

Other notes: This is a followup to the April 2021 support https://www.openphilanthropy.org/grants/university-of-cambridge-machine-learning-research/ to University of Cambridge in support of David Krueger's work. At around the same time, a similar grant https://www.openphilanthropy.org/grants/usman-anwar-research-collaboration-with-david-krueger/ is made to Usman Anwar, also for work with David Krueger. Currency info: donation given as 5,000.00 GBP (conversion done via donor calculation).
Usman Anwar (Earmark: David Krueger)6,526.001972022-03AI safety/technical researchhttps://www.openphilanthropy.org/grants/usman-anwar-research-collaboration-with-david-krueger/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support his research on machine learning in collaboration with Professor David Krueger." The webpage https://www.davidscottkrueger.com/ is linked.

Donor retrospective of the donation: Usman Anwar would also be a recipient of the Open Phil AI Fellowship https://www.openphilanthropy.org/grants/open-phil-ai-fellowship-2022-class/ later in 2022.

Other notes: This is a followup to the April 2021 support https://www.openphilanthropy.org/grants/university-of-cambridge-machine-learning-research/ to University of Cambridge in support of David Krueger's work. At around the same time, a similar grant https://www.openphilanthropy.org/grants/egor-krasheninnikov-research-collaboration-with-david-krueger/ is made to Egor Krasheninnikov, also for work with David Krueger. Currency info: donation given as 5,000.00 GBP (conversion done via donor calculation).
Berkeley Existential Risk Initiative (Earmark: Center for Human-Compatible AI)1,126,160.00482022-02AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. BERI will use the funding to facilitate the creation of an in-house compute cluster for CHAI’s use, purchase compute resources, and hire a part-time system administrator to help manage the cluster."
National Academies of Sciences, Engineering, and Medicine309,441.00912022-02AI safety/technical researchhttps://www.openphilanthropy.org/grants/national-academies-of-sciences-engineering-and-medicine-safety-critical-machine-learning/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research on machine learning in safety-critical environments."
Michael Page52,500.001652022-02AI safetyhttps://www.openphilanthropy.org/grants/michael-page-career-transition-grant/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to work on several short-term projects while [Michael Page] explores different career options."

Donor reason for selecting the donee: The grant page says: "Page recently finished his tenure as a Research Fellow at the Center for Security and Emerging Technology, and we believe that his expertise on forecasting and AI policy makes him an exceptionally strong candidate for an impactful career."
The Wilson Center2,023,322.00292022-01AI safety/governancehttps://www.openphilanthropy.org/grants/wilson-center-ai-policy-training-program-2022/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support [grantee's] AI policy training program, which is aimed at staffers for members of Congress and other policymakers. The program’s ultimate goal is to increase policymakers’ access to technical AI expertise."

Other notes: Intended funding timeframe in months: 24.
Centre for the Governance of AI2,537,600.00242021-12AI safety/governancehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-field-buildingLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support activities related to building the field of AI governance research. GovAI intends to use this funding to conduct AI governance research and to develop a talent pipeline for those interested in entering the field."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-research-assistant/ and https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-general-support-2/ suggest continued satisfaction with the grantee.

Other notes: Grant made via the Centre for Effective Altruism. Intended funding timeframe in months: 24.
Georgetown University246,564.001062021-12AI safetyhttps://www.openphilanthropy.org/grants/georgetown-university-policy-fellowship-2021/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a fellowship related to AI and cybersecurity policy."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/georgetown-university-policy-fellowship-2022/ the next year, for the same purpose and a similar amount, suggests satisfaction with the outcome of the grant.
Berkeley Existential Risk Initiative (Earmark: SERI-MATS program)195,000.001172021-11AI safety/technical research/talent pipelinehttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program-2/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the SERI ML Alignment Theory Scholars (MATS) Program. MATS is a two-month program where students will research problems related to AI alignment while supervised by a mentor."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ and https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the second and third cohorts of the SERI-MATS program suggest the donor's continued satisfaction with the SERI-MATS program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year. Intended funding timeframe in months: 6.
Stanford University (Earmark: Percy Liang)1,500,000.00372021-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/stanford-university-ai-alignment-research-2021/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research led by Professor Percy Liang on AI safety and alignment."

Donor reason for selecting the donee: The grant page says: "We hope this funding will accelerate progress on technical problems and help to build a pipeline for younger researchers to work on AI alignment."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason is given for the amount. It is somewhat, but not dramatically, higher per year ($1,500,000 over 3 years = $500,000 per year) than the previous grant https://www.openphilanthropy.org/grants/stanford-university-support-for-percy-liang/ ($1,337,600 over 4 years = $334,400 per year).
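
As a quick check of the per-year comparison (a minimal sketch using only the totals and durations stated above):

    # Annualized comparison of the two Percy Liang grants, using the totals
    # and durations stated in the notes above.
    grants = {
        "2021 grant": (1_500_000, 3),  # (total USD, duration in years)
        "2017 grant": (1_337_600, 4),
    }
    for name, (total, years) in grants.items():
        print(f"{name}: ${total:,} over {years} years = ${total / years:,.0f} per year")
    # 2021 grant: $1,500,000 over 3 years = $500,000 per year
    # 2017 grant: $1,337,600 over 4 years = $334,400 per year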

Donor reason for donating at this time (rather than earlier or later): No explicit reason is given for the timing. The grant is made right around the end of the timeframe of the previous grant https://www.openphilanthropy.org/grants/stanford-university-support-for-percy-liang/ (four-year grant made in 2017) also for Percy Liang's research.
Intended funding timeframe in months: 36
Redwood Research9,420,000.0082021-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/redwood-research-general-support/Nick Beckstead Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. Redwood Research is a new research institution that conducts research to better understand and make progress on AI alignment in order to improve the long-run future."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/redwood-research-general-support-2/ of a comparable amount ($10.7 million) suggests continued satisfaction with the grantee.

Other notes: This is a total across four grants.
Mila237,931.001092021-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/mila-research-project-on-artificial-intelligence/Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a research project investigating AI consciousness and moral patienthood. The research will be conducted in collaboration with the Université de Montréal and the Future of Humanity Institute. This funding will support postdoctoral researchers and students studying the topic, as well as publications and workshops."

Other notes: Currency info: donation given as 295,900.00 CAD (conversion done via donor calculation).
FAR AI (Earmark: Ethan Perez)425,800.00762021-10AI safety/technical researchhttps://www.openphilanthropy.org/grants/language-model-safety-fund-language-model-misalignment/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Fund for Alignment Research, led by Ethan Perez, to support salaries and equipment for projects related to misalignment in language models. Perez plans to hire and supervise four engineers to work on these projects."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/fund-for-alignment-research-language-model-misalignment-2022/ for a similar amount (and with the same research area and leader Ethan Perez), as well as several other followup grants in the coming years, suggest continued satisfaction with the grantee.
University of Washington (Earmark: Ludwig Schmidt)730,000.00552021-10AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-washington-adversarial-robustness-research/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support early-career research by Ludwig Schmidt on adversarial robustness as a means to improve AI safety."

Other notes: Intended funding timeframe in months: 36.
Center for a New American Security101,187.001422021-09AI safety/governancehttps://www.openphilanthropy.org/grants/center-for-a-new-american-security-risks-from-militarized-ai/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a working group that will focus on mitigating risks from possible military applications of artificial intelligence. This group will be composed of technical and policy experts from the US, Russia, China, and Europe, and will investigate possible confidence-building measures (actions designed to prevent miscalculation and conflict between states) for militarized AI."

Other notes: Intended funding timeframe in months: 18.
Stanford University78,000.001552021-09AI safety/strategyhttps://www.openphilanthropy.org/grants/stanford-university-ai-index/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support the AI Index, which collects and reports data related to artificial intelligence, including data relevant to AI safety and AI ethics." The webpage https://aiindex.stanford.edu/ is linked.
Université de Montréal (Earmark: Mila|Future of Humanity Institute)210,552.001122021-09AI safetyhttps://www.openphilanthropy.org/grants/universite-de-montreal-research-project-on-artificial-intelligence/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a research project investigating AI consciousness and moral patienthood. The research will be conducted in collaboration with Mila and the Future of Humanity Institute. This funding will support post-docs and students studying the topic, as well as publications and workshops."

Other notes: Currency info: donation given as 266,200.00 EUR (conversion done via donor calculation).
University of California, Berkeley (Earmark: Aditi Raghunathan)87,829.001522021-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/uc-berkeley-adversarial-robustness-research-aditi-raghunathan/-- Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support postdoctoral research by Aditi Raghunathan on adversarial robustness as a means to improve AI safety."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/carnegie-mellon-university-research-on-adversarial-examples/ for the continuation of the grantee's work at Carnegie Mellon University suggests satisfaction with the grant outcome.

Other notes: The grant page says: "The grant amount was updated in July 2023."
Center for Security and Emerging Technology38,920,000.0022021-08AI safetyhttps://www.openphilanthropy.org/grants/center-for-security-and-emerging-technology-general-support-august-2021/Luke Muehlhauser Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "CSET is a think tank, incubated by our January 2019 support, dedicated to policy analysis at the intersection of national and international security and emerging technologies. This funding is intended to augment our original support for CSET, particularly for its work on security and artificial intelligence."

Other notes: Intended funding timeframe in months: 36.
Stanford University (Earmark: Dimitris Tsipras)330,792.00842021-08AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsiprasCatherine Olsson Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Dimitris Tsipras on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
Stanford University (Earmark: Shibani Santurkar)330,792.00842021-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/stanford-university-adversarial-robustness-research-shibani-santurkar/Catherine Olsson Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Shibani Santurkar on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
University of Southern California (Earmark: Robin Jia)320,000.00892021-08AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-southern-california-adversarial-robustness-research/Catherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Robin Jia on adversarial robustness and out-of-distribution generalization as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36
Rethink Priorities495,685.00682021-07AI safety/governancehttps://www.openphilanthropy.org/grants/rethink-priorities-ai-governance-research/Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research projects on topics related to AI governance."

Donor reason for selecting the donee: The grant page says: "We believe that Rethink Priorities’ research outputs may help inform our AI policy grantmaking strategy."
Daniel Dewey175,000.001192021-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/daniel-dewey-ai-alignment-projectNick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support "work on an AI alignment project and related field-building efforts. Daniel plans to use this funding to produce writing and reports summarizing existing research and investigating potentially valuable projects relevant to AI alignment, with the goal of helping junior researchers and others understand how they can contribute to the field."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/daniel-dewey-ai-alignment-projects-2022/ suggests continued satisfaction with the grant outcome.
Carnegie Mellon University (Earmark: Zico Kolter)330,000.00862021-05AI safetyhttps://www.openphilanthropy.org/grants/carnegie-mellon-university-adversarial-robustness-research/Catherine Olsson Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor Zico Kolter on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes grants earlier and later in the year to early-stage researchers at UC Berkeley, University of Tübingen, Stanford University, and University of Southern California.

Other notes: Intended funding timeframe in months: 36.
Open Phil AI Fellowship (Earmark: Collin Burns|Jared Quincy Davis|Jesse Mu|Meena Jagadeesan|Tan Zhi-Xuan)1,300,000.00432021-04AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-classDaniel Dewey Donation process: According to the grant page: "These [five] fellows were selected from 397 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarships to five machine learning researchers over five years.

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): An explicit reason for the amount is not specified. The total amount is lower than in previous years, but the amount per researcher ($260,000) is a little higher than in previous years. It is likely that the amount per researcher is determined first and the total is the sum of these per-researcher amounts.
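
A minimal consistency check of the per-researcher figure (assuming, as above, a roughly equal amount per fellow; exact per-fellow allocations are not published, and the initial four-fellow figure is taken from the notes below):

    # Check that the $260,000-per-fellow figure is consistent with the totals
    # stated for this grant.
    initial_total, initial_fellows = 1_000_000, 4  # as initially announced
    final_total, final_fellows = 1_300_000, 5      # after the fifth fellow was added
    print(f"Initial listing: ${initial_total / initial_fellows:,.0f} per fellow")  # $250,000
    print(f"Final listing:   ${final_total / final_fellows:,.0f} per fellow")      # $260,000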

Donor reason for donating at this time (rather than earlier or later): This is the fourth of annual sets of grants, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/open-phil-ai-fellowship-2022-class/ confirms that the program would continue.

Other notes: The initial grant page only listed four of the five fellows and an amount of $1,000,000. The fifth fellow, Tan Zhi-Xuan, was added later and the amount was increased to $1,300,000.
The Wilson Center291,214.00952021-04AI safety/governancehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-training-programLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to pilot an AI policy training program. The Wilson Center is a non-partisan policy forum for tackling global issues through independent research and open dialogue."
University of Cambridge (Earmark: David Krueger)250,000.001022021-04AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-cambridge-david-kruegerDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor David Krueger’s machine learning research."

Other notes: Grant made via Cambridge in America. Intended funding timeframe in months: 48.
Berkeley Existential Risk Initiative (Earmark: Stanford Existential Risks Initiative)210,000.001132021-03AI safety/technical research/talent pipelinehttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-summer-fellowships/Claire Zabel Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to provide stipends for the Stanford Existential Risks Initiative (SERI) summer research fellowship program."

Donor retrospective of the donation: The multiple future grants https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program-2/ https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ and https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ from Open Philanthropy to BERI for the SERI-MATS program, a successor of sorts to this program, suggest satisfaction with the outcome of this grant.

Other notes: Intended funding timeframe in months: 2.
Brian Christian66,000.001622021-03AI safety/movement growthhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/brian-christian-alignment-book-promotionNick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "with Brian Christian to support the promotion of his book The Alignment Problem: Machine Learning and Human Values."

Donor reason for selecting the donee: The grant page says: "Our potential risks from advanced artificial intelligence team hopes that the book will generate interest in AI alignment among academics and others."
Hypermind (Earmark: Metaculus)121,124.001342021-03AI safety/strategy/forecastinghttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/hypermind-ai-forecasting-tournamentLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to collaborate with Metaculus on an AI development forecasting tournament. Forecasts will cover the themes of hardware and supercomputing, performance and benchmarks, research trends, and economic and financial impact."
University of California, Berkeley (Earmark: Dawn Song)330,000.00862021-02AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-songCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Dawn Song on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
University of California, Berkeley (Earmark: David Wagner)330,000.00862021-02AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagnerCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor David Wagner on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Massachusetts Institute of Technology (Earmark: Aleksander Madry)1,430,000.00412021-02AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-researchCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Aleksandr Madry on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
University of Tübingen (Earmark: Wieland Brendel)590,000.00602021-02AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-tubingen-robustness-research-wieland-brendel/Catherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support early-career research by Wieland Brendel on robustness as a means to improve AI safety."

Donor reason for selecting the donee: Open Phil made five grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research for "adversarial robustness research" in January and February 2021, around the time of this grant. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating at this time (rather than earlier or later): Open Phil made five grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research for "adversarial robustness research" in January and February 2021, around the time of this grant. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
University of Tübingen (Earmark: Matthias Hein)300,000.00932021-02AI safety/technical researchhttps://www.openphilanthropy.org/grants/university-of-tubingen-adversarial-robustness-research-matthias-hein/Catherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Matthias Hein on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Center for Security and Emerging Technology8,000,000.0092021-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-supportLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says "This funding is intended to augment our original support for CSET, particularly for its work on the intersection of security and artificial intelligence."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support-august-2021 for a much larger amount suggests continued satisfaction with the grantee.
Center for Human-Compatible AI11,355,246.0052021-01AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2021Nick Beckstead Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "The multi-year commitment and increased funding will enable CHAI to expand its research and student training related to potential risks from advanced artificial intelligence."

Other notes: This is a renewal of the original founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai made August 2016. Intended funding timeframe in months: 60.
University of California, Santa Cruz (Earmark: Cihang Xie)265,000.001002021-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/uc-santa-cruz-adversarial-robustness-research/Catherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support early-career research by Cihang Xie on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/university-of-california-santa-cruz-adversarial-robustness-research-2023/ to support the same research leader and research agenda suggests satisfaction with the grant outcome.
Berryville Institute of Machine Learning (Earmark: Gary McGraw)150,000.001262021-01AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berryville-institute-of-machine-learningCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research led by Gary McGraw on machine learning security. The research will focus on building a taxonomy of known attacks on machine learning, exploring a hypothesis of representation and machine learning risk, and performing an architectural risk analysis of machine learning systems."

Donor reason for selecting the donee: The grant page says: "Our potential risks from advanced artificial intelligence team hopes that the research will help advance the field of machine learning security."
University of Toronto (Earmark: Chris Maddison)520,000.00652020-12AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-toronto-machine-learning-researchDaniel Dewey Catherine Olsson Donation process: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research on understanding, predicting, and controlling machine learning systems, led by Professor Chris Maddison, a former Open Phil AI Fellow. This funding is intended to enable three students and a postdoctoral researcher to work with Professor Maddison on the research."

Donor reason for selecting the donee: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Other notes: Intended funding timeframe in months: 48.
AI Impacts50,000.001682020-11AI safety/strategyhttps://www.openphilanthropy.org/grants/ai-impacts-general-support-2020/Tom Davidson Ajeya Cotra Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewal in 2022 https://www.openphilanthropy.org/grants/ai-impacts-general-support/ (for a much larger amount) suggests continued satisfaction with the grantee.
Massachusetts Institute of Technology (Earmark: Neil Thompson)275,344.00992020-11AI safety/strategyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/massachusetts-institute-of-technology-ai-trends-and-impacts-researchLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says "The research will consist of projects to learn how algorithmic improvement affects economic growth, gather data on the performance and compute usage of machine learning methods, and estimate cost models for deep learning projects."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/massachusetts-institute-of-technology-ai-trends-and-impacts-research-2022/ suggests continued satisfaction with the grantee.
Center for a New American Security (Earmark: Paul Scharre)24,350.001852020-10AI safety/governancehttps://www.openphilanthropy.org/grants/center-for-a-new-american-security-ai-governance-projects/Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work exploring possible projects related to AI governance."

Donor reason for selecting the donee: No explicit reason is provided for the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre), suggesting a broader endorsement.

Donor reason for donating at this time (rather than earlier or later): No explicit reason is provided for the timing of the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre).
Center for a New American Security (Earmark: Paul Scharre)116,744.001362020-10AI safety/governancehttps://www.openphilanthropy.org/grants/center-for-a-new-american-security-ai-and-security-projects/Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work by Paul Scharre on projects related to AI and security."

Donor reason for selecting the donee: No explicit reason is provided for the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre), suggesting a broader endorsement.

Donor reason for donating at this time (rather than earlier or later): No explicit reason is provided for the timing of the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre).
Smitha Milli (Earmark: Smitha Milli)370.002022020-10AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/smitha-milli-participatory-approaches-machine-learning-workshopDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Participatory Approaches to Machine Learning, a virtual workshop held during the 2020 International Conference on Machine Learning."

Donor reason for selecting the donee: The donee had previously been a recipient of the Open Phil AI Fellowship, so that relationship likely helped pave the way for this grant.

Donor reason for donating that amount (rather than a bigger or smaller amount): No specific reasons are given for the amount; this is an unusually small grant size by the donor's standards. The amount is likely determined by the limited funding needs of the grantee.

Donor reason for donating at this time (rather than earlier or later): The 2020 International Conference on Machine Learning was held in July 2020, so this grant appears to have been made after the workshop it supported had already concluded. No details on timing are provided.
Intended funding timeframe in months: 1
Center for Strategic and International Studies118,307.001352020-09AI safety/strategyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competitionLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor retrospective of the donation: The increase in grant amount in May 2021, from $75,245 to $118,307, suggests that Open Phil was satisfied with initial progress on the grant.

Other notes: The grant amount was updated in May 2021. The original amount was $75,245.
Center for International Security and Cooperation67,000.001612020-09AI safety/strategyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competitionLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.
Rice, Hadley, Gates & Manuel LLC25,000.001822020-09AI safety/strategyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-riskLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.
The Wilson Center496,540.00672020-06AI safety/governancehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to organize additional in-depth AI policy seminars as part of its seminar series."

Donor reason for selecting the donee: The grant page says "We continue to believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor reason for donating that amount (rather than a bigger or smaller amount): No reason is given for the amount. The grant is a little more than the original $368,440 two-year grant, so the additional funding is likely intended to roughly double the frequency of AI policy seminars.

Donor reason for donating at this time (rather than earlier or later): The grant is a top-up rather than a renewal; the previous two-year grant was made in February 2020. No specific reasons for timing are given.

Donor retrospective of the donation: A later grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-training-program in the same general area suggests Open Philanthropy's continued satisfaction with the grantee.
Andrew Lohn (Earmark: Andrew Lohn)15,000.001912020-06AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/andrew-lohn-paper-machine-learning-model-robustnessLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to write a paper on machine learning model robustness for safety-critical AI systems."

Donor reason for selecting the donee: No reasons are specified, but the grantee's work on AI assurance methods had previously been funded by Open Phil via a grant to the RAND Corporation.
Open Phil AI Fellowship (Earmark: Alex Tamkin|Clare Lyle|Cody Coleman|Dami Choi|Dan Hendrycks|Ethan Perez|Frances Ding|Leqi Liu|Peter Henderson|Stanislav Fort)2,300,000.00272020-05AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-classCatherine Olsson Daniel Dewey Donation process: According to the grant page: "These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarship support to ten machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests." In a comment reply https://forum.effectivealtruism.org/posts/DXqxeg3zj6NefR9ZQ/open-philanthropy-our-progress-in-2019-and-plans-for-2020#BCvuhRCg9egAscpyu (GW, IR) on the Effective Altruism Forum, grant investigator Catherine Olsson writes: "But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is comparable to the total amount of the 2019 fellowship grants, though it is distributed among a slightly larger pool of people.

Donor reason for donating at this time (rather than earlier or later): This is the third in an annual series of grants, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirms that the program would continue.

Other notes: Announced: 2020-05-12.
Centre for the Governance of AI450,000.00722020-05AI safety/governancehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-general-supportCommittee for Effective Altruism Support Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "GovAI intends to use these funds to support the visit of two senior researchers and a postdoc researcher."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" but does not link to specific past writeups (Open Phil has not previously made grants directly to GovAI).

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public.

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-field-building (December 2021) suggests continued satisfaction with the grantee.

Other notes: Grant made via the Berkeley Existential Risk Initiative.
International Conference on Learning Representations3,500.002002020-05AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ICLR-machine-learning-paper-awardsDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to the International Conference on Learning Representations to provide awards for the best papers submitted as part of the “Towards Trustworthy Machine Learning” virtual workshop.
World Economic Forum50,000.001682020-04AI safety/governancehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/world-economic-forum-global-ai-council-workshopDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a workshop hosted by the Global AI Council and co-developed with the Center for Human-Compatible AI at UC Berkeley. The workshop will facilitate the development of AI policy recommendations that could lead to future economic prosperity, and is part of a series of workshops examining solutions to maximize economic productivity and human wellbeing."

Other notes: Intended funding timeframe in months: 1.
Johns Hopkins University (Earmark: Jared Kaplan|Brice Ménard)55,000.001642020-03AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/johns-hopkins-kaplan-menardLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support the initial research of Professors Jared Kaplan and Brice Ménard on principles underlying neural network training and performance."
Study and Training Related to AI Policy Careers (Earmark: Emefa Agawu|Karson Elmgren|Matthew Gentzel|Becca Kagan|Benjamin Mueller)594,420.00592020-03AI safety/governance/talent pipelinehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/study-and-training-related-to-ai-policy-careersLuke Muehlhauser Donation process: This is a scholarship program run by Open Philanthropy. Applications were sought at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers with the last date for applications being 2019-10-15.

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant is "flexible support to enable individuals to pursue and explore careers in artificial intelligence policy." Recipients include Emefa Agawu, Karson Elmgren, Matthew Gentzel, Becca Kagan, and Benjamin Mueller. The ways that specific recipients intend to use the funds is not described, but https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#examples gives general guidance on the kinds of uses Open Philanthropy was expecting to see when it opened applications.

Donor reason for selecting the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#goal says: "The goal of this program is to provide flexible support that empowers exceptional people who are interested in positively affecting the long-run effects of transformative AI via careers in AI policy, which we see as an important and neglected issue." https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#appendix provides links to Open Philanthropy's other writing on the importance of the issue.

Donor reason for donating that amount (rather than a bigger or smaller amount): https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#summary says: "There is neither a maximum nor a minimum number of applications we intend to fund; rather, we intend to fund the applications that seem highly promising to us."

Donor reason for donating at this time (rather than earlier or later): Timing is likely determined by the time taken to review all applications after the close of applications on 2019-10-15.

Donor retrospective of the donation: As of early 2022, there do not appear to have been further rounds of grantmaking from Open Philanthropy for this purpose.

Other notes: Open Philanthropy runs a related fellowship program called the Open Phil AI Fellowship, which announces new grants on an annual cadence, though individual grants are often multi-year. The Open Phil AI Fellowship grantees are mostly people working on technical AI safety, whereas this grant is focused on AI policy work. Moreover, the Open Phil AI Fellowship targets graduate-level research, whereas this grant targets study and training.
Machine Intelligence Research Institute7,703,750.00102020-02AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020Claire Zabel Committee for Effective Altruism Support Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" with the most similar previous grant being https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 (February 2019). Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Centre for Effective Altruism ($4,146,795), 80,000 Hours ($3,457,284), and Ought ($1,593,333).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation.
Intended funding timeframe in months: 24

Other notes: The donee describes the grant in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ (2020-04-27) along with other funding it has received ($300,000 from the Berkeley Existential Risk Initiative and $100,000 from the Long-Term Future Fund). The fact that the grant is a two-year grant is mentioned here, but not in the grant page on Open Phil's website. The page also mentions that of the total grant amount of $7.7 million, $6.24 million is coming from Open Phil's normal funders (Good Ventures) and the remaining $1.46 million is coming from Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, as part of a funding partnership https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo announced November 11, 2019. Announced: 2020-04-10.
The Wilson Center368,440.00802020-02AI safety/governancehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-february-2020Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to continue support for a series of in-depth AI policy seminars."

Donor reason for selecting the donee: The grant page says: "We continue to believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is similar to the previous grant of $400,000 over a similar time period (two years).

Donor reason for donating at this time (rather than earlier or later): The grant is made almost two years after the original two-year grant, so its timing is likely determined by the original grant running out.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020 suggests ongoing satisfaction with the grant outcomes. A later grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-training-program in the same general area suggests Open Philanthropy's continued satisfaction with the grantee.
WestExec540,000.00632020-02AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/westexec-report-on-assurance-in-machine-learning-systemsLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to support the production and distribution of a report on advancing policy, process, and funding for the Department of Defense’s work on test, evaluation, verification, and validation for deep learning systems."

Donor retrospective of the donation: The increases in grant amounts suggest that the donor was satisfied with initial progress.

Other notes: The grant amount was updated in October and November 2020 and again in May 2021. The original grant amount had been $310,000. Announced: 2020-03-20.
Berkeley Existential Risk Initiative150,000.001262020-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support/Claire Zabel Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "BERI seeks to reduce existential risks to humanity, and collaborates with other long-termist organizations, including the Center for Human-Compatible AI at UC Berkeley. This funding is intended to help BERI establish new collaborations."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support-2/ suggests continued satisfaction with the grantee.
Ought1,593,333.00332020-01AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020Committee for Effective Altruism Support Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment and to reducing potential risks from advanced artificial intelligence."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter"

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Machine Intelligence Research Institute ($7,703,750), Centre for Effective Altruism ($4,146,795), and 80,000 Hours ($3,457,284).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation

Other notes: Announced: 2020-02-14.
Stanford University (Earmark: Dorsa Sadigh)6,500.001992020-01AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-ai-safety-seminarDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant "is intended to fund the travel costs for experts on AI safety to present at the [AI safety] seminar [led by Dorsa Sadigh]."

Other notes: Intended funding timeframe in months: 1.
RAND Corporation (Earmark: Andrew Lohn)30,751.001792020-01AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rand-corporation-research-on-the-state-of-ai-assurance-methodsLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support exploratory research by Andrew Lohn on the state of AI assurance methods."

Donor retrospective of the donation: A few months later, Open Phil would make a grant directly to Andrew Lohn for machine learning robustness research, suggesting that they were satisfied with the outcome from this grant.

Other notes: Announced: 2020-03-19.
Press Shop (Earmark: Stuart Russell)17,000.001902020-01AI safety/movement growthhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/press-shop-human-compatibleDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the publicity firm Press Shop to support expenses related to publicizing Professor Stuart Russell’s book Human Compatible: Artificial Intelligence and the Problem of Control."

Donor reason for selecting the donee: The grant page links this grant to past support for the Center for Human-Compatible AI (CHAI) where Russell is director, so the reason for the grant is likely similar to reasons for that past support. Grant pages: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019

Donor reason for donating at this time (rather than earlier or later): The grant is made shortly after the release of the book (book release date: October 8, 2019) so the timing is likely related to the release date.
Berkeley Existential Risk Initiative (Earmark: Center for Human-Compatible AI)705,000.00562019-11AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2019/Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2022/ from Open Philanthropy to BERI for the same purpose (CHAI collaboration) suggests satisfaction with the outcome of the grant.

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 24; announced: 2019-12-13.
University of California, Berkeley (Earmark: Jacob Steinhardt)1,111,000.00492019-11AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-research-2019Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "This funding will allow Professor Steinhardt to fund students to work on robustness, value learning, aggregating preferences, and other areas of machine learning."

Other notes: This is the third year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Jacob Steinhardt) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 36; announced: 2020-02-19.
Center for Human-Compatible AI200,000.001152019-11AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019Daniel Dewey Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "CHAI plans to use these funds to support graduate student and postdoc research."

Other notes: Open Phil makes a $705,000 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 to the Berkeley Existential Risk Initiative (BERI) at the same time (November 2019) to collaborate with CHAI. Intended funding timeframe in months: 24; announced: 2019-12-20.
Ought1,000,000.00512019-11AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019Daniel Dewey Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored condition, which we consider relevant to AI alignment."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 made on the recommendation of the Committee for Effective Altruism Support suggests that Open Phil would continue to have a high opinion of the work of Ought.

Other notes: Intended funding timeframe in months: 24; announced: 2020-02-14.
Open Phil AI Fellowship (Earmark: Aidan Gomez|Andrew Ilyas|Julius Adebayo|Lydia T. Liu|Max Simchowitz|Pratyusha Kalluri|Siddharth Karamcheti|Smitha Milli)2,325,000.00262019-05AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-classDaniel Dewey Donation process: According to the grant page: "These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarship support to eight machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is about double the amount of the 2018 grant, although the number of people supported is just one more (8 instead of 7). No explicit comparison of grant amounts is made on the grant page.

Donor reason for donating at this time (rather than earlier or later): This is the second in an annual series of grants, decided through an annual application process, with the announcement made in May/June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020) and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirm that the program would continue. Among the grantees, Smitha Milli would receive further support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/smitha-milli-participatory-approaches-machine-learning-workshop from Open Philanthropy, indicating continued confidence in the grantee.

Other notes: Announced: 2019-05-17.
Machine Intelligence Research Institute2,652,500.00222019-02AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019Claire Zabel Committee for Effective Altruism Support Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.

Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter." Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount decided by the Committee for Effective Altruism Support (CEAS) https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by CEAS, made at the same time and therefore likely drawing from the same money pot, are to the Centre for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The original amount of $2,112,500 is split across two years, working out to ~$1.06 million per year. https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of three-year $1.25 million/year support announced in October 2017, and the total $2.31 million represents Open Phil's full intended funding for MIRI for 2019, but the amount for 2020 of ~$1.06 million is a lower bound, and Open Phil may grant more for 2020 later. In November 2019, additional funding would bring the total award amount to $2,652,500.

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end, (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements, so a suitable funding amount could be larger, (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round.
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant.

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 with a very similar writeup suggests that Open Phil and the Committee for Effective Altruism Support would continue to stand by the reasoning for the grant.

Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-01.
Berkeley Existential Risk Initiative (Earmark: Center for Human-Compatible AI)250,000.001022019-01AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-ml-engineers/Daniel Dewey Donation process: The grant page describes the donation decision as being based on "conversations with various professors and students"

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/grants/uc-berkeley-center-for-human-compatible-ai-2016/ for the launch of CHAI and previous grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/ to collaborate with CHAI. Announced: 2019-03-04.
Center for Security and Emerging Technology55,000,000.0012019-01Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safetyhttps://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technologyLuke Muehlhauser Intended use of funds (category): Organizational general support

Intended use of funds: Grant via Georgetown University for the Center for Security and Emerging Technology (CSET), a new think tank led by Jason Matheny, formerly of IARPA, dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET plans to provide nonpartisan technical analysis and advice related to emerging technologies and their security implications to the government, key media outlets, and other stakeholders.

Donor reason for selecting the donee: Open Phil thinks that one of the key factors in whether AI is broadly beneficial for society is whether policymakers are well-informed and well-advised about the nature of AI’s potential benefits, potential risks, and how these relate to potential policy actions. As AI grows more powerful, calls for government to play a more active role are likely to increase, and government funding and regulation could affect the benefits and risks of AI. Thus: "Overall, we feel that ensuring high-quality and well-informed advice to policymakers over the long run is one of the most promising ways to increase the benefits and reduce the risks from advanced AI, and that the team put together by CSET is uniquely well-positioned to provide such advice." Despite risks and uncertainty, the grant is described as worthwhile under Open Phil's hits-based giving framework

Donor reason for donating that amount (rather than a bigger or smaller amount): The large amount over an extended period (5 years) is explained at https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant "In the case of the new Center for Security and Emerging Technology, we think it will take some time to develop expertise on key questions relevant to policymakers and want to give CSET the commitment necessary to recruit key people, so we provided a five-year grant."

Donor reason for donating at this time (rather than earlier or later): Likely determined by the grantee's planned launch date. No further timing details are discussed.
Intended funding timeframe in months: 60

Other notes: Donee is entered as Center for Security and Emerging Technology rather than as Georgetown University for consistency with future grants directly to the organization once it is set up. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. The grant is discussed in the broader context of giving by the Open Philanthropy Project into global catastrophic risks and AI safety in the Inside Philanthropy article https://www.insidephilanthropy.com/home/2019/3/22/why-this-effective-altruist-funder-is-giving-millions-to-ai-security. Announced: 2019-02-28.
University of California, Berkeley (Earmark: Pieter Abbeel|Aviv Tamar)1,145,000.00462018-11AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-california-berkeley-artificial-intelligence-safety-research-2018Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for machine learning researchers Pieter Abbeel and Aviv Tamar to study uses of generative models for robustness and interpretability. This funding will allow Mr. Abbeel and Mr. Tamar to fund PhD students and summer undergraduates to work on classifiers, imitation learning systems, and reinforcement learning systems."

Other notes: This is the second year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Pieter Abbeel) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 36; announced: 2018-12-11.
Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai (Earmark: Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai)2,351.002012018-11AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/study-robustness-machine-learning-modelsDaniel Dewey Donation process: The grant page says: "This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to reimburse technology costs for their efforts to study the robustness of machine learning models, especially robustness to unforeseen adversaries."

Donor reason for selecting the donee: The grant page says "We believe this will accelerate progress in adversarial, worst-case robustness in machine learning."
GoalsRL (Earmark: Ashley Edwards)7,500.001942018-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learningDaniel Dewey Discretionary grant to offset travel, registration, and other expenses associated with attending the GoalsRL 2018 workshop on goal specifications for reinforcement learning. The workshop was organized by Ashley Edwards, a recent computer science PhD candidate interested in reward learning. Announced: 2018-10-05.
The Wilson Center400,000.00792018-07AI safety/governancehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-seriesLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a series of in-depth AI policy seminars."

Donor reason for selecting the donee: The grant page says: "We believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-february-2020 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020 suggest that the donor was satisfied with the outcome of the grant.

Other notes: Intended funding timeframe in months: 24; announced: 2018-08-01.
Stanford University (Earmark: Dan Boneh|Florian Tramer)100,000.001442018-07AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramerDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer."

Donor reason for selecting the donee: The grant page gives three reasons: (1) Florian Tramer is a very strong Ph.D. student, (2) excellent machine learning security work is important for AI safety, and (3) increased funding in areas relevant to AI safety, like machine learning security, is expected to lead to more long-term benefits for AI safety.

Other notes: Grant is structured as an unrestricted "gift" to Stanford University Computer Science. Announced: 2018-09-06.
University of Oxford (Earmark: Allan Dafoe)429,770.00752018-07AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/oxford-university-global-politics-of-ai-dafoeNick Beckstead Grant to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom. The Open Philanthropy Project recommended additional funds to support this work in 2017, while Professor Dafoe was at Yale. Continuation of grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe. Announced: 2018-07-20.
Machine Intelligence Research Institute150,000.001262018-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-programClaire Zabel Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."

Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-27.
AI Impacts100,000.001442018-06AI safety/strategyhttps://www.openphilanthropy.org/grants/ai-impacts-general-support-2018/Daniel Dewey Donation process: Discretionary grant

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewals in 2020 https://www.openphilanthropy.org/grants/ai-impacts-general-support-2020/ and 2022 https://www.openphilanthropy.org/grants/ai-impacts-general-support/ suggest continued satisfaction with the grantee, though the amount of the 2020 renewal grant is lower (just $50,000).

Other notes: The grant is via the Machine Intelligence Research Institute. Announced: 2018-06-27.
Open Phil AI Fellowship (Earmark: Aditi Raghunathan|Chris Maddison|Felix Berkenkamp|Jon Gauthier|Michael Janner|Noam Brown|Ruth Fong)1,135,000.00472018-05AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-fellows-program-2018Daniel Dewey Donation process: According to the grant page: "These fellows were selected from more than 180 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research"

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarship support to seven machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating at this time (rather than earlier or later): This is the first of annual sets of grants, decided through an annual application process.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The corresponding grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class (2019), https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020), and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirm that these grants would be made annually. Among the grantees, Chris Maddison would continue receiving support from Open Philanthropy in the future in the form of support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-toronto-machine-learning-research for his students, indicating continued endorsement of his work.

Other notes: Announced: 2018-05-31.
Ought525,000.00642018-05AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-supportDaniel Dewey Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Proposed_activities "Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today." It also links to https://ought.org/approach for more on Ought's approach. The budget section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Budget says: "Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher."

Donor reason for selecting the donee: The case for the grant includes: (a) Open Phil considers research on deliberation and amplification important for AI safety, (b) Paul Christiano is excited by Ought's approach, and Open Phil trusts his judgment, (c) Ought’s plan appears flexible, and Open Phil thinks Andreas Stuhlmüller (Ought's founder) is ready to notice and respond to any problems by adjusting his plans, (d) Open Phil has indications that Ought is well-run and has a reasonable chance of success.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but the grant is combined with another grant from Open Philanthropy Project technical advisor Paul Christiano

Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Key_questions_for_follow-up lists some questions for followup

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 suggest that Open Phil would continue to have a high opinion of Ought

Other notes: Intended funding timeframe in months: 36; announced: 2018-05-30.
Stanford University6,771.001962018-04AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learningDaniel Dewey Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the Neural Information Processing Systems (NIPS) workshop “Machine Learning and Computer Security” at https://nips.cc/Conferences/2017/Schedule?showEvent=8775

Donor reason for selecting the donee: No specific reasons are included on the grant page, but several of the presenters at the workshop, held at the previous year's conference (NIPS 2017), would have their research funded by Open Philanthropy, including Jacob Steinhardt, Percy Liang, and Dawn Song.

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was likely determined by the cost of running the workshop. The original amount of $2,539 was updated in June 2020 to $6,771.

Donor reason for donating at this time (rather than earlier or later): The timing was likely determined by the timing of the conference.
Intended funding timeframe in months: 1

Other notes: The original amount of $2,539 was updated in June 2020 to $6,771. Announced: 2018-04-18.
AI Scholarships (Earmark: Dmitrii Krasheninnikov|Michael Cohen)159,000.001232018-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-scholarships-2018Daniel Dewey Discretionary grant; the total is across grants to two artificial intelligence researchers, each over two years. The funding is intended to be used for the students’ tuition, fees, living expenses, and travel during their respective degree programs, and is part of an overall effort to grow the field of technical AI safety by supporting value-aligned and qualified early-career researchers. Recipients are Dmitrii Krasheninnikov (master’s degree, University of Amsterdam) and Michael Cohen (master’s degree, Australian National University). Announced: 2018-07-26.
Machine Intelligence Research Institute3,750,000.00202017-10AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017Nick Beckstead Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI as the one-year timeframe for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support ended. The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) suggests that work on the review had been going on well before the grant renewal date

Intended use of funds (category): Organizational general support

Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."

Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach."

Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36

Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."

Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Open Phil's statement that, due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach" is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
University of California, Berkeley (Earmark: Sergey Levine|Anca Dragan)1,450,016.00382017-10AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-levine-draganDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "The work will be led by Professors Sergey Levine and Anca Dragan, who will each devote approximately 20% of their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems, though they have the flexibility to adjust their focus during the grant period." The project narrative is at https://www.openphilanthropy.org/files/Grants/UC_Berkeley/Levine_Dragan_Project_Narrative_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our broad goals for this funding are to encourage top researchers to work on AI alignment and safety issues in order to build a pipeline for young researchers; to support progress on technical problems; and to generally support the growth of this area of study."

Other notes: This is the first year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It would begin an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Anca Dragan) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 48; announced: 2017-10-20.
Berkeley Existential Risk Initiative (Earmark: Center for Human-Compatible AI)403,890.00782017-07AI safety/technical researchhttps://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/Daniel Dewey Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai. The funding is intended to help BERI hire contractors and part-time employees to help CHAI, such as web development and coordination support, research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff. More details are in the grant proposal https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx

Other notes: Announced: 2017-09-28.
Mila (Earmark: Yoshua Bengio|Joelle Pineau|Doina Precup)2,400,000.00252017-07AI safety/technical researchhttps://www.openphilanthropy.org/grants/montreal-institute-for-learning-algorithms-ai-safety-research/-- Donation process: The grant page says: "We spoke with Professor Bengio and several of his students during our recent outreach to machine learning researchers and formed a positive impression of him and his work. Our technical advisors spoke highly of Professor Bengio’s capabilities, reputation, and goals."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support technical research on potential risks from advanced artificial intelligence (AI). $1.6 million of this grant will support Professor Yoshua Bengio and his co-investigators at the Université de Montréal, and $800,000 will support Professors Joelle Pineau and Doina Precup at McGill University. We see Professor Bengio’s research group as one of the world’s preeminent deep learning labs and are excited to provide support for it to undertake AI safety research."

Donor reason for selecting the donee: The grant page says: "Among potential grantees in the field, we believe that Professor Bengio is one of the best positioned to help build the talent pipeline in AI safety research. Our understanding, based on conversations with our technical advisors and our general impressions from the field, is that many of the most talented machine learning researchers spend some time in Professor Bengio’s lab before joining other universities or industry groups. This is an important contributing factor to our expectations for the impact of this grant, both because it increases our confidence in the quality of the research that this grant will support and because of the potential benefits for pipeline building. In our conversations with Professor Bengio, we’ve found significant overlap between his perspective on AI safety and ours, and Professor Bengio was excited to be part of our overall funding activities in this area. We think that Professor Bengio is likely to serve as a valuable member of the AI safety research community, and that he will encourage his lab to be involved in that community as well. We believe that members of his lab could likely be valuable participants at future workshops on AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says: "Our impression is that MILA is already fairly well-funded, and that its ability to use additional marginal funding is somewhat limited. Professor Bengio told us that the amount of additional yearly funding that he would be able to use productively for AI safety research is $400,000; we have decided to grant this full amount for four years ($1.6 million total). We have also granted two of Professor Bengio’s co-investigators at MILA who are also interested in working on this agenda, Professors Pineau and Precup, $200,000 per year ($800,000 total), which they estimated as the amount of funding they would be able to use productively."

Donor thoughts on making further donations to the donee: The grant page says: "We expect to have a conversation with Professor Bengio six months after the start of the grant, and annually after that, to discuss his projects and results, with public notes if the conversation warrants it. In the first few months of the grant, we plan to visit Montreal for several days to meet Professor Bengio’s co-investigators and discuss the project with them. At the conclusion of this grant in 2020, we will decide whether to renew our support. If Professor Bengio’s research is going well (based on our technical advisors’ assessment and the impressions of others in the field), and if we have achieved a better mutual understanding with Professor Bengio about how his research is likely to be valuable, it is likely that we will decide to provide renewed funding. If Professor Bengio is using half or more of our funding to pursue research directions that we do not find particularly promising, it is likely that we would choose not to renew."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/mila-research-project-on-artificial-intelligence/ suggests continued satisfaction with the grantee, though the amount of this followup grant is much smaller and the scope narrower than that of the original grant.

Other notes: See also https://www.facebook.com/permalink.php?story_fbid=10110258359382500&id=13963931 for a Facebook share by David Krueger, a member of the grantee organization. The comments include some discussion about the grantee. Intended funding timeframe in months: 48; announced: 2017-07-19.
Yale University (Earmark: Allan Dafoe)299,320.00942017-07AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoeNick Beckstead Grant to support research into the global politics of artificial intelligence, led by Allan Dafoe, Assistant Professor of Political Science, who will conduct part of the research at the Future of Humanity Institute in Oxford, United Kingdom, over the next year. Funds from the two gifts will support the hiring of two full-time research assistants, travel, conferences, and other expenses related to the research efforts, as well as salary, relocation, and health insurance expenses related to Professor Dafoe’s work in Oxford. Announced: 2017-09-28.
Stanford University (Earmark: Percy Liang)1,337,600.00422017-05AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liangDaniel Dewey Donation process: The grant is the result of a proposal written by Percy Liang. The writing of the proposal was funded by a previous planning grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant made in March 2017. The proposal was reviewed by two of Open Phil's technical advisors, who both felt largely positive about the proposed research directions.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant is intended to fund about 20% of Percy Liang's time as well as about three graduate students. Liang expects to focus on a subset of these topics: robustness against adversarial attacks on ML systems, verification of the implementation of ML systems, calibrated/uncertainty-aware ML, and natural language supervision.

Donor reason for selecting the donee: The grant page says: "Both [technical advisors who reviewed te garnt proposal] felt largely positive about the proposed research directions and recommended to Daniel that Open Philanthropy make this grant, despite some disagreements [...]."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is likely determined by the grant proposal details; it covers about 20% of Percy Liang's time as well as about three graduate students.

Donor reason for donating at this time (rather than earlier or later): The timing is likely determined by the timing of the grant proposal being ready.
Intended funding timeframe in months: 48

Donor thoughts on making further donations to the donee: The grant page says: "At the end of the grant period, we will decide whether to renew our support based on our technical advisors’ evaluation of Professor Liang’s work so far, his proposed next steps, and our assessment of how well his research program has served as a pipeline for students entering the field. We are optimistic about the chances of renewing our support. We think the most likely reason we might choose not to renew would be if Professor Liang decides that AI alignment research isn’t a good fit for him or for his students."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/stanford-university-ai-alignment-research-2021/ suggests satisfaction with the grant outcome.

Other notes: Announced: 2017-09-26.
UCLA School of Law (Earmark: Edward Parson|Richard Re)1,536,222.00352017-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ucla-artificial-intelligence-governanceHelen Toner Grant to support work on governance related to AI risk, led by Edward Parson and Richard Re. Announced: 2017-07-27.
Future of Life Institute100,000.001442017-05Global catastrophic risks/AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017Nick Beckstead Intended use of funds (category): Organizational general support

Intended use of funds: Grant for general support. However, the primary use of the grant will be to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant

Donor retrospective of the donation: The followup grants in 2018 and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee.

Other notes: Announced: 2017-09-27.
OpenAI30,000,000.0032017-03AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support-- Donation process: According to the grant page Section 4 Our process: "OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors."

Intended use of funds (category): Organizational general support

Intended use of funds: The funds will be used for general support of OpenAI, disbursed as 10 million USD per year over the next three years. The funding is also accompanied by Holden Karnofsky (Open Phil executive director) joining the OpenAI Board of Directors. Karnofsky and one other board member will oversee OpenAI's safety and governance work.

Donor reason for selecting the donee: Open Phil says that, given its interest in AI safety, it is looking to fund and closely partner with orgs that (a) are working to build transformative AI, (b) are advancing the state of the art in AI research, and (c) employ top AI research talent. OpenAI and DeepMind are two such orgs, and OpenAI is particularly appealing due to "our shared values, different starting assumptions and biases, and potential for productive communication." Open Phil is looking to gain the following from a partnership: (i) Improve its understanding of AI research, (ii) Improve its ability to generically achieve goals regarding technical AI safety research, (iii) Better position Open Phil to promote its ideas and goals.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page Section 2.2 "A note on why this grant is larger than others we’ve recommended in this focus area" explains the reasons for the large grant amount (relative to other grants by Open Phil so far). Reasons listed are: (i) Hits-based giving philosophy, described at https://www.openphilanthropy.org/blog/hits-based-giving in depth, (ii) Disproportionately high importance of the cause if transformative AI is developed in the next 20 years, and likelihood that OpenAI will be very important if that happens, (iii) Benefits of working closely with OpenAI in informing Open Phil's understanding of AI safety, (iv) Field-building benefits, including promoting an AI safety culture, (v) Since OpenAI has a lot of other funding, Open Phil can grant a large amount while still not raising the concern of dominating OpenAI's funding.

Donor reason for donating at this time (rather than earlier or later): No specific timing considerations are provided. It is likely that the timing of the grant is determined by when OpenAI first approached Open Phil and the time taken for the due diligence.
Intended funding timeframe in months: 36

Other notes: External discussions include http://benjaminrosshoffman.com/an-openai-board-seat-is-surprisingly-expensive/ cross-posted to https://www.lesswrong.com/posts/2z5vrsu7BoiWckLby/an-openai-board-seat-is-surprisingly-expensive (GW, IR) (post by Ben Hoffman, attracting comments at both places), https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31.
Stanford University (Earmark: Percy Liang)25,000.001822017-03AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grantDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to enable Professor Liang to spend significant time engaging in our process to determine whether to provide his research group with a much larger grant." The larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang would be made.

Donor thoughts on making further donations to the donee: The grant is a planning grant intended to help Percy Liang write up a proposal for a bigger grant.

Donor retrospective of the donation: The proposal whose writing was funded by this planning grant would lead to a much larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang in May 2017.

Other notes: Announced: 2017-09-26.
Future of Humanity Institute1,994,000.00302017-03AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support-- Grant for general support. A related grant specifically for biosecurity work was made in 2016-09, earlier for logistical reasons. Announced: 2017-03-06.
Distill25,000.001822017-03AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/distill-prize-clarity-machine-learning-general-supportDaniel Dewey Grant covers $25,000 out of a total $125,000 initial endowment for the Distill prize https://distill.pub/prize/ administered by the Open Philanthropy Project. Other contributors to the endowment include Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. The Open Philanthropy Project grant page says: "Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill." Announced: 2017-08-11.
AI Impacts32,000.001772016-12AI safety/strategyhttps://www.openphilanthropy.org/grants/ai-impacts-general-support-2016/-- Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewals in 2018 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 and 2020 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020 suggest continued satisfaction with the grantee.

Other notes: Announced: 2017-02-02.
Electronic Frontier Foundation (Earmark: Peter Eckersley)199,000.001162016-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/electronic-frontier-foundation-ai-social-- Grant funded work by Peter Eckersley, in whom the Open Philanthropy Project had confidence. A followup conversation with Peter Eckersley and Jeremy Gillula of the grantee organization, held on 2016-05-26, is documented at https://www.openphilanthropy.org/sites/default/files/Peter_Eckersley_Jeremy_Gillula_05-26-16_%28public%29.pdf. Announced: 2016-12-15.
Machine Intelligence Research Institute500,000.00662016-08AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-- Donation process: The grant page describes the process in Section 1. Background and Process. "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design”. [...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The link is to https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research. In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs."

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page, Section 3.1 Budget and room for more funding, says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."

Donor reason for selecting the donee: The grant page, Section 3.2 Case for the grant, gives five reasons: (1) Uncertainty about technical assessment (i.e., despite negative technical assessment, there is a chance that MIRI's work is high-potential), (2) Increasing research supply and diversity in the important-but-neglected AI safety space, (3) Potential for improvement of MIRI's research program, (4) Recognition of MIRI's early articulation of the value alignment problem, (5) Other considerations: (a) role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) "participation grant" for time spent in evaluation process, (e) grant in advance of potential need for significant help from MIRI for consulting on AI safety

Donor reason for donating that amount (rather than a bigger or smaller amount): The maximal funding that Open Phil would give MIRI would be $1.5 million per year. However, Open Phil recommended a partial amount, due to some reservations, described on the grant page, Section 2 Our impression of MIRI’s Agent Foundations research: (1) Assessment that it is not likely relevant to reducing risks from advanced AI, especially to the risks from transformative AI in the next 20 years, (2) MIRI has not made much progress toward its agenda, with internal and external reviewers describing their work as technically nontrivial, but unimpressive, and compared with what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."

Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: Section 4 Plans for follow-up says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."

Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) (not an official Open Phil statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the grant amount and the three-year commitment are both more positive for MIRI than the expectations at the time of the original grant.

Other notes: The grant page links to commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
Center for Human-Compatible AI5,555,550.00132016-08AI safety/technical researchhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-- Donation process: The grant page section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process says: "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."

Intended use of funds (category): Organizational general support

Intended use of funds: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding says: "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year." The funding from Open Phil will be used toward this budget. An earlier section of the grant page says that the Center's research topics will include value alignment, value functions defined by partially observable and partially defined terms, the structure of human value systems, and conceptual questions including the properties of ideal value systems.

Donor reason for selecting the donee: The grant page gives these reasons: (1) "We expect the existence of the Center to make it much easier for researchers interested in exploring AI safety to discuss and learn about the topic, and potentially consider focusing their careers on it." (2) "The Center may allow researchers already focused on AI safety to dedicate more of their time to the topic and produce higher-quality research." (3) "We hope that the existence of a well-funded academic center at a major university will solidify the place of this work as part of the larger fields of machine learning and artificial intelligence." Also, counterfactual impact: "Professor Russell would not plan to announce a new Center of this kind without substantial additional funding. [...] We are not aware of other potential [substantial] funders, and we believe that having long-term support in place is likely to make it easier for Professor Russell to recruit for the Center."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is based on budget estimates in https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year."

Donor reason for donating at this time (rather than earlier or later): Timing seems to have been determined by the time it took to work out the details of the new center after Open Phil decided to make AI safety a major priority in 2016. According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 in November 2019, five-year renewal https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2021 in January 2021, as well as many grants to Berkeley Existential Risk Initiative (BERI) to collaborate with the grantee, suggest that Open Phil would continue to think highly of the grantee, and stand by its reasoning.

Other notes: Note that the grant recipient in the Open Phil database has been listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.
George Mason University (Earmark: Robin Hanson)277,435.00982016-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios-- Earmarked for research by Robin Hanson. The grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence for background. Original amount $264,525. Increased to $277,435 through the addition of $12,910 in July 2017 to cover an increase in George Mason University’s instructional release costs (teaching buyouts). Announced: 2016-07-07.
Future of Life Institute1,186,000.00452015-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction-- Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks that describes strategy and developments prior to the grant. An update on the grant was posted in 2017-04 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant discussing the impressions of Howie Lempel and Daniel Dewey of the grant, its effect on Open Phil, and Open Phil's role. Announced: 2015-08-26.

Similarity to other donors

Sorry, we couldn't find any similar donors.