Open Philanthropy Project donations made (filtered to cause areas matching AI safety)

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

Item | Value
Country | United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | GiveWell, Good Ventures
Best overview URL | https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username | openphilanthropy
Website | https://www.openphilanthropy.org/
Donations URL | https://www.openphilanthropy.org/giving/grants
Twitter username | open_phil
PredictionBook username | OpenPhilUnofficial
Page on philosophy informing donations | https://www.openphilanthropy.org/about/vision-and-values
Grant application process page | https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data | continuous updates
Regularity with which Donations List Website updates donations data (after donor update) | continuous updates
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)
Org Watch page | https://orgwatch.issarice.com/?organization=Open+Philanthropy+Project

Brief history: The Open Philanthropy Project (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began making strong progress in 2013, and formally separated from GiveWell in June 2017.

Brief notes on broad donor philosophy and major focus areas: The Open Philanthropy Project is focused on openness in two ways: being open to ideas about cause selection, and being open in explaining what it is doing. It has endorsed "hits-based giving" and works in areas including AI risk, biosecurity and pandemic preparedness, other global catastrophic risks, criminal justice reform (United States), animal welfare, and some other areas.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" for which the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Phil than for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, though the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but are approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment, and is the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). Lags are typically a few months between decision and grant, and a few months between grant and announcement, due to time spent on grant writeup approval.
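As a rough illustration of this three-date structure, here is a minimal Python sketch; the field names and the example's day-of-month are assumptions for illustration, not the site's actual schema.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class GrantDates:
        decision_date: Optional[date]      # internal grant decision date; not publicly revealed
        donation_date: date                # formal grant commitment; the published grant date
        announcement_date: Optional[date]  # grant announced; grant page made publicly visible

        def announcement_lag_days(self) -> Optional[int]:
            # Lag between the published grant date and its public announcement.
            if self.announcement_date is None:
                return None
            return (self.announcement_date - self.donation_date).days

    # Example: the 2016 MIRI grant listed below (grant date 2016-08,
    # announced 2016-09-06); the day-of-month of the grant date is assumed.
    miri = GrantDates(None, date(2016, 8, 1), date(2016, 9, 6))
    print(miri.announcement_lag_days())  # 36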

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant, https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york, was made by Cari Tuna personally. The majority of grants are financed by the Open Philanthropy Project Fund; however, the source of financing is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed financing is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. Neither the fact that a grant is multi-year nor the distribution of the grant amount across years is always explicitly stated on the grant page; see the same comment for more information. Some grants to universities are labeled "gifts", but this is a donee classification based on the different levels of bureaucratic overhead and funder control associated with grants versus gifts; again, see the comment linked above for more information.
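To make the equal-annual-split case concrete, here is a minimal sketch; the rule of assigning any rounding remainder to the first year is an assumption, since grant pages do not always state the schedule.

    def annual_disbursements(total_cents: int, years: int) -> list:
        # Split a grant into equal annual tranches, assigning any remainder from
        # integer division to the first year (an assumption; actual schedules are
        # often, but not always, equal across years).
        base, remainder = divmod(total_cents, years)
        return [base + remainder if y == 0 else base for y in range(years)]

    # Example: the $55,000,000 Georgetown grant listed below, to be given over
    # five years -- an equal split is $11,000,000 per year.
    print(annual_disbursements(55_000_000 * 100, 5))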

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy Project database are not listed on Donations List Website as being under the Open Philanthropy Project. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here either; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was included. All other grants publicly disclosed by the Open Philanthropy Project that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by the Open Philanthropy Project are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding.

Donor donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 486 | 265,000 | 919,283 | 2,000 | 42,000 | 75,000 | 100,000 | 200,000 | 265,000 | 400,000 | 515,000 | 1,000,000 | 1,927,640 | 55,000,000
History of philanthropy | 4 | 25,000 | 60,708 | 2,000 | 2,000 | 2,000 | 25,000 | 25,000 | 25,000 | 50,000 | 50,000 | 165,833 | 165,833 | 165,833
AI safety | 29 | 403,890 | 1,909,477 | 2,539 | 25,000 | 100,000 | 159,000 | 277,435 | 403,890 | 525,000 | 1,186,000 | 1,536,222 | 3,750,000 | 30,000,000
Criminal justice reform | 191 | 150,000 | 357,017 | 5,000 | 40,000 | 50,000 | 75,000 | 100,000 | 150,000 | 200,000 | 275,000 | 404,800 | 800,000 | 4,000,000
Animal welfare | 100 | 430,000 | 669,108 | 6,683 | 50,000 | 100,000 | 150,000 | 265,000 | 430,000 | 500,000 | 625,400 | 1,000,000 | 1,347,742 | 10,000,000
Effective altruism | 7 | 1,125,000 | 1,533,193 | 10,000 | 10,000 | 153,750 | 1,032,947 | 1,032,947 | 1,125,000 | 2,500,000 | 2,500,000 | 2,688,000 | 3,222,653 | 3,222,653
Biosecurity and pandemic preparedness | 33 | 454,025 | 1,392,466 | 14,605 | 32,621 | 49,942 | 152,950 | 300,000 | 454,025 | 500,000 | 643,415 | 1,904,942 | 3,500,000 | 16,000,000
Migration policy | 13 | 375,000 | 555,323 | 24,000 | 30,000 | 50,000 | 150,000 | 360,000 | 375,000 | 400,000 | 700,000 | 1,184,720 | 1,310,483 | 1,800,000
Macroeconomic stabilization policy | 18 | 305,000 | 465,306 | 31,500 | 100,000 | 100,000 | 200,000 | 250,000 | 305,000 | 425,000 | 500,000 | 750,000 | 1,200,000 | 1,429,000
Land use reform | 12 | 250,000 | 224,155 | 37,000 | 40,000 | 40,000 | 50,000 | 97,865 | 250,000 | 300,000 | 350,000 | 350,000 | 400,000 | 500,000
Scientific research | 34 | 1,044,501 | 2,340,816 | 40,000 | 81,500 | 200,000 | 300,000 | 780,000 | 1,044,501 | 1,695,376 | 2,350,000 | 3,000,000 | 5,000,000 | 17,500,000
Global poverty | 3 | 300,000 | 1,115,565 | 46,696 | 46,696 | 46,696 | 46,696 | 300,000 | 300,000 | 300,000 | 3,000,000 | 3,000,000 | 3,000,000 | 3,000,000
Organ donation | 3 | 100,000 | 116,667 | 50,000 | 50,000 | 50,000 | 50,000 | 100,000 | 100,000 | 100,000 | 200,000 | 200,000 | 200,000 | 200,000
Global catastrophic risks | 14 | 493,425 | 1,536,335 | 76,234 | 100,000 | 100,000 | 260,000 | 400,352 | 493,425 | 776,095 | 2,000,000 | 2,982,206 | 3,000,000 | 8,070,371
Biomedical research | 2 | 100,000 | 200,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 300,000 | 300,000 | 300,000 | 300,000 | 300,000
International relations | 2 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000
Drug policy | 4 | 150,000 | 468,658 | 103,000 | 103,000 | 103,000 | 150,000 | 150,000 | 150,000 | 250,000 | 250,000 | 1,371,630 | 1,371,630 | 1,371,630
Rationality improvement | 4 | 340,000 | 669,750 | 304,000 | 304,000 | 304,000 | 340,000 | 340,000 | 340,000 | 1,000,000 | 1,000,000 | 1,035,000 | 1,035,000 | 1,035,000
Forecasting | 2 | 500,000 | 900,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 1,300,000 | 1,300,000 | 1,300,000 | 1,300,000 | 1,300,000
Public services improvement and transparency | 1 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000 | 500,000
Global health | 3 | 724,929 | 741,643 | 500,000 | 500,000 | 500,000 | 500,000 | 724,929 | 724,929 | 724,929 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000
Other areas | 1 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000 | 510,000
Politics | 2 | 628,600 | 1,214,300 | 628,600 | 628,600 | 628,600 | 628,600 | 628,600 | 628,600 | 1,800,000 | 1,800,000 | 1,800,000 | 1,800,000 | 1,800,000
criminal justice reform | 1 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000
Cause prioritization | 1 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284
(blank) | 1 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000 | 5,000,000
Security | 1 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000
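For readers who want to reproduce columns like these from raw donation amounts, here is a minimal Python sketch. It assumes a "lower" (nearest-rank-style) percentile method, which matches the table's property that every percentile value is an actual donation amount; the site's exact method is an assumption here, and the example amounts are hypothetical.

    import numpy as np

    def donation_stats(amounts):
        # Returns [count, median, mean, minimum, 10th..90th percentiles, maximum]
        # in the column order of the table above. Percentiles use the "lower"
        # method (numpy >= 1.22) so that every reported value is an actual
        # donation amount; the site's exact method is an assumption here.
        a = np.sort(np.asarray(amounts, dtype=float))
        row = [int(a.size), float(np.median(a)), float(a.mean()), float(a[0])]
        row += [float(np.percentile(a, p, method="lower")) for p in range(10, 100, 10)]
        row.append(float(a[-1]))
        return row

    # Hypothetical amounts, not actual grant data:
    print(donation_stats([2_000, 25_000, 100_000, 400_000, 30_000_000]))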

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area | Number of donations | Number of donees | Total | 2019 | 2018 | 2017 | 2016 | 2015
AI safety | 29 | 21 | 55,374,842.00 | 250,000.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00
Security | 1 | 1 | 55,000,000.00 | 55,000,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Total | 30 | 22 | 110,374,842.00 | 55,250,000.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area | Number of donations | Number of donees | Total | 2019 | 2018 | 2017 | 2016 | 2015
AI safety | 29 | 21 | 55,374,842.00 | 250,000.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00
Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety | 1 | 1 | 55,000,000.00 | 55,000,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Classified total | 30 | 22 | 110,374,842.00 | 55,250,000.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00
Unclassified total | 0 | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Total | 30 | 22 | 110,374,842.00 | 55,250,000.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00
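The compound subcause row above suggests how a view "filtered to cause areas matching AI safety" can pick up multi-classified grants. Here is a guess at such a matching rule, as a minimal sketch; the site's actual rule may differ.

    def matches_cause(cause_area: str, query: str) -> bool:
        # True if any slash-delimited component of a (sub)cause area string equals
        # the query, case-insensitively. A guess at the matching rule behind a page
        # "filtered to cause areas matching AI safety"; the actual rule may differ.
        return query.strip().lower() in (part.strip().lower() for part in cause_area.split("/"))

    # The Georgetown grant is classified under a compound string, so it shows
    # up in this AI-safety-filtered view:
    print(matches_cause(
        "Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety",
        "AI safety"))  # True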

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee | Cause area | Metadata | Total | 2019 | 2018 | 2017 | 2016 | 2015
Georgetown University | | FB Tw WP Site | 55,000,000.00 | 55,000,000.00 | 0.00 | 0.00 | 0.00 | 0.00
OpenAI | AI safety | FB Tw WP Site TW | 30,000,000.00 | 0.00 | 0.00 | 30,000,000.00 | 0.00 | 0.00
Center for Human-Compatible AI | AI safety | Site TW | 5,555,550.00 | 0.00 | 0.00 | 0.00 | 5,555,550.00 | 0.00
Machine Intelligence Research Institute | AI safety | FB Tw WP Site CN GS TW | 4,400,000.00 | 0.00 | 150,000.00 | 3,750,000.00 | 500,000.00 | 0.00
University of California, Berkeley | | FB Tw WP Site | 2,595,016.00 | 0.00 | 1,145,000.00 | 1,450,016.00 | 0.00 | 0.00
Montreal Institute for Learning Algorithms | AI capabilities/AI safety | Site | 2,400,000.00 | 0.00 | 0.00 | 2,400,000.00 | 0.00 | 0.00
Future of Humanity Institute | Global catastrophic risks/AI safety/Biosecurity and pandemic preparedness | FB Tw WP Site TW | 1,994,000.00 | 0.00 | 0.00 | 1,994,000.00 | 0.00 | 0.00
UCLA School of Law | | Tw WP Site | 1,536,222.00 | 0.00 | 0.00 | 1,536,222.00 | 0.00 | 0.00
Stanford University | | FB Tw WP Site | 1,465,139.00 | 0.00 | 102,539.00 | 1,362,600.00 | 0.00 | 0.00
Future of Life Institute | AI safety/other global catastrophic risks | FB Tw WP Site | 1,186,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1,186,000.00
AI Fellows Program | AI safety | Site | 1,135,000.00 | 0.00 | 1,135,000.00 | 0.00 | 0.00 | 0.00
Berkeley Existential Risk Initiative | AI safety/other global catastrophic risks | Site TW | 653,890.00 | 250,000.00 | 0.00 | 403,890.00 | 0.00 | 0.00
Ought | AI safety | Site | 525,000.00 | 0.00 | 525,000.00 | 0.00 | 0.00 | 0.00
University of Oxford | | FB Tw WP Site | 429,770.00 | 0.00 | 429,770.00 | 0.00 | 0.00 | 0.00
The Wilson Center | | FB Tw WP Site | 400,000.00 | 0.00 | 400,000.00 | 0.00 | 0.00 | 0.00
Yale University | | FB Tw WP Site | 299,320.00 | 0.00 | 0.00 | 299,320.00 | 0.00 | 0.00
George Mason University | | FB WP Site | 277,435.00 | 0.00 | 0.00 | 0.00 | 277,435.00 | 0.00
Electronic Frontier Foundation | | FB Tw WP Site | 199,000.00 | 0.00 | 0.00 | 0.00 | 199,000.00 | 0.00
AI Scholarships | | | 159,000.00 | 0.00 | 159,000.00 | 0.00 | 0.00 | 0.00
AI Impacts | AI safety | Site | 132,000.00 | 0.00 | 100,000.00 | 0.00 | 32,000.00 | 0.00
Distill | AI capabilities/AI safety | Tw Site | 25,000.00 | 0.00 | 0.00 | 25,000.00 | 0.00 | 0.00
GoalsRL | AI safety | Site | 7,500.00 | 0.00 | 7,500.00 | 0.00 | 0.00 | 0.00
Total | -- | -- | 110,374,842.00 | 55,250,000.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Influencer | Number of donations | Number of donees | Total | 2019 | 2018 | 2017
Luke Muehlhauser | 2 | 2 | 55,400,000.00 | 55,000,000.00 | 400,000.00 | 0.00
Daniel Dewey | 14 | 9 | 6,665,545.00 | 250,000.00 | 3,174,039.00 | 3,241,506.00
Nick Beckstead | 3 | 3 | 4,479,090.00 | 0.00 | 429,770.00 | 4,049,320.00
Helen Toner | 1 | 1 | 1,536,222.00 | 0.00 | 0.00 | 1,536,222.00
Claire Zabel | 1 | 1 | 150,000.00 | 0.00 | 150,000.00 | 0.00
Classified total | 21 | 15 | 68,230,857.00 | 55,250,000.00 | 4,153,809.00 | 8,827,048.00
Unclassified total | 9 | 9 | 42,143,985.00 | 0.00 | 0.00 | 34,394,000.00
Total | 30 | 22 | 110,374,842.00 | 55,250,000.00 | 4,153,809.00 | 43,221,048.00

Graph of spending by influencer and year (incremental, not cumulative)


Graph of spending by influencer and year (cumulative)


Donation amounts by disclosures and year

If you hover over a cell for a given disclosure and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Disclosures | Number of donations | Number of donees | Total | 2017 | 2016 | 2015
Paul Christiano | 2 | 2 | 30,500,000.00 | 30,000,000.00 | 500,000.00 | 0.00
Dario Amodei | 1 | 1 | 30,000,000.00 | 30,000,000.00 | 0.00 | 0.00
Holden Karnofsky | 1 | 1 | 30,000,000.00 | 30,000,000.00 | 0.00 | 0.00
Daniel Dewey | 4 | 4 | 5,171,435.00 | 4,394,000.00 | 777,435.00 | 0.00
Nick Beckstead | 4 | 4 | 3,957,435.00 | 1,994,000.00 | 777,435.00 | 1,186,000.00
Chris Olah | 1 | 1 | 2,400,000.00 | 2,400,000.00 | 0.00 | 0.00
Carl Shulman | 1 | 1 | 1,994,000.00 | 1,994,000.00 | 0.00 | 0.00
Unknown, generic, or multiple | 2 | 2 | 1,686,000.00 | 0.00 | 500,000.00 | 1,186,000.00
Helen Toner | 2 | 2 | 1,686,000.00 | 0.00 | 500,000.00 | 1,186,000.00
Luke Muehlhauser | 2 | 2 | 1,686,000.00 | 0.00 | 500,000.00 | 1,186,000.00
Ben Hoffman | 1 | 1 | 1,186,000.00 | 0.00 | 0.00 | 1,186,000.00
Jacob Steinhardt | 1 | 1 | 500,000.00 | 0.00 | 500,000.00 | 0.00
Classified total | 6 | 6 | 36,357,435.00 | 34,394,000.00 | 777,435.00 | 1,186,000.00
Unclassified total | 24 | 17 | 74,017,407.00 | 8,827,048.00 | 5,786,550.00 | 0.00
Total | 30 | 22 | 110,374,842.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00

Graph of spending by disclosures and year (incremental, not cumulative)


Graph of spending by disclosures and year (cumulative)


Donation amounts by country and year

Sorry, we couldn't find any country information.

Full list of documents in reverse chronological order (9 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
Occasional update July 5 2018 | 2018-07-05 | Katja Grace | AI Impacts | Open Philanthropy Project, Anonymous | AI Impacts | Donee periodic update | AI safety | Katja Grace gives an update on the situation with AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation are mentioned, as are team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn
The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks. | 2018-02-27 | Robert Wiblin, Kieran Harris, Holden Karnofsky | 80,000 Hours | Open Philanthropy Project | | Broad donor strategy | AI safety|Global catastrophic risks|Biosecurity and pandemic preparedness|Global health and development|Animal welfare|Scientific research | This interview, with full transcript, is an episode of the 80,000 Hours podcast. In the interview, Karnofsky provides an overview of the cause prioritization and grantmaking strategy of the Open Philanthropy Project, and also notes that the Open Philanthropy Project is hiring for a number of positions
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017 | 2017-12-21 | Holden Karnofsky | Open Philanthropy Project | Jaime Yassif, Chloe Cockburn, Lewis Bollard, Nick Beckstead, Daniel Dewey | Center for International Security and Cooperation, Johns Hopkins Center for Health Security, Good Call, Court Watch NOLA, Compassion in World Farming USA, Wild-Animal Suffering Research, Effective Altruism Funds, Donor lottery, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Berkeley Existential Risk Initiative, Centre for Effective Altruism, 80,000 Hours, Alliance to Feed the Earth in Disasters | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform | Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups
The Open Philanthropy Project AI Fellows Program | 2017-09-12 | | Open Philanthropy Project | Open Philanthropy Project | | Broad donor strategy | AI safety | This announces an AI Fellows Program to support students doing Ph.D. work in AI-related fields who have an interest in AI safety. See https://www.facebook.com/vipulnaik.r/posts/10213116327718748 and https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussions
My current thoughts on MIRI’s highly reliable agent design work | 2017-07-07 | Daniel Dewey | Effective Altruism Forum | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Post discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions, which could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2016 | 2016-12-14 | Holden Karnofsky | Open Philanthropy Project | Jaime Yassif, Chloe Cockburn, Lewis Bollard, Daniel Dewey, Nick Beckstead | Blue Ribbon Study Panel on Biodefense, Alliance for Safety and Justice, Cosecha, Animal Charity Evaluators, Compassion in World Farming USA, Machine Intelligence Research Institute, Future of Humanity Institute, 80,000 Hours, Ploughshares Fund | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policy | Open Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas
Machine Intelligence Research Institute — General Support | 2016-09-06 | Open Philanthropy Project | Open Philanthropy Project | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Open Phil writes about the grant at considerable length, more than it usually does, because it says that it found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant; see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF) | 2016-09-06 | | Open Philanthropy Project | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Reviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06)
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity | 2016-05-06 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | | Review of current state of cause area | AI safety | In this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority

Full list of donations in reverse chronological order (30 donations)

Donee | Amount (current USD) | Amount rank (out of 486) | Donation date | Cause area | URL | Influencer | Notes
Berkeley Existential Risk Initiative | 250,000.00 | 224 | 2019-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers | Daniel Dewey | Grant to BERI to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI). Follows previous support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for the launch of CHAI and the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration to collaborate with CHAI. Announced: 2019-03-05.
Georgetown University | 55,000,000.00 | 486 | 2019-01 | Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety | https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology | Luke Muehlhauser | Grant to establish the Center for Security and Emerging Technology (CSET), a new think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET is led by Jason Matheny, former Assistant Director of National Intelligence and Director of the Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community’s research organization. Grant money is to be given over five years. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. Announced: 2019-02-28.
University of California, Berkeley (Earmark: Pieter Abbeel|Aviv Tamar) | 1,145,000.00 | 406 | 2018-11 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-california-berkeley-artificial-intelligence-safety-research-2018 | Daniel Dewey | Total across two grants over three years for machine learning researchers Pieter Abbeel and Aviv Tamar to study uses of generative models for robustness and interpretability. This funding will allow Mr. Abbeel and Mr. Tamar to fund PhD students and summer undergraduates to work on classifiers, imitation learning systems, and reinforcement learning systems. Announced: 2018-12-12.
GoalsRL (Earmark: Ashley Edwards) | 7,500.00 | 5 | 2018-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learning | Daniel Dewey | Discretionary grant to offset travel, registration, and other expenses associated with attending the GoalsRL 2018 workshop on goal specifications for reinforcement learning. The workshop was organized by Ashley Edwards, a recent computer science PhD candidate interested in reward learning. Announced: 2018-10-05.
Stanford University (Earmark: Dan Boneh|Florian Tramer) | 100,000.00 | 115 | 2018-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramer | Daniel Dewey | Grant is a "gift" to Stanford University to support machine learning security research led by Professor Dan Boneh and his PhD student Florian Tramer. Machine learning security probes the worst-case performance of learned models. Believed to be a way of pushing in the direction of more AI safety concern in machine learning research and AI development. Announced: 2018-09-07.
The Wilson Center | 400,000.00 | 287 | 2018-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series | Luke Muehlhauser | Grant over two years to support a series of in-depth AI policy seminars. Named for President Woodrow Wilson, the Wilson Center is a non-partisan policy forum for tackling global issues through independent research and open dialogue. Open Phil believes the seminar series could help raise the salience of AI policy in Washington, D.C. policymaking circles, and could help identify and empower one or more influential thinkers in those circles, a key component of the Open Phil AI policy strategy. Announced: 2018-08-02.
University of Oxford (Earmark: Allan Dafoe) | 429,770.00 | 303 | 2018-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/oxford-university-global-politics-of-ai-dafoe | Nick Beckstead | Grant to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom. The Open Philanthropy Project recommended additional funds to support this work in 2017, while Professor Dafoe was at Yale. Continuation of grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe. Announced: 2018-07-20.
AI Impacts | 100,000.00 | 115 | 2018-06 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 | Daniel Dewey | Discretionary grant via the Machine Intelligence Research Institute. AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence. Renewal of the December 2016 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support. Announced: 2018-06-28.
Machine Intelligence Research Institute | 150,000.00 | 172 | 2018-06 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program | Claire Zabel | Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. Announced: 2018-06-28.
AI Fellows Program (Earmark: Aditi Raghunathan|Chris Maddison|Felix Berkenkamp|Jon Gauthier|Michael Janner|Noam Brown|Ruth Fong) | 1,135,000.00 | 405 | 2018-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-fellows-program-2018 | Daniel Dewey | Grants to 7 AI Fellows pursuing research relevant to AI risk. More details at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence. Announced: 2018-06-01.
Ought | 525,000.00 | 344 | 2018-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support | Daniel Dewey | Grantee has a mission to “leverage machine learning to help people think.” Ought plans to conduct research on deliberation and amplification, a concept Open Phil considers relevant to AI alignment. The funding, combined with another grant from Open Philanthropy Project technical advisor Paul Christiano, is intended to allow Ought to hire up to three new staff members and provide one to three years of support for Ought’s work, depending on how quickly they hire. Announced: 2018-05-31.
Stanford University | 2,539.00 | 2 | 2018-04 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learning | Daniel Dewey | Discretionary grant to support the Neural Information Processing Systems (NIPS) workshop “Machine Learning and Computer Security” (https://nips.cc/Conferences/2017/Schedule?showEvent=8775). Announced: 2018-04-19.
AI Scholarships (Earmark: Dmitrii Krasheninnikov|Michael Cohen) | 159,000.00 | 186 | 2018-02 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-scholarships-2018 | Daniel Dewey | Discretionary grant; total across grants to two artificial intelligence researchers, both over two years. The funding is intended to be used for the students’ tuition, fees, living expenses, and travel during their respective degree programs, and is part of an overall effort to grow the field of technical AI safety by supporting value-aligned and qualified early-career researchers. Recipients are Dmitrii Krasheninnikov (master’s degree, University of Amsterdam) and Michael Cohen (master’s degree, Australian National University). Announced: 2018-07-26.
Machine Intelligence Research Institute | 3,750,000.00 | 470 | 2017-10 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 | Nick Beckstead | Grant over three years for general support. Represents a renewal and increase of the previous $500,000 grant in 2016, announced at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support. The increase is due to a positive review by a leading researcher of the logical induction work by MIRI, and a general increase in Open Phil grantmaking in AI risk, which makes it less likely that a large grant to MIRI will be construed as too much of an endorsement of their approach. Donee (MIRI) also blogged about this grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Announced: 2017-11-08.
University of California, Berkeley (Earmark: Sergey Levine|Anca Dragan) | 1,450,016.00 | 423 | 2017-10 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-levine-dragan | Daniel Dewey | Pair of grants to support AI safety work led by Professors Sergey Levine and Anca Dragan, who would each devote half their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems. Announced: 2017-10-20.
Berkeley Existential Risk Initiative | 403,890.00 | 295 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration | Daniel Dewey | Grant to support core functions of the grantee, and to help them provide contract workers for the Center for Human-Compatible AI (CHAI) housed at the University of California, Berkeley, also an Open Phil grantee (see https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for information on that grant). Open Phil also sees this as a promising model for providing assistance to other BERI clients in the future. Announced: 2017-09-28.
Yale University (Earmark: Allan Dafoe) | 299,320.00 | 254 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe | Nick Beckstead | Grant to support research into the global politics of artificial intelligence, led by Assistant Professor of Political Science Allan Dafoe, who will conduct part of the research at the Future of Humanity Institute in Oxford, United Kingdom over the next year. Funds from the two gifts will support the hiring of two full-time research assistants, travel, conferences, and other expenses related to the research efforts, as well as salary, relocation, and health insurance expenses related to Professor Dafoe’s work in Oxford. Announced: 2017-09-28.
Montreal Institute for Learning Algorithms (Earmark: Yoshua Bengio) | 2,400,000.00 | 447 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research | -- | Grant to support research to improve the positive long-term impact of artificial intelligence on society, made mainly due to the star power of researcher Yoshua Bengio, who influences many young ML/AI researchers. Detailed writeup available. See also https://www.facebook.com/permalink.php?story_fbid=10110258359382500&id=13963931 for a Facebook share by David Krueger, a member of the grantee organization; the comments include some discussion about the grantee. Announced: 2017-07-19.
Stanford University (Earmark: Percy Liang) | 1,337,600.00 | 417 | 2017-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang | Daniel Dewey | Grant awarded over four years (July 2017 to July 2021) to support research by Professor Percy Liang and three graduate students on AI safety and alignment. The funds will be split approximately evenly across the four years (i.e., roughly $320,000 to $350,000 per year). Preceded by the $25,000 planning grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang. Announced: 2017-09-26.
UCLA School of Law (Earmark: Edward Parson|Richard Re) | 1,536,222.00 | 428 | 2017-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ucla-artificial-intelligence-governance | Helen Toner | Grant to support work on governance related to AI risk, led by Edward Parson and Richard Re. Announced: 2017-07-27.
Stanford University (Earmark: Percy Liang) | 25,000.00 | 22 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant | Daniel Dewey | Grant awarded to Professor Percy Liang to spend significant time engaging in the Open Philanthropy Project grant application process; it led to the larger $1,337,600 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang. Announced: 2017-09-26.
Distill | 25,000.00 | 22 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/distill-prize-clarity-machine-learning-general-support | Daniel Dewey | Grant covers $25,000 of a total $125,000 initial endowment for the Distill prize (https://distill.pub/prize/), administered by the Open Philanthropy Project. Other contributors to the endowment include Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. The Open Philanthropy Project grant page says: "Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill." Announced: 2017-08-11.
OpenAI | 30,000,000.00 | 485 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support | -- | Grant for general support; $10 million per year over three years. The grant write-up explains the reasons for this grant being larger than other grants in the cause area. External discussions include https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in the Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31.
Future of Humanity Institute | 1,994,000.00 | 439 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support | -- | Grant for general support. A related grant specifically for biosecurity work was made in 2016-09, earlier for logistical reasons. Announced: 2017-03-06.
AI Impacts | 32,000.00 | 36 | 2016-12 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support | -- | Grant for work on strategic questions related to potential risks from advanced artificial intelligence. Announced: 2017-02-02.
Electronic Frontier Foundation (Earmark: Peter Eckersley) | 199,000.00 | 192 | 2016-11 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/electronic-frontier-foundation-ai-social | -- | Grant funded work by Peter Eckersley, in whom the Open Philanthropy Project had confidence. A followup conversation with Peter Eckersley and Jeremy Gillula of the grantee organization, held 2016-05-26, is at https://www.openphilanthropy.org/sites/default/files/Peter_Eckersley_Jeremy_Gillula_05-26-16_%28public%29.pdf. Announced: 2016-12-15.
Machine Intelligence Research Institute | 500,000.00 | 313 | 2016-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support | -- | The grant page describes the grant process in detail and links to commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant was also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
Center for Human-Compatible AI | 5,555,550.00 | 477 | 2016-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai | -- | Grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence and https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity, and states a 50% chance that, two years from the grant date, the Center will be spending at least $2 million a year and will be considered by one or more of Open Phil's relevant technical advisors to have a reasonably good reputation in the field. Note that the grant recipient in the Open Phil database is listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.
George Mason University (Earmark: Robin Hanson) | 277,435.00 | 247 | 2016-06 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios | -- | Earmarked for Robin Hanson's research. Grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence for background. Original amount $264,525; increased to $277,435 through the addition of $12,910 in July 2017 to cover an increase in George Mason University’s instructional release costs (teaching buyouts). Announced: 2016-07-07.
Future of Life Institute | 1,186,000.00 | 408 | 2015-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction | -- | Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks, which describes strategy and developments prior to the grant. An update on the grant was posted in 2017-04 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant, discussing the impressions of Howie Lempel and Daniel Dewey of the grant and of the effect on and role of Open Phil. Announced: 2015-08-26.

Similarity to other donors

Sorry, we couldn't find any similar donors.