Open Philanthropy Project donations made (filtered to cause areas matching AI safety|chicken)

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.


Basic donor information

Item | Value
Country | United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | GiveWell, Good Ventures
Best overview URL | https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username | openphilanthropy
Website | https://www.openphilanthropy.org/
Donations URL | https://www.openphilanthropy.org/giving/grants
Twitter username | open_phil
PredictionBook username | OpenPhilUnofficial
Page on philosophy informing donations | https://www.openphilanthropy.org/about/vision-and-values
Grant application process page | https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data | continuous updates
Regularity with which Donations List Website updates donations data (after donor update) | continuous updates
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)
Org Watch page | https://orgwatch.issarice.com/?organization=Open+Philanthropy+Project

Brief history: The Open Philanthropy Project (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began to make strong progress in 2013, and formally separated from GiveWell in June 2017.

Brief notes on broad donor philosophy and major focus areas: The Open Philanthropy Project is focused on openness in two senses: openness to ideas about cause selection, and openness in explaining what it is doing. It has endorsed "hits-based giving" and works in areas including AI risk, biosecurity and pandemic preparedness, and other global catastrophic risks, as well as criminal justice reform (United States), animal welfare, and some other areas.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for more questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" where the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Phil than it does for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, but the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but are approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment and the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). The lag is typically a few months between decision and grant, and a few months between grant and announcement, due to time spent on grant writeup approval.
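
As a minimal sketch of the three-date model described above (the field names are illustrative, not this site's actual schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class GrantRecord:
    """One grant, with the three dates described above (illustrative field names)."""
    donee: str
    amount_usd: float
    decision_date: Optional[date]      # internal decision date; usually not public
    donation_date: date                # formal grant commitment; the published grant date
    announcement_date: Optional[date]  # when the grant page is made publicly visible

    def grant_to_announcement_lag_days(self) -> Optional[int]:
        """Lag between formal commitment and public announcement, if known."""
        if self.announcement_date is None:
            return None
        return (self.announcement_date - self.donation_date).days

# Example: the MIRI grant listed below (donation date 2020-02, announced 2020-04-10).
# The exact day within 2020-02 is not published; the 1st is assumed for illustration.
miri = GrantRecord("Machine Intelligence Research Institute", 7_703_750,
                   None, date(2020, 2, 1), date(2020, 4, 10))
print(miri.grant_to_announcement_lag_days())  # 69 days, i.e. about two months
```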

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york was made by Cari Tuna personally. The majority of grants are financed by the Open Philanthropy Project Fund; however, the source of financing of a grant is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed financing is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. Whether a grant is multi-year, and how its amount is distributed across years, is not always explicitly stated on the grant page; see the same comment for more information. Some grants to universities are labeled "gifts", but this is a donee classification based on the different levels of bureaucratic overhead and funder control between grants and gifts; again, see the same comment for more information.

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy Project database are not listed on Donations List Website as being under Open Philanthropy Project. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was disclosed. All other grants publicly disclosed by the Open Philanthropy Project that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by the Open Philanthropy Project are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding

Donor donation statistics

Cause area Count Median Mean Minimum 10th percentile 20th percentile 30th percentile 40th percentile 50th percentile 60th percentile 70th percentile 80th percentile 90th percentile Maximum
Overall 80 500,000 1,937,228 2,539 80,400 107,200 231,677 375,000 500,000 781,498 1,000,000 1,337,600 2,325,000 55,000,000
AI safety 40 500,000 1,882,654 2,539 25,000 100,000 200,000 310,000 500,000 1,111,000 1,337,600 1,994,000 2,652,500 30,000,000
Animal welfare 38 500,000 646,633 14,961 89,392 111,986 250,000 472,864 500,000 683,000 1,000,000 1,000,000 1,000,000 2,772,430
Global catastrophic risks 1 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000
Security 1 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the donor's own classification in all cases.

Cause area Number of donations Number of donees Total 2020 2019 2018 2017 2016 2015
AI safety 40 23 75,306,176.00 11,937,834.00 8,243,500.00 4,153,809.00 43,221,048.00 6,563,985.00 1,186,000.00
Security 1 1 55,000,000.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
Animal welfare 38 24 24,572,058.00 0.00 1,888,698.00 3,729,107.00 10,264,861.00 8,689,392.00 0.00
Global catastrophic risks 1 1 100,000.00 0.00 0.00 0.00 100,000.00 0.00 0.00
Total 80 48 154,978,234.00 11,937,834.00 65,132,198.00 7,882,916.00 53,585,909.00 15,253,377.00 1,186,000.00
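
The cause-by-year totals above, and the incremental and cumulative graphs that follow, are straightforward pivots of the donations list. A minimal pandas sketch with illustrative rows (the portal itself generates these tables differently):

```python
import pandas as pd

# Illustrative rows; the real data lives in the portal's GitHub repository.
df = pd.DataFrame([
    {"donee": "OpenAI", "cause": "AI safety", "year": 2017, "amount": 30_000_000},
    {"donee": "MIRI", "cause": "AI safety", "year": 2020, "amount": 7_703_750},
    {"donee": "The Humane League", "cause": "Animal welfare", "year": 2016, "amount": 2_000_000},
])

# Incremental totals: one row per cause area, one column per year.
pivot = df.pivot_table(index="cause", columns="year", values="amount",
                       aggfunc="sum", fill_value=0)

# Cumulative totals across years, as used by the cumulative graphs.
cumulative = pivot.sort_index(axis=1).cumsum(axis=1)
print(pivot)
print(cumulative)
```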

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area Number of donations Number of donees Total 2020 2019 2018 2017 2016 2015
AI safety 40 23 75,306,176.00 11,937,834.00 8,243,500.00 4,153,809.00 43,221,048.00 6,563,985.00 1,186,000.00
Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety 1 1 55,000,000.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken 18 15 12,090,944.00 0.00 1,781,498.00 3,379,107.00 4,430,339.00 2,500,000.00 0.00
Animal welfare/factory farming/chicken/cage-free campaign/international 5 5 3,611,986.00 0.00 0.00 0.00 111,986.00 3,500,000.00 0.00
Animal welfare/factory farming/chicken/cage-free 4 4 3,257,200.00 0.00 107,200.00 150,000.00 3,000,000.00 0.00 0.00
Animal welfare/factory farming/chicken/cage-free campaign/United States 3 3 2,500,000.00 0.00 0.00 0.00 0.00 2,500,000.00 0.00
Animal welfare/factory farming/chicken and pig 1 1 1,000,000.00 0.00 0.00 0.00 1,000,000.00 0.00 0.00
Animal welfare/factory farming/chicken/turkey/pig welfare 1 1 1,000,000.00 0.00 0.00 0.00 1,000,000.00 0.00 0.00
Animal welfare/factory farming/broiler chicken 2 2 389,592.00 0.00 0.00 0.00 389,592.00 0.00 0.00
Animal welfare/factory farming/chicken and dairy 1 1 332,944.00 0.00 0.00 0.00 332,944.00 0.00 0.00
Animal welfare/factory farming/chicken/pig/cage-free 1 1 200,000.00 0.00 0.00 200,000.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/cage-free campaign/international/Brazil 1 1 100,000.00 0.00 0.00 0.00 0.00 100,000.00 0.00
Global catastrophic risks/AI safety 1 1 100,000.00 0.00 0.00 0.00 100,000.00 0.00 0.00
Animal welfare/factory farming/chicken/cage-free campaign/international/India 1 1 89,392.00 0.00 0.00 0.00 0.00 89,392.00 0.00
Classified total 80 48 154,978,234.00 11,937,834.00 65,132,198.00 7,882,916.00 53,585,909.00 15,253,377.00 1,186,000.00
Unclassified total 0 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Total 80 48 154,978,234.00 11,937,834.00 65,132,198.00 7,882,916.00 53,585,909.00 15,253,377.00 1,186,000.00

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee Cause area Metadata Total 2020 2019 2018 2017 2016 2015
Center for Security and Emerging Technology 55,000,000.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
OpenAI AI safety FB Tw WP Site TW 30,000,000.00 0.00 0.00 0.00 30,000,000.00 0.00 0.00
Machine Intelligence Research Institute AI safety FB Tw WP Site CN GS TW 14,756,250.00 7,703,750.00 2,652,500.00 150,000.00 3,750,000.00 500,000.00 0.00
Open Phil AI Fellowship 5,760,000.00 2,300,000.00 2,325,000.00 1,135,000.00 0.00 0.00 0.00
Center for Human-Compatible AI AI safety Site TW 5,755,550.00 0.00 200,000.00 0.00 0.00 5,555,550.00 0.00
The Humane League Animal welfare/Diet change/Veganism/Factory farming FB Tw WP Site TW 4,750,000.00 0.00 750,000.00 0.00 2,000,000.00 2,000,000.00 0.00
University of California, Berkeley FB Tw WP Site 3,706,016.00 0.00 1,111,000.00 1,145,000.00 1,450,016.00 0.00 0.00
Mercy For Animals Animal welfare/Diet change/Veganism/Factory farming FB Tw WP Site TW 3,375,000.00 0.00 0.00 375,000.00 0.00 3,000,000.00 0.00
Animal Equality FB Tw WP Site 3,272,430.00 0.00 0.00 2,772,430.00 0.00 500,000.00 0.00
Ought AI safety Site 3,118,333.00 1,593,333.00 1,000,000.00 525,000.00 0.00 0.00 0.00
Montreal Institute for Learning Algorithms AI capabilities/AI safety Site 2,400,000.00 0.00 0.00 0.00 2,400,000.00 0.00 0.00
Future of Humanity Institute Global catastrophic risks/AI safety/Biosecurity and pandemic preparedness FB Tw WP Site TW 1,994,000.00 0.00 0.00 0.00 1,994,000.00 0.00 0.00
UCLA School of Law Tw WP Site 1,536,222.00 0.00 0.00 0.00 1,536,222.00 0.00 0.00
The Humane Society of the United States FB Tw WP Site 1,500,000.00 0.00 0.00 0.00 0.00 1,500,000.00 0.00
Stanford University FB Tw WP Site 1,465,139.00 0.00 0.00 102,539.00 1,362,600.00 0.00 0.00
Berkeley Existential Risk Initiative AI safety/other global catastrophic risks Site TW 1,358,890.00 0.00 955,000.00 0.00 403,890.00 0.00 0.00
Association L214 Animal welfare/broiler chicken welfare WP Site 1,347,742.00 0.00 0.00 0.00 1,347,742.00 0.00 0.00
World Animal Protection FB Tw WP Site 1,299,086.00 0.00 781,498.00 0.00 517,588.00 0.00 0.00
Future of Life Institute AI safety/other global catastrophic risks FB Tw WP Site 1,286,000.00 0.00 0.00 0.00 100,000.00 0.00 1,186,000.00
Albert Schweitzer Foundation for Our Contemporaries Animal welfare WP Site 1,111,986.00 0.00 0.00 0.00 1,111,986.00 0.00 0.00
Humane Society International FB Tw WP Site 1,000,000.00 0.00 0.00 0.00 0.00 1,000,000.00 0.00
Foundation for Food and Agricultural Research Animal welfare FB Tw Site 1,000,000.00 0.00 0.00 0.00 1,000,000.00 0.00 0.00
Compassion in World Farming FB Tw WP Site 1,000,000.00 0.00 0.00 0.00 1,000,000.00 0.00 0.00
Compassion Over Killing FB Tw WP Site 750,000.00 0.00 250,000.00 0.00 0.00 500,000.00 0.00
Anima Animal welfare/factory farming FB Tw WP Site 683,000.00 0.00 0.00 0.00 683,000.00 0.00 0.00
Eurogroup for Animals Animal welfare FB Tw WP Site 640,361.00 0.00 0.00 0.00 640,361.00 0.00 0.00
Royal Society for the Prevention of Cruelty to Animals FB Tw WP Site 606,308.00 0.00 0.00 231,677.00 374,631.00 0.00 0.00
Global Animal Partnership Animal welfare FB Tw WP Site 515,000.00 0.00 0.00 0.00 515,000.00 0.00 0.00
Otwarte Klatki Animal welfare FB Tw Site 472,864.00 0.00 0.00 0.00 472,864.00 0.00 0.00
University of Oxford FB Tw WP Site 429,770.00 0.00 0.00 429,770.00 0.00 0.00 0.00
The Wilson Center FB Tw WP Site 400,000.00 0.00 0.00 400,000.00 0.00 0.00 0.00
Federation of Indian Animal Protection Organisations Animal welfare FB Tw WP Site 332,944.00 0.00 0.00 0.00 332,944.00 0.00 0.00
WestExec 310,000.00 310,000.00 0.00 0.00 0.00 0.00 0.00
Fórum Nacional de Proteção e Defesa Animal Animal welfare FB Tw Site 300,000.00 0.00 0.00 200,000.00 0.00 100,000.00 0.00
Yale University FB Tw WP Site 299,320.00 0.00 0.00 0.00 299,320.00 0.00 0.00
George Mason University FB WP Site 277,435.00 0.00 0.00 0.00 0.00 277,435.00 0.00
Electronic Frontier Foundation FB Tw WP Site 199,000.00 0.00 0.00 0.00 0.00 199,000.00 0.00
AI Scholarships 159,000.00 0.00 0.00 159,000.00 0.00 0.00 0.00
University of Bern Tw WP Site 150,000.00 0.00 0.00 150,000.00 0.00 0.00 0.00
AI Impacts AI safety Site 132,000.00 0.00 0.00 100,000.00 0.00 32,000.00 0.00
FAI Farms 107,200.00 0.00 107,200.00 0.00 0.00 0.00 0.00
Farm Forward Animal welfare FB Tw WP Site GS 100,000.00 0.00 0.00 0.00 100,000.00 0.00 0.00
People for Animals WP 89,392.00 0.00 0.00 0.00 0.00 89,392.00 0.00
Wageningen UR 88,345.00 0.00 0.00 0.00 88,345.00 0.00 0.00
Institute for Advancement of Animal Welfare Science 80,400.00 0.00 0.00 0.00 80,400.00 0.00 0.00
RAND Corporation FB Tw WP Site 30,751.00 30,751.00 0.00 0.00 0.00 0.00 0.00
Distill AI capabilities/AI safety Tw Site 25,000.00 0.00 0.00 0.00 25,000.00 0.00 0.00
GoalsRL AI safety Site 7,500.00 0.00 0.00 7,500.00 0.00 0.00 0.00
Total -- -- 154,978,234.00 11,937,834.00 65,132,198.00 7,882,916.00 53,585,909.00 15,253,377.00 1,186,000.00

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Influencer Number of donations Number of donees Total 2020 2019 2018 2017 2016
Luke Muehlhauser 4 4 55,740,751.00 340,751.00 55,000,000.00 400,000.00 0.00 0.00
Lewis Bollard 36 24 23,540,560.00 0.00 857,200.00 3,729,107.00 10,264,861.00 8,689,392.00
Daniel Dewey 19 10 12,006,545.00 0.00 5,591,000.00 3,174,039.00 3,241,506.00 0.00
Claire Zabel|Committee for Effective Altruism Support 2 1 10,356,250.00 7,703,750.00 2,652,500.00 0.00 0.00 0.00
Nick Beckstead 4 4 4,579,090.00 0.00 0.00 429,770.00 4,149,320.00 0.00
Catherine Olsson|Daniel Dewey 1 1 2,300,000.00 2,300,000.00 0.00 0.00 0.00 0.00
Committee for Effective Altruism Support 1 1 1,593,333.00 1,593,333.00 0.00 0.00 0.00 0.00
Helen Toner 1 1 1,536,222.00 0.00 0.00 0.00 1,536,222.00 0.00
Amanda Hungerford 2 2 1,031,498.00 0.00 1,031,498.00 0.00 0.00 0.00
Claire Zabel 1 1 150,000.00 0.00 0.00 150,000.00 0.00 0.00
Classified total 71 43 112,834,249.00 11,937,834.00 65,132,198.00 7,882,916.00 19,191,909.00 8,689,392.00
Unclassified total 9 9 42,143,985.00 0.00 0.00 0.00 34,394,000.00 6,563,985.00
Total 80 48 154,978,234.00 11,937,834.00 65,132,198.00 7,882,916.00 53,585,909.00 15,253,377.00
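
The classified/unclassified split in tables like the one above is a simple partition on whether the relevant field is filled in. A minimal sketch (field names and sample rows are illustrative, drawn from the tables above):

```python
# A donation is "classified" for a given dimension (influencer, country, ...) if
# that field is non-empty; otherwise it contributes to the unclassified total.
donations = [
    {"donee": "OpenAI", "amount": 30_000_000, "influencer": None},  # no influencer listed
    {"donee": "Machine Intelligence Research Institute", "amount": 7_703_750,
     "influencer": "Claire Zabel|Committee for Effective Altruism Support"},
]

classified = [d for d in donations if d["influencer"]]
unclassified = [d for d in donations if not d["influencer"]]

print("Classified total:", sum(d["amount"] for d in classified))      # 7,703,750
print("Unclassified total:", sum(d["amount"] for d in unclassified))  # 30,000,000
```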

Graph of spending by influencer and year (incremental, not cumulative)


Graph of spending by influencer and year (cumulative)


Donation amounts by disclosures and year

If you hover over a cell for a given disclosure and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Disclosures Number of donations Number of donees Total 2017 2016 2015
Paul Christiano 2 2 30,500,000.00 30,000,000.00 500,000.00 0.00
Dario Amodei 1 1 30,000,000.00 30,000,000.00 0.00 0.00
Holden Karnofsky 1 1 30,000,000.00 30,000,000.00 0.00 0.00
Daniel Dewey 4 4 5,171,435.00 4,394,000.00 777,435.00 0.00
Nick Beckstead 4 4 3,957,435.00 1,994,000.00 777,435.00 1,186,000.00
Chris Olah 1 1 2,400,000.00 2,400,000.00 0.00 0.00
Carl Shulman 1 1 1,994,000.00 1,994,000.00 0.00 0.00
Unknown, generic, or multiple 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Helen Toner 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Luke Muehlhauser 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Ben Hoffman 1 1 1,186,000.00 0.00 0.00 1,186,000.00
Jacob Steinhardt 1 1 500,000.00 0.00 500,000.00 0.00
Lewis Bollard 1 1 500,000.00 0.00 500,000.00 0.00
Classified total 7 7 36,857,435.00 34,394,000.00 1,277,435.00 1,186,000.00
Unclassified total 73 44 118,120,799.00 19,191,909.00 13,975,942.00 0.00
Total 80 48 154,978,234.00 53,585,909.00 15,253,377.00 1,186,000.00

Graph of spending by disclosures and year (incremental, not cumulative)


Graph of spending by disclosures and year (cumulative)


Donation amounts by country and year

If you hover over a cell for a given country and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Country Number of donations Number of donees Total 2019 2018 2017 2016
United States 7 5 5,150,000.00 0.00 150,000.00 0.00 5,000,000.00
United States|Brazil|Italy|Mexico|Spain 1 1 2,772,430.00 0.00 2,772,430.00 0.00 0.00
United Kingdom 3 2 1,606,308.00 0.00 231,677.00 1,374,631.00 0.00
France 1 1 1,347,742.00 0.00 0.00 1,347,742.00 0.00
Germany 1 1 1,000,000.00 0.00 0.00 1,000,000.00 0.00
Thailand|Indonesia 1 1 781,498.00 781,498.00 0.00 0.00 0.00
Scandinavia 1 1 683,000.00 0.00 0.00 683,000.00 0.00
Poland|Ukraine 1 1 472,864.00 0.00 0.00 472,864.00 0.00
India 2 2 422,336.00 0.00 0.00 332,944.00 89,392.00
Brazil 2 1 300,000.00 0.00 200,000.00 0.00 100,000.00
China 1 1 107,200.00 107,200.00 0.00 0.00 0.00
Classified total 21 17 14,643,378.00 888,698.00 3,354,107.00 5,211,181.00 5,189,392.00
Unclassified total 59 37 140,334,856.00 64,243,500.00 4,528,809.00 48,374,728.00 10,063,985.00
Total 80 48 154,978,234.00 65,132,198.00 7,882,916.00 53,585,909.00 15,253,377.00

Graph of spending by country and year (incremental, not cumulative)


Graph of spending by country and year (cumulative)


Full list of documents in reverse chronological order (26 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
Our Progress in 2019 and Plans for 2020 | 2020-05-08 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | -- | Broad donor strategy | Criminal justice reform|Animal welfare|AI safety|Effective altruism | The post compares progress made by the Open Philanthropy Project in 2019 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2018-and-plans-2019 and then lays out plans for 2020. The post notes that grantmaking, including grants to GiveWell top charities, was over $200 million. The post reviews the following from 2019: continued grantmaking, growth of the operations team, impact evaluation (with good progress in evaluation of giving in criminal justice reform and animal welfare), worldview investigations (which proved harder than anticipated, resulting in slower progress), other cause prioritization work, hiring and other capacity building, and outreach to external donors.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy Project, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 2019 | 2019-12-18 | Holden Karnofsky | Open Philanthropy Project | Chloe Cockburn, Jesse Rothman, Michelle Crentsil, Amanda Hungerford, Lewis Bollard, Persis Eskander, Alexander Berger, Chris Somerville, Heather Youngs, Claire Zabel | National Council for Incarcerated and Formerly Incarcerated Women and Girls, Life Comes From It, Worth Rises, Wild Animal Initiative, Sinergia Animal, Center for Global Development, International Refugee Assistance Project, California YIMBY, Engineers Without Borders, 80,000 Hours, Centre for Effective Altruism, Future of Humanity Institute, Global Priorities Institute, Machine Intelligence Research Institute, Ought | Donation suggestion list | Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safety | Continuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support", i.e., organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents its views.
Co-funding Partnership with Ben Delo | 2019-11-11 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project, Ben Delo | -- | Partnership | AI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Effective altruism | Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, recently signed the Giving Pledge. He is entering into a partnership with the Open Philanthropy Project, providing funds, initially in the $5 million per year range, to support Open Phil's longtermist grantmaking, in areas including AI safety, biosecurity and pandemic preparedness, global catastrophic risks, and effective altruism. Later, the Machine Intelligence Research Institute (MIRI) would reveal at https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ that, of a $7.7 million grant from Open Phil, $1.46 million is coming from Ben Delo.
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR) | 2019-09-10 | Ryan Carey | Effective Altruism Forum | Founders Pledge, Open Philanthropy Project | OpenAI, Machine Intelligence Research Institute | Broad donor strategy | AI safety|Global catastrophic risks|Scientific research|Politics | Ryan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low-mid prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals.
New grants from the Open Philanthropy Project and BERI | 2019-04-01 | Rob Bensinger | Machine Intelligence Research Institute | Open Philanthropy Project, Berkeley Existential Risk Initiative | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces two grants to it: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/. The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/
Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security | 2019-03-20 | Tate Williams | Inside Philanthropy | Open Philanthropy Project | Center for Security and Emerging Technology | Third-party coverage of donor strategy | AI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Security | The article focuses on grantmaking by the Open Philanthropy Project in the areas of global catastrophic risks and security, particularly in AI safety and biosecurity and pandemic preparedness. It includes quotes from Luke Muehlhauser, Senior Research Analyst at the Open Philanthropy Project and the investigator for the $55 million grant https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology to the Center for Security and Emerging Technology (CSET). Muehlhauser was previously Executive Director at the Machine Intelligence Research Institute. It also includes a quote from Holden Karnofsky, who sees the early interest of effective altruists in AI safety as prescient. The CSET grant is discussed in the context of the Open Philanthropy Project's hits-based giving approach, as well as the interest in the policy space in better understanding of safety and governance issues related to technology and AI.
Committee for Effective Altruism Support | 2019-02-27 | -- | Open Philanthropy Project | Open Philanthropy Project | Centre for Effective Altruism, Berkeley Existential Risk Initiative, Center for Applied Rationality, Machine Intelligence Research Institute, Future of Humanity Institute | Broad donor strategy | Effective altruism|AI safety | The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit "votes" on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose size was determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
Occasional update July 5 2018 | 2018-07-05 | Katja Grace | AI Impacts | Open Philanthropy Project, Anonymous | AI Impacts | Donee periodic update | AI safety | Katja Grace gives an update on the situation with AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation are mentioned, as are team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn.
The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks. | 2018-02-27 | Robert Wiblin, Keiran Harris, Holden Karnofsky | 80,000 Hours | Open Philanthropy Project | -- | Broad donor strategy | AI safety|Global catastrophic risks|Biosecurity and pandemic preparedness|Global health and development|Animal welfare|Scientific research | This interview, with full transcript, is an episode of the 80,000 Hours podcast. In the interview, Karnofsky provides an overview of the cause prioritization and grantmaking strategy of the Open Philanthropy Project, and also notes that the Open Philanthropy Project is hiring for a number of positions.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017 | 2017-12-21 | Holden Karnofsky | Open Philanthropy Project | Jaime Yassif, Chloe Cockburn, Lewis Bollard, Nick Beckstead, Daniel Dewey | Center for International Security and Cooperation, Johns Hopkins Center for Health Security, Good Call, Court Watch NOLA, Compassion in World Farming USA, Wild-Animal Suffering Research, Effective Altruism Funds: Meta Fund, Effective Altruism Funds: Long-Term Future Fund, Effective Altruism Funds: Animal Welfare Fund, Effective Altruism Funds: Global Health and Development Fund, Donor lottery, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Berkeley Existential Risk Initiative, Centre for Effective Altruism, 80,000 Hours, Alliance to Feed the Earth in Disasters | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform | Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
How Will Hen Welfare Be Impacted by the Transition to Cage-Free Housing? | 2017-09-15 | Ajeya Cotra | Open Philanthropy Project | Open Philanthropy Project | -- | Reasoning supplement | Animal welfare/factory farming/chicken/cage-free campaign | A followup to https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms which described the original cage-free campaign funding strategy. This report compares aviaries (cage-free living environments) with cages for hens. It tempers the original enthusiasm for cage-free housing by noting higher mortality rates, but continues to support the position that cage-free is likely better on net for hens. Described in the blog post https://www.openphilanthropy.org/blog/new-report-welfare-differences-between-cage-and-cage-free-housing which expresses regret for not investigating this more thoroughly earlier, and thanks Direct Action Everywhere for highlighting the issue. See https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/cnK5yNlYHuc for the announcement.
The Open Philanthropy Project AI Fellows Program | 2017-09-12 | -- | Open Philanthropy Project | Open Philanthropy Project | -- | Broad donor strategy | AI safety | This announces an AI Fellows Program to support students doing Ph.D. work in AI-related fields who have an interest in AI safety. See https://www.facebook.com/vipulnaik.r/posts/10213116327718748 and https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussions.
A major grant from the Open Philanthropy Project | 2017-09-08 | Malo Bourgon | Machine Intelligence Research Institute | Open Philanthropy Project | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces that it has received a three-year grant at $1.25 million per year from the Open Philanthropy Project, links to the announcement from Open Phil at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and notes "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."
My current thoughts on MIRI’s highly reliable agent design work (GW, IR) | 2017-07-07 | Daniel Dewey | Effective Altruism Forum | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | The post discusses Dewey's thoughts on MIRI's work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin.
Our Progress in 2016 and Plans for 2017 | 2017-03-14 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | -- | Broad donor strategy | Scientific research|AI safety | The blog post compares progress made by the Open Philanthropy Project in 2016 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2015-and-plans-2016 and then lays out plans for 2017. The post notes success in scaling up grantmaking, as hoped for in last year's plan. The spinoff from GiveWell is still not completed because it turned out to be more complex than expected, but it is expected to be finished in mid-2017. Open Phil highlights the hiring of three Scientific Advisors (Chris Somerville, Heather Youngs, and Daniel Martin-Alarcon) in mid-2016, as part of its scientific research work. The organization also plans to focus more on figuring out how to decide how much money to allocate between different cause areas, with Karnofsky's worldview diversification post https://www.openphilanthropy.org/blog/worldview-diversification also highlighted. There is no plan to scale up staff or grantmaking (unlike 2016, when the focus was to scale up hiring, and 2015, when the focus was to scale up staff).
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2016 | 2016-12-14 | Holden Karnofsky | Open Philanthropy Project | Jaime Yassif, Chloe Cockburn, Lewis Bollard, Daniel Dewey, Nick Beckstead | Blue Ribbon Study Panel on Biodefense, Alliance for Safety and Justice, Cosecha, Animal Charity Evaluators, Compassion in World Farming USA, Machine Intelligence Research Institute, Future of Humanity Institute, 80,000 Hours, Ploughshares Fund | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policy | Open Philanthropy Project staff describe suggestions for the best donation opportunities for individual donors in their specific areas.
Grisly Undercover Video Shows Chickens Being Starved To Produce More Eggs | 2016-10-11 | Nico Pitney | Huffington Post | Open Philanthropy Project | Humane Society International, Mercy For Animals, Animal Equality, People for Animals, The Humane League | Third-party coverage of donor strategy | Animal welfare/factory farming/chicken/cage-free campaign/international | Provides some context for the move by the Open Philanthropy Project in mid-2016 to expand its cage-free campaign funding internationally.
Machine Intelligence Research Institute — General Support | 2016-09-06 | Open Philanthropy Project | Open Philanthropy Project | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Open Phil writes about the grant at considerable length, more than it usually does, because it found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant; see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email.
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF) | 2016-09-06 | -- | Open Philanthropy Project | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Reviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06).
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity | 2016-05-06 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | Machine Intelligence Research Institute, Future of Humanity Institute | Review of current state of cause area | AI safety | In this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority.
Our Progress in 2015 and Plans for 2016 | 2016-04-29 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | -- | Broad donor strategy | Scientific research|AI safety | The blog post compares progress made by the Open Philanthropy Project in 2015 against plans laid out in https://www.openphilanthropy.org/blog/open-philanthropy-project-progress-2014-and-plans-2015 and then lays out plans for 2016. The post notes the following in relation to its 2015 plans: it succeeded in hiring and expanding the team, but had to scale back on its scientific research ambitions in mid-2015. For 2016, Open Phil plans to focus on scaling up its grantmaking and reducing its focus on hiring. AI safety is declared an intended priority for 2016, with Daniel Dewey working on it full-time, and Nick Beckstead and Holden Karnofsky also devoting significant time to it. The post also notes plans to continue work on separating the Open Philanthropy Project from GiveWell.
Initial Grants to Support Corporate Cage-free Reforms | 2016-03-31 | Lewis Bollard | Open Philanthropy Project | Open Philanthropy Project | The Humane League, Mercy For Animals, The Humane Society of the United States | Broad donor strategy | Animal welfare/factory farming/chicken/cage-free campaign/international | Written to explain a set of grants already made in 2016-02 to support cage-free reforms in the United States for egg-laying chickens. The blog post had a heated comment section, potentially influencing future Open Phil communication on the subject.
Potential Global Catastrophic Risk Focus Areas | 2014-06-26 | Alexander Berger | Open Philanthropy Project | Open Philanthropy Project | -- | Broad donor strategy | AI safety|Biosecurity and pandemic preparedness|Global catastrophic risks | In this blog post, originally published at https://blog.givewell.org/2014/06/26/potential-global-catastrophic-risk-focus-areas/, Alexander Berger goes over a list of seven types of global catastrophic risks (GCRs) that the Open Philanthropy Project has considered. He details three promising areas that the Open Philanthropy Project is exploring more and may make grants in: (1) biosecurity and pandemic preparedness, (2) geoengineering research and governance, (3) AI safety. For the AI safety section, there is a note from Executive Director Holden Karnofsky saying that he sees AI safety as a more promising area than Berger does.
Thoughts on the Singularity Institute (SI) (GW, IR) | 2012-05-11 | Holden Karnofsky | LessWrong | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Post discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute. This evaluation would be the starting point for the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI, but Karnofsky and the Open Philanthropy Project would later update in favor of AI safety in general and MIRI in particular; this evolution is described in https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit
Singularity Institute for Artificial Intelligence | 2011-04-30 | Holden Karnofsky | GiveWell | Open Philanthropy Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | In this email thread on the GiveWell mailing list, Holden Karnofsky gives his views on the Singularity Institute for Artificial Intelligence (SIAI), the former name for the Machine Intelligence Research Institute (MIRI). The reply emails include a discussion of how much weight to give to, and what to learn from, the support for MIRI by Peter Thiel, a wealthy early MIRI backer. In the final email in the thread, Holden Karnofsky includes an audio recording with Jaan Tallinn, another wealthy early MIRI backer. This analysis likely influences the review https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) published by Karnofsky the following year, as well as the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI.

Full list of donations in reverse chronological order (80 donations)

Donee | Amount (current USD) | Amount rank (out of 80) | Donation date | Cause area | URL | Influencer | Notes
Open Phil AI Fellowship (Earmark: Alex Tamkin|Clare Lyle|Cody Coleman|Dami Choi|Dan Hendrycks|Ethan Perez|Frances Ding|Leqi Liu|Peter Henderson|Stanislav Fort) | 2,300,000.00 | 10 | 2020-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class | Catherine Olsson|Daniel Dewey | Donation process: According to the grant page: "These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarship to ten machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests." In a comment reply https://forum.effectivealtruism.org/posts/DXqxeg3zj6NefR9ZQ/open-philanthropy-our-progress-in-2019-and-plans-for-2020#BCvuhRCg9egAscpyu (GW, IR) on the Effectiive Altruism Forum, grant investigator Catherine Olsson writes: "But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is comparable to the total amount of the 2019 fellowship grants, though it is distributed among a slightly larger pool of people.

Donor reason for donating at this time (rather than earlier or later): This is the second of annual sets of grants, decided through an annual application process, with the announcement made in May/June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60. Announced: 2020-05-12.
WestExec | 310,000.00 | 52 | 2020-02 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/westexec-report-on-assurance-in-machine-learning-systems | Luke Muehlhauser | Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to support the production and distribution of a report on advancing policy, process, and funding for the Department of Defense’s work on test, evaluation, verification, and validation for deep learning systems." Announced: 2020-03-20.
Machine Intelligence Research Institute | 7,703,750.00 | 3 | 2020-02 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 | Claire Zabel|Committee for Effective Altruism Support | Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety.

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" with the most similar previous grant being https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 (February 2019). Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Centre for Effective Altruism ($4,146,795), 80,000 Hours ($3,457,284), and Ought ($1,593,333).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation.
Intended funding timeframe in months: 24

Other notes: The donee describes the grant in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ (2020-04-27) along with other funding it has received ($300,000 from the Berkeley Existential Risk Initiative and $100,000 from the Long-Term Future Fund). The fact that the grant is a two-year grant is mentioned here, but not in the grant page on Open Phil's website. The page also mentions that of the total grant amount of $7.7 million, $6.24 million is coming from Open Phil's normal funders (Good Ventures) and the remaining $1.46 million is coming from Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, as part of a funding partnership https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo announced November 11, 2019. Announced: 2020-04-10.
Ought | 1,593,333.00 | 13 | 2020-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 | Committee for Effective Altruism Support | Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment and to reducing potential risks from advanced artificial intelligence."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter".

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Machine Intelligence Research Institute ($7,703,750), Centre for Effective Altruism ($4,146,795), and 80,000 Hours ($3,457,284).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation. Announced: 2020-02-14.
RAND Corporation (Earmark: Andrew Lohn) | 30,751.00 | 75 | 2020-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rand-corporation-research-on-the-state-of-ai-assurance-methods | Luke Muehlhauser | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support exploratory research by Andrew Lohn on the state of AI assurance methods." Announced: 2020-03-19.
Berkeley Existential Risk Initiative | 705,000.00 | 35 | 2019-11 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 | Daniel Dewey | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 1; announced: 2019-12-13.
Donee: University of California, Berkeley (Earmark: Jacob Steinhardt) | Amount: $1,111,000.00 (amount rank 21) | Donation date: 2019-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-research-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "This funding will allow Professor Steinhardt to fund students to work on robustness, value learning, aggregating preferences, and other areas of machine learning."

Other notes: This is the third year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Jacob Steinhardt) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 1; announced: 2020-02-19.
Donee: Center for Human-Compatible AI | Amount: $200,000.00 (amount rank 58) | Donation date: 2019-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "CHAI plans to use these funds to support graduate student and postdoc research."

Other notes: Open Phil makes a $705,000 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 to the Berkeley Existential Risk Initiative (BERI) at the same time (November 2019) to collaborate with CHAI. Intended funding timeframe in months: 1; announced: 2019-12-20.
Donee: Ought | Amount: $1,000,000.00 (amount rank 22) | Donation date: 2019-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 made on the recommendation of the Committee for Effective Altruism Support suggests that Open Phil would continue to have a high opinion of the work of Ought. Intended funding timeframe in months: 1; announced: 2020-02-14.
Donee: Open Phil AI Fellowship (Earmark: Aidan Gomez|Andrew Ilyas|Julius Adebayo|Lydia T. Liu|Max Simchowitz|Pratyusha Kalluri|Siddharth Karamcheti|Smitha Milli) | Amount: $2,325,000.00 (amount rank 9) | Donation date: 2019-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class | Influencer: Daniel Dewey

Donation process: According to the grant page: "These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarship support to eight machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is about double the amount of the 2018 grant, although the number of people supported is just one more (8 instead of 7). No explicit comparison of grant amounts is done in the grant page.

Donor reason for donating at this time (rather than earlier or later): This is the second of annual sets of grants, decided through an annual application process, with the announcement made in May/June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020) confirms that the program would continue. Announced: 2019-05-17.
Donee: FAI Farms | Amount: $107,200.00 (amount rank 65) | Donation date: 2019-04 | Cause area: Animal welfare/factory farming/chicken/cage-free | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-cage-free-eggs-china | Influencer: Lewis Bollard

Donation process: Discretionary grant made via the Open Philanthropy Action Fund. The grant page says: "This project was supported through a contractor agreement. While we do not typically publish pages for contractor agreements, we chose to write about this funding because we view it as conceptually similar to an ordinary grant, despite its structure as a contract due to the recipient’s organizational form."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Chinese farm animal welfare auditor trainings, egg farm audits, and a cage-free conference. These projects will promote cage-free production in China, the world’s largest egg producer, and aim to reduce the suffering of egg-laying hens." Affected countries: China; announced: 2019-06-07.
Donee: World Animal Protection | Amount: $781,498.00 (amount rank 33) | Donation date: 2019-04 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/world-animal-protection-se-asia-broiler | Influencer: Amanda Hungerford

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate broiler chicken campaigns in Southeast Asia with a focus on Thailand and Indonesia. WAP plans to increase its broiler chicken campaigns in Thailand and perform scoping research to lay the groundwork for future campaigns in Indonesia, as both Thailand and Indonesia have large numbers of farmed birds." Intended funding timeframe in months: 1; affected countries: Thailand|Indonesia; announced: 2019-06-26.
Donee: Machine Intelligence Research Institute | Amount: $2,652,500.00 (amount rank 7) | Donation date: 2019-02 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 | Influencer: Claire Zabel, Committee for Effective Altruism Support

Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.

Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was decided by the Committee for Effective Altruism Support (CEAS) https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by CEAS, made at the same time and therefore likely drawing from the same money pot, are to the Center for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The original amount of $2,112,500 is split across two years, i.e., ~$1.06 million per year. The MIRI post https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of the three-year $1.25 million/year support announced in October 2017, and that the resulting total of ~$2.31 million represents Open Phil's full intended funding for MIRI for 2019; the ~$1.06 million for 2020 is a lower bound, and Open Phil may grant more for 2020 later. In November 2019, additional funding brought the total award amount to $2,652,500.
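
As a quick arithmetic check on the split described above, here is a minimal sketch (Python; the variable names are our own, and the dollar figures are copied from this writeup):

# Figures from the writeup above, in USD.
original_award = 2_112_500            # initial award, split across two years
per_year = original_award / 2         # 1056250.0, the "~$1.06 million per year" figure
third_year_of_2017_grant = 1_250_000  # final year of the October 2017 $1.25 million/year support
total_intended_2019 = per_year + third_year_of_2017_grant
print(per_year, total_intended_2019)  # 1056250.0 2306250.0, i.e. the "~$2.31 million" for 2019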

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end, (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements, so a suitable funding amount could be larger, (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 with a very similar writeup suggests that Open Phil and the Committee for Effective Altruism Support would continue to stand by the reasoning for the grant

Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-01.
Donee: Center for Security and Emerging Technology | Amount: $55,000,000.00 (amount rank 1) | Donation date: 2019-01 | Cause area: Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety | URL: https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology | Influencer: Luke Muehlhauser

Intended use of funds (category): Organizational general support

Intended use of funds: Grant via Georgetown University for the Center for Security and Emerging Technology (CSET), a new think tank led by Jason Matheny, formerly of IARPA, dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET plans to provide nonpartisan technical analysis and advice related to emerging technologies and their security implications to the government, key media outlets, and other stakeholders.

Donor reason for selecting the donee: Open Phil thinks that one of the key factors in whether AI is broadly beneficial for society is whether policymakers are well-informed and well-advised about the nature of AI’s potential benefits, potential risks, and how these relate to potential policy actions. As AI grows more powerful, calls for government to play a more active role are likely to increase, and government funding and regulation could affect the benefits and risks of AI. Thus: "Overall, we feel that ensuring high-quality and well-informed advice to policymakers over the long run is one of the most promising ways to increase the benefits and reduce the risks from advanced AI, and that the team put together by CSET is uniquely well-positioned to provide such advice." Despite risks and uncertainty, the grant is described as worthwhile under Open Phil's hits-based giving framework

Donor reason for donating that amount (rather than a bigger or smaller amount): The large amount over an extended period (5 years) is explained at https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant "In the case of the new Center for Security and Emerging Technology, we think it will take some time to develop expertise on key questions relevant to policymakers and want to give CSET the commitment necessary to recruit key people, so we provided a five-year grant."

Donor reason for donating at this time (rather than earlier or later): Likely determined by the grantee's planned launch date. Further details on timing are not discussed
Intended funding timeframe in months: 60

Other notes: Donee is entered as Center for Security and Emerging Technology rather than as Georgetown University for consistency with future grants directly to the organization once it is set up. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. The grant is discussed in the broader context of giving by the Open Philanthropy Project into global catastrophic risks and AI safety in the Inside Philanthropy article https://www.insidephilanthropy.com/home/2019/3/22/why-this-effective-altruist-funder-is-giving-millions-to-ai-security. Announced: 2019-02-28.
Donee: Berkeley Existential Risk Initiative | Amount: $250,000.00 (amount rank 55) | Donation date: 2019-01 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers | Influencer: Daniel Dewey

Donation process: The Open Philanthropy Project described the donation decision as being based on "conversations with various professors and students"

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for the launch of CHAI and previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration to collaborate with CHAI. Announced: 2019-03-04.
Donee: The Humane League | Amount: $750,000.00 (amount rank 34) | Donation date: 2019-01 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-broiler-welfare-campaigns | Influencer: Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support corporate campaigns to improve the welfare of broiler chickens, the most numerous land farm animals. Broiler welfare campaigns seek to address the main causes of their suffering.

Donor reason for selecting the donee: Open Phil considers broiler chicken welfare a high-impact cause: "Broiler chickens are the most numerous land farm animals, with more than a billion alive at any time and approximately 9 billion slaughtered annually in the U.S. alone. Their welfare is impacted by genetics, overcrowding, inhumane slaughter, and environmental factors like chronic sleep deprivation due to lighting schedules optimized for growth." This is part of a strategic focus on broiler chicken welfare started in late 2016, though no overarching document on this has been posted; see also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. The Humane League is selected for reasons outlined in earlier grants, such as the August 2018 general support https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018

Donor reason for donating at this time (rather than earlier or later): Likely based on funding needs and the exhaustion of funds from previous grants. No explicit reasons for timing are given. Announced: 2019-04-30.
Donee: Compassion Over Killing | Amount: $250,000.00 (amount rank 55) | Donation date: 2019-01 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-exit-grant-2019 | Influencer: Amanda Hungerford

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support farm animal welfare outreach and investigations related to chickens and fish. The new funding represents an “exit grant” that will provide Compassion Over Killing with approximately one year of operating support to allow them to secure other funding.

Donor reason for selecting the donee: The donor had previously supported the donee in 2016: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-over-killing-us-broiler-welfare-campaigns. The new grant is an exit grant to give the donee time to find other sources of funding.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely selected as a reasonable amount for a one-year exit grant

Donor reason for donating at this time (rather than earlier or later): Timing likely determined by the end of the previous grant, and the need to provide more funding for a smooth exit grant
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: There will be no next donation; this is an exit grant. Announced: 2019-05-07.
Donee: University of Bern | Amount: $150,000.00 (amount rank 62) | Donation date: 2018-11 | Cause area: Animal welfare/factory farming/chicken/cage-free | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/university-of-bern-higher-welfare-cage-free-systems | Influencer: Lewis Bollard

Discretionary grant to develop and implement a pilot project for U.S. egg producers, equipment installers, and USDA extension agents to learn about management of high-welfare, cage-free systems in Switzerland, Sweden, the Netherlands, and Belgium. The funds will support Dr. Michael Toscano, Group Leader of Switzerland’s Centre for Proper Housing of Poultry and Rabbits, and colleagues to develop the educational program and deploy it with approximately 20 U.S. producers, installers, and extension agents. Due to Switzerland’s ban on battery cages in 1992, its producers and scientists have more than 25 years of experience managing cage-free systems. Affected countries: United States; announced: 2018-12-12.
Donee: University of California, Berkeley (Earmark: Pieter Abbeel|Aviv Tamar) | Amount: $1,145,000.00 (amount rank 19) | Donation date: 2018-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-california-berkeley-artificial-intelligence-safety-research-2018 | Influencer: Daniel Dewey

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for machine learning researchers Pieter Abbeel and Aviv Tamar to study uses of generative models for robustness and interpretability. This funding will allow Mr. Abbeel and Mr. Tamar to fund PhD students and summer undergraduates to work on classifiers, imitation learning systems, and reinforcement learning systems."

Other notes: This is the second year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Pieter Abbeel) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 1; announced: 2018-12-11.
Donee: Fórum Nacional de Proteção e Defesa Animal | Amount: $200,000.00 (amount rank 58) | Donation date: 2018-08 | Cause area: Animal welfare/factory farming/chicken/pig/cage-free | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-crate-and-cage-free-campaigning-in-brazil | Influencer: Lewis Bollard

Grant over two years for campaigning to reduce the use of battery cages for layer hens and gestation crates for pigs in Brazil. FNPDA has played a role in securing crate-free pledges from Brazil’s four largest pork producers and cage-free pledges from 26 Brazilian food companies, and intends to use these funds to continue its corporate campaigns, to start a tracker of corporate implementation of cage-free pledges, and to host a conference with egg producers, food companies, scientists, and activists to discuss implementation. Renewal of October 2016 grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-international-cage-free-advocacy. Affected countries: Brazil; announced: 2018-09-28.
Donee: GoalsRL (Earmark: Ashley Edwards) | Amount: $7,500.00 (amount rank 79) | Donation date: 2018-08 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learning | Influencer: Daniel Dewey

Discretionary grant to offset travel, registration, and other expenses associated with attending the GoalsRL 2018 workshop on goal specifications for reinforcement learning. The workshop was organized by Ashley Edwards, a recent computer science PhD candidate interested in reward learning. Announced: 2018-10-05.
Donee: University of Oxford (Earmark: Allan Dafoe) | Amount: $429,770.00 (amount rank 46) | Donation date: 2018-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/oxford-university-global-politics-of-ai-dafoe | Influencer: Nick Beckstead

Grant to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom. The Open Philanthropy Project recommended additional funds to support this work in 2017, while Professor Dafoe was at Yale. Continuation of grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe. Announced: 2018-07-20.
Donee: The Wilson Center | Amount: $400,000.00 (amount rank 48) | Donation date: 2018-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series | Influencer: Luke Muehlhauser

Grant over two years to support a series of in-depth AI policy seminars. Named for President Woodrow Wilson, the Wilson Center is a non-partisan policy forum for tackling global issues through independent research and open dialogue. Open Phil believes the seminar series could help raise the salience of AI policy in Washington, D.C. policymaking circles, and could help it identify and empower one or more influential thinkers in those circles, a key component of the Open Phil AI policy strategy. Announced: 2018-08-02.
Donee: Stanford University (Earmark: Dan Boneh|Florian Tramer) | Amount: $100,000.00 (amount rank 66) | Donation date: 2018-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramer | Influencer: Daniel Dewey

The grant is a "gift" to Stanford University to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer. Machine learning security probes the worst-case performance of learned models. Open Phil believes this is a way of pushing machine learning research and AI development in the direction of greater concern for AI safety. Announced: 2018-09-07.
Donee: Animal Equality | Amount: $2,772,430.00 (amount rank 6) | Donation date: 2018-06 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-corporate-animal-welfare-campaigns | Influencer: Lewis Bollard

Grant for corporate lobbying in the United States, Brazil, Italy, Mexico, and Spain, covering cage-free and broiler welfare campaigns. Combination of five grants spanning three years. Affected countries: United States|Brazil|Italy|Mexico|Spain; announced: 2018-07-12.
Donee: Machine Intelligence Research Institute | Amount: $150,000.00 (amount rank 62) | Donation date: 2018-06 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program | Influencer: Claire Zabel

Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."

Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-27.
Donee: AI Impacts | Amount: $100,000.00 (amount rank 66) | Donation date: 2018-06 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 | Influencer: Daniel Dewey

Discretionary grant via the Machine Intelligence Research Institute. AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence. Renewal of December 2016 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support. Announced: 2018-06-28.
Donee: Mercy For Animals | Amount: $375,000.00 (amount rank 49) | Donation date: 2018-05 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-us-broiler-chicken-welfare-corporate-campaigns | Influencer: Lewis Bollard

Discretionary grant to support broiler chicken welfare corporate campaigns. It follows two 2016 grants: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-broiler-chicken-welfare-corporate-campaigns and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns. Announced: 2018-06-15.
Donee: Royal Society for the Prevention of Cruelty to Animals | Amount: $231,677.00 (amount rank 57) | Donation date: 2018-05 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/rspca-broiler-breed-study | Influencer: Lewis Bollard

Discretionary grant of £171,600 (approximately $231,677 at the time of conversion) to the Royal Society for the Prevention of Cruelty to Animals (RSPCA) to support a broiler chicken breed welfare study. The study, to be conducted by the Royal Veterinary College under RSPCA supervision, will test the welfare of two new breeds and will validate two new behavioral measures to enhance future breed tests. Affected countries: United Kingdom; announced: 2018-06-15.
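
Since this grant is denominated in pounds, the implied conversion rate can be recovered by dividing the dollar figure by the pound figure; a minimal sketch (Python; illustrative only, not part of the portal's codebase):

# RSPCA grant: £171,600, recorded as approximately $231,677 at the time of conversion.
amount_gbp = 171_600
amount_usd = 231_677
print(round(amount_usd / amount_gbp, 4))  # 1.3501, the implied USD-per-GBP rate
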
Donee: Open Phil AI Fellowship (Earmark: Aditi Raghunathan|Chris Maddison|Felix Berkenkamp|Jon Gauthier|Michael Janner|Noam Brown|Ruth Fong) | Amount: $1,135,000.00 (amount rank 20) | Donation date: 2018-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-fellows-program-2018 | Influencer: Daniel Dewey

Donation process: According to the grant page: "These fellows were selected from more than 180 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research"

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarship support to seven machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating at this time (rather than earlier or later): This is the first of annual sets of grants, decided through an annual application process.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The corresponding grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class (2019) and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020) confirm that these grants would be made annually. Announced: 2018-05-31.
Donee: Ought | Amount: $525,000.00 (amount rank 38) | Donation date: 2018-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Proposed_activities: "Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today." It also links to https://ought.org/approach for more detail. The budget section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Budget says: "Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher."

Donor reason for selecting the donee: The case for the grant includes: (a) Open Phil considers research on deliberation and amplification important for AI safety, (b) Paul Christiano is excited by Ought's approach, and Open Phil trusts his judgment, (c) Ought's plan appears flexible, and Open Phil thinks Andreas Stuhlmüller (Ought's founder) is ready to notice and respond to any problems by adjusting his plans, (d) Open Phil has indications that Ought is well-run and has a reasonable chance of success.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but the grant is combined with another grant from Open Philanthropy Project technical advisor Paul Christiano

Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Key_questions_for_follow-up lists some questions for followup

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 suggest that Open Phil would continue to have a high opinion of Ought. Intended funding timeframe in months: 1; announced: 2018-05-30.
Donee: Stanford University | Amount: $2,539.00 (amount rank 80) | Donation date: 2018-04 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learning | Influencer: Daniel Dewey

Discretionary grant to support the Neural Information Processing Systems (NIPS) workshop “Machine Learning and Computer Security” (https://nips.cc/Conferences/2017/Schedule?showEvent=8775). Announced: 2018-04-19.
Donee: AI Scholarships (Earmark: Dmitrii Krasheninnikov|Michael Cohen) | Amount: $159,000.00 (amount rank 61) | Donation date: 2018-02 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-scholarships-2018 | Influencer: Daniel Dewey

Discretionary grant; the total is across grants to two artificial intelligence researchers, each over two years. The funding is intended to be used for the students’ tuition, fees, living expenses, and travel during their respective degree programs, and is part of an overall effort to grow the field of technical AI safety by supporting value-aligned and qualified early-career researchers. Recipients are Dmitrii Krasheninnikov (master’s degree, University of Amsterdam) and Michael Cohen (master’s degree, Australian National University). Announced: 2018-07-26.
Donee: Otwarte Klatki | Amount: $472,864.00 (amount rank 45) | Donation date: 2017-11 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/otwarte-klatki-chicken-welfare-campaigns-poland-ukraine | Influencer: Lewis Bollard

Grant to support farm animal welfare campaigns and organizational capacity building in Poland and Ukraine. The funding will allow Otwarte Klatki to launch broiler chicken welfare campaigns in Poland and cage-free campaigns in Ukraine, as well as support expenses related to a planned merger with the Danish animal rights organization, Anima. Affected countries: Poland|Ukraine; announced: 2017-11-21.
Donee: Association L214 | Amount: $1,347,742.00 (amount rank 16) | Donation date: 2017-11 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/L214-broiler-chicken-campaigns | Influencer: Lewis Bollard

Grant over two years of €1,140,000 (approximately $1,347,742 at the time of conversion). Using this funding, L214 will conduct a campaign advocating for reduced chicken meat consumption as well as a corporate campaign targeting higher welfare standards for broiler chickens. Affected countries: France; announced: 2017-12-08.
Donee: Royal Society for the Prevention of Cruelty to Animals | Amount: $374,631.00 (amount rank 50) | Donation date: 2017-10 | Cause area: Animal welfare/factory farming/broiler chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/rspca-broiler-chicken-welfare-campaign-UK | Influencer: Lewis Bollard

Grant over two years to support a corporate chicken welfare campaign in the United Kingdom. Affected countries: United Kingdom; announced: 2017-11-08.
Donee: Anima | Amount: $683,000.00 (amount rank 36) | Donation date: 2017-10 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/anima-corporate-campaigns-merger-support | Influencer: Lewis Bollard

Two grants combined. Funding to support chicken welfare campaigns and organizational capacity building in Scandinavia. Affected countries: Scandinavia; announced: 2017-11-21.
Donee: Machine Intelligence Research Institute | Amount: $3,750,000.00 (amount rank 5) | Donation date: 2017-10 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 | Influencer: Nick Beckstead

Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI one year after the one-year timeframe for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support ended. The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design suggests that work on the review had been going on well before the grant renewal date

Intended use of funds (category): Organizational general support

Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."

Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains: "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the same two new developments described above: the very positive review of MIRI’s work on “logical induction” and the increase in Open Phil's AI safety spending.

Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36

Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."

Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Open Phil's statement that, due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach" is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
Donee: University of California, Berkeley (Earmark: Sergey Levine|Anca Dragan) | Amount: $1,450,016.00 (amount rank 15) | Donation date: 2017-10 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-levine-dragan | Influencer: Daniel Dewey

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "The work will be led by Professors Sergey Levine and Anca Dragan, who will each devote approximately 20% of their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems, though they have the flexibility to adjust their focus during the grant period." The project narrative is at https://www.openphilanthropy.org/files/Grants/UC_Berkeley/Levine_Dragan_Project_Narrative_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our broad goals for this funding are to encourage top researchers to work on AI alignment and safety issues in order to build a pipeline for young researchers; to support progress on technical problems; and to generally support the growth of this area of study."

Other notes: This is the first year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It would begin an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Anca Dragan) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 1; announced: 2017-10-20.
Donee: Compassion in World Farming | Amount: $1,000,000.00 (amount rank 22) | Donation date: 2017-10 | Cause area: Animal welfare/factory farming/chicken/cage-free | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-world-farming-end-the-cage-age-campaign | Influencer: Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support [the] “End the Cage Age” campaign in the UK and Europe. The campaign will seek to end the use of cages and crates for all farmed animal species in the UK and Europe through advocacy and outreach, including an EU-wide citizens’ ballot measure. [The] funds will support staffing needs related to the campaign in six regional EU offices as well as its headquarters in the United Kingdom; marketing, social media, and exhibition activities; advocacy work; investigations; as well as technical and operational costs over the next two years."

Donor reason for donating that amount (rather than a bigger or smaller amount): A budget is available at https://www.openphilanthropy.org/files/Grants/CIWF/CIWF_End_the_Cage_Age_Campaign_2017.pdf. Intended funding timeframe in months: 1; affected countries: United Kingdom; announced: 2017-11-14.
Donee: Albert Schweitzer Foundation for Our Contemporaries | Amount: $1,000,000.00 (amount rank 22) | Donation date: 2017-09 | Cause area: Animal welfare/factory farming/chicken/turkey/pig welfare | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-general-support-2017 | Influencer: Lewis Bollard

Grant over two years for general support for farm animal welfare activities in Germany. Grant will allow grantee to significantly expand their corporate outreach on broiler chicken welfare, increase their fundraising capacity, and hire a law firm to pursue litigation related to turkey and pig welfare. Affected countries: Germany; announced: 2017-10-25.
Donee: Eurogroup for Animals | Amount: $625,400.00 (amount rank 37) | Donation date: 2017-09 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/eurogroup-animals-eu-chicken-welfare-advocacy | Influencer: Lewis Bollard

Grant over two years, denominated in euros (€530,000), to support EU policy advocacy for chicken welfare. Grantee plans to use these funds on either broiler chicken or egg-laying hen welfare campaigns, depending upon which campaign appears most tractable. The grant was actually financed from a 501(c)(4) social welfare organization. Announced: 2017-11-28.
Donee: The Humane League | Amount: $2,000,000.00 (amount rank 11) | Donation date: 2017-09 | Cause area: Animal welfare/factory farming/chicken/cage-free | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-open-wing-alliance-2017 | Influencer: Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the Open Wing Alliance to expand corporate campaigns in Europe. The Alliance, started by The Humane League, supports global efforts to eliminate battery cages. The new grant will bolster these campaigns in Europe and allow Alliance members to expand into campaigns to improve the welfare of broiler (meat) chickens.

Donor reason for selecting the donee: Grant investigator Lewis Bollard is excited to continue supporting the Open Wing Alliance (which grew out of a previous Open Phil grant to The Humane League) due to the coalition's strong track record of securing corporate cage-free pledges, his confidence in its leadership team, and the project's strategic fit with Open Phil's goal to build a stronger farm animal welfare movement in Europe.

Donor reason for donating at this time (rather than earlier or later): Likely determined by the development timeline of the Open Wing Alliance, which grew out of an earlier grant, in February 2016: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns
Intended funding timeframe in months: 24

Donor retrospective of the donation: The general support grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018 in 2018 renews this grant among others

Other notes: This and other grants from Open Philanthropy Project to The Humane League are discussed in https://ssir.org/articles/entry/giving_in_the_light_of_reason as part of an overview of the Open Philanthropy Project grantmaking strategy. Announced: 2017-10-09.
Donee: Federation of Indian Animal Protection Organisations | Amount: $332,944.00 (amount rank 51) | Donation date: 2017-07 | Cause area: Animal welfare/factory farming/chicken and dairy | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/federation-indian-animal-protection-organisations-india-animal-welfare-reform | Influencer: Lewis Bollard

Reasons for the grant include excitement about FIAPO's broad network throughout India and the scale of the opportunity in India. Affected countries: India; announced: 2017-08-21.
Donee: Berkeley Existential Risk Initiative | Amount: $403,890.00 (amount rank 47) | Donation date: 2017-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration | Influencer: Daniel Dewey

Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai. The funding is intended to help BERI hire contractors and part-time employees to help CHAI, for example with web development and coordination support, and to hire research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff. More detail is in the grant proposal https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx. Announced: 2017-09-28.
Donee: Montreal Institute for Learning Algorithms (Earmark: Yoshua Bengio) | Amount: $2,400,000.00 (amount rank 8) | Donation date: 2017-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research | Influencer: --

Grant to support research to improve the positive long-term impact of artificial intelligence on society. The grant is mainly due to the star power of researcher Yoshua Bengio, who influences many young ML/AI researchers. A detailed writeup is available. See also https://www.facebook.com/permalink.php?story_fbid=10110258359382500&id=13963931 for a Facebook share by David Krueger, a member of the grantee organization; the comments include some discussion about the grantee. Announced: 2017-07-19.
Donee: Yale University (Earmark: Allan Dafoe) | Amount: $299,320.00 (amount rank 53) | Donation date: 2017-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe | Influencer: Nick Beckstead

Grant to support research into the global politics of artificial intelligence, led by Assistant Professor of Political Science, Allan Dafoe, who will conduct part of the research at the Future of Humanity Institute in Oxford, United Kingdom over the next year. Funds from the two gifts will support the hiring of two full-time research assistants, travel, conferences, and other expenses related to the research efforts, as well as salary, relocation, and health insurance expenses related to Professor Dafoe’s work in Oxford. Announced: 2017-09-28.
Donee: Eurogroup for Animals | Amount: $14,961.00 (amount rank 78) | Donation date: 2017-05 | Cause area: Animal welfare/factory farming/broiler chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/eurogroup-animals-broiler-chicken-welfare-campaign | Influencer: Lewis Bollard

Discretionary grant to support the International Broiler Advocacy Meeting in Brussels, a meeting of European advocacy groups to discuss broiler chicken welfare campaigns. Announced: 2017-08-08.
Donee: Future of Life Institute | Amount: $100,000.00 (amount rank 66) | Donation date: 2017-05 | Cause area: Global catastrophic risks/AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017 | Influencer: Nick Beckstead

Intended use of funds (category): Organizational general support

Intended use of funds: Grant for general support. However, the primary use of the grant will be to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant

Donor retrospective of the donation: The followup grants in 2018 and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee. Announced: 2017-09-27.
Donee: UCLA School of Law (Earmark: Edward Parson|Richard Re) | Amount: $1,536,222.00 (amount rank 14) | Donation date: 2017-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ucla-artificial-intelligence-governance | Influencer: Helen Toner

Grant to support work on governance related to AI risk led by Edward Parson and Richard Re. Announced: 2017-07-27.
Donee: Stanford University (Earmark: Percy Liang) | Amount: $1,337,600.00 (amount rank 17) | Donation date: 2017-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang | Influencer: Daniel Dewey

Grant awarded over four years (July 2017 to July 2021) to support research by Professor Percy Liang and three graduate students on AI safety and alignment. The funds will be split approximately evenly across the four years (i.e., roughly $320,000 to $350,000 per year). Preceded by planning grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant of $25,000. Announced: 2017-09-26.
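
The "roughly $320,000 to $350,000 per year" figure is consistent with dividing the total evenly across the four years; a minimal sketch (Python):

total_usd = 1_337_600     # total grant amount (USD)
years = 4                 # July 2017 to July 2021
print(total_usd / years)  # 334400.0, within the stated per-year range
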
Donee: Foundation for Food and Agricultural Research | Amount: $1,000,000.00 (amount rank 22) | Donation date: 2017-04 | Cause area: Animal welfare/factory farming/chicken and pig | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/foundation-food-and-agriculture-research-farm-animal-welfare-research | Influencer: Lewis Bollard

Grant was to match FFAR funding 1:1 in supporting research to find solutions to bone fractures in cage-free hens and painful castration in piglets. Announced: 2017-05-11.
Donee: Wageningen UR | Amount: $88,345.00 (amount rank 72) | Donation date: 2017-03 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/wageningen-ur-broiler-welfare-review | Influencer: Lewis Bollard

Part of a strategic focus on broiler chicken welfare started in late 2016, though no overarching document on this has been posted. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Announced: 2017-05-08.
Donee: Institute for Advancement of Animal Welfare Science | Amount: $80,400.00 (amount rank 73) | Donation date: 2017-03 | Cause area: Animal welfare/factory farming/chicken | URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/colorado-state-university-planning-gift | Influencer: Lewis Bollard

Discretionary grant that funds Colorado State University research on broiler chicken welfare. The amount was increased from the original value of $25,300 to $80,400 on 2018-02-16. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Announced: 2017-06-26.
Donee: OpenAI | Amount: $30,000,000.00 (amount rank 2) | Donation date: 2017-03 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support | Influencer: --

Donation process: According to the grant page Section 4 Our process: "OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors."

Intended use of funds (category): Organizational general support

Intended use of funds: The funds will be used for general support of OpenAI, with $10 million per year for the next three years. The funding is also accompanied by Holden Karnofsky (Open Phil executive director) joining the OpenAI Board of Directors. Karnofsky and one other board member will oversee OpenAI's safety and governance work

Donor reason for selecting the donee: Open Phil says that, given its interest in AI safety, it is looking to fund and closely partner with orgs that (a) are working to build transformative AI, (b) are advancing the state of the art in AI research, (c) employ top AI research talent. OpenAI and DeepMind are two such orgs, and OpenAI is particularly appealing due to "our shared values, different starting assumptions and biases, and potential for productive communication." Open Phil is looking to gain the following from a partnership: (i) Improve its understanding of AI research, (ii) Improve its ability to generically achieve goals regarding technical AI safety research, (iii) Better position Open Phil to promote its ideas and goals

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page Section 2.2 "A note on why this grant is larger than others we’ve recommended in this focus area" explains the reasons for the large grant amount (relative to other grants by Open Phil so far). Reasons listed are: (i) Hits-based giving philosophy, described at https://www.openphilanthropy.org/blog/hits-based-giving in depth, (ii) Disproportionately high importance of the cause if transformative AI is developed in the next 20 years, and likelihood that OpenAI will be very important if that happens, (iii) Benefits of working closely with OpenAI in informing Open Phil's understanding of AI safety, (iv) Field-building benefits, including promoting an AI safety culture, (v) Since OpenAI has a lot of other funding, Open Phil can grant a large amount while still not raising the concern of dominating OpenAI's funding

Donor reason for donating at this time (rather than earlier or later): No specific timing considerations are provided. The timing was likely determined by when OpenAI first approached Open Phil and the time taken for due diligence.
Intended funding timeframe in months: 36

Other notes: External discussions include https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31.
Donee: Future of Humanity Institute. Amount: $1,994,000.00 (amount rank: 12). Donation date: 2017-03. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support. Influencer: --. Grant for general support. A related grant specifically for biosecurity work was made in 2016-09, earlier for logistical reasons. Announced: 2017-03-06.
Donee: Distill. Amount: $25,000.00 (amount rank: 76). Donation date: 2017-03. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/distill-prize-clarity-machine-learning-general-support. Influencer: Daniel Dewey. Grant covers $25,000 of a total $125,000 initial endowment for the Distill prize (https://distill.pub/prize/), administered by the Open Philanthropy Project. Other contributors to the endowment include Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. The Open Philanthropy Project grant page says: "Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill." Announced: 2017-08-11.
Donee: Stanford University (Earmark: Percy Liang). Amount: $25,000.00 (amount rank: 76). Donation date: 2017-03. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant. Influencer: Daniel Dewey. Grant awarded to Professor Percy Liang to spend significant time engaging in the Open Philanthropy Project grant application process, which led to a larger grant of $1,337,600: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang. Announced: 2017-09-26.
Donee: World Animal Protection. Amount: $517,588.00 (amount rank: 39). Donation date: 2017-03. Cause area: Animal welfare/factory farming/chicken. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/world-animal-protection-broiler-chicken-welfare. Influencer: Lewis Bollard.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for campaigns to improve the welfare of broiler chickens. Activities: (1) producing and promoting campaign materials to raise awareness of broiler chicken suffering; (2) developing and launching a corporate chicken welfare scorecard; (3) building evidence of the suffering endured by broiler chickens in factory farming operations; (4) staff time, creative development, and travel; (5) indirect costs such as occupancy, technical support, and administrative support.

Donor reason for selecting the donee: For more background on Open Phil grants related to broiler chicken, see https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken

Donor reason for donating that amount (rather than a bigger or smaller amount): The donee's budget proposal is at https://www.openphilanthropy.org/files/Grants/World_Animal_Protection/Revised_WAP_Chicken_Campaign_Proposal_REDACTED.xlsx
Intended funding timeframe in months: 1

Other notes: Announced: 2017-06-26.
Donee: Global Animal Partnership. Amount: $515,000.00 (amount rank: 40). Donation date: 2017-02. Cause area: Animal welfare/factory farming/chicken. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/global-animal-partnership-broiler-chicken-welfare-research. Influencer: Lewis Bollard. Grant to support research into broiler chicken welfare at the University of Guelph. Followup to the general support grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/global-animal-partnership-general-support. Announced: 2018-10-05.
Donee: Albert Schweitzer Foundation for Our Contemporaries. Amount: $111,986.00 (amount rank: 64). Donation date: 2017-01. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/international. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-international-cage-free-advocacy. Influencer: Lewis Bollard. Grant focused on ending the confinement of hens in battery cages in Poland. Second phase (focused on internationalization) of a wave of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. Announced: 2017-03-21.
Donee: Farm Forward. Amount: $100,000.00 (amount rank: 66). Donation date: 2017-01. Cause area: Animal welfare/factory farming/chicken. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/farm-forward-broiler-chicken-welfare-advocacy. Influencer: Lewis Bollard.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work to secure pledges from institutions including universities, technology companies, and religious organizations to source higher-welfare animal products through The Leadership Circle. While Farm Forward typically works with institutions that purchase fewer animal products than the large food companies that other advocacy groups work with, it also seeks stronger welfare commitments, such as sourcing 100% of chicken from farms that are certified to at least Global Animal Partnership (GAP) Step 2 within two years. The Leadership Circle also asks institutions to commit to continuous improvement and investments in highest-welfare farms and ranches. Project description available at https://www.openphilanthropy.org/files/Grants/Farm_Forward/The_Leadership_Circle_Project_Description.pdf

Donor reason for selecting the donee: Open Phil writes: "It seems plausible to us that the institutions that Farm Forward works with may exert cultural influence that may influence much larger food companies."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget, available at https://www.openphilanthropy.org/files/Grants/Farm_Forward/The_Leadership_Circle_Budget_Public.xlsx, that gives a total of $100,000 for the period January 1, 2017 to December 31, 2017.

Donor retrospective of the donation: The February 2018 renewal https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/farm-forward-leadership-circle-2018 suggests that the grant was considered at least somewhat successful. The renewal writeup says that the grantee says that the grant "helped enable its work with the University of California system, Dr. Bronner’s, Airbnb, Duke University, Villanova University, Johns Hopkins University, and others to commit to source some of their animal products from farms certified to higher-welfare standards."

Other notes: The recipient works with institutions that purchase animal food products and pushes them, through the Leadership Circle, to raise the standards of treatment of the animals used for the food they purchase. Example: sourcing 100% of chicken from farms certified to at least Global Animal Partnership (GAP) Step 2 within two years. Intended funding timeframe in months: 1; announced: 2017-03-30.
Donee: AI Impacts. Amount: $32,000.00 (amount rank: 74). Donation date: 2016-12. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support. Influencer: --. Grant for work on strategic questions related to potential risks from advanced artificial intelligence. Announced: 2017-02-02.
Donee: Compassion Over Killing. Amount: $500,000.00 (amount rank: 41). Donation date: 2016-12. Cause area: Animal welfare/factory farming/chicken. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-over-killing-us-broiler-welfare-campaigns. Influencer: Lewis Bollard.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support broiler chicken welfare research and costs of United States corporate campaigns against the abuse of broiler chickens

Other notes: Part of a strategy focus on broiler chicken welfare started in late 2016, though no overarching document on this has been posted. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Affected countries: United States; announced: 2017-02-16.
Donee: The Humane Society of the United States. Amount: $1,000,000.00 (amount rank: 22). Donation date: 2016-11. Cause area: Animal welfare/factory farming/chicken. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-society-united-states-new-broiler-welfare-corporate-campaigns. Influencer: Lewis Bollard. Part of a strategy focus on broiler chicken welfare started in late 2016, though no overarching document on this has been posted. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Affected countries: United States; announced: 2016-12-15.
Donee: Mercy For Animals. Amount: $1,000,000.00 (amount rank: 22). Donation date: 2016-11. Cause area: Animal welfare/factory farming/chicken. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-broiler-chicken-welfare-corporate-campaigns. Influencer: Lewis Bollard. Part of a strategy focus on broiler chicken welfare started in late 2016, though no overarching document on this has been posted. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Affected countries: United States; announced: 2017-01-10.
Donee: Electronic Frontier Foundation (Earmark: Peter Eckersley). Amount: $199,000.00 (amount rank: 60). Donation date: 2016-11. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/electronic-frontier-foundation-ai-social. Influencer: --. Grant funded work by Peter Eckersley, in whom the Open Philanthropy Project believed. Followup conversation with Peter Eckersley and Jeremy Gillula of the grantee organization at https://www.openphilanthropy.org/sites/default/files/Peter_Eckersley_Jeremy_Gillula_05-26-16_%28public%29.pdf on 2016-05-26. Announced: 2016-12-15.
Donee: Fórum Nacional de Proteção e Defesa Animal. Amount: $100,000.00 (amount rank: 66). Donation date: 2016-10. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/international/Brazil. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-international-cage-free-advocacy. Influencer: Lewis Bollard. Second phase (focused on internationalization) of a wave of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. Affected countries: Brazil; announced: 2016-11-07.
Donee: Humane Society International. Amount: $1,000,000.00 (amount rank: 22). Donation date: 2016-08. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/international. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-society-international-international-cage-free-outreach. Influencer: Lewis Bollard. Second phase (focused on internationalization) of a wave of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. Announced: 2016-10-03.
Donee: Mercy For Animals. Amount: $1,000,000.00 (amount rank: 22). Donation date: 2016-08. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/international. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-international-cage-free-advocacy. Influencer: Lewis Bollard. Second phase (focused on internationalization) of a wave of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. Announced: 2016-10-03.
Donee: Animal Equality. Amount: $500,000.00 (amount rank: 41). Donation date: 2016-08. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/international. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-international-cage-free-advocacy. Influencer: Lewis Bollard. Second phase (focused on internationalization) of a wave of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. Announced: 2016-10-03.
Donee: People for Animals. Amount: $89,392.00 (amount rank: 71). Donation date: 2016-08. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/international/India. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/people-animals-international-cage-free-advocacy. Influencer: Lewis Bollard. Second phase (focused on internationalization) of a wave of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. Followup conversation with Gauri Maulekhi of the grantee organization at https://www.openphilanthropy.org/sites/default/files/Gauri_Maulekhi_02-06-17_%28public%29.pdf on 2017-02-06. Affected countries: India; announced: 2016-10-03.
Donee: Machine Intelligence Research Institute. Amount: $500,000.00 (amount rank: 41). Donation date: 2016-08. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support. Influencer: --.

Donation process: The grant page describes the process in Section 1 (Background and Process): "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design.” [...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The link is to https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The quote continues: "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research. In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs."

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page, Section 3.1 Budget and room for more funding, says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."

Donor reason for selecting the donee: The grant page, Section 3.2 Case for the grant, gives five reasons: (1) Uncertainty about technical assessment (i.e., despite negative technical assessment, there is a chance that MIRI's work is high-potential), (2) Increasing research supply and diversity in the important-but-neglected AI safety space, (3) Potential for improvement of MIRI's research program, (4) Recognition of MIRI's early articulation of the value alignment problem, (5) Other considerations: (a) role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) "participation grant" for time spent in evaluation process, (e) grant in advance of potential need for significant help from MIRI for consulting on AI safety

Donor reason for donating that amount (rather than a bigger or smaller amount): The maximum funding Open Phil considered giving MIRI was $1.5 million per year. However, Open Phil recommended a partial amount, due to reservations described on the grant page, Section 2 (Our impression of MIRI’s Agent Foundations research): (1) an assessment that the research is not likely to be relevant to reducing risks from advanced AI, especially the risks from transformative AI in the next 20 years; (2) MIRI's limited progress toward its agenda, with internal and external reviewers describing the work as technically nontrivial but unimpressive, comparable to what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."

Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: Section 4 Plans for follow-up says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."

Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (not an official Open Phil statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the annual grant amount and the three-year commitment are both more positive for MIRI than the expectations set at the time of the original grant.

Other notes: The grant page links to the commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
Donee: Center for Human-Compatible AI. Amount: $5,555,550.00 (amount rank: 4). Donation date: 2016-08. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai. Influencer: --.

Donation process: The grant page section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process says: "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."

Intended use of funds (category): Organizational general support

Intended use of funds: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding says: "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year." The funding from Open Phil will be used toward this budget. An earlier section of the grant page says that the Center's research topics will include value alignment, value functions defined by partially observable and partially defined terms, the structure of human value systems, and conceptual questions including the properties of ideal value systems.

Donor reason for selecting the donee: The grant page gives these reasons: (1) "We expect the existence of the Center to make it much easier for researchers interested in exploring AI safety to discuss and learn about the topic, and potentially consider focusing their careers on it." (2) "The Center may allow researchers already focused on AI safety to dedicate more of their time to the topic and produce higher-quality research." (3) "We hope that the existence of a well-funded academic center at a major university will solidify the place of this work as part of the larger fields of machine learning and artificial intelligence." Also, counterfactual impact: "Professor Russell would not plan to announce a new Center of this kind without substantial additional funding. [...] We are not aware of other potential [substantial] funders, and we believe that having long-term support in place is likely to make it easier for Professor Russell to recruit for the Center."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is based on budget estimates in https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding: "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year."

Donor reason for donating at this time (rather than earlier or later): Timing seems to have been determined by the time it took to work out the details of the new center after Open Phil decided to make AI safety a major priority in 2016. According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process: "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 in November 2019, as well as many grants to the Berkeley Existential Risk Initiative (BERI) to collaborate with the grantee, suggest that Open Phil continued to think highly of the grantee and stood by its reasoning.

Other notes: Note that the grant recipient in the Open Phil database has been listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.
Donee: The Humane League. Amount: $1,000,000.00 (amount rank: 22). Donation date: 2016-07. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/international. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-international-cage-free-advocacy. Influencer: Lewis Bollard.

Donation process: No details are provided for this grant, but it likely builds on past vetting of the organization for the previous grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns and general interest in cage-free campaigns described at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support international advocacy to end the confinement of hens in battery cages, complementing a similar grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns focused on the United States

Donor reason for selecting the donee: The grant page does not discuss reasons, but reasons are likely similar to those for the previous grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns (both for the donee as an organization and for cage-free campaigns)

Donor reason for donating at this time (rather than earlier or later): No timing-related reasons are discussed, but the timing is likely a result of the Open Philanthropy Project's general push for cage-free campaigning, and promise shown by the first round of cage-free campaign grants made earlier in the year

Donor retrospective of the donation: The general support grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018 in 2018 renews this grant among others

Other notes: Part of the second phase (focused on internationalization) of a wave of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. This and other grants from the Open Philanthropy Project to The Humane League are discussed in https://ssir.org/articles/entry/giving_in_the_light_of_reason as part of an overview of the Open Philanthropy Project's grantmaking strategy. Announced: 2016-10-03.
Donee: George Mason University (Earmark: Robin Hanson). Amount: $277,435.00 (amount rank: 54). Donation date: 2016-06. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios. Influencer: --. Earmarked for Robin Hanson's research. The grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence for background. Original amount: $264,525; increased to $277,435 through the addition of $12,910 in July 2017 to cover an increase in George Mason University’s instructional release costs (teaching buyouts). Announced: 2016-07-07.
Donee: Mercy For Animals. Amount: $1,000,000.00 (amount rank: 22). Donation date: 2016-02. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/United States. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns. Influencer: Lewis Bollard. Part of a wave of corporate cage-free campaign spending; see https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more. Followup conversation with Nick Cooney of the grantee organization at https://www.openphilanthropy.org/sites/default/files/Nick_Cooney_08-01-16_%28public%29.pdf on 2016-08-01. Affected countries: United States; announced: 2016-03-10.
Donee: The Humane Society of the United States. Amount: $500,000.00 (amount rank: 41). Donation date: 2016-02. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/United States. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-society-united-states-corporate-cage-free-campaigns. Influencer: Lewis Bollard. Part of a wave of corporate cage-free campaign spending; see https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more. Followup conversation with Paul Shapiro of the grantee organization at https://www.openphilanthropy.org/sites/default/files/Paul_Shapiro_07-20-16_%28public%29.pdf on 2016-07-20. Affected countries: United States; announced: 2016-03-10.
Donee: The Humane League. Amount: $1,000,000.00 (amount rank: 22). Donation date: 2016-02. Cause area: Animal welfare/factory farming/chicken/cage-free campaign/United States. URL: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns. Influencer: Lewis Bollard.

Donation process: The donation is part of a wave of corporate cage-free campaign spending; see https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for background. The specific process for The Humane League is not discussed in detail; see https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#Our_process

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support corporate cage-free campaigns

Donor reason for selecting the donee: The donor's positive assessment of the donee as a corporate campaigner is described at https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#The_organization. The donor's positive assessment of cage-free campaigns is described at https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#The_cause and at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms. The donor believes the donee's effectiveness will increase with scale; this is part of the reason for the grant, explained further at https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#Case_for_the_grant

Donor reason for donating that amount (rather than a bigger or smaller amount): From https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#Budget_and_room_for_more_funding (Section 2.2): "THL shared two potential two-year budgets for its corporate campaign expansion with us: for an additional $250,000/year, or $500,000/year. We have decided to fund THL’s full corporate campaign expansion budget of $500,000/year for the next two years."

Donor reason for donating at this time (rather than earlier or later): The grant is part of a push by the Open Philanthropy Project to fund corporate cage-free campaigning, explained in more detail at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms. The timing is therefore controlled by the timing of that push
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: Next donation is not directly discussed, but follow-up plans are described in Section 2.4 "Follow-up expectations": a followup with THL staff every 3-6 months, an update at the one-year mark, and a holistic evaluation at the end of the grant period

Donor retrospective of the donation: Followup conversation at https://www.openphilanthropy.org/sites/default/files/The_Humane_League_08-22-16_%28public%29.pdf on 2016-08-22. There are many followup grants for international expansion and general support, suggesting that the grant is considered a success. A renewal and expansion grant is made in August 2018: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018

Other notes: This and other grants from the Open Philanthropy Project to The Humane League are discussed in https://ssir.org/articles/entry/giving_in_the_light_of_reason as part of an overview of the Open Philanthropy Project's grantmaking strategy. Affected countries: United States; announced: 2016-02-24.
Donee: Future of Life Institute. Amount: $1,186,000.00 (amount rank: 18). Donation date: 2015-08. Cause area: AI safety. URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction. Influencer: --. Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks, which describes strategy and developments prior to the grant. An update on the grant was posted in 2017-04 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant, discussing Howie Lempel's and Daniel Dewey's impressions of the grant, its effect on Open Phil, and Open Phil's role. Announced: 2015-08-26.

Similarity to other donors

No similar donors were found.