Open Philanthropy donations made (filtered to cause areas matching AI safety|chicken)

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2023. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

Item | Value
Country | United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | GiveWell, Good Ventures
Best overview URL | https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username | openphilanthropy
Website | https://www.openphilanthropy.org/
Donations URL | https://www.openphilanthropy.org/giving/grants
Twitter username | open_phil
PredictionBook username | OpenPhilUnofficial
Page on philosophy informing donations | https://www.openphilanthropy.org/about/vision-and-values
Grant application process page | https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data | continuous updates
Regularity with which Donations List Website updates donations data (after donor update) | continuous updates
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)

Brief history: Open Philanthropy (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began to make strong progress in 2013, and formally separated from GiveWell as the "Open Philanthropy Project" in June 2017. In 2020, it dropped "Project" from its name and began going simply by "Open Philanthropy".

Brief notes on broad donor philosophy and major focus areas: Open Philanthropy is focused on openness in two ways: it is open to ideas about cause selection, and open in explaining what it is doing. It has endorsed "hits-based giving" and works on AI risk; biosecurity, pandemic preparedness, and other global catastrophic risks; criminal justice reform (United States); animal welfare; and some other areas.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for more questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on the Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" for which the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Philanthropy than it does for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, though the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but are approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment, and is the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). Lags are typically a few months between decision and grant, and a few more months between grant and announcement, largely due to time spent on grant writeup approval.

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant, https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york, was made by Cari Tuna personally. The majority of grants are financed by the Open Philanthropy Project Fund; however, the source of financing for a grant is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed financing is financed through the Open Philanthropy Project Fund. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. The fact that a grant is multi-year, or the distribution of the grant amount across years, is not always explicitly stated on the grant page. Some grants to universities are labeled "gifts", but this is a donee classification, based on the different levels of bureaucratic overhead and funder control between grants and gifts. For more information on all of these points, see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462.
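As a rough illustration of the disbursement pattern described above, the sketch below splits a multi-year grant total into equal annual payments. The equal-split-with-remainder allocation rule is an assumption for illustration only; as noted, Open Philanthropy's actual annual amounts are often, but not always, equal.

```python
# Sketch: split a multi-year grant total into equal annual
# disbursements (a common but not universal pattern, per the notes
# above). Amounts are in whole dollars; any remainder from integer
# division is added to the first year. This allocation rule is an
# assumption, not Open Philanthropy's documented method.
def equal_annual_disbursements(total, years):
    base, remainder = divmod(total, years)
    return [base + remainder] + [base] * (years - 1)

schedule = equal_annual_disbursements(1_000_000, 3)
# three payments that sum back to the grant total
```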

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy database are not listed on the Donations List Website as being under Open Philanthropy. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was included. All other grants publicly disclosed by Open Philanthropy that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by Open Philanthropy are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding.

Donor donation statistics

Cause area Count Median Mean Minimum 10th percentile 20th percentile 30th percentile 40th percentile 50th percentile 60th percentile 70th percentile 80th percentile 90th percentile Maximum
Overall 168 400,000 1,679,648 370 32,000 100,000 159,000 277,435 400,000 517,588 800,000 1,300,000 2,537,600 55,000,000
AI safety 84 330,000 1,864,104 370 24,350 55,000 150,000 265,000 330,000 495,685 705,000 1,450,016 2,652,500 38,920,000
Animal welfare 82 472,864 859,709 14,961 88,345 130,000 215,000 332,944 472,864 600,000 800,000 1,000,000 1,700,000 10,000,000
Global catastrophic risks 1 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000 100,000
Security 1 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000 55,000,000
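The percentile columns above can be reproduced from the raw per-grant amounts with a standard summary-statistics routine. A minimal sketch follows; the sample amounts are hypothetical (the real data lives in the site's git repository), and the nearest-rank percentile method is an assumption, not necessarily what this site uses.

```python
# Sketch: compute one summary-statistics row (count, median, mean,
# minimum, deciles, maximum) for a cause area's donation amounts.
# The percentile method (nearest rank) is an assumption.
import statistics

def stats_row(amounts):
    amounts = sorted(amounts)
    n = len(amounts)

    def pct(p):
        # nearest-rank-style percentile (assumed method)
        idx = max(0, min(n - 1, int(p / 100 * n)))
        return amounts[idx]

    return {
        "count": n,
        "median": statistics.median(amounts),
        "mean": statistics.mean(amounts),
        "minimum": amounts[0],
        **{f"{p}th percentile": pct(p) for p in range(10, 100, 10)},
        "maximum": amounts[-1],
    }

# Hypothetical amounts for illustration only:
row = stats_row([100_000, 330_000, 500_000, 1_000_000, 55_000_000])
```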

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area Number of donations Number of donees Total 2021 2020 2019 2018 2017 2016 2015
AI safety (filter this donor) 84 50 156,584,727.00 77,638,453.00 15,571,349.00 8,243,500.00 4,160,392.00 43,221,048.00 6,563,985.00 1,186,000.00
Animal welfare (filter this donor) 82 42 70,496,101.00 3,600,000.00 14,667,711.00 16,253,030.00 16,729,107.00 10,556,861.00 8,689,392.00 0.00
Security (filter this donor) 1 1 55,000,000.00 0.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
Global catastrophic risks (filter this donor) 1 1 100,000.00 0.00 0.00 0.00 0.00 100,000.00 0.00 0.00
Total 168 92 282,180,828.00 81,238,453.00 30,239,060.00 79,496,530.00 20,889,499.00 53,877,909.00 15,253,377.00 1,186,000.00

Graph of spending by cause area and year (incremental, not cumulative)

Graph of spending should have loaded here

Graph of spending by cause area and year (cumulative)

Graph of spending should have loaded here

Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area Number of donations Number of donees Total 2021 2020 2019 2018 2017 2016 2015
AI safety 82 48 155,869,183.00 77,517,329.00 14,976,929.00 8,243,500.00 4,160,392.00 43,221,048.00 6,563,985.00 1,186,000.00
Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety 1 1 55,000,000.00 0.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/cage-free/corporate campaign 11 4 19,297,600.00 0.00 800,000.00 1,997,600.00 10,000,000.00 2,000,000.00 4,500,000.00 0.00
Animal welfare/factory farming/chicken/broiler chicken/cage-free/corporate campaign 4 3 14,911,430.00 0.00 5,501,000.00 6,638,000.00 2,772,430.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken 14 13 6,790,775.00 0.00 1,965,057.00 831,466.00 0.00 2,994,252.00 1,000,000.00 0.00
Animal welfare/factory farming/chicken/broiler chicken/corporate campaign 7 5 4,730,240.00 0.00 0.00 2,007,498.00 375,000.00 1,347,742.00 1,000,000.00 0.00
Animal welfare/factory farming/chicken/cage-free 10 8 3,247,586.00 600,000.00 40,000.00 395,600.00 0.00 1,111,986.00 1,100,000.00 0.00
Animal welfare/factory farming/chicken/fish 5 5 3,129,448.00 0.00 1,279,448.00 1,850,000.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/chick culling 1 1 3,000,000.00 0.00 0.00 0.00 3,000,000.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/broiler chicken/cage-free/corporate campaign 1 1 3,000,000.00 3,000,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/cage-free/broiler chicken 1 1 1,700,000.00 0.00 0.00 1,700,000.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/cage-free/broiler chicken/corporate campaign 1 1 1,642,046.00 0.00 1,642,046.00 0.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/layer chicken 3 2 1,022,452.00 0.00 784,586.00 237,866.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/turkey/pig 1 1 1,000,000.00 0.00 0.00 0.00 0.00 1,000,000.00 0.00 0.00
Animal welfare/factory farming/chicken and pig 1 1 1,000,000.00 0.00 0.00 0.00 0.00 1,000,000.00 0.00 0.00
Animal welfare/factory farming/chicken/broiler chicken 3 2 814,592.00 0.00 425,000.00 0.00 0.00 389,592.00 0.00 0.00
Animal welfare/factory farming/chicken/broiler chicken/layer chicken 1 1 635,000.00 0.00 635,000.00 0.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/fish/chicken/pig 2 1 612,974.00 0.00 462,974.00 150,000.00 0.00 0.00 0.00 0.00
AI safety/talent pipeline 1 1 594,420.00 0.00 594,420.00 0.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/cage-free campaign/United States 1 1 500,000.00 0.00 0.00 0.00 0.00 0.00 500,000.00 0.00
Animal welfare/factory farming/chicken/broiler chicken/research/corporate campaign 1 1 500,000.00 0.00 0.00 0.00 0.00 0.00 500,000.00 0.00
Animal welfare/factory farming/chicken/cattle/pig 1 1 445,000.00 0.00 0.00 445,000.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/layer chicken/cage-free/research 1 1 410,000.00 0.00 410,000.00 0.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/pig/chicken 1 1 350,000.00 0.00 350,000.00 0.00 0.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/cattle 1 1 332,944.00 0.00 0.00 0.00 0.00 332,944.00 0.00 0.00
Animal welfare/factory farming/chicken/layer chicken/pig/cage-free 2 1 300,000.00 0.00 100,000.00 0.00 200,000.00 0.00 0.00 0.00
Animal welfare/factory farming/chicken/chick culling|Animal welfare/diet change 1 1 292,000.00 0.00 0.00 0.00 0.00 292,000.00 0.00 0.00
Animal welfare/factory farming/chicken/broiler chicken/research 1 1 231,677.00 0.00 0.00 0.00 231,677.00 0.00 0.00 0.00
Classified total 168 92 282,180,828.00 81,238,453.00 30,239,060.00 79,496,530.00 20,889,499.00 53,877,909.00 15,253,377.00 1,186,000.00
Unclassified total 0 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Total 168 92 282,180,828.00 81,238,453.00 30,239,060.00 79,496,530.00 20,889,499.00 53,877,909.00 15,253,377.00 1,186,000.00
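The subcause labels in the table above encode a hierarchy: "/" separates levels within one classification, and "|" separates multiple independent classifications attached to the same grant (as in the chick culling|diet change row). A minimal parsing sketch, where this reading of the delimiters is an inference from the table rather than documented behaviour:

```python
# Sketch: split a subcause label into its classification paths.
# "|" separates independent classifications; "/" separates levels
# within one classification. (Inferred from the table, not
# documented behaviour of the site.)
def parse_subcause(label):
    return [part.split("/") for part in label.split("|")]

paths = parse_subcause(
    "Animal welfare/factory farming/chicken/chick culling"
    "|Animal welfare/diet change"
)
# paths[0][0] is the top-level cause area, "Animal welfare"
```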

Graph of spending by subcause area and year (incremental, not cumulative)

Graph of spending should have loaded here

Graph of spending by subcause area and year (cumulative)

Graph of spending should have loaded here

Donation amounts by donee and year

Donee Cause area Metadata Total 2021 2020 2019 2018 2017 2016 2015
Center for Security and Emerging Technology (filter this donor) 101,920,000.00 46,920,000.00 0.00 55,000,000.00 0.00 0.00 0.00 0.00
OpenAI (filter this donor) AI safety FB Tw WP Site TW 30,000,000.00 0.00 0.00 0.00 0.00 30,000,000.00 0.00 0.00
The Humane League (filter this donor) Animal welfare/Diet change/Veganism/Factory farming FB Tw WP Site TW 19,915,000.00 0.00 3,600,000.00 2,315,000.00 10,000,000.00 2,000,000.00 2,000,000.00 0.00
Center for Human-Compatible AI (filter this donor) AI safety WP Site TW 17,110,796.00 11,355,246.00 0.00 200,000.00 0.00 0.00 5,555,550.00 0.00
Machine Intelligence Research Institute (filter this donor) AI safety FB Tw WP Site CN GS TW 14,756,250.00 0.00 7,703,750.00 2,652,500.00 150,000.00 3,750,000.00 500,000.00 0.00
Mercy For Animals (filter this donor) Animal welfare/Diet change/Veganism/Factory farming FB Tw WP Site TW 13,274,000.00 3,000,000.00 0.00 6,899,000.00 375,000.00 0.00 3,000,000.00 0.00
Redwood Research (filter this donor) 9,420,000.00 9,420,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Open Phil AI Fellowship (filter this donor) 7,060,000.00 1,300,000.00 2,300,000.00 2,325,000.00 1,135,000.00 0.00 0.00 0.00
Animal Equality (filter this donor) FB Tw WP Site 5,680,430.00 0.00 1,901,000.00 215,000.00 2,772,430.00 292,000.00 500,000.00 0.00
University of California, Berkeley (filter this donor) FB Tw WP Site 4,366,016.00 660,000.00 0.00 1,111,000.00 1,145,000.00 1,450,016.00 0.00 0.00
Foundation for Food and Agricultural Research (filter this donor) Animal welfare FB Tw Site 4,000,000.00 0.00 0.00 0.00 3,000,000.00 1,000,000.00 0.00 0.00
Ought (filter this donor) AI safety Site 3,118,333.00 0.00 1,593,333.00 1,000,000.00 525,000.00 0.00 0.00 0.00
L214 (filter this donor) 2,989,788.00 0.00 1,642,046.00 0.00 0.00 1,347,742.00 0.00 0.00
Centre for the Governance of AI (filter this donor) 2,987,600.00 2,537,600.00 450,000.00 0.00 0.00 0.00 0.00 0.00
Albert Schweitzer Foundation (filter this donor) 2,711,986.00 0.00 0.00 1,600,000.00 0.00 1,111,986.00 0.00 0.00
Montreal Institute for Learning Algorithms (filter this donor) AI capabilities/AI safety Site 2,400,000.00 0.00 0.00 0.00 0.00 2,400,000.00 0.00 0.00
Compassion in World Farming (filter this donor) FB Tw WP Site 2,228,407.00 0.00 1,228,407.00 0.00 0.00 1,000,000.00 0.00 0.00
Stanford University (filter this donor) FB Tw WP Site 2,137,455.00 661,584.00 6,500.00 0.00 106,771.00 1,362,600.00 0.00 0.00
Future of Humanity Institute (filter this donor) Global catastrophic risks/AI safety/Biosecurity and pandemic preparedness FB Tw WP Site TW 1,994,000.00 0.00 0.00 0.00 0.00 1,994,000.00 0.00 0.00
World Animal Protection (filter this donor) FB Tw WP Site 1,856,552.00 0.00 0.00 1,338,964.00 0.00 517,588.00 0.00 0.00
Massachusetts Institute of Technology (filter this donor) FB Tw WP Site 1,705,344.00 1,430,000.00 275,344.00 0.00 0.00 0.00 0.00 0.00
Anima International (filter this donor) 1,700,000.00 0.00 0.00 1,700,000.00 0.00 0.00 0.00 0.00
The Wilson Center (filter this donor) FB Tw WP Site 1,556,194.00 291,214.00 864,980.00 0.00 400,000.00 0.00 0.00 0.00
UCLA School of Law (filter this donor) Tw WP Site 1,536,222.00 0.00 0.00 0.00 0.00 1,536,222.00 0.00 0.00
Berkeley Existential Risk Initiative (filter this donor) AI safety/other global catastrophic risks Site TW 1,508,890.00 0.00 150,000.00 955,000.00 0.00 403,890.00 0.00 0.00
The Humane Society of the United States (filter this donor) FB Tw WP Site 1,500,000.00 0.00 0.00 0.00 0.00 0.00 1,500,000.00 0.00
Future of Life Institute (filter this donor) AI safety/other global catastrophic risks FB Tw WP Site 1,286,000.00 0.00 0.00 0.00 0.00 100,000.00 0.00 1,186,000.00
Eurogroup for Animals (filter this donor) Animal welfare FB Tw WP Site 1,275,361.00 0.00 635,000.00 0.00 0.00 640,361.00 0.00 0.00
Sinergia Animal (filter this donor) 1,232,600.00 0.00 800,000.00 432,600.00 0.00 0.00 0.00 0.00
Royal Society for the Prevention of Cruelty to Animals (filter this donor) FB Tw WP Site 1,031,308.00 0.00 425,000.00 0.00 231,677.00 374,631.00 0.00 0.00
Humane Society International (filter this donor) FB Tw WP Site 1,000,000.00 0.00 0.00 0.00 0.00 0.00 1,000,000.00 0.00
FAI Farms (filter this donor) 944,600.00 600,000.00 105,000.00 239,600.00 0.00 0.00 0.00 0.00
University of Tübingen (filter this donor) 890,000.00 890,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Center for Welfare Metrics (filter this donor) 784,586.00 0.00 784,586.00 0.00 0.00 0.00 0.00 0.00
Federation of Indian Animal Protection Organisations (filter this donor) Animal welfare FB Tw WP Site 777,944.00 0.00 0.00 445,000.00 0.00 332,944.00 0.00 0.00
Animal Outlook (filter this donor) 750,000.00 0.00 0.00 250,000.00 0.00 0.00 500,000.00 0.00
Anima (filter this donor) Animal welfare/factory farming FB Tw WP Site 683,000.00 0.00 0.00 0.00 0.00 683,000.00 0.00 0.00
Group Nine Media (filter this donor) 680,448.00 0.00 680,448.00 0.00 0.00 0.00 0.00 0.00
Essere Animali (filter this donor) 612,974.00 0.00 462,974.00 150,000.00 0.00 0.00 0.00 0.00
Study and Training Related to AI Policy Careers (filter this donor) 594,420.00 0.00 594,420.00 0.00 0.00 0.00 0.00 0.00
University of Bern (filter this donor) Tw WP Site 560,000.00 0.00 410,000.00 0.00 150,000.00 0.00 0.00 0.00
WestExec (filter this donor) 540,000.00 0.00 540,000.00 0.00 0.00 0.00 0.00 0.00
Environmental & Animal Society of Taiwan (filter this donor) 521,000.00 0.00 521,000.00 0.00 0.00 0.00 0.00 0.00
University of Toronto (filter this donor) FB Tw WP Site 520,000.00 0.00 520,000.00 0.00 0.00 0.00 0.00 0.00
Global Animal Partnership (filter this donor) Animal welfare FB Tw WP Site 515,000.00 0.00 0.00 0.00 0.00 515,000.00 0.00 0.00
The Humane League UK (filter this donor) 507,900.00 0.00 507,900.00 0.00 0.00 0.00 0.00 0.00
Rethink Priorities (filter this donor) Cause prioritization Site 495,685.00 495,685.00 0.00 0.00 0.00 0.00 0.00 0.00
Otwarte Klatki (filter this donor) Animal welfare FB Tw Site 472,864.00 0.00 0.00 0.00 0.00 472,864.00 0.00 0.00
University of Oxford (filter this donor) FB Tw WP Site 429,770.00 0.00 0.00 0.00 429,770.00 0.00 0.00 0.00
Fórum Nacional de Proteção e Defesa Animal (filter this donor) Animal welfare FB Tw Site 400,000.00 0.00 100,000.00 0.00 200,000.00 0.00 100,000.00 0.00
Catalyst (filter this donor) 350,000.00 0.00 350,000.00 0.00 0.00 0.00 0.00 0.00
Carnegie Mellon University (filter this donor) FB Tw WP Site 330,000.00 330,000.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Southern California (filter this donor) FB Tw WP Site 320,000.00 320,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Yale University (filter this donor) FB Tw WP Site 299,320.00 0.00 0.00 0.00 0.00 299,320.00 0.00 0.00
George Mason University (filter this donor) FB WP Site 277,435.00 0.00 0.00 0.00 0.00 0.00 277,435.00 0.00
Animal Rights Center Japan (filter this donor) 274,000.00 0.00 0.00 274,000.00 0.00 0.00 0.00 0.00
University of California, Santa Cruz (filter this donor) 265,000.00 265,000.00 0.00 0.00 0.00 0.00 0.00 0.00
University of Cambridge (filter this donor) FB Tw WP Site 250,000.00 250,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Animal Kingdom Foundation (filter this donor) 237,866.00 0.00 0.00 237,866.00 0.00 0.00 0.00 0.00
Electronic Frontier Foundation (filter this donor) FB Tw WP Site 199,000.00 0.00 0.00 0.00 0.00 0.00 199,000.00 0.00
AI Impacts (filter this donor) AI safety Site 182,000.00 0.00 50,000.00 0.00 100,000.00 0.00 32,000.00 0.00
Daniel Dewey (filter this donor) 175,000.00 175,000.00 0.00 0.00 0.00 0.00 0.00 0.00
AI Scholarships (filter this donor) 159,000.00 0.00 0.00 0.00 159,000.00 0.00 0.00 0.00
Equalia (filter this donor) 150,000.00 0.00 150,000.00 0.00 0.00 0.00 0.00 0.00
Berryville Institute of Machine Learning (filter this donor) 150,000.00 150,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Center for a New American Security (filter this donor) 141,094.00 0.00 141,094.00 0.00 0.00 0.00 0.00 0.00
SPCA Selangor (filter this donor) 134,000.00 0.00 0.00 134,000.00 0.00 0.00 0.00 0.00
Alianima (filter this donor) 130,000.00 0.00 130,000.00 0.00 0.00 0.00 0.00 0.00
Hypermind (filter this donor) 121,124.00 121,124.00 0.00 0.00 0.00 0.00 0.00 0.00
Center for Strategic and International Studies (filter this donor) 118,307.00 0.00 118,307.00 0.00 0.00 0.00 0.00 0.00
Farm Forward (filter this donor) Animal welfare FB Tw WP Site GS 100,000.00 0.00 0.00 0.00 0.00 100,000.00 0.00 0.00
People for Animals (filter this donor) WP 89,392.00 0.00 0.00 0.00 0.00 0.00 89,392.00 0.00
Wageningen University & Research (filter this donor) 88,345.00 0.00 0.00 0.00 0.00 88,345.00 0.00 0.00
Institute for Advancement of Animal Welfare Science (filter this donor) 80,400.00 0.00 0.00 0.00 0.00 80,400.00 0.00 0.00
Compassion in World Farming USA (filter this donor) Animal welfare/corporate campaigns FB Tw Site 78,750.00 0.00 78,750.00 0.00 0.00 0.00 0.00 0.00
Animal Friends Jogja (filter this donor) 78,000.00 0.00 78,000.00 0.00 0.00 0.00 0.00 0.00
Center for International Security and Cooperation (filter this donor) WP 67,000.00 0.00 67,000.00 0.00 0.00 0.00 0.00 0.00
Brian Christian (filter this donor) 66,000.00 66,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Johns Hopkins University (filter this donor) FB Tw WP Site 55,000.00 0.00 55,000.00 0.00 0.00 0.00 0.00 0.00
World Economic Forum (filter this donor) FB Tw WP Site 50,000.00 0.00 50,000.00 0.00 0.00 0.00 0.00 0.00
Impact Alliance (filter this donor) 40,000.00 0.00 40,000.00 0.00 0.00 0.00 0.00 0.00
World Animal Net (filter this donor) 37,600.00 0.00 37,600.00 0.00 0.00 0.00 0.00 0.00
RAND Corporation (filter this donor) FB Tw WP Site 30,751.00 0.00 30,751.00 0.00 0.00 0.00 0.00 0.00
Distill (filter this donor) AI capabilities/AI safety Tw Site 25,000.00 0.00 0.00 0.00 0.00 25,000.00 0.00 0.00
Rice, Hadley, Gates & Manuel LLC (filter this donor) 25,000.00 0.00 25,000.00 0.00 0.00 0.00 0.00 0.00
Sankalpa (filter this donor) 22,000.00 0.00 0.00 22,000.00 0.00 0.00 0.00 0.00
Press Shop (filter this donor) 17,000.00 0.00 17,000.00 0.00 0.00 0.00 0.00 0.00
Andrew Lohn (filter this donor) 15,000.00 0.00 15,000.00 0.00 0.00 0.00 0.00 0.00
GoalsRL (filter this donor) AI safety Site 7,500.00 0.00 0.00 0.00 7,500.00 0.00 0.00 0.00
International Conference on Learning Representations (filter this donor) 3,500.00 0.00 3,500.00 0.00 0.00 0.00 0.00 0.00
Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai (filter this donor) 2,351.00 0.00 0.00 0.00 2,351.00 0.00 0.00 0.00
Smitha Milli (filter this donor) 370.00 0.00 370.00 0.00 0.00 0.00 0.00 0.00
Total -- -- 282,180,828.00 81,238,453.00 30,239,060.00 79,496,530.00 20,889,499.00 53,877,909.00 15,253,377.00 1,186,000.00

Graph of spending by donee and year (incremental, not cumulative)

Graph of spending should have loaded here

Graph of spending by donee and year (cumulative)

Graph of spending should have loaded here

Donation amounts by influencer and year

If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Influencer Number of donations Number of donees Total 2021 2020 2019 2018 2017 2016
Luke Muehlhauser 21 15 108,492,519.00 50,365,623.00 2,726,896.00 55,000,000.00 400,000.00 0.00 0.00
Lewis Bollard 60 33 52,380,034.00 600,000.00 4,716,608.00 11,088,066.00 16,729,107.00 10,556,861.00 8,689,392.00
Nick Beckstead 8 8 25,595,336.00 21,016,246.00 0.00 0.00 429,770.00 4,149,320.00 0.00
Daniel Dewey 27 16 13,640,498.00 1,550,000.00 77,370.00 5,591,000.00 3,180,622.00 3,241,506.00 0.00
Claire Zabel|Committee for Effective Altruism Support 2 1 10,356,250.00 0.00 7,703,750.00 2,652,500.00 0.00 0.00 0.00
Amanda Hungerford 14 12 8,427,371.00 0.00 3,262,407.00 5,164,964.00 0.00 0.00 0.00
Amanda Hungerford|Lewis Bollard 6 6 5,046,650.00 0.00 5,046,650.00 0.00 0.00 0.00 0.00
Lewis Bollard|Amanda Hungerford 2 2 4,642,046.00 3,000,000.00 1,642,046.00 0.00 0.00 0.00 0.00
Catherine Olsson|Daniel Dewey 5 4 4,540,000.00 2,240,000.00 2,300,000.00 0.00 0.00 0.00 0.00
Committee for Effective Altruism Support 2 2 2,043,333.00 0.00 2,043,333.00 0.00 0.00 0.00 0.00
Helen Toner 1 1 1,536,222.00 0.00 0.00 0.00 0.00 1,536,222.00 0.00
Catherine Olsson|Nick Beckstead 4 3 1,475,000.00 1,475,000.00 0.00 0.00 0.00 0.00 0.00
Catherine Olsson 3 2 991,584.00 991,584.00 0.00 0.00 0.00 0.00 0.00
Daniel Dewey|Catherine Olsson 1 1 520,000.00 0.00 520,000.00 0.00 0.00 0.00 0.00
Claire Zabel 2 2 300,000.00 0.00 150,000.00 0.00 150,000.00 0.00 0.00
Tom Davidson|Ajeya Cotra 1 1 50,000.00 0.00 50,000.00 0.00 0.00 0.00 0.00
Classified total 159 87 240,036,843.00 81,238,453.00 30,239,060.00 79,496,530.00 20,889,499.00 19,483,909.00 8,689,392.00
Unclassified total 9 9 42,143,985.00 0.00 0.00 0.00 0.00 34,394,000.00 6,563,985.00
Total 168 92 282,180,828.00 81,238,453.00 30,239,060.00 79,496,530.00 20,889,499.00 53,877,909.00 15,253,377.00

Graph of spending by influencer and year (incremental, not cumulative)

Graph of spending should have loaded here

Graph of spending by influencer and year (cumulative)

Graph of spending should have loaded here

Donation amounts by disclosures and year

If you hover over a cell for a given disclosure and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Disclosures Number of donations Number of donees Total 2017 2016 2015
Paul Christiano 2 2 30,500,000.00 30,000,000.00 500,000.00 0.00
Dario Amodei 1 1 30,000,000.00 30,000,000.00 0.00 0.00
Holden Karnofsky 1 1 30,000,000.00 30,000,000.00 0.00 0.00
Daniel Dewey 4 4 5,171,435.00 4,394,000.00 777,435.00 0.00
Nick Beckstead 4 4 3,957,435.00 1,994,000.00 777,435.00 1,186,000.00
Chris Olah 1 1 2,400,000.00 2,400,000.00 0.00 0.00
Carl Shulman 1 1 1,994,000.00 1,994,000.00 0.00 0.00
Unknown, generic, or multiple 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Helen Toner 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Luke Muehlhauser 2 2 1,686,000.00 0.00 500,000.00 1,186,000.00
Ben Hoffman 1 1 1,186,000.00 0.00 0.00 1,186,000.00
Jacob Steinhardt 1 1 500,000.00 0.00 500,000.00 0.00
Lewis Bollard 1 1 500,000.00 0.00 500,000.00 0.00
Classified total 7 7 36,857,435.00 34,394,000.00 1,277,435.00 1,186,000.00
Unclassified total 161 88 245,323,393.00 19,483,909.00 13,975,942.00 0.00
Total 168 92 282,180,828.00 53,877,909.00 15,253,377.00 1,186,000.00

Graph of spending by disclosures and year (incremental, not cumulative)

Graph of spending should have loaded here

Graph of spending by disclosures and year (cumulative)

Graph of spending should have loaded here

Donation amounts by country and year

If you hover over a cell for a given country and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Country Number of donations Number of donees Total 2021 2020 2019 2018 2017 2016
United States 9 6 14,740,000.00 0.00 0.00 215,000.00 10,525,000.00 0.00 4,000,000.00
United States|Canada|Brazil|Mexico 1 1 6,638,000.00 0.00 0.00 6,638,000.00 0.00 0.00 0.00
United States|Latin America 1 1 3,000,000.00 3,000,000.00 0.00 0.00 0.00 0.00 0.00
France 2 1 2,989,788.00 0.00 1,642,046.00 0.00 0.00 1,347,742.00 0.00
United States|Brazil|Italy|Mexico|Spain 1 1 2,772,430.00 0.00 0.00 0.00 2,772,430.00 0.00 0.00
United Kingdom 4 2 2,031,308.00 0.00 425,000.00 0.00 231,677.00 1,374,631.00 0.00
Germany|Italy|Spain|United Kingdom 1 1 1,901,000.00 0.00 1,901,000.00 0.00 0.00 0.00 0.00
Ukraine|Norway|Denmark|Poland 1 1 1,700,000.00 0.00 0.00 1,700,000.00 0.00 0.00 0.00
European Union 3 1 1,275,361.00 0.00 635,000.00 0.00 0.00 640,361.00 0.00
India 4 3 1,159,336.00 0.00 0.00 445,000.00 0.00 624,944.00 89,392.00
Brazil|Mexico 1 1 1,000,000.00 0.00 0.00 0.00 0.00 0.00 1,000,000.00
United States|Canada 1 1 1,000,000.00 0.00 0.00 0.00 0.00 0.00 1,000,000.00
Latin America|Asia 1 1 1,000,000.00 0.00 0.00 0.00 0.00 0.00 1,000,000.00
Germany 1 1 1,000,000.00 0.00 0.00 0.00 0.00 1,000,000.00 0.00
China 4 1 944,600.00 600,000.00 105,000.00 239,600.00 0.00 0.00 0.00
Argentina|Chile|Colombia|Ecuador|Peru 1 1 800,000.00 0.00 800,000.00 0.00 0.00 0.00 0.00
Thailand|Indonesia 1 1 781,498.00 0.00 0.00 781,498.00 0.00 0.00 0.00
Scandinavia 1 1 683,000.00 0.00 0.00 0.00 0.00 683,000.00 0.00
Italy 2 1 612,974.00 0.00 462,974.00 150,000.00 0.00 0.00 0.00
Brazil 5 3 552,000.00 0.00 230,000.00 22,000.00 200,000.00 0.00 100,000.00
Taiwan 1 1 521,000.00 0.00 521,000.00 0.00 0.00 0.00 0.00
Poland|Ukraine 1 1 472,864.00 0.00 0.00 0.00 0.00 472,864.00 0.00
Thailand 1 1 350,000.00 0.00 350,000.00 0.00 0.00 0.00 0.00
Japan 1 1 274,000.00 0.00 0.00 274,000.00 0.00 0.00 0.00
Argentina|Chile|Colombia 1 1 245,000.00 0.00 0.00 245,000.00 0.00 0.00 0.00
Philippines 2 1 237,866.00 0.00 0.00 237,866.00 0.00 0.00 0.00
Spain 1 1 150,000.00 0.00 150,000.00 0.00 0.00 0.00 0.00
Classified total 56 31 49,156,011.00 3,600,000.00 7,300,020.00 11,081,964.00 13,729,107.00 6,255,528.00 7,189,392.00
Unclassified total 112 70 233,024,817.00 77,638,453.00 22,939,040.00 68,414,566.00 7,160,392.00 47,622,381.00 8,063,985.00
Total 168 92 282,180,828.00 81,238,453.00 30,239,060.00 79,496,530.00 20,889,499.00 53,877,909.00 15,253,377.00

Graph of spending by country and year (incremental, not cumulative)


Graph of spending by country and year (cumulative)


Full list of documents in reverse chronological order (30 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
Our Progress in 2020 and Plans for 20212021-04-29Holden Karnofsky Open PhilanthropyOpen Philanthropy Broad donor strategyAI safety|Biosecurity and pandemic preparedness|Criminal justice reform|Animal welfare|Scientific research|Effective altruism|COVID-19The post compares progress made by Open Philanthropy in 2020 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2019-and-plans-2020 and then lays out plans for 2021. The post notes that grantmaking, including grants to GiveWell top charities, was over $200 million. The post reviews the following from 2020: continued grantmaking, worldview investigations, other cause prioritization work, hiring and other capacity building, impact evaluation, outreach to external donors, and plans for 2021.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Ben Hoskin Effective Altruism ForumBen Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
Our Progress in 2019 and Plans for 20202020-05-08Holden Karnofsky Open PhilanthropyOpen Philanthropy Broad donor strategyCriminal justice reform|Animal welfare|AI safety|Effective altruismThe post compares progress made by the Open Philanthropy Project in 2019 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2018-and-plans-2019 and then lays out plans for 2020. The post notes that grantmaking, including grants to GiveWell top charities, was over $200 million. The post reviews the following from 2019: continued grantmaking, growth of the operations team, impact evaluation (with good progress in evaluation of giving in criminal justice reform and animal welfare), worldview investigations (that was harder than anticipated, resulting in slower progress), other cause prioritization work, hiring and other capacity building, and outreach to external donors.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Ben Hoskin Effective Altruism ForumBen Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 20192019-12-18Holden Karnofsky Open PhilanthropyChloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought Donation suggestion listCriminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safetyContinuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support", i.e., organizations within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents its views.
Co-funding Partnership with Ben Delo2019-11-11Holden Karnofsky Open PhilanthropyOpen Philanthropy Ben Delo PartnershipAI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Effective altruismBen Delo, co-founder of the cryptocurrency trading platform BitMEX, recently signed the Giving Pledge. He is entering into a partnership with the Open Philanthropy Project, providing funds, initially in the $5 million per year range, to support Open Phil's longtermist grantmaking, in areas including AI safety, biosecurity and pandemic preparedness, global catastrophic risks, and effective altruism. Later, the Machine Intelligence Research Institute (MIRI) would reveal at https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ that, of a $7.7 million grant from Open Phil, $1.46 million is coming from Ben Delo.
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR)2019-09-10Ryan Carey Effective Altruism ForumFounders Pledge Open Philanthropy OpenAI Machine Intelligence Research Institute Broad donor strategyAI safety|Global catastrophic risks|Scientific research|PoliticsRyan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low-mid prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals.
New grants from the Open Philanthropy Project and BERI2019-04-01Rob Bensinger Machine Intelligence Research InstituteOpen Philanthropy Berkeley Existential Risk Initiative Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces two grants to it: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (of a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/ The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/
Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security2019-03-20Tate Williams Inside PhilanthropyOpen Philanthropy Center for Security and Emerging Technology Third-party coverage of donor strategyAI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|SecurityThe article focuses on grantmaking by the Open Philanthropy Project in the areas of global catastrophic risks and security, particularly in AI safety and biosecurity and pandemic preparedness. It includes quotes from Luke Muehlhauser, Senior Research Analyst at the Open Philanthropy Project and the investigator for the $55 million grant https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology to the Center for Security and Emerging Technology (CSET). Muehlhauser was previously Executive Director at the Machine Intelligence Research Institute. It also includes a quote from Holden Karnofsky, who sees the early interest of effective altruists in AI safety as prescient. The CSET grant is discussed in the context of the Open Philanthropy Project's hits-based giving approach, as well as the interest in the policy space in better understanding of safety and governance issues related to technology and AI.
Committee for Effective Altruism Support2019-02-27Open PhilanthropyOpen Philanthropy Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute Broad donor strategyEffective altruism|AI safetyThe document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community" including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose sizes were determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
Occasional update July 5 20182018-07-05Katja Grace AI ImpactsOpen Philanthropy Anonymous AI Impacts Donee periodic updateAI safetyKatja Grace gives an update on the situation with AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation are mentioned, as are team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn.
The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.2018-02-27Robert Wiblin Kieran Harris Holden Karnofsky 80,000 HoursOpen Philanthropy Broad donor strategyAI safety|Global catastrophic risks|Biosecurity and pandemic preparedness|Global health and development|Animal welfare|Scientific researchThis interview, with full transcript, is an episode of the 80,000 Hours podcast. In the interview, Karnofsky provides an overview of the cause prioritization and grantmaking strategy of the Open Philanthropy Project, and also notes that the Open Philanthropy Project is hiring for a number of positions.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
How Will Hen Welfare Be Impacted by the Transition to Cage-Free Housing?2017-09-15Ajeya Cotra Open PhilanthropyOpen Philanthropy Reasoning supplementAnimal welfare/factory farming/chicken/cage-free campaignA followup to https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms which described the original cage-free campaign funding strategy. This report compares aviaries (cage-free living environments) with cages for hens. It tempers original enthusiasm for cage-free by noting higher mortality rates, but continues to support the position that cage-free is likely better on net for hens. Described in blog post https://www.openphilanthropy.org/blog/new-report-welfare-differences-between-cage-and-cage-free-housing that expresses regret for not investigating this more thoroughly earlier, and thanks Direct Action Everywhere for highlighting the issue. See https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/cnK5yNlYHuc for the announcement.
The Open Philanthropy Project AI Fellows Program2017-09-12Open PhilanthropyOpen Philanthropy Broad donor strategyAI safetyThis announces an AI Fellows Program to support students doing Ph.D. work in AI-related fields who have an interest in AI safety. See https://www.facebook.com/vipulnaik.r/posts/10213116327718748 and https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussions.
A major grant from the Open Philanthropy Project2017-09-08Malo Bourgon Machine Intelligence Research InstituteOpen Philanthropy Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces that it has received a three-year grant at $1.25 million per year from the Open Philanthropy Project, and links to the announcement from Open Phil at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and notes "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."
My current thoughts on MIRI’s highly reliable agent design work (GW, IR)2017-07-07Daniel Dewey Effective Altruism ForumOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin.
Our Progress in 2016 and Plans for 20172017-03-14Holden Karnofsky Open PhilanthropyOpen Philanthropy Broad donor strategyScientific research|AI safetyThe blog post compares progress made by the Open Philanthropy Project in 2016 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2015-and-plans-2016 and then lays out plans for 2017. The post notes success in scaling up grantmaking, as hoped for in last year's plan. The spinoff from GiveWell is still not completed because it turned out to be more complex than expected, but it is expected to be finished in mid-2017. Open Phil highlights the hiring of three Scientific Advisors (Chris Somerville, Heather Youngs, and Daniel Martin-Alarcon) in mid-2016, as part of its scientific research work. The organization also plans to focus more on figuring out how to allocate money between different cause areas, with Karnofsky's worldview diversification post https://www.openphilanthropy.org/blog/worldview-diversification also highlighted. There is no plan to scale up staff or grantmaking (unlike 2016, when the focus was to scale up grantmaking, and 2015, when the focus was to scale up staff).
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20162016-12-14Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policyOpen Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas.
Grisly Undercover Video Shows Chickens Being Starved To Produce More Eggs2016-10-11Nico Pitney Huffington PostOpen Philanthropy Humane Society International Mercy For Animals Animal Equality People for Animals The Humane League Third-party coverage of donor strategyAnimal welfare/factory farming/chicken/cage-free campaign/internationalProvides some context for the move by the Open Philanthropy Project in mid-2016 to expand its cage-free campaign funding internationally.
Some Key Ways in Which I've Changed My Mind Over the Last Several Years2016-09-06Holden Karnofsky Open Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Reasoning supplementAI safetyIn this 16-page Google Doc, Holden Karnofsky, Executive Director of the Open Philanthropy Project, lists three issues he has changed his mind about: (1) AI safety (he considers it more important now), (2) effective altruism community (he takes it more seriously now), and (3) general properties of promising ideas and interventions (he considers feedback loops less necessary than he used to, and finding promising ideas through abstract reasoning more promising). The document is linked to and summarized in the blog post https://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about
Machine Intelligence Research Institute — General Support2016-09-06Open Philanthropy Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyOpen Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email.
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF)2016-09-06Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyReviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06).
Here are the biggest things I got wrong in my attempts at effective altruism over the last ~3 years.2016-05-24Buck Shlegeris Buck Shlegeris Open Philanthropy Vegan Outreach Machine Intelligence Research Institute Broad donor strategyGlobal health|Animal welfare|AI safetyBuck Shlegeris, reflecting on his past three years as an effective altruist, identifies three mistakes he made: (1) "I thought leafleting about factory farming was more effective than GiveWell top charities. [...] I probably made this mistake because of emotional bias. I was frustrated by people who advocated for global poverty charities for dumb reasons. [...] I thought that if they really had that belief, they should either save their money just in case we found a great intervention for animals in the future, or donate it to the people who were trying to find effective animal right interventions. I think that this latter argument was correct, but I didn't make it exclusively." (2) "In 2014 and early 2015, I didn't pay as much attention to OpenPhil as I should have. [...] Being wrong about OpenPhil's values is forgivable, but what was really dumb is that I didn't realize how incredibly important it was to my life plan that I understand OpenPhil's values." (3) "I wish I'd thought seriously about donating to MIRI sooner. [...] Like my error #2, this is an example of failing to realize that when there's an unknown which is extremely important to my plans but I'm very unsure about it and haven't really seriously thought about it, I should probably try to learn more about it."
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity2016-05-06Holden Karnofsky Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Review of current state of cause areaAI safetyIn this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority.
Our Progress in 2015 and Plans for 20162016-04-29Holden Karnofsky Open PhilanthropyOpen Philanthropy Broad donor strategyScientific research|AI safetyThe blog post compares progress made by the Open Philanthropy Project in 2015 against plans laid out in https://www.openphilanthropy.org/blog/open-philanthropy-project-progress-2014-and-plans-2015 and then lays out plans for 2016. The post notes the following in relation to its 2015 plans: it succeeded in hiring and expanding the team, but had to scale back on its scientific research ambitions in mid-2015. For 2016, Open Phil plans to focus on scaling up its grantmaking and reducing its focus on hiring. AI safety is declared as an intended priority for 2016, with Daniel Dewey working on it full-time, and Nick Beckstead and Holden Karnofsky also devoting significant time to it. The post also notes plans to continue work on separating the Open Philanthropy Project from GiveWell.
Initial Grants to Support Corporate Cage-free Reforms2016-03-31Lewis Bollard Open PhilanthropyOpen Philanthropy The Humane League Mercy For Animals The Humane Society of the United States Broad donor strategyAnimal welfare/factory farming/chicken/cage-free campaign/internationalWritten to explain a set of grants already made in 2016-02 to support cage-free reforms in the United States for egg-laying chickens. The blog post had a heated comment section, potentially influencing future Open Phil communication on the subject.
Potential Global Catastrophic Risk Focus Areas2014-06-26Alexander Berger Open PhilanthropyOpen Philanthropy Broad donor strategyAI safety|Biosecurity and pandemic preparedness|Global catastrophic risksIn this blog post originally published at https://blog.givewell.org/2014/06/26/potential-global-catastrophic-risk-focus-areas/ Alexander Berger goes over a list of seven types of global catastrophic risks (GCRs) that the Open Philanthropy Project has considered. He details three promising areas that the Open Philanthropy Project is exploring more and may make grants in: (1) Biosecurity and pandemic preparedness, (2) Geoengineering research and governance, (3) AI safety. For the AI safety section, there is a note from Executive Director Holden Karnofsky saying that he sees AI safety as a more promising area than Berger does.
Thoughts on the Singularity Institute (SI) (GW, IR)2012-05-11Holden Karnofsky LessWrongOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute. This evaluation would be the starting point for the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI, but Karnofsky and the Open Philanthropy Project would later update in favor of AI safety in general and MIRI in particular; this evolution is described in https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit
Singularity Institute for Artificial Intelligence2011-04-30Holden Karnofsky GiveWellOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyIn this email thread on the GiveWell mailing list, Holden Karnofsky gives his views on the Singularity Institute for Artificial Intelligence (SIAI), the former name for the Machine Intelligence Research Institute (MIRI). The reply emails include a discussion of how much weight to give to, and what to learn from, the support for MIRI by Peter Thiel, a wealthy early MIRI backer. In the final email in the thread, Holden Karnofsky includes an audio recording with Jaan Tallinn, another wealthy early MIRI backer. This analysis likely influences the review https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) published by Karnofsky the following year, as well as the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI.
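The 2019-02-27 "Committee for Effective Altruism Support" entry above describes a concrete allocation mechanism: each committee member "votes" an allocation of a fixed budget across grantees (optionally reserving part for later giving), and the per-grantee votes are averaged to produce final grant sizes. A minimal sketch of that averaging step; the budget, member votes, and grantee amounts below are entirely hypothetical:

```python
# Hypothetical illustration of the vote-averaging described in the
# Committee for Effective Altruism Support entry: each member allocates
# a fixed budget (possibly reserving some for later giving), and the
# per-grantee allocations are averaged. All numbers are made up.
BUDGET = 10_000_000  # hypothetical budget in USD

votes = [  # one allocation per committee member
    {"MIRI": 4_000_000, "CEA": 3_000_000, "reserve": 3_000_000},
    {"MIRI": 5_000_000, "CEA": 4_000_000, "reserve": 1_000_000},
    {"MIRI": 3_000_000, "CEA": 5_000_000, "reserve": 2_000_000},
]
# Each vote must allocate the full budget (spending plus reserve).
assert all(sum(v.values()) == BUDGET for v in votes)

grantees = sorted({g for vote in votes for g in vote})
final = {g: sum(vote.get(g, 0) for vote in votes) / len(votes) for g in grantees}
print(final)
```

Averaging guarantees the final amounts also sum to the budget, since each vote does.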

Full list of donations in reverse chronological order (168 donations)

Graph of top 10 donees by amount, showing the timeframe of donations

Graph of donations and their timeframes
Donee | Amount (current USD) | Amount rank (out of 168) | Donation date | Cause area | URL | Influencer | Notes
Centre for the Governance of AI | 2,537,600.00 | 17 | 2021-12 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-field-building | Luke Muehlhauser | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support activities related to building the field of AI governance research. GovAI intends to use this funding to conduct AI governance research and to develop a talent pipeline for those interested in entering the field."

Other notes: Grant made via the Centre for Effective Altruism. Intended funding timeframe in months: 24.
Redwood Research | 9,420,000.00 | 6 | 2021-11 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/redwood-research-general-support | Nick Beckstead | Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. Redwood Research is a new research institution that conducts research to better understand and make progress on AI alignment in order to improve the long-run future."

Other notes: This is a total across four grants.
Stanford University (Earmark: Dimitris Tsipras) | 330,792.00 | 91 | 2021-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras | Catherine Olsson | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Dimitris Tsipras on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
Stanford University (Earmark: Shibani Santurkar)330,792.00912021-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkarCatherine Olsson Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Shibani Santurkar on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
University of Southern California (Earmark: Robin Jia)320,000.00962021-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-researchCatherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Robin Jia on adversarial robustness and out-of-distribution generalization as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36
Center for Security and Emerging Technology38,920,000.0022021-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support-august-2021Luke Muehlhauser Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "CSET is a think tank, incubated by our January 2019 support, dedicated to policy analysis at the intersection of national and international security and emerging technologies. This funding is intended to augment our original support for CSET, particularly for its work on security and artificial intelligence."

Other notes: Intended funding timeframe in months: 36.
Rethink Priorities495,685.00762021-07AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rethink-priorities-ai-governanceLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research projects on topics related to AI governance."

Donor reason for selecting the donee: The grant page says: "We believe that Rethink Priorities’ research outputs may help inform our AI policy grantmaking strategy."
Mercy For Animals3,000,000.00132021-06Animal welfare/factory farming/chicken/broiler chicken/cage-free/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-campaigns-2021Lewis Bollard Amanda Hungerford Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate engagement on animal welfare. MFA plans to use this funding to continue its cage-free and broiler welfare corporate campaigns in Latin America and the United States, respectively."

Donor reason for selecting the donee: The grant follows up on several past grants for similar uses, and the reasons for those past grants, including the donee's strong track record, likely still apply. No reason is explicitly mentioned on the grant page.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason is given for the amount, but the amount is about half the amount of the previous two-year grant. The smaller grant amount may reflect a shorter timeframe of this grant.

Donor reason for donating at this time (rather than earlier or later): The grant is made around the end of the two-year timeframe of the previous grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-campaigns-2019 (2019-07) that had a very similar intended use of funds. It is likely motivated by the end of the previous grant.

Other notes: Affected countries: United States|Latin America.
Carnegie Mellon University (Earmark: Zico Kolter)330,000.00932021-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/carnegie-mellon-adversarial-robustness-kolterCatherine Olsson Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor Zico Kolter on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes grants earlier and later in the year to early-stage researchers at UC Berkeley, University of Tübingen, Stanford University, and University of Southern California.

Other notes: Intended funding timeframe in months: 36.
Daniel Dewey175,000.001172021-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/daniel-dewey-ai-alignment-projectNick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support "work on an AI alignment project and related field-building efforts. Daniel plans to use this funding to produce writing and reports summarizing existing research and investigating potentially valuable projects relevant to AI alignment, with the goal of helping junior researchers and others understand how they can contribute to the field."
University of Cambridge (Earmark: David Krueger)250,000.001062021-04AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-cambridge-david-kruegerDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor David Krueger’s machine learning research."

Other notes: Grant made via Cambridge in America. Intended funding timeframe in months: 48.
Open Phil AI Fellowship (Earmark: Collin Burns|Jared Quincy Davis|Jesse Mu|Meena Jagadeesan|Tan Zhi-Xuan)1,300,000.00342021-04AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-classDaniel Dewey Donation process: According to the grant page "These [five] fellows were selected from 397 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarships to five machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): An explicit reason for the amount is not specified, and the total amount is lower than in previous years, but the amount per researcher ($260,000) is a little higher than in previous years. It is likely that the amount per researcher is determined first and that the total is the sum of these per-researcher amounts.

Donor reason for donating at this time (rather than earlier or later): This is the fourth of annual sets of grants, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Other notes: The initial grant page only listed four of the five fellows and an amount of $1,000,000. The fifth fellow, Tan Zhi-Xuan, was added later and the amount was increased to $1,300,000.
The Wilson Center291,214.001002021-04AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-training-programLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to pilot an AI policy training program. The Wilson Center is a non-partisan policy forum for tackling global issues through independent research and open dialogue."
Hypermind (Earmark: Metaculus)121,124.001282021-03AI safety/forecastinghttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/hypermind-ai-forecasting-tournamentLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to collaborate with Metaculus on an AI development forecasting tournament. Forecasts will cover the themes of hardware and supercomputing, performance and benchmarks, research trends, and economic and financial impact."
Brian Christian66,000.001462021-03AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/brian-christian-alignment-book-promotionNick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "with Brian Christian to support the promotion of his book The Alignment Problem: Machine Learning and Human Values."

Donor reason for selecting the donee: The grant page says: "Our potential risks from advanced artificial intelligence team hopes that the book will generate interest in AI alignment among academics and others."
FAI Farms600,000.00602021-03Animal welfare/factory farming/chicken/cage-freehttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-cage-free-egg-certification-and-summitLewis Bollard Donation process: This grant appears to be a result of successful progress funded by a previous grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-cage-free-egg-investigation to launch a cage-free egg certification project. Also, the grant page says: "This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support cage-free certification work — in partnership with the China Chain Store and Franchise Association — and a summit promoting poultry welfare and cage-free egg production. The certification project’s aim is to develop a large-scale production and certification model for cage-free eggs in China, the world’s largest egg producer."

Donor reason for selecting the donee: No explicit reason is given, but the grant page hints at the scale of the problem being addressed: "The certification project’s aim is to develop a large-scale production and certification model for cage-free eggs in China, the world’s largest egg producer." Open Philanthropy has previously explained its support for cage-free campaigns at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms and in other blog posts.

Other notes: Intended funding timeframe in months: 24; affected countries: China.
University of Tübingen (Earmark: Wieland Brendel)590,000.00622021-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tubingen-robustness-research-brendelCatherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support early-career research by Wieland Brendel on robustness as a means to improve AI safety."

Donor reason for selecting the donee: Open Phil made five grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research for "adversarial robustness research" in January and February 2021, around the time of this grant. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating at this time (rather than earlier or later): Open Phil made five grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research for "adversarial robustness research" in January and February 2021, around the time of this grant. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
University of Tübingen (Earmark: Matthias Hein)300,000.00972021-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-heinCatherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Matthias Hein on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
University of California, Berkeley (Earmark: Dawn Song)330,000.00932021-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-songCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Dawn Song on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
University of California, Berkeley (Earmark: David Wagner)330,000.00932021-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagnerCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor David Wagner on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Massachusetts Institute of Technology (Earmark: Aleksander Madry)1,430,000.00312021-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-researchCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Aleksander Madry on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Berryville Institute of Machine Learning (Earmark: Gary McGraw)150,000.001192021-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berryville-institute-of-machine-learningCatherine Olsson Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research led by Gary McGraw on machine learning security. The research will focus on building a taxonomy of known attacks on machine learning, exploring a hypothesis of representation and machine learning risk, and performing an architectural risk analysis of machine learning systems."

Donor reason for selecting the donee: The grant page says: "Our potential risks from advanced artificial intelligence team hopes that the research will help advance the field of machine learning security."
University of California, Santa Cruz (Earmark: Cihang Xie)265,000.001042021-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustnessCatherine Olsson Nick Beckstead Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support early-career research by Cihang Xie on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Center for Security and Emerging Technology8,000,000.0072021-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-supportLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says "This funding is intended to augment our original support for CSET, particularly for its work on the intersection of security and artificial intelligence."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support-august-2021 for a much larger amount suggests continued satisfaction with the grantee.
Center for Human-Compatible AI11,355,246.0042021-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2021Nick Beckstead Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "The multi-year commitment and increased funding will enable CHAI to expand its research and student training related to potential risks from advanced artificial intelligence."

Other notes: This is a renewal of the original founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai made August 2016. Intended funding timeframe in months: 60.
The Humane League UK507,900.00702020-12Animal welfare/factory farming/chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/the-humane-league-uk-general-supportAmanda Hungerford Lewis Bollard Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "THL-UK has secured a number of broiler welfare and cage-free commitments from major UK and international restaurant chains and food service companies. This funding is intended to enable THL-UK to fill positions focused on European and global corporate welfare campaigns."

Donor reason for selecting the donee: The grant page says: "THL-UK has secured a number of broiler welfare and cage-free commitments from major UK and international restaurant chains and food service companies." The grant page also links to past support https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-broiler-welfare-campaigns to The Humane League (not the UK branch).
University of Toronto (Earmark: Chris Maddison)520,000.00672020-12AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-toronto-machine-learning-researchDaniel Dewey Catherine Olsson Donation process: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research on understanding, predicting, and controlling machine learning systems, led by Professor Chris Maddison, a former Open Phil AI Fellow. This funding is intended to enable three students and a postdoctoral researcher to work with Professor Maddison on the research."

Donor reason for selecting the donee: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Other notes: Intended funding timeframe in months: 48.
Fórum Nacional de Proteção e Defesa Animal100,000.001342020-12Animal welfare/factory farming/chicken/layer chicken/pig/cage-freehttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-crate-and-cage-free-campaigning-in-brazil-2020Amanda Hungerford Lewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work campaigning to reduce the use of battery cages for layer hens and gestation crates for pigs in Brazil."

Donor reason for donating at this time (rather than earlier or later): The timing is likely determined by the previous two-year grant reaching its end.

Other notes: Affected countries: Brazil.
Animal Friends Jogja78,000.001442020-12Animal welfare/factory farming/chicken/fishhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-friends-jogjaAmanda Hungerford Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support its farm animal welfare work in Indonesia. This cooperation agreement will support Animal Friend Jogja’s animal welfare investigations, as well as its corporate campaigns and lobbying efforts promoting poultry and fish welfare."

Other notes: Intended funding timeframe in months: 24; affected countries: Indonesia.
Impact Alliance40,000.001502020-11Animal welfare/factory farming/chicken/cage-freehttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impact-alliance-cage-free-programLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work to secure corporate participation in a cage-free program in Asia."

Donor reason for selecting the donee: The grant page says: "Our farm animal welfare team believes that this funding could help advance the implementation of cage-free systems across Asia."

Other notes: Grant made via Textile Exchange.
L2141,642,046.00252020-11Animal welfare/factory farming/chicken/cage-free/broiler chicken/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/L214-broiler-chicken-campaigns-2020Lewis Bollard Amanda Hungerford Donation process: Based on the grant write-up, evaluation of L214's progress since the previous grant appears to have been part of the grantmaking process.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support fundraising, professionalization, investigations, and broiler welfare advocacy in France. [...] This funding is intended to support additional welfare campaigns, investigations, and fundraising."

Donor reason for selecting the donee: The grant page says: "Since our November 2017 support, L214 has secured broiler welfare and cage-free commitments from a number of major French supermarket chains and companies." The current grant is for continuing and expanding on similar activities.

Donor reason for donating that amount (rather than a bigger or smaller amount): This is a total across two grants. The grant page initially gave a smaller total of 1,432,130 (1,228,000 EUR) for just one grant, and was updated around June 2021 to cover both grants with the combined amount. Currency info: donation given as 1,408,000.00 EUR (conversion done via donor calculation).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed; the grant happens about one year after the expiration of the previous two-year grant.
Intended funding timeframe in months: 24

Other notes: This is a total across two grants. Affected countries: France.
University of Bern (Earmark: Michael Toscano)410,000.00832020-11Animal welfare/factory farming/chicken/layer chicken/cage-free/researchhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/university-of-bern-layer-hensAmanda Hungerford Lewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the University of Bern to support research led by Michael Toscano on breeding layer hens better adapted to cage-free environments."

Donor reason for selecting the donee: The grant fits in with Open Philanthropy's funding of corporate campaigns pushing for cage-free systems for chickens, an effort that https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms documents. The research focus of this grant is relatively unusual for Open Phil's cage-free campaign spending, but it is similar to a previous grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/university-of-bern-higher-welfare-cage-free-systems to the same grantee.

Other notes: Intended funding timeframe in months: 72.
AI Impacts50,000.001482020-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020Tom Davidson Ajeya Cotra Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."
Compassion in World Farming1,228,407.00352020-11Animal welfare/factory farming/chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-world-farming-farm-animal-welfare-in-asiaAmanda Hungerford Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support "work to advance farm animal welfare in Asia. CIWF plans to engage in corporate outreach on poultry welfare and to re-grant funds to farm animal welfare groups throughout Asia."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant amount is £964,600 ($1,228,407 at the time of conversion).

Other notes: This is a total across two grants.
Massachusetts Institute of Technology (Earmark: Neil Thompson)275,344.001022020-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/massachusetts-institute-of-technology-ai-trends-and-impacts-researchLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says "The research will consist of projects to learn how algorithmic improvement affects economic growth, gather data on the performance and compute usage of machine learning methods, and estimate cost models for deep learning projects."
Smitha Milli (Earmark: Smitha Milli)370.001682020-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/smitha-milli-participatory-approaches-machine-learning-workshopDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Participatory Approaches to Machine Learning, a virtual workshop held during the 2020 International Conference on Machine Learning."

Donor reason for selecting the donee: The donee had previously been a recipient of the Open Phil AI Fellowship, so it is likely that that relationship helped pave the way for this grant.

Donor reason for donating that amount (rather than a bigger or smaller amount): No specific reasons are given for the amount; this is an unusually small grant size by the donor's standards. The amount is likely determined by the limited funding needs of the grantee.

Donor reason for donating at this time (rather than earlier or later): The 2020 International Conference on Machine Learning was held in July 2020, so this grant seems to have been made after the workshop it supported had already concluded. No details on timing are provided.
Intended funding timeframe in months: 1
Center for a New American Security (Earmark: Paul Scharre)24,350.001572020-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projectsLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work exploring possible projects related to AI governance."

Donor reason for selecting the donee: No explicit reason is provided for the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre), suggesting a broader endorsement.

Donor reason for donating at this time (rather than earlier or later): No explicit reason is provided for the timing of the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre).
Center for a New American Security (Earmark: Paul Scharre)116,744.001302020-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projectsLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work by Paul Scharre on projects related to AI and security."

Donor reason for selecting the donee: No explicit reason is provided for the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre), suggesting a broader endorsement.

Donor reason for donating at this time (rather than earlier or later): No explicit reason is provided for the timing of the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre).
Center for Strategic and International Studies118,307.001292020-09AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competitionLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor retrospective of the donation: The increase in grant amount in May 2021, from $75,245 to $118,307, suggests that Open Phil was satisfied with initial progress on the grant.

Other notes: The grant amount was updated in May 2021. The original amount was $75,245.
Center for International Security and Cooperation67,000.001452020-09AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competitionLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.
Rice, Hadley, Gates & Manuel LLC25,000.001542020-09AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-riskLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.
The Humane League3,600,000.00122020-09Animal welfare/factory farming/chicken/broiler chicken/cage-free/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-open-wing-alliance-2020Amanda Hungerford Lewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to continue to support program grants and associated costs for the Open Wing Alliance. This funding will support members of the Open Wing Alliance who are working to secure corporate cage-free and broiler pledges and build an effective farm animal welfare movement in more nations."

Donor reason for selecting the donee: The grant page says: "Our farm animal welfare team believes that the Open Wing Alliance has a strong track record in identifying promising groups in new countries, training them in corporate campaigning, and coordinating them to achieve global corporate wins."

Donor reason for donating at this time (rather than earlier or later): The grant is made a few months before the timeframe for the previous grant to the Open Wing Alliance was scheduled to end; that might partly explain the timing.
Intended funding timeframe in months: 24
World Animal Net37,600.001512020-09Animal welfare/factory farming/chicken/broiler chicken/pighttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/world-animal-net-broiler-chicken-and-pig-welfare-guidelinesLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to engage with international financial institutions, including the World Bank, on the adoption of broiler chicken and pig welfare guidelines for agribusiness projects."

Other notes: Intended funding timeframe in months: 24.
Catalyst350,000.00892020-08Animal welfare/factory farming/pig/chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/catalyst-farm-animal-welfare-in-thailandAmanda Hungerford Lewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to advocate for farm animal welfare in Thailand. This funding is intended to enable the new organization to advocate for pig and chicken welfare, specifically by working with the government to, among other things, provide welfare training and develop humane certification standards."

Other notes: Intended funding timeframe in months: 24; affected countries: Thailand.
Group Nine Media680,448.00572020-07Animal welfare/factory farming/chicken/fishhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/group-nine-media-factory-farming-videos-2020Lewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to continue to produce videos on factory farming topics. These videos could cover farm animal welfare campaigns, the welfare of chicken, fish, and other animals, and other relevant topics."

Donor reason for selecting the donee: The grant page says: "Our farm animal welfare team believes that the videos could increase the salience of farm animal welfare issues among the public."

Other notes: Intended funding timeframe in months: 24.
Andrew Lohn (Earmark: Andrew Lohn)15,000.001612020-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/andrew-lohn-paper-machine-learning-model-robustnessLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to write a paper on machine learning model robustness for safety-critical AI systems."

Donor reason for selecting the donee: Nothing is specified, but the grantee's work had previously been funded by Open Phil via the RAND Corporation for AI assurance methods.
The Wilson Center496,540.00752020-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to organize additional in-depth AI policy seminars as part of its seminar series."

Donor reason for selecting the donee: The grant page says "We continue to believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor reason for donating that amount (rather than a bigger or smaller amount): No reason is given for the amount. The grant is a little more than the original $368,440 two-year grant, so it is likely that the additional amount is expected to double the frequency of AI policy seminars.

Donor reason for donating at this time (rather than earlier or later): The grant is a top-up rather than a renewal; the previous two-year grant was made in February 2020. No specific reasons for timing are given.

Donor retrospective of the donation: A later grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-training-program in the same general area suggests Open Philanthropy's continued satisfaction with the grantee.
Centre for the Governance of AI450,000.00792020-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-general-supportCommittee for Effective Altruism Support Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "GovAI intends to use these funds to support the visit of two senior researchers and a postdoc researcher."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" but does not link to specific past writeups (Open Phil has not previously made grants directly to GovAI).

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public.

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-field-building (December 2021) suggests continued satisfaction with the grantee.

Other notes: Grant made via the Berkeley Existential Risk Initiative.
International Conference on Learning Representations3,500.001662020-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ICLR-machine-learning-paper-awardsDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to the International Conference on Learning Representations to provide awards for the best papers submitted as part of the “Towards Trustworthy Machine Learning” virtual workshop.
Eurogroup for Animals635,000.00582020-05Animal welfare/factory farming/chicken/broiler chicken/layer chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/eurogroup-animals-eu-chicken-welfare-advocacy-2020Amanda Hungerford Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support EU advocacy work for layer hen and broiler chicken welfare. This funding will enable Eurogroup for Animals to carry out EU welfare campaigns, provide regrants to cage-free advocacy groups, and research layer hen and broiler chicken welfare."

Other notes: Currency info: donation given as 586,000.00 EUR (conversion done via donor calculation); intended funding timeframe in months: 24; affected countries: European Union.
Sinergia Animal800,000.00512020-05Animal welfare/factory farming/chicken/cage-free/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/sinergia-animal-corporate-cage-free-campaignsAmanda Hungerford Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate cage-free campaigns and investigations across Latin America. Sinergia Animal intends to use this funding to secure cage-free corporate commitments and carry out investigations in Colombia, Argentina, Chile, Ecuador, and Peru, which have a combined total of approximately 184 million layer hens."

Donor reason for selecting the donee: The grant page hints at the scale of factory farming in the target countries as being a factor: "Sinergia Animal intends to use this funding to secure cage-free corporate commitments and carry out investigations in Colombia, Argentina, Chile, Ecuador, and Peru, which have a combined total of approximately 184 million layer hens."

Other notes: Intended funding timeframe in months: 24; affected countries: Argentina|Chile|Colombia|Ecuador|Peru.
Open Phil AI Fellowship (Earmark: Alex Tamkin|Clare Lyle|Cody Coleman|Dami Choi|Dan Hendrycks|Ethan Perez|Frances Ding|Leqi Liu|Peter Henderson|Stanislav Fort)2,300,000.00202020-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-classCatherine Olsson Daniel Dewey Donation process: According to the grant page: "These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarships to ten machine learning researchers over five years.

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests." In a comment reply https://forum.effectivealtruism.org/posts/DXqxeg3zj6NefR9ZQ/open-philanthropy-our-progress-in-2019-and-plans-for-2020#BCvuhRCg9egAscpyu (GW, IR) on the Effective Altruism Forum, grant investigator Catherine Olsson writes: "But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is comparable to the total amount of the 2019 fellowship grants, though it is distributed among a slightly larger pool of people.

Donor reason for donating at this time (rather than earlier or later): This is the third in a series of annual grant rounds, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirms that the program would continue.

Other notes: Announced: 2020-05-12.
World Economic Forum50,000.001482020-04AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/world-economic-forum-global-ai-council-workshopDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a workshop hosted by the Global AI Council and co-developed with the Center for Human-Compatible AI at UC Berkeley. The workshop will facilitate the development of AI policy recommendations that could lead to future economic prosperity, and is part of a series of workshops examining solutions to maximize economic productivity and human wellbeing."

Other notes: Intended funding timeframe in months: 1.
Equalia150,000.001192020-04Animal welfare/factory farming/chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/equalia-broiler-welfare-cage-free-campaignsLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate campaigns to improve the welfare of chickens and caged hens in Spain and international investigations into welfare standards for chickens and laying hens in cages."

Other notes: Affected countries: Spain.
Compassion in World Farming USA78,750.001432020-04Animal welfare/factory farming/chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/CIWF-USA-global-eggtrack-programAmanda Hungerford Lewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support the global expansion of its EggTrack program. This funding will support CIWF USA’s work tracking and reporting on multinational companies’ progress implementing cage-free egg commitments."
Johns Hopkins University (Earmark: Jared Kaplan|Brice Ménard)55,000.001472020-03AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/johns-hopkins-kaplan-menardLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support the initial research of Professors Jared Kaplan and Brice Ménard on principles underlying neural network training and performance."
Study and Training Related to AI Policy Careers (Earmark: Emefa Agawu|Karson Elmgren|Matthew Gentzel|Becca Kagan|Benjamin Mueller)594,420.00612020-03AI safety/talent pipelinehttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/study-and-training-related-to-ai-policy-careersLuke Muehlhauser Donation process: This is a scholarship program run by Open Philanthropy. Applications were sought at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers with the last date for applications being 2019-10-15.

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant is "flexible support to enable individuals to pursue and explore careers in artificial intelligence policy." Recipients include Emefa Agawu, Karson Elmgren, Matthew Gentzel, Becca Kagan, and Benjamin Mueller. The ways that specific recipients intend to use the funds are not described, but https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#examples gives general guidance on the kinds of uses Open Philanthropy was expecting to see when it opened applications.

Donor reason for selecting the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#goal says: "The goal of this program is to provide flexible support that empowers exceptional people who are interested in positively affecting the long-run effects of transformative AI via careers in AI policy, which we see as an important and neglected issue." https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#appendix provides links to Open Philanthropy's other writing on the importance of the issue.

Donor reason for donating that amount (rather than a bigger or smaller amount): https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#summary says: "There is neither a maximum nor a minimum number of applications we intend to fund; rather, we intend to fund the applications that seem highly promising to us."

Donor reason for donating at this time (rather than earlier or later): Timing is likely determined by the time taken to review all applications after the close of applications on 2019-10-15.

Donor retrospective of the donation: As of early 2022, there do not appear to have been further rounds of grantmaking from Open Philanthropy for this purpose.

Other notes: Open Philanthropy runs a related fellowship program, the Open Phil AI Fellowship, which announces new grants on an annual cadence, though individual grants are often multi-year. The Open Phil AI Fellowship grantees are mostly people working on technical AI safety, whereas this grant is focused on AI policy work. Moreover, the Open Phil AI Fellowship targets graduate-level research, whereas this grant targets study and training.
Royal Society for the Prevention of Cruelty to Animals425,000.00822020-03Animal welfare/factory farming/chicken/broiler chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/rspca-broiler-chicken-welfare-outreachLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support its outreach to improve the welfare of broiler chickens in the United Kingdom. RSPCA plans to use these funds to support corporate campaigns, industry events and awards, advertising, reports and materials, and other outreach expenses."

Other notes: Currency info: donation given as 329,000.00 GBP (conversion done via donor calculation); intended funding timeframe in months: 24; affected countries: United Kingdom.
Alianima130,000.001272020-03Animal welfare/factory farming/chicken/layer chicken/pig/cage-free/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/alianima-general-supportLewis Bollard Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Alianima works to secure corporate pledges to reduce the use of battery cages for layer hens and gestation crates for pigs in Brazil."

Other notes: Affected countries: Brazil.
WestExec540,000.00642020-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/westexec-report-on-assurance-in-machine-learning-systemsLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to support the production and distribution of a report on advancing policy, process, and funding for the Department of Defense’s work on test, evaluation, verification, and validation for deep learning systems."

Donor retrospective of the donation: The increases in grant amounts suggest that the donor was satisfied with initial progress.

Other notes: The grant amount was updated in October and November 2020 and again in May 2021. The original grant amount had been $310,000. Announced: 2020-03-20.
Animal Equality1,901,000.00232020-02Animal welfare/factory farming/chicken/broiler chicken/cage-free/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-cage-free-and-broiler-welfareLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support cage-free and broiler welfare. Animal Equality plans to use these funds to support work in Italy, Spain, Germany, and the UK, including investigations, fundraising, and general operations."

Donor reason for selecting the donee: The grant page says: "Animal Equality has helped secure cage-free and broiler welfare wins and conducted investigations in Europe, and plans to use these funds to continue its work."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is very similar to that of a two-year grant ($2,110,460) made in November 2017 covering the same four countries. However, a separate grant made in June 2018 covered two of the countries, which complicates the comparison.

Donor reason for donating at this time (rather than earlier or later): The timing roughly coincides with the expiration of the November 2017 support. No explicit reasons for the timing are given.
Intended funding timeframe in months: 24

Other notes: This is a total of four grants (presumably one grant per country). Affected countries: Germany|Italy|Spain|United Kingdom.
Machine Intelligence Research Institute7,703,750.0082020-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020Claire Zabel Committee for Effective Altruism Support Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety.

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" with the most similar previous grant being https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 (February 2019). Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Centre for Effective Altruism ($4,146,795), 80,000 Hours ($3,457,284), and Ought ($1,593,333).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation.
Intended funding timeframe in months: 24

Other notes: The donee describes the grant in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ (2020-04-27) along with other funding it has received ($300,000 from the Berkeley Existential Risk Initiative and $100,000 from the Long-Term Future Fund). The fact that the grant is a two-year grant is mentioned here, but not in the grant page on Open Phil's website. The page also mentions that of the total grant amount of $7.7 million, $6.24 million is coming from Open Phil's normal funders (Good Ventures) and the remaining $1.46 million is coming from Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, as part of a funding partnership https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo announced November 11, 2019. Announced: 2020-04-10.
The Wilson Center368,440.00882020-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-february-2020Luke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to continue support for a series of in-depth AI policy seminars."

Donor reason for selecting the donee: The grant page says: "We continue to believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is similar to the previous grant of $400,000 over a similar time period (two years).

Donor reason for donating at this time (rather than earlier or later): The grant is made almost two years after the original two-year grant, so its timing is likely determined by the original grant running out.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020 suggests ongoing satisfaction with the grant outcomes. A later grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-training-program in the same general area suggests Open Philanthropy's continued satisfaction with the grantee.
Stanford University (Earmark: Dorsa Sadigh)6,500.001652020-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-ai-safety-seminarDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant "is intended to fund the travel costs for experts on AI safety to present at the [AI safety] seminar [led by Dorsa Sadigh]."

Other notes: Intended funding timeframe in months: 1.
RAND Corporation (Earmark: Andrew Lohn)30,751.001532020-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rand-corporation-research-on-the-state-of-ai-assurance-methodsLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support exploratory research by Andrew Lohn on the state of AI assurance methods."

Donor retrospective of the donation: A few months later, Open Phil would make a grant directly to Andrew Lohn for machine learning robustness research, suggesting that they were satisfied with the outcomes of this grant.

Other notes: Announced: 2020-03-19.
Press Shop (Earmark: Stuart Russell)17,000.001592020-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/press-shop-human-compatibleDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the publicity firm Press Shop to support expenses related to publicizing Professor Stuart Russell’s book Human Compatible: Artificial Intelligence and the Problem of Control."

Donor reason for selecting the donee: The grant page links this grant to past support for the Center for Human-Compatible AI (CHAI) where Russell is director, so the reason for the grant is likely similar to reasons for that past support. Grant pages: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019

Donor reason for donating at this time (rather than earlier or later): The grant is made shortly after the release of the book (book release date: October 8, 2019) so the timing is likely related to the release date.
FAI Farms105,000.001332020-01Animal welfare/factory farming/chicken/layer chicken/cage-freehttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-cage-free-egg-investigationLewis Bollard Donation process: The grant page says: "This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so."

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support its work launching a cage-free egg certification project in partnership with the China Chain Store and Franchise Association. The project’s aim is to develop a large-scale production and certification model for cage-free eggs in China, the world’s largest egg producer."

Donor reason for selecting the donee: No explicit reason is given, but the grant page hints at the scale of the problem being addressed: "The project’s aim is to develop a large-scale production and certification model for cage-free eggs in China, the world’s largest egg producer." Open Philanthropy has previously explained its support for cage-free campaigns at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms and in other blog posts.

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-cage-free-egg-certification-and-summit for continuation and scaling up of the work suggests that Open Philanthropy would be satisfied with the outcome of the grant.

Other notes: Affected countries: China.
Essere Animali462,974.00782020-01Animal welfare/factory farming/fish/chicken/pighttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/essere-animali-farm-animal-welfare-work-italy-2020Lewis Bollard Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support investigations and corporate campaigns on fish, chicken, and pig welfare in Italy."

Donor reason for selecting the donee: The grant page says: "Approximately 260 million farmed land animals and 140 million farmed fish are alive in Italy at any time. Essere Animali investigations at farms where fish, pigs, and chickens are raised and slaughtered have generated media coverage in Italy and elsewhere."

Other notes: Currency info: donation given as 420,000.00 EUR (conversion done via donor calculation); intended funding timeframe in months: 24; affected countries: Italy.
Center for Welfare Metrics784,586.00522020-01Animal welfare/factory farming/chicken/layer chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/center-for-welfare-metrics-impacts-of-animal-welfare-reforms-2020Lewis Bollard Donation process: The grant page says: "This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so."

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "Among other projects, the Center for Welfare Metrics plans to produce a report on the welfare impact of reforms for egg-laying hens, including a comparison of the prevalence, duration, and intensity of harms under various systems, including cages, enriched cages, and cage-free aviaries."

Donor reason for selecting the donee: The grant page says: "This analysis could inform farm animal welfare grantmaking decisions and assessment." The grant page also links to https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/cynthia-schuck-wladimir-alonso-daly-project-2019 as a similar previous grant.

Other notes: This is a total across two grants (both contracts). Intended funding timeframe in months: 36.
Environmental & Animal Society of Taiwan521,000.00662020-01Animal welfare/factory farming/chicken/fishhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/environmental-animal-society-taiwan-farm-animal-welfare-campaignsAmanda Hungerford Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support campaigns for layer hens, farmed fish, broiler hens, water fowl, and humane slaughter in Taiwan. EAST plans to hire scientists, campaigners, and outreach staff."

Other notes: This is a total across two grants. Intended funding timeframe in months: 24; affected countries: Taiwan.
Berkeley Existential Risk Initiative150,000.001192020-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-general-supportClaire Zabel Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "BERI seeks to reduce existential risks to humanity, and collaborates with other long-termist organizations, including the Center for Human-Compatible AI at UC Berkeley. This funding is intended to help BERI establish new collaborations."
Ought1,593,333.00272020-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020Committee for Effective Altruism Support Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment and to reducing potential risks from advanced artificial intelligence."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter"

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Machine Intelligence Research Institute ($7,703,750), Centre for Effective Altruism ($4,146,795), and 80,000 Hours ($3,457,284).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation.

Other notes: Announced: 2020-02-14.
Berkeley Existential Risk Initiative705,000.00552019-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 24; announced: 2019-12-13.
University of California, Berkeley (Earmark: Jacob Steinhardt)1,111,000.00392019-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-research-2019Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "This funding will allow Professor Steinhardt to fund students to work on robustness, value learning, aggregating preferences, and other areas of machine learning."

Other notes: This is the third year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Jacob Steinhardt) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 36; announced: 2020-02-19.
Center for Human-Compatible AI200,000.001132019-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019Daniel Dewey Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "CHAI plans to use these funds to support graduate student and postdoc research."

Other notes: Open Phil makes a $705,000 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 to the Berkeley Existential Risk Initiative (BERI) at the same time (November 2019) to collaborate with CHAI. Intended funding timeframe in months: 24; announced: 2019-12-20.
Ought1,000,000.00402019-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019Daniel Dewey Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020, made on the recommendation of the Committee for Effective Altruism Support, suggests that Open Phil would continue to have a high opinion of Ought's work.

Other notes: Intended funding timeframe in months: 24; announced: 2020-02-14.
SPCA Selangor134,000.001252019-10Animal welfare/factory farming/chicken/cage-freehttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/spca-selangor-farm-animal-welfareAmanda Hungerford Donation process: Grant made by the Open Philanthropy Action Fund, because of the funding being used for lobbying efforts

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to start a farm animal welfare program largely focused on a cage-free campaign for layer and broiler hens. SPCA Selangor plans to build connections to retailers and producers, attend trainings, workshops, and meetings, and reach out to the government in Malaysia, where millions of farmed birds are consumed each year."

Other notes: Intended funding timeframe in months: 24; affected countries: Malaysia.
Anima International1,700,000.00242019-10Animal welfare/factory farming/chicken/cage-free/broiler chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/anima-international-chicken-welfare-campaignsAmanda Hungerford Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support cage-free and broiler corporate campaigns. Anima International plans to use these funds to support campaigns, investigations, and communications, with a focus on cage-free egg campaigns in Ukraine and a mixture of cage-free egg and broiler chicken campaigns in Norway, Denmark, and Poland."

Other notes: Intended funding timeframe in months: 24; affected countries: Ukraine|Norway|Denmark|Poland.
FAI Farms132,400.001262019-09Animal welfare/factory farming/chicken/cage-freehttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-promoting-poultry-welfare-cage-free-eggs-chinaLewis Bollard Donation process: The grant page says: "This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so."

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support events promoting poultry welfare and cage-free egg production in China. The events include a summit for food companies and producers, a poultry welfare conference in partnership with the China Animal Health and Food Safety Innovation Alliance, and a technical seminar promoting cage-free production."

Donor retrospective of the donation: Later grants such as https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-cage-free-egg-investigation for similar work suggest continued satisfaction with the grantee.

Other notes: Affected countries: China.
Animal Kingdom Foundation | $220,866.00 | rank 111 | 2019-09 | Animal welfare/factory farming/chicken/layer chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-kingdom-foundation-corporate-campaigns-september-2019 | Amanda Hungerford

Donation process: Grant made via the Open Philanthropy Action Fund because the funding is being used for lobbying efforts.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate campaigns for layer hens, a model commercial farm, and efforts to secure certification standards and guidelines from the government in the Philippines, which is home to millions of farmed land animals."

Donor reason for selecting the donee: The grant page hints at the scale of factory farming in the Philippines: "the Philippines, which is home to millions of farmed land animals."

Other notes: This grant was announced concurrently with another grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-kingdom-foundation-corporate-campaigns-may-2019 (2019-05); the (identical) pages for both grants refer to the combined total of the two grants. Intended funding timeframe in months: 24; affected countries: Philippines.
Albert Schweitzer Foundation | $1,600,000.00 | rank 26 | 2019-08 | Animal welfare/factory farming/chicken/fish | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-general-support-2019 | Amanda Hungerford

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "The funding will allow the Albert Schweitzer Foundation to continue to pursue animal welfare reforms across Europe, including campaigns and litigation to improve the welfare of egg-laying hens, broiler chickens, farmed fish, and other animals."

Other notes: Intended funding timeframe in months: 24.
World Animal Protection | $557,466.00 | rank 63 | 2019-08 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/world-animal-protection-broiler-chicken-welfare-august-2019 | Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support campaigns aimed at mobilizing the food industry to improve their chicken welfare standards. WAP plans to use these funds to develop and produce campaign materials, engage with key stakeholders, and support travel, research, and salaries."

Other notes: Intended funding timeframe in months: 24.
Mercy For Animals | $6,638,000.00 | rank 9 | 2019-07 | Animal welfare/factory farming/chicken/broiler chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-campaigns-2019 | Lewis Bollard

Donation process: This larger grant appears to have been under consideration at the time of the May 2018 grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-us-broiler-chicken-welfare-corporate-campaigns, which said: "We expect to evaluate the merits of a longer renewal of our support to MFA closer to the end of 2018."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate engagement on animal welfare and capacity building. MFA plans to continue its broiler chicken campaigns and cage-free egg enforcement work in the U.S. and Canada and its cage-free egg campaigns in Brazil and Mexico."

Donor reason for selecting the donee: The grant follows up on several past grants for similar uses, and the reasons for those past grants, including a strong track record, probably apply. Nothing is explicitly mentioned on the grant page.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is provided; this is a much larger grant than any past grant to MFA. The grant page says: "The grant amount was updated in February, March, and July 2020, and in March 2021."

Donor reason for donating at this time (rather than earlier or later): This larger grant appears to have been under consideration at the time of the May 2018 grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-us-broiler-chicken-welfare-corporate-campaigns, which said: "We expect to evaluate the merits of a longer renewal of our support to MFA closer to the end of 2018." The timing of the grant was likely determined by the completion of that evaluation.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-campaigns-2021 (2021-06) for a very similar intended use of funds suggests continued satisfaction with the grantee.

Other notes: Affected countries: United States|Canada|Brazil|Mexico.
Animal Rights Center Japan | $274,000.00 | rank 103 | 2019-07 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-rights-center-japan-broiler-layer-hen-campaigns | Amanda Hungerford

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support broiler and layer hen campaigns, including campaigns on humane slaughter."

Donor reason for selecting the donee: The grant page says: "Hundreds of millions of farmed birds are consumed in Japan each year."

Other notes: Intended funding timeframe in months: 24; affected countries: Japan.
Sankalpa | $22,000.00 | rank 158 | 2019-07 | Animal welfare/factory farming/chicken/cage-free | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/sankalpa-farm-animal-welfare-workshop | Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to host a workshop on cage-free egg production in Brazil. Sankalpa had a commercial-scale free-range egg company from the UK and a Brazilian cage-free model farm lead a workshop with local producers, industry representatives, NGOs, certifiers, retailers, and investors that they hope will kick off a technical assistance process for cage-free egg production in Brazil."

Other notes: Affected countries: Brazil.
Sinergia Animal | $187,600.00 | rank 116 | 2019-06 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/sinergia-animal-southeast-asia-animal-welfare | Amanda Hungerford

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support farm animal investigations and corporate campaigns in Southeast Asia. Sinergia Animal specifically plans to use these funds to launch corporate cage-free egg campaigns, as the region has a large number of farmed birds."

Other notes: Intended funding timeframe in months: 24.
Federation of Indian Animal Protection Organisations | $445,000.00 | rank 80 | 2019-06 | Animal welfare/factory farming/chicken/cattle/pig | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/federation-indian-animal-protection-organisations-india-farm-animal-welfare-2019 | Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "This funding will support work to improve the welfare of chickens at slaughter and dairy cows as well as support movement building and exploratory work on fish farming in India."

Donor reason for donating at this time (rather than earlier or later): The grant timing is around the end of the timeframe of the previous two-year grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/federation-indian-animal-protection-organisations-india-animal-welfare-reform (2017-07).
Intended funding timeframe in months: 24

Other notes: Affected countries: India.
Animal Kingdom Foundation | $17,000.00 | rank 159 | 2019-05 | Animal welfare/factory farming/chicken/layer chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-kingdom-foundation-corporate-campaigns-may-2019 | Amanda Hungerford

Donation process: Grant made via the Open Philanthropy Action Fund because the funding is being used for lobbying efforts.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate campaigns for layer hens, a model commercial farm, and efforts to secure certification standards and guidelines from the government in the Philippines, which is home to millions of farmed land animals."

Donor reason for selecting the donee: The grant page hints at the scale of factory farming in the Philippines: "the Philippines, which is home to millions of farmed land animals."

Other notes: This grant was announced concurrently with another grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-kingdom-foundation-corporate-campaigns (2019-09); the (identical) pages for both grants refer to the combined total of the two grants. Intended funding timeframe in months: 24; affected countries: Philippines.
Open Phil AI Fellowship (Earmark: Aidan Gomez|Andrew Ilyas|Julius Adebayo|Lydia T. Liu|Max Simchowitz|Pratyusha Kalluri|Siddharth Karamcheti|Smitha Milli) | $2,325,000.00 | rank 19 | 2019-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class | Daniel Dewey

Donation process: According to the grant page: "These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarship support to eight machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is about double that of the 2018 grant, although the number of people supported is only one more (8 instead of 7). No explicit comparison of grant amounts is made on the grant page.
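As a rough sanity check on the figures above, the per-fellow arithmetic can be sketched as follows (the even split across fellows is an illustrative assumption; actual allocations per fellow may differ):

```python
# Hypothetical even-split breakdown of the 2019 Open Phil AI Fellowship grant.
# Total amount, class size, and timeframe are from the grant record above;
# the even split per fellow is an assumption for illustration only.
total_grant = 2_325_000   # total award in USD
num_fellows = 8           # fellows in the 2019 class
years = 5                 # intended funding timeframe (60 months)

per_fellow = total_grant / num_fellows    # USD per fellow over the fellowship
per_fellow_per_year = per_fellow / years  # USD per fellow per year
```

Under this assumption, each fellow's support works out to roughly $290,000 over the five years, or about $58,000 per year.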

Donor reason for donating at this time (rather than earlier or later): This is the second of the annual rounds of grants, decided through an annual application process, with the announcement made in May/June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020) and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirm that the program continued. Among the grantees, Smitha Milli would receive further support from Open Philanthropy (https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/smitha-milli-participatory-approaches-machine-learning-workshop), indicating continued confidence in the grantee.

Other notes: Announced: 2019-05-17.
FAI Farms | $107,200.00 | rank 132 | 2019-04 | Animal welfare/factory farming/chicken/cage-free | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-cage-free-eggs-china | Lewis Bollard

Donation process: Discretionary grant made via the Open Philanthropy Action Fund. The grant page says: "This project was supported through a contractor agreement. While we do not typically publish pages for contractor agreements, we chose to write about this funding because we view it as conceptually similar to an ordinary grant, despite its structure as a contract due to the recipient’s organizational form."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Chinese farm animal welfare auditor trainings, egg farm audits, and a cage-free conference. These projects will promote cage-free production in China, the world’s largest egg producer, and aim to reduce the suffering of egg-laying hens."

Donor retrospective of the donation: The later grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/fai-farms-promoting-poultry-welfare-cage-free-eggs-china for very similar work, along with more grants in the coming years, suggests continued satisfaction with the grantee.

Other notes: Affected countries: China; announced: 2019-06-07.
World Animal Protection | $781,498.00 | rank 53 | 2019-04 | Animal welfare/factory farming/chicken/broiler chicken/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/world-animal-protection-se-asia-broiler | Amanda Hungerford

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate broiler chicken campaigns in Southeast Asia with a focus on Thailand and Indonesia. WAP plans to increase its broiler chicken campaigns in Thailand and perform scoping research to lay the groundwork for future campaigns in Indonesia, as both Thailand and Indonesia have large numbers of farmed birds."

Other notes: Intended funding timeframe in months: 24; affected countries: Thailand|Indonesia; announced: 2019-06-26.
The Humane League | $1,565,000.00 | rank 28 | 2019-03 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-open-wing-alliance-2019 | Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support program grants, events, and associated costs for the Open Wing Alliance. This funding will support members of the Open Wing Alliance who are working to secure corporate cage-free pledges and build an effective farm animal welfare movement in more nations.

Donor reason for selecting the donee: No explicit reasons are given, but they are likely the same as the reasons for the original support https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-open-wing-alliance-2017 (September 2017).

Donor reason for donating at this time (rather than earlier or later): Timing is not explicitly discussed, but it is likely because the timeframe for the earlier grants is ending.
Intended funding timeframe in months: 24

Other notes: Announced: 2019-04-26.
Sinergia Animal | $245,000.00 | rank 109 | 2019-03 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/sinergia-animal-general-support | Lewis Bollard

Donation process: Discretionary grant

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says the grant is "to pursue corporate campaigns across Latin America."

Donor reason for selecting the donee: The grant page says: "Approximately 500 million layer hens and 2 billion broiler chickens are alive in Latin America at any time, and corporate campaigners have had some success in Latin America, securing numerous cage-free commitments in the last two years. We believe Sinergia Animal played a significant role in some of those campaigns, including some of the first wins in Argentina, Chile, and Colombia."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/sinergia-animal-corporate-cage-free-campaigns suggests continued satisfaction with the grantee.

Other notes: Affected countries: Argentina|Chile|Colombia.
Essere Animali | $150,000.00 | rank 119 | 2019-02 | Animal welfare/factory farming/fish/chicken/pig | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/essere-animali-farm-animal-welfare-work-in-italy | Lewis Bollard

Donation process: Discretionary grant

Intended use of funds (category): Organizational general support

Intended use of funds: Grant "to conduct farm investigations and scale up media outreach and corporate campaigning in Italy."

Donor reason for selecting the donee: The grant page says: "Approximately 260 million farmed land animals and 140 million farmed fish are alive in Italy at any time. Essere Animali investigations at farms where fish, pigs, and chickens are raised and slaughtered have generated media coverage in Italy and elsewhere, and we believe these investigations are useful to others working on animal welfare globally."

Donor retrospective of the donation: A followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/essere-animali-farm-animal-welfare-work-in-italy in 2020-01 suggests continued satisfaction with the grantee.

Other notes: Affected countries: Italy.
Machine Intelligence Research Institute | $2,652,500.00 | rank 16 | 2019-02 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 | Claire Zabel|Committee for Effective Altruism Support

Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.

Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter". Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was decided by the Committee for Effective Altruism Support (CEAS) https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by CEAS, made at the same time and therefore likely drawing from the same pool of money, went to the Centre for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The original amount of $2,112,500 is split across two years, i.e., about $1.06 million per year. https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of the three-year $1.25 million/year support announced in October 2017, and that the resulting total of $2.31 million represents Open Phil's full intended funding for MIRI for 2019; the ~$1.06 million for 2020, however, is a lower bound, and Open Phil may grant more for 2020 later. In November 2019, additional funding brought the total award amount to $2,652,500.
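The figures in the preceding paragraph can be cross-checked with a little arithmetic; all numbers are taken from the grant page and MIRI's blog post as quoted above, and this is an illustrative sanity check rather than an authoritative accounting:

```python
# Sanity check of the MIRI grant figures quoted above.
original_award = 2_112_500            # initial CEAS-decided amount, split over two years
per_year = original_award / 2         # ~1.06 million USD per year
third_year_of_2017_grant = 1_250_000  # final year of the October 2017 three-year support

# Open Phil's full intended 2019 funding for MIRI (the "~$2.31 million" figure)
total_2019 = per_year + third_year_of_2017_grant

final_award = 2_652_500               # total award after the November 2019 increase
november_top_up = final_award - original_award  # amount added in November 2019
```

This reproduces the ~$2.31 million total for 2019 and implies the November 2019 increase added $540,000 to the original award.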

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end, (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements, so a suitable funding amount could be larger, (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round.
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/, Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant.

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 with a very similar writeup suggests that Open Phil and the Committee for Effective Altruism Support would continue to stand by the reasoning for the grant.

Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-01.
The Humane League | $750,000.00 | rank 54 | 2019-01 | Animal welfare/factory farming/chicken/broiler chicken/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-broiler-welfare-campaigns | Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support corporate campaigns to improve the welfare of broiler chickens, the most numerous land farm animals. Broiler welfare campaigns seek to address the main causes of their suffering.

Donor reason for selecting the donee: Open Phil considers broiler chicken welfare a high-impact cause: "Broiler chickens are the most numerous land farm animals, with more than a billion alive at any time and approximately 9 billion slaughtered annually in the U.S. alone. Their welfare is impacted by genetics, overcrowding, inhumane slaughter, and environmental factors like chronic sleep deprivation due to lighting schedules optimized for growth." The grant is part of a strategic focus on broiler chicken welfare adopted in late 2016, though no overarching document on this has been posted; see also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. The Humane League was selected for reasons outlined in earlier grants, such as the August 2018 general support https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018

Donor reason for donating at this time (rather than earlier or later): Likely based on funding needs and the exhaustion of funds from previous grants. No explicit reasons for the timing are given.

Other notes: Announced: 2019-04-30.
Mercy For Animals | $261,000.00 | rank 105 | 2019-01 | Animal welfare/factory farming/chicken/broiler chicken/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-broiler-welfare-campaigns | Lewis Bollard

Donation process: This appears to be a followup grant to https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-us-broiler-chicken-welfare-corporate-campaigns and is likely informed by the considerations behind that and earlier grants, as well as by the progress since then.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate campaigns to improve the welfare of broiler chickens. [...] Their welfare is impacted by genetics, overcrowding, inhumane slaughter, and environmental factors like chronic sleep deprivation due to lighting schedules optimized for growth. Broiler welfare campaigns seek to address these causes of suffering."
Animal Equality | $215,000.00 | rank 112 | 2019-01 | Animal welfare/factory farming/chicken/broiler chicken/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-broiler-welfare-campaigns | Lewis Bollard

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate campaigns to improve the welfare of broiler chickens. [...] Their welfare is impacted by genetics, overcrowding, inhumane slaughter, and environmental factors like chronic sleep deprivation due to lighting schedules optimized for growth. Broiler welfare campaigns seek to address these causes of suffering."

Donor reason for selecting the donee: The grant page hints at scale: "Broiler chickens are the most numerous land farm animals, with more than a billion alive at any time and approximately 9 billion slaughtered annually in the U.S. alone."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-cage-free-and-broiler-welfare (2020-02) suggests continued satisfaction with the grantee.

Other notes: Affected countries: United States.
Animal Outlook | $250,000.00 | rank 106 | 2019-01 | Animal welfare/factory farming/chicken/fish | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-exit-grant-2019 | Amanda Hungerford

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support farm animal welfare outreach and investigations related to chickens and fish. The new funding represents an “exit grant” that will provide Animal Outlook with approximately one year of operating support to allow them to secure other funding."

Donor reason for selecting the donee: The donor had previously supported the donee in 2016 (https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-over-killing-us-broiler-welfare-campaigns). The new grant is an exit grant to give the donee time to find other sources of funding.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely selected as a reasonable amount for a one-year exit grant.

Donor reason for donating at this time (rather than earlier or later): Timing likely determined by the end of the previous grant, and the need to provide more funding for a smooth exit grant.
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: There is no plan for a next donation; this is an exit grant.

Donor retrospective of the donation: Despite this being an exit grant, Open Philanthropy would make a later grant to the grantee (albeit a much smaller amount with a narrow goal).

Other notes: The grantee name at the time, and listed in the grant, is Compassion Over Killing. Announced: 2019-05-06.
Center for Security and Emerging Technology | $55,000,000.00 | rank 1 | 2019-01 | Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety | https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology | Luke Muehlhauser

Intended use of funds (category): Organizational general support

Intended use of funds: Grant via Georgetown University for the Center for Security and Emerging Technology (CSET), a new think tank led by Jason Matheny, formerly of IARPA, dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET plans to provide nonpartisan technical analysis and advice related to emerging technologies and their security implications to the government, key media outlets, and other stakeholders.

Donor reason for selecting the donee: Open Phil thinks that one of the key factors in whether AI is broadly beneficial for society is whether policymakers are well-informed and well-advised about the nature of AI’s potential benefits, potential risks, and how these relate to potential policy actions. As AI grows more powerful, calls for government to play a more active role are likely to increase, and government funding and regulation could affect the benefits and risks of AI. Thus: "Overall, we feel that ensuring high-quality and well-informed advice to policymakers over the long run is one of the most promising ways to increase the benefits and reduce the risks from advanced AI, and that the team put together by CSET is uniquely well-positioned to provide such advice." Despite risks and uncertainty, the grant is described as worthwhile under Open Phil's hits-based giving framework.

Donor reason for donating that amount (rather than a bigger or smaller amount): The large amount over an extended period (5 years) is explained at https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant "In the case of the new Center for Security and Emerging Technology, we think it will take some time to develop expertise on key questions relevant to policymakers and want to give CSET the commitment necessary to recruit key people, so we provided a five-year grant."

Donor reason for donating at this time (rather than earlier or later): Likely determined by the grantee's planned launch timing; further timing details are not discussed.
Intended funding timeframe in months: 60

Other notes: Donee is entered as Center for Security and Emerging Technology rather than as Georgetown University for consistency with future grants directly to the organization once it is set up. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. The grant is discussed in the broader context of giving by the Open Philanthropy Project into global catastrophic risks and AI safety in the Inside Philanthropy article https://www.insidephilanthropy.com/home/2019/3/22/why-this-effective-altruist-funder-is-giving-millions-to-ai-security. Announced: 2019-02-28.
Berkeley Existential Risk Initiative | $250,000.00 | rank 106 | 2019-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers | Daniel Dewey

Donation process: The Open Philanthropy Project described the donation decision as being based on "conversations with various professors and students".

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for the launch of CHAI and previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration to collaborate with CHAI. Announced: 2019-03-04.
Foundation for Food and Agricultural Research | $3,000,000.00 | rank 13 | 2018-12 | Animal welfare/factory farming/chicken/chick culling | https://www.openphilanthropy.org/giving/grants/foundation-food-and-agriculture-research-egg-tech-challenge | Lewis Bollard

Donation process: Nothing specific is stated on the grant page, but a similar grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/foundation-food-and-agriculture-research-farm-animal-welfare-research was made in April 2017, so progress with that grant likely informed this one.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research into, and a prize for, developing a technology that can sex select male chicks at scale in ovo, eliminating the need for chick culling. This funding includes approximately $2,000,000 for FFAR to support research teams to compete for the prize, and approximately $1,000,000 for the prize itself, which will be awarded only if certain conditions are met."

Donor reason for selecting the donee: The grant page says: "Lewis Bollard, our Program Officer for Farm Animal Welfare, believes this technology will end the acute suffering at death of ~6.5 billion chicks per year and will spare ~29 million hens per year from factory farming entirely because the aborted eggs will replace their output in the market."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page gives this breakdown of funding: "This funding includes approximately $2,000,000 for FFAR to support research teams to compete for the prize, and approximately $1,000,000 for the prize itself, which will be awarded only if certain conditions are met."

Donor retrospective of the donation: Followup grants in 2020 suggest continued satisfaction from Open Philanthropy in the grantee and the reasoning informing the grant.

Other notes: Announced: 2019-03-20.
Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai (Earmark: Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai)2,351.001672018-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/study-robustness-machine-learning-modelsDaniel Dewey Donation process: The grant page says: "This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to reimburse technology costs for their efforts to study the robustness of machine learning models, especially robustness to unforeseen adversaries."

Donor reason for selecting the donee: The grant page says "We believe this will accelerate progress in adversarial, worst-case robustness in machine learning."
University of Bern (Earmark: Michael Toscano)150,000.001192018-11Animal welfare/factory farming/chicken/cage-free/researchhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/university-of-bern-higher-welfare-cage-free-systemsLewis Bollard Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to develop and implement a pilot project for U.S. egg producers, equipment installers, and USDA extension agents to learn about management of high-welfare, cage-free systems in Switzerland, Sweden, Holland, and Belgium. The funds will support Dr. Michael Toscano, Group Leader of Switzerland’s Centre for Proper Housing of Poultry and Rabbits, and colleagues to develop the educational program and deploy it with approximately 20 U.S. producers, installers, and extension agents. Due to Switzerland’s ban of battery cages in 1992, its producers and scientists have more than 25 years of experience managing cage-free systems."

Donor reason for selecting the donee: The grant fits in with Open Philanthropy's funding of corporate campaigns pushing for cage-free systems for chickens, an effort documented at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms. Unlike the other grants, which focus on corporate campaigns, this grant takes more of a learning/educational approach.

Donor retrospective of the donation: A later grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/university-of-bern-layer-hens also for Michael Toscano and also for research related to cage-free system, suggests continued satisfaction with the grantee.

Other notes: Affected countries: United States; announced: 2018-12-11.
University of California, Berkeley (Earmark: Pieter Abbeel|Aviv Tamar)1,145,000.00372018-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-california-berkeley-artificial-intelligence-safety-research-2018Daniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for machine learning researchers Pieter Abbeel and Aviv Tamar to study uses of generative models for robustness and interpretability. This funding will allow Mr. Abbeel and Mr. Tamar to fund PhD students and summer undergraduates to work on classifiers, imitation learning systems, and reinforcement learning systems."

Other notes: This is the second year that Open Phil has made a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, with different researchers each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Pieter Abbeel) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 36; announced: 2018-12-11.
GoalsRL (Earmark: Ashley Edwards)7,500.001632018-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learningDaniel Dewey Discretionary grant to offset travel, registration, and other expenses associated with attending the GoalsRL 2018 workshop on goal specifications for reinforcement learning. The workshop was organized by Ashley Edwards, a recent computer science PhD candidate interested in reward learning. Announced: 2018-10-05.
The Humane League10,000,000.0052018-08Animal welfare/factory farming/chicken/cage-free/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018Lewis Bollard Intended use of funds (category): Organizational general support

Intended use of funds: Grant renews four previous grants: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns (US corporate cage-free), https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-international-cage-free-advocacy (international cage-free), https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support (general support), and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-open-wing-alliance-2017 (Open Wing Alliance). THL used previous funding to secure corporate cage-free and broiler welfare pledges that, if fully implemented, will benefit approximately 150 million hens and 50 million broiler chickens alive at any time. The new funding helps THL continue current programs and strengthen infrastructure through initiatives like increasing staff salaries and benefits to be in line with industry standards.

Donor reason for selecting the donee: The reason for selecting donee is not discussed explicitly, but likely includes the same reasons as for the previous grants, and continued satisfaction with progress made through those grants.

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount breakdown is not explicitly discussed, but at about $3 million per year, it is similar to grant amounts per year for the previous grants, when added up.

Donor reason for donating at this time (rather than earlier or later): Timing is not explicitly discussed, but it is likely because the timeframe for the earlier grants is ending.
Intended funding timeframe in months: 42

Other notes: Affected countries: United States; announced: 2018-09-28.
Fórum Nacional de Proteção e Defesa Animal200,000.001132018-08Animal welfare/factory farming/chicken/layer chicken/pig/cage-freehttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-crate-and-cage-free-campaigning-in-brazilLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for campaigning to reduce the use of battery cages for layer hens and gestation crates for pigs in Brazil. [Grantee] intends to use these funds to continue its corporate campaigns, to start a tracker of corporate implementation of cage-free pledges, and to host a conference with egg producers, food companies, scientists, and activists to discuss implementation."

Donor reason for selecting the donee: No explicit reasons are provided, but the grant page suggests satisfaction with the grantee's progress after the previous grant, and with their intended use of the funds for this grant.

Donor reason for donating at this time (rather than earlier or later): The timing is likely determined by the previous two-year grant reaching its end.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-crate-and-cage-free-campaigning-in-brazil-2020 suggests continued satisfaction with the grantee.

Other notes: Affected countries: Brazil; announced: 2018-09-27.
Stanford University (Earmark: Dan Boneh|Florian Tramer)100,000.001342018-07AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramerDaniel Dewey Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer."

Donor reason for selecting the donee: The grant page gives three reasons: (1) Florian Tramer is a very strong Ph.D. student, (2) excellent machine learning security work is important for AI safety, (3) increased funding in areas relevant to AI safety, like machine learning security, is expected to lead to more long-term benefits for AI safety.

Other notes: Grant is structured as an unrestricted "gift" to Stanford University Computer Science. Announced: 2018-09-06.
University of Oxford (Earmark: Allan Dafoe)429,770.00812018-07AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/oxford-university-global-politics-of-ai-dafoeNick Beckstead Grant to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom. The Open Philanthropy Project recommended additional funds to support this work in 2017, while Professor Dafoe was at Yale. Continuation of grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe. Announced: 2018-07-20.
The Wilson Center400,000.00852018-07AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-seriesLuke Muehlhauser Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a series of in-depth AI policy seminars."

Donor reason for selecting the donee: The grant page says: "We believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-february-2020 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020 suggest that the donor was satisfied with the outcome of the grant.

Other notes: Intended funding timeframe in months: 24; announced: 2018-08-01.
Animal Equality2,772,430.00152018-06Animal welfare/factory farming/chicken/broiler chicken/cage-free/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-corporate-animal-welfare-campaignsLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support corporate cage-free and broiler welfare campaigns. Animal Equality plans to expand its corporate campaigns in Brazil, Italy, Mexico, Spain, and the U.S."

Donor reason for selecting the donee: The grant is framed as a renewal of the past grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-international-cage-free-advocacy (August 2016) and also cites other past grants https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-india-animal-welfare-reform (2017, India) and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-eu-farm-animal-welfare (2017, Europe). It is likely made for similar reasons: track record of successful investigations and confidence of Open Phil staff in Animal Equality leadership.

Donor reason for donating at this time (rather than earlier or later): The grant is made around the time that the original two-year grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-international-cage-free-advocacy expires, and is framed as a renewal, so its timing is likely determined by the original grant expiring.
Intended funding timeframe in months: 36

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-broiler-welfare-campaigns (2019-01) and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-cage-free-and-broiler-welfare (2020-02), with some overlapping countries, suggest continued endorsement of Animal Equality by Open Philanthropy.

Other notes: This is a total of five grants (presumably one grant per country). Affected countries: United States|Brazil|Italy|Mexico|Spain; announced: 2018-07-11.
Machine Intelligence Research Institute150,000.001192018-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-programClaire Zabel Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."

Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-27.
AI Impacts100,000.001342018-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018Daniel Dewey Donation process: Discretionary grant

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewal in 2020 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020 suggests continued satisfaction with the grantee, though the amount of the renewal grant is lower (just $50,000).

Other notes: The grant is via the Machine Intelligence Research Institute. Announced: 2018-06-27.
Mercy For Animals375,000.00862018-05Animal welfare/factory farming/chicken/broiler chicken/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-us-broiler-chicken-welfare-corporate-campaignsLewis Bollard Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support its broiler chicken welfare corporate campaigns in the U.S."

Donor reason for selecting the donee: The grant page links the grant to two past grants https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-broiler-chicken-welfare-corporate-campaigns (broiler chicken welfare) and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns (cage-free egg campaign).

Donor reason for donating at this time (rather than earlier or later): The grant happens around two years after the linked previous two-year grants https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-broiler-chicken-welfare-corporate-campaigns (broiler chicken welfare) and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns (cage-free egg campaign) suggesting that its timing is related to their expiration.

Donor thoughts on making further donations to the donee: The grant page says: "We expect to evaluate the merits of a longer renewal of our support to MFA closer to the end of 2018."

Donor retrospective of the donation: Followup grants from Open Phil to Mercy For Animals (including https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-broiler-welfare-campaigns in January 2019 with a similar scope) suggest continued satisfaction with the grantee.

Other notes: Affected countries: United States; announced: 2018-06-14.
Royal Society for the Prevention of Cruelty to Animals231,677.001102018-05Animal welfare/factory farming/chicken/broiler chicken/researchhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/rspca-broiler-breed-studyLewis Bollard Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a broiler chicken breed welfare study. The study, to be conducted by the Royal Veterinary College under RSPCA supervision, will test the welfare of two new breeds and will validate two new behavioral measures to enhance future breed tests."

Donor reason for selecting the donee: The grant page says: "Farm Animal Welfare Program Officer Lewis Bollard believes the research is likely to assist broiler welfare campaigns in the U.S. and Europe."

Other notes: Currency info: donation given as 171,600.00 GBP (conversion done via donor calculation); affected countries: United Kingdom; announced: 2018-06-14.
Open Phil AI Fellowship (Earmark: Aditi Raghunathan|Chris Maddison|Felix Berkenkamp|Jon Gauthier|Michael Janner|Noam Brown|Ruth Fong)1,135,000.00382018-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-fellows-program-2018Daniel Dewey Donation process: According to the grant page: "These fellows were selected from more than 180 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research"

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarship support to seven machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating at this time (rather than earlier or later): This is the first of annual sets of grants, decided through an annual application process.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The corresponding grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class (2019), https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020), and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirm that these grants will be made annually. Among the grantees, Chris Maddison would continue receiving support from Open Philanthropy in the future in the form of support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-toronto-machine-learning-research for his students, indicating continued endorsement of his work.

Other notes: Announced: 2018-05-31.
Ought525,000.00652018-05AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-supportDaniel Dewey Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Proposed_activities: "Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today." It also links to https://ought.org/approach for more on Ought's approach. Also, https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Budget says: "Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher."

Donor reason for selecting the donee: The case for the grant includes: (a) Open Phil considers research on deliberation and amplification important for AI safety, (b) Paul Christiano is excited by Ought's approach, and Open Phil trusts his judgment, (c) Ought’s plan appears flexible, and Open Phil thinks Andreas is ready to notice and respond to any problems by adjusting his plans, (d) Open Phil has indications that Ought is well-run and has a reasonable chance of success.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but the grant is combined with another grant from Open Philanthropy Project technical advisor Paul Christiano

Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Key_questions_for_follow-up lists some questions for followup

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 suggest that Open Phil would continue to have a high opinion of Ought

Other notes: Intended funding timeframe in months: 36; announced: 2018-05-30.
Stanford University6,771.001642018-04AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learningDaniel Dewey Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the Neural Information Processing Systems (NIPS) workshop “Machine Learning and Computer Security” (https://nips.cc/Conferences/2017/Schedule?showEvent=8775)

Donor reason for selecting the donee: No specific reasons are given on the grant page, but several of the presenters at the previous year's (2017) workshop would have their research funded by Open Philanthropy, including Jacob Steinhardt, Percy Liang, and Dawn Song.

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was likely determined by the cost of running the workshop. The original amount of $2,539 was updated in June 2020 to $6,771.

Donor reason for donating at this time (rather than earlier or later): The timing was likely determined by the timing of the conference.
Intended funding timeframe in months: 1

Other notes: Announced: 2018-04-18.
AI Scholarships (Earmark: Dmitrii Krasheninnikov|Michael Cohen)159,000.001182018-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-scholarships-2018Daniel Dewey Discretionary grant; total across grants to two artificial intelligence researchers, both over two years. The funding is intended to be used for the students’ tuition, fees, living expenses, and travel during their respective degree programs, and is part of an overall effort to grow the field of technical AI safety by supporting value-aligned and qualified early-career researchers. Recipients are Dmitrii Krasheninnikov, master’s degree, University of Amsterdam and Michael Cohen, master’s degree, Australian National University. Announced: 2018-07-26.
Otwarte Klatki472,864.00772017-11Animal welfare/factory farming/chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/otwarte-klatki-chicken-welfare-campaigns-poland-ukraineLewis Bollard Grant to support farm animal welfare campaigns and organizational capacity building in Poland and Ukraine. The funding will allow Otwarte Klatki to launch broiler chicken welfare campaigns in Poland and cage-free campaigns in Ukraine, as well as support expenses related to a planned merger with the Danish animal rights organization, Anima. Affected countries: Poland|Ukraine; announced: 2017-11-21.
L2141,347,742.00322017-11Animal welfare/factory farming/chicken/broiler chicken/corporate campaignhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/L214-broiler-chicken-campaignsLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work on broiler chicken welfare in France. Using this funding, L214 will conduct a campaign advocating for reduced chicken meat consumption as well as a corporate campaign targeting higher welfare standards for broiler chickens. Additionally, it plans to establish a new campus outreach program for movement building purposes, and will apply some funding toward capacity building such as software, training, and fundraising expenses."

Donor reason for selecting the donee: Open Phil's "Program Officer for Farm Animal Welfare, Lewis Bollard, is excited to support L214 due to its track record securing large wins to date, such as cage-free pledges from some of France’s largest retailers; his impression of its leadership team; and the organization’s strategic alignment with our goal to build a stronger farm animal welfare movement in Europe."

Donor retrospective of the donation: The write-up for a followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/L214-broiler-chicken-campaigns-2020 (November 2020) indicates Open Phil's satisfaction with L214's progress since this grant.

Other notes: Currency info: donation given as 1,140,000.00 EUR (conversion done via donor calculation); intended funding timeframe in months: 24; affected countries: France; announced: 2017-12-08.
Royal Society for the Prevention of Cruelty to Animals374,631.00872017-10Animal welfare/factory farming/chicken/broiler chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/rspca-broiler-chicken-welfare-campaign-UKLewis Bollard Donation process: RSPCA's budget https://www.openphilanthropy.org/files/Grants/RSPCA/RSPCA_Budget_2018_2019.pdf was prepared as part of the grantmaking process.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a corporate chicken welfare campaign in the United Kingdom. Using this funding, RSPCA will launch a campaign encouraging retailers and food companies to adopt higher welfare broiler chicken practices."

Donor reason for donating that amount (rather than a bigger or smaller amount): https://www.openphilanthropy.org/files/Grants/RSPCA/RSPCA_Budget_2018_2019.pdf has a full budget. The donation was given as 282,000.00 GBP (conversion done via donor calculation).

Donor retrospective of the donation: Followup grants from Open Phil to RSPCA suggest continued satisfaction with the grantee.

Other notes: Intended funding timeframe in months: 24; affected countries: United Kingdom; announced: 2017-11-08.
Anima (Earmark: Otwarte Klatki)683,000.00562017-10Animal welfare/factory farming/chickenhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/anima-corporate-campaigns-merger-supportLewis Bollard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Anima’s corporate chicken welfare campaigns and organizational capacity building in Scandinavia. The funding will allow Anima to launch hen and broiler chicken welfare campaigns over the next two years, as well as support expenses related to a planned merger with the Polish animal rights organization, Otwarte Klatki."

Donor reason for selecting the donee: The grant page says: "Our Program Officer for Farm Animal Welfare, Lewis Bollard, is excited to support Anima due to its track record securing Danish animal welfare reforms to date; his impression of its leadership team; and the organization’s strategic alignment with our goal to build a stronger farm animal welfare movement in Europe."

Other notes: This is a total across two grants. Affected countries: Scandinavia; announced: 2017-11-21.
Machine Intelligence Research Institute3,750,000.00112017-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017Nick Beckstead Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI one year after the one-year timeframe for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support ended. The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) suggests that work on the review had been going on well before the grant renewal date

Intended use of funds (category): Organizational general support

Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."

Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach."

Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36

Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."

Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Open Phil's statement that, due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach" is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
University of California, Berkeley (Earmark: Sergey Levine|Anca Dragan) | 1,450,016.00 | 30 | 2017-10 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-levine-dragan | Daniel Dewey
Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "The work will be led by Professors Sergey Levine and Anca Dragan, who will each devote approximately 20% of their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems, though they have the flexibility to adjust their focus during the grant period." The project narrative is at https://www.openphilanthropy.org/files/Grants/UC_Berkeley/Levine_Dragan_Project_Narrative_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our broad goals for this funding are to encourage top researchers to work on AI alignment and safety issues in order to build a pipeline for young researchers; to support progress on technical problems; and to generally support the growth of this area of study."

Other notes: This is the first year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It would begin an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Anca Dragan) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 48; announced: 2017-10-20.
Compassion in World Farming | 1,000,000.00 | 40 | 2017-10 | Animal welfare/factory farming/chicken/cage-free | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-world-farming-end-the-cage-age-campaign | Lewis Bollard
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support [the] “End the Cage Age” campaign in the UK and Europe. The campaign will seek to end the use of cages and crates for all farmed animal species in the UK and Europe through advocacy and outreach, including an EU-wide citizens’ ballot measure. [The] funds will support staffing needs related to the campaign in six regional EU offices as well as its headquarters in the United Kingdom; marketing, social media, and exhibition activities; advocacy work; investigations; as well as technical and operational costs over the next two years."

Donor reason for donating that amount (rather than a bigger or smaller amount): Budget available at https://www.openphilanthropy.org/files/Grants/CIWF/CIWF_End_the_Cage_Age_Campaign_2017.pdf

Other notes: Intended funding timeframe in months: 24; affected countries: United Kingdom; announced: 2017-11-14.
The Humane League | 2,000,000.00 | 21 | 2017-09 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-open-wing-alliance-2017 | Lewis Bollard
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the Open Wing Alliance to expand corporate campaigns in Europe. The Alliance, started by The Humane League, supports global efforts to eliminate battery cages. The new grant will bolster these campaigns in Europe and allow Alliance members to expand into campaigns to improve the welfare of broiler (meat) chickens.

Donor reason for selecting the donee: Grant investigator Lewis Bollard is excited to continue supporting the Open Wing Alliance (which grew out of a previous Open Phil grant to The Humane League) due to the coalition’s strong track record of securing corporate cage-free pledges; his confidence in its leadership team; and the project’s strategic fit with Open Phil's goal to build a stronger farm animal welfare movement in Europe.

Donor reason for donating at this time (rather than earlier or later): Likely determined by the development timeline of the Open Wing Alliance, which grew out of an earlier grant made in February 2016: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The general support grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018 in 2018 renews this grant among others.

Other notes: This and other grants from Open Philanthropy Project to The Humane League are discussed in https://ssir.org/articles/entry/giving_in_the_light_of_reason as part of an overview of the Open Philanthropy Project grantmaking strategy. Announced: 2017-10-09.
Eurogroup for Animals | 625,400.00 | 59 | 2017-09 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/eurogroup-animals-eu-chicken-welfare-advocacy | Lewis Bollard
Donation process: Grant by the Open Philanthropy Action Fund

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support EU advocacy work for chicken welfare. Eurogroup for Animals plans to use these funds on either broiler chicken or egg-laying hen welfare campaigns, depending upon which campaign appears most tractable."

Donor reason for selecting the donee: The grant page says the grant "is one of several other recent grants made to strengthen the farm animal welfare movement in Europe."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/eurogroup-animals-eu-chicken-welfare-advocacy-2020 suggests continued satisfaction with the grantee.

Other notes: Currency info: donation given as 530,000.00 EUR (conversion done via donor calculation); intended funding timeframe in months: 24; affected countries: European Union; announced: 2017-11-28.
Albert Schweitzer Foundation | 1,000,000.00 | 40 | 2017-09 | Animal welfare/factory farming/chicken/turkey/pig | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-general-support-2017 | Lewis Bollard
Donation process: The grant page suggests that evaluation of results of previous grants played a role in deciding to make this grant.

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "The funding will allow the Albert Schweitzer Foundation to significantly expand their corporate outreach on broiler chicken welfare, increase their fundraising capacity, and hire a law firm to pursue litigation related to turkey and pig welfare."

Donor reason for selecting the donee: The grant page says: "Our Program Officer for Farm Animal Welfare, Lewis Bollard, is excited to increase our support due to the organization’s track record securing cage-free pledges from major German retailers; his confidence in its leadership team; and the organization’s strategic alignment with our goal to build a stronger farm animal welfare movement in Europe."

Donor reason for donating at this time (rather than earlier or later): Timing likely determined based on Open Philanthropy having had enough time to evaluate the outcome of the previous grants and the grantee's overall track record.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-general-support-2019 (2019-08) suggests continued satisfaction with the grantee.

Other notes: Affected countries: Germany; announced: 2017-10-25.
Montreal Institute for Learning Algorithms (Earmark: Yoshua Bengio) | 2,400,000.00 | 18 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research | --
Grant to support research to improve the positive long-term impact of artificial intelligence on society. The grant was made mainly due to the star power of researcher Yoshua Bengio, who influences many young ML/AI researchers. A detailed writeup is available. See also https://www.facebook.com/permalink.php?story_fbid=10110258359382500&id=13963931 for a Facebook share by David Krueger, a member of the grantee organization; the comments include some discussion about the grantee. Announced: 2017-07-19.
Yale University (Earmark: Allan Dafoe) | 299,320.00 | 98 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe | Nick Beckstead
Grant to support research into the global politics of artificial intelligence, led by Allan Dafoe, Assistant Professor of Political Science, who will conduct part of the research at the Future of Humanity Institute in Oxford, United Kingdom over the next year. Funds from the two gifts will support the hiring of two full-time research assistants, travel, conferences, and other expenses related to the research efforts, as well as salary, relocation, and health insurance expenses related to Professor Dafoe’s work in Oxford. Announced: 2017-09-28.
Federation of Indian Animal Protection Organisations | 332,944.00 | 90 | 2017-07 | Animal welfare/factory farming/chicken/cattle | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/federation-indian-animal-protection-organisations-india-animal-welfare-reform | Lewis Bollard
Donation process: Grantee submitted a budget at https://www.openphilanthropy.org/files/Grants/FIAPO/FIAPO_Budget.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "The grant will provide funding to support reform of poultry slaughter and dairy industry practices; grassroots advocacy including capacity building for farm animal welfare; and a pilot corporate/institution campaign to reduce animal product usage."

Donor reason for selecting the donee: The grant page says: "We are excited about the grant primarily because of FIAPO’s broad network of grassroots members across India; our Program Officer for Farm Animal Welfare, Lewis Bollard’s, confidence in FIAPO’s relevant leadership; and the potential opportunity we see in India—one of the world’s largest producers of eggs, fish, and chicken—to encourage farm animal welfare reforms and advocacy."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount in Indian rupees is consistent with the budget in https://www.openphilanthropy.org/files/Grants/FIAPO/FIAPO_Budget.pdf submitted by the grantee.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made to animal welfare groups in India at around the same time.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/federation-indian-animal-protection-organisations-india-farm-animal-welfare-2019 (2019-06) suggests continued satisfaction with the grantee.

Other notes: Affected countries: India; announced: 2017-08-21.
Berkeley Existential Risk Initiative | 403,890.00 | 84 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration | Daniel Dewey
Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai. The funding is intended to help BERI hire contractors and part-time employees to assist CHAI, such as research engineers, software developers, and research illustrators, as well as web development and coordination support. This funding is also intended to help support BERI’s core staff. More details are in the grant proposal https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx

Other notes: Announced: 2017-09-28.
Stanford University (Earmark: Percy Liang) | 1,337,600.00 | 33 | 2017-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang | Daniel Dewey
Donation process: The grant is the result of a proposal written by Percy Liang. The writing of the proposal was funded by a previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant made in March 2017. The proposal was reviewed by two of Open Phil's technical advisors, who both felt largely positive about the proposed research directions.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant is intended to fund about 20% of Percy Liang's time as well as about three graduate students. Liang expects to focus on a subset of these topics: robustness against adversarial attacks on ML systems, verification of the implementation of ML systems, calibrated/uncertainty-aware ML, and natural language supervision.

Donor reason for selecting the donee: The grant page says: "Both [technical advisors who reviewed te garnt proposal] felt largely positive about the proposed research directions and recommended to Daniel that Open Philanthropy make this grant, despite some disagreements [...]."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is likely determined by the grant proposal details; it covers about 20% of Percy Liang's time as well as about three graduate students.

Donor reason for donating at this time (rather than earlier or later): The timing is likely determined by the timing of the grant proposal being ready.
Intended funding timeframe in months: 48

Donor thoughts on making further donations to the donee: The grant page says: "At the end of the grant period, we will decide whether to renew our support based on our technical advisors’ evaluation of Professor Liang’s work so far, his proposed next steps, and our assessment of how well his research program has served as a pipeline for students entering the field. We are optimistic about the chances of renewing our support. We think the most likely reason we might choose not to renew would be if Professor Liang decides that AI alignment research isn’t a good fit for him or for his students."

Other notes: Announced: 2017-09-26.
UCLA School of Law (Earmark: Edward Parson, Richard Re) | 1,536,222.00 | 29 | 2017-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ucla-artificial-intelligence-governance | Helen Toner
Grant to support work on governance related to AI risk, led by Edward Parson and Richard Re. Announced: 2017-07-27.
Animal Equality | 292,000.00 | 99 | 2017-05 | Animal welfare/factory farming/chicken/chick culling|Animal welfare/diet change | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-india-animal-welfare-reform | Lewis Bollard
Donation process: The grant is one of five grants made around the same time supporting farm animal welfare work in India.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support farm animal welfare work in India." The grant "will help support a pro-vegetarian messaging campaign, a corporate and/or institution-directed campaign encouraging animal product alternatives, organization capacity building, and advocacy related to in-ovo sex selection technology and other chicken welfare reforms." https://www.openphilanthropy.org/files/Grants/Animal_Equality/Animal_Equality_India_Animal_Welfare_Reform_Budget.pdf has the budget proposal (with red background for unfunded items).

Donor reason for selecting the donee: The grant page says: "We are excited about the grant primarily because of Animal Equality’s track record of successful undercover investigations and subsequent media coverage in India; our Program Officer for Farm Animal Welfare, Lewis Bollard’s, confidence in Animal Equality’s relevant leadership staff; and the potential opportunity we see in India—one of the world’s largest producers of eggs, fish, and chicken—to encourage farm animal welfare reforms and advocacy."

Donor reason for donating that amount (rather than a bigger or smaller amount): https://www.openphilanthropy.org/files/Grants/Animal_Equality/Animal_Equality_India_Animal_Welfare_Reform_Budget.pdf has the budget proposal (with red background for unfunded items).

Donor reason for donating at this time (rather than earlier or later): The grant is one of five grants recommended around the same time for farm animal welfare work in India, so the timing is likely determined by the timing of the decision to make this batch of grants.
Intended funding timeframe in months: 24

Other notes: Affected countries: India; announced: 2017-07-27.
Eurogroup for Animals | 14,961.00 | 162 | 2017-05 | Animal welfare/factory farming/chicken/broiler chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/eurogroup-animals-broiler-chicken-welfare-campaign | Lewis Bollard
Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a two-day International Broiler Advocacy Meeting in Brussels in June 2017. During the meeting, participants—which included representatives from various European animal welfare advocacy groups—discussed issues and strategy related to broiler chicken welfare campaigns. Our funds covered associated organizing costs, including travel expenses for representatives of smaller advocacy groups."

Donor reason for selecting the donee: The grant page says: "In preparation for the meeting, Eurogroup for Animals conducted an inventory of broiler welfare campaigns and an initial analysis of the relevant economic, legislative, and policy climate in Europe. Recent cage-free campaigns have been successful in Europe, and we hope a convening of this kind will facilitate collaboration and knowledge-sharing among various European groups as they consider launching new campaigns related to broiler chicken welfare."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is likely determined by the total of the expenses being covered. The donation was given as 13,242.00 EUR (conversion done via donor calculation).

Donor reason for donating at this time (rather than earlier or later): The timing (May 2017) is likely determined by the timing of the conference (June 2017).
Intended funding timeframe in months: 1

Other notes: Affected countries: European Union; announced: 2017-08-08.
Future of Life Institute | 100,000.00 | 134 | 2017-05 | Global catastrophic risks/AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017 | Nick Beckstead
Intended use of funds (category): Organizational general support

Intended use of funds: Grant for general support. However, the primary use of the grant will be to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant

Donor retrospective of the donation: The followup grants in 2018 and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee.

Other notes: Announced: 2017-09-27.
Foundation for Food and Agricultural Research | 1,000,000.00 | 40 | 2017-04 | Animal welfare/factory farming/chicken and pig | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/foundation-food-and-agriculture-research-farm-animal-welfare-research | Lewis Bollard
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to co-fund requests for applications (RFAs) for research on solutions to what we see as two major problems in farm animal welfare: bone fractures in cage-free hens and the painful castration of male piglets. It is our impression that both of these problems are scientifically tractable. FFAR plans to use this grant and at least $1 million of its own funding to fund scientific projects focused on solving these problems." The request for applications https://www.openphilanthropy.org/files/Grants/FFAR/FFAR_Accelerating_Advances_in_Animal_Welfare_Final.pdf is linked.

Donor reason for selecting the donee: The grant page says: "We are excited about this grant because a) we believe that it is an efficient way to fund research on farm animal welfare, since FFAR is co-funding the research and plans to handle the logistics of the RFAs and distribute the results of its research among industry, b) it is an opportunity for us to learn about co-funding with a Congressionally created and funded 501(c)(3) organization, which we believe could be a useful avenue for funding research to solve other problems in farm animal welfare, and c) it may increase FFAR’s interest in co-funding other animal welfare projects."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount seems to be chosen to target a 1:1 match with what FFAR was willing to fund with other funds.

Donor retrospective of the donation: Further grants from Open Phil to FFAR for similar purposes suggest continued endorsement of the thinking behind the grant.

Other notes: Announced: 2017-05-11.
Stanford University (Earmark: Percy Liang) | 25,000.00 | 154 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant | Daniel Dewey
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to enable Professor Liang to spend significant time engaging in our process to determine whether to provide his research group with a much larger grant." The larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang would be made.

Donor thoughts on making further donations to the donee: The grant is a planning grant intended to help Percy Liang write up a proposal for a bigger grant.

Donor retrospective of the donation: The bigger proposal whose writing was funded by this grant would lead to a bigger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang in May 2017.

Other notes: Announced: 2017-09-26.
Wageningen University & Research (Earmark: Marc Bracke) | 88,345.00 | 141 | 2017-03 | Animal welfare/factory farming/chicken/broiler chicken/research | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/wageningen-ur-broiler-welfare-review | Lewis Bollard
Donation process: The grant page says: "We initially decided to recommend this grant in April 2016. At that time, we anticipated that the results of this research would help to guide our decision-making around grants to support corporate campaigns to improve the welfare of the approximately 9 billion broiler chickens raised each year in the U.S. However, due to difficulties and delays in finalizing the details of the grant, funds were only transferred in March 2017, after we had already begun to make grants to support broiler chicken welfare reforms."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to create a systematic assessment of broiler chicken welfare. [...] Dr. Bracke will assess the relative importance of the various attributes that together define broiler chicken welfare. Welfare attributes are factors such as stocking density, litter quality, breed, and lighting. Depending on the results of an initial investigation, he will produce either a basic broiler chicken welfare model or a review of expert opinion."

Donor reason for selecting the donee: The grant page says: "We initially decided to recommend this grant in April 2016. At that time, we anticipated that the results of this research would help to guide our decision-making around grants to support corporate campaigns to improve the welfare of the approximately 9 billion broiler chickens raised each year in the U.S."

Donor reason for donating at this time (rather than earlier or later): The grant page says: "We initially decided to recommend this grant in April 2016. [...] However, due to difficulties and delays in finalizing the details of the grant, funds were only transferred in March 2017, after we had already begun to make grants to support broiler chicken welfare reforms."
Intended funding timeframe in months: 12

Other notes: The grant is made via the King Baudouin Foundation. Currency info: donation given as 82,105.00 EUR (conversion done via donor calculation); announced: 2017-05-08.
Institute for Advancement of Animal Welfare Science | 80,400.00 | 142 | 2017-03 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/colorado-state-university-planning-gift | Lewis Bollard
Discretionary grant supporting Colorado State University research on broiler chicken welfare. The amount was increased from its original value of $25,300 to $80,400 on 2018-02-16. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Announced: 2017-06-26.
Future of Humanity Institute | 1,994,000.00 | 22 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support | --
Grant for general support. A related grant specifically for biosecurity work was made in 2016-09, earlier for logistical reasons. Announced: 2017-03-06.
Distill | 25,000.00 | 154 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/distill-prize-clarity-machine-learning-general-support | Daniel Dewey
The grant covers $25,000 of the $125,000 USD initial endowment for the Distill prize https://distill.pub/prize/ administered by the Open Philanthropy Project. Other contributors to the endowment include Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. The Open Philanthropy Project grant page says: "Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill." Announced: 2017-08-11.
World Animal Protection | 517,588.00 | 68 | 2017-03 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/world-animal-protection-broiler-chicken-welfare | Lewis Bollard
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for campaigns to improve the welfare of broiler chickens. Activities: (1) Producing and promoting campaign materials to raise awareness of broiler chicken suffering (2) Developing and launching a corporate chicken welfare scorecard (3) Building evidence of the suffering endured by broiler chickens in factory farming operations (4) Staff time, creative development, and travel (5) Indirect costs such as occupancy, technical support, and administrative support.

Donor reason for selecting the donee: For more background on Open Phil grants related to broiler chicken, see https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken

Donor reason for donating that amount (rather than a bigger or smaller amount): Donee's budget proposal is at https://www.openphilanthropy.org/files/Grants/World_Animal_Protection/Revised_WAP_Chicken_Campaign_Proposal_REDACTED.xlsx

Other notes: Intended funding timeframe in months: 24; announced: 2017-06-26.
OpenAI | 30,000,000.00 | 3 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support | -- | Donation process: According to the grant page, Section 4, Our process: "OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors."

Intended use of funds (category): Organizational general support

Intended use of funds: The funds will be used for general support of OpenAI, with 10 million USD per year for the next three years. The funding is also accompanied by Holden Karnofsky (Open Phil director) joining the OpenAI Board of Directors. Karnofsky and one other board member will oversee OpenAI's safety and governance work.

Donor reason for selecting the donee: Open Phil says that, given its interest in AI safety, it is looking to fund and closely partner with orgs that (a) are working to build transformative AI, (b) are advancing the state of the art in AI research, (c) employ top AI research talent. OpenAI and DeepMind are two such orgs, and OpenAI is particularly appealing due to "our shared values, different starting assumptions and biases, and potential for productive communication." Open Phil is looking to gain the following from a partnership: (i) Improve its understanding of AI research, (ii) Improve its ability to generically achieve goals regarding technical AI safety research, (iii) Better position Open Phil to promote its ideas and goals.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page Section 2.2 "A note on why this grant is larger than others we’ve recommended in this focus area" explains the reasons for the large grant amount (relative to other grants by Open Phil so far). Reasons listed are: (i) Hits-based giving philosophy, described at https://www.openphilanthropy.org/blog/hits-based-giving in depth, (ii) Disproportionately high importance of the cause if transformative AI is developed in the next 20 years, and likelihood that OpenAI will be very important if that happens, (iii) Benefits of working closely with OpenAI in informing Open Phil's understanding of AI safety, (iv) Field-building benefits, including promoting an AI safety culture, (v) Since OpenAI has a lot of other funding, Open Phil can grant a large amount while still not raising the concern of dominating OpenAI's funding.

Donor reason for donating at this time (rather than earlier or later): No specific timing considerations are provided. It is likely that the timing of the grant is determined by when OpenAI first approached Open Phil and the time taken for the due diligence.
Intended funding timeframe in months: 36

Other notes: External discussions include http://benjaminrosshoffman.com/an-openai-board-seat-is-surprisingly-expensive/ cross-posted to https://www.lesswrong.com/posts/2z5vrsu7BoiWckLby/an-openai-board-seat-is-surprisingly-expensive (GW, IR) (post by Ben Hoffman, attracting comments at both places), https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31.
Global Animal Partnership | 515,000.00 | 69 | 2017-02 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/global-animal-partnership-broiler-chicken-welfare-research | Lewis Bollard | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research into broiler chicken welfare at the University of Guelph." The study is expected to "help to identify which breeds of broiler chicken have the best welfare outcomes."

Donor retrospective of the donation: Several followup grants from Open Philanthropy to Global Animal Partnership indicate continued satisfaction with the grantee.

Other notes: Announced: 2018-10-05.
Albert Schweitzer Foundation | 111,986.00 | 131 | 2017-01 | Animal welfare/factory farming/chicken/cage-free | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-international-cage-free-advocacy | Lewis Bollard | Donation process: Grantee submitted a budget proposal https://www.openphilanthropy.org/files/Grants/Albert_Schweitzer/Albert_Schweitzer_Expansion_Budget_Poland.xlsx that included total expenses and a breakdown between what would be covered by the grant versus by the grantee's own resources.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [grantee's] work to end the confinement of hens in battery cages."

Donor reason for selecting the donee: The linked blog post https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms lists several reasons for the general focus on cage-free reforms, and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d describes the reasons for the internationalization phase.

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount granted, in euros, matches the portion of the total in https://www.openphilanthropy.org/files/Grants/Albert_Schweitzer/Albert_Schweitzer_Expansion_Budget_Poland.xlsx that the grant was to cover. The donation was given as 102,000.00 EUR (conversion done via donor calculation).
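As a quick arithmetic check (illustrative only: the rate is implied by the two recorded amounts and is not an official exchange-rate figure), the EUR and USD amounts above can be related as follows:

```python
# Implied EUR->USD rate for this grant (illustrative check only).
eur_amount = 102_000.00   # amount granted in EUR, per the grant page
usd_amount = 111_986.00   # amount recorded in this database, in USD
implied_rate = usd_amount / eur_amount
print(f"{implied_rate:.4f} USD per EUR")  # roughly 1.098
```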

Donor reason for donating at this time (rather than earlier or later): Timing matches the timing of other grants in this second phase (internationalization) of corporate cage-free campaign spending.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup general support grants https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-general-support-2017 (2017-09) and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/albert-schweitzer-foundation-general-support-2019 (2019-08) include support for work similar to this grant. The grant page for the first of these explicitly refers to Open Philanthropy's satisfaction with this grant's outcome.

Other notes: Affected countries: Poland; announced: 2017-03-21.
Farm Forward | 100,000.00 | 134 | 2017-01 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/farm-forward-broiler-chicken-welfare-advocacy | Lewis Bollard | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work to secure pledges from institutions including universities, technology companies, and religious organizations to source higher-welfare animal products through The Leadership Circle. While Farm Forward typically works with institutions that purchase fewer animal products than the large food companies that other advocacy groups work with, it also seeks stronger welfare commitments, such as sourcing 100% of chicken from farms that are certified to at least Global Animal Partnership (GAP) Step 2 within two years. The Leadership Circle also asks institutions to commit to continuous improvement and investments in highest-welfare farms and ranches. Project description available at https://www.openphilanthropy.org/files/Grants/Farm_Forward/The_Leadership_Circle_Project_Description.pdf

Donor reason for selecting the donee: Open Phil writes: "It seems plausible to us that the institutions that Farm Forward works with may exert cultural influence that may influence much larger food companies."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget, available at https://www.openphilanthropy.org/files/Grants/Farm_Forward/The_Leadership_Circle_Budget_Public.xlsx that gives a total of $100,000 for the period January 1, 2017 through December 31, 2017.

Donor retrospective of the donation: The February 2018 renewal https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/farm-forward-leadership-circle-2018 suggests that the grant was considered at least somewhat successful. The renewal writeup reports the grantee saying that the grant "helped enable its work with the University of California system, Dr. Bronner’s, Airbnb, Duke University, Villanova University, Johns Hopkins University, and others to commit to source some of their animal products from farms certified to higher-welfare standards."

Other notes: Intended funding timeframe in months: 12; announced: 2017-03-30.
Animal Outlook | 500,000.00 | 71 | 2016-12 | Animal welfare/factory farming/chicken/broiler chicken/research/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/compassion-over-killing-us-broiler-welfare-campaigns | Lewis Bollard | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support broiler chicken welfare research and costs of United States corporate campaigns against the abuse of broiler chickens."

Other notes: The grantee name at the time, and listed in the grant, is Compassion Over Killing. Part of a strategy focus on broiler chicken welfare in late 2016, though no overarching document on this has been posted. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Affected countries: United States; announced: 2017-02-16.
AI Impacts | 32,000.00 | 152 | 2016-12 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support | -- | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewals in 2018 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 and 2020 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020 suggest continued satisfaction with the grantee.

Other notes: Announced: 2017-02-02.
The Humane Society of the United States | 1,000,000.00 | 40 | 2016-11 | Animal welfare/factory farming/chicken | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-society-united-states-new-broiler-welfare-corporate-campaigns | Lewis Bollard | Part of a strategy focus on broiler chicken welfare in late 2016, though no overarching document on this has been posted. See also https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken. Affected countries: United States; announced: 2016-12-15.
Electronic Frontier Foundation (Earmark: Peter Eckersley) | 199,000.00 | 115 | 2016-11 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/electronic-frontier-foundation-ai-social | -- | Grant funded work by Peter Eckersley, in whom the Open Philanthropy Project had confidence. Followup conversation with Peter Eckersley and Jeremy Gillula of the grantee organization at https://www.openphilanthropy.org/sites/default/files/Peter_Eckersley_Jeremy_Gillula_05-26-16_%28public%29.pdf on 2016-05-26. Announced: 2016-12-15.
Mercy For Animals | 1,000,000.00 | 40 | 2016-11 | Animal welfare/factory farming/chicken/broiler chicken/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-broiler-chicken-welfare-corporate-campaigns | Lewis Bollard | Donation process: A budget proposal https://www.openphilanthropy.org/files/Grants/Mercy_For_Animals/Final_MFA_Broiler_Welfare_Campaign_Proposal_for_the_Open_Philanthropy_Project.pdf was sought. The grant page lacks further detail.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support corporate campaigns to promote the welfare of broiler chickens."

Donor reason for selecting the donee: The grant is part of a strategy focus on broiler chicken welfare in late 2016, though no overarching document on this has been posted. See https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken for more.

Donor reason for donating that amount (rather than a bigger or smaller amount): The budget proposal gives total annual campaign costs of $500,000 / year for two years. The breakdown is as follows: six broiler welfare corporate outreach staff positions ($300,000), broiler welfare corporate campaign expenses ($150,000), public relations to secure media coverage on broiler welfare issues and campaigns ($25,000), and campaign volunteer recruitment to increase number of active broiler welfare campaign volunteers ($25,000).
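The budget breakdown above can be sanity-checked against the stated annual total and the overall grant amount (a minimal sketch; the item labels are paraphrased from the budget proposal):

```python
# Budget line items (USD/year), paraphrased from the grantee's proposal.
line_items = {
    "broiler welfare corporate outreach staff (6 positions)": 300_000,
    "broiler welfare corporate campaign expenses": 150_000,
    "public relations for media coverage": 25_000,
    "campaign volunteer recruitment": 25_000,
}
annual_total = sum(line_items.values())
assert annual_total == 500_000       # matches the stated $500,000/year
print(annual_total * 2)              # two-year total matches the $1,000,000 grant
```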

Donor reason for donating at this time (rather than earlier or later): The grant is part of a strategy focus on broiler chicken welfare in late 2016, though no overarching document on this has been posted. See https://www.facebook.com/groups/EffectiveAnimalActivism/search/?query=broiler%20chicken for more.
Intended funding timeframe in months: 24

Donor retrospective of the donation: Several followup grants from Open Phil to Mercy For Animals suggest continued satisfaction with the grantee.

Other notes: Affected countries: United States|Canada; announced: 2017-01-10.
Fórum Nacional de Proteção e Defesa Animal | 100,000.00 | 134 | 2016-10 | Animal welfare/factory farming/chicken/cage-free | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-international-cage-free-advocacy | Lewis Bollard | Donation process: The grantee submitted a grant proposal, available at https://www.openphilanthropy.org/files/Grants/FNDPA/FNPDA_Grant_proposal_edited_Jul_16.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support [grantee's] work to end the confinement of hens in battery cages." The grant proposal https://www.openphilanthropy.org/files/Grants/FNDPA/FNPDA_Grant_proposal_edited_Jul_16.pdf says: "In 2016, we plan to launch various campaigns targeting – one by one – the largest food retailers in Brazil. We will carry out investigations and these campaigns will have online petitions, ongoing efforts to get media attention, direct outreach to the senior leadership and a direct action in front of one of their stores, creating a good photo opportunity for media attention."

Donor reason for selecting the donee: No reasons specific to the grantee are listed, but https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms lists several reasons for the general focus on cage-free reforms, and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d describes the reasons for the internationalization phase.

Donor reason for donating at this time (rather than earlier or later): Timing matches the timing of other grants in this second phase (internationalization) of corporate cage-free campaign spending.
Intended funding timeframe in months: 24

Donor retrospective of the donation: Followup grants https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-crate-and-cage-free-campaigning-in-brazil and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/forum-nacional-de-protecao-e-defesa-animal-crate-and-cage-free-campaigning-in-brazil-2020 suggest continued satisfaction with the grantee. The first of these says of the grantee's progress: "FNPDA has played a role in securing crate-free pledges from Brazil’s four largest pork producers and cage-free pledges from 26 Brazilian food companies".

Other notes: Affected countries: Brazil; announced: 2016-11-07.
People for Animals | 89,392.00 | 140 | 2016-08 | Animal welfare/factory farming/chicken/cage-free campaign/international/India | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/people-animals-international-cage-free-advocacy | Lewis Bollard | Second phase (focused on internationalization) of a series of corporate cage-free campaign grants. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. Followup conversation with Gauri Mulekhi of the grantee organization at https://www.openphilanthropy.org/sites/default/files/Gauri_Maulekhi_02-06-17_%28public%29.pdf on 2017-02-06. Affected countries: India; announced: 2016-10-03.
Mercy For Animals | 1,000,000.00 | 40 | 2016-08 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-international-cage-free-advocacy | Lewis Bollard | Donation process: The donation is part of a series of corporate cage-free campaign grants. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more background. For this specific grant, a budget https://www.openphilanthropy.org/files/Grants/Mercy_For_Animals/MFA_Budget_International_Cage-Free_Campaigns_Expansion_8-1-16.pdf was obtained from the grantee.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "end the confinement of hens in battery cages. [...] [The grant will] support Mercy For Animals’ work in Latin America and Asia." A simplified budget ($500,000/year) is at https://www.openphilanthropy.org/files/Grants/Mercy_For_Animals/MFA_Budget_International_Cage-Free_Campaigns_Expansion_8-1-16.pdf with breakdown of $212,500 for Brazil, $192,500 for Mexico, $40,000 for Asia, and $55,000 for international campaign coordination from the United States.

Donor reason for selecting the donee: No reasons specific to the grantee are listed, but https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms lists several reasons for the general focus on cage-free reforms, and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d describes the reasons for the internationalization phase.

Donor reason for donating at this time (rather than earlier or later): Timing matches the timing of other grants in this second phase (internationalization) of corporate cage-free campaign spending.
Intended funding timeframe in months: 24

Donor retrospective of the donation: Several further grants from Open Philanthropy to Mercy For Animals suggest continued satisfaction with the grantee.

Other notes: Affected countries: Brazil|Mexico; announced: 2016-10-03.
Animal Equality | 500,000.00 | 71 | 2016-08 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/animal-equality-international-cage-free-advocacy | Lewis Bollard | Donation process: The donation is part of a series of corporate cage-free campaign grants. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more background.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support international advocacy to end the confinement of hens in battery cages." "The present funding, part of a new series of grants focusing on international cage-free advocacy, will support Animal Equality’s work in Latin America, Europe, and Asia."

Donor reason for selecting the donee: No reasons specific to the grantee are listed, but https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms lists several reasons for the general focus on cage-free reforms, and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d describes the reasons for the internationalization phase.

Donor reason for donating at this time (rather than earlier or later): Timing matches the timing of other grants in this second phase (internationalization) of corporate cage-free campaign spending.
Intended funding timeframe in months: 24

Donor retrospective of the donation: Several further grants from Open Philanthropy to Animal Equality, with continued endorsement of the work, suggest satisfaction by Open Philanthropy with the grant.

Other notes: Announced: 2016-10-03.
Humane Society International | 1,000,000.00 | 40 | 2016-08 | Animal welfare/factory farming/chicken/cage-free | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-society-international-international-cage-free-outreach | Lewis Bollard | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [grantee's] work to end the confinement of hens in battery cages. [...] The present funding, part of a new series of grants focusing on international cage-free advocacy, will support Humane Society International’s work in Latin America and Asia."

Donor reason for selecting the donee: The linked blog post https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms lists several reasons for the general focus on cage-free reforms, and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d describes the reasons for the internationalization phase.

Donor reason for donating at this time (rather than earlier or later): Timing matches the timing of other grants in this second phase (internationalization) of corporate cage-free campaign spending.
Intended funding timeframe in months: 24

Donor retrospective of the donation: Further grants to the grantee suggest continued satisfaction with the outcome of this grant.

Other notes: Affected countries: Latin America|Asia; announced: 2016-10-03.
Machine Intelligence Research Institute | 500,000.00 | 71 | 2016-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support | -- | Donation process: The grant page describes the process in Section 1, Background and Process: "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design”. [...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The link is to https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf The grant page continues: "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research. In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs."

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page, Section 3.1 Budget and room for more funding, says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."

Donor reason for selecting the donee: The grant page, Section 3.2 Case for the grant, gives five reasons: (1) Uncertainty about technical assessment (i.e., despite negative technical assessment, there is a chance that MIRI's work is high-potential), (2) Increasing research supply and diversity in the important-but-neglected AI safety space, (3) Potential for improvement of MIRI's research program, (4) Recognition of MIRI's early articulation of the value alignment problem, (5) Other considerations: (a) role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) "participation grant" for time spent in evaluation process, (e) grant in advance of potential need for significant help from MIRI for consulting on AI safety

Donor reason for donating that amount (rather than a bigger or smaller amount): The maximal funding that Open Phil would give MIRI would be $1.5 million per year. However, Open Phil recommended a partial amount, due to some reservations, described on the grant page, Section 2 Our impression of MIRI’s Agent Foundations research: (1) Assessment that it is not likely relevant to reducing risks from advanced AI, especially to the risks from transformative AI in the next 20 years, (2) MIRI has not made much progress toward its agenda, with internal and external reviewers describing their work as technically nontrivial, but unimpressive, and compared with what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."

Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: Section 4 Plans for follow-up says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."

Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) (not an official MIRI statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the grant amount and the three-year commitment are both more positive for MIRI than the expectations at the time of the original grant

Other notes: The grant page links to commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
Center for Human-Compatible AI | 5,555,550.00 | 10 | 2016-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai | -- | Donation process: The grant page section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process says: "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."

Intended use of funds (category): Organizational general support

Intended use of funds: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding says: "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year." The funding from Open Phil will be used toward this budget. An earlier section of the grant page says that the Center's research topics will include value alignment, value functions defined by partially observable and partially defined terms, the structure of human value systems, and conceptual questions including the properties of ideal value systems.

Donor reason for selecting the donee: The grant page gives these reasons: (1) "We expect the existence of the Center to make it much easier for researchers interested in exploring AI safety to discuss and learn about the topic, and potentially consider focusing their careers on it." (2) "The Center may allow researchers already focused on AI safety to dedicate more of their time to the topic and produce higher-quality research." (3) "We hope that the existence of a well-funded academic center at a major university will solidify the place of this work as part of the larger fields of machine learning and artificial intelligence." Also, counterfactual impact: "Professor Russell would not plan to announce a new Center of this kind without substantial additional funding. [...] We are not aware of other potential [substantial] funders, and we believe that having long-term support in place is likely to make it easier for Professor Russell to recruit for the Center."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is based on budget estimates in https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year."

Donor reason for donating at this time (rather than earlier or later): Timing seems to have been determined by the time it took to work out the details of the new center after Open Phil decided to make AI safety a major priority in 2016. According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 in November 2019, the five-year renewal https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2021 in January 2021, and the many grants to the Berkeley Existential Risk Initiative (BERI) to collaborate with the grantee suggest that Open Phil continued to think highly of the grantee and stood by its reasoning.

Other notes: Note that the grant recipient in the Open Phil database has been listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.
The Humane League | 1,000,000.00 | 40 | 2016-07 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-international-cage-free-advocacy | Lewis Bollard

Donation process: No details are provided for this grant, but it likely builds on past vetting of the organization for the previous grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns and general interest in cage-free campaigns described at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support international advocacy to end the confinement of hens in battery cages, complementing a similar grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns focused on the United States.

Donor reason for selecting the donee: The grant page does not discuss reasons, but reasons are likely similar to those for the previous grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns (both for the donee as an organization and for cage-free campaigns).

Donor reason for donating at this time (rather than earlier or later): No timing-related reasons are discussed, but the timing is likely a result of the Open Philanthropy Project's general push for cage-free campaigning, and promise shown by the first round of cage-free campaign grants made earlier in the year.

Donor retrospective of the donation: The general support grant https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018 in 2018 renews this grant among others.

Other notes: Part of a second, internationally focused phase of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for a description of the overall cage-free effort and http://www.huffingtonpost.com/entry/chickens-animal-abuse-video_us_57fac5c5e4b0e655eab5485d for a description of the internationalization phase. This and other grants from Open Philanthropy to The Humane League are discussed in https://ssir.org/articles/entry/giving_in_the_light_of_reason as part of an overview of Open Philanthropy's grantmaking strategy. Announced: 2016-10-03.
George Mason University (Earmark: Robin Hanson) | 277,435.00 | 101 | 2016-06 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios | --

Other notes: Earmarked for Robin Hanson's research. The grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence for background. The original amount was $264,525; it was increased to $277,435 through the addition of $12,910 in July 2017 to cover an increase in George Mason University’s instructional release costs (teaching buyouts). Announced: 2016-07-07.
The Humane Society of the United States | 500,000.00 | 71 | 2016-02 | Animal welfare/factory farming/chicken/cage-free campaign/United States | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-society-united-states-corporate-cage-free-campaigns | Lewis Bollard

Other notes: Part of a round of corporate cage-free campaign grants. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more. A followup conversation with Paul Shapiro of the grantee organization took place on 2016-07-20; see https://www.openphilanthropy.org/sites/default/files/Paul_Shapiro_07-20-16_%28public%29.pdf Affected countries: United States; announced: 2016-03-10.
The Humane League | 1,000,000.00 | 40 | 2016-02 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns | Lewis Bollard

Donation process: The donation is part of a round of corporate cage-free campaign grants. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more background. The specific process for The Humane League is not discussed in detail; see https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#Our_process

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support corporate cage-free campaigns. The grant page says: "THL plans to use this grant to roughly triple the size of its corporate campaign team by hiring eight new staff, including: three campaign coordinators, a corporate outreach specialist, a lawyer, an in-house designer, a website developer, and a media specialist. THL plans to use this extra capacity to launch more and larger campaigns, especially targeting the grocery sector (which has so far largely resisted pressure to go cage-free). THL has shared its plans with us for reaching out to the nation’s 400 largest food buyers (ranging from fast food restaurants to regional grocery chains) and launching campaigns against them if necessary."

Donor reason for selecting the donee: The donor's positive assessment of the donee as a corporate campaigner is described at https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#The_organization The donor's positive assessment of cage-free campaigns is described at https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#The_cause and https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms The donor believes the donee's effectiveness will increase with scale; this is part of the reason for the grant, explained more at https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#Case_for_the_grant

Donor reason for donating that amount (rather than a bigger or smaller amount): From https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-corporate-cage-free-campaigns#Budget_and_room_for_more_funding (Section 2.2): "THL shared two potential two-year budgets for its corporate campaign expansion with us: for an additional $250,000/year, or $500,000/year. We have decided to fund THL’s full corporate campaign expansion budget of $500,000/year for the next two years."

Donor reason for donating at this time (rather than earlier or later): The grant is part of a push by the Open Philanthropy Project to fund corporate cage-free campaigning, explained in more detail at https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms The timing is therefore controlled by the timing of that push.
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: The next donation is not directly discussed, but follow-up plans are described in Section 2.4 ("Follow-up expectations"): a followup with THL staff every 3-6 months, an update at the one-year mark, and a holistic evaluation at the end of the grant period.

Donor retrospective of the donation: Followup conversation at https://www.openphilanthropy.org/sites/default/files/The_Humane_League_08-22-16_%28public%29.pdf on 2016-08-22. There are many followup grants for international expansion and general support, suggesting that the grant is considered a success. A renewal and expansion grant is made in August 2018: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/humane-league-general-support-2018

Other notes: This and other grants from Open Philanthropy to The Humane League are discussed in https://ssir.org/articles/entry/giving_in_the_light_of_reason as part of an overview of Open Philanthropy's grantmaking strategy. Affected countries: United States; announced: 2016-02-24.
Mercy For Animals | 1,000,000.00 | 40 | 2016-02 | Animal welfare/factory farming/chicken/cage-free/corporate campaign | https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns | Lewis Bollard

Donation process: This donation is part of a round of corporate cage-free campaign grants. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more. The grant page https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns#Our_process says: "As MFA is one of the main organizations running corporate animal welfare campaigns, we contacted MFA to discuss the possibility of funding the organization for corporate cage-free campaigns."

Intended use of funds (category): Direct project expenses

Intended use of funds: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns#Proposed_activities says: "MFA will use this grant to build a corporate cage-free egg campaigning team. Now that advocates have gotten almost all major fast food and food service chains to go cage-free, MFA’s goal is to get the rest of the grocery industry to go cage-free as well."

Donor reason for selecting the donee: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns#Case_for_the_grant says: "We believe corporate cage-free egg campaigns are a particularly cost-effective approach for reducing farm animal suffering [...] [MFA] seems well-positioned to campaign for corporate cage-free reforms, particularly given its past experience with campaigns in the grocery sector. [...] more than two million Facebook followers, 200,000+ member email list, celebrity contacts, network news connections, top investigations unit, and grassroots network [...] We believe the most likely outcome [...] slightly worse than the estimate of 120 hens spared per dollar that we gave previously. [...] Even if returns are sublinear, we believe cage-free egg campaigns would still be relatively cost-effective; if, for example, our $1 million grant to MFA only generates one major grocer victory over two years [...] 25 hens spared per dollar."
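The "hens spared per dollar" figures quoted above can be sanity-checked with a small sketch. The hen counts below are assumptions chosen to reproduce the quoted per-dollar numbers, not figures taken from the grant page:

```python
# Back-of-the-envelope check of the cost-effectiveness figures quoted on the
# grant page. The hen counts are illustrative assumptions, not Open Phil data.

def hens_spared_per_dollar(hens_spared: float, grant_usd: float) -> float:
    """Cost-effectiveness expressed as hens spared per dollar granted."""
    return hens_spared / grant_usd

GRANT_USD = 1_000_000  # the MFA grant amount

# A single major grocer victory reaching roughly 25 million hens over the
# grant period would yield the pessimistic 25 hens/dollar figure.
print(hens_spared_per_dollar(25_000_000, GRANT_USD))   # 25.0

# The earlier, more optimistic estimate of 120 hens/dollar implies campaigns
# reaching roughly 120 million hens per million dollars granted.
print(hens_spared_per_dollar(120_000_000, GRANT_USD))  # 120.0
```

The ratio makes explicit why the donor calls even the pessimistic scenario "relatively cost-effective": a single large corporate victory covers enough hens to keep the per-dollar figure high.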

Donor reason for donating at this time (rather than earlier or later): This donation is part of a bunch of corporate cage-free campaign spending. See https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms for more.
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/mercy-animals-corporate-cage-free-campaigns#Follow-up_expectations says: "We expect to have a conversation with MFA staff every 3-6 months for the next two years, with public notes if the conversation warrants it. At the one-year mark, we expect to provide an update on this grant, either by publishing public notes or by producing a brief write-up. Towards the end of the grant, we plan to attempt a more holistic and detailed evaluation of the grant’s performance."

Donor retrospective of the donation: A followup conversation with Nick Cooney of the grantee organization took place on 2016-08-01; see https://www.openphilanthropy.org/sites/default/files/Nick_Cooney_08-01-16_%28public%29.pdf Several followup grants from Open Phil to MFA suggest continued satisfaction with the grantee.

Other notes: Affected countries: United States; announced: 2016-03-10.
Future of Life Institute | 1,186,000.00 | 36 | 2015-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction | --

Other notes: The grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks that describes strategy and developments prior to the grant. An update on the grant was posted in April 2017 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant discussing Howie Lempel's and Daniel Dewey's impressions of the grant and of Open Phil's role and influence. Announced: 2015-08-26.

Similarity to other donors

Sorry, we couldn't find any similar donors.