Open Philanthropy Project donations made (filtered to cause areas matching AI safety)

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, tutorial in README, request for feedback to EA Forum.


Basic donor information

Country: United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions): GiveWell, Good Ventures
Best overview URL: https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username: openphilanthropy
Website: https://www.openphilanthropy.org/
Donations URL: https://www.openphilanthropy.org/giving/grants
Twitter username: open_phil
PredictionBook username: OpenPhilUnofficial
Page on philosophy informing donations: https://www.openphilanthropy.org/about/vision-and-values
Grant application process page: https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data: continuous updates
Regularity with which Donations List Website updates donations data (after donor update): continuous updates
Lag with which donor updates donations data: months
Lag with which Donations List Website updates donations data (after donor update): days
Data entry method on Donations List Website: Manual (no scripts used)
Org Watch page: https://orgwatch.issarice.com/?organization=Open+Philanthropy+Project

Brief history: The Open Philanthropy Project (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began to make strong progress in 2013, and formally separated from GiveWell in June 2017.

Brief notes on broad donor philosophy and major focus areas: The Open Philanthropy Project is focused on openness in two ways: open to ideas about cause selection, and open in explaining what it is doing. It has endorsed "hits-based giving" and works in several areas: AI risk, biosecurity and pandemic preparedness, other global catastrophic risks, criminal justice reform (United States), animal welfare, and a few others.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for more questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" where the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for the Open Philanthropy Project than for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, but the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment, and is the published grant date), and a grant announcement date (which we call the donation announcement date; this is the date the grant is announced to the mailing list and the grant page is made publicly visible). Lags are a few months between decision and grant, and a few months between grant and announcement, due to time spent on grant writeup approval.
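
To make the three-date model above concrete, here is a minimal illustrative sketch in Python (the field names are hypothetical, not the actual Donations List Website schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Grant:
    """Illustrative model of the three dates attached to an Open Phil grant."""
    decision_date: date      # internal grant decision date; not publicly revealed
    donation_date: date      # formal grant commitment; the published grant date
    announcement_date: date  # date the grant page is made publicly visible

    def decision_to_grant_lag(self) -> int:
        """Lag in days between internal decision and formal commitment."""
        return (self.donation_date - self.decision_date).days

    def grant_to_announcement_lag(self) -> int:
        """Lag in days between formal commitment and public announcement."""
        return (self.announcement_date - self.donation_date).days

# Hypothetical example: decided in January, committed in March, announced in June.
g = Grant(date(2019, 1, 15), date(2019, 3, 1), date(2019, 6, 10))
print(g.decision_to_grant_lag(), g.grant_to_announcement_lag())  # 45 101
```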

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant, https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york, was made by Cari Tuna personally. The majority of grants are financed by the Open Philanthropy Project Fund; however, the source of financing of a grant is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicit listed financing is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. Whether a grant is multi-year, and how the grant amount is distributed across years, are not always explicitly stated on the grant page; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Some grants to universities are labeled "gifts," but this is a donee classification, based on the different levels of bureaucratic overhead and funder control between grants and gifts; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information.
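
As a simple illustration of the usual (but not universal) equal-split disbursement pattern: the three-year $3.75 million grant to the Machine Intelligence Research Institute mentioned later on this page works out to $1.25 million per year. A minimal sketch:

```python
def annual_disbursements(total: float, years: int) -> list[float]:
    """Split a multi-year grant into equal annual disbursements.

    This is only the most common pattern; actual Open Phil schedules
    sometimes differ across years and are not always published.
    """
    return [total / years] * years

# The 3-year $3.75 million MIRI grant: $1.25 million per year.
print(annual_disbursements(3_750_000, 3))  # [1250000.0, 1250000.0, 1250000.0]
```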

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy Project database are not listed on Donations List Website as being under Open Philanthropy Project. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was included. All other grants publicly disclosed by the Open Philanthropy Project that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by the Open Philanthropy Project are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding

Donor donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 42 | 500,000 | 3,104,909 | 2,539 | 30,751 | 100,000 | 200,000 | 310,000 | 500,000 | 1,135,000 | 1,450,016 | 2,300,000 | 3,750,000 | 55,000,000
AI safety | 40 | 500,000 | 1,882,654 | 2,539 | 25,000 | 100,000 | 200,000 | 310,000 | 500,000 | 1,111,000 | 1,337,600 | 1,994,000 | 2,652,500 | 30,000,000
Global catastrophic risks | 1 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000
Security | 1 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000
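
The summary rows above can be reproduced from a raw list of donation amounts. Here is a minimal Python sketch; it uses linear interpolation between closest ranks for percentiles, which is an assumption (the portal's exact percentile convention is not documented here), so values at the listed percentile points may differ slightly:

```python
import statistics

def donation_stats(amounts):
    """Compute count, median, mean, min/max, and decile percentiles (USD)."""
    xs = sorted(amounts)
    n = len(xs)

    def percentile(p):
        # Linear interpolation between the two closest ranks.
        k = (n - 1) * p / 100
        lo = int(k)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

    return {
        "count": n,
        "median": statistics.median(xs),
        "mean": statistics.mean(xs),
        "minimum": xs[0],
        **{f"{p}th percentile": percentile(p) for p in range(10, 100, 10)},
        "maximum": xs[-1],
    }

# Toy example with five donation amounts drawn from the table above.
print(donation_stats([2539, 100000, 500000, 3750000, 55000000]))
```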

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the one used by the donor in all cases.

Cause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017 | 2016 | 2015
AI safety | 40 | 23 | 75,306,176.00 | 11,937,834.00 | 8,243,500.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00
Security | 1 | 1 | 55,000,000.00 | 0.00 | 55,000,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Global catastrophic risks | 1 | 1 | 100,000.00 | 0.00 | 0.00 | 0.00 | 100,000.00 | 0.00 | 0.00
Total | 42 | 24 | 130,406,176.00 | 11,937,834.00 | 63,243,500.00 | 4,153,809.00 | 43,321,048.00 | 6,563,985.00 | 1,186,000.00
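
This table, the subcause-area table below, and the incremental and cumulative graphs are all simple group-by aggregations over the underlying donations list. A minimal sketch of that aggregation (the rows and field names here are illustrative, not the portal's actual schema or code):

```python
from collections import defaultdict

# Illustrative rows: (donee, cause area, year, amount in USD).
donations = [
    ("OpenAI", "AI safety", 2017, 30_000_000),
    ("Machine Intelligence Research Institute", "AI safety", 2019, 2_652_500),
    ("Center for Security and Emerging Technology", "Security", 2019, 55_000_000),
]

# Incremental totals by (cause area, year), as in the table above.
by_cause_year = defaultdict(float)
for donee, cause, year, amount in donations:
    by_cause_year[(cause, year)] += amount

# Cumulative totals per cause area, the basis of the cumulative graphs.
years = sorted({year for _, year in by_cause_year})
cumulative = {}
for cause in {cause for cause, _ in by_cause_year}:
    running = 0.0
    for year in years:
        running += by_cause_year.get((cause, year), 0.0)
        cumulative[(cause, year)] = running

print(dict(by_cause_year))
print(cumulative)
```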

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017 | 2016 | 2015
AI safety | 40 | 23 | 75,306,176.00 | 11,937,834.00 | 8,243,500.00 | 4,153,809.00 | 43,221,048.00 | 6,563,985.00 | 1,186,000.00
Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety | 1 | 1 | 55,000,000.00 | 0.00 | 55,000,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Global catastrophic risks/AI safety | 1 | 1 | 100,000.00 | 0.00 | 0.00 | 0.00 | 100,000.00 | 0.00 | 0.00
Classified total | 42 | 24 | 130,406,176.00 | 11,937,834.00 | 63,243,500.00 | 4,153,809.00 | 43,321,048.00 | 6,563,985.00 | 1,186,000.00
Unclassified total | 0 | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Total | 42 | 24 | 130,406,176.00 | 11,937,834.00 | 63,243,500.00 | 4,153,809.00 | 43,321,048.00 | 6,563,985.00 | 1,186,000.00

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee | Cause area | Metadata | Total | 2020 | 2019 | 2018 | 2017 | 2016 | 2015
Center for Security and Emerging Technology | | | 55,000,000.00 | 0.00 | 55,000,000.00 | 0.00 | 0.00 | 0.00 | 0.00
OpenAI | AI safety | FB Tw WP Site TW | 30,000,000.00 | 0.00 | 0.00 | 0.00 | 30,000,000.00 | 0.00 | 0.00
Machine Intelligence Research Institute | AI safety | FB Tw WP Site CN GS TW | 14,756,250.00 | 7,703,750.00 | 2,652,500.00 | 150,000.00 | 3,750,000.00 | 500,000.00 | 0.00
Open Phil AI Fellowship | | | 5,760,000.00 | 2,300,000.00 | 2,325,000.00 | 1,135,000.00 | 0.00 | 0.00 | 0.00
Center for Human-Compatible AI | AI safety | Site TW | 5,755,550.00 | 0.00 | 200,000.00 | 0.00 | 0.00 | 5,555,550.00 | 0.00
University of California, Berkeley | | FB Tw WP Site | 3,706,016.00 | 0.00 | 1,111,000.00 | 1,145,000.00 | 1,450,016.00 | 0.00 | 0.00
Ought | AI safety | Site | 3,118,333.00 | 1,593,333.00 | 1,000,000.00 | 525,000.00 | 0.00 | 0.00 | 0.00
Montreal Institute for Learning Algorithms | AI capabilities/AI safety | Site | 2,400,000.00 | 0.00 | 0.00 | 0.00 | 2,400,000.00 | 0.00 | 0.00
Future of Humanity Institute | Global catastrophic risks/AI safety/Biosecurity and pandemic preparedness | FB Tw WP Site TW | 1,994,000.00 | 0.00 | 0.00 | 0.00 | 1,994,000.00 | 0.00 | 0.00
UCLA School of Law | | Tw WP Site | 1,536,222.00 | 0.00 | 0.00 | 0.00 | 1,536,222.00 | 0.00 | 0.00
Stanford University | | FB Tw WP Site | 1,465,139.00 | 0.00 | 0.00 | 102,539.00 | 1,362,600.00 | 0.00 | 0.00
Berkeley Existential Risk Initiative | AI safety/other global catastrophic risks | Site TW | 1,358,890.00 | 0.00 | 955,000.00 | 0.00 | 403,890.00 | 0.00 | 0.00
Future of Life Institute | AI safety/other global catastrophic risks | FB Tw WP Site | 1,286,000.00 | 0.00 | 0.00 | 0.00 | 100,000.00 | 0.00 | 1,186,000.00
University of Oxford | | FB Tw WP Site | 429,770.00 | 0.00 | 0.00 | 429,770.00 | 0.00 | 0.00 | 0.00
The Wilson Center | | FB Tw WP Site | 400,000.00 | 0.00 | 0.00 | 400,000.00 | 0.00 | 0.00 | 0.00
WestExec | | | 310,000.00 | 310,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Yale University | | FB Tw WP Site | 299,320.00 | 0.00 | 0.00 | 0.00 | 299,320.00 | 0.00 | 0.00
George Mason University | | FB WP Site | 277,435.00 | 0.00 | 0.00 | 0.00 | 0.00 | 277,435.00 | 0.00
Electronic Frontier Foundation | | FB Tw WP Site | 199,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 199,000.00 | 0.00
AI Scholarships | | | 159,000.00 | 0.00 | 0.00 | 159,000.00 | 0.00 | 0.00 | 0.00
AI Impacts | AI safety | Site | 132,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00 | 32,000.00 | 0.00
RAND Corporation | | FB Tw WP Site | 30,751.00 | 30,751.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Distill | AI capabilities/AI safety | Tw Site | 25,000.00 | 0.00 | 0.00 | 0.00 | 25,000.00 | 0.00 | 0.00
GoalsRL | AI safety | Site | 7,500.00 | 0.00 | 0.00 | 7,500.00 | 0.00 | 0.00 | 0.00
Total | -- | -- | 130,406,176.00 | 11,937,834.00 | 63,243,500.00 | 4,153,809.00 | 43,321,048.00 | 6,563,985.00 | 1,186,000.00

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Influencer | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017
Luke Muehlhauser | 4 | 4 | 55,740,751.00 | 340,751.00 | 55,000,000.00 | 400,000.00 | 0.00
Daniel Dewey | 19 | 10 | 12,006,545.00 | 0.00 | 5,591,000.00 | 3,174,039.00 | 3,241,506.00
Claire Zabel|Committee for Effective Altruism Support | 2 | 1 | 10,356,250.00 | 7,703,750.00 | 2,652,500.00 | 0.00 | 0.00
Nick Beckstead | 4 | 4 | 4,579,090.00 | 0.00 | 0.00 | 429,770.00 | 4,149,320.00
Catherine Olsson|Daniel Dewey | 1 | 1 | 2,300,000.00 | 2,300,000.00 | 0.00 | 0.00 | 0.00
Committee for Effective Altruism Support | 1 | 1 | 1,593,333.00 | 1,593,333.00 | 0.00 | 0.00 | 0.00
Helen Toner | 1 | 1 | 1,536,222.00 | 0.00 | 0.00 | 0.00 | 1,536,222.00
Claire Zabel | 1 | 1 | 150,000.00 | 0.00 | 0.00 | 150,000.00 | 0.00
Classified total | 33 | 19 | 88,262,191.00 | 11,937,834.00 | 63,243,500.00 | 4,153,809.00 | 8,927,048.00
Unclassified total | 9 | 9 | 42,143,985.00 | 0.00 | 0.00 | 0.00 | 34,394,000.00
Total | 42 | 24 | 130,406,176.00 | 11,937,834.00 | 63,243,500.00 | 4,153,809.00 | 43,321,048.00

Graph of spending by influencer and year (incremental, not cumulative)


Graph of spending by influencer and year (cumulative)


Donation amounts by disclosures and year

If you hover over a cell for a given disclosure and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Disclosures | Number of donations | Number of donees | Total | 2017 | 2016 | 2015
Paul Christiano | 2 | 2 | 30,500,000.00 | 30,000,000.00 | 500,000.00 | 0.00
Dario Amodei | 1 | 1 | 30,000,000.00 | 30,000,000.00 | 0.00 | 0.00
Holden Karnofsky | 1 | 1 | 30,000,000.00 | 30,000,000.00 | 0.00 | 0.00
Daniel Dewey | 4 | 4 | 5,171,435.00 | 4,394,000.00 | 777,435.00 | 0.00
Nick Beckstead | 4 | 4 | 3,957,435.00 | 1,994,000.00 | 777,435.00 | 1,186,000.00
Chris Olah | 1 | 1 | 2,400,000.00 | 2,400,000.00 | 0.00 | 0.00
Carl Shulman | 1 | 1 | 1,994,000.00 | 1,994,000.00 | 0.00 | 0.00
Unknown, generic, or multiple | 2 | 2 | 1,686,000.00 | 0.00 | 500,000.00 | 1,186,000.00
Helen Toner | 2 | 2 | 1,686,000.00 | 0.00 | 500,000.00 | 1,186,000.00
Luke Muehlhauser | 2 | 2 | 1,686,000.00 | 0.00 | 500,000.00 | 1,186,000.00
Ben Hoffman | 1 | 1 | 1,186,000.00 | 0.00 | 0.00 | 1,186,000.00
Jacob Steinhardt | 1 | 1 | 500,000.00 | 0.00 | 500,000.00 | 0.00
Classified total | 6 | 6 | 36,357,435.00 | 34,394,000.00 | 777,435.00 | 1,186,000.00
Unclassified total | 36 | 20 | 94,048,741.00 | 8,927,048.00 | 5,786,550.00 | 0.00
Total | 42 | 24 | 130,406,176.00 | 43,321,048.00 | 6,563,985.00 | 1,186,000.00

Graph of spending by disclosures and year (incremental, not cumulative)


Graph of spending by disclosures and year (cumulative)


Donation amounts by country and year

Sorry, we couldn't find any country information.

Full list of documents in reverse chronological order (111 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
Our Progress in 2019 and Plans for 2020 | 2020-05-08 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | | Broad donor strategy | Criminal justice reform|Animal welfare|AI safety|Effective altruism | The post compares progress made by the Open Philanthropy Project in 2019 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2018-and-plans-2019 and then lays out plans for 2020. The post notes that grantmaking, including grants to GiveWell top charities, was over $200 million. The post reviews the following from 2019: continued grantmaking, growth of the operations team, impact evaluation (with good progress in evaluation of giving in criminal justice reform and animal welfare), worldview investigations (which were harder than anticipated, resulting in slower progress), other cause prioritization work, hiring and other capacity building, and outreach to external donors.
How Philanthropists are Tackling COVID-19 | 2020-03-18 | Abby Schultz | Barron's | Open Philanthropy Project, Bill and Melinda Gates Foundation, Wellcome Trust, Mastercard Impact Fund, Schmidt Futures | COVID-19 Therapeutics Accelerator, Sherlock Biosciences, Johns Hopkins Center for Health Security, University of Washington (Institute for Protein Design) | Review of current state of cause area | Biosecurity and pandemic preparedness | The article describes how private philanthropy is helping in the fight against the novel coronavirus and the COVID-19 pandemic caused by it. The role of the Open Philanthropy Project in funding Sherlock Biosciences as well as the Johns Hopkins Center for Health Security in prior years is described. The article also describes the joint financing of the COVID-19 Therapeutics Accelerator by the Gates Foundation, Wellcome Trust, and Mastercard Impact Fund.
Update on the Global Priorities Institute's (GPI) activities (GW, IR) | 2019-12-24 | Hilary Greaves | Global Priorities Institute | Open Philanthropy Project | Global Priorities Institute | Donee periodic update | Cause prioritization | The Global Priorities Institute shares a short annual report, also available at https://globalprioritiesinstitute.org/global-priorities-institute-annual-report-2018-19/ on its website. In addition, the post contains links for following GPI's research and current opportunities. The annual report has three sections: (1) Research (agenda focused on "longtermism") (2) Academic outreach (various two-day workshops and the Early Career Conference Programme (ECCP)) (3) Current team and growth ambitions (plans to expand, helped by £2.5m from the Open Philanthropy Project and £3m from other private donors; fundraising is ongoing).
Effective Altruism Foundation: Plans for 2020 (GW, IR) | 2019-12-23 | Jonas Vollmer | Effective Altruism Foundation | Open Philanthropy Project | Effective Altruism Foundation, Raising for Effective Giving, Wild-Animal Suffering Research, Utility Farm, Wild Animal Initiative, Sentience Politics | Donee periodic update | Effective altruism/movement growth/s-risk reduction | The document includes the 2019 review and plans for 2020 of the Effective Altruism Foundation (EAF). Key highlights: EAF plans to change its name in 2020 as a rebranding effort to highlight its focus on s-risk reduction rather than effective altruism; as part of this, the Foundational Research Institute brand will also be deprecated. Wild-Animal Suffering Research and Utility Farm merged to form Wild Animal Initiative, which is now completely separate from EAF. Raising for Effective Giving and Sentience Politics continue to be housed under EAF. The post also describes communication guidelines developed along with Nick Beckstead of the Open Philanthropy Project (which also made a $1 million grant to EAF). The guidelines "recommend highlighting beliefs and priorities that are important to the s-risk-oriented community" and "recommend communicating in a more nuanced manner about pessimistic views of the long-term future by considering highlighting moral cooperation and uncertainty, focusing more on practical questions if possible, and anticipating potential misunderstandings and misrepresentations." The post also says the guidelines will soon be made public, and that it was a mistake not to announce the guidelines earlier; doing so might have addressed https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/ and related concerns.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy Project, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 2019 | 2019-12-18 | Holden Karnofsky | Open Philanthropy Project | Chloe Cockburn, Jesse Rothman, Michelle Crentsil, Amanda Hungerford, Lewis Bollard, Persis Eskander, Alexander Berger, Chris Somerville, Heather Youngs, Claire Zabel | National Council for Incarcerated and Formerly Incarcerated Women and Girls, Life Comes From It, Worth Rises, Wild Animal Initiative, Sinergia Animal, Center for Global Development, International Refugee Assistance Project, California YIMBY, Engineers Without Borders, 80,000 Hours, Centre for Effective Altruism, Future of Humanity Institute, Global Priorities Institute, Machine Intelligence Research Institute, Ought | Donation suggestion list | Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safety | Continuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support," which lists organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents their views.
The Center for Election Science Year End EA Appeal (GW, IR) | 2019-12-17 | Aaron Hamlin | Effective Altruism Forum | Open Philanthropy Project | The Center for Election Science | Donee donation case | Politics | Aaron Hamlin of the Center for Election Science (CES), an organization that promotes approval voting in the United States, posts an end-of-year fundraising appeal for CES to the Effective Altruism Forum. The post talks about the finances of CES, and compares the funding of CES to the much larger total funding going to ranked choice voting (RCV), a competing effort that he considers inferior. He argues that with slightly more funds, CES could show much more than RCV in terms of victories in adoption of approval voting, per dollar spent.
How frequently do ACE and Open Phil agree about animal charities? (GW, IR) | 2019-12-17 | Ben West | Effective Altruism Forum | Open Philanthropy Project, Effective Altruism Funds: Animal Welfare Fund | Animal Charity Evaluators, Compassion in World Farming International, Animal Ethics, Faunalytics, Sociedade Vegetariana Brasileira | Miscellaneous commentary | Animal welfare | Ben West compares the grantees of the Open Philanthropy Project (Open Phil) in its focus area of farm animal welfare against the charities recommended by Animal Charity Evaluators (ACE). He finds a substantial overlap: Open Phil has made grants to all charities that ACE has ever given top charity status, about half of the charities ACE has ever given standout charity status, and only one charity that ACE reviewed but did not recommend. Also, "5% of the charities ACE did an "exploratory" review of received a grant, as did 3% of the ones they "considered" but did not review." A spreadsheet https://docs.google.com/spreadsheets/d/1NRSVnSgg33vtOByfYwCFhB6VrytZGYeJ/edit with the data is linked. The post also notes: "Three charities which were named “Standout Charities” by ACE but did not receive Open Phil grants did receive grants from the Centre for Effective Altruism’s Animal Welfare Fund (Animal Ethics, Faunalytics, and Sociedade Vegetariana Brasileira)."
Recommendation to Open Philanthropy for Grants to Top Charities | 2019-11-26 | | GiveWell | Open Philanthropy Project, Good Ventures/GiveWell top and standout charities | Malaria Consortium, Helen Keller International, Sightsavers, Against Malaria Foundation, END Fund, GiveDirectly, Development Media International, Dispensers for Safe Water, Food Fortification Initiative, Global Alliance for Improved Nutrition, Georgetown University Initiative on Innovation, Development, and Evaluation, Iodine Global Network, Living Goods, Project Healthy Children | Periodic donation list documentation | Global health and development | The document details GiveWell's recommendation in 2019 for grants by Good Ventures (via the Open Philanthropy Project) to GiveWell top and standout charities. The overall amount of money recommended for allocation is $54.6 million, and the document explains that Open Phil's calculation that it may make sense to spend down more slowly was the reason for reducing the allocation from last year. It discusses the principles used for allocation: (1) Put significant weight on cost-effectiveness estimates, (2) Consider additional information not explicitly modeled about the organization, (3) Consider additional information not explicitly modeled about the funding gap, (4) Assess funding gaps at the margin, (5) Default to not imposing restrictions on charity spending, (6) Default to funding on a 3-year horizon, and (7) Ensure charities are incentivized to engage with the process. The three charities that get significant grants are Malaria Consortium for its SMC program ($33.9 million), Helen Keller International ($9.7 million), and Sightsavers ($2.7 million). Against Malaria Foundation, END Fund, and GiveDirectly receive the minimum "incentive grant" amount of $2.5 million that all top charities should receive. The top charity Deworm the World Initiative is not given an incentive grant because it received a previous GiveWell discretionary grant that more than covers the incentive grant amount. 8 standout charities get $100,000 each.
ALLFED 2019 Annual Report and Fundraising Appeal (GW, IR) | 2019-11-23 | Aron Mill | Alliance to Feed the Earth in Disasters | Berkeley Existential Risk Initiative, Donor lottery, Effective Altruism Grants, Open Philanthropy Project | Alliance to Feed the Earth in Disasters, Future of Humanity Institute | Donee donation case | Alternative foods | Aron Mill provides a summary of the work of the Alliance to Feed the Earth in Disasters (ALLFED) in 2019. He lists key supporters as well as partners that ALLFED worked with during the year. The blog post then makes a fundraising appeal and a case for funding ALLFED. Sections of the blog post include: (1) research output, (2) preparedness and alliance-building, (3) ALLFED team, (4) current projects, and (5) projects in need of funding.
Message exchange with EAF | 2019-11-12 | Simon Knutsson | | Open Philanthropy Project | Effective Altruism Foundation | Reasoning supplement | Effective altruism|Global catastrophic risks | This is a supplement to https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/ The supplement documents an email exchange between Knutsson and Stefan Torges of the Effective Altruism Foundation where Knutsson asks Torges for comment on some of the points in the article. Torges's reply is not quoted as he did not give permission to quote the replies, but Knutsson summarizes the replies as saying that EAF can't share further information, and does not wish to engage Knutsson on the issue.
Co-funding Partnership with Ben Delo | 2019-11-11 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project, Ben Delo | | Partnership | AI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Effective altruism | Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, recently signed the Giving Pledge. He is entering into a partnership with the Open Philanthropy Project, providing funds, initially in the $5 million per year range, to support Open Phil's longtermist grantmaking, in areas including AI safety, biosecurity and pandemic preparedness, global catastrophic risks, and effective altruism. Later, the Machine Intelligence Research Institute (MIRI) would reveal at https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ that, of a $7.7 million grant from Open Phil, $1.46 million is coming from Ben Delo.
E-mail exchange with the Open Philanthropy Project | 2019-11-10 | Simon Knutsson | | Open Philanthropy Project | Effective Altruism Foundation | Reasoning supplement | Effective altruism|Global catastrophic risks | This is a supplement to https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/ The supplement documents an email exchange between Knutsson and Michael Levine of the Open Philanthropy Project where Knutsson asks Levine for comment on some of the points in the article. Levine's reply is not quoted as he did not give permission to quote the replies, but Knutsson summarizes the replies as saying that "[Open Phil] do not have anything to add beyond the grant page https://www.openphilanthropy.org/giving/grants/effective-altruism-foundation-research-operations".
Problems in effective altruism and existential risk and what to do about them | 2019-10-16 | Simon Knutsson | | Open Philanthropy Project, Effective Altruism Foundation | Centre for Effective Altruism, Effective Altruism Foundation, Future of Humanity Institute | Miscellaneous commentary | Effective altruism|Global catastrophic risks | Simon Knutsson, a Ph.D. student who previously worked at GiveWell and has, since then, worked on animal welfare and on s-risks, writes about what he sees as problematic dynamics in the effective altruism and x-risk communities. Specifically, he is critical of what he sees as behind-the-scenes coordination work on messaging, between many organizations in the space, notably the Open Philanthropy Project and the Effective Altruism Foundation, and the possible use of grant money to pressure EAF into pushing for guidelines for writers to not talk about s-risks in specific ways. He is also critical of what he sees as the one-sided nature of the syllabi and texts produced by the Centre for Effective Altruism (CEA). The author notes that people have had different reactions to his text, with some considering the behavior described unproblematic, and others agreeing with him that it is problematic and deserves the spotlight. The post is also shared to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/EescnoaBJsQWz4rii/problems-in-effective-altruism-and-what-to-do-about-them (GW, IR) where it gets a lot of criticism in the comments from people including Peter Hurford and Holly Elmore.
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR) | 2019-09-10 | Ryan Carey | Effective Altruism Forum | Founders Pledge, Open Philanthropy Project | OpenAI, Machine Intelligence Research Institute | Broad donor strategy | AI safety|Global catastrophic risks|Scientific research|Politics | Ryan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety, specifically, funding low-mid prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals.
How Life Sciences Actually Work: Findings of a Year-Long Investigation (GW, IR) | 2019-08-16 | Alexey Guzey | Effective Altruism Forum | National Institutes of Health, Howard Hughes Medical Institute, Chan Zuckerberg Initiative, Open Philanthropy Project | Amgen, Life Sciences Research Foundation, Harvard University, Massachusetts Institute of Technology, Stanford University | Review of current state of cause area | Biomedical research | Guzey surveys the current state of biomedical research, primarily in academia in the United States. His work is the result of interviewing about 60 people. Emergent Ventures provided financial support. His takeaways: (1) Life science is not slowing down (2) Nothing works the way you would naively think it does (for better or for worse) (3) If you're smart and driven, you'll find a way in (4) Nobody cares if you're a genius (5) Almost all biologists are solo founders. This is probably suboptimal (6) There's insufficient space for people who just want to be researchers and not managers (7) Peer review is a disaster (8) Nobody agrees on whether big labs are good or bad (9) Senior scientists are bound by their students' incentives (10) Universities seem to maximize their profits, with good research being a side-effect (11) Large parts of modern scientific literature are wrong (12) Raising money is very difficult even for famous scientists. Final conclusion: "academia has a lot of problems but it's less broken than it seems from the outside."
Questions We Ask Ourselves Before Making a Grant | 2019-08-06 | Michael Levine | Open Philanthropy Project | Open Philanthropy Project, Sandler Foundation | Center for Security and Emerging Technology, University of Washington (Institute for Protein Design) | Broad donor strategy | | Michael Levine describes some guidance that the Open Philanthropy Project has put together for program officers on questions to consider before making a grant. This complements guidance published three years ago about internal grant writeups: https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process
GiveWell’s Top Charities Are (Increasingly) Hard to Beat | 2019-07-09 | Alexander Berger | Open Philanthropy Project | Open Philanthropy Project | GiveDirectly, Against Malaria Foundation, Schistosomiasis Control Initiative, Target Malaria, JustLeadershipUSA | Broad donor strategy | Global health and development|Criminal justice reform|Scientific research | In the blog post, Alexander Berger discusses how Open Philanthropy Project donations for near-term human well-being (primarily in the areas of criminal justice reform and scientific research) were originally compared against a cost-effectiveness benchmark of direct cash transfers, set at 100x (every $1 donated should yield $100 in benefits). However, since GiveWell has recently made its cost-effectiveness calculations for top charities more thorough, and now estimates that top charities are 5-15x as cost-effective as cash (or 500-1500x, with 1000x as a median), Berger is now comparing all the existing near-term human well-being grants against the 1000x benchmark. He finds that, using the back-of-the-envelope calculations (BOTECs) done at the time of justifying the grants, many of the criminal justice reform grants do not clear the bar; in total only $32 million of the grants clear the bar, and about half of that is a single grant to Target Malaria. Berger links to https://docs.google.com/document/d/1GsE2_TNWn0x6MWL1PTdkZT2vQNFW8VFBslC5qjk4sgo/edit?ts=5cc10604 for some sample BOTECs.
Explaining Our Bet on Sherlock Biosciences’ Innovations in Viral Diagnostics | 2019-06-10 | Heather Youngs, Chris Somerville | Open Philanthropy Project | Open Philanthropy Project | Sherlock Biosciences | Single donation documentation | Scientific research | In this new-style blog post, the reasons for the Open Philanthropy Project grant https://www.openphilanthropy.org/focus/scientific-research/miscellaneous/sherlock-biosciences-research-viral-diagnostics to Sherlock Biosciences are explained in a conversational style. The conversation participants include Michael Levine (Communications Officer) and the grant investigators Chris Somerville and Heather Youngs.
Has the Giving Pledge Changed Giving? A proposal unveiled nearly a decade ago was intended to turbocharge philanthropy. There’s little evidence so far it’s doing that. | 2019-06-04 | Marc Gunther | Chronicle of Philanthropy | Warren Buffett, Bill and Melinda Gates Foundation, Bloomberg Philanthropies, Chan Zuckerberg Initiative, Dalio Philanthropies, George Lucas and Mellody Hobson, Good Ventures, Open Philanthropy Project, Simons Foundation | | Miscellaneous commentary | | In a long-form article for the Chronicle of Philanthropy, Marc Gunther describes the history of the Giving Pledge, created ten years ago at a meeting including Bill and Melinda Gates, Warren Buffett, Ted Turner, Michael Bloomberg, Charles Feeney, George Soros, Eli Broad, and Oprah Winfrey. Gunther writes that the Giving Pledge has failed to increase the overall level of charitable giving in general, and has not inspired much more charitable giving even among the superrich, to whom it was targeted. The article says that fewer than one in six billionaires in the United States have taken the pledge, and moreover, many of those who took the pledge had either already given or already been planning to give large amounts to charity, so the counterfactual impact of the pledge was low. The article includes a table of the current net worth and total donations so far by the wealthiest signatories of the Giving Pledge, as well as profiles of several Giving Pledge signatories.
80,000 Hours Annual Review – December 2018 | 2019-05-07 | Benjamin Todd | 80,000 Hours | Open Philanthropy Project, Berkeley Existential Risk Initiative, Effective Altruism Funds: Meta Fund, Effective Altruism Funds: Long-Term Future Fund, Effective Altruism Funds: Animal Welfare Fund, Effective Altruism Funds: Global Health and Development Fund | 80,000 Hours | Donee periodic update | Effective altruism/movement growth/career counseling | This blog post is the annual self-review by 80,000 Hours, originally written in December 2018. Publication was deferred because 80,000 Hours was waiting to hear back on the status of some large grants (in particular, one from the Open Philanthropy Project), but most of the content is still from the December 2018 draft. The post goes into detail about 80,000 Hours' progress in 2018, impact and plan changes, and future expansion plans. Funding gaps are discussed (the funding gap for 2019 is $400,000, and further money will be saved for 2020 and 2021). Grants from the Open Philanthropy Project, BERI, and the Effective Altruism Funds (EA Meta Fund) are mentioned.
Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? | 2019-04-25 | Filippa Lentzos | Bulletin of the Atomic Scientists | Open Philanthropy Project | | Third-party coverage of donor strategy | Biosecurity and pandemic preparedness | Filippa Lentzos examines the Open Philanthropy Project's funding in the biosecurity field. She argues that the scale and speed of Open Phil's grantmaking may hurt the field by shaping the agenda of the field to be too focused on global catastrophic risks, and to be less diverse on the whole. The post is linked and discussed on the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/Kkw8uDwGuNnBhiYHi/will-splashy-philanthropy-cause-the-biosecurity-field-to (GW, IR) by Tessa Alexanian. Howie Lempel, in the comments, describes more of the post author's views based on her past article https://thebulletin.org/2017/07/ignore-bill-gates-where-bioweapons-focus-really-belongs/ Others who share thoughts in the comments include Alex Foster, Denise Melchin, and Rob Bensinger.
Our Progress in 2018 and Plans for 2019 | 2019-04-15 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | | Broad donor strategy | Criminal justice reform|Animal welfare | The post compares progress made by the Open Philanthropy Project in 2018 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2017-and-plans-2018 and then lays out plans for 2019. The post notes that grantmaking was sustained at over $100 million. Hints of impact in the areas of criminal justice reform and animal welfare continue to be seen. Hiring to grow research analyst capacity was a top focus, led by Luke Muehlhauser, with the results detailed in the blog post https://www.openphilanthropy.org/blog/reflections-our-2018-generalist-research-analyst-recruiting by Muehlhauser. Operations capacity grew significantly under Beth Jones, who joined in May as Director of Operations.
New grants from the Open Philanthropy Project and BERI | 2019-04-01 | Rob Bensinger | Machine Intelligence Research Institute | Open Philanthropy Project, Berkeley Existential Risk Initiative | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces two grants to it: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (of a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/ The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/
With Launch Of New CRISPR Company, Competition Extends To Diagnostics | 2019-03-21 | Ellie Kincaid | Forbes | Open Philanthropy Project | Sherlock Biosciences | Launch | Scientific research | The article describes the launch of Sherlock Biosciences, a company that aims to use CRISPR technology for diagnostics. It mentions the $17.5 million donation https://www.openphilanthropy.org/focus/scientific-research/miscellaneous/sherlock-biosciences-research-viral-diagnostics plus undisclosed investment from the Open Philanthropy Project, as well as separate investment. Together, Sherlock Biosciences has raised $35 million.
Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security | 2019-03-20 | Tate Williams | Inside Philanthropy | Open Philanthropy Project | Center for Security and Emerging Technology | Third-party coverage of donor strategy | AI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Security | The article focuses on grantmaking by the Open Philanthropy Project in the areas of global catastrophic risks and security, particularly in AI safety and biosecurity and pandemic preparedness. It includes quotes from Luke Muehlhauser, Senior Research Analyst at the Open Philanthropy Project and the investigator for the $55 million grant https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology to the Center for Security and Emerging Technology (CSET). Muehlhauser was previously Executive Director at the Machine Intelligence Research Institute. It also includes a quote from Holden Karnofsky, who sees the early interest of effective altruists in AI safety as prescient. The CSET grant is discussed in the context of the Open Philanthropy Project's hits-based giving approach, as well as the interest in the policy space in better understanding of safety and governance issues related to technology and AI.
Committee for Effective Altruism Support | 2019-02-27 | | Open Philanthropy Project | Open Philanthropy Project | Centre for Effective Altruism, Berkeley Existential Risk Initiative, Center for Applied Rationality, Machine Intelligence Research Institute, Future of Humanity Institute | Broad donor strategy | Effective altruism|AI safety | The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community," including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose sizes were determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2018 | 2018-12-20 | Holden Karnofsky | Open Philanthropy Project | Chloe Cockburn, Lewis Bollard, Amanda Hungerford, Alexander Berger, Luke Muehlhauser | National Council for Incarcerated and Formerly Incarcerated Women and Girls, Texas Organizing Project, Effective Altruism Funds: Meta Fund, Effective Altruism Funds: Long-Term Future Fund, Effective Altruism Funds: Animal Welfare Fund, Effective Altruism Funds: Global Health and Development Fund, The Humane League, Center for Global Development, International Refugee Assistance Project, Donor lottery | Donation suggestion list | Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism | Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it," a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups. The post continues a tradition of similar posts published once a year.
Scaling OFTW: Our First Hire And Funding From The Open Philanthropy Project | 2018-08-01 | Rossa O'Keeffe-O'Donovan | One for the World | Open Philanthropy Project, Luke Ding | One for the World | Donee periodic update | Effective altruism/fundraising | One for the World announces grants to it recommended by GiveWell, of $153,750 from the Open Philanthropy Project and $51,250 from Luke Ding. The funding is to cover two years of expenses, including hiring a COO for the first year, and a CEO in the second year. The post also announces the hiring of Evan McVail as COO, fulfilling part of the plan for the grant.
Occasional update July 5 2018 | 2018-07-05 | Katja Grace | AI Impacts | Open Philanthropy Project, Anonymous | AI Impacts | Donee periodic update | AI safety | Katja Grace gives an update on the situation with AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation are mentioned, as are team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn.
The Most Unorthodox Big Foundation in America | 2018-05-18 | Marc Gunther | Nonprofit Chronicles | Open Philanthropy Project | | Third-party coverage of donor strategy | | The article primarily links to and explains https://ssir.org/articles/entry/giving_in_the_light_of_reason which is a much longer article about the Open Philanthropy Project and its grantmaking. Unlike the linked article, the author goes more into his personal take on the subject, including his recent visit to Rwanda and how that has shifted him in the direction of donating to meet present-day needs.
Giving in the Light of Reason | 2018-05-17 | Marc Gunther | Stanford Social Innovation Review | Open Philanthropy Project, Bill and Melinda Gates Foundation, Future Justice Fund, Good Ventures | The Humane League, Direct Action Everywhere, Target Malaria, University of Washington (Institute for Protein Design), Alliance for Safety and Justice, The Marshall Project | Third-party coverage of donor strategy | Criminal justice reform|Animal welfare|Scientific research | An in-depth profile of the Open Philanthropy Project and its grantmaking, with a particular focus on discussion of the top grants in animal welfare and scientific research. The organizational history, grantmaking process, and internal culture are also discussed. Referenced in https://nonprofitchronicles.com/2018/05/18/the-most-unorthodox-big-foundation-in-america/ by the same author.
Update on Partnerships with External Donors | 2018-05-16 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project, Future Justice Fund, Accountable Justice Action Fund, Effective Altruism Funds: Meta Fund, Effective Altruism Funds: Long-Term Future Fund, Effective Altruism Funds: Animal Welfare Fund, Effective Altruism Funds: Global Health and Development Fund | Accountable Justice Action Fund, Effective Altruism Funds: Meta Fund, Effective Altruism Funds: Long-Term Future Fund, Effective Altruism Funds: Animal Welfare Fund, Effective Altruism Funds: Global Health and Development Fund | Miscellaneous commentary | Criminal justice reform|Animal welfare | The Open Philanthropy Project describes how it works with donors other than Good Ventures (the foundation under Dustin Moskovitz and Cari Tuna that accounts for almost all Open Phil grantmaking). The blog post reiterates that the long-term goal is to inform many different funders, but that is not a short-term priority because the Open Philanthropy Project is not moving enough money to even achieve the total spend that Good Ventures is willing to go up to. The post mentions that Chloe Cockburn, the program officer for criminal justice reform, is working with other funders in criminal justice reform, and they have created a separate vehicle, the Accountable Justice Action Fund, to pool resources. Also, Mike and Kaitlyn Krieger, who previously worked with the Open Philanthropy Project, now have their own criminal justice-focused Future Justice Fund, and are getting help from Cockburn to allocate money from the fund. For causes outside of criminal justice reform, the role of Effective Altruism Funds (whose grantmaking is managed by Open Philanthropy Project staff members) is mentioned. Also, Lewis Bollard is said to have moved ~10% as much money through advice to other donors as he has moved through the Open Philanthropy Project.
With the Backing of Top Funders, This Group is Taking the Criminal Justice System to Court | 2018-04-24 | Philip Rojc | Inside Philanthropy | MacArthur Foundation, Laura and John Arnold Foundation, Open Philanthropy Project, Chan Zuckerberg Initiative | Civil Rights Corps | Evaluator review of donee | Criminal justice reform/litigation | The article describes the efforts of Civil Rights Corps, an organization dedicated to challenging criminal justice abuses in court. It includes the Open Philanthropy Project and Chan Zuckerberg Initiative among its funders.
This Powerhouse Funder is Still New to Scientific Research. Where Are Grants Going? | 2018-04-17 | Paul Karon | Inside Philanthropy | Open Philanthropy Project | MIT Synthetic Neurobiology Group, Beth Israel Deaconess Medical Center, University of Washington (Institute for Protein Design) | Third-party coverage of donor strategy | Scientific research | The article discusses grantmaking by the Open Philanthropy Project in the domain of scientific research, noting that the grants were often made in areas overlapping with other interests (such as global health). The large donation to the Institute for Protein Design in connection with influenza research is highlighted.
Hiring analytical thinkers to help give away billions | 2018-03-30 | Ajeya Cotra | Medium | Open Philanthropy Project | | Job advertisement | | Open Philanthropy Project research analyst Ajeya Cotra speaks highly of the work there, and highlights the new research analyst positions the organization is hiring for. The post would be shared on Facebook by Claire Zabel at https://www.facebook.com/claire.zabel/posts/10216805589078395 and 80,000 Hours at https://www.facebook.com/80000Hours/posts/1703309639750767
Managing Funder-Grantee Dynamics Responsibly | 2018-03-30 | Michael Levine | Open Philanthropy Project | Open Philanthropy Project | | Miscellaneous commentary | | Michael Levine of the Open Philanthropy Project discusses how big donors (like the Open Philanthropy Project) can unduly influence the plans of existing and potential grantees, and what the organization is doing to mitigate that impact.
Hi, I'm Holden Karnofsky. AMA about jobs at Open Philanthropy | 2018-03-26 | Holden Karnofsky | Effective Altruism Forum | Open Philanthropy Project | | Job advertisement | | Holden Karnofsky opens himself up to questions about what it is like to work at the Open Philanthropy Project. This is part of a concerted push by Open Phil to increase its number of research analysts.
Our Progress in 2017 and Plans for 2018 | 2018-03-20 | Holden Karnofsky | Open Philanthropy Project | Open Philanthropy Project | | Broad donor strategy | Criminal justice reform|Animal welfare|Scientific research|Cause prioritization | The post compares progress made by the Open Philanthropy Project in 2017 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2016-and-plans-2017 and then lays out plans for 2018. The post notes that grantmaking was sustained at the expected level of over $100 million, and that hints of impact are being seen in the areas where they would be expected, namely criminal justice reform and animal welfare. Deep independent investigations, such as https://www.openphilanthropy.org/files/Focus_Areas/Criminal_Justice_Reform/The_impacts_of_incarceration_on_crime_10.pdf by David Roodman for criminal justice reform and https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/how-will-hen-welfare-be-impacted-transition-cage-free-housing by Ajeya Cotra for animal welfare, are highlighted. Scientific research is identified as an area of strong progress, with the transformative R01 second chance program https://www.openphilanthropy.org/blog/our-second-chance-program-nih-transformative-research-applicants highlighted. The separation from GiveWell was completed in 2017. For 2018, hiring is a top priority, while the level of giving is expected to be maintained at the current level of over $100 million.
An Update to How We’re Thinking About Grant Check-Ins2018-03-09Morgan Davis Open Philanthropy ProjectOpen Philanthropy Project Miscellaneous commentaryMorgan Davis of the Open Philanthropy Project describes the process that the organization uses to check in on and learn from past grants. A check-in has three goals: updates (most frequent, and quite minor), lessons (less frequent, more important, and more wide-ranging), and impact (most rare, but really important when it occurs)
The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.2018-02-27Robert Wiblin Kieran Harris Holden Karnofsky 80,000 HoursOpen Philanthropy Project Broad donor strategyAI safety|Global catastrophic risks|Biosecurity and pandemic preparedness|Global health and development|Animal welfare|Scientific researchThis interview, with full transcript, is an episode of the 80,000 Hours podcast. In the interview, Karnofsky provides an overview of the cause prioritization and grantmaking strategy of the Open Philanthropy Project, and also notes that the Open Philanthropy Project is hiring for a number of positions
New Job Opportunities2018-02-14Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Job advertisementHolden Karnofsky links to job opening pages for generalist Research Analyst and Senior Research Analyst roles, specialized roles related to AI risk, roles such as Grants Associate, Operations Associate, and General Counsel, and the Director of Operations
Where, why and how I donated in 20172018-02-01Ben Kuhn Ben Kuhn Open Philanthropy Project Effective Altruism Funds: Meta Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Grants GiveWell GiveWell top charities EA Giving Group Periodic donation list documentationGlobal health and developmentKuhn describes his decision to allocate his donation amount ($60,000, calculated as 50% of his income for the year) between GiveWell, GiveWell top charities, and his own donor-advised fund managed by Fidelity. Kuhn also discusses the Open Philanthropy Project, EA Funds, and EA Grants, and the EA Giving Group he donated to the previous year
Update on Cause Prioritization at Open Philanthropy2018-01-26Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyCause prioritizationThis very long blog post describes how the Open Philanthropy Project currently views its trade-off between near-termist human welfare, near-termist animal welfare, and long-termism. It also discusses allocation to different causes within these broad cause types. It builds upon ideas discussed at http://www.openphilanthropy.org/blog/worldview-diversification and http://www.openphilanthropy.org/blog/good-ventures-and-giving-now-vs-later-2016-update
Fish: The Forgotten Farm Animal2018-01-18Lewis Bollard Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyAnimal welfare/factory farming/fishThe blog post, cross-posted from a newsletter published by the author, makes the case that fish welfare is neglected within the domain of factory farming, and provides suggestions for how to address that problem, including suggestions that the Open Philanthropy Project (where Bollard is the Program Officer for Farm Animal Welfare) is acting upon
A Research Funder Knocks on the NIH's Door Looking for Ideas—And Big Grants Flow2018-01-11Tate Williams Inside PhilanthropyOpen Philanthropy Project Arizona State University University of Notre Dame Rockefeller University University of California, San Francisco Third-party coverage of donor strategyScientific researchThe article discusses the Open Philanthropy Project second chance funding program for rejected applicants of the National Institutes of Health transformative R01 program
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open Philanthropy ProjectJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds: Meta Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups
Our ‘Second Chance’ Program for NIH Transformative Research Applicants2017-12-20Heather Youngs Open Philanthropy ProjectOpen Philanthropy Project Arizona State University University of Notre Dame Rockefeller University University of California, San Francisco Broad donor strategyScientific research/transformative R01The blog post describes a "second chance" program that the Open Philanthropy Project ran for rejected applications to the National Institutes of Health transformative R01 program (https://commonfund.nih.gov/tra). Four grants were made based on this, totaling $10.8 million. The grants were also covered in Nature at https://www.nature.com/articles/d41586-017-08795-0
Staff Members’ Personal Donations for Giving Season 20172017-12-18Holden Karnofsky Open Philanthropy ProjectHolden Karnofsky Alexander Berger Nick Beckstead Helen Toner Claire Zabel Lewis Bollard Ajeya Cotra Morgan Davis Michael Levine GiveWell top charities GiveWell GiveDirectly EA Giving Group Berkeley Existential Risk Initiative Effective Altruism Funds: Meta Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Sentience Institute Encompass The Humane League The Good Food Institute Mercy For Animals Compassion in World Farming USA Animal Equality Donor lottery Against Malaria Foundation GiveDirectly Periodic donation list documentationOpen Philanthropy Project staff members describe where they are donating this year, and the considerations that went into the donation decision. By policy, amounts are not disclosed. This is the first standalone blog post of this sort by the Open Philanthropy Project; in previous years, the corresponding donations were documented in the GiveWell staff members donation post
Reasoning Transparency2017-12-12Open Philanthropy ProjectOpen Philanthropy Project Reasoning supplementThe document describes what sort of document structure for discourse and research exposition is most helpful to the Open Philanthropy Project as a consumer of the work. Announced at https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/i2F6YxE14O8
Update on Investigating Neglected Goals in Biological Research2017-11-30Nick Beckstead Open Philanthropy ProjectOpen Philanthropy Project Good Ventures/not recommended by GiveWell or Open Philanthropy Project Target Malaria Broad donor strategyScientific research|Global health|Biosecurity and pandemic preparedness|AgricultureThe blog post describes the way the Open Philanthropy Project is identifying neglected goals in biological research. Previously the hope was to investigate sub-areas deeply and produce write-ups. Now, the approach is more "opportunistic": rather than do public write-ups, staff look out for good opportunities for shovel-ready or highly promising grants in the specific topics identified as having strong potential
How to end animal agriculture as soon as possible2017-09-27Robert Wiblin Lewis Bollard 80,000 HoursOpen Philanthropy Project Mercy For Animals Compassion in World Farming The Humane League The Humane Society of the United States Humane Society International The Good Food Institute Animal Equality Animal Charity Evaluators Broad donor strategyAnimal welfare/factory farmingPodcast with interview of Lewis Bollard (Farm Animal Welfare Program Officer at the Open Philanthropy Project) by Robert Wiblin of 80,000 Hours, along with transcript. The podcast covers the strategy of the Open Philanthropy Project. 80,000 Hours is an Open Philanthropy Project grant recipient and Wiblin was also on the board of Animal Charity Evaluators, an animal welfare-focused grant recipient that is discussed in the podcast
The impacts of incarceration on crime2017-09-25David Roodman Open Philanthropy ProjectOpen Philanthropy Project Reasoning supplementCriminal justice reformThe document reviews three mechanisms through which incarceration might reduce crime: deterrence, incapacitation, and aftereffects. It is also published in the form of four blog posts https://www.openphilanthropy.org/blog/reasonable-doubt-new-look-whether-prison-growth-cuts-crime https://www.openphilanthropy.org/blog/deterrence-de-minimis https://www.openphilanthropy.org/blog/incapacitation-how-much-does-putting-people-inside-prison-cut-crime-outside https://www.openphilanthropy.org/blog/aftereffects-us-evidence-says-doing-more-time-typically-leads-more-crime-after and is also available as http://files.openphilanthropy.org/files/Focus_Areas/Criminal_Justice_Reform/impacts_of_incarceration_v4.mobi (Kindle) and http://files.openphilanthropy.org/files/Focus_Areas/Criminal_Justice_Reform/impacts_of_incarceration_v4.epub (EPUB)
How Will Hen Welfare Be Impacted by the Transition to Cage-Free Housing?2017-09-15Ajeya Cotra Open Philanthropy ProjectOpen Philanthropy Project Reasoning supplementAnimal welfare/factory farming/chicken/cage-free campaignA followup to https://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms which described the original cage-free campaign funding strategy. This report compares aviaries (cage-free living environments) with cages for hens. It tempers original enthusiasm for cage-free by noting higher mortality rates, but continues to support the position that cage-free is likely better on net for hens. Described in blog post https://www.openphilanthropy.org/blog/new-report-welfare-differences-between-cage-and-cage-free-housing that expresses regret for not investigating this more thoroughly earlier, and thanks Direct Action Everywhere for highlighting the issue. See https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/cnK5yNlYHuc for the announcement
The Open Philanthropy Project AI Fellows Program2017-09-12Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyAI safetyThis announces an AI Fellows Program to support students doing Ph.D. work in AI-related fields who have an interest in AI safety. See https://www.facebook.com/vipulnaik.r/posts/10213116327718748 and https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussions
A major grant from the Open Philanthropy Project2017-09-08Malo Bourgon Machine Intelligence Research InstituteOpen Philanthropy Project Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces that it has received a three-year grant at $1.25 million per year from the Open Philanthropy Project, and links to the announcement from Open Phil at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and notes "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."
Should EAs think twice before donating to GFI? (GW, IR)2017-08-31Kevin Watkinson Effective Altruism ForumOpen Philanthropy Project The Good Food Institute Third-party case against donationAnimal welfareThe post argues against donations to The Good Food Institute, noting its limited track record as well as the huge amount of funding it is already receiving from the Open Philanthropy Project. This post is made shortly after an exchange between the post author (Kevin Watkinson) and Holden Karnofsky of the Open Philanthropy Project in http://www.openphilanthropy.org/blog/march-2017-open-thread?page=1#comment-305 (the open thread of the Open Philanthropy Project). The post also critiques Animal Charity Evaluators (ACE) for a positive assessment of GFI, and comments include a response from an ACE employee and an ACE board member (neither in an official capacity)
Relationship Disclosure Policy2017-08-30Open Philanthropy ProjectOpen Philanthropy Project Miscellaneous commentaryThe document, announced on a mailing list at https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/4-0KIw2aVmQ (2017-08-30) describes a change in relationship disclosure policy on grant pages published by the Open Philanthropy Project. Relationship disclosures would now no longer be included on grant pages. See https://www.facebook.com/vipulnaik.r/posts/10212973153219475 (cross-posted at https://github.com/vipulnaik/working-drafts/blob/master/open-phil/relationship-disclosure-policy.txt to GitHub) for a critique
Fear and Loathing at Effective Altruism Global 20172017-08-16Scott Alexander Slate Star CodexOpen Philanthropy Project GiveWell Centre for Effective Altruism Center for Effective Global Action Raising for Effective Giving 80,000 Hours Wild-Animal Suffering Research Qualia Research Institute Foundational Research Institute Miscellaneous commentaryScott Alexander describes his experience at Effective Altruism Global 2017. He describes how the effective altruism movement has both the formal-looking, "suits" people who are in charge of large amounts of money, and the "weirdos" who are toying around with ideas that seem strange and are not mainstream even within effective altruism. However, he feels that rather than being two separate groups, the two groups blend into and overlap with each other. He sees this as a sign that the effective altruism movement is composed of genuinely good people who are looking to make a difference, and explains why he thinks they are succeeding
Grants to Support Farm Animal Welfare Work in China2017-08-09Lewis Bollard Open Philanthropy ProjectOpen Philanthropy Project Compassion in World Farming WildAid World Animal Protection Royal Society for the Prevention of Cruelty to Animals Humane Slaughter Association Jeanne Marchig Centre Animal Welfare Standards Project Green Monday Griffith University Brighter Green Broad donor strategyAnimal welfare/factory farming/ChinaThe document describes the strategy of the Open Philanthropy Project to focus on farm animal welfare advocacy in China, and lists ten grants that are part of this strategy. It is announced 2017-08-09 at https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/ngrjni1iKLg on the mailing list; this comes 9.5 months after the strategy was unofficially announced by Lewis Bollard at https://www.facebook.com/groups/EffectiveAnimalActivism/permalink/656583861179155/ (2016-10-25) on Facebook
My current thoughts on MIRI’s highly reliable agent design work (GW, IR)2017-07-07Daniel Dewey Effective Altruism ForumOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin
Hi, I’m Luke Muehlhauser. AMA about Open Philanthropy’s new report on consciousness and moral patienthood2017-06-28Luke Muehlhauser Effective Altruism ForumOpen Philanthropy Project Dyrevernalliansen Albert Schweitzer Foundation for Our Contemporaries Eurogroup for Animals Reasoning supplementMoral patienthood/animal welfareLuke Muehlhauser hosts an Ask Me Anything (AMA) on the Effective Altruism Forum about his recently published report https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood (2017-06-06). The post gets 61 comments
The Open Philanthropy Project Is Now an Independent Organization2017-06-12Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Good Ventures Status changeThe Open Philanthropy Project announces that it is now a separate entity from GiveWell, and that it has incorporated as an LLC. The change was effective 2017-06-01. See https://blog.givewell.org/2017/06/12/separating-givewell-open-philanthropy-project/ for the complementary post on the GiveWell blog
2017 Report on Consciousness and Moral Patienthood2017-06-06Luke Muehlhauser Open Philanthropy ProjectOpen Philanthropy Project Dyrevernalliansen Albert Schweitzer Foundation for Our Contemporaries Eurogroup for Animals Reasoning supplementMoral patienthood/animal welfareThe writeup announced at https://www.openphilanthropy.org/blog/new-report-consciousness-and-moral-patienthood provides an overview of the findings of Luke Muehlhauser on moral patienthood -- a broad subject covering what creatures are the subject of moral concern. As described at https://www.openphilanthropy.org/blog/radical-empathy Open Phil identifies with radical empathy, extending moral concern to beings even if they are not traditionally subjects of empathy. See https://www.facebook.com/groups/effective.altruists/permalink/1426329927423360/ for a discussion of the post on the Effective Altruism Facebook group, and see http://effective-altruism.com/ea/1c3/hi_im_luke_muehlhauser_ama_about_open/ for a related AMA. The writeup influenced the Open Philanthropy Project's Farm Animal Welfare Program Officer, Lewis Bollard, to investigate and donate in the domain of fish welfare; see http://effective-altruism.com/ea/1c3/hi_im_luke_muehlhauser_ama_about_open/b8o for a comment clarifying this effect
An Open Letter to SOZE and the Open Philanthropy Project: The Right of Return Fellowship and Ethics in Funding2017-04-27Taylar Nuevelle MediumOpen Philanthropy Project The Soze Agency Third-party case against donationCriminal justice reformThe writer, a contestant for the Right of Return Fellowship, feels that the contest was rigged, and is writing to bring that to the attention of the Open Philanthropy Project, which funded the Soze Agency for this work
Soros Connected Groups Dominate Ayala’s Personal & Professional Life2017-04-19Jacob Engels Central Florida PostOpen Philanthropy Project Florida Rights Restoration Coalition Fair and Just Prosecution Third-party case against donationCriminal justice reformThe writer notes how the Open Philanthropy Project (that he mistakenly believes to be a Soros-funded group) has been attempting to influence Orange and Osceola County State Attorney Aramis Ayala, and argues for more openness. See https://www.facebook.com/vipulnaik.r/posts/10212752692588097 for a discussion
Why Are the US Corporate Cage-Free Campaigns Succeeding?2017-04-11Lewis Bollard Open Philanthropy ProjectOpen Philanthropy Project The Humane League Mercy For Animals The Humane Society of the United States Compassion in World Farming USA Review of current state of cause areaAnimal welfare/factory farming/cage-free campaignLewis Bollard, Open Philanthropy Project Program Officer for Animal Welfare, who brought passion about cage-free campaigns to the organization when he joined, provides a timeline of cage-free campaigns and an assessment of the success of these campaigns, and the role of the Open Philanthropy Project as a funder
Open Philanthropy Project non-grant funding2017-04-02Issa Rice Open Philanthropy Project Miscellaneous commentaryThe document lists some funding by the Open Philanthropy Project that is publicly disclosed (either by Open Philanthropy Project or by the donee or another reliable source) but is not part of the Open Philanthropy Project grants database, and is not included in employee salaries and benefits.
Criminal Justice Reform Strategy2017-03-27Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyCriminal justice reformExplanation of the criminal justice reform strategy of the Open Philanthropy Project in the United States, under the leadership of Chloe Cockburn. Discusses broad goals, types of organizations funded, other funders in the space, and expected impact. Announced in email https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/_aKeLKRqtQY by Devin Jacob on 2017-03-27
Our Progress in 2016 and Plans for 20172017-03-14Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyScientific research|AI safetyThe blog post compares progress made by the Open Philanthropy Project in 2016 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2015-and-plans-2016 and then lays out plans for 2017. The post notes success in scaling up grantmaking, as hoped for in last year's plan. The spinoff from GiveWell is still not completed because it turned out to be more complex than expected, but it is expected to be finished in mid-2017. Open Phil highlights the hiring of three Scientific Advisors (Chris Somerville, Heather Youngs, and Daniel Martin-Alarcon) in mid-2016, as part of its scientific research work. The organization also plans to focus more on figuring out how to decide how much money to allocate between different cause areas, with Karnofsky's worldview diversification post https://www.openphilanthropy.org/blog/worldview-diversification also highlighted. There is no plan to scale up staff or grantmaking (unlike 2016, when the focus was to scale up grantmaking, and 2015, when the focus was to scale up staff)
A conversation with Lewis Bollard, February 23, 20172017-02-23Lewis Bollard Luke Muehlhauser Open Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaAnimal welfareFarm animal welfare program officer Lewis Bollard speaks with Luke Muehlhauser, investigator into moral patienthood, on the history of the animal rights and welfare movements as well as recent developments
Daniel May: "Open Science: little room for more funding."2017-02-15Daniel May Oxford Prioritisation ProjectOxford Prioritisation Project Laura and John Arnold Foundation Open Philanthropy Project Review of current state of cause areaScientific researchThe summary states: "I consider open science as a cause area, by reviewing Open Phil’s published work, as well as some popular articles and research, and assessing the field for scale, neglectedness, and tractability. I conclude that the best giving opportunities will likely be filled by foundations such as LJAF and Open Phil, and recommend that the Oxford Prioritisation Project focusses elsewhere." Also available as a Google Doc at https://docs.google.com/document/d/13wsMAugRacu52EPZo6-7NJh4QuYayKyIbjChwU0KsVU/edit?usp=sharing and at the Effective Altruism Forum at http://effective-altruism.com/ea/17g/daniel_may_open_science_little_room_for_more/ (10 comments)
Forget Washington. Criminal Justice Funders Have Big Plans at the Local Level2017-02-08Philip Rojc Inside PhilanthropyOpen Philanthropy Project Laura and John Arnold Foundation MacArthur Foundation Third-party coverage of donor strategyCriminal justice reformThe post compares the criminal justice reform strategies followed by, on the one hand, the Arnold and MacArthur Foundation (working on the inside with government agencies and power players), on the other hand, the Open Philanthropy Project (keeping the pressure for reform from the outside). It says that the two strategies are complementary, and taken together, improve the expected amount of reform
Good Ventures and Giving Now vs. Later (2016 Update)2016-12-28Holden Karnofsky Open Philanthropy ProjectGood Ventures/GiveWell top and standout charities GiveWell top charities Against Malaria Foundation Schistosomiasis Control Initiative Deworm the World Initiative GiveDirectly Malaria Consortium Sightsavers END Fund Development Media International Food Fortification Initiative Global Alliance for Improved Nutrition Iodine Global Network Living Goods Project Healthy Children Reasoning supplementGlobal health and developmentExplanation of reasoning that led to $50 million allocation to GiveWell top charities
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20162016-12-14Holden Karnofsky Open Philanthropy ProjectJaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policyOpen Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas
Worldview Diversification2016-12-13Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyCause prioritizationThe blog post discusses the challenge of comparing donation opportunities in very different cause areas, and the importance of relying on a diversity of worldviews to inform grantmaking strategy
Catastrophic Global Risks: A Silicon Valley Funder Thinks the Unthinkable2016-11-30Sue-Lynn Moses Inside PhilanthropyOpen Philanthropy Project Center for International Security and Cooperation Third-party coverage of donor strategyBiosecurity and pandemic preparednessA discussion of the overall work done by the Open Philanthropy Project on global catastrophic risks, with a particular focus on biosecurity. Comparisons are made with the Skoll Global Threats Fund, and the historical work of the Rockefeller Foundation in disease surveillance (which it recently pulled out of) is referenced
Vast Suffering, Clear Solutions: The Logic Behind a Global Push to Help Farm Animals2016-11-17Tate Williams Inside PhilanthropyOpen Philanthropy Project Broad donor strategyAnimal welfare/factory farmingThe article reviews Open Philanthropy Project grants for animal welfare, primarily grants focused on cage-free campaigns, decided by program officer Lewis Bollard. The connection with the effective altruist movement is also highlighted
The Open Philanthropy Project just announced our latest grant to WildAid in China2016-10-25Lewis Bollard Open Philanthropy ProjectOpen Philanthropy Project Green Monday World Animal Protection Brighter Green WildAid Broad donor strategyAnimal welfare/factory farming/ChinaAnnouncement of strategy on Facebook; official document https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/grants-support-farm-animal-welfare-work-china announced at https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/ngrjni1iKLg (2017-08-09).
Grisly Undercover Video Shows Chickens Being Starved To Produce More Eggs2016-10-11Nico Pitney Huffington PostOpen Philanthropy Project Humane Society International Mercy For Animals Animal Equality People for Animals The Humane League Third-party coverage of donor strategyAnimal welfare/factory farming/chicken/cage-free campaign/internationalProvides some context for the move by the Open Philanthropy Project in mid-2016 to expand its cage-free campaign funding internationally
Brian Tomasik, Research Lead, Foundational Research Institute on October 6, 20162016-10-06Brian Tomasik Luke Muehlhauser Open Philanthropy ProjectOpen Philanthropy Project Reasoning supplementMoral patienthood/animal welfareConversation as part of research by Muehlhauser into moral patienthood, which would culminate in the writeup https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood published in 2017
Machine Intelligence Research Institute — General Support2016-09-06Open Philanthropy Project Open Philanthropy ProjectOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyOpen Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF)2016-09-06Open Philanthropy ProjectOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyReviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06)
Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering2016-08-26Michael Dickens Effective Altruism ForumOpen Philanthropy Project Unsolicited third-party suggestions for donorAnimal welfare/wild animalsMichael Dickens offers reasons that the Open Philanthropy Project should prioritize Wild Animal Suffering. He writes: "What we need is a large, committed source of funding to jump-start the cause. If the Open Philanthropy Project began funding work on wild animal suffering, it could stimulate new research efforts or small-scale interventions by offering grants. Specifically, Open Phil should probably create a new focus area for wild animal suffering and possibly hire dedicated staff. This problem has such large scale, and so many possible interventions, that it absolutely deserves to be a dedicated focus area. Open Phil might consider lumping WAS under its farm animal welfare program, but this would excessively constrain its budget and limit the amount of staff time that it could receive. Wild animal suffering is a massive problem, and easily deserves as much attention as most of Open Phil’s other focus areas."
Housing and Incarceration Memorandum2016-08-22Chelsea Tabart Open Philanthropy ProjectOpen Philanthropy Project Reasoning supplementCriminal justice reformAn internal memorandum on the intersection between housing and incarceration written by Chelsea Tabart for Chloe Cockburn (the criminal justice program officer). The memorandum would be publicly announced and linked to from https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/jQyJCLBgenc (2017-10-25)
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity2016-05-06Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Machine Intelligence Research Institute Future of Humanity Institute Review of current state of cause areaAI safetyIn this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority
Our Progress in 2015 and Plans for 20162016-04-29Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyScientific research|AI safetyThe blog post compares progress made by the Open Philanthropy Project in 2015 against plans laid out in https://www.openphilanthropy.org/blog/open-philanthropy-project-progress-2014-and-plans-2015 and then lays out plans for 2016. The post notes the following in relation to its 2015 plans: it succeeded in hiring and expanding the team, but had to scale back on its scientific research ambitions in mid-2015. For 2016, Open Phil plans to focus on scaling up its grantmaking and reducing its focus on hiring. AI safety is declared as an intended priority for 2016, with Daniel Dewey working on it full-time, and Nick Beckstead and Holden Karnofsky also devoting significant time to it. The post also notes plans to continue work on separating the Open Philanthropy Project from GiveWell
Initial Grants to Support Corporate Cage-free Reforms2016-03-31Lewis Bollard Open Philanthropy ProjectOpen Philanthropy Project The Humane League Mercy For Animals The Humane Society of the United States Broad donor strategyAnimal welfare/factory farming/chicken/cage-free campaign/internationalWritten to explain a bunch of grants already made in 2016-02 to support cage-free reforms in the United States for egg-laying chickens. The blog post had a heated comment section, potentially influencing future Open Phil communication on the subject
EPISODE 324: LEWIS BOLLARD FROM THE OPEN PHILANTHROPY PROJECT2016-03-26Lewis Bollard Jasmin Singer Mariann Sullivan Our Hen HouseOpen Philanthropy Project Broad donor strategyAnimal welfare/factory farmingLewis Bollard, who recently joined the Open Philanthropy Project and has recently recommended a bunch of grants related to corporate campaigns, describes what he is working on
Suggestions for individual donors from Open Philanthropy Project staff2015-12-23Holden Karnofsky Open Philanthropy ProjectChloe Cockburn Lewis Bollard Alexander Berger Nick Beckstead Howie Lempel Alliance for Safety and Justice Bronx Freedom Fund The Humane League The Humane Society of the United States Center for Global Development Center for Popular Democracy Ploughshares Fund Donation suggestion listCriminal justice reform|Animal welfare|Global healthOpen Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas. The post was originally published to the GiveWell blog
ALLEVIATING ANIMAL SUFFERING: A CONVERSATION WITH LEWIS BOLLARD2015-11-29Marc Gunther Nonprofit ChroniclesOpen Philanthropy Project Broad donor strategyAnimal welfare/factory farmingThe author discusses takeaways from a recent lunch with Lewis Bollard, who has recently joined the Open Philanthropy Project as the Program Officer for Farm Animal Welfare
Incoming Program Officer: Lewis Bollard2015-09-11Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyAnimal welfareOpen Philanthropy Project announces that it is hiring Lewis Bollard, poaching him from the Humane Society of the United States (HSUS) via a referral from Howie Lempel. Bollard would direct tens of millions of dollars in funding in the area over the next few years, including massive spend on corporate cage-free campaigns in the United States and internationally. The post was originally published on the GiveWell blog at https://blog.givewell.org/2015/09/11/incoming-program-officer-lewis-bollard/ and has 6 comments there
Open Philanthropy Project2015-09-05Sydney Martin Open Philanthropy Project Third-party coverage of donor strategyCriminal justice reformThe blog post describes the Open Philanthropy Project and its broad strategy of selecting a few areas through cause prioritization, studying them in depth, and granting a lot in those areas. She particularly focuses on criminal justice reform and the hiring of Chloe Cockburn
Incoming Program Officer for Criminal Justice Reform: Chloe Cockburn2015-06-16Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyCriminal justice reformThe post notes that the Open Philanthropy Project is hiring Chloe Cockburn as the Program Officer in criminal justice reform, poaching her from the American Civil Liberties Union. Cockburn would direct tens of millions of dollars in funding in criminal justice reform over the next few years. The post was originally published on the GiveWell blog at https://blog.givewell.org/2015/06/16/incoming-program-officer-for-criminal-justice-reform-chloe-cockburn/ and has 5 comments there
Co-funding Partnership with Kaitlyn Trigger and Mike Krieger2015-04-21Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Mike and Kaitlyn Krieger PartnershipThe blog post announces that Mike Krieger and Kaitlyn Krieger (then Kaitlyn Trigger) "have made a financial commitment of $750,000 over the next two years. 10% will go to GiveWell to support operations related to the Open Philanthropy Project. 90% will be allocated to grants identified and recommended through the Open Philanthropy Project process. We expect that the funds will be allocated evenly to all grants, rather than selectively allocated on the basis of individual grants." Later, Mike and Kaitlyn Krieger would create their own Future Justice Fund, focused on giving in the criminal justice reform space.
Open Philanthropy Project: Progress in 2014 and Plans for 20152015-03-12Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyGlobal catastrophic risks|Scientific research|Global health and developmentThe blog post compares progress made by the Open Philanthropy Project in 2014 against plans laid out in https://www.openphilanthropy.org/blog/givewell-labs-progress-2013-and-plans-2014 and lays out further plans for 2015. The post says that progress in the areas of U.S. policy and global catastrophic risks was substantial and matched expectations, but progress in scientific research and global health and development was less than hoped for. The plan for 2015 is to focus on growing more in the domain of scientific research and postpone work on global health and development (thus freeing up staff capacity). There is much more detail in the post
Open Philanthropy Project Update: U.S. Policy2015-03-10Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyCause prioritization|Criminal justice reform|Animal welfare|Macroeconomic stabilization policy|Migration policy|Drug policyOriginally published on the GiveWell blog at https://blog.givewell.org/2015/03/10/open-philanthropy-project-update-u-s-policy/ where comments can still be found. This is an annual update on where the Open Philanthropy Project stands on its investigation of United States policy issues. Some of the cause areas covered under what they call United States policy would later include grants outside the United States (in particular, animal welfare), while others, such as criminal justice reform and macroeconomic stabilization policy, would remain within the United States
Thoughts on the Sandler Foundation2015-02-24Holden Karnofsky Open Philanthropy ProjectSandler Foundation Open Philanthropy Project Center for American Progress ProPublica Center for Responsible Lending Washington Center for Equitable Growth Center on Budget and Policy Priorities Third-party coverage of donor strategyThis blog post originally appeared on the GiveWell blog at https://blog.givewell.org/2015/02/24/thoughts-on-the-sandler-foundation/ prior to the Open Phil blog launch. The post is part of Open Phil research into how different foundations structure their operations and giving. The post covers the Sandler Foundation, which has an unusual giving model, forgoing cause-specific, domain-expert “program officers” in favor of a small staff that opportunistically shifts between researching different giving opportunities. Successes of the Sandler Foundation were noted, including forming the Center for American Progress, ProPublica, Center for Responsible Lending, and Washington Center for Equitable Growth, and providing support to the Center on Budget and Policy Priorities. The Sandler Foundation approach was described as follows: (1) the priority placed on funding strong leadership, (2) a high level of “opportunism”: being ready to put major funding or no funding behind an idea, depending on the quality of the specific opportunity. Ultimately, the post concluded that Open Phil would probably stick with the more standard program officer model, including a mix of larger and smaller grants. Reasons given were: (a) Open Phil's policy priorities mapped less clearly to existing political platforms than the Sandler Foundation's, so it would be harder to find fully aligned leaders; (b) Open Phil sees a good deal of value in relatively small, low-confidence, low-due-diligence grants that give a person/team a chance to “get an idea off the ground,” and has made multiple such grants with plans to continue; (c) confidence in the Sandler Foundation's track record was not very high. However, Open Phil might experiment with using generalist staff in addition to program officers; the generalists would scan across issues to find and vet opportunities
Criminal justice reform2014-11-01Open Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaCriminal justice reformThe document describes the Open Philanthropy Project's understanding, as of November 2014, of the landscape for criminal justice reform in the United States. It was originally prepared for a November 2014 convening. It is superseded by later documents, in particular https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/criminal-justice-reform-strategy (2017-03-27)
Potential Global Catastrophic Risk Focus Areas2014-06-26Alexander Berger Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyAI safety|Biosecurity and pandemic preparedness|Global catastrophic risksIn this blog post originally published at https://blog.givewell.org/2014/06/26/potential-global-catastrophic-risk-focus-areas/ Alexander Berger goes over a list of seven types of global catastrophic risks (GCRs) that the Open Philanthropy Project has considered. He details three promising areas that the Open Philanthropy Project is exploring more and may make grants in: (1) Biosecurity and pandemic preparedness, (2) Geoengineering research and governance, (3) AI safety. For the AI safety section, there is a note from Executive Director Holden Karnofsky saying that he sees AI safety as a more promising area than Berger does
Potential U.S. Policy Focus Areas2014-05-29Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyCause prioritization|Criminal justice reform|Drug policy|Migration policy|Macroeconomic stabilization policy|Global health and development|Climate change|Tax policyThe blog post reviews the current understanding of the Open Philanthropy Project of various cause areas that they are considering for their grantmaking. They break up the cause areas discussed as: Windows of opportunity: outstanding tractability (i.e., "the time is right"), Ambitious longshots: outstanding importance, and Green fields: outstanding "room for more philanthropy". Other causes of interest (that do not neatly fit into one of these boxes) are also discussed
Criminal Justice Reform2014-05-01Open Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaCriminal justice reformThe document summarizes the state of investigation of the Open Philanthropy Project into criminal justice reform in a United States context, as of May 2014. The nutshell headers are: What is the state of our investigation into U.S. criminal justice reform? Why are we making criminal justice reform grants? What is the problem? What are possible interventions?
Macroeconomic policy2014-05-01Open Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaMacroeconomic stabilization policyInitial findings from a medium-depth investigation into the current state of macroeconomic stabilization policy
GiveWell Labs - Progress in 2013 and Plans for 20142014-03-05Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Broad donor strategyCause prioritizationOriginally published on the GiveWell blog at https://blog.givewell.org/2014/03/05/givewell-labs-progress-in-2013-and-plans-for-2014/ where comments can still be found. This is an annual update on the state of the Open Philanthropy Project, which, at the time, was called GiveWell Labs. It describes the areas that the Open Philanthropy Project plans to focus on, and the level of depth it plans to go into
Biosecurity2014-01-01Open Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaBiosecurity and pandemic preparednessInitial findings from a shallow investigation into the current state of biosecurity and its funding
Treatment of Animals in Industrial Agriculture2013-09-01Open Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaAnimal welfare/factory farming/United StatesInitial findings from a shallow investigation into the impact of industrial agriculture on animal welfare in the United States
Migration policy/international labor mobility2013-05-01Open Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaMigration policy/international labor mobilityInitial findings from a shallow investigation into the current state of labor mobility, with more focus on the United States
Thoughts on the Singularity Institute (SI) (GW, IR)2012-05-11Holden Karnofsky LessWrongOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute. This evaluation would be the starting point for the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI, but Karnofsky and the Open Philanthropy Project would later update in favor of AI safety in general and MIRI in particular; this evolution is described in https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit
Singularity Institute for Artificial Intelligence2011-04-30Holden Karnofsky GiveWellOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyIn this email thread on the GiveWell mailing list, Holden Karnofsky gives his views on the Singularity Institute for Artificial Intelligence (SIAI), the former name for the Machine Intelligence Research Institute (MIRI). The reply emails include a discussion of how much weight to give to, and what to learn from, the support for MIRI by Peter Thiel, a wealthy early MIRI backer. In the final email in the thread, Holden Karnofsky includes an audio recording with Jaan Tallinn, another wealthy early MIRI backer. This analysis likely influences the review https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) published by Karnofsky next year, as well as the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI
Advocacy for Improved or Increased U.S. Foreign AidOpen Philanthropy ProjectOpen Philanthropy Project Review of current state of cause areaGlobal health and developmentThe Open Philanthropy Project reviews the current state of policy advocacy for increasing development assistance from the United States government, in order to identify what a new funder (potentially, the Open Philanthropy Project) could do in the space
Open Philanthropy Project: Grants for Global SecurityInside PhilanthropyOpen Philanthropy Project Center for International Security and Cooperation Third-party coverage of donor strategyBiosecurity and pandemic preparednessAn overview by Inside Philanthropy of the Open Philanthropy Project and its work on biosecurity grants

Full list of donations in reverse chronological order (42 donations)

Donee | Amount (current USD) | Amount rank (out of 42) | Donation date | Cause area | URL | Influencer | Notes
Open Phil AI Fellowship (Earmark: Alex Tamkin|Clare Lyle|Cody Coleman|Dami Choi|Dan Hendrycks|Ethan Perez|Frances Ding|Leqi Liu|Peter Henderson|Stanislav Fort) | 2,300,000.00 | 9 | 2020-05 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class | Catherine Olsson, Daniel Dewey | Donation process: According to the grant page: "These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarships to ten machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests." In a comment reply https://forum.effectivealtruism.org/posts/DXqxeg3zj6NefR9ZQ/open-philanthropy-our-progress-in-2019-and-plans-for-2020#BCvuhRCg9egAscpyu (GW, IR) on the Effective Altruism Forum, grant investigator Catherine Olsson writes: "But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is comparable to the total amount of the 2019 fellowship grants, though it is distributed among a slightly larger pool of people.

Donor reason for donating at this time (rather than earlier or later): This is the second annual set of grants, decided through an annual application process, with the announcement made in May/June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60. Announced: 2020-05-12.
WestExec | 310,000.00 | 26 | 2020-02 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/westexec-report-on-assurance-in-machine-learning-systems | Luke Muehlhauser | Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to support the production and distribution of a report on advancing policy, process, and funding for the Department of Defense’s work on test, evaluation, verification, and validation for deep learning systems." Announced: 2020-03-20.
Machine Intelligence Research Institute | 7,703,750.00 | 3 | 2020-02 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 | Claire Zabel, Committee for Effective Altruism Support | Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" with the most similar previous grant being https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 (February 2019). Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Centre for Effective Altruism ($4,146,795), 80,000 Hours ($3,457,284), and Ought ($1,593,333).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation.
Intended funding timeframe in months: 24

Other notes: The donee describes the grant in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ (2020-04-27) along with other funding it has received ($300,000 from the Berkeley Existential Risk Initiative and $100,000 from the Long-Term Future Fund). The fact that the grant is a two-year grant is mentioned here, but not in the grant page on Open Phil's website. The page also mentions that of the total grant amount of $7.7 million, $6.24 million is coming from Open Phil's normal funders (Good Ventures) and the remaining $1.46 million is coming from Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, as part of a funding partnership https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo announced November 11, 2019. Announced: 2020-04-10.
Ought | 1,593,333.00 | 11 | 2020-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 | Committee for Effective Altruism Support | Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment and to reducing potential risks from advanced artificial intelligence."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter"

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Machine Intelligence Research Institute ($7,703,750), Centre for Effective Altruism ($4,146,795), and 80,000 Hours ($3,457,284)

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation. Announced: 2020-02-14.
RAND Corporation (Earmark: Andrew Lohn) | 30,751.00 | 38 | 2020-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rand-corporation-research-on-the-state-of-ai-assurance-methods | Luke Muehlhauser | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support exploratory research by Andrew Lohn on the state of AI assurance methods." Announced: 2020-03-19.
Berkeley Existential Risk Initiative | Amount: $705,000.00 | Amount rank: 20 | Donation date: 2019-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 1; announced: 2019-12-13.
University of California, Berkeley (Earmark: Jacob Steinhardt) | Amount: $1,111,000.00 | Amount rank: 18 | Donation date: 2019-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-research-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "This funding will allow Professor Steinhardt to fund students to work on robustness, value learning, aggregating preferences, and other areas of machine learning."

Other notes: This is the third year in which Open Phil has made a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers differ each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Jacob Steinhardt) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 1; announced: 2020-02-19.
Center for Human-Compatible AI | Amount: $200,000.00 | Amount rank: 30 | Donation date: 2019-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "CHAI plans to use these funds to support graduate student and postdoc research."

Other notes: Open Phil makes a $705,000 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 to the Berkeley Existential Risk Initiative (BERI) at the same time (November 2019) to collaborate with CHAI. Intended funding timeframe in months: 1; announced: 2019-12-20.
Ought | Amount: $1,000,000.00 | Amount rank: 19 | Donation date: 2019-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 made on the recommendation of the Committee for Effective Altruism Support suggests that Open Phil would continue to have a high opinion of the work of Ought. Intended funding timeframe in months: 1; announced: 2020-02-14.
Open Phil AI Fellowship (Earmark: Aidan Gomez|Andrew Ilyas|Julius Adebayo|Lydia T. Liu|Max Simchowitz|Pratyusha Kalluri|Siddharth Karamcheti|Smitha Milli) | Amount: $2,325,000.00 | Amount rank: 8 | Donation date: 2019-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class | Influencer: Daniel Dewey

Donation process: According to the grant page: "These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarship support to eight machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is about double that of the 2018 grant, although the number of people supported is just one more (8 instead of 7). No explicit comparison of grant amounts is made on the grant page.
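
For illustration, a minimal sketch in Python of the comparison described above; the grant totals and fellow counts are from the grant pages, while the per-fellow figures are computed here and are not stated by Open Phil:

  grant_2018, fellows_2018 = 1_135_000, 7
  grant_2019, fellows_2019 = 2_325_000, 8
  print(grant_2019 / grant_2018)    # ~2.05, i.e. "about double"
  print(grant_2018 / fellows_2018)  # ~$162,000 per fellow (2018)
  print(grant_2019 / fellows_2019)  # ~$291,000 per fellow (2019)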

Donor reason for donating at this time (rather than earlier or later): This is the second of annual sets of grants, decided through an annual application process, with the announcement made in May/June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020) confirms that the program would continue. Announced: 2019-05-17.
Machine Intelligence Research Institute | Amount: $2,652,500.00 | Amount rank: 6 | Donation date: 2019-02 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 | Influencer: Claire Zabel, Committee for Effective Altruism Support

Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.

Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter". Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount decided by the Committee for Effective Altruism Support (CEAS) https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by CEAS, made at the same time and therefore likely drawing from the same money pot, are to the Center for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The original amount of $2,112,500 is split across two years, i.e., ~$1.06 million per year. https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of the three-year $1.25 million/year support announced in October 2017, and that the total of $2.31 million represents Open Phil's full intended funding for MIRI for 2019; the amount for 2020 of ~$1.06 million is a lower bound, and Open Phil may grant more for 2020 later. In November 2019, additional funding would bring the total award amount to $2,652,500
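
A minimal arithmetic sketch in Python of the figures above (amounts taken from the grant page and MIRI's blog post; rounding matches the prose):

  original_award = 2_112_500       # split across 2019 and 2020
  per_year = original_award / 2    # 1,056,250, i.e. ~$1.06 million/year
  prior_2019_support = 1_250_000   # third year of the October 2017 grant
  total_2019 = per_year + prior_2019_support
  print(total_2019)                # 2,306,250, i.e. ~$2.31 million for 2019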

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end, (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements, so a suitable funding amount could be larger, (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 with a very similar writeup suggests that Open Phil and the Committee for Effective Altruism Support would continue to stand by the reasoning for the grant

Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-01.
Center for Security and Emerging Technology | Amount: $55,000,000.00 | Amount rank: 1 | Donation date: 2019-01 | Cause area: Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety | URL: https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology | Influencer: Luke Muehlhauser

Intended use of funds (category): Organizational general support

Intended use of funds: Grant via Georgetown University for the Center for Security and Emerging Technology (CSET), a new think tank led by Jason Matheny, formerly of IARPA, dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET plans to provide nonpartisan technical analysis and advice related to emerging technologies and their security implications to the government, key media outlets, and other stakeholders.

Donor reason for selecting the donee: Open Phil thinks that one of the key factors in whether AI is broadly beneficial for society is whether policymakers are well-informed and well-advised about the nature of AI’s potential benefits, potential risks, and how these relate to potential policy actions. As AI grows more powerful, calls for government to play a more active role are likely to increase, and government funding and regulation could affect the benefits and risks of AI. Thus: "Overall, we feel that ensuring high-quality and well-informed advice to policymakers over the long run is one of the most promising ways to increase the benefits and reduce the risks from advanced AI, and that the team put together by CSET is uniquely well-positioned to provide such advice." Despite risks and uncertainty, the grant is described as worthwhile under Open Phil's hits-based giving framework

Donor reason for donating that amount (rather than a bigger or smaller amount): The large amount over an extended period (5 years) is explained at https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant "In the case of the new Center for Security and Emerging Technology, we think it will take some time to develop expertise on key questions relevant to policymakers and want to give CSET the commitment necessary to recruit key people, so we provided a five-year grant."

Donor reason for donating at this time (rather than earlier or later): Likely determined by when the grantee planned to launch; more timing details are not discussed
Intended funding timeframe in months: 60

Other notes: Donee is entered as Center for Security and Emerging Technology rather than as Georgetown University for consistency with future grants directly to the organization once it is set up. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. The grant is discussed in the broader context of giving by the Open Philanthropy Project into global catastrophic risks and AI safety in the Inside Philanthropy article https://www.insidephilanthropy.com/home/2019/3/22/why-this-effective-altruist-funder-is-giving-millions-to-ai-security. Announced: 2019-02-28.
Berkeley Existential Risk Initiative | Amount: $250,000.00 | Amount rank: 29 | Donation date: 2019-01 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers | Influencer: Daniel Dewey

Donation process: The Open Philanthropy Project described the donation decision as being based on "conversations with various professors and students"

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for the launch of CHAI and previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration to collaborate with CHAI. Announced: 2019-03-04.
University of California, Berkeley (Earmark: Pieter Abbeel|Aviv Tamar) | Amount: $1,145,000.00 | Amount rank: 16 | Donation date: 2018-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-california-berkeley-artificial-intelligence-safety-research-2018 | Influencer: Daniel Dewey

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for machine learning researchers Pieter Abbeel and Aviv Tamar to study uses of generative models for robustness and interpretability. This funding will allow Mr. Abbeel and Mr. Tamar to fund PhD students and summer undergraduates to work on classifiers, imitation learning systems, and reinforcement learning systems."

Other notes: This is the second year in which Open Phil has made a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers differ each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Pieter Abbeel) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 1; announced: 2018-12-11.
GoalsRL (Earmark: Ashley Edwards) | Amount: $7,500.00 | Amount rank: 41 | Donation date: 2018-08 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learning | Influencer: Daniel Dewey

Discretionary grant to offset travel, registration, and other expenses associated with attending the GoalsRL 2018 workshop on goal specifications for reinforcement learning. The workshop was organized by Ashley Edwards, a recent computer science PhD candidate interested in reward learning. Announced: 2018-10-05.
University of Oxford (Earmark: Allan Dafoe) | Amount: $429,770.00 | Amount rank: 23 | Donation date: 2018-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/oxford-university-global-politics-of-ai-dafoe | Influencer: Nick Beckstead

Grant to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom. The Open Philanthropy Project recommended additional funds to support this work in 2017, while Professor Dafoe was at Yale. Continuation of grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe. Announced: 2018-07-20.
The Wilson Center | Amount: $400,000.00 | Amount rank: 25 | Donation date: 2018-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series | Influencer: Luke Muehlhauser

Grant over two years to support a series of in-depth AI policy seminars. Named for President Woodrow Wilson, the Wilson Center is a non-partisan policy forum for tackling global issues through independent research and open dialogue. Open Phil believes the seminar series could help raise the salience of AI policy in Washington, D.C. policymaking circles, and could help it identify and empower one or more influential thinkers in those circles, a key component of Open Phil's AI policy strategy. Announced: 2018-08-02.
Stanford University (Earmark: Dan Boneh|Florian Tramer) | Amount: $100,000.00 | Amount rank: 34 | Donation date: 2018-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramer | Influencer: Daniel Dewey

Grant is a "gift" to Stanford University to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer. Machine learning security probes the worst-case performance of learned models. Open Phil believes this work pushes machine learning research and AI development in the direction of greater concern for AI safety. Announced: 2018-09-07.
Machine Intelligence Research Institute | Amount: $150,000.00 | Amount rank: 33 | Donation date: 2018-06 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program | Influencer: Claire Zabel

Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."

Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-27.
AI Impacts | Amount: $100,000.00 | Amount rank: 34 | Donation date: 2018-06 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 | Influencer: Daniel Dewey

Discretionary grant via the Machine Intelligence Research Institute. AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence. Renewal of December 2016 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support. Announced: 2018-06-28.
Open Phil AI Fellowship (Earmark: Aditi Raghunathan|Chris Maddison|Felix Berkenkamp|Jon Gauthier|Michael Janner|Noam Brown|Ruth Fong) | Amount: $1,135,000.00 | Amount rank: 17 | Donation date: 2018-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-fellows-program-2018 | Influencer: Daniel Dewey

Donation process: According to the grant page: "These fellows were selected from more than 180 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research"

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to provide scholarship support to seven machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating at this time (rather than earlier or later): This is the first of annual sets of grants, decided through an annual application process.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The corresponding grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class (2019) and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020) confirm that these grants will be made annually. Announced: 2018-05-31.
Ought | Amount: $525,000.00 | Amount rank: 21 | Donation date: 2018-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Proposed_activities: "Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today." It also links to https://ought.org/approach. Also, https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Budget says: "Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher."

Donor reason for selecting the donee: The case for the grant includes: (a) Open Phil considers research on deliberation and amplification important for AI safety, (b) Paul Christiano is excited by Ought's approach, and Open Phil trusts his judgment, (c) Ought's plan appears flexible, and Open Phil thinks its founder Andreas Stuhlmüller is ready to notice and respond to any problems by adjusting his plans, (d) Open Phil has indications that Ought is well-run and has a reasonable chance of success.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but the grant is combined with another grant from Open Philanthropy Project technical advisor Paul Christiano

Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Key_questions_for_follow-up lists some questions for followup

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 suggest that Open Phil would continue to have a high opinion of Ought. Intended funding timeframe in months: 1; announced: 2018-05-30.
Stanford University | Amount: $2,539.00 | Amount rank: 42 | Donation date: 2018-04 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learning | Influencer: Daniel Dewey

Discretionary grant to support the Neural Information Processing Systems (NIPS) workshop “Machine Learning and Computer Security” (https://nips.cc/Conferences/2017/Schedule?showEvent=8775). Announced: 2018-04-19.
AI Scholarships (Earmark: Dmitrii Krasheninnikov|Michael Cohen) | Amount: $159,000.00 | Amount rank: 32 | Donation date: 2018-02 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-scholarships-2018 | Influencer: Daniel Dewey

Discretionary grant; the total is across grants to two artificial intelligence researchers, both over two years. The funding is intended to be used for the students’ tuition, fees, living expenses, and travel during their respective degree programs, and is part of an overall effort to grow the field of technical AI safety by supporting value-aligned and qualified early-career researchers. Recipients are Dmitrii Krasheninnikov, master’s degree, University of Amsterdam, and Michael Cohen, master’s degree, Australian National University. Announced: 2018-07-26.
Machine Intelligence Research Institute | Amount: $3,750,000.00 | Amount rank: 5 | Donation date: 2017-10 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 | Influencer: Nick Beckstead

Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI one year after the one-year timeframe for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support ended. The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) suggests that work on the review had been going on well before the grant renewal date

Intended use of funds (category): Organizational general support

Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."

Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains: "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach."
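
A minimal arithmetic sketch in Python of the scale-up described above (amounts from the two grant pages):

  grant_2017 = 3_750_000
  per_year_2017 = grant_2017 / 3   # $1.25 million/year over three years
  per_year_2016 = 500_000          # the August 2016 grant, over one year
  print(per_year_2017 / per_year_2016)  # 2.5, i.e. 2.5x the previous annual level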

Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36

Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."

Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Open Phil's statement that, due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach" is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
University of California, Berkeley (Earmark: Sergey Levine|Anca Dragan) | Amount: $1,450,016.00 | Amount rank: 13 | Donation date: 2017-10 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-levine-dragan | Influencer: Daniel Dewey

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "The work will be led by Professors Sergey Levine and Anca Dragan, who will each devote approximately 20% of their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems, though they have the flexibility to adjust their focus during the grant period." The project narrative is at https://www.openphilanthropy.org/files/Grants/UC_Berkeley/Levine_Dragan_Project_Narrative_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our broad goals for this funding are to encourage top researchers to work on AI alignment and safety issues in order to build a pipeline for young researchers; to support progress on technical problems; and to generally support the growth of this area of study."

Other notes: This is the first year in which Open Phil made a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It would begin an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Anca Dragan) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 1; announced: 2017-10-20.
Berkeley Existential Risk Initiative | Amount: $403,890.00 | Amount rank: 24 | Donation date: 2017-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration | Influencer: Daniel Dewey

Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai. The funding is intended to help BERI hire contractors and part-time employees to help CHAI, such as web development and coordination support, research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff. More details are in the grant proposal https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx. Announced: 2017-09-28.
Montreal Institute for Learning Algorithms (Earmark: Yoshua Bengio) | Amount: $2,400,000.00 | Amount rank: 7 | Donation date: 2017-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research | Influencer: --

Grant to support research to improve the positive long-term impact of artificial intelligence on society. The grant was made mainly due to the star power of researcher Yoshua Bengio, who influences many young ML/AI researchers. A detailed writeup is available. See also https://www.facebook.com/permalink.php?story_fbid=10110258359382500&id=13963931 for a Facebook share by David Krueger, a member of the grantee organization; the comments include some discussion about the grantee. Announced: 2017-07-19.
Yale University (Earmark: Allan Dafoe) | Amount: $299,320.00 | Amount rank: 27 | Donation date: 2017-07 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe | Influencer: Nick Beckstead

Grant to support research into the global politics of artificial intelligence, led by Assistant Professor of Political Science Allan Dafoe, who will conduct part of the research at the Future of Humanity Institute in Oxford, United Kingdom over the next year. Funds from the two gifts will support the hiring of two full-time research assistants, travel, conferences, and other expenses related to the research efforts, as well as salary, relocation, and health insurance expenses related to Professor Dafoe’s work in Oxford. Announced: 2017-09-28.
Future of Life Institute | Amount: $100,000.00 | Amount rank: 34 | Donation date: 2017-05 | Cause area: Global catastrophic risks/AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017 | Influencer: Nick Beckstead

Intended use of funds (category): Organizational general support

Intended use of funds: Grant for general support. However, the primary use of the grant will be to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant

Donor retrospective of the donation: The followup grants in 2018 and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee. Announced: 2017-09-27.
UCLA School of Law (Earmark: Edward Parson|Richard Re) | Amount: $1,536,222.00 | Amount rank: 12 | Donation date: 2017-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ucla-artificial-intelligence-governance | Influencer: Helen Toner

Grant to support work on governance related to AI risk led by Edward Parson and Richard Re. Announced: 2017-07-27.
Stanford University (Earmark: Percy Liang) | Amount: $1,337,600.00 | Amount rank: 14 | Donation date: 2017-05 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang | Influencer: Daniel Dewey

Grant awarded over four years (July 2017 to July 2021) to support research by Professor Percy Liang and three graduate students on AI safety and alignment. The funds will be split approximately evenly across the four years (i.e. roughly $320,000 to $350,000 per year). Preceded by planning grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant of $25,000. Announced: 2017-09-26.
OpenAI | Amount: $30,000,000.00 | Amount rank: 2 | Donation date: 2017-03 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support | Influencer: --

Donation process: According to Section 4 (Our process) of the grant page: "OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors."

Intended use of funds (category): Organizational general support

Intended use of funds: The funds will be used for general support of OpenAI, with $10 million per year for the next three years. The funding is also accompanied by Holden Karnofsky (Open Phil's executive director) joining the OpenAI Board of Directors. Karnofsky and one other board member will oversee OpenAI's safety and governance work

Donor reason for selecting the donee: Open Phil says that, given its interest in AI safety, it is looking to fund and closely partner with orgs that (a) are working to build transformative AI, (b) are advancing the state of the art in AI research, (c) employ top AI research talent. OpenAI and DeepMind are two such orgs, and OpenAI is particularly appealing due to "our shared values, different starting assumptions and biases, and potential for productive communication." Open Phil is looking to gain the following from a partnership: (i) Improve its understanding of AI research, (ii) Improve its ability to generically achieve goals regarding technical AI safety research, (iii) Better position Open Phil to promote its ideas and goals

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page Section 2.2 "A note on why this grant is larger than others we’ve recommended in this focus area" explains the reasons for the large grant amount (relative to other grants by Open Phil so far). Reasons listed are: (i) Hits-based giving philosophy, described at https://www.openphilanthropy.org/blog/hits-based-giving in depth, (ii) Disproportionately high importance of the cause if transformative AI is developed in the next 20 years, and likelihood that OpenAI will be very important if that happens, (iii) Benefits of working closely with OpenAI in informing Open Phil's understanding of AI safety, (iv) Field-building benefits, including promoting an AI safety culture, (v) Since OpenAI has a lot of other funding, Open Phil can grant a large amount while still not raising the concern of dominating OpenAI's funding

Donor reason for donating at this time (rather than earlier or later): No specific timing considerations are provided. It is likely that the timing of the grant is determined by when OpenAI first approached Open Phil and the time taken for the due diligence
Intended funding timeframe in months: 36

Other notes: External discussions include https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31.
Future of Humanity Institute | Amount: $1,994,000.00 | Amount rank: 10 | Donation date: 2017-03 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support | Influencer: --

Grant for general support. A related grant specifically for biosecurity work was made in 2016-09, earlier for logistical reasons. Announced: 2017-03-06.
Distill | Amount: $25,000.00 | Amount rank: 39 | Donation date: 2017-03 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/distill-prize-clarity-machine-learning-general-support | Influencer: Daniel Dewey

Grant covers $25,000 of a total $125,000 initial endowment for the Distill prize https://distill.pub/prize/ administered by the Open Philanthropy Project. Other contributors to the endowment include Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. The Open Philanthropy Project grant page says: "Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill." Announced: 2017-08-11.
Stanford University (Earmark: Percy Liang) | Amount: $25,000.00 | Amount rank: 39 | Donation date: 2017-03 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant | Influencer: Daniel Dewey

Grant awarded to Professor Percy Liang to spend significant time engaging in the Open Philanthropy Project grant application process, which led to a larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang of $1,337,600. Announced: 2017-09-26.
AI Impacts | Amount: $32,000.00 | Amount rank: 37 | Donation date: 2016-12 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support | Influencer: --

Grant for work on strategic questions related to potential risks from advanced artificial intelligence. Announced: 2017-02-02.
Electronic Frontier Foundation (Earmark: Peter Eckersley) | Amount: $199,000.00 | Amount rank: 31 | Donation date: 2016-11 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/electronic-frontier-foundation-ai-social | Influencer: --

Grant funded work by Peter Eckersley, in whom the Open Philanthropy Project had confidence. A followup conversation with Peter Eckersley and Jeremy Gillula of the grantee organization, held on 2016-05-26, is at https://www.openphilanthropy.org/sites/default/files/Peter_Eckersley_Jeremy_Gillula_05-26-16_%28public%29.pdf. Announced: 2016-12-15.
Machine Intelligence Research Institute | Amount: $500,000.00 | Amount rank: 22 | Donation date: 2016-08 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support | Influencer: --

Donation process: The grant page describes the process in Section 1 (Background and Process): "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design”. [...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The link is to https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant page continues: "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research. In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs."

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page, Section 3.1 Budget and room for more funding, says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."

Donor reason for selecting the donee: The grant page, Section 3.2 Case for the grant, gives five reasons: (1) Uncertainty about technical assessment (i.e., despite negative technical assessment, there is a chance that MIRI's work is high-potential), (2) Increasing research supply and diversity in the important-but-neglected AI safety space, (3) Potential for improvement of MIRI's research program, (4) Recognition of MIRI's early articulation of the value alignment problem, (5) Other considerations: (a) role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) "participation grant" for time spent in evaluation process, (e) grant in advance of potential need for significant help from MIRI for consulting on AI safety

Donor reason for donating that amount (rather than a bigger or smaller amount): The maximal funding that Open Phil would give MIRI would be $1.5 million per year. However, Open Phil recommended a partial amount, due to some reservations, described on the grant page, Section 2 Our impression of MIRI’s Agent Foundations research: (1) Assessment that it is not likely relevant to reducing risks from advanced AI, especially to the risks from transformative AI in the next 20 years, (2) MIRI has not made much progress toward its agenda, with internal and external reviewers describing their work as technically nontrivial, but unimpressive, and compared with what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."

Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: Section 4 Plans for follow-up says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."

Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) (not an official Open Phil statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the grant amount and the three-year commitment are both more positive for MIRI than the expectations at the time of the original grant

Other notes: The grant page links to commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
Center for Human-Compatible AI | Amount: $5,555,550.00 | Amount rank: 4 | Donation date: 2016-08 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai | Influencer: --

Donation process: The grant page section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process says: "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."

Intended use of funds (category): Organizational general support

Intended use of funds: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding says: "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year." The funding from Open Phil will be used toward this budget. An earlier section of the grant page says that the Center's research topics will include value alignment, value functions defined by partially observable and partially defined terms, the structure of human value systems, and conceptual questions including the properties of ideal value systems.

Donor reason for selecting the donee: The grant page gives these reasons: (1) "We expect the existence of the Center to make it much easier for researchers interested in exploring AI safety to discuss and learn about the topic, and potentially consider focusing their careers on it." (2) "The Center may allow researchers already focused on AI safety to dedicate more of their time to the topic and produce higher-quality research." (3) "We hope that the existence of a well-funded academic center at a major university will solidify the place of this work as part of the larger fields of machine learning and artificial intelligence." Also, counterfactual impact: "Professor Russell would not plan to announce a new Center of this kind without substantial additional funding. [...] We are not aware of other potential [substantial] funders, and we believe that having long-term support in place is likely to make it easier for Professor Russell to recruit for the Center."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is based on budget estimates in https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year."

Donor reason for donating at this time (rather than earlier or later): Timing seems to have been determined by the time it took to work out the details of the new center after Open Phil decided to make AI safety a major priority in 2016. According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 in November 2019 as well as many grants to Berkeley Existential Risk Initiative (BERI) to collaborate with the grantee suggest that Open Phil would continue to think highly of the grantee, and stand by its reasoning.

Other notes: Note that the grant recipient in the Open Phil database has been listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.
George Mason University (Earmark: Robin Hanson) | Amount: $277,435.00 | Amount rank: 28 | Donation date: 2016-06 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios | Influencer: --

Earmarked for research by Robin Hanson. Grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence for background. Original amount $264,525; increased to $277,435 through the addition of $12,910 in July 2017 to cover an increase in George Mason University’s instructional release costs (teaching buyouts). Announced: 2016-07-07.
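
A minimal check in Python that the revised total matches the stated addition:

  original = 264_525
  addition = 12_910   # July 2017 increase for instructional release costs
  assert original + addition == 277_435  # the revised grant total
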
Future of Life Institute | Amount: $1,186,000.00 | Amount rank: 15 | Donation date: 2015-08 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction | Influencer: --

Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks that describes strategy and developments prior to the grant. An update posted in 2017-04 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant discusses Howie Lempel's and Daniel Dewey's impressions of the grant, and of Open Phil's role in and effect on it. Announced: 2015-08-26.

Similarity to other donors

Sorry, we couldn't find any similar donors.