This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
|Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions)||GiveWell, Good Ventures|
|Best overview URL||https://causeprioritization.org/Open%20Philanthropy%20Project|
|Page on philosophy informing donations||https://www.openphilanthropy.org/about/vision-and-values|
|Grant application process page||https://www.openphilanthropy.org/giving/guide-for-grant-seekers|
|Regularity with which donor updates donations data||continuous updates|
|Regularity with which Donations List Website updates donations data (after donor update)||continuous updates|
|Lag with which donor updates donations data||months|
|Lag with which Donations List Website updates donations data (after donor update)||days|
|Data entry method on Donations List Website||Manual (no scripts used)|
|Org Watch page||https://orgwatch.issarice.com/?organization=Open+Philanthropy+Project|
Brief history: The Open Philanthropy Project (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began to make strong progress in 2013, and formally separated from GiveWell in June 2017.
Brief notes on broad donor philosophy and major focus areas: The Open Philanthropy Project is focused on openness in two ways: being open to ideas about cause selection, and being open in explaining what it is doing. It has endorsed "hits-based giving" and works on AI risk, biosecurity and pandemic preparedness, other global catastrophic risks, criminal justice reform (United States), animal welfare, and some other areas.
Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" where the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Phil than it does for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.
Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, but the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment, and the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). Lags are a few months between decision and grant, and a few months between grant and announcement, due to time spent on grant writeup approval.
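To make the date bookkeeping concrete, here is a minimal sketch of the three dates as a record with the two lags described above. This is an illustration only: the field and method names are hypothetical, not the site's actual schema.

from dataclasses import dataclass
from datetime import date

@dataclass
class GrantDates:
    decision_date: date      # internal grant decision date (not publicly revealed)
    donation_date: date      # formal grant commitment; the published grant date
    announcement_date: date  # grant announced to the mailing list; page made public

    def decision_to_grant_lag_days(self) -> int:
        return (self.donation_date - self.decision_date).days

    def grant_to_announcement_lag_days(self) -> int:
        return (self.announcement_date - self.donation_date).days

# Made-up example dates; per the note above, both lags are typically a few months.
g = GrantDates(date(2017, 6, 1), date(2017, 10, 1), date(2017, 12, 15))
print(g.decision_to_grant_lag_days(), g.grant_to_announcement_lag_days())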
Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york was made by Cari Tuna personally. The majority of grants are financed by the Open Philanthropy Project Fund; however, the source of financing of a grant is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicit listed financing is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often equal across years, but not always. Whether a grant is multi-year, and how its amount is distributed across years, are not always explicitly stated on the grant page; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Some grants to universities are labeled "gifts", but this is a donee classification, based on different levels of bureaucratic overhead and funder control between grants and gifts; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information.
Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy Project database are not listed on Donations List Website as being under Open Philanthropy Project. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was disclosed. All other grants publicly disclosed by the Open Philanthropy Project that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by the Open Philanthropy Project are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding
|Cause area||Count||Median||Mean||Minimum||10th percentile||20th percentile||30th percentile||40th percentile||50th percentile||60th percentile||70th percentile||80th percentile||90th percentile||Maximum|
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: Cause area classification used here may not match that used by donor for all cases.
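As a worked illustration of how the columns of the statistics table above (count, median, mean, minimum, percentiles, maximum) can be computed, here is a minimal sketch assuming the donations are available as (cause area, amount) pairs. The data rows and the nearest-rank percentile rule are illustrative assumptions, not the site's actual code or data.

from statistics import mean, median

# Illustrative rows only; the real table is computed from the full donations database.
donations = [
    ("AI safety", 30_000_000.00),
    ("AI safety", 2_112_500.00),
    ("AI safety", 500_000.00),
    ("Security", 55_000_000.00),
]

def percentile(sorted_amounts, p):
    # Nearest-rank percentile; a real implementation may interpolate instead.
    k = round(p / 100 * (len(sorted_amounts) - 1))
    return sorted_amounts[k]

by_cause = {}
for cause, amount in donations:
    by_cause.setdefault(cause, []).append(amount)

for cause, amounts in sorted(by_cause.items()):
    amounts.sort()
    print(cause, len(amounts), median(amounts), round(mean(amounts), 2), amounts[0],
          [percentile(amounts, p) for p in range(10, 100, 10)], amounts[-1])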
|Cause area||Number of donations||Number of donees||Total||2019||2018||2017||2016||2015|
|AI safety (filter this donor)||30||21||57,487,342.00||2,362,500.00||4,153,809.00||43,221,048.00||6,563,985.00||1,186,000.00|
|Security (filter this donor)||1||1||55,000,000.00||55,000,000.00||0.00||0.00||0.00||0.00|
Graph of spending by cause area and year (incremental, not cumulative)
Graph of spending by cause area and year (cumulative)
If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
|Subcause area||Number of donations||Number of donees||Total||2019||2018||2017||2016||2015|
|Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety||1||1||55,000,000.00||55,000,000.00||0.00||0.00||0.00||0.00|
Graph of spending by subcause area and year (incremental, not cumulative)
Graph of spending by subcause area and year (cumulative)
|Center for Security and Emerging Technology (filter this donor)||55,000,000.00||55,000,000.00||0.00||0.00||0.00||0.00|
|OpenAI (filter this donor)||AI safety||FB Tw WP Site TW||30,000,000.00||0.00||0.00||30,000,000.00||0.00||0.00|
|Machine Intelligence Research Institute (filter this donor)||AI safety||FB Tw WP Site CN GS TW||6,512,500.00||2,112,500.00||150,000.00||3,750,000.00||500,000.00||0.00|
|Center for Human-Compatible AI (filter this donor)||AI safety||Site TW||5,555,550.00||0.00||0.00||0.00||5,555,550.00||0.00|
|University of California, Berkeley (filter this donor)||FB Tw WP Site||2,595,016.00||0.00||1,145,000.00||1,450,016.00||0.00||0.00|
|Montreal Institute for Learning Algorithms (filter this donor)||AI capabilities/AI safety||Site||2,400,000.00||0.00||0.00||2,400,000.00||0.00||0.00|
|Future of Humanity Institute (filter this donor)||Global catastrophic risks/AI safety/Biosecurity and pandemic preparedness||FB Tw WP Site TW||1,994,000.00||0.00||0.00||1,994,000.00||0.00||0.00|
|UCLA School of Law (filter this donor)||Tw WP Site||1,536,222.00||0.00||0.00||1,536,222.00||0.00||0.00|
|Stanford University (filter this donor)||FB Tw WP Site||1,465,139.00||0.00||102,539.00||1,362,600.00||0.00||0.00|
|Future of Life Institute (filter this donor)||AI safety/other global catastrophic risks||FB Tw WP Site||1,186,000.00||0.00||0.00||0.00||0.00||1,186,000.00|
|AI Fellows Program (filter this donor)||AI safety||Site||1,135,000.00||0.00||1,135,000.00||0.00||0.00||0.00|
|Berkeley Existential Risk Initiative (filter this donor)||AI safety/other global catastrophic risks||Site TW||653,890.00||250,000.00||0.00||403,890.00||0.00||0.00|
|Ought (filter this donor)||AI safety||Site||525,000.00||0.00||525,000.00||0.00||0.00||0.00|
|University of Oxford (filter this donor)||FB Tw WP Site||429,770.00||0.00||429,770.00||0.00||0.00||0.00|
|The Wilson Center (filter this donor)||FB Tw WP Site||400,000.00||0.00||400,000.00||0.00||0.00||0.00|
|Yale University (filter this donor)||FB Tw WP Site||299,320.00||0.00||0.00||299,320.00||0.00||0.00|
|George Mason University (filter this donor)||FB WP Site||277,435.00||0.00||0.00||0.00||277,435.00||0.00|
|Electronic Frontier Foundation (filter this donor)||FB Tw WP Site||199,000.00||0.00||0.00||0.00||199,000.00||0.00|
|AI Scholarships (filter this donor)||159,000.00||0.00||159,000.00||0.00||0.00||0.00|
|AI Impacts (filter this donor)||AI safety||Site||132,000.00||0.00||100,000.00||0.00||32,000.00||0.00|
|Distill (filter this donor)||AI capabilities/AI safety||Tw Site||25,000.00||0.00||0.00||25,000.00||0.00||0.00|
|GoalsRL (filter this donor)||AI safety||Site||7,500.00||0.00||7,500.00||0.00||0.00||0.00|
Graph of spending by donee and year (incremental, not cumulative)
Graph of spending by donee and year (cumulative)
If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
|Influencer||Number of donations||Number of donees||Total||2019||2018||2017|
|Claire Zabel|Committee for Effective Altruism Support||1||1||2,112,500.00||2,112,500.00||0.00||0.00|
Graph of spending by influencer and year (incremental, not cumulative)
Graph of spending by influencer and year (cumulative)
If you hover over a cell for a given disclosure and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
|Disclosures||Number of donations||Number of donees||Total||2017||2016||2015|
|Unknown, generic, or multiple||2||2||1,686,000.00||0.00||500,000.00||1,186,000.00|
Graph of spending by disclosures and year (incremental, not cumulative)
Graph of spending by disclosures and year (cumulative)
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Document scope||Cause area||Notes|
|New grants from the Open Philanthropy Project and BERI||2019-04-01||Rob Bensinger||Machine Intelligence Research Institute||Open Philanthropy Project Berkeley Existential Risk Initiative||Machine Intelligence Research Institute||Donee periodic update||AI safety||MIRI announces two new grants: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half disbursed in 2019 and the other half in 2020, and a $600,000 grant from the Berkeley Existential Risk Initiative. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the three-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/. The ~$1.06 million for 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support, and the post notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/|
|Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security||2019-03-20||Tate Williams||Inside Philanthropy||Open Philanthropy Project||Center for Security and Emerging Technology||Third-party coverage of donor strategy||AI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Security||The article focuses on grantmaking by the Open Philanthropy Project in the areas of global catastrophic risks and security, particularly in AI safety and biosecurity and pandemic preparedness. It includes quotes from Luke Muehlhauser, Senior Research Analyst at the Open Philanthropy Project and the investigator for the $55 million grant https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology to the Center for Security and Emerging Technology (CSET). Muehlhauser was previously Executive Director at the Machine Intelligence Research Institute. It also includes a quote from Holden Karnofsky, who sees the early interest of effective altruists in AI safety as prescient. The CSET grant is discussed in the context of the Open Philanthropy Project's hits-based giving approach, as well as the interest in the policy space in better understanding of safety and governance issues related to technology and AI|
|Committee for Effective Altruism Support||2019-02-27||Open Philanthropy Project||Open Philanthropy Project||Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute||Broad donor strategy||Effective altruism|AI safety||The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of, and deeply embedded in, the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts (see the vote-averaging sketch after this table). Example grants whose size was determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019|
|Occasional update July 5 2018||2018-07-05||Katja Grace||AI Impacts||Open Philanthropy Project Anonymous||AI Impacts||Donee periodic update||AI safety||Katja Grace gives an update on the situation with AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, it mentions a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation, as well as team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn|
|The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks.||2018-02-27||Robert Wiblin Kieran Harris Holden Karnofsky||80,000 Hours||Open Philanthropy Project||Broad donor strategy||AI safety|Global catastrophic risks|Biosecurity and pandemic preparedness|Global health and development|Animal welfare|Scientific research||This interview, with full transcript, is an episode of the 80,000 Hours podcast. In the interview, Karnofsky provides an overview of the cause prioritization and grantmaking strategy of the Open Philanthropy Project, and also notes that the Open Philanthropy Project is hiring for a number of positions|
|Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017||2017-12-21||Holden Karnofsky||Open Philanthropy Project||Jaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey||Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters||Donation suggestion list||Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform||Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups|
|The Open Philanthropy Project AI Fellows Program||2017-09-12||Open Philanthropy Project||Open Philanthropy Project||Broad donor strategy||AI safety||This announces an AI Fellows Program to support students doing Ph.D. work in AI-related fields who have an interest in AI safety. See https://www.facebook.com/vipulnaik.r/posts/10213116327718748 and https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussions|
|My current thoughts on MIRI’s highly reliable agent design work||2017-07-07||Daniel Dewey||Effective Altruism Forum||Open Philanthropy Project||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||The post discusses Dewey's thoughts on MIRI's work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and AI risk in general; the post reflects his own opinions, which could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin|
|Our Progress in 2016 and Plans for 2017||2017-03-14||Holden Karnofsky||Open Philanthropy Project||Open Philanthropy Project||Broad donor strategy||Scientific research|AI safety||The blog post compares progress made by the Open Philanthropy Project in 2016 against plans laid out in https://www.openphilanthropy.org/blog/our-progress-2015-and-plans-2016 and then lays out plans for 2017. The post notes success in scaling up grantmaking, as hoped for in last year's plan. The spinoff from GiveWell is still not completed because it turned out to be more complex than expected, but it is expected to be finished in mid-2017. Open Phil highlights the hiring of three Scientific Advisors (Chris Somerville, Heather Youngs, and Daniel Martin-Alarcon) in mid-2016, as part of its scientific research work. The organization also plans to focus more on figuring out how to decide how much money to allocate between different cause areas, with Karnofsky's worldview diversification post https://www.openphilanthropy.org/blog/worldview-diversification also highlighted. There is no plan to scale up staff or grantmaking (unlike 2016, when the focus was to scale up grantmaking, and 2015, when the focus was to scale up hiring)|
|Suggestions for Individual Donors from Open Philanthropy Project Staff - 2016||2016-12-14||Holden Karnofsky||Open Philanthropy Project||Jaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead||Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund||Donation suggestion list||Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policy||Open Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas|
|Machine Intelligence Research Institute — General Support||2016-09-06||Open Philanthropy Project||Open Philanthropy Project||Open Philanthropy Project||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||Open Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email|
|Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF)||2016-09-06||Open Philanthropy Project||Open Philanthropy Project||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||Reviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06)|
|Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity||2016-05-06||Holden Karnofsky||Open Philanthropy Project||Open Philanthropy Project||Review of current state of cause area||AI safety||In this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why it is making funding in the area a priority|
|Our Progress in 2015 and Plans for 2016||2016-04-29||Holden Karnofsky||Open Philanthropy Project||Open Philanthropy Project||Broad donor strategy||Scientific research|AI safety||The blog post compares progress made by the Open Philanthropy Project in 2015 against plans laid out in https://www.openphilanthropy.org/blog/open-philanthropy-project-progress-2014-and-plans-2015 and then lays out plans for 2016. The post notes the following in relation to its 2015 plans: it succeeded in hiring and expanding the team, but had to scale back on its scientific research ambitions in mid-2015. For 2016, Open Phil plans to focus on scaling up its grantmaking and reducing its focus on hiring. AI safety is declared as an intended priority for 2016, with Daniel Dewey working on it full-time, and Nick Beckstead and Holden Karnofsky also devoting significant time to it. The post also notes plans to continue work on separating the Open Philanthropy Project from GiveWell|
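The Committee for Effective Altruism Support entry above describes a simple vote-averaging mechanism: each member allocates a fixed budget across grantees (optionally reserving part for later giving), and the final grant amounts are the averages of the members' votes. Here is a minimal sketch of that mechanism; the budget, member names, and allocations are made up for illustration, since the actual votes are not public.

BUDGET = 10_000_000  # hypothetical total budget in USD; actual figures are not public

# Each vote allocates the full budget across grantees plus an optional reserve.
votes = {
    "member_1": {"Grantee A": 5_000_000, "Grantee B": 3_000_000, "(reserve)": 2_000_000},
    "member_2": {"Grantee A": 4_000_000, "Grantee B": 4_000_000, "(reserve)": 2_000_000},
}
assert all(sum(vote.values()) == BUDGET for vote in votes.values())

grantees = {g for vote in votes.values() for g in vote}
final = {g: sum(vote.get(g, 0) for vote in votes.values()) / len(votes) for g in grantees}
print(final)  # average per grantee, e.g. Grantee A -> 4,500,000.0; reserve -> 2,000,000.0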
|Donee||Amount (current USD)||Amount rank (out of 31)||Cause area||URL||Influencer||Notes|
|Machine Intelligence Research Institute||2,112,500.00||6||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019||Claire Zabel Committee for Effective Altruism Support||Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support
Intended use of funds (category): Organizational general support
Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.
Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter." Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by the Committee for Effective Altruism Support, made at the same time and therefore likely drawing from the same money pot, are to the Center for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The amount of $2,112,500 is split across two years, and therefore ~$1.06 million per year. https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of three-year $1.25 million/year support announced in October 2017, and the total $2.31 million represents Open Phil's full intended funding for MIRI for 2019, but the amount for 2020 of ~$1.06 million is a lower bound, and Open Phil may grant more for 2020 later.
Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end, (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements, so a suitable funding amount could be larger, (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round
Intended funding timeframe in months: 24
Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant
Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-02.
|Berkeley Existential Risk Initiative||250,000.00||21||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers||Daniel Dewey||Grant to BERI to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible AI (CHAI). Follows previous support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for the launch of CHAI and previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration to collaborate with CHAI. Announced: 2019-03-05.|
|Center for Security and Emerging Technology||55,000,000.00||1||Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety||https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology||Luke Muehlhauser||Grant to Georgetown University to establish the Center for Security and Emerging Technology (CSET), a new think tank dedicated to policy analysis at the intersection of national and international security and emerging technologies. Donee is entered as Center for Security and Emerging Technology rather than as Georgetown University for consistency with future grants directly to the organization once it is set up. CSET is led by Jason Matheny, former Assistant Director of National Intelligence and Director of Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community’s research organization. Grant money to be given over five years. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. The grant is discussed in the broader context of giving by the Open Philanthropy Project into global catastrophic risks and AI safety in the Inside Philanthropy article https://www.insidephilanthropy.com/home/2019/3/22/why-this-effective-altruist-funder-is-giving-millions-to-ai-security. Announced: 2019-02-28.|
|University of California, Berkeley (Earmark: Pieter Abbeel|Aviv Tamar)||1,145,000.00||12||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-california-berkeley-artificial-intelligence-safety-research-2018||Daniel Dewey||Total across two grants over three years for machine learning researchers Pieter Abbeel and Aviv Tamar to study uses of generative models for robustness and interpretability. This funding will allow Abbeel and Tamar to fund PhD students and summer undergraduates to work on classifiers, imitation learning systems, and reinforcement learning systems. Announced: 2018-12-12.|
|GoalsRL (Earmark: Ashley Edwards)||7,500.00||30||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learning||Daniel Dewey||Discretionary grant to offset travel, registration, and other expenses associated with attending the GoalsRL 2018 workshop on goal specifications for reinforcement learning. The workshop was organized by Ashley Edwards, a recent computer science PhD candidate interested in reward learning. Announced: 2018-10-05.|
|Stanford University (Earmark: Dan Boneh|Florian Tramer)||100,000.00||25||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramer||Daniel Dewey||Grant is a "gift" to Stanford University to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer. Machine learning security probes the worst-case performance of learned models; Open Phil believes supporting this work pushes machine learning research and AI development in the direction of greater concern for AI safety. Announced: 2018-09-07.|
|The Wilson Center||400,000.00||18||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series||Luke Muehlhauser||Grant over two years to support a series of in-depth AI policy seminars. Named for President Woodrow Wilson, the Wilson Center is a non-partisan policy forum for tackling global issues through independent research and open dialogue. Open Phil believes the seminar series could help raise the salience of AI policy in Washington, D.C. policymaking circles, and could help it identify and empower one or more influential thinkers in those circles, a key component of the Open Phil AI policy strategy. Announced: 2018-08-02.|
|University of Oxford (Earmark: Allan Dafoe)||429,770.00||16||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/oxford-university-global-politics-of-ai-dafoe||Nick Beckstead||Grant to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom. The Open Philanthropy Project recommended additional funds to support this work in 2017, while Professor Dafoe was at Yale. Continuation of grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe. Announced: 2018-07-20.|
|Machine Intelligence Research Institute||150,000.00||24||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program||Claire Zabel||Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more
Intended use of funds (category): Direct project expenses
Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."
Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-28.
|AI Impacts||100,000.00||25||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018||Daniel Dewey||Discretionary grant via the Machine Intelligence Research Institute. AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence. Renewal of December 2016 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support. Announced: 2018-06-28.|
|AI Fellows Program (Earmark: Aditi Raghunathan|Chris Maddison|Felix Berkenkamp|Jon Gauthier|Michael Janner|Noam Brown|Ruth Fong)||1,135,000.00||13||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-fellows-program-2018||Daniel Dewey||Grants to 7 AI Fellows pursuing research relevant to AI risk. More details at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence. Announced: 2018-06-01.|
|Ought||525,000.00||14||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support||Daniel Dewey||Grantee has a mission to “leverage machine learning to help people think.” Ought plans to conduct research on deliberation and amplification, a concept Open Phil considers relevant to AI alignment. The funding, combined with another grant from Open Philanthropy Project technical advisor Paul Christiano, is intended to allow Ought to hire up to three new staff members and provide one to three years of support for Ought’s work, depending on how quickly it hires. Announced: 2018-05-31.|
|Stanford University||2,539.00||31||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learning||Daniel Dewey||Discretionary grant to support the Neural Information Processing Systems (NIPS) workshop “Machine Learning and Computer Security” (https://nips.cc/Conferences/2017/Schedule?showEvent=8775). Announced: 2018-04-19.|
|AI Scholarships (Earmark: Dmitrii Krasheninnikov|Michael Cohen)||159,000.00||23||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-scholarships-2018||Daniel Dewey||Discretionary grant; total across grants to two artificial intelligence researchers, both over two years. The funding is intended to be used for the students’ tuition, fees, living expenses, and travel during their respective degree programs, and is part of an overall effort to grow the field of technical AI safety by supporting value-aligned and qualified early-career researchers. Recipients are Dmitrii Krasheninnikov, master’s degree, University of Amsterdam, and Michael Cohen, master’s degree, Australian National University. Announced: 2018-07-26.|
|Machine Intelligence Research Institute||3,750,000.00||4||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017||Nick Beckstead||Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI one year after the one-year timeframe for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support ended. The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design suggests that work on the review had been going on well before the grant renewal date
Intended use of funds (category): Organizational general expenses
Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."
Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page
Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach."
Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36
Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."
Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Open Phil's statement that, due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach" is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
|University of California, Berkeley (Earmark: Sergey Levine|Anca Dragan)||1,450,016.00||9||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-levine-dragan||Daniel Dewey||Pair of grants to support AI safety work led by Professors Sergey Levine and Anca Dragan, who would each devote half their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems. Announced: 2017-10-20.|
|Berkeley Existential Risk Initiative||403,890.00||17||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration||Daniel Dewey||Grant to support core functions of grantee, and to help them provide contract workers for the Center for Human-Compatible AI (CHAI) housed at the University of California, Berkeley, also an Open Phil grantee (see https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for info on that grant). Open Phil also sees this as a promising model for providing assistance to other BERI clients in the future. Announced: 2017-09-28.|
|Yale University (Earmark: Allan Dafoe)||299,320.00||19||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe||Nick Beckstead||Grant to support research into the global politics of artificial intelligence, led by Assistant Professor of Political Science Allan Dafoe, who will conduct part of the research at the Future of Humanity Institute in Oxford, United Kingdom, over the next year. Funds from the two gifts will support the hiring of two full-time research assistants, travel, conferences, and other expenses related to the research efforts, as well as salary, relocation, and health insurance expenses related to Professor Dafoe’s work in Oxford. Announced: 2017-09-28.|
|Montreal Institute for Learning Algorithms (Earmark: Yoshua Bengio)||2,400,000.00||5||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/montreal-institute-learning-algorithms-ai-safety-research||--||Grant to support research to improve the positive long-term impact of artificial intelligence on society. The grant is mainly due to the star power of researcher Yoshua Bengio, who influences many young ML/AI researchers. A detailed writeup is available. See also https://www.facebook.com/permalink.php?story_fbid=10110258359382500&id=13963931 for a Facebook share by David Krueger, a member of the grantee organization; the comments include some discussion about the grantee. Announced: 2017-07-19.|
|Stanford University (Earmark: Percy Liang)||1,337,600.00||10||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang||Daniel Dewey||Grant awarded over four years (July 2017 to July 2021) to support research by Professor Percy Liang and three graduate students on AI safety and alignment. The funds will be split approximately evenly across the four years (i.e., roughly $320,000 to $350,000 per year). Preceded by planning grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant of $25,000. Announced: 2017-09-26.|
|UCLA School of Law (Earmark: Edward Parson|Richard Re)||1,536,222.00||8||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ucla-artificial-intelligence-governance||Helen Toner||Grant to support work on governance related to AI risk led by Edward Parson and Richard Re. Announced: 2017-07-27.|
|Stanford University (Earmark: Percy Liang)||25,000.00||28||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant||Daniel Dewey||Grant awarded to Professor Percy Liang to spend significant time engaging in the Open Philanthropy Project grant application process, which led to a larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang of $1,337,600. Announced: 2017-09-26.|
|Distill||25,000.00||28||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/distill-prize-clarity-machine-learning-general-support||Daniel Dewey||Grant covers $25,000 of a total $125,000 initial endowment for the Distill prize https://distill.pub/prize/ administered by the Open Philanthropy Project. Other contributors to the endowment include Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. The Open Philanthropy Project grant page says: "Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill." Announced: 2017-08-11.|
|OpenAI||30,000,000.00||2||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support||--||Grant for general support; $10 million per year over three years. The grant write-up explains the reasons for this grant being larger than other grants in the cause area. External discussions include https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in the Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31.|
|Future of Humanity Institute||1,994,000.00||7||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support||--||Grant for general support. A related grant specifically for biosecurity work was made in 2016-09, earlier for logistical reasons. Announced: 2017-03-06.|
|AI Impacts||32,000.00||27||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support||--||Grant for work on strategic questions related to potential risks from advanced artificial intelligence. Announced: 2017-02-02.|
|Electronic Frontier Foundation (Earmark: Peter Eckersley)||199,000.00||22||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/electronic-frontier-foundation-ai-social||--||Grant funded work by Peter Eckersley, in whom the Open Philanthropy Project had confidence. See https://www.openphilanthropy.org/sites/default/files/Peter_Eckersley_Jeremy_Gillula_05-26-16_%28public%29.pdf for notes from a 2016-05-26 conversation with Peter Eckersley and Jeremy Gillula of the grantee organization. Announced: 2016-12-15.|
|Machine Intelligence Research Institute||500,000.00||15||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support||--||Donation process: The grant page describes the process in Section 1 (Background and Process): "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design”. [...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The link is to https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf The grant page continues: "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research. In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs."
Intended use of funds (category): Organizational general expenses
Intended use of funds: The grant page, Section 3.1 Budget and room for more funding, says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."
Donor reason for selecting the donee: The grant page, Section 3.2 Case for the grant, gives five reasons: (1) Uncertainty about technical assessment (i.e., despite negative technical assessment, there is a chance that MIRI's work is high-potential), (2) Increasing research supply and diversity in the important-but-neglected AI safety space, (3) Potential for improvement of MIRI's research program, (4) Recognition of MIRI's early articulation of the value alignment problem, (5) Other considerations: (a) role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) "participation grant" for time spent in evaluation process, (e) grant in advance of potential need for significant help from MIRI for consulting on AI safety
Donor reason for donating that amount (rather than a bigger or smaller amount): The maximal funding that Open Phil would give MIRI would be $1.5 million per year. However, Open Phil recommended a partial amount, due to some reservations, described on the grant page, Section 2 Our impression of MIRI’s Agent Foundations research: (1) Assessment that it is not likely relevant to reducing risks from advanced AI, especially to the risks from transformative AI in the next 20 years, (2) MIRI has not made much progress toward its agenda, with internal and external reviewers describing their work as technically nontrivial, but unimpressive, and compared with what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."
Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed
Intended funding timeframe in months: 12
Donor thoughts on making further donations to the donee: Section 4 Plans for follow-up says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."
Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (not an official Open Phil statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the grant amount and the three-year commitment are both more positive for MIRI than the expectations at the time of the original grant
Other notes: The grant page links to commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
|Center for Human-Compatible AI||5,555,550.00||3||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai||--||Grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence and https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity and states a 50% chance that two years from the grant date, the Center will be spending at least $2 million a year, and will be considered by one or more of our relevant technical advisors to have a reasonably good reputation in the field. Note that the grant recipient in the Open Phil database has been listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.|
|George Mason University (Earmark: Robin Hanson)||277,435.00||20||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios||--||Earmarked for research by Robin Hanson. Grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence for background. Original amount $264,525; increased to $277,435 through the addition of $12,910 in July 2017 to cover an increase in George Mason University’s instructional release costs (teaching buyouts). Announced: 2016-07-07.|
|Future of Life Institute||1,186,000.00||11||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction||--||Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks that describes strategy and developments prior to the grant. An update on the grant was posted in 2017-04 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant discussing Howie Lempel's and Daniel Dewey's impressions of the grant and of Open Phil's role and effect. Announced: 2015-08-26.|