This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
|Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions)||GiveWell|Good Ventures|
|Best overview URL||https://causeprioritization.org/Open%20Philanthropy%20Project|
|Page on philosophy informing donations||https://www.openphilanthropy.org/about/vision-and-values|
|Grant application process page||https://www.openphilanthropy.org/giving/guide-for-grant-seekers|
|Regularity with which donor updates donations data||continuous updates|
|Regularity with which Donations List Website updates donations data (after donor update)||continuous updates|
|Lag with which donor updates donations data||months|
|Lag with which Donations List Website updates donations data (after donor update)||days|
|Data entry method on Donations List Website||Manual (no scripts used)|
|Org Watch page||https://orgwatch.issarice.com/?organization=Open+Philanthropy|
Brief history: Open Philanthropy (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began to make strong progress in 2013, and formally separated from GiveWell as the "Open Philanthropy Project" in June 2017. In 2020, it started going by "Open Philanthropy", dropping "Project" from its name.
Brief notes on broad donor philosophy and major focus areas: Open Philanthropy is focused on openness in two ways: open to ideas about cause selection, and open in explaining what it is doing. It has endorsed "hits-based giving" and works on AI risk, biosecurity and pandemic preparedness, other global catastrophic risks, criminal justice reform (United States), animal welfare, and some other areas.
Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for more questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" where the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Phil than it does for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.
Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, but the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but are approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment, and is the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). Lags are typically a few months between decision and grant, and a few months between grant and announcement, due to time spent on grant writeup approval.
Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. The majority of grants are financed through the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york was made by Cari Tuna personally. However, the source of financing of a grant is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed financing is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. Whether a grant is multi-year, and how the grant amount is distributed across years, are not always explicitly stated on the grant page. Some grants to universities are labeled "gifts"; this is a donee classification, based on the different levels of bureaucratic overhead and funder control associated with grants versus gifts. The comment linked above discusses these points as well.
Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy database are not listed on Donations List Website as being under Open Philanthropy. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was included. All other grants publicly disclosed by Open Philanthropy that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by Open Philanthropy are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding
Full donor page for donor Open Philanthropy
|Donors list page||https://intelligence.org/topdonors/|
|Transparency and financials page||https://intelligence.org/transparency/|
|Donation case page||https://forum.effectivealtruism.org/posts/EKfjh5W7PkykLM7eG/miri-update-and-fundraising-case-1|
|Open Philanthropy Project grant review||http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support|
|Charity Navigator page||https://www.charitynavigator.org/index.cfm?bay=search.profile&ein=582565917|
|Timelines wiki page||https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute|
|Org Watch page||https://orgwatch.issarice.com/?organization=Machine+Intelligence+Research+Institute|
|Key people||Eliezer Yudkowsky|Nate Soares|Luke Muehlhauser|
This entity is also a donor.
Full donee page for donee Machine Intelligence Research Institute
|Notes||Open Philanthropy Project (Open Phil) staff have interacted with MIRI staff since before Open Phil existed. The first published conversation between GiveWell staff (Holden Karnofsky) and MIRI staff dates back to February 2011 (six months before GiveWell Labs was announced), in which Karnofsky decided against recommending MIRI due to lack of track record and room for more funding. In May 2012, Karnofsky published “Thoughts on the Singularity Institute (SI)” on LessWrong; the blog post outlines Karnofsky’s reasons for not recommending MIRI. However, over the years Karnofsky in particular has come to agree with more of MIRI’s views, as described in his post “Three Key Issues I’ve Changed My Mind About” (published September 2016). Since 2016, Open Phil has given MIRI a steady stream of grants, despite some significant reservations about its output. Since 2019, grant amounts from Open Phil to MIRI have been determined by the Committee for Effective Altruism Support.|
|Cause area||Count||Median||Mean||Minimum||10th percentile||20th percentile||30th percentile||40th percentile||50th percentile||60th percentile||70th percentile||80th percentile||90th percentile||Maximum|
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: Cause area classification used here may not match that used by donor for all cases.
|Cause area||Number of donations||Total||2020||2019||2018||2017||2016|
|AI safety (filter this donor)||5||14,756,250.00||7,703,750.00||2,652,500.00||150,000.00||3,750,000.00||500,000.00|
Graph of spending by cause area and year (incremental, not cumulative)
Graph of spending by cause area and year (cumulative)
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Affected influencers||Document scope||Cause area||Notes|
|2020 AI Alignment Literature Review and Charity Comparison (GW, IR)||2020-12-21||Larks||Effective Altruism Forum||Larks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund||Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours||Survival and Flourishing Fund||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.|
|2019 AI Alignment Literature Review and Charity Comparison (GW, IR)||2019-12-19||Larks||Effective Altruism Forum||Larks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund||Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse||Survival and Flourishing Fund||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.|
|Suggestions for Individual Donors from Open Philanthropy Staff - 2019||2019-12-18||Holden Karnofsky||Open Philanthropy||Chloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel||National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought||Donation suggestion list||Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safety||Continuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support", i.e., a list of organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents their views.|
|Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR)||2019-09-10||Ryan Carey||Effective Altruism Forum||Founders Pledge Open Philanthropy||OpenAI Machine Intelligence Research Institute||Broad donor strategy||AI safety|Global catastrophic risks|Scientific research|Politics||Ryan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low-to-mid-prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals.|
|New grants from the Open Philanthropy Project and BERI||2019-04-01||Rob Bensinger||Machine Intelligence Research Institute||Open Philanthropy Berkeley Existential Risk Initiative||Machine Intelligence Research Institute||Donee periodic update||AI safety||MIRI announces two grants to it, including a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/ The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/
|Committee for Effective Altruism Support||2019-02-27||Open Philanthropy||Open Philanthropy||Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute||Broad donor strategy||Effective altruism|AI safety||The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose sizes were determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019|
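The vote-averaging step described in the committee process above can be sketched as follows. This is a minimal illustration only, not Open Phil's actual code; the organization names and dollar figures are hypothetical, and the real process may differ in details (e.g., how missing votes or the "save for later" option are handled).

```python
# Minimal sketch of the vote-averaging allocation described above.
# Each committee member submits a proposed allocation of a fixed budget
# across grantees; a "save" entry defers part of the budget for later giving.
# The final amount for each grantee is the average of the members' proposals.

def average_votes(votes):
    """votes: list of dicts mapping grantee name -> proposed amount (USD)."""
    grantees = {g for vote in votes for g in vote}
    n = len(votes)
    # A member who omits a grantee is treated as proposing $0 for it.
    return {g: sum(vote.get(g, 0) for vote in votes) / n for g in grantees}

# Hypothetical example: three members allocating a $10 million budget.
votes = [
    {"MIRI": 3_000_000, "CEA": 5_000_000, "save": 2_000_000},
    {"MIRI": 2_000_000, "CEA": 6_000_000, "save": 2_000_000},
    {"MIRI": 2_500_000, "CEA": 5_500_000, "save": 2_000_000},
]
allocation = average_votes(votes)
```

With these hypothetical votes, MIRI's final amount is the average of $3M, $2M, and $2.5M, i.e., $2.5M, and $2M of the budget is saved for later giving.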
|Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017||2017-12-21||Holden Karnofsky||Open Philanthropy||Jaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey||Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters||Donation suggestion list||Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform||Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.|
|A major grant from the Open Philanthropy Project||2017-09-08||Malo Bourgon||Machine Intelligence Research Institute||Open Philanthropy||Machine Intelligence Research Institute||Donee periodic update||AI safety||MIRI announces that it has received a three-year grant at $1.25 million per year from the Open Philanthropy Project, and links to the announcement from Open Phil at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and notes "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."|
|My current thoughts on MIRI’s highly reliable agent design work (GW, IR)||2017-07-07||Daniel Dewey||Effective Altruism Forum||Open Philanthropy||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||Post discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin.|
|Suggestions for Individual Donors from Open Philanthropy Project Staff - 2016||2016-12-14||Holden Karnofsky||Open Philanthropy||Jaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead||Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund||Donation suggestion list||Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policy||Open Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas.|
|Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF)||2016-09-06||Open Philanthropy||Open Philanthropy||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||Reviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06).|
|Machine Intelligence Research Institute — General Support||2016-09-06||Open Philanthropy||Open Philanthropy||Open Philanthropy||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||Open Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email.|
|Some Key Ways in Which I've Changed My Mind Over the Last Several Years||2016-09-06||Holden Karnofsky||Open Philanthropy||Machine Intelligence Research Institute Future of Humanity Institute||Reasoning supplement||AI safety||In this 16-page Google Doc, Holden Karnofsky, Executive Director of the Open Philanthropy Project, lists three issues he has changed his mind about: (1) AI safety (he considers it more important now), (2) effective altruism community (he takes it more seriously now), and (3) general properties of promising ideas and interventions (he considers feedback loops less necessary than he used to, and finding promising ideas through abstract reasoning more promising). The document is linked to and summarized in the blog post https://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about|
|Here are the biggest things I got wrong in my attempts at effective altruism over the last ~3 years.||2016-05-24||Buck Shlegeris||Buck Shlegeris Open Philanthropy||Vegan Outreach Machine Intelligence Research Institute||Broad donor strategy||Global health|Animal welfare|AI safety||Buck Shlegeris, reflecting on his past three years as an effective altruist, identifies three mistakes he made: (1) "I thought leafleting about factory farming was more effective than GiveWell top charities. [...] I probably made this mistake because of emotional bias. I was frustrated by people who advocated for global poverty charities for dumb reasons. [...] I thought that if they really had that belief, they should either save their money just in case we found a great intervention for animals in the future, or donate it to the people who were trying to find effective animal right interventions. I think that this latter argument was correct, but I didn't make it exclusively." (2) "In 2014 and early 2015, I didn't pay as much attention to OpenPhil as I should have. [...] Being wrong about OpenPhil's values is forgivable, but what was really dumb is that I didn't realize how incredibly important it was to my life plan that I understand OpenPhil's values." (3) "I wish I'd thought seriously about donating to MIRI sooner. [...] Like my error #2, this is an example of failing to realize that when there's an unknown which is extremely important to my plans but I'm very unsure about it and haven't really seriously thought about it, I should probably try to learn more about it."|
|Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity||2016-05-06||Holden Karnofsky||Open Philanthropy||Open Philanthropy||Machine Intelligence Research Institute Future of Humanity Institute||Review of current state of cause area||AI safety||In this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority.|
|Thoughts on the Singularity Institute (SI) (GW, IR)||2012-05-11||Holden Karnofsky||LessWrong||Open Philanthropy||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||Post discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute. This evaluation would be the starting point for the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI, but Karnofsky and the Open Philanthropy Project would later update in favor of AI safety in general and MIRI in particular; this evolution is described in https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit|
|Singularity Institute for Artificial Intelligence||2011-04-30||Holden Karnofsky||GiveWell||Open Philanthropy||Machine Intelligence Research Institute||Evaluator review of donee||AI safety||In this email thread on the GiveWell mailing list, Holden Karnofsky gives his views on the Singularity Institute for Artificial Intelligence (SIAI), the former name for the Machine Intelligence Research Institute (MIRI). The reply emails include a discussion of how much weight to give to, and what to learn from, the support for MIRI by Peter Thiel, a wealthy early MIRI backer. In the final email in the thread, Holden Karnofsky includes an audio recording with Jaan Tallinn, another wealthy early MIRI backer. This analysis likely influences the review https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) published by Karnofsky the next year, as well as the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI.|
Graph of all donations (with known year of donation), showing the timeframe of donations
|Amount (current USD)||Amount rank (out of 5)||Donation date||Cause area||URL||Influencer||Notes|
|7,703,750.00||1||AI safety/technical research||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020||Claire Zabel Committee for Effective Altruism Support||Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support
Intended use of funds (category): Organizational general support
Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety
Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" with the most similar previous grant being https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 (February 2019). Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Centre for Effective Altruism ($4,146,795), 80,000 Hours ($3,457,284), and Ought ($1,593,333).
Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation.
Intended funding timeframe in months: 24
Other notes: The donee describes the grant in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ (2020-04-27) along with other funding it has received ($300,000 from the Berkeley Existential Risk Initiative and $100,000 from the Long-Term Future Fund). The fact that the grant is a two-year grant is mentioned in the blog post, but not on the grant page on Open Phil's website. The blog post also mentions that of the total grant amount of $7.7 million, $6.24 million comes from Open Phil's normal funder (Good Ventures) and the remaining $1.46 million comes from Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, as part of a funding partnership https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo announced November 11, 2019. Announced: 2020-04-10.
|2,652,500.00||3||AI safety/technical research||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019||Claire Zabel Committee for Effective Altruism Support||Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support
Intended use of funds (category): Organizational general support
Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.
Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter". Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
Donor reason for donating that amount (rather than a bigger or smaller amount): Amount decided by the Committee for Effective Altruism Support (CEAS) https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by CEAS, made at the same time and therefore likely drawing from the same allocation, are to the Centre for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The original amount of $2,112,500 is split across two years, i.e., ~$1.06 million per year. https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of three-year $1.25 million/year support announced in October 2017, and the total $2.31 million represents Open Phil's full intended funding for MIRI for 2019, but the amount for 2020 of ~$1.06 million is a lower bound, and Open Phil may grant more for 2020 later. In November 2019, additional funding would bring the total award amount to $2,652,500.
Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end, (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements, so a suitable funding amount could be larger, (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round.
Intended funding timeframe in months: 24
Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant.
Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 with a very similar writeup suggests that Open Phil and the Committee for Effective Altruism Support continued to stand by the reasoning for the grant.
Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-01.
|150,000.00||5||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program||Claire Zabel||Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more
Intended use of funds (category): Direct project expenses
Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."
Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-27.
|3,750,000.00||2||AI safety/technical research||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017||Nick Beckstead||Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI after the one-year timeframe for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support ended. The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) suggests that work on the review had been going on well before the grant renewal date
Intended use of funds (category): Organizational general support
Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."
Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page
Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach."
Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36
Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."
Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Open Phil's statement that, due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach" is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
|500,000.00||4||AI safety/technical research||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support||--||Donation process: The grant page describes the process in Section 1. Background and Process. "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design”.[...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The link is to https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. 
He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research. In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs."
Intended use of funds (category): Organizational general support
Intended use of funds: The grant page, Section 3.1 Budget and room for more funding, says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."
Donor reason for selecting the donee: The grant page, Section 3.2 Case for the grant, gives five reasons: (1) Uncertainty about technical assessment (i.e., despite negative technical assessment, there is a chance that MIRI's work is high-potential), (2) Increasing research supply and diversity in the important-but-neglected AI safety space, (3) Potential for improvement of MIRI's research program, (4) Recognition of MIRI's early articulation of the value alignment problem, (5) Other considerations: (a) role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) "participation grant" for time spent in evaluation process, (e) grant in advance of potential need for significant help from MIRI for consulting on AI safety
Donor reason for donating that amount (rather than a bigger or smaller amount): The maximal funding that Open Phil would give MIRI would be $1.5 million per year. However, Open Phil recommended a partial amount, due to some reservations, described on the grant page, Section 2 Our impression of MIRI’s Agent Foundations research: (1) Assessment that it is not likely relevant to reducing risks from advanced AI, especially to the risks from transformative AI in the next 20 years, (2) MIRI has not made much progress toward its agenda, with internal and external reviewers describing their work as technically nontrivial, but unimpressive, and compared with what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."
Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed
Intended funding timeframe in months: 12
Donor thoughts on making further donations to the donee: Section 4 Plans for follow-up says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."
Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) (not an official MIRI statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the grant amount and the three-year commitment are both more positive for MIRI than the expectations at the time of the original grant
Other notes: The grant page links to commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.