Berkeley Existential Risk Initiative donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly or shared with permission. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United States
Website | http://existence.org/
Donate page | http://existence.org/donating/
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Berkeley_Existential_Risk_Initiative
Org Watch page | https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative
Key people | Andrew Critch|Gina Stuessy|Michael Keenan
Launch date | 2017-02
Notes | Launched to provide fast-moving support to existing existential risk organizations. Works closely with the Machine Intelligence Research Institute, the Center for Human-Compatible AI, the Centre for the Study of Existential Risk, and the Future of Humanity Institute. People working at it are closely involved with MIRI and the Center for Applied Rationality.

This entity is also a donor.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 11 | 150,000 | 796,944 | 7,497 | 14,838 | 35,162 | 100,000 | 100,000 | 150,000 | 250,000 | 403,890 | 705,000 | 2,000,000 | 5,000,000
AI safety | 11 | 150,000 | 796,944 | 7,497 | 14,838 | 35,162 | 100,000 | 100,000 | 150,000 | 250,000 | 403,890 | 705,000 | 2,000,000 | 5,000,000
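
The site does not state which percentile method it uses, but the values above are consistent with the nearest-rank method applied to the 11 donation amounts in the "Full list of donations" section below. A minimal sketch in Python, assuming that method:

```python
import math
import statistics

# The 11 donation amounts (USD) from the "Full list of donations" section.
amounts = sorted([
    150000.00, 7497.00, 705000.00, 250000.00, 5000000.00, 100000.00,
    100000.00, 403890.00, 35161.98, 14838.02, 2000000.00,
])

def nearest_rank_percentile(sorted_values, p):
    """Return the p-th percentile using the nearest-rank method."""
    rank = math.ceil(p / 100 * len(sorted_values))  # 1-indexed rank
    return sorted_values[max(rank, 1) - 1]

print(len(amounts))                        # 11 (Count)
print(statistics.median(amounts))          # 150000.0 (Median)
print(round(sum(amounts) / len(amounts)))  # 796944 (Mean)
print(amounts[0], amounts[-1])             # 7497.0 5000000.0 (Minimum, Maximum)
for p in range(10, 100, 10):
    print(p, nearest_rank_percentile(amounts, p))  # reproduces the decile columns
```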

Donation amounts by donor and year for donee Berkeley Existential Risk Initiative

Donor | Total | 2020 | 2019 | 2017
Jaan Tallinn | 7,000,000.00 | 0.00 | 0.00 | 7,000,000.00
Open Philanthropy | 1,508,890.00 | 150,000.00 | 955,000.00 | 403,890.00
Anonymous | 100,000.00 | 0.00 | 0.00 | 100,000.00
Casey and Family Foundation | 100,000.00 | 0.00 | 0.00 | 100,000.00
EA Giving Group | 35,161.98 | 0.00 | 0.00 | 35,161.98
Effective Altruism Funds: Long-Term Future Fund | 14,838.02 | 0.00 | 0.00 | 14,838.02
Patrick Brinich-Langlois | 7,497.00 | 0.00 | 7,497.00 | 0.00
Total | 8,766,387.00 | 150,000.00 | 962,497.00 | 7,653,890.00
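
The table above is a pivot of the donation-level rows in the "Full list of donations" section below. A minimal sketch of the aggregation (hypothetical helper code, with donation dates truncated to years and the year-only Jaan Tallinn grant assigned to 2017):

```python
from collections import defaultdict

# (donor, year, amount) tuples copied from the "Full list of donations" section.
donations = [
    ("Open Philanthropy", 2020, 150000.00),
    ("Patrick Brinich-Langlois", 2019, 7497.00),
    ("Open Philanthropy", 2019, 705000.00),
    ("Open Philanthropy", 2019, 250000.00),
    ("Jaan Tallinn", 2017, 5000000.00),
    ("Anonymous", 2017, 100000.00),
    ("Casey and Family Foundation", 2017, 100000.00),
    ("Open Philanthropy", 2017, 403890.00),
    ("EA Giving Group", 2017, 35161.98),
    ("Effective Altruism Funds: Long-Term Future Fund", 2017, 14838.02),
    ("Jaan Tallinn", 2017, 2000000.00),
]

# (donor, year) -> amount; this dict reproduces the year columns of the table.
totals = defaultdict(float)
for donor, year, amount in donations:
    totals[(donor, year)] += amount

# Row totals per donor, largest first (matches the table's row ordering).
by_donor = defaultdict(float)
for (donor, _), amount in totals.items():
    by_donor[donor] += amount
for donor, total in sorted(by_donor.items(), key=lambda kv: -kv[1]):
    print(f"{donor}: {total:,.2f}")
print(f"Total: {sum(by_donor.values()):,.2f}")  # 8,766,387.00
```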

Full list of documents in reverse chronological order (13 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin|Effective Altruism Funds: Long-Term Future Fund|Open Philanthropy|Survival and Flourishing Fund | Future of Humanity Institute|Center for Human-Compatible AI|Machine Intelligence Research Institute|Global Catastrophic Risk Institute|Centre for the Study of Existential Risk|Ought|OpenAI|AI Safety Camp|Future of Life Institute|AI Impacts|Global Priorities Institute|Foundational Research Institute|Median Group|Center for Security and Emerging Technology|Leverhulme Centre for the Future of Intelligence|Berkeley Existential Risk Initiative|AI Pulse | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
The Future of Grant-making Funded by Jaan Tallinn at BERI | 2019-08-25 | Board of Directors | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative|Jaan Tallinn | -- | Broad donor strategy | -- | In the blog post, BERI announces that it is no longer going to be handling grantmaking for Jaan Tallinn. The grantmaking is being handed to "one or more other teams and/or processes that are separate from BERI." Andrew Critch will be working on the handoff. BERI will complete administration of grants already committed to.
Committee for Effective Altruism Support | 2019-02-27 | -- | Open Philanthropy | Open Philanthropy | Centre for Effective Altruism|Berkeley Existential Risk Initiative|Center for Applied Rationality|Machine Intelligence Research Institute|Future of Humanity Institute | Broad donor strategy | Effective altruism|AI safety | The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose sizes were determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) at https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR) | 2019-01-06 | Avi Norowitz | Effective Altruism Forum | Avi Norowitz|William Kiely | Against Malaria Foundation|Malaria Consortium|GiveWell|Effective Altruism Funds|Alliance to Feed the Earth in Disasters|Effective Animal Advocacy Fund|The Humane League|The Good Food Institute|Animal Charity Evaluators|Machine Intelligence Research Institute|Faunalytics|Wild-Animal Suffering Research|GiveDirectly|Center for Applied Rationality|Effective Altruism Foundation|Cool Earth|Schistosomiasis Control Initiative|New Harvest|Evidence Action|Centre for Effective Altruism|Animal Equality|Compassion in World Farming USA|Innovations for Poverty Action|Global Catastrophic Risk Institute|Future of Life Institute|Animal Charity Evaluators Recommended Charity Fund|Sightsavers|The Life You Can Save|One Step for Animals|Helen Keller International|80,000 Hours|Berkeley Existential Risk Initiative|Vegan Outreach|Encompass|Iodine Global Network|Otwarte Klatki|Charity Science|Mercy For Animals|Coalition for Rainforest Nations|Fistula Foundation|Sentience Institute|Better Eating International|Forethought Foundation for Global Priorities Research|Raising for Effective Giving|Clean Air Task Force|The END Fund | Miscellaneous commentary | -- | The blog post describes an effort by a number of donors coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched until 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute|Future of Humanity Institute|Center for Human-Compatible AI|Centre for the Study of Existential Risk|Global Catastrophic Risk Institute|Global Priorities Institute|Australian National University|Berkeley Existential Risk Initiative|Ought|AI Impacts|OpenAI|Effective Altruism Foundation|Foundational Research Institute|Median Group|Convergence Analysis | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
Seeking Testimonials - IPR, Leverage, and Paradigm | 2018-11-15 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Leverage Research|Institute for Philosophical Research|Paradigm Academy | Request for reviews of donee | Rationality improvement | In the blog post, Andrew Critch of BERI talks about plans to make grants to Leverage Research and the Institute for Philosophical Research (IPR). Critch says that IPR, Leverage, and Paradigm Academy are three related organizations that BERI internally refers to as ILP. In light of community skepticism about ILP, Critch announces that BERI is inviting feedback on these organizations through a feedback form until December 20. He also explains what sort of feedback will be taken more seriously by BERI. The post was also announced on the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/fvvRZMJJ7g4gzXjSH/seeking-information-on-three-potential-grantee-organizations (GW, IR) on 2018-12-09.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017 | 2017-12-21 | Holden Karnofsky | Open Philanthropy | Jaime Yassif|Chloe Cockburn|Lewis Bollard|Nick Beckstead|Daniel Dewey | Center for International Security and Cooperation|Johns Hopkins Center for Health Security|Good Call|Court Watch NOLA|Compassion in World Farming USA|Wild-Animal Suffering Research|Effective Altruism Funds|Donor lottery|Future of Humanity Institute|Center for Human-Compatible AI|Machine Intelligence Research Institute|Berkeley Existential Risk Initiative|Centre for Effective Altruism|80,000 Hours|Alliance to Feed the Earth in Disasters | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform | Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
Staff Members’ Personal Donations for Giving Season 2017 | 2017-12-18 | Holden Karnofsky | Open Philanthropy | Holden Karnofsky|Alexander Berger|Nick Beckstead|Helen Toner|Claire Zabel|Lewis Bollard|Ajeya Cotra|Morgan Davis|Michael Levine | GiveWell top charities|GiveWell|GiveDirectly|EA Giving Group|Berkeley Existential Risk Initiative|Effective Altruism Funds|Sentience Institute|Encompass|The Humane League|The Good Food Institute|Mercy For Animals|Compassion in World Farming USA|Animal Equality|Donor lottery|Against Malaria Foundation|GiveDirectly | Periodic donation list documentation | -- | Open Philanthropy Project staff members describe where they are donating this year, and the considerations that went into the donation decision. By policy, amounts are not disclosed. This is the first standalone blog post of this sort by the Open Philanthropy Project; in previous years, the corresponding donations were documented in the GiveWell staff members' donation post.
AI: a Reason to Worry, and to Donate | 2017-12-10 | Jacob Falkovich | Jacob Falkovich | -- | Machine Intelligence Research Institute|Future of Life Institute|Center for Human-Compatible AI|Berkeley Existential Risk Initiative|Future of Humanity Institute|Effective Altruism Funds | Single donation documentation | AI safety | Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
Announcing BERI Computing Grants | 2017-12-01 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | -- | Donee periodic update | AI safety/other global catastrophic risks | --
Forming an engineering team | 2017-10-25 | Andrew Critch | Berkeley Existential Risk Initiative | -- | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | --
What we’re thinking about as we grow - ethics, oversight, and getting things done | 2017-10-19 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | Outlines BERI's approach to growth and "ethics" (transparency, oversight, trust, etc.).
BERI's semi-annual report, August | 2017-09-12 | Rebecca Raible | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | A blog post announcing BERI's semi-annual report.

Full list of donations in reverse chronological order (11 donations)

[Graph of top 10 donors by amount, showing the timeframe of donations]
Donor | Amount (current USD) | Amount rank (out of 11) | Donation date | Cause area | URL | Influencer | Notes
Open Philanthropy | 150,000.00 | 6 | 2020-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-general-support | Claire Zabel | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "BERI seeks to reduce existential risks to humanity, and collaborates with other long-termist organizations, including the Center for Human-Compatible AI at UC Berkeley. This funding is intended to help BERI establish new collaborations."
Patrick Brinich-Langlois | 7,497.00 | 11 | 2019-12-03 | AI safety | https://www.patbl.com/misc/other/donations/ | --
Open Philanthropy | 705,000.00 | 3 | 2019-11 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 | Daniel Dewey | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 24; announced: 2019-12-13.
Open Philanthropy | 250,000.00 | 5 | 2019-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers | Daniel Dewey | Donation process: The Open Philanthropy Project described the donation decision as being based on "conversations with various professors and students"

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for the launch of CHAI and previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration to collaborate with CHAI. Announced: 2019-03-04.
Jaan Tallinn | 5,000,000.00 | 1 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | -- | Donation amount approximate.
Anonymous | 100,000.00 | 7 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | --
Casey and Family Foundation | 100,000.00 | 7 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | --
Open Philanthropy | 403,890.00 | 4 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration | Daniel Dewey | Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai. The funding is intended to help BERI hire contractors and part-time employees to help CHAI, covering services such as web development and coordination support and roles such as research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff. More details are in the grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx

Other notes: Announced: 2017-09-28.
EA Giving Group | 35,161.98 | 9 | 2017-04 | AI safety/other global catastrophic risks | https://app.effectivealtruism.org/funds/far-future/payouts/OzIQqsVacUKw0kEuaUGgI | Nick Beckstead | The grant is discussed, along with reasoning, at http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ The grantee approached Nick Beckstead with a grant proposal asking for 50,000 USD. Beckstead provided all the money already donated to the far future fund in Effective Altruism Funds (14,838.02 USD) and made up the remainder (50,000.00 − 14,838.02 = 35,161.98 USD) via the EA Giving Group and some personal funds. It is not clear how much came from personal funds, so for simplicity we attribute the entire remainder to the EA Giving Group (creating some inaccuracy).
Effective Altruism Funds: Long-Term Future Fund | 14,838.02 | 10 | 2017-04 | AI safety/other global catastrophic risks | https://app.effectivealtruism.org/funds/far-future/payouts/OzIQqsVacUKw0kEuaUGgI | Nick Beckstead | The grant is discussed, along with reasoning, at http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ The grantee approached Nick Beckstead with a grant proposal asking for 50,000 USD. Beckstead provided all the money already donated to the far future fund, and made up the remainder via the EA Giving Group and some personal funds. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Jaan Tallinn | 2,000,000.00 | 2 | 2017 | AI safety | http://existence.org/grants | --