Berkeley Existential Risk Initiative donations received

This is an online portal with information on donations that were announced publicly (or shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United States
Website | http://existence.org/
Donate page | http://existence.org/donating/
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Berkeley_Existential_Risk_Initiative
Org Watch page | https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative
Key people | Andrew Critch, Gina Stuessy, Michael Keenan
Launch date | 2017-02
Notes | Launched to provide fast-moving support to existing existential risk organizations. Works closely with the Machine Intelligence Research Institute, the Center for Human-Compatible AI, the Centre for the Study of Existential Risk, and the Future of Humanity Institute. People working at it are closely involved with MIRI and the Center for Applied Rationality.

This entity is also a donor.

Donation amounts by donor and year for donee Berkeley Existential Risk Initiative

Donor | Total | 2017
Jaan Tallinn | 7,000,000.00 | 7,000,000.00
Open Philanthropy Project | 403,890.00 | 403,890.00
Anonymous | 100,000.00 | 100,000.00
Casey and Family Foundation | 100,000.00 | 100,000.00
EA Giving Group | 35,161.98 | 35,161.98
Effective Altruism Funds | 14,838.02 | 14,838.02
Total | 7,653,890.00 | 7,653,890.00
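
The totals in this table can be sanity-checked mechanically. Below is a minimal sketch in Python (not code from this portal's repository); the donor names and amounts are copied from the table above, and decimal arithmetic is used so the comparison is exact.

    from decimal import Decimal

    # Per-donor totals for 2017, copied from the table above (amounts in current USD).
    donor_totals = {
        "Jaan Tallinn": Decimal("7000000.00"),
        "Open Philanthropy Project": Decimal("403890.00"),
        "Anonymous": Decimal("100000.00"),
        "Casey and Family Foundation": Decimal("100000.00"),
        "EA Giving Group": Decimal("35161.98"),
        "Effective Altruism Funds": Decimal("14838.02"),
    }

    # The sum of the per-donor totals should equal the stated overall total.
    total = sum(donor_totals.values())
    assert total == Decimal("7653890.00"), total
    print(f"Total donations received in 2017: {total} USD")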

Full list of documents in reverse chronological order (9 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
2018 AI Alignment Literature Review and Charity Comparison | 2018-12-17 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs. far safety research, other existential risks, financial reserves, donation matching, poor-quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
Seeking Testimonials - IPR, Leverage, and Paradigm | 2018-11-15 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Leverage Research, Institute for Philosophical Research, Paradigm Academy | Request for reviews of donee | Rationality improvement | In the blog post, Andrew Critch of BERI talks about plans to make grants to Leverage Research and the Institute for Philosophical Research (IPR). Critch says that IPR, Leverage, and Paradigm Academy are three related organizations that BERI internally refers to as ILP. In light of community skepticism about ILP, Critch announces that BERI is inviting feedback on these organizations through a feedback form until December 20. He also explains what sort of feedback will be taken more seriously by BERI. The post was also announced on the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/fvvRZMJJ7g4gzXjSH/seeking-information-on-three-potential-grantee-organizations on 2018-12-09.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017 | 2017-12-21 | Holden Karnofsky | Open Philanthropy Project | Jaime Yassif, Chloe Cockburn, Lewis Bollard, Nick Beckstead, Daniel Dewey | Center for International Security and Cooperation, Johns Hopkins Center for Health Security, Good Call, Court Watch NOLA, Compassion in World Farming USA, Wild-Animal Suffering Research, Effective Altruism Funds, Donor lottery, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Berkeley Existential Risk Initiative, Centre for Effective Altruism, 80,000 Hours, Alliance to Feed the Earth in Disasters | Donation suggestion list | Animal welfare, AI safety, Biosecurity and pandemic preparedness, Effective altruism, Criminal justice reform | Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
Staff Members’ Personal Donations for Giving Season 2017 | 2017-12-18 | Holden Karnofsky | Open Philanthropy Project | Holden Karnofsky, Alexander Berger, Nick Beckstead, Helen Toner, Claire Zabel, Lewis Bollard, Ajeya Cotra, Morgan Davis, Michael Levine | GiveWell top charities, GiveWell, GiveDirectly, EA Giving Group, Berkeley Existential Risk Initiative, Effective Altruism Funds, Sentience Institute, Encompass, The Humane League, The Good Food Institute, Mercy For Animals, Compassion in World Farming USA, Animal Equality, Donor lottery, Against Malaria Foundation, GiveDirectly | Periodic donation list documentation | | Open Philanthropy Project staff members describe where they are donating this year, and the considerations that went into the donation decision. By policy, amounts are not disclosed. This is the first standalone blog post of this sort by the Open Philanthropy Project; in previous years, the corresponding donations were documented in the GiveWell staff members' donation post.
AI: a Reason to Worry, and to Donate | 2017-12-10 | Jacob Falkovich | Jacob Falkovich | | Machine Intelligence Research Institute, Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, Effective Altruism Funds | Single donation documentation | AI safety | Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
Announcing BERI Computing Grants | 2017-12-01 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks |
Forming an engineering team | 2017-10-25 | Andrew Critch | Berkeley Existential Risk Initiative | | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks |
What we’re thinking about as we grow - ethics, oversight, and getting things done | 2017-10-19 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | Outlines BERI's approach to growth and "ethics" (transparency, oversight, trust, etc.).
BERI's semi-annual report, August | 2017-09-12 | Rebecca Raible | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | A blog post announcing BERI's semi-annual report.

Full list of donations in reverse chronological order (7 donations)

Donor | Amount (current USD) | Donation date | Cause area | URL | Influencer | Notes
Jaan Tallinn | 5,000,000.00 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | -- | Donation amount approximate.
Anonymous | 100,000.00 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | -- |
Casey and Family Foundation | 100,000.00 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | -- |
Open Philanthropy Project | 403,890.00 | 2017-07 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration | Daniel Dewey | Grant to support core functions of grantee, and to help them provide contract workers for the Center for Human-Compatible AI (CHAI) housed at the University of California, Berkeley, also an Open Phil grantee (see https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for info on that grant). Open Phil also sees this as a promising model for providing assistance to other BERI clients in the future. Announced: 2017-09-28.
Effective Altruism Funds | 14,838.02 | 2017-04 | AI safety/other global catastrophic risks | https://app.effectivealtruism.org/funds/far-future/payouts/OzIQqsVacUKw0kEuaUGgI | Nick Beckstead | Grant discussed at http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ along with reasoning. Grantee approached Nick Beckstead with a grant proposal asking for 50,000 USD. Beckstead provided all the money that had been donated to the far future fund so far, and made up the remainder via the EA Giving Group and some personal funds. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
EA Giving Group | 35,161.98 | 2017-04 | AI safety/other global catastrophic risks | https://app.effectivealtruism.org/funds/far-future/payouts/OzIQqsVacUKw0kEuaUGgI | Nick Beckstead | Grant discussed at http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ along with reasoning. Grantee approached Nick Beckstead with a grant proposal asking for 50,000 USD. Beckstead provided all the money that had been donated to the far future fund in Effective Altruism Funds so far, and made up the remainder via the EA Giving Group and some personal funds. It is not clear how much came from personal funds, so for simplicity we are attributing the entirety of the remainder to the EA Giving Group (creating some inaccuracy).
Jaan Tallinn | 2,000,000.00 | 2017 | AI safety | http://existence.org/grants | -- |
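
As a closing cross-check, here is another minimal Python sketch (again, not code from the portal's repository) that aggregates the seven donation rows above by donor, compares the result with the "Donation amounts by donor and year" table, and confirms that the Effective Altruism Funds and EA Giving Group amounts together make up the 50,000 USD requested in the April 2017 grant proposal.

    from decimal import Decimal

    # The seven individual donation rows from the table above (donor, amount in current USD).
    donations = [
        ("Jaan Tallinn", Decimal("5000000.00")),
        ("Anonymous", Decimal("100000.00")),
        ("Casey and Family Foundation", Decimal("100000.00")),
        ("Open Philanthropy Project", Decimal("403890.00")),
        ("Effective Altruism Funds", Decimal("14838.02")),
        ("EA Giving Group", Decimal("35161.98")),
        ("Jaan Tallinn", Decimal("2000000.00")),
    ]

    # Aggregate by donor and compare with the "Donation amounts by donor and year" table.
    by_donor = {}
    for donor, amount in donations:
        by_donor[donor] = by_donor.get(donor, Decimal("0")) + amount

    assert by_donor["Jaan Tallinn"] == Decimal("7000000.00")
    assert sum(by_donor.values()) == Decimal("7653890.00")

    # The two April 2017 grants (the EA Funds payout plus the EA Giving Group remainder)
    # should add up to the 50,000 USD requested in the grant proposal.
    assert by_donor["Effective Altruism Funds"] + by_donor["EA Giving Group"] == Decimal("50000.00")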