This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations was seeded with an initial collation by Issa Rice, who continues to contribute (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback posted to the EA Forum.
|Timelines wiki page||https://timelines.issarice.com/wiki/Timeline_of_Berkeley_Existential_Risk_Initiative|
|Org Watch page||https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative|
|Key people||Andrew Critch|Gina Stuessy|Michael Keenan|
|Notes||Launched to provide fast-moving support to existing existential risk organizations. Works closely with the Machine Intelligence Research Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, and Future of Humanity Institute. People working at it are closely involved with MIRI and the Center for Applied Rationality.|
This entity is also a donor.
|Donor||Total||2020||2019||2017|
|Jaan Tallinn||7,000,000.00||0.00||0.00||7,000,000.00|
|Open Philanthropy||1,508,890.00||150,000.00||955,000.00||403,890.00|
|Anonymous||100,000.00||0.00||0.00||100,000.00|
|Casey and Family Foundation||100,000.00||0.00||0.00||100,000.00|
|EA Giving Group||35,161.98||0.00||0.00||35,161.98|
|Effective Altruism Funds: Long-Term Future Fund||14,838.02||0.00||0.00||14,838.02|
|Patrick Brinich-Langlois||7,497.00||0.00||7,497.00||0.00|
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Document scope||Cause area||Notes|
|2019 AI Alignment Literature Review and Charity Comparison (GW, IR)||2019-12-19||Ben Hoskin||Effective Altruism Forum||Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund||Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.|
|The Future of Grant-making Funded by Jaan Tallinn at BERI||2019-08-25||Board of Directors||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative Jaan Tallinn||Broad donor strategy||In the blog post, BERI announces that it is no longer going to be handling grantmaking for Jaan Tallinn. The grantmaking is being handed to "one or more other teams and/or processes that are separate from BERI." Andrew Critch will be working on the handoff. BERI will complete administration of grants already committed to.|
|Committee for Effective Altruism Support||2019-02-27||Open Philanthropy||Open Philanthropy||Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute||Broad donor strategy||Effective altruism|AI safety||The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of, and deeply embedded in, the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts (a minimal sketch of this vote-averaging appears after this table). Example grants whose size was determined by the committee include the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019|
|EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR)||2019-01-06||Avi Norowitz||Effective Altruism Forum||Avi Norowitz William Kiely||Against Malaria Foundation Malaria Consortium GiveWell Effective Altruism Funds Alliance to Feed the Earth in Disasters Effective Animal Advocacy Fund The Humane League The Good Food Institute Animal Charity Evaluators Machine Intelligence Research Institute Faunalytics Wild-Animal Suffering Research GiveDirectly Center for Applied Rationality Effective Altruism Foundation Cool Earth Schistosomiasis Control Initiative New Harvest Evidence Action Centre for Effective Altruism Animal Equality Compassion in World Farming USA Innovations for Poverty Action Global Catastrophic Risk Institute Future of Life Institute Animal Charity Evaluators Recommended Charity Fund Sightsavers The Life You Can Save One Step for Animals Helen Keller International 80,000 Hours Berkeley Existential Risk Initiative Vegan Outreach Encompass Iodine Global Network Otwarte Klatki Charity Science Mercy For Animals Coalition for Rainforest Nations Fistula Foundation Sentience Institute Better Eating International Forethought Foundation for Global Priorities Research Raising for Effective Giving Clean Air Task Force The END Fund||Miscellaneous commentary||The blog post describes an effort by a number of donors coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched until 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched (a timestamp-based sketch of this estimate appears after this table).|
|2018 AI Alignment Literature Review and Charity Comparison (GW, IR)||2018-12-17||Ben Hoskin||Effective Altruism Forum||Ben Hoskin||Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR) The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."|
|Seeking Testimonials - IPR, Leverage, and Paradigm||2018-11-15||Andrew Critch||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Leverage Research Institute for Philosophical Research Paradigm Academy||Request for reviews of donee||Rationality improvement||In the blog post, Andrew Critch of BERI talks about plans to make grants to Leverage Research and the Institute for Philosophical Research (IPR). Critch says that IPR, Leverage, and Paradigm Academy are three related organizations that BERI internally refers to as ILP. In light of community skepticism about ILP, Critch announces that BERI is inviting feedback through a feedback form on these organizations until December 20. He also explains what sort of feedback will be taken more seriously by BERI. The post was also announced on the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/fvvRZMJJ7g4gzXjSH/seeking-information-on-three-potential-grantee-organizations (GW, IR) on 2018-12-09|
|Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017||2017-12-21||Holden Karnofsky||Open Philanthropy||Jaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey||Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters||Donation suggestion list||Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform||Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.|
|Staff Members’ Personal Donations for Giving Season 2017||2017-12-18||Holden Karnofsky||Open Philanthropy||Holden Karnofsky Alexander Berger Nick Beckstead Helen Toner Claire Zabel Lewis Bollard Ajeya Cotra Morgan Davis Michael Levine||GiveWell top charities GiveWell GiveDirectly EA Giving Group Berkeley Existential Risk Initiative Effective Altruism Funds Sentience Institute Encompass The Humane League The Good Food Institute Mercy For Animals Compassion in World Farming USA Animal Equality Donor lottery Against Malaria Foundation GiveDirectly||Periodic donation list documentation||Open Philanthropy Project staff members describe where they are donating this year, and the considerations that went into the donation decision. By policy, amounts are not disclosed. This is the first standalone blog post of this sort by the Open Philanthropy Project; in previous years, the corresponding donations were documented in the GiveWell staff members donation post.|
|AI: a Reason to Worry, and to Donate||2017-12-10||Jacob Falkovich||Jacob Falkovich||Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds||Single donation documentation||AI safety||Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).|
|Announcing BERI Computing Grants||2017-12-01||Andrew Critch||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Donee periodic update||AI safety/other global catastrophic risks|
|Forming an engineering team||2017-10-25||Andrew Critch||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Donee periodic update||AI safety/other global catastrophic risks|
|What we’re thinking about as we grow - ethics, oversight, and getting things done||2017-10-19||Andrew Critch||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Donee periodic update||AI safety/other global catastrophic risks||Outlines BERI's approach to growth and "ethics" (transparency, oversight, trust, etc.).|
|BERI's semi-annual report, August||2017-09-12||Rebecca Raible||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Berkeley Existential Risk Initiative||Donee periodic update||AI safety/other global catastrophic risks||A blog post announcing BERI's semi-annual report.|
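The committee process in the "Committee for Effective Altruism Support" row above boils down to simple arithmetic: each member allocates a fixed budget across grantees (optionally reserving part for later giving), and each grantee receives the mean of the members' allocations. A minimal sketch of that averaging, with entirely hypothetical votes and amounts (the actual votes and budget are not public):

```python
# Vote-averaging as described in Open Philanthropy's "Committee for
# Effective Altruism Support" post: each committee member splits a set
# budget across grantees; the "reserve" bucket stands for money voted
# to be saved for later giving. All numbers below are hypothetical.
votes = [
    {"MIRI": 3_000_000, "CEA": 2_000_000, "reserve": 1_000_000},
    {"MIRI": 2_500_000, "CEA": 2_500_000, "reserve": 1_000_000},
    {"MIRI": 4_000_000, "CEA": 1_000_000, "reserve": 1_000_000},
]

grantees = {g for vote in votes for g in vote}
# Final grant size for each grantee is the mean of the members' votes.
final = {g: sum(vote.get(g, 0) for vote in votes) / len(votes) for g in grantees}
for g, amount in sorted(final.items()):
    print(f"{g}: ${amount:,.2f}")
```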
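Similarly, the estimate in the "EA Giving Tuesday Donation Matching Initiative 2018 Retrospective" row rests on comparing donation timestamps against the moment Facebook's matching funds ran out, roughly 14 seconds after matching began. A sketch of that calculation with hypothetical donation data; the 14-second cutoff is from the post, while the exact start time used below is assumed for illustration:

```python
# Estimate the matched share of donations by timestamp, as in the EA
# Giving Tuesday retrospective: a donation counts as matched if it was
# made before matching funds ran out (~14 seconds after the start).
from datetime import datetime, timedelta

MATCH_START = datetime(2018, 11, 27, 8, 0, 0)    # assumed start time
MATCH_END = MATCH_START + timedelta(seconds=14)  # cutoff per the post

donations = [  # (timestamp, amount in USD); hypothetical examples
    (MATCH_START + timedelta(seconds=2), 20_000),
    (MATCH_START + timedelta(seconds=9), 15_000),
    (MATCH_START + timedelta(seconds=30), 10_000),
]

matched = sum(amount for ts, amount in donations if ts <= MATCH_END)
total = sum(amount for _, amount in donations)
print(f"Matched ${matched:,} of ${total:,} ({matched / total:.0%})")
```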
Graph of top 10 donors by amount, showing the timeframe of donations
|Donor||Amount (current USD)||Amount rank (out of 11)||Cause area||URL||Influencer||Notes|
|Open Philanthropy||150,000.00||6||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-general-support||Claire Zabel||Intended use of funds (category): Organizational general support
Intended use of funds: The grant page says: "BERI seeks to reduce existential risks to humanity, and collaborates with other long-termist organizations, including the Center for Human-Compatible AI at UC Berkeley. This funding is intended to help BERI establish new collaborations."
|Patrick Brinich-Langlois||7,497.00||11||AI safety||https://www.patbl.com/misc/other/donations/||--|
|Open Philanthropy||705,000.00||3||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019||Daniel Dewey||Intended use of funds (category): Direct project expenses
Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."
Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 24; announced: 2019-12-13.
|Open Philanthropy||250,000.00||5||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-ml-engineers||Daniel Dewey||Donation process: The Open Philanthropy Project described the donation decision as being based on "conversations with various professors and students"
Intended use of funds (category): Direct project expenses
Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible AI (CHAI).
Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."
Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.
Other notes: Follows previous support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai for the launch of CHAI and previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration to collaborate with CHAI. Announced: 2019-03-04.
|Jaan Tallinn||5,000,000.00||1||AI safety||http://existence.org/2018/01/11/activity-update-december-2017.html||--||Donation amount approximate.|
|Casey and Family Foundation||100,000.00||7||AI safety||http://existence.org/2018/01/11/activity-update-december-2017.html||--|
|Open Philanthropy||403,890.00||4||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-core-staff-and-chai-collaboration||Daniel Dewey||Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf
Intended use of funds (category): Direct project expenses
Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai The funding is intended to help BERI hire contractors and part-time employees to support CHAI, such as web developers, coordinators, research engineers, software developers, or research illustrators. The funding is also intended to help support BERI’s core staff. More details are in the grant proposal https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf
Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."
Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx
Other notes: Announced: 2017-09-28.
|EA Giving Group||35,161.98||9||AI safety/other global catastrophic risks||https://app.effectivealtruism.org/funds/far-future/payouts/OzIQqsVacUKw0kEuaUGgI||Nick Beckstead||Grant discussed at http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ along with reasoning. Grantee approached Nick Beckstead with a grant proposal asking for $50,000. Beckstead provided all the money already donated to the Far Future Fund in Effective Altruism Funds, and made up the remainder via the EA Giving Group and some personal funds. It is not clear how much was personal funds, so for simplicity we attribute the entirety of the remainder to the EA Giving Group, creating some inaccuracy (see the arithmetic sketch after this table).|
|Effective Altruism Funds: Long-Term Future Fund||14,838.02||10||AI safety/other global catastrophic risks||https://funds.effectivealtruism.org/funds/payouts/march-2017-berkeley-existential-risk-initiative-beri||Nick Beckstead||Donation process: The grant page says that Nick Beckstead, the fund manager, learned that Andrew Critch was starting up BERI and needed $50,000. Beckstead determined that this would be the best use of the money in the Long-Term Future Fund.
Intended use of funds (category): Organizational general support
Intended use of funds: The grant page says: "It is a new initiative providing various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support). It works as a non-profit entity, independent of any university, so that it can help multiple organizations and to operate more swiftly than would be possible within a university context."
Donor reason for selecting the donee: Nick Beckstead gives these reasons on the grant page: the basic idea makes sense to him, his confidence in Critch's ability to make it happen, supporting people to try out reasonable ideas and learn from how they unfold seems valuable, and the natural role of Beckstead as a "first funder" for such opportunities and confidence that other competing funders for this would have good counterfactual uses of their money.
Donor reason for donating that amount (rather than a bigger or smaller amount): The requested amount was $50,000, and at the time of the grant, the fund had only $14,838.02, so the entire fund balance was granted. Beckstead donated the remainder of the funding via the EA Giving Group and a personal donor-advised fund.
Percentage of total donor spend in the corresponding batch of donations: 100.00%
Donor reason for donating at this time (rather than earlier or later): The timing of BERI starting up and the launch of the Long-Term Future Fund closely matched, leading to this grant happening when it did.
Donor retrospective of the donation: BERI would become successful and get considerable funding from Jaan Tallinn in the coming months, validating the grant. The Long-Term Future Fund would not make any further grants to BERI.
|Jaan Tallinn||2,000,000.00||2||AI safety||http://existence.org/grants||--|
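The split between the Effective Altruism Funds: Long-Term Future Fund row and the EA Giving Group row above is plain subtraction against the $50,000 request; a minimal check, using only figures from those rows:

```python
# Reconstruct the attribution described in the EA Giving Group row:
# the Long-Term Future Fund granted its entire balance toward the
# $50,000 request, and the remainder is attributed to the EA Giving
# Group (some of it was actually Beckstead's personal funds).
requested = 50_000.00
ltff_grant = 14_838.02                    # entire fund balance at the time
remainder = requested - ltff_grant
assert abs(remainder - 35_161.98) < 0.01  # matches the EA Giving Group row
print(f"Attributed to EA Giving Group (plus personal funds): ${remainder:,.2f}")
```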