Centre for the Study of Existential Risk donations received

This is an online portal with information on donations that were announced publicly (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback posted to the EA Forum.

Table of contents

- Basic donee information
- Donee donation statistics
- Donation amounts by donor and year
- Full list of documents in reverse chronological order
- Full list of donations in reverse chronological order

Basic donee information

We do not have any donee information for the donee Centre for the Study of Existential Risk in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 3 | 40,000 | 80,000 | 0 | 0 | 0 | 0 | 40,000 | 40,000 | 40,000 | 200,000 | 200,000 | 200,000 | 200,000
Rationality improvement | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Global catastrophic risks | 1 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000 | 40,000
AI safety | 1 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000
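
As an illustration of how the Overall row above can be derived from the three underlying donation amounts (0 for the EA Giving Group donation, whose amount is unknown; 40,000; and 200,000), here is a minimal Python sketch. The portal's exact percentile convention is not documented on this page; the nearest-rank method used below is an assumption that happens to reproduce the displayed values.

    import math

    # The three CSER donation amounts (current USD) in this dataset; the
    # EA Giving Group amount is unknown and recorded as 0.
    amounts = sorted([0, 40_000, 200_000])

    def nearest_rank_percentile(sorted_values, p):
        """p-th percentile via the nearest-rank method (an assumption, not
        necessarily the portal's convention): the value at rank ceil(p/100 * N)."""
        rank = max(1, math.ceil(p / 100 * len(sorted_values)))
        return sorted_values[rank - 1]

    count = len(amounts)
    mean = sum(amounts) / count                    # 80,000
    median = nearest_rank_percentile(amounts, 50)  # 40,000
    percentiles = {p: nearest_rank_percentile(amounts, p) for p in range(10, 100, 10)}

    print(count, median, mean, min(amounts), max(amounts))
    # 3 40000 80000.0 0 200000
    print(percentiles)
    # {10: 0, 20: 0, 30: 0, 40: 40000, 50: 40000, 60: 40000, 70: 200000, 80: 200000, 90: 200000}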

Donation amounts by donor and year for donee Centre for the Study of Existential Risk

Donor | Total | 2019 | 2018
Berkeley Existential Risk Initiative | 200,000.00 | 0.00 | 200,000.00
Survival and Flourishing Fund | 40,000.00 | 40,000.00 | 0.00
EA Giving Group | 0.00 | 0.00 | 0.00
Total | 240,000.00 | 40,000.00 | 200,000.00
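
The table above is a cross-tabulation of the three donations listed at the bottom of this page; the EA Giving Group donation (2014, amount unknown) contributes 0.00 to every cell shown. A minimal sketch of that aggregation (not the portal's actual code):

    from collections import defaultdict

    # The three underlying donation records from the "Full list of donations"
    # section below; the EA Giving Group amount is unknown and treated as 0.
    donations = [
        {"donor": "Survival and Flourishing Fund", "year": 2019, "amount": 40_000.00},
        {"donor": "Berkeley Existential Risk Initiative", "year": 2018, "amount": 200_000.00},
        {"donor": "EA Giving Group", "year": 2014, "amount": 0.00},
    ]

    by_donor_year = defaultdict(float)  # (donor, year) -> total amount
    by_donor = defaultdict(float)       # donor -> total amount
    for d in donations:
        by_donor_year[(d["donor"], d["year"])] += d["amount"]
        by_donor[d["donor"]] += d["amount"]

    print(by_donor["Berkeley Existential Risk Initiative"])        # 200000.0
    print(by_donor_year[("Survival and Flourishing Fund", 2019)])  # 40000.0
    print(sum(by_donor.values()))                                  # grand total: 240000.0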

Full list of documents in reverse chronological order (7 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
2017 AI Safety Literature Review and Charity Comparison (GW, IR) | 2017-12-20 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, AI Impacts, Center for Human-Compatible AI, Center for Applied Rationality, Future of Life Institute, 80,000 Hours | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
2016 AI Risk Literature Review and Charity Comparison (GW, IR) | 2016-12-13 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, OpenAI, Center for Human-Compatible AI, Future of Life Institute, Centre for the Study of Existential Risk, Leverhulme Centre for the Future of Intelligence, Global Catastrophic Risk Institute, Global Priorities Project, AI Impacts, Xrisks Institute, X-Risks Net, Center for Applied Rationality, 80,000 Hours, Raising for Effective Giving | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
CEA Staff Donation Decisions 2016 | 2016-12-06 | Sam Deere | Centre for Effective Altruism | William MacAskill, Michelle Hutchinson, Tara MacAulay, Alison Woodman, Seb Farquhar, Hauke Hillebrandt, Marinella Capriati, Sam Deere, Max Dalton, Larissa Hesketh-Rowe, Michael Page, Stefan Schubert, Pablo Stafforini, Amy Labenz | Centre for Effective Altruism, 80,000 Hours, Against Malaria Foundation, Schistosomiasis Control Initiative, Animal Charity Evaluators, Charity Science Health, New Incentives, Project Healthy Children, Deworm the World Initiative, Machine Intelligence Research Institute, StrongMinds, Future of Humanity Institute, Future of Life Institute, Centre for the Study of Existential Risk, Effective Altruism Foundation, Sci-Hub, Vote.org, The Humane League, Foundational Research Institute | Periodic donation list documentation | -- | Centre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.
Where should you donate to have the most impact during giving season 2015? | 2015-12-24 | Robert Wiblin | 80,000 Hours | -- | Against Malaria Foundation, Giving What We Can, GiveWell, AidGrade, Effective Altruism Outreach, Animal Charity Evaluators, Machine Intelligence Research Institute, Raising for Effective Giving, Center for Applied Rationality, Johns Hopkins Center for Health Security, Ploughshares Fund, Future of Humanity Institute, Future of Life Institute, Centre for the Study of Existential Risk, Charity Science, Deworm the World Initiative, Schistosomiasis Control Initiative, GiveDirectly | Evaluator consolidated recommendation list | Global health and development, Effective altruism/movement growth, Rationality improvement, Biosecurity and pandemic preparedness, AI risk, Global catastrophic risks | Robert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate.
My Cause Selection: Michael Dickens | 2015-09-15 | Michael Dickens | Effective Altruism Forum | Michael Dickens | Machine Intelligence Research Institute, Future of Humanity Institute, Centre for the Study of Existential Risk, Future of Life Institute, Open Philanthropy, Animal Charity Evaluators, Animal Ethics, Foundational Research Institute, Giving What We Can, Charity Science, Raising for Effective Giving | Single donation documentation | Animal welfare, AI risk, Effective altruism | Explanation by Dickens of giving choice for 2015. After some consideration, narrows choice to three orgs: MIRI, ACE, and REG. Finally chooses REG due to weighted donation multiplier.

Full list of donations in reverse chronological order (3 donations)

[Graph: top 10 donors by amount, showing the timeframe of donations]

[Graph: donations and their timeframes]
Donor | Amount (current USD) | Amount rank (out of 3) | Donation date | Cause area | URL | Influencer | Notes
Survival and Flourishing Fund | 40,000.00 | 2 | 2019-08 | Global catastrophic risks | http://survivalandflourishing.org/ | Alex Flint, Andrew Critch, Eric Rogstad | Donation process: Part of the founding batch of grants for the Survival and Flourishing Fund made in August 2019. The fund is partly a successor to part of the grants program of the Berkeley Existential Risk Initiative (BERI) that handled grantmaking by Jaan Tallinn; see http://existence.org/tallinn-grants-future/. As such, this grant to CSER may represent a followup to past grants by BERI in support of CSER (though BERI did not grant directly to CSER).

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: This grant may represent a followup to past grants by BERI in support of CSER (though BERI did not grant directly to CSER)

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; the Survival and Flourishing Fund is making its first round of grants in August 2019

Other notes: Percentage of total donor spend in the corresponding batch of donations: 100.00%; announced: 2019-08-29.
Berkeley Existential Risk Initiative | 200,000.00 | 1 | 2018-04-06 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support; grant via Cambridge in America.
EA Giving Group | -- | -- | 2014 | Rationality improvement | https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit | Nick Beckstead | Actual date range: December 2013 to December 2014. Exact date, amount, or fraction not known, but it is the donee with the fourth highest amount donated out of five donees in this period.