Berkeley Existential Risk Initiative donations made

This is an online portal with information on donations that were announced publicly (or shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

Item | Value
Country | United States
Facebook username | Berkeley-Existential-Risk-Initiative-1875638366085846
Website | http://existence.org/
Data entry method on Donations List Website | Manual (no scripts used)
Org Watch page | https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative

This entity is also a donee.

Donor donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 47 | 50,000 | 113,659 | 1,900 | 10,000 | 20,500 | 25,000 | 32,500 | 50,000 | 86,580 | 100,000 | 150,000 | 300,000 | 800,000
AI safety | 33 | 50,000 | 129,031 | 1,900 | 10,000 | 20,000 | 25,000 | 50,000 | 50,000 | 100,000 | 100,000 | 200,000 | 300,000 | 800,000
Existential risks | 4 | 25,000 | 38,500 | 4,000 | 4,000 | 4,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 100,000 | 100,000 | 100,000
Rationality community | 1 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000
Existential risk | 6 | 25,000 | 42,656 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 44,353 | 50,000 | 50,000 | 86,580 | 86,580
Rationality improvement | 1 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000
(unclassified) | 1 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000
Career planning | 1 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000
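
As a rough illustration of how the summary statistics in the table above can be derived from raw donation amounts, here is a minimal Python sketch. The function name `donation_stats` is hypothetical, and the use of numpy's default percentile interpolation is an assumption; the portal's own scripts may compute percentiles slightly differently.

```python
# Minimal sketch (not the portal's actual code): summary statistics for a
# list of donation amounts in USD. Percentile values depend on the
# interpolation method; numpy's default ("linear") is only one choice.
import numpy as np

def donation_stats(amounts):
    a = np.sort(np.asarray(amounts, dtype=float))
    pct_values = np.percentile(a, [10, 20, 30, 40, 50, 60, 70, 80, 90])
    return {
        "count": a.size,
        "median": float(np.median(a)),
        "mean": float(a.mean()),
        "minimum": float(a.min()),
        **{f"{p}th percentile": float(v)
           for p, v in zip(range(10, 100, 10), pct_values)},
        "maximum": float(a.max()),
    }

# Example with hypothetical amounts:
print(donation_stats([1900, 10000, 25000, 50000, 100000, 800000]))
```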

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the donor's own classification in all cases.

Cause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017
AI safety (filter this donor) | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00
Career planning (filter this donor) | 1 | 1 | 350,000.00 | 0.00 | 350,000.00 | 0.00 | 0.00
Existential risk (filter this donor) | 6 | 5 | 255,933.00 | 0.00 | 211,580.00 | 44,353.00 | 0.00
(unclassified) (filter this donor) | 1 | 1 | 200,000.00 | 0.00 | 200,000.00 | 0.00 | 0.00
Existential risks (filter this donor) | 4 | 4 | 154,000.00 | 0.00 | 0.00 | 129,000.00 | 25,000.00
Rationality improvement (filter this donor) | 1 | 1 | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00
Rationality community (filter this donor) | 1 | 1 | 24,000.00 | 0.00 | 0.00 | 24,000.00 | 0.00
Total | 47 | 34 | 5,341,965.00 | 300,000.00 | 1,411,580.00 | 3,135,385.00 | 495,000.00
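
For readers who want to reproduce a cause-area-by-year breakdown like the one above from the underlying data on GitHub, here is a hedged sketch using pandas. The column names (`cause_area`, `year`, `amount`) and sample rows are hypothetical; the actual repository uses its own schema.

```python
# Hedged sketch (hypothetical column names): building a "cause area x year"
# table with row and column totals from a flat list of donations.
import pandas as pd

donations = pd.DataFrame([
    {"cause_area": "AI safety", "year": 2020, "amount": 300000.00},
    {"cause_area": "AI safety", "year": 2019, "amount": 650000.00},
    {"cause_area": "Career planning", "year": 2019, "amount": 350000.00},
])

table = donations.pivot_table(
    index="cause_area", columns="year", values="amount",
    aggfunc="sum", fill_value=0.0,
    margins=True, margins_name="Total",  # adds the "Total" row and column
)
print(table)
```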

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017
AI safety | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00
Career planning | 1 | 1 | 350,000.00 | 0.00 | 350,000.00 | 0.00 | 0.00
Existential risk | 6 | 5 | 255,933.00 | 0.00 | 211,580.00 | 44,353.00 | 0.00
Existential risks | 4 | 4 | 154,000.00 | 0.00 | 0.00 | 129,000.00 | 25,000.00
Rationality improvement | 1 | 1 | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00
Rationality community | 1 | 1 | 24,000.00 | 0.00 | 0.00 | 24,000.00 | 0.00
Classified total | 46 | 33 | 5,141,965.00 | 300,000.00 | 1,211,580.00 | 3,135,385.00 | 495,000.00
Unclassified total | 1 | 1 | 200,000.00 | 0.00 | 200,000.00 | 0.00 | 0.00
Total | 47 | 34 | 5,341,965.00 | 300,000.00 | 1,411,580.00 | 3,135,385.00 | 495,000.00

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee | Cause area | Metadata | Total | 2020 | 2019 | 2018 | 2017
Center for Applied Rationality (filter this donor) | Rationality | FB Tw WP Site TW | 1,200,000.00 | 0.00 | 0.00 | 1,100,000.00 | 100,000.00
Machine Intelligence Research Institute (filter this donor) | AI safety | FB Tw WP Site CN GS TW | 1,100,000.00 | 300,000.00 | 600,000.00 | 0.00 | 200,000.00
Future of Life Institute (filter this donor) | AI safety/other global catastrophic risks | FB Tw WP Site | 500,000.00 | 0.00 | 50,000.00 | 300,000.00 | 150,000.00
UC Berkeley Foundation (filter this donor) | -- | -- | 400,000.00 | 0.00 | 0.00 | 400,000.00 | 0.00
80,000 Hours (filter this donor) | Career coaching/life guidance | FB Tw WP Site | 350,000.00 | 0.00 | 350,000.00 | 0.00 | 0.00
LessWrong 2.0 (filter this donor) | Rationality improvement | FB WP Site | 250,000.00 | 0.00 | 0.00 | 250,000.00 | 0.00
Centre for the Study of Existential Risk (filter this donor) | -- | -- | 200,000.00 | 0.00 | 0.00 | 200,000.00 | 0.00
Survival and Flourishing Fund (filter this donor) | -- | -- | 200,000.00 | 0.00 | 200,000.00 | 0.00 | 0.00
Ben Goldhaber (filter this donor) | -- | -- | 150,000.00 | 0.00 | 0.00 | 150,000.00 | 0.00
Stephanie Zolayvar (filter this donor) | -- | -- | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00
Justin Shovelain (filter this donor) | -- | -- | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00
Stefan Schubert (filter this donor) | -- | -- | 86,580.00 | 0.00 | 86,580.00 | 0.00 | 0.00
Leverage Research (filter this donor) | -- | -- | 75,000.00 | 0.00 | 75,000.00 | 0.00 | 0.00
Institute for Philosophical Research (filter this donor) | -- | -- | 75,000.00 | 0.00 | 0.00 | 50,000.00 | 25,000.00
Baeo Maltinsky (filter this donor) | -- | -- | 55,000.00 | 0.00 | 0.00 | 55,000.00 | 0.00
Colleen McKenzie (filter this donor) | -- | -- | 51,000.00 | 0.00 | 0.00 | 51,000.00 | 0.00
Theiss Research (filter this donor) | -- | -- | 50,000.00 | 0.00 | 0.00 | 50,000.00 | 0.00
Roxanne Heston (filter this donor) | -- | -- | 49,532.00 | 0.00 | 0.00 | 49,532.00 | 0.00
Association for the Advancement of Artificial Intelligence (filter this donor) | -- | -- | 45,000.00 | 0.00 | 0.00 | 25,000.00 | 20,000.00
Lucius Caviola (filter this donor) | -- | -- | 44,353.00 | 0.00 | 0.00 | 44,353.00 | 0.00
Miles Brundage (filter this donor) | -- | -- | 32,500.00 | 0.00 | 0.00 | 32,500.00 | 0.00
Zoe Cremer (filter this donor) | -- | -- | 25,200.00 | 0.00 | 0.00 | 25,200.00 | 0.00
Alliance to Feed the Earth in Disasters (filter this donor) | -- | -- | 25,000.00 | 0.00 | 25,000.00 | 0.00 | 0.00
Global Catastrophic Risk Institute (filter this donor) | Global catastrophic risks | FB Tw Site | 25,000.00 | 0.00 | 0.00 | 25,000.00 | 0.00
David Manheim (filter this donor) | -- | -- | 25,000.00 | 0.00 | 0.00 | 25,000.00 | 0.00
Americans for Oxford (filter this donor) | -- | -- | 25,000.00 | 0.00 | 25,000.00 | 0.00 | 0.00
Berkeley REACH (filter this donor) | Rationality community | FB Site | 24,000.00 | 0.00 | 0.00 | 24,000.00 | 0.00
Bryce Hidysmith (filter this donor) | -- | -- | 20,500.00 | 0.00 | 0.00 | 20,500.00 | 0.00
Luca Rade (filter this donor) | -- | -- | 20,400.00 | 0.00 | 0.00 | 20,400.00 | 0.00
Sebastian Farquhar (filter this donor) | -- | -- | 12,000.00 | 0.00 | 0.00 | 12,000.00 | 0.00
Cambridge in America (filter this donor) | -- | -- | 10,000.00 | 0.00 | 0.00 | 10,000.00 | 0.00
Jessica Taylor (filter this donor) | -- | -- | 10,000.00 | 0.00 | 0.00 | 10,000.00 | 0.00
Effective Altruism Sweden (filter this donor) | -- | -- | 4,000.00 | 0.00 | 0.00 | 4,000.00 | 0.00
Jordan Alexander (filter this donor) | -- | -- | 1,900.00 | 0.00 | 0.00 | 1,900.00 | 0.00
Total | -- | -- | 5,341,965.00 | 300,000.00 | 1,411,580.00 | 3,135,385.00 | 495,000.00

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

Sorry, we couldn't find any influencer information.

Donation amounts by disclosures and year

Sorry, we couldn't find any disclosures information.

Donation amounts by country and year

Sorry, we couldn't find any country information.

Full list of documents in reverse chronological order (10 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
ALLFED 2019 Annual Report and Fundraising Appeal (GW, IR) | 2019-11-23 | Aron Mill | Alliance to Feed the Earth in Disasters | Berkeley Existential Risk Initiative; Donor lottery; Effective Altruism Grants; Open Philanthropy | Alliance to Feed the Earth in Disasters; Future of Humanity Institute | Donee donation case | Alternative foods | Aron Mill summarizes the work of the Alliance to Feed the Earth in Disasters (ALLFED) in 2019. He lists key supporters as well as partners that ALLFED worked with during the year. The blog post then makes a fundraising appeal and a case for donating to ALLFED. Sections of the blog post include: (1) research output, (2) preparedness and alliance-building, (3) ALLFED team, (4) current projects, and (5) projects in need of funding.
The Future of Grant-making Funded by Jaan Tallinn at BERI | 2019-08-25 | Board of Directors | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative; Jaan Tallinn | -- | Broad donor strategy | -- | In the blog post, BERI announces that it will no longer handle grantmaking for Jaan Tallinn. The grantmaking is being handed to "one or more other teams and/or processes that are separate from BERI." Andrew Critch will be working on the handoff. BERI will complete administration of grants already committed to.
80,000 Hours Annual Review – December 2018 | 2019-05-07 | Benjamin Todd | 80,000 Hours | Open Philanthropy; Berkeley Existential Risk Initiative; Effective Altruism Funds | 80,000 Hours | Donee periodic update | Effective altruism/movement growth/career counseling | This blog post is the annual self-review by 80,000 Hours, originally written in December 2018. Publication was deferred because 80,000 Hours was waiting to hear back on the status of some large grants (in particular, one from the Open Philanthropy Project), but most of the content is still from the December 2018 draft. The post goes into detail about 80,000 Hours' progress in 2018, impact and plan changes, and future expansion plans. Funding gaps are discussed (the funding gap for 2019 is $400,000, and further money will be saved for 2020 and 2021). Grants from the Open Philanthropy Project, BERI, and the Effective Altruism Funds (EA Meta Fund) are mentioned.
New grants from the Open Philanthropy Project and BERI | 2019-04-01 | Rob Bensinger | Machine Intelligence Research Institute | Open Philanthropy; Berkeley Existential Risk Initiative | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces two grants to it, including a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/. The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support, and the post notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team, as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/.
Seeking Testimonials - IPR, Leverage, and Paradigm | 2018-11-15 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Leverage Research; Institute for Philosophical Research; Paradigm Academy | Request for reviews of donee | Rationality improvement | In the blog post, Andrew Critch of BERI discusses plans to make grants to Leverage Research and the Institute for Philosophical Research (IPR). Critch says that IPR, Leverage, and Paradigm Academy are three related organizations that BERI internally refers to as ILP. In light of community skepticism about ILP, Critch announces that BERI is inviting feedback on these organizations through a feedback form until December 20. He also explains what sort of feedback will be taken more seriously by BERI. The post was also announced on the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/fvvRZMJJ7g4gzXjSH/seeking-information-on-three-potential-grantee-organizations (GW, IR) on 2018-12-09.
A model I use when making plans to reduce AI x-risk (GW, IR) | 2018-01-18 | Ben Pace | LessWrong | Berkeley Existential Risk Initiative | -- | Review of current state of cause area | AI safety | The author describes his implicit model of AI risk, with four parts: (1) alignment is hard; (2) getting alignment right accounts for most of the variance in whether an AGI system will be positive for humanity; (3) our current epistemic state regarding AGI timelines will continue until we are close (<2 years) to having AGI; and (4) given timeline uncertainty, it is best to spend marginal effort on plans that assume/work in shorter timelines. There is a lot of discussion in the comments.
Announcing BERI Computing Grants | 2017-12-01 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | --
Forming an engineering team | 2017-10-25 | Andrew Critch | Berkeley Existential Risk Initiative | -- | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | --
What we’re thinking about as we grow - ethics, oversight, and getting things done | 2017-10-19 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | Outlines BERI's approach to growth and "ethics" (transparency, oversight, trust, etc.).
BERI's semi-annual report, August | 2017-09-12 | Rebecca Raible | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Donee periodic update | AI safety/other global catastrophic risks | A blog post announcing BERI's semi-annual report.

Full list of donations in reverse chronological order (47 donations)

Graph of top 10 donees by amount, showing the timeframe of donations

Donee | Amount (current USD) | Amount rank (out of 47) | Donation date | Cause area | URL | Influencer | Notes
Machine Intelligence Research Institute | 300,000.00 | 5 | 2020-03-02 | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support

Other notes: The grant is mentioned by MIRI in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ along with a large grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 from the Open Philanthropy Project. The post says: "at the time of our 2019 fundraiser, we expected to receive a grant from BERI in early 2020, and incorporated this into our reserves estimates. However, we predicted the grant size would be $600k; now that we know the final grant amount, that estimate should be $300k lower."
Survival and Flourishing Fund | 200,000.00 | 8 | 2019-09-07 | -- | http://existence.org/tallinn-grants-future/ | -- | Handoff of funds donated by Jaan Tallinn, so that grantmaking for these funds can be handled by the Survival and Flourishing Fund. This is because BERI no longer wants to manage grantmaking for these funds.
80,000 Hours | 350,000.00 | 4 | 2019-04-22 | Career planning | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | The grant is mentioned by the recipient, 80,000 Hours, in a blog post https://80000hours.org/2019/05/annual-review-dec-2018/ along with grants from the Open Philanthropy Project and the Effective Altruism Funds Meta Fund.
Future of Life Institute | 50,000.00 | 22 | 2019-04-22 | AI safety | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | --
Leverage Research | 50,000.00 | 22 | 2019-04-09 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | --
Americans for Oxford (Earmark: Yarin Gal) | 25,000.00 | 31 | 2019-04-08 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | --
Alliance to Feed the Earth in Disasters | 25,000.00 | 31 | 2019-03-12 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | --
Machine Intelligence Research Institute | 600,000.00 | 2 | 2019-02-26 | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support

Donor retrospective of the donation: BERI would make a further grant to MIRI, indicating continued confidence in the grantee. The followup grant would be made in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund.

Other notes: This grant is also discussed by the Machine Intelligence Research Institute (the grant recipient) at https://intelligence.org/2017/11/08/major-grant-open-phil/ along with a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 from the Open Philanthropy Project. Announced: 2019-05-09.
Stefan Schubert | 86,580.00 | 19 | 2019-01-24 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | Research on the psychology of existential risk.
Leverage Research | 25,000.00 | 31 | 2019-01-22 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | The grant follows a solicitation of feedback on Leverage and related organizations: http://existence.org/testimonials-ipr-leverage-paradigm/.
Lucius Caviola | 44,353.00 | 28 | 2018-12-27 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | Research on the psychology of existential risk.
Sebastian Farquhar | 12,000.00 | 42 | 2018-10-31 | AI safety | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | “to attend conferences and purchase compute for experiments related to his PhD research on uncertainty modeling in neural networks”.
Justin Shovelain | 100,000.00 | 12 | 2018-10-24 | Existential risks | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | “to lead and develop Convergence Analysis, a new group focused on x-risk strategy research”.
Berkeley REACH (Earmark: Sarah Spikes) | 24,000.00 | 37 | 2018-10-23 | Rationality community | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | Grant “to implement improvements in support of the Rationality and Effective Altruism Community Hub (REACH)”. BERI support is mentioned on the REACH Patreon page: "In October 2018, BERI (Berkeley Existential Risk Initiative) awarded a grant of $24k toward REACH operations for 2018-2019. So far, $10k of that has been dispersed as salary for the REACH Manager, Sarah “Stardust” Spikes."
Effective Altruism Sweden | 4,000.00 | 46 | 2018-10-18 | Existential risks | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | “to support Markus Stoor’s project to coordinate two follow-up lunch-to-lunch meetings in Sweden for x-risk-focused individuals”.
Association for the Advancement of Artificial Intelligence | 20,000.00 | 40 | 2018-10-15 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “for the support of the 2019 conference on AI, Ethics, and Society”.
Jessica Taylor | 10,000.00 | 43 | 2018-10-10 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to work on her research in AI alignment and other areas”.
Ben Goldhaber | 150,000.00 | 10 | 2018-10-10 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support his project (co-lead by Jacob Lagerros) to bring x-risk-relevant questions to popular prediction platforms”.
Stephanie Zolayvar | 100,000.00 | 12 | 2018-10-09 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to train herself in circling and host circles for people who are promising contributors to reducing x-risk”.
Roxanne Heston | 49,532.00 | 27 | 2018-10-02 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to work on a variety of AI policy projects in Washington, D.C.”.
Baeo Maltinsky | 55,000.00 | 20 | 2018-10-02 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to further his research on AI and technology trends”.
Zoe Cremer | 25,200.00 | 30 | 2018-10-02 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support her as a visiting fellow at CFI, where she will research disagreements about the amount and kind of structure required for AGI”.
Colleen McKenzie | 51,000.00 | 21 | 2018-10-02 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support her research on AI timelines and the processes that produce technical and scientific progress”.
Cambridge in America | 10,000.00 | 43 | 2018-10-02 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “for the support of the Leverhulme Centre for the Future of Intelligence”.
David Manheim | 25,000.00 | 31 | 2018-09-26 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to research aspects of Goodhart’s law, focusing on multi-agent dynamics”. Probably related to https://arxiv.org/pdf/1803.04585.pdf.
Luca Rade | 20,400.00 | 39 | 2018-09-24 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to research the implications of coarse-graining by an agent in a complex environment for AI alignment”.
Jordan Alexander | 1,900.00 | 47 | 2018-09-24 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to host several meetings at Stanford EA and the Stanford Transhumanist Association”.
Bryce Hidysmith | 20,500.00 | 38 | 2018-09-24 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to analyze global risks from technology through a geopolitical lens”.
Institute for Philosophical Research | 50,000.00 | 22 | 2018-09-07 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html.
LessWrong 2.0 | 150,000.00 | 10 | 2018-09-04 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Given via the Center for Applied Rationality. To support the development of LessWrong 2.0. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html.
Center for Applied Rationality | 300,000.00 | 5 | 2018-07-24 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html and at http://www.rationality.org/resources/updates/2018/august-newsletter (CFAR lists $600,000 over two years as the amount).
UC Berkeley Foundation | 400,000.00 | 3 | 2018-07-24 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Intended to support Andrew Critch’s work at the Center for Human-Compatible AI. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html.
Theiss Research | 50,000.00 | 22 | 2018-07-12 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Intended to support Carrick Flynn’s research with Allan Dafoe at the Future of Humanity Institute. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html.
Future of Life Institute | 300,000.00 | 5 | 2018-04-10 | AI safety | https://web.archive.org/web/20180905034853/http://existence.org/organization-grants/ https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support.
Centre for the Study of Existential Risk | 200,000.00 | 8 | 2018-04-06 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support; grant via Cambridge in America.
Association for the Advancement of Artificial Intelligence | 5,000.00 | 45 | 2018-03-30 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Grant in support of the 2018 Spring Symposium on AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents. See announcement at http://existence.org/2018/04/07/activity-update-february-and-march-2018.html.
LessWrong 2.0 | 100,000.00 | 12 | 2018-02-08 | Rationality improvement | https://web.archive.org/web/20180731180958/http://existence.org:80/grants | -- | Grant via the Center for Applied Rationality for the support of activities to develop and improve the website LessWrong 2.0. See announcement at http://existence.org/2018/04/07/activity-update-february-and-march-2018.html.
Center for Applied Rationality | 800,000.00 | 1 | 2018-01-26 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support for the purpose of a permanent venue for CFAR workshops. See announcement at http://existence.org/2018/02/08/activity-update-january-2018.html.
Global Catastrophic Risk Institute | 25,000.00 | 31 | 2018-01-24 | Existential risks | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support; grant via Social and Environmental Entrepreneurs.
Miles Brundage | 32,500.00 | 29 | 2018-01-02 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants | -- | --
Machine Intelligence Research Institute | 100,000.00 | 12 | 2017-12-28 | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support

Donor retrospective of the donation: BERI would make two further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Announced: 2018-01-11.
Center for Applied Rationality | 100,000.00 | 12 | 2017-12-27 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | See announcement at http://existence.org/2018/01/11/activity-update-december-2017.html.
Association for the Advancement of Artificial Intelligence | 20,000.00 | 40 | 2017-12-18 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | To support the conference on Artificial Intelligence, Ethics, and Society. See announcement at http://existence.org/2017/12/01/activity-update-november-2017.html.
Institute for Philosophical Research | 25,000.00 | 31 | 2017-12-15 | Existential risks | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | --
Future of Life Institute | 100,000.00 | 12 | 2017-10-27 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | --
Future of Life Institute | 50,000.00 | 22 | 2017-10-27 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support. See announcement at http://existence.org/2017/11/03/activity-update-october-2017.html.
Machine Intelligence Research Institute | 100,000.00 | 12 | 2017-09-13 | AI safety | http://existence.org/grants | -- | Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: The announcement page says: "Broadly, we believe these groups [Machine Intelligence Research Institute and Future of Life Institute] to have done good work in the past for reducing existential risk and wish to support their continued efforts."

Donor reason for donating at this time (rather than earlier or later): This is one of two opening grants made by BERI to begin its grants program.

Donor thoughts on making further donations to the donee: The grant page says: "Over the next few months, we may write more about our reasoning behind these and other grants." It further outlines the kinds of organizations that BERI will be granting to in the short run.

Donor retrospective of the donation: BERI would make three further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Announced: 2017-09-25.

Similarity to other donors

Sorry, we couldn't find any similar donors.