Berkeley Existential Risk Initiative donations received

This is an online portal with information on donations that were announced publicly (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donee information

Item | Value
Country | United States
Website | http://existence.org/
Donate page | http://existence.org/donating/
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Berkeley_Existential_Risk_Initiative
Org Watch page | https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative
Key people | Andrew Critch, Gina Stuessy, Michael Keenan
Launch date | 2017-02
Notes | Launched to provide fast-moving support to existing existential risk organizations. Works closely with Machine Intelligence Research Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, and Future of Humanity Institute. People working at it are closely involved with MIRI and the Center for Applied Rationality.

This entity is also a donor.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 34 | 150,000 | 467,853 | 7,497 | 17,000 | 35,000 | 70,000 | 100,000 | 150,000 | 210,000 | 250,000 | 478,000 | 1,126,160 | 5,000,000
AI safety | 26 | 155,000 | 563,884 | 7,497 | 30,000 | 70,000 | 100,000 | 140,050 | 155,000 | 210,000 | 250,000 | 705,000 | 2,000,000 | 5,000,000
Global catastrophic risks | 7 | 37,000 | 163,714 | 14,000 | 14,000 | 17,000 | 20,000 | 20,000 | 37,000 | 247,000 | 247,000 | 333,000 | 478,000 | 478,000
-- | 1 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000
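
The statistics above can be reproduced from the flat list of donation amounts further down this page. The decile columns appear to follow a nearest-rank percentile convention, and the "Median" column matches the 50th percentile under that convention; this is inferred from the table rather than documented, so treat the exact method as an assumption. A minimal sketch:

```python
# A minimal sketch (not the site's actual code) of computing one summary row from a
# flat list of donation amounts in current USD. The nearest-rank percentile convention
# is inferred from the table above, not documented by the site.
import math

def donation_stats(amounts):
    """Count, median, mean, and nearest-rank decile cut-offs for one cause area."""
    a = sorted(amounts)
    n = len(a)

    def pick(p):
        # nearest-rank percentile: smallest value whose rank covers p% of the data
        return a[max(0, math.ceil(p / 100 * n) - 1)]

    stats = {"count": n, "median": pick(50), "mean": round(sum(a) / n), "minimum": a[0]}
    stats.update({f"{p}th percentile": pick(p) for p in range(10, 100, 10)})
    stats["maximum"] = a[-1]
    return stats

# Example: the seven donations classified under "Global catastrophic risks"
gcr_amounts = [14_000, 17_000, 20_000, 37_000, 247_000, 333_000, 478_000]
print(donation_stats(gcr_amounts))
# -> count 7, median 37,000, mean 163,714, minimum 14,000, ..., maximum 478,000
```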

Donation amounts by donor and year for donee Berkeley Existential Risk Initiative

Donor | Total | 2023 | 2022 | 2021 | 2020 | 2019 | 2017
Jaan Tallinn | 8,363,000.00 | 0.00 | 0.00 | 1,343,000.00 | 20,000.00 | 0.00 | 7,000,000.00
Open Philanthropy | 6,750,495.00 | 175,000.00 | 4,661,605.00 | 405,000.00 | 150,000.00 | 955,000.00 | 403,890.00
Jed McCaleb | 281,000.00 | 0.00 | 0.00 | 281,000.00 | 0.00 | 0.00 | 0.00
FTX Future Fund | 255,000.00 | 0.00 | 255,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Anonymous | 100,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100,000.00
Casey and Family Foundation | 100,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100,000.00
EA Giving Group | 35,161.98 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 35,161.98
Effective Altruism Funds: Long-Term Future Fund | 14,838.02 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 14,838.02
Patrick Brinich-Langlois | 7,497.00 | 0.00 | 0.00 | 0.00 | 0.00 | 7,497.00 | 0.00
Total | 15,906,992.00 | 175,000.00 | 4,916,605.00 | 2,029,000.00 | 170,000.00 | 962,497.00 | 7,653,890.00
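
The cross-tabulation above can be reproduced from the underlying donations list with a standard pivot-table operation. The sketch below illustrates the idea; the field names (donor, year, amount) and the handful of rows shown are illustrative and are not necessarily the schema used by the site's repository.

```python
# Illustrative sketch: building a donor-by-year totals table from a flat donations list.
import pandas as pd

donations = pd.DataFrame([
    # donor, donation year, amount in current USD (only a few rows shown for illustration)
    {"donor": "Jaan Tallinn", "year": 2017, "amount": 5_000_000.00},
    {"donor": "Jaan Tallinn", "year": 2017, "amount": 2_000_000.00},
    {"donor": "Open Philanthropy", "year": 2023, "amount": 70_000.00},
    {"donor": "Open Philanthropy", "year": 2023, "amount": 70_000.00},
    {"donor": "Open Philanthropy", "year": 2023, "amount": 35_000.00},
])

table = donations.pivot_table(
    index="donor", columns="year", values="amount",
    aggfunc="sum", fill_value=0.0, margins=True, margins_name="Total",
)
print(table)
```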

Full list of documents in reverse chronological order (16 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2021 AI Alignment Literature Review and Charity Comparison (GW, IR)2021-12-23Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund Future of Humanity Institute Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, which is where the organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) (GW, IR)2021-12-14Zvi Mowshowitz LessWrongSurvival and Flourishing Fund Jaan Tallinn Jed McCaleb The Casey and Family Foundation Effective Altruism Funds:Long-Term Future Fund Center on Long-Term Risk Alliance to Feed the Earth in Disasters The Centre for Long-Term Resilience Lightcone Infrastructure Effective Altruism Funds: Infrastructure Fund Centre for the Governance of AI Ought New Science Research Berkeley Existential Risk Initiative AI Objectives Institute Topos Institute Emergent Ventures India European Biostasis Foundation Laboratory for Social Minds PrivateARPA Charter Cities Institute Survival and Flourishing Fund Beth Barnes Oliver Habryka Zvi Mowshowitz Miscellaneous commentaryLongtermism|AI safety|Global catastrophic risksIn this lengthy post, Zvi Mowshowitz, who was one of the recommenders for the Survival and Flourishing Fund's 2021 H2 grant round based on the S-process, describes his experience with the process, his impressions of several of the grantees, and implications for what kinds of grant applications are most likely to succeed. Zvi says that the grant round suffered from the problem of Too Much Money (TMM); there was way more money than any individual recommender felt comfortable granting, and just about enough money for the combined preferences of all recommenders, which meant that any recommender could unilaterally push a particular grantee through. The post has several other observations and attracts several comments.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similar to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
The Future of Grant-making Funded by Jaan Tallinn at BERI2019-08-25Board of Directors Berkeley Existential Risk InitiativeBerkeley Existential Risk Initiative Jaan Tallinn Broad donor strategyIn the blog post, BERI announces that it is no longer going to be handling grantmaking for Jaan Tallinn. The grantmaking is being handed to "one or more other teams and/or processes that are separate from BERI." Andrew Critch will be working on the handoff. BERI will complete administration of grants already committed to.
Committee for Effective Altruism Support2019-02-27Open PhilanthropyOpen Philanthropy Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute Broad donor strategyEffective altruism|AI safetyThe document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community" including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts; a minimal code sketch of this averaging rule appears after this list. Examples of grants whose size was determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR)2019-01-06Avi Norowitz Effective Altruism ForumAvi Norowitz William Kiely Against Malaria Foundation Malaria Consortium GiveWell Effective Altruism Funds Alliance to Feed the Earth in Disasters Effective Animal Advocacy Fund The Humane League The Good Food Institute Animal Charity Evaluators Machine Intelligence Research Institute Faunalytics Wild-Aniaml Suffering Research GiveDirectly Center for Applied Rationality Effective Altruism Foundation Cool Earth Schistosomiasis Control Initiative New Harvest Evidence Action Centre for Effective Altruism Animal Equality Compassion in World Farming USA Innovations for Poverty Action Global Catastrophic Risk Institute Future of Life Institute Animal Charity Evaluators Recommended Charity Fund Sightsavers The Life You Can Save One Step for Animals Helen Keller International 80,000 Hours Berkeley Existential Risk Initiative Vegan Outreach Encompass Iodine Global Network Otwarte Klatki Charity Science Mercy For Animals Coalition for Rainforest Nations Fistula Foundation Sentience Institute Better Eating International Forethought Foundation for Global Priorities Research Raising for Effective Giving Clean Air Task Force The END Fund Miscellaneous commentaryThe blog post describes an effort by a number of donors coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched till 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched
2018 AI Alignment Literature Review and Charity Comparison (GW, IR)2018-12-17Larks Effective Altruism ForumLarks Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR) The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues.
Seeking Testimonials - IPR, Leverage, and Paradigm2018-11-15Andrew Critch Berkeley Existential Risk InitiativeBerkeley Existential Risk Initiative Leverage Research Institute for Philosophical Research Paradigm Academy Request for reviews of doneeEpistemic institutionsIn the blog post, Andrew Critch of BERI talks about plans to make grants to Leverage Research and the Institute for Philosophical Research (IPR). Critch says that IPR, Leverage, and Paradigm Academy are three related organizations that BERI internally refers to as ILP. In light of community skepticism about ILP, Critch announces that BERI is inviting feedback through a feedback form on these organizations till December 20. He also explains what sort of feedback will be taken more seriously by BERI. The post was also announced on the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/fvvRZMJJ7g4gzXjSH/seeking-information-on-three-potential-grantee-organizations (GW, IR) on 2018-12-09.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
Staff Members’ Personal Donations for Giving Season 20172017-12-18Holden Karnofsky Open PhilanthropyHolden Karnofsky Alexander Berger Nick Beckstead Helen Toner Claire Zabel Lewis Bollard Ajeya Cotra Morgan Davis Michael Levine GiveWell top charities GiveWell GiveDirectly EA Giving Group Berkeley Existential Risk Initiative Effective Altruism Funds Sentience Institute Encompass The Humane League The Good Food Institute Mercy For Animals Compassion in World Farming USA Animal Equality Donor lottery Against Malaria Foundation GiveDirectly Periodic donation list documentationOpen Philanthropy Project staff members describe where they are donating this year, and the considerations that went into the donation decision. By policy, amounts are not disclosed. This is the first standalone blog post of this sort by the Open Philanthropy Project; in previous years, the corresponding donations were documented in the GiveWell staff members donation post.
AI: a Reason to Worry, and to Donate2017-12-10Jacob Falkovich Jacob Falkovich Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds Single donation documentationAI safetyFalkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
Announcing BERI Computing Grants2017-12-01Andrew Critch Berkeley Existential Risk InitiativeBerkeley Existential Risk Initiative Berkeley Existential Risk Initiative Donee periodic updateAI safety/other global catastrophic risks
Forming an engineering team2017-10-25Andrew Critch Berkeley Existential Risk Initiative Berkeley Existential Risk Initiative Donee periodic updateAI safety/other global catastrophic risks
What we’re thinking about as we grow - ethics, oversight, and getting things done2017-10-19Andrew Critch Berkeley Existential Risk InitiativeBerkeley Existential Risk Initiative Berkeley Existential Risk Initiative Donee periodic updateAI safety/other global catastrophic risksOutlines BERI's approach to growth and "ethics" (transparency, oversight, trust, etc.).
BERI's semi-annual report, August2017-09-12Rebecca Raible Berkeley Existential Risk InitiativeBerkeley Existential Risk Initiative Berkeley Existential Risk Initiative Donee periodic updateAI safety/other global catastrophic risksA blog post announcing BERI's semi-annual report.
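
As referenced in the Committee for Effective Altruism Support row above, the committee arrives at final grant amounts by averaging members' proposed allocations of a set budget. The sketch below is a minimal illustration of that averaging rule; the votes, grantee names, and amounts are made up for illustration and do not reflect any actual committee round.

```python
# Toy illustration of the vote-averaging rule: each committee member proposes an
# allocation of a fixed budget across grantees (plus an optional "save for later"
# bucket); the final amounts are the per-grantee averages of those proposals.
def average_votes(votes):
    """votes: list of dicts mapping grantee -> proposed amount. Returns the averaged allocation."""
    grantees = {g for vote in votes for g in vote}
    return {g: sum(vote.get(g, 0.0) for vote in votes) / len(votes) for g in grantees}

votes = [
    {"MIRI": 1_500_000, "CEA": 2_000_000, "save for later": 500_000},
    {"MIRI": 2_500_000, "CEA": 1_000_000, "save for later": 500_000},
    {"MIRI": 2_000_000, "CEA": 1_500_000, "save for later": 500_000},
]
print(average_votes(votes))  # {'MIRI': 2000000.0, 'CEA': 1500000.0, 'save for later': 500000.0}
```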

Full list of donations in reverse chronological order (34 donations)

[Graph omitted: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations]

Donor | Amount (current USD) | Amount rank (out of 34) | Donation date | Cause area | URL | Influencer | Notes
Open Philanthropy | 70,000.00 | 24 | 2023-10 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-university-collaboration-program/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [BERI's] university collaboration program. Selected applicants become eligible for support and services from BERI that would be difficult or impossible to obtain through normal university channels. BERI will use these funds to increase the size of its 2024 cohort." The page https://existence.org/2023/07/27/trial-collaborations-2023.html is linked.

Other notes: Intended funding timeframe in months: 12.
Open Philanthropy | 70,000.00 | 24 | 2023-09 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-scalable-oversight-dataset/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant " to support the creation of a scalable oversight dataset. The purpose of the dataset is to collect questions that non-experts can’t answer even with the internet at their disposal; these kinds of questions can be used to test how well AI systems can lead humans to the right answers without misleading them."
Open Philanthropy | 35,000.00 | 28 | 2023-07 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-lab-retreat/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a retreat for Anca Dragan’s BAIR lab group, where members will discuss potential risks from advanced artificial intelligence."

Other notes: Intended funding timeframe in months: 1.
Open Philanthropy | 2,047,268.00 | 2 | 2022-11 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support their collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community. This grant will support the MATS program’s third cohort."

Donor reason for donating at this time (rather than earlier or later): The grant is made in time for the third cohort of the SERI-MATS program; this is the cohort being funded by the grant.
Intended funding timeframe in months: 6

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ for the London-based extension suggests continued satisfaction with this funded program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year. See also the companion grants: https://www.openphilanthropy.org/grants/ai-safety-support-seri-mats-program/ to AI Safety Support and https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ to Conjecture for the London-based extension.
Open Philanthropy | 100,000.00 | 20 | 2022-11 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support-2/ | -- | Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. BERI seeks to reduce existential risks to humanity by providing services and support to university-based research groups, including the Center for Human-Compatible AI at the University of California, Berkeley."
Open Philanthropy | 30,000.00 | 29 | 2022-06 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-language-model-alignment-research/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a project led by Professor Samuel Bowman of New York University to develop a dataset and accompanying methods for language model alignment research."

Other notes: Intended funding timeframe in months: 36.
FTX Future Fund | 155,000.00 | 17 | 2022-05 | AI safety | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft."
Open Philanthropy | 140,050.00 | 19 | 2022-04 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-david-krueger-collaboration/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger."

Other notes: The grant page says: "The grant amount was updated in August 2023."
Open Philanthropy | 210,000.00 | 14 | 2022-04 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-ai-standards-2022/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence."

Donor reason for donating at this time (rather than earlier or later): The grant is made at the same time as the companion grant https://www.openphilanthropy.org/grants/center-for-long-term-cybersecurity-ai-standards-2022/ to the Center for Long-Term Cybersecurity (CLTC), via the University of California, Berkeley.

Other notes: There is a companion grant https://www.openphilanthropy.org/grants/center-for-long-term-cybersecurity-ai-standards-2022/ to the Center for Long-Term Cybersecurity (CLTC), via the University of California, Berkeley.
Open Philanthropy | 1,008,127.00 | 5 | 2022-04 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the second cohort of the SERI Machine Learning Alignment Theory Scholars (MATS) Program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community."

Donor reason for donating at this time (rather than earlier or later): The grant is made in time for the second cohort of the SERI-MATS program; this is the cohort being funded by the grant.
Intended funding timeframe in months: 6

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the third cohort of the SERI-MATS program suggests the donor's continued satisfaction with the SERI-MATS program. Also, the grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-program-in-london/ for the London-based extension of this cohort (the second cohort) also suggests the donor's satisfaction with the program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year.
FTX Future Fund | 100,000.00 | 20 | 2022-03 | -- | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support BERI in hiring a second core operations employee to contribute to BERI’s work supporting university research groups."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Open Philanthropy | 1,126,160.00 | 4 | 2022-02 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2022/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. BERI will use the funding to facilitate the creation of an in-house compute cluster for CHAI’s use, purchase compute resources, and hire a part-time system administrator to help manage the cluster."
Open Philanthropy | 195,000.00 | 16 | 2021-11 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program-2/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the SERI ML Alignment Theory Scholars (MATS) Program. MATS is a two-month program where students will research problems related to AI alignment while supervised by a mentor."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ and https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the second and third cohorts of the SERI-MATS program suggest the donor's continued satisfaction with the SERI-MATS program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year. Intended funding timeframe in months: 6.
Jaan Tallinn | 248,000.00 | 12 | 2021-10 | AI safety | https://survivalandflourishing.fund/sff-2021-h2-recommendations | Survival and Flourishing Fund, Beth Barnes, Oliver Habryka, Zvi Mowshowitz | Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective. A toy code sketch of allocation by weighted marginal utility, in the spirit of this description, appears at the end of this page.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-CHAI collaboration, i.e., BERI's collaboration with the Center for Human-Compatible AI (CHAI). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the fourth with grants to the grantee. It is the first round with a grant specifically for this collaboration.

Other notes: Jed McCaleb makes a $250,000 grant to BERI in this grant round for the same collaboration (BERI-CHAI). The Casey and Family Foundation, which also participates as a funder in this grant round, does not make any grants to BERI. Percentage of total donor spend in the corresponding batch of donations: 2.80%; announced: 2021-11-20.
Jed McCaleb | 250,000.00 | 10 | 2021-10 | AI safety | https://survivalandflourishing.fund/sff-2021-h2-recommendations | Survival and Flourishing Fund, Beth Barnes, Oliver Habryka, Zvi Mowshowitz | Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-CHAI collaboration, i.e., BERI's collaboration with the Center for Human-Compatible AI (CHAI). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the fourth with grants to the grantee. It is the first round with a grant specifically for this collaboration.

Other notes: Jaan Tallinn makes a $248,000 grant to BERI in this grant round for the same collaboration (BERI-CHAI). The Casey and Family Foundation, which also participates as a funder in this grant round, does not make any grants to BERI. Percentage of total donor spend in the corresponding batch of donations: 100.00%; announced: 2021-11-20.
Jaan Tallinn | 37,000.00 | 26 | 2021-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-CSER collaboration, i.e., BERI's collaboration with the Centre for the Study of Existential Risk (CSER). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the third with grants to the grantee. It is the second round with a grant specifically for this collaboration.

Other notes: The grant round includes grants from Tallinn for two other BERI collaborations (with FHI and SERI) as well as grants from Jed McCaleb for the collaborations with FHI and SERI. Percentage of total donor spend in the corresponding batch of donations: 0.39%.
Jed McCaleb | 17,000.00 | 31 | 2021-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-FHI collaboration, i.e., BERI's collaboration with the Future of Humanity Institute (FHI). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the third with grants to the grantee. It is the first round with a grant specifically for this collaboration.

Other notes: The grant round includes a grant from Jed McCaleb for another BERI collaboration (with SERI) and grants from Tallinn for collaborations with FHI, SERI, and CSER. Percentage of total donor spend in the corresponding batch of donations: 7.00%.
Jed McCaleb | 14,000.00 | 33 | 2021-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-SERI collaboration, i.e., BERI's collaboration with the Stanford Existential Risks Initiative (SERI). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the third with grants to the grantee. It is the first round with a grant specifically for this collaboration.

Other notes: The grant round includes a grant from Jed McCaleb for another BERI collaboration (with FHI) and grants from Tallinn for collaborations with FHI, SERI, and CSER. Percentage of total donor spend in the corresponding batch of donations: 5.76%.
Jaan Tallinn | 478,000.00 | 7 | 2021-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-FHI collaboration, i.e., BERI's collaboration with the Future of Humanity Institute (FHI). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the third with grants to the grantee. It is the first round with a grant specifically for this collaboration.

Other notes: The grant round includes grants from Tallinn for two other BERI collaborations (with SERI and CSER) as well as grants from Jed McCaleb for the collaborations with FHI and SERI. Percentage of total donor spend in the corresponding batch of donations: 5.02%.
Jaan Tallinn | 333,000.00 | 9 | 2021-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-SERI collaboration, i.e., BERI's collaboration with the Stanford Existential Risks Initiative (SERI). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the third with grants to the grantee. It is the first round with a grant specifically for this collaboration.

Other notes: The grant round includes grants from Tallinn for two other BERI collaborations (with FHI and CSER) as well as grants from Jed McCaleb for the collaborations with FHI and SERI. Percentage of total donor spend in the corresponding batch of donations: 3.50%.
Open Philanthropy | 210,000.00 | 14 | 2021-03 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-summer-fellowships/ | Claire Zabel | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to provide stipends for the Stanford Existential Risks Initiative (SERI) summer research fellowship program."

Donor retrospective of the donation: The multiple future grants https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program-2/, https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/, and https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ from Open Philanthropy to BERI for the SERI-MATS program, a successor of sorts to this program, suggest satisfaction with the outcome of this grant.

Other notes: Intended funding timeframe in months: 2.
Jaan Tallinn | 247,000.00 | 13 | 2021-01-05 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Oliver Habryka, Eric Rogstad | Donation process: Part of the Survival and Flourishing Fund's 2020 H2 grants https://survivalandflourishing.fund/sff-2020-h2-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fourth grant round and the second with grants to this grantee.

Other notes: Although the Survival and Flourishing Fund and Jed McCaleb also participate in this grant round as funders, neither of them makes any grants to this grantee. Percentage of total donor spend in the corresponding batch of donations: 2.74%.
Jaan Tallinn | 20,000.00 | 30 | 2020-06-11 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Alex Zhu, Andrew Critch, Jed McCaleb, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the BERI-CSER collaboration, i.e., BERI's collaboration with the Centre for the Study of Existential Risk (CSER). See https://existence.org/collaborations/ for BERI's full list of collaborations.

Donor reason for selecting the donee: Zvi Mowshowitz, one of the recommenders in the grant round, writes in https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff#AI_Safety_Paper_Production (GW, IR) "I consider AI Safety and related existential risks to be by far the most important ‘cause area,’ that’s even more true given the focus of SFF, and I am confident Jaan feels the same way. [...] It’s hard to find things that might possibly work in the AI Safety space, as opposed to plans to look around for something that might possibly work. [...] CHAI@BERI also seemed clearly worthwhile, and they got a large grant as well."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round and the first with a grant to BERI.

Other notes: Although the Survival and Flourishing Fund and Jed McCaleb also participate as funders in this grant round, neither of them makes a grant to the grantee. SFF itself is a descendant of BERI's now-ended grantmaking, which is distinct from BERI's academic collaboration work that is still ongoing and being funded by this grant. Percentage of total donor spend in the corresponding batch of donations: 2.18%.
Open Philanthropy | 150,000.00 | 18 | 2020-01 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support/ | Claire Zabel | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "BERI seeks to reduce existential risks to humanity, and collaborates with other long-termist organizations, including the Center for Human-Compatible AI at UC Berkeley. This funding is intended to help BERI establish new collaborations."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support-2/ suggests continued satisfaction with the grantee.
Patrick Brinich-Langlois | 7,497.00 | 34 | 2019-12-03 | AI safety | https://www.patbl.com/misc/other/donations/ | --
Open Philanthropy | 705,000.00 | 6 | 2019-11 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2019/ | Daniel Dewey | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2022/ from Open Philanthropy to BERI for the same purpose (CHAI collaboration) suggests satisfaction with the outcome of the grant.

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 24; announced: 2019-12-13.
Open Philanthropy | 250,000.00 | 10 | 2019-01 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-ml-engineers/ | Daniel Dewey | Donation process: The grant page describes the donation decision as being based on "conversations with various professors and students"

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/grants/uc-berkeley-center-for-human-compatible-ai-2016/ for the launch of CHAI and previous grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/ to collaborate with CHAI. Announced: 2019-03-04.
Casey and Family Foundation | 100,000.00 | 20 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | --
Anonymous | 100,000.00 | 20 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | --
Jaan Tallinn | 5,000,000.00 | 1 | 2017-12 | AI safety | http://existence.org/2018/01/11/activity-update-december-2017.html | -- | Donation amount approximate.
Open Philanthropy | 403,890.00 | 8 | 2017-07 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/ | Daniel Dewey | Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai The funding is intended to help BERI hire contractors and part-time employees to help CHAI, such as web development and coordination support, research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff. More in the grant proposal https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx

Other notes: Announced: 2017-09-28.
EA Giving Group | 35,161.98 | 27 | 2017-04 | AI safety/other global catastrophic risks | https://app.effectivealtruism.org/funds/far-future/payouts/OzIQqsVacUKw0kEuaUGgI | Nick Beckstead | The grant is discussed, along with the reasoning, at http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ The grantee approached Nick Beckstead with a grant proposal asking for $50,000. Beckstead provided all the money already donated to the far future fund in Effective Altruism Funds, and made up the remainder via the EA Giving Group and some personal funds. It is not clear how much came from personal funds, so for simplicity the entirety of the remainder is attributed to the EA Giving Group (creating some inaccuracy).
Effective Altruism Funds: Long-Term Future Fund | 14,838.02 | 32 | 2017-03-20 | AI safety/other global catastrophic risks | https://funds.effectivealtruism.org/funds/payouts/march-2017-berkeley-existential-risk-initiative-beri | Nick Beckstead | Donation process: The grant page says that Nick Beckstead, the fund manager, learned that Andrew Critch was starting up BERI and needed $50,000. Beckstead determined that this would be the best use of the money in the Long-Term Future Fund.

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "It is a new initiative providing various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support). It works as a non-profit entity, independent of any university, so that it can help multiple organizations and to operate more swiftly than would be possible within a university context."

Donor reason for selecting the donee: Nick Beckstead gives these reasons on the grant page: the basic idea makes sense to him, his confidence in Critch's ability to make it happen, supporting people to try out reasonable ideas and learn from how they unfold seems valuable, and the natural role of Beckstead as a "first funder" for such opportunities and confidence that other competing funders for this would have good counterfactual uses of their money.

Donor reason for donating that amount (rather than a bigger or smaller amount): The requested amount was $50,000, and at the time of grant, the fund only had $14,838.02. So, all the fund money was granted. Beckstead donated the remainder of the funding via the EA Giving Group and a personal donor-advised fund.
Percentage of total donor spend in the corresponding batch of donations: 100.00%

Donor reason for donating at this time (rather than earlier or later): The timing of BERI starting up and the launch of the Long-Term Future Fund closely matched, leading to this grant happening when it did.

Donor retrospective of the donation: BERI would become successful and get considerable funding from Jaan Tallinn in the coming months, validating the grant. The Long-Term Future Fund would not make any further grants to BERI.
Jaan Tallinn | 2,000,000.00 | 3 | 2017 | AI safety | http://existence.org/grants | --
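
As referenced in the Survival and Flourishing Fund entries above, the S-process has recommenders specify marginal utility functions for funding each application, while funders specify how much weight to give each recommender. The sketch below is a toy greedy allocation in that spirit: it illustrates allocating one funder's budget by weighted, diminishing marginal utility. It is not the actual S-process implementation, and all recommender names, weights, tranches, and amounts are made up for illustration.

```python
# Toy illustration only: greedy allocation of one funder's budget by weighted,
# diminishing marginal utility, loosely inspired by the S-process description quoted above.
import heapq

def allocate(budget, recommender_weights, marginal_utils, step=1_000):
    """Greedily allocate `budget` (USD) across applicants in increments of `step`."""
    allocation = {app: 0 for apps in marginal_utils.values() for app in apps}
    heap = []  # entries: (-weighted utility per dollar, applicant, dollars left in tranche)
    for rec, apps in marginal_utils.items():
        weight = recommender_weights[rec]
        for app, tranches in apps.items():
            for amount, utility_per_dollar in tranches:
                heapq.heappush(heap, (-weight * utility_per_dollar, app, amount))
    while budget >= step and heap:
        neg_utility, app, remaining = heapq.heappop(heap)
        grant = min(step, remaining, budget)
        allocation[app] += grant
        budget -= grant
        if remaining > grant:
            heapq.heappush(heap, (neg_utility, app, remaining - grant))
    return allocation

# One funder with a $1,000,000 budget, two recommenders, two applicants. Each recommender
# gives each applicant a list of (tranche size, utility per dollar), with diminishing utility.
weights = {"recommender_A": 0.6, "recommender_B": 0.4}
utils = {
    "recommender_A": {"BERI": [(300_000, 5), (300_000, 2)], "Org X": [(200_000, 4)]},
    "recommender_B": {"BERI": [(200_000, 3)], "Org X": [(400_000, 6)]},
}
print(allocate(1_000_000, weights, utils))  # {'BERI': 400000, 'Org X': 600000}
```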