Future of Life Institute donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | 
Facebook page | futureoflifeinstitute
Website | https://futureoflife.org/
Transparency and financials page | https://futureoflife.org/tax-forms/
Donation case page | https://futureoflife.org/wp-content/uploads/2016/02/FLI-2015-Annual-Report.pdf
Twitter username | FLIxrisk
Wikipedia page | https://en.wikipedia.org/wiki/Future_of_Life_Institute
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction
Org Watch page | https://orgwatch.issarice.com/?organization=Future+of+Life+Institute
Key people | Jaan Tallinn, Max Tegmark, Meia Chita-Tegmark, Viktoriya Krakovna, Anthony Aguirre
Launch date | 2014-03

This entity is also a donor.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 17 | 100,000 | 169,176 | 0 | 0 | 23,000 | 50,000 | 50,000 | 100,000 | 100,000 | 100,000 | 250,000 | 347,000 | 1,186,000
Global catastrophic risks | 12 | 100,000 | 99,167 | 0 | 0 | 10,000 | 23,000 | 30,000 | 100,000 | 100,000 | 100,000 | 130,000 | 250,000 | 347,000
AI safety | 5 | 100,000 | 337,200 | 50,000 | 50,000 | 50,000 | 50,000 | 50,000 | 100,000 | 100,000 | 300,000 | 300,000 | 1,186,000 | 1,186,000
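
These summary statistics can be recomputed from the individual donation amounts in the full list of donations further down. The following is a minimal Python sketch, not code from the portal itself; it assumes the two EA Giving Group donations with unknown amounts are counted as 0 (consistent with the count of 17 and the two zero values implied by the "Overall" row) and that a classic nearest-rank percentile convention is used, which reproduces the "Overall" row above.

```python
import math
from statistics import mean, median

# Donation amounts (current USD) for FLI from the full list of donations below.
# The two EA Giving Group donations have unknown amounts and are counted as 0
# here (an assumption consistent with the "Overall" row's count and minimum).
amounts = [347_000, 23_000, 30_000, 100_000, 10_000, 100_000, 130_000, 50_000,
           250_000, 300_000, 50_000, 100_000, 100_000, 100_000, 0, 1_186_000, 0]

def nearest_rank_percentile(values, p):
    """Classic nearest-rank percentile: the smallest value such that at least
    p% of the data is less than or equal to it (assumed convention)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

print("Count:  ", len(amounts))
print("Median: ", median(amounts))
print("Mean:   ", round(mean(amounts)))
print("Minimum:", min(amounts))
for p in range(10, 100, 10):
    print(f"{p}th percentile:", nearest_rank_percentile(amounts, p))
print("Maximum:", max(amounts))
```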

Donation amounts by donor and year for donee Future of Life Institute

Donor | Total | 2020 | 2019 | 2018 | 2017 | 2016 | 2015
Open Philanthropy | 1,736,000.00 | 0.00 | 100,000.00 | 250,000.00 | 100,000.00 | 100,000.00 | 1,186,000.00
Berkeley Existential Risk Initiative | 500,000.00 | 0.00 | 50,000.00 | 300,000.00 | 150,000.00 | 0.00 | 0.00
Jaan Tallinn | 377,000.00 | 377,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Survival and Flourishing Fund | 253,000.00 | 123,000.00 | 130,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Jed McCaleb | 10,000.00 | 10,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
EA Giving Group | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Total | 2,876,000.00 | 510,000.00 | 280,000.00 | 550,000.00 | 250,000.00 | 100,000.00 | 1,186,000.00
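
The donor-by-year table above is a straightforward pivot of the individual donations listed further down: group by donor and year, sum the amounts, and add row and column totals. The sketch below illustrates that aggregation in Python using a few (donor, year, amount) records taken from the full list; it is illustrative only and not the portal's actual code.

```python
from collections import defaultdict

# A few illustrative (donor, year, amount-in-USD) records from the full list
# of donations below; the real table is built from all 17 records.
donations = [
    ("Jaan Tallinn", 2020, 347_000),
    ("Jaan Tallinn", 2020, 30_000),
    ("Survival and Flourishing Fund", 2020, 23_000),
    ("Survival and Flourishing Fund", 2020, 100_000),
    ("Survival and Flourishing Fund", 2019, 130_000),
    ("Open Philanthropy", 2019, 100_000),
]

# Pivot: totals[donor][year] = sum of amounts for that donor in that year
totals = defaultdict(lambda: defaultdict(float))
for donor, year, amount in donations:
    totals[donor][year] += amount

years = sorted({year for _, year, _ in donations}, reverse=True)
print(" | ".join(["Donor", "Total"] + [str(y) for y in years]))
# Rows sorted by total donated, largest first, as in the table above.
for donor, by_year in sorted(totals.items(), key=lambda kv: -sum(kv[1].values())):
    row = [donor, f"{sum(by_year.values()):,.2f}"]
    row += [f"{by_year.get(y, 0):,.2f}" for y in years]
    print(" | ".join(row))
```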

Full list of documents in reverse chronological order (14 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team2019-12-27Lucas Perry Future of Life InstituteFuture of Life Institute Future of Life Institute Donee periodic updateAI safety/Global catastrophic risksThis is a podcast episode along with a transcript. FLI team members each describe what they do, how their role has evolved, and their plans for 2020.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR)2019-01-06Avi Norowitz Effective Altruism ForumAvi Norowitz William Kiely Against Malaria Foundation Malaria Consortium GiveWell Effective Altruism Funds Alliance to Feed the Earth in Disasters Effective Animal Advocacy Fund The Humane League The Good Food Institute Animal Charity Evaluators Machine Intelligence Research Institute Faunalytics Wild-Animal Suffering Research GiveDirectly Center for Applied Rationality Effective Altruism Foundation Cool Earth Schistosomiasis Control Initiative New Harvest Evidence Action Centre for Effective Altruism Animal Equality Compassion in World Farming USA Innovations for Poverty Action Global Catastrophic Risk Institute Future of Life Institute Animal Charity Evaluators Recommended Charity Fund Sightsavers The Life You Can Save One Step for Animals Helen Keller International 80,000 Hours Berkeley Existential Risk Initiative Vegan Outreach Encompass Iodine Global Network Otwarte Klatki Charity Science Mercy For Animals Coalition for Rainforest Nations Fistula Foundation Sentience Institute Better Eating International Forethought Foundation for Global Priorities Research Raising for Effective Giving Clean Air Task Force The END Fund Miscellaneous commentaryThe blog post describes an effort by a number of donors coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched till 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched.
2017 AI Safety Literature Review and Charity Comparison (GW, IR)2017-12-20Larks Effective Altruism ForumLarks Machine Intelligence Research Institute Future of Humanity Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk AI Impacts Center for Human-Compatible AI Center for Applied Rationality Future of Life Institute 80,000 Hours Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
AI: a Reason to Worry, and to Donate2017-12-10Jacob Falkovich Jacob Falkovich Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds Single donation documentationAI safetyFalkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
Introducing CEA’s Guiding Principles2017-03-07William MacAskill Centre for Effective AltruismEffective Altruism Foundation Rethink Charity Centre for Effective Altruism 80,000 Hours Animal Charity Evaluators Charity Science Effective Altruism Foundation Foundational Research Institute Future of Life Institute Raising for Effective Giving The Life You Can Save Miscellaneous commentaryEffective altruismWilliam MacAskill outlines CEA's understanding of the guiding principles of effective altruism: commitment to others, scientific mindset, openness, integrity, and collaborative spirit. The post also lists other organizations that voice their support for these definitions and guiding principles, including: .impact, 80,000 Hours, Animal Charity Evaluators, Charity Science, Effective Altruism Foundation, Foundational Research Institute, Future of Life Institute, Raising for Effective Giving, and The Life You Can Save. The following individuals are also listed as voicing their support for the definition and guiding principles: Elie Hassenfeld of GiveWell and the Open Philanthropy Project, Holden Karnofsky of GiveWell and the Open Philanthropy Project, Toby Ord of the Future of Humanity Institute, Nate Soares of the Machine Intelligence Research Institute, and Peter Singer. William MacAskill worked on the document with Julia Wise, and also expresses gratitude to Rob Bensinger and Hilary Mayhew for their comments and wording suggestions. The post also briefly mentions an advisory panel set up by Julia Wise, and links to https://forum.effectivealtruism.org/posts/mdMyPRSSzYgk7X45K/advisory-panel-at-cea (GW, IR) for more detail.
Changes in funding in the AI safety field2017-02-01Sebastian Farquhar Centre for Effective Altruism Machine Intelligence Research Institute Center for Human-Compatible AI Leverhulme Centre for the Future of Intelligence Future of Life Institute Future of Humanity Institute OpenAI MIT Media Lab Review of current state of cause areaAI safetyThe post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/
Where the ACE Staff Members are Giving in 2016 and Why2016-12-23Leah Edgerton Animal Charity EvaluatorsAllison Smith Jacy Reese Toni Adleberg Gina Stuessy Kieran Grieg Eric Herboso Erika Alonso Animal Charity Evaluators Animal Equality Vegan Outreach Act Asia Faunalytics Farm Animal Rights Movement Sentience Politics Direct Action Everywhere The Humane League The Good Food Institute Collectively Free Planned Parenthood Future of Life Institute Future of Humanity Institute GiveDirectly Machine Intelligence Research Institute The Humane Society of the United States Farm Sanctuary StrongMinds Periodic donation list documentationAnimal welfare|AI safety|Global catastrophic risksAnimal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2016. Donation amounts are not disclosed, likely by policy
2016 AI Risk Literature Review and Charity Comparison (GW, IR)2016-12-13Larks Effective Altruism ForumLarks Machine Intelligence Research Institute Future of Humanity Institute OpenAI Center for Human-Compatible AI Future of Life Institute Centre for the Study of Existential Risk Leverhulme Centre for the Future of Intelligence Global Catastrophic Risk Institute Global Priorities Project AI Impacts Xrisks Institute X-Risks Net Center for Applied Rationality 80,000 Hours Raising for Effective Giving Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
CEA Staff Donation Decisions 20162016-12-06Sam Deere Centre for Effective AltruismWilliam MacAskill Michelle Hutchinson Tara MacAulay Alison Woodman Seb Farquhar Hauke Hillebrandt Marinella Capriati Sam Deere Max Dalton Larissa Hesketh-Rowe Michael Page Stefan Schubert Pablo Stafforini Amy Labenz Centre for Effective Altruism 80,000 Hours Against Malaria Foundation Schistosomiasis Control Initiative Animal Charity Evaluators Charity Science Health New Incentives Project Healthy Children Deworm the World Initiative Machine Intelligence Research Institute StrongMinds Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Effective Altruism Foundation Sci-Hub Vote.org The Humane League Foundational Research Institute Periodic donation list documentationCentre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.
Where should you donate to have the most impact during giving season 2015?2015-12-24Robert Wiblin 80,000 Hours Against Malaria Foundation Giving What We Can GiveWell AidGrade Effective Altruism Outreach Animal Charity Evaluators Machine Intelligence Research Institute Raising for Effective Giving Center for Applied Rationality Johns Hopkins Center for Health Security Ploughshares Fund Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Charity Science Deworm the World Initiative Schistosomiasis Control Initiative GiveDirectly Evaluator consolidated recommendation listGlobal health and development|Effective altruism/movement growth|Epistemic institutions|Biosecurity and pandemic preparedness|AI risk|Global catastrophic risksRobert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate
Peter McCluskey's favorite charities2015-12-06Peter McCluskey Peter McCluskey Center for Applied Rationality Future of Humanity Institute AI Impacts GiveWell GiveWell top charities Future of Life Institute Centre for Effective Altruism Brain Preservation Foundation Multidisciplinary Association for Psychedelic Studies Electronic Frontier Foundation Methuselah Mouse Prize SENS Research Foundation Foresight Institute Evaluator consolidated recommendation listThe page discusses the favorite charities of Peter McCluskey and his opinion on their current room for more funding in light of their financial situation and expansion plans.
My Cause Selection: Michael Dickens2015-09-15Michael Dickens Effective Altruism ForumMichael Dickens Machine Intelligence Research Institute Future of Humanity Institute Centre for the Study of Existential Risk Future of Life Institute Open Philanthropy Animal Charity Evaluators Animal Ethics Foundational Research Institute Giving What We Can Charity Science Raising for Effective Giving Single donation documentationAnimal welfare,AI risk,Effective altruismExplanation by Dickens of giving choice for 2015. After some consideration, narrows choice to three orgs: MIRI, ACE, and REG. Finally chooses REG due to weighted donation multiplier

Full list of donations in reverse chronological order (17 donations)

Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations [graph not reproduced in this text version]

Donor | Amount (current USD) | Amount rank (out of 17) | Donation date | Cause area | URL | Influencer | Notes
Jaan Tallinn | 347,000.00 | 2 | 2020-12-04 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Oliver Habryka, Eric Rogstad | Donation process: Part of the Survival and Flourishing Fund's 2020 H2 grants https://survivalandflourishing.fund/sff-2020-h2-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." (A toy illustration of this S-process appears at the end of this page, after the full list of donations.)

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fourth grant round. Grants to the grantee had been made in the first and third grant round.

Other notes: The grant round also includes a grant from the Survival and Flourishing Fund of $23,000 to the same grantee (FLI). Although Jed McCaleb also participates as a funder in the round, he does not make any grants to this grantee in this round. Percentage of total donor spend in the corresponding batch of donations: 12.83%.
Survival and Flourishing Fund | 23,000.00 | 14 | 2020-10 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2020-h2-recommendations | Oliver Habryka, Eric Rogstad | Donation process: Part of the Survival and Flourishing Fund's 2020 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fourth grant round. Grants to the grantee had been made in the first and third grant round.

Other notes: The grant round also includes a grant from Jaan Tallinn of $347,000 to the same grantee (FLI). Although Jed McCaleb also participates as a funder in the round, he does not make any grants to this grantee in this round. Percentage of total donor spend in the corresponding batch of donations: 3.54%.
Jaan Tallinn | 30,000.00 | 13 | 2020-07-23 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Alex Zhu, Andrew Critch, Jed McCaleb, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; the grantee had also received a grant in the first round.

Donor retrospective of the donation: Continued grants in future grant rounds such as https://survivalandflourishing.fund/sff-2020-h2-recommendations (2020 H2) suggest continued satisfaction with the grantee.

Other notes: The grant round also includes grants from the Survival and Flourishing Fund ($100,000) and Jed McCaleb ($10,000) to the same grantee (FLI). Percentage of total donor spend in the corresponding batch of donations: 3.26%.
Survival and Flourishing Fund | 100,000.00 | 6 | 2020-06-09 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Alex Zhu, Andrew Critch, Jed McCaleb, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; the grantee had also received a grant in the first round.

Donor retrospective of the donation: Continued grants in future grant rounds such as https://survivalandflourishing.fund/sff-2020-h2-recommendations (2020 H2) suggest continued satisfaction with the grantee.

Other notes: The grant round also includes grants from Jaan Tallinn ($30,000) and Jed McCaleb ($10,000) to the same grantee (FLI). Percentage of total donor spend in the corresponding batch of donations: 15.38%.
Jed McCaleb | 10,000.00 | 15 | 2020-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2020-h1-recommendations | Survival and Flourishing Fund, Alex Zhu, Andrew Critch, Jed McCaleb, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; the grantee had also received a grant in the first round.

Other notes: The grant round also includes grants from the Survival and Flourishing Fund ($100,000) and Jaan Tallinn ($30,000) to the same grantee (FLI). Percentage of total donor spend in the corresponding batch of donations: 4.00%.
Open Philanthropy | 100,000.00 | 6 | 2019-10 | Global catastrophic risks | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2019 | Daniel Dewey | Intended use of funds (category): Organizational general support

Other notes: Announced: 2019-11-18.
Survival and Flourishing Fund | 130,000.00 | 5 | 2019-09-05 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Alex Flint, Andrew Critch, Eric Rogstad | Donation process: Part of the founding batch of grants for the Survival and Flourishing Fund made in August 2019. The fund is partly a successor to part of the grants program of the Berkeley Existential Risk Initiative (BERI) that handled grantmaking by Jaan Tallinn; see http://existence.org/tallinn-grants-future/ As such, this grant to FLI may represent a followup to past grants by BERI to FLI

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: This grant may represent a followup to past grants by BERI to FLI

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; the Survival and Flourishing Fund is making its first round of grants in August 2019.

Donor retrospective of the donation: Continued grants in future grant rounds, including https://survivalandflourishing.fund/sff-2020-h1-recommendations (2020 H1) and https://survivalandflourishing.fund/sff-2020-h2-recommendations (2020 H2) suggest continued satisfaction with the grantee.

Other notes: Percentage of total donor spend in the corresponding batch of donations: 14.77%; announced: 2019-08-29.
Berkeley Existential Risk Initiative | 50,000.00 | 11 | 2019-04-22 | AI safety | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | --
Open Philanthropy | 250,000.00 | 4 | 2018-06 | Global catastrophic risks | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2018 | Nick Beckstead | Intended use of funds (category): Organizational general support

Intended use of funds: Grant for general support. It is a renewal of the May 2017 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017 whose primary purpose was to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant

Donor retrospective of the donation: The followup grant in 2019 suggests that Open Phil would continue to stand by its assessment of the grantee.

Other notes: Announced: 2018-07-05.
Berkeley Existential Risk Initiative | 300,000.00 | 3 | 2018-04-10 | AI safety | https://web.archive.org/web/20180905034853/http://existence.org/organization-grants/ https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support.
Berkeley Existential Risk Initiative | 50,000.00 | 11 | 2017-10-27 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support. See announcement at http://existence.org/2017/11/03/activity-update-october-2017.html.
Berkeley Existential Risk Initiative | 100,000.00 | 6 | 2017-10-27 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | --
Open Philanthropy | 100,000.00 | 6 | 2017-05 | Global catastrophic risks/AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017 | Nick Beckstead | Intended use of funds (category): Organizational general support

Intended use of funds: Grant for general support. However, the primary use of the grant will be to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant

Donor retrospective of the donation: The followup grants in 2018 and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee.

Other notes: Announced: 2017-09-27.
Open Philanthropy | 100,000.00 | 6 | 2016-03 | Global catastrophic risks/general research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support | -- | Donation process: According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support#Our_process "Following our collaboration last year, we kept in touch with FLI regarding its funding situation and plans for future activities."

Intended use of funds (category): Organizational general support

Intended use of funds: Main planned activities for 2016 include: news operation, nuclear weapons campaign, AI safety conference, and AI conference travel.

Donor reason for selecting the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support#The_case_for_the_grant says: "In organizing its 2015 [Puerto Rico] AI safety conference (which we attended), FLI demonstrated a combination of network, ability to execute, and values that impressed us. We felt that the conference was well-organized, attracted the attention of high-profile individuals who had not previously demonstrated an interest in AI safety, and seemed to lead many of those individuals to take the issue more seriously." There is more detail in the grant page, as well as a list of reservations about the grant.

Donor reason for donating at this time (rather than earlier or later): Open Phil needed enough time to evaluate the results of its first Future of Life Institute grant that was focused on AI safety, and to see the effects of the Puerto Rico 2015 AI safety conference. Timing also likely determined by FLI explicitly seeking more money to meet its budget.

Donor thoughts on making further donations to the donee: According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support#Key_questions_for_follow-up "We expect to have a conversation with FLI staff every 3-6 months for the next 12 months. After that, we plan to consider renewal." A list of questions is included.

Donor retrospective of the donation: The followup grants in 2017, 2018, and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee.

Other notes: Announced: 2016-03-18.
EA Giving Group | -- | -- | 2016 | Global catastrophic risks | https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit | Nick Beckstead | Actual date range: December 2015 to February 2016. Exact date, amount, or fraction not known, but it is the donee with the fourth highest amount donated out of six donees in this period.
Open Philanthropy | 1,186,000.00 | 1 | 2015-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction | -- | Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks that describes strategy and developments prior to the grant. An update on the grant was posted in 2017-04 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant discussing the impressions of Howie Lempel and Daniel Dewey regarding the grant and the effect on and role of Open Phil. Announced: 2015-08-26.
EA Giving Group | -- | -- | 2015 | Global catastrophic risks | https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit | Nick Beckstead | Actual date range: December 2014 to December 2015. Exact date, amount, or fraction not known, but it is the donee with the second highest amount donated out of eight donees in this period.
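
Several of the grant entries above quote the Survival and Flourishing Fund's description of its S-process ("simulation process"), in which recommenders specify marginal utility functions for funding each application and funders specify utility functions for deferring to each recommender. As a rough illustration of that general idea only, here is a toy greedy allocation in Python; the recommender names, utility functions, weights, and dollar figures are all invented, and SFF's real process (counterfactual delegation scenarios adjusted through discussion in a spreadsheet) is considerably richer.

```python
# A loose, hypothetical sketch of the S-process idea described above: money is
# allocated in small increments to whatever application currently has the
# highest funder-weighted marginal utility. This is NOT SFF's actual algorithm
# or data; all names, functions, and numbers here are made up.

STEP = 1_000  # allocate funder budgets in $1,000 increments

def marginal_utility(base, already_allocated, saturation):
    """Recommender's marginal utility of the next dollar to an application,
    with diminishing returns as the application receives more funding."""
    return base / (1 + already_allocated / saturation)

recommender_views = {
    # recommender -> {application: (base utility, saturation in $)}
    "R1": {"Org A": (10.0, 50_000), "Org B": (6.0, 100_000)},
    "R2": {"Org A": (4.0, 50_000), "Org B": (9.0, 80_000)},
}
funder_weights = {"Funder X": {"R1": 1.0, "R2": 0.5}}  # deferral weights
funder_budgets = {"Funder X": 200_000}

allocated = {org: 0 for views in recommender_views.values() for org in views}

for funder, budget in funder_budgets.items():
    remaining = budget
    while remaining >= STEP:
        # Pick the (recommender, application) pair with the highest weighted
        # marginal utility for this funder's next increment.
        best = max(
            (funder_weights[funder][rec] *
             marginal_utility(base, allocated[org], sat), org)
            for rec, views in recommender_views.items()
            for org, (base, sat) in views.items()
        )
        allocated[best[1]] += STEP
        remaining -= STEP

print(allocated)  # final toy grant amounts per application
```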