Future of Life Institute donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly or shared with permission. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

* Basic donee information
* Donee donation statistics
* Donation amounts by donor and year
* Full list of documents in reverse chronological order
* Full list of donations in reverse chronological order

Basic donee information

Item | Value
Country | --
Facebook page | futureoflifeinstitute
Website | https://futureoflife.org/
Transparency and financials page | https://futureoflife.org/tax-forms/
Donation case page | https://futureoflife.org/wp-content/uploads/2016/02/FLI-2015-Annual-Report.pdf
Twitter username | FLIxrisk
Wikipedia page | https://en.wikipedia.org/wiki/Future_of_Life_Institute
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction
Org Watch page | https://orgwatch.issarice.com/?organization=Future+of+Life+Institute
Key people | Jaan Tallinn, Max Tegmark, Meia Chita-Tegmark, Viktoriya Krakovna, Anthony Aguirre
Launch date | 2014-03

This entity is also a donor.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 10 | 100,000 | 213,600 | 0 | 0 | 0 | 50,000 | 50,000 | 100,000 | 100,000 | 100,000 | 250,000 | 300,000 | 1,186,000
Global catastrophic risks | 5 | 100,000 | 90,000 | 0 | 0 | 0 | 0 | 0 | 100,000 | 100,000 | 100,000 | 100,000 | 250,000 | 250,000
AI safety | 5 | 100,000 | 337,200 | 50,000 | 50,000 | 50,000 | 50,000 | 50,000 | 100,000 | 100,000 | 300,000 | 300,000 | 1,186,000 | 1,186,000
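
These statistics can be recomputed from the donation amounts in the "Full list of donations" section at the bottom of this page. The sketch below is illustrative rather than the portal's actual code; it assumes that the two EA Giving Group donations of unknown amount are counted as 0 and that percentiles are computed by the nearest-rank method. Neither assumption is documented by the site, but both reproduce the published figures.

```python
# Minimal sketch (not the portal's actual code) that reproduces the statistics
# table above from the donation amounts in the "Full list of donations" section.
# Inferred assumptions: EA Giving Group donations of unknown amount count as 0,
# and percentiles use the nearest-rank method.
from statistics import median

ai_safety = [50_000, 300_000, 100_000, 50_000, 1_186_000]
gcr = [250_000, 100_000, 100_000, 0, 0]  # trailing zeros: EA Giving Group, amount unknown
rows = {
    "Overall": ai_safety + gcr,
    "Global catastrophic risks": gcr,
    "AI safety": ai_safety,
}

def nearest_rank_percentile(sorted_amounts, p):
    """Value at rank ceil(p/100 * n), computed with integer arithmetic."""
    rank = (p * len(sorted_amounts) + 99) // 100
    return sorted_amounts[rank - 1]

for cause_area, amounts in rows.items():
    amounts = sorted(amounts)
    stats = [len(amounts), median(amounts), sum(amounts) / len(amounts), amounts[0]]
    stats += [nearest_rank_percentile(amounts, p) for p in range(10, 100, 10)]
    stats += [amounts[-1]]
    print(cause_area, stats)
```

Running it prints one row per cause area in the same column order as the table above (count, median, mean, minimum, 10th through 90th percentiles, maximum).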

Donation amounts by donor and year for donee Future of Life Institute

Donor | Total | 2019 | 2018 | 2017 | 2016 | 2015
Open Philanthropy Project | 1,636,000.00 | 0.00 | 250,000.00 | 100,000.00 | 100,000.00 | 1,186,000.00
Berkeley Existential Risk Initiative | 500,000.00 | 50,000.00 | 300,000.00 | 150,000.00 | 0.00 | 0.00
EA Giving Group | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Total | 2,136,000.00 | 50,000.00 | 550,000.00 | 250,000.00 | 100,000.00 | 1,186,000.00
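
Each cell in this table is a simple sum over the individual donations listed at the bottom of the page. The sketch below is illustrative, not the portal's code; it again treats the EA Giving Group donations of unknown amount as 0 (a labeled assumption) and uses a pandas pivot to rebuild the donor-by-year totals.

```python
# Illustrative sketch, not the portal's code: rebuild the donor x year totals
# from the individual donations in the "Full list of donations" section below.
# EA Giving Group amounts are unknown and are treated as 0 here (an assumption).
import pandas as pd

donations = [
    ("Berkeley Existential Risk Initiative", 2019, 50_000),
    ("Open Philanthropy Project", 2018, 250_000),
    ("Berkeley Existential Risk Initiative", 2018, 300_000),
    ("Berkeley Existential Risk Initiative", 2017, 100_000),
    ("Berkeley Existential Risk Initiative", 2017, 50_000),
    ("Open Philanthropy Project", 2017, 100_000),
    ("Open Philanthropy Project", 2016, 100_000),
    ("EA Giving Group", 2016, 0),
    ("Open Philanthropy Project", 2015, 1_186_000),
    ("EA Giving Group", 2015, 0),
]
df = pd.DataFrame(donations, columns=["donor", "year", "amount"])
table = df.pivot_table(index="donor", columns="year", values="amount",
                       aggfunc="sum", fill_value=0,
                       margins=True, margins_name="Total")
print(table)
```

The margins=True option adds the "Total" row and column; the year columns come out in ascending order rather than the descending order used in the table above.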

Full list of documents in reverse chronological order (11 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective | 2019-01-06 | Avi Norowitz | Effective Altruism Forum | Avi Norowitz, William Kiely | Against Malaria Foundation, Malaria Consortium, GiveWell, Effective Altruism Funds, Alliance to Feed the Earth in Disasters, Effective Animal Advocacy Fund, The Humane League, The Good Food Institute, Animal Charity Evaluators, Machine Intelligence Research Institute, Faunalytics, Wild-Animal Suffering Research, GiveDirectly, Center for Applied Rationality, Effective Altruism Foundation, Cool Earth, Schistosomiasis Control Initiative, New Harvest, Evidence Action, Centre for Effective Altruism, Animal Equality, Compassion in World Farming USA, Innovations for Poverty Action, Global Catastrophic Risk Institute, Future of Life Institute, Animal Charity Evaluators Recommended Charity Fund, Sightsavers, The Life You Can Save, One Step for Animals, Helen Keller International, 80,000 Hours, Berkeley Existential Risk Initiative, Vegan Outreach, Encompass, Iodine Global Network, Otwarte Klatki, Charity Science, Mercy For Animals, Coalition for Rainforest Nations, Fistula Foundation, Sentience Institute, Better Eating International, Forethought Foundation for Global Priorities Research, Raising for Effective Giving, Clean Air Task Force, The END Fund | Miscellaneous commentary | -- | The blog post describes an effort by a number of donors, coordinated at https://2018.eagivingtuesday.org/donations, to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched until 14 seconds after the start of matching. Despite the very short matching window, the post estimates that $469,000 (65%) of the donations made were matched.
2017 AI Safety Literature Review and Charity Comparison | 2017-12-20 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, AI Impacts, Center for Human-Compatible AI, Center for Applied Rationality, Future of Life Institute, 80,000 Hours | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison, a similar post published a year earlier. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
AI: a Reason to Worry, and to Donate | 2017-12-10 | Jacob Falkovich | -- | Jacob Falkovich | Machine Intelligence Research Institute, Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, Effective Altruism Funds | Single donation documentation | AI safety | Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
Introducing CEA’s Guiding Principles | 2017-03-07 | William MacAskill | Centre for Effective Altruism | Effective Altruism Foundation, Rethink Charity | Centre for Effective Altruism, 80,000 Hours, Animal Charity Evaluators, Charity Science, Effective Altruism Foundation, Foundational Research Institute, Future of Life Institute, Raising for Effective Giving, The Life You Can Save | Miscellaneous commentary | Effective altruism | William MacAskill outlines CEA's understanding of the guiding principles of effective altruism: commitment to others, scientific mindset, openness, integrity, and collaborative spirit. The post also lists other organizations that voice their support for these definitions and guiding principles, including: .impact, 80,000 Hours, Animal Charity Evaluators, Charity Science, Effective Altruism Foundation, Foundational Research Institute, Future of Life Institute, Raising for Effective Giving, and The Life You Can Save. The following individuals are also listed as voicing their support for the definition and guiding principles: Elie Hassenfeld of GiveWell and the Open Philanthropy Project, Holden Karnofsky of GiveWell and the Open Philanthropy Project, Toby Ord of the Future of Humanity Institute, Nate Soares of the Machine Intelligence Research Institute, and Peter Singer. William MacAskill worked on the document with Julia Wise, and also expresses gratitude to Rob Bensinger, Jeff Alstott, and Hilary Mayhew for their comments and wording suggestions. The post also briefly mentions an advisory panel set up by Julia Wise, and links to https://forum.effectivealtruism.org/posts/mdMyPRSSzYgk7X45K/advisory-panel-at-cea for more detail.
Changes in funding in the AI safety field | 2017-02-01 | Sebastian Farquhar | Centre for Effective Altruism | -- | Machine Intelligence Research Institute, Center for Human-Compatible AI, Leverhulme Centre for the Future of Intelligence, Future of Life Institute, Future of Humanity Institute, OpenAI, MIT Media Lab | Review of current state of cause area | AI safety | The post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on the EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/
Where the ACE Staff Members are Giving in 2016 and Why | 2016-12-23 | Leah Edgerton | Animal Charity Evaluators | Allison Smith, Jacy Reese, Toni Adleberg, Gina Stuessy, Kieran Grieg, Eric Herboso, Erika Alonso | Animal Charity Evaluators, Animal Equality, Vegan Outreach, Act Asia, Faunalytics, Farm Animal Rights Movement, Sentience Politics, Direct Action Everywhere, The Humane League, The Good Food Institute, Collectively Free, Planned Parenthood, Future of Life Institute, Future of Humanity Institute, GiveDirectly, Machine Intelligence Research Institute, The Humane Society of the United States, Farm Sanctuary, StrongMinds | Periodic donation list documentation | Animal welfare, AI safety, Global catastrophic risks | Animal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2016. Donation amounts are not disclosed, likely by policy.
2016 AI Risk Literature Review and Charity Comparison | 2016-12-13 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, OpenAI, Center for Human-Compatible AI, Future of Life Institute, Centre for the Study of Existential Risk, Leverhulme Centre for the Future of Intelligence, Global Catastrophic Risk Institute, Global Priorities Project, AI Impacts, Xrisks Institute, X-Risks Net, Center for Applied Rationality, 80,000 Hours, Raising for Effective Giving | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part but notes the absence of similar information on the many other organizations. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
CEA Staff Donation Decisions 2016 | 2016-12-06 | Sam Deere | Centre for Effective Altruism | William MacAskill, Michelle Hutchinson, Tara MacAulay, Alison Woodman, Seb Farquhar, Hauke Hillebrandt, Marinella Capriati, Sam Deere, Max Dalton, Larissa Hesketh-Rowe, Michael Page, Stefan Schubert, Pablo Stafforini, Amy Labenz | Centre for Effective Altruism, 80,000 Hours, Against Malaria Foundation, Schistosomiasis Control Initiative, Animal Charity Evaluators, Charity Science Health, New Incentives, Project Healthy Children, Deworm the World Initiative, Machine Intelligence Research Institute, StrongMinds, Future of Humanity Institute, Future of Life Institute, Centre for the Study of Existential Risk, Effective Altruism Foundation, Sci-Hub, Vote.org, The Humane League, Foundational Research Institute | Periodic donation list documentation | -- | Centre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.
Where should you donate to have the most impact during giving season 2015? | 2015-12-24 | Robert Wiblin | 80,000 Hours | -- | Against Malaria Foundation, Giving What We Can, GiveWell, AidGrade, Effective Altruism Outreach, Animal Charity Evaluators, Machine Intelligence Research Institute, Raising for Effective Giving, Center for Applied Rationality, Johns Hopkins Center for Health Security, Ploughshares Fund, Future of Humanity Institute, Future of Life Institute, Centre for the Study of Existential Risk, Charity Science, Deworm the World Initiative, Schistosomiasis Control Initiative, GiveDirectly | Evaluator consolidated recommendation list | Global health and development, Effective altruism/movement growth, Rationality improvement, Biosecurity and pandemic preparedness, AI risk, Global catastrophic risks | Robert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate.
Peter McCluskey's favorite charities | 2015-12-06 | Peter McCluskey | -- | Peter McCluskey | Center for Applied Rationality, Future of Humanity Institute, AI Impacts, GiveWell, GiveWell top charities, Future of Life Institute, Centre for Effective Altruism, Brain Preservation Foundation, Multidisciplinary Association for Psychedelic Studies, Electronic Frontier Foundation, Methuselah Mouse Prize, SENS Research Foundation, Foresight Institute | Evaluator consolidated recommendation list | -- | The page discusses the favorite charities of Peter McCluskey and his opinion on their current room for more funding in light of their financial situation and expansion plans.
My Cause Selection: Michael Dickens | 2015-09-15 | Michael Dickens | Effective Altruism Forum | Michael Dickens | Machine Intelligence Research Institute, Future of Humanity Institute, Centre for the Study of Existential Risk, Future of Life Institute, Open Philanthropy Project, Animal Charity Evaluators, Animal Ethics, Foundational Research Institute, Giving What We Can, Charity Science, Raising for Effective Giving | Single donation documentation | Animal welfare, AI risk, Effective altruism | Dickens explains his giving choice for 2015. After some consideration, he narrows the choice to three organizations: MIRI, ACE, and REG, and finally chooses REG due to its weighted donation multiplier.

Full list of donations in reverse chronological order (10 donations)

Donor | Amount (current USD) | Amount rank (out of 10) | Donation date | Cause area | URL | Influencer | Notes
Berkeley Existential Risk Initiative | 50,000.00 | 7 | 2019-04-22 | AI safety | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | --
Open Philanthropy Project | 250,000.00 | 3 | 2018-06 | Global catastrophic risks | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2018 | Nick Beckstead | Grant for general support; renewal of the May 2017 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017. Announced: 2018-07-06.
Berkeley Existential Risk Initiative | 300,000.00 | 2 | 2018-04-10 | AI safety | https://web.archive.org/web/20180905034853/http://existence.org/organization-grants/ https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support.
Berkeley Existential Risk Initiative | 100,000.00 | 4 | 2017-10-27 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | --
Berkeley Existential Risk Initiative | 50,000.00 | 7 | 2017-10-27 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support. See announcement at http://existence.org/2017/11/03/activity-update-october-2017.html.
Open Philanthropy Project | 100,000.00 | 4 | 2017-05 | Global catastrophic risks | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017 | Nick Beckstead | Grant awarded for increasing awareness of global catastrophic risks. Announced: 2017-09-27.
Open Philanthropy Project | 100,000.00 | 4 | 2016-03 | Global catastrophic risks/general research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support | -- | Grant for general support. Recipient organization does research and outreach on global catastrophic risks (GCRs) at a broad level. Announced: 2016-03-18.
EA Giving Group | -- | -- | 2016 | Global catastrophic risks | https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit | Nick Beckstead | Actual date range: December 2015 to February 2016. Exact date, amount, or fraction not known, but this is the donee with the fourth highest amount donated out of six donees in this period.
Open Philanthropy Project | 1,186,000.00 | 1 | 2015-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction | -- | Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks that describes strategy and developments prior to the grant. An update on the grant was posted in 2017-04 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant discussing the impressions of Howie Lempel and Daniel Dewey regarding the grant and regarding the effect on, and role of, Open Phil. Announced: 2015-08-26.
EA Giving Group | -- | -- | 2015 | Global catastrophic risks | https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit | Nick Beckstead | Actual date range: December 2014 to December 2015. Exact date, amount, or fraction not known, but this is the donee with the second highest amount donated out of eight donees in this period.