This is an online portal with information on donations of interest to Vipul Naik that were announced publicly or have been shared with permission. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The donations repository is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (to share without caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
|Transparency and financials page||https://futureoflife.org/tax-forms/|
|Donation case page||https://futureoflife.org/wp-content/uploads/2016/02/FLI-2015-Annual-Report.pdf|
|Open Philanthropy Project grant review||http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction|
|Org Watch page||https://orgwatch.issarice.com/?organization=Future+of+Life+Institute|
|Key people||Jaan Tallinn|Max Tegmark|Meia Chita-Tegmark|Viktoriya Krakovna|Anthony Aguirre|
This entity is also a donor.
|Cause area||Count||Median||Mean||Minimum||10th percentile||20th percentile||30th percentile||40th percentile||50th percentile||60th percentile||70th percentile||80th percentile||90th percentile||Maximum|
|Global catastrophic risks||7||100,000||97,143||0||0||0||100,000||100,000||100,000||100,000||100,000||130,000||250,000||250,000|
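The summary row above can be recomputed from the individual donations in the list further down the page. Below is a minimal sketch in Python (not the portal's actual code), assuming a nearest-rank percentile convention, which reproduces the figures shown; the two EA Giving Group donations of unknown amount are treated as 0, matching the table.

```python
import math
import statistics

# USD amounts of the seven "Global catastrophic risks" donations listed in
# the donations table below; the two EA Giving Group donations of unknown
# amount are treated as 0.
amounts = sorted([100_000, 130_000, 250_000, 100_000, 100_000, 0, 0])

def percentile(data, p):
    """Nearest-rank percentile: the smallest value with at least p% of the
    (ascending-sorted) data at or below it."""
    rank = math.ceil(p / 100 * len(data))  # 1-indexed rank
    return data[max(rank, 1) - 1]

print("Count:", len(amounts))                    # 7
print("Median:", statistics.median(amounts))     # 100000
print("Mean:", round(statistics.mean(amounts)))  # 97143
print("Minimum:", amounts[0], "Maximum:", amounts[-1])
for p in range(10, 100, 10):
    print(f"{p}th percentile: {percentile(amounts, p)}")
```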
|Donor||Total||2019||2018||2017||2016||2015|
|Open Philanthropy Project (filter this donee)||1,736,000.00||100,000.00||250,000.00||100,000.00||100,000.00||1,186,000.00|
|Berkeley Existential Risk Initiative (filter this donee)||500,000.00||50,000.00||300,000.00||150,000.00||0.00||0.00|
|Survival and Flourishing Fund (filter this donee)||130,000.00||130,000.00||0.00||0.00||0.00||0.00|
|EA Giving Group (filter this donee)||0.00||0.00||0.00||0.00||0.00||0.00|
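The totals in the table above are sums over the donation rows further down the page, grouped by donor and by year. Below is a minimal sketch of that aggregation, using hypothetical flat records: years are inferred from the announcement dates in the donation list, and the EA Giving Group donations of unknown amount are recorded as 0.

```python
from collections import defaultdict

# (donor, year, amount in USD) records distilled from the donations list
# below; years are inferred from announcement dates, and donations of
# unknown amount are recorded as 0.
donations = [
    ("Open Philanthropy Project", 2019, 100_000),
    ("Survival and Flourishing Fund", 2019, 130_000),
    ("Berkeley Existential Risk Initiative", 2019, 50_000),
    ("Open Philanthropy Project", 2018, 250_000),
    ("Berkeley Existential Risk Initiative", 2018, 300_000),
    ("Open Philanthropy Project", 2017, 100_000),
    ("Berkeley Existential Risk Initiative", 2017, 100_000),
    ("Berkeley Existential Risk Initiative", 2017, 50_000),
    ("Open Philanthropy Project", 2016, 100_000),
    ("EA Giving Group", 2016, 0),
    ("Open Philanthropy Project", 2015, 1_186_000),
    ("EA Giving Group", 2015, 0),
]

# Sum by donor, then by year within each donor.
by_donor = defaultdict(lambda: defaultdict(int))
for donor, year, amount in donations:
    by_donor[donor][year] += amount

for donor, years in by_donor.items():
    per_year = {y: years.get(y, 0) for y in range(2019, 2014, -1)}
    print(donor, "total:", sum(years.values()), per_year)
```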
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Document scope||Cause area||Notes|
|FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team||2019-12-27||Lucas Perry||Future of Life Institute||Future of Life Institute||Future of Life Institute||Donee periodic update||AI safety/Global catastrophic risks||This is a podcast episode, with transcript, in which FLI team members each describe what they do, how their role has evolved, and their plans for 2020|
|2019 AI Alignment Literature Review and Charity Comparison (GW, IR)||2019-12-19||Ben Hoskin||Effective Altruism Forum||Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Project Survival and Flourishing Fund||Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.|
|EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR)||2019-01-06||Avi Norowitz||Effective Altruism Forum||Avi Norowitz William Kiely||Against Malaria Foundation Malaria Consortium GiveWell Effective Altruism Funds: Meta Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Alliance to Feed the Earth in Disasters Effective Animal Advocacy Fund The Humane League The Good Food Institute Animal Charity Evaluators Machine Intelligence Research Institute Faunalytics Wild-Animal Suffering Research GiveDirectly Center for Applied Rationality Effective Altruism Foundation Cool Earth Schistosomiasis Control Initiative New Harvest Evidence Action Centre for Effective Altruism Animal Equality Compassion in World Farming USA Innovations for Poverty Action Global Catastrophic Risk Institute Future of Life Institute Animal Charity Evaluators Recommended Charity Fund Sightsavers The Life You Can Save One Step for Animals Helen Keller International 80,000 Hours Berkeley Existential Risk Initiative Vegan Outreach Encompass Iodine Global Network Otwarte Klatki Charity Science Mercy For Animals Coalition for Rainforest Nations Fistula Foundation Sentience Institute Better Eating International Forethought Foundation for Global Priorities Research Raising for Effective Giving Clean Air Task Force The END Fund||Miscellaneous commentary||The blog post describes an effort by a number of donors coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched until 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched|
|2017 AI Safety Literature Review and Charity Comparison (GW, IR)||2017-12-20||Ben Hoskin||Effective Altruism Forum||Ben Hoskin||Machine Intelligence Research Institute Future of Humanity Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk AI Impacts Center for Human-Compatible AI Center for Applied Rationality Future of Life Institute 80,000 Hours||Review of current state of cause area||AI safety||The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."|
|AI: a Reason to Worry, and to Donate||2017-12-10||Jacob Falkovich||Jacob Falkovich||Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds: Meta Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund||Single donation documentation||AI safety||Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund)|
|Introducing CEA’s Guiding Principles||2017-03-07||William MacAskill||Centre for Effective Altruism||Effective Altruism Foundation||Rethink Charity Centre for Effective Altruism 80,000 Hours Animal Charity Evaluators Charity Science Effective Altruism Foundation Foundational Research Institute Future of Life Institute Raising for Effective Giving The Life You Can Save||Miscellaneous commentary||Effective altruism||William MacAskill outlines CEA's understanding of the guiding principles of effective altruism: commitment to others, scientific mindset, openness, integrity, and collaborative spirit. The post also lists other organizations that voice their support for this definition and these guiding principles, including: .impact, 80,000 Hours, Animal Charity Evaluators, Charity Science, Effective Altruism Foundation, Foundational Research Institute, Future of Life Institute, Raising for Effective Giving, and The Life You Can Save. The following individuals are also listed as voicing their support for the definition and guiding principles: Elie Hassenfeld of GiveWell and the Open Philanthropy Project, Holden Karnofsky of GiveWell and the Open Philanthropy Project, Toby Ord of the Future of Humanity Institute, Nate Soares of the Machine Intelligence Research Institute, and Peter Singer. William MacAskill worked on the document with Julia Wise, and also expresses gratitude to Rob Bensinger and Hilary Mayhew for their comments and wording suggestions. The post also briefly mentions an advisory panel set up by Julia Wise, and links to https://forum.effectivealtruism.org/posts/mdMyPRSSzYgk7X45K/advisory-panel-at-cea (GW, IR) for more detail|
|Changes in funding in the AI safety field||2017-02-01||Sebastian Farquhar||Centre for Effective Altruism||Machine Intelligence Research Institute Center for Human-Compatible AI Leverhulme Centre for the Future of Intelligence Future of Life Institute Future of Humanity Institute OpenAI MIT Media Lab||Review of current state of cause area||AI safety||The post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/|
|Where the ACE Staff Members are Giving in 2016 and Why||2016-12-23||Leah Edgerton||Animal Charity Evaluators||Allison Smith Jacy Reese Toni Adleberg Gina Stuessy Kieran Greig Eric Herboso Erika Alonso||Animal Charity Evaluators Animal Equality Vegan Outreach Act Asia Faunalytics Farm Animal Rights Movement Sentience Politics Direct Action Everywhere The Humane League The Good Food Institute Collectively Free Planned Parenthood Future of Life Institute Future of Humanity Institute GiveDirectly Machine Intelligence Research Institute The Humane Society of the United States Farm Sanctuary StrongMinds||Periodic donation list documentation||Animal welfare|AI safety|Global catastrophic risks||Animal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2016. Donation amounts are not disclosed, likely as a matter of policy|
|2016 AI Risk Literature Review and Charity Comparison (GW, IR)||2016-12-13||Ben Hoskin||Effective Altruism Forum||Ben Hoskin||Machine Intelligence Research Institute Future of Humanity Institute OpenAI Center for Human-Compatible AI Future of Life Institute Centre for the Study of Existential Risk Leverhulme Centre for the Future of Intelligence Global Catastrophic Risk Institute Global Priorities Project AI Impacts Xrisks Institute X-Risks Net Center for Applied Rationality 80,000 Hours Raising for Effective Giving||Review of current state of cause area||AI safety||The lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."|
|CEA Staff Donation Decisions 2016||2016-12-06||Sam Deere||Centre for Effective Altruism||William MacAskill Michelle Hutchinson Tara MacAulay Alison Woodman Seb Farquhar Hauke Hillebrandt Marinella Capriati Sam Deere Max Dalton Larissa Hesketh-Rowe Michael Page Stefan Schubert Pablo Stafforini Amy Labenz||Centre for Effective Altruism 80,000 Hours Against Malaria Foundation Schistosomiasis Control Initiative Animal Charity Evaluators Charity Science Health New Incentives Project Healthy Children Deworm the World Initiative Machine Intelligence Research Institute StrongMinds Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Effective Altruism Foundation Sci-Hub Vote.org The Humane League Foundational Research Institute||Periodic donation list documentation||Centre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.|
|Where should you donate to have the most impact during giving season 2015?||2015-12-24||Robert Wiblin||80,000 Hours||Against Malaria Foundation Giving What We Can GiveWell AidGrade Effective Altruism Outreach Animal Charity Evaluators Machine Intelligence Research Institute Raising for Effective Giving Center for Applied Rationality Johns Hopkins Center for Health Security Ploughshares Fund Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Charity Science Deworm the World Initiative Schistosomiasis Control Initiative GiveDirectly||Evaluator consolidated recommendation list||Global health and development,Effective altruism/movement growth,Rationality improvement,Biosecurity and pandemic preparedness,AI risk,Global catastrophic risks||Robert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate|
|Peter McCluskey's favorite charities||2015-12-06||Peter McCluskey||Peter McCluskey||Center for Applied Rationality Future of Humanity Institute AI Impacts GiveWell GiveWell top charities Future of Life Institute Centre for Effective Altruism Brain Preservation Foundation Multidisciplinary Association for Psychedelic Studies Electronic Frontier Foundation Methuselah Mouse Prize SENS Research Foundation Foresight Institute||Evaluator consolidated recommendation list||The page discusses the favorite charities of Peter McCluskey and his opinion on their current room for more funding in light of their financial situation and expansion plans|
|My Cause Selection: Michael Dickens||2015-09-15||Michael Dickens||Effective Altruism Forum||Michael Dickens||Machine Intelligence Research Institute Future of Humanity Institute Centre for the Study of Existential Risk Future of Life Institute Open Philanthropy Project Animal Charity Evaluators Animal Ethics Foundational Research Institute Giving What We Can Charity Science Raising for Effective Giving||Single donation documentation||Animal welfare,AI risk,Effective altruism||Dickens explains his giving choice for 2015. After some consideration, he narrows the choice to three orgs: MIRI, ACE, and REG, and finally chooses REG due to its weighted donation multiplier|
|Donor||Amount (current USD)||Amount rank (out of 12)||Cause area||URL||Influencer||Notes|
|Open Philanthropy Project||100,000.00||5||Global catastrophic risks||https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2019||Daniel Dewey||Intended use of funds (category): Organizational general support. Announced: 2019-11-18.|
|Survival and Flourishing Fund||130,000.00||4||Global catastrophic risks||http://survivalandflourishing.org/||Alex Flint Andrew Critch Eric Rogstad||Donation process: Part of the founding batch of grants for the Survival and Flourishing Fund made in August 2019. The fund is partly a successor to the part of the grants program of the Berkeley Existential Risk Initiative (BERI) that handled grantmaking by Jaan Tallinn; see http://existence.org/tallinn-grants-future/. As such, this grant to FLI may represent a followup to past grants by BERI to FLI.
Intended use of funds (category): Organizational general support
Donor reason for selecting the donee: This grant may represent a followup to past grants by BERI to FLI.
Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; the Survival and Flourishing Fund is making its first round of grants in August 2019.
Percentage of total donor spend in the corresponding batch of donations: 100.00%. Announced: 2019-08-29.
|Berkeley Existential Risk Initiative||50,000.00||9||AI safety||http://web.archive.org/web/20190623203105/http://existence.org/grants/||--|
|Open Philanthropy Project||250,000.00||3||Global catastrophic risks||https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2018||Nick Beckstead||Intended use of funds (category): Organizational general support
Intended use of funds: Grant for general support. It is a renewal of the May 2017 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017 whose primary purpose was to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant
Donor retrospective of the donation: The followup grant in 2019 suggests that Open Phil would continue to stand by its assessment of the grantee. Announced: 2018-07-05.
|Berkeley Existential Risk Initiative||300,000.00||2||AI safety||https://web.archive.org/web/20180905034853/http://existence.org/organization-grants/ https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/||--||General support.|
|Berkeley Existential Risk Initiative||100,000.00||5||AI safety||https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/||--|
|Berkeley Existential Risk Initiative||50,000.00||9||AI safety||https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/||--||For general support. See announcement at http://existence.org/2017/11/03/activity-update-october-2017.html.|
|Open Philanthropy Project||100,000.00||5||Global catastrophic risks/AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2017||Nick Beckstead||Intended use of funds (category): Organizational general support
Intended use of funds: Grant for general support. However, the primary use of the grant will be to administer a request for proposals in AI safety similar to a request for proposals in 2015 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant
Donor retrospective of the donation: The followup grants in 2018 and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee. Announced: 2017-09-27.
|Open Philanthropy Project||100,000.00||5||Global catastrophic risks/general research||https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support||--||Donation process: According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support#Our_process "Following our collaboration last year, we kept in touch with FLI regarding its funding situation and plans for future activities."
Intended use of funds (category): Organizational general support
Intended use of funds: Main planned activities for 2016 include: news operation, nuclear weapons campaign, AI safety conference, and AI conference travel.
Donor reason for selecting the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support#The_case_for_the_grant says: "In organizing its 2015 [Puerto Rico] AI safety conference (which we attended), FLI demonstrated a combination of network, ability to execute, and values that impressed us. We felt that the conference was well-organized, attracted the attention of high-profile individuals who had not previously demonstrated an interest in AI safety, and seemed to lead many of those individuals to take the issue more seriously." There is more detail in the grant page, as well as a list of reservations about the grant.
Donor reason for donating at this time (rather than earlier or later): Open Phil needed enough time to evaluate the results of its first Future of Life Institute grant that was focused on AI safety, and to see the effects of the Puerto Rico 2015 AI safety conference. Timing also likely determined by FLI explicitly seeking more money to meet its budget.
Donor thoughts on making further donations to the donee: According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support#Key_questions_for_follow-up "We expect to have a conversation with FLI staff every 3-6 months for the next 12 months. After that, we plan to consider renewal." A list of questions is included.
Donor retrospective of the donation: The followup grants in 2017, 2018, and 2019, for similar or larger amounts, suggest that Open Phil would continue to stand by its assessment of the grantee. Announced: 2016-03-18.
|EA Giving Group||--||--||Global catastrophic risks||https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit||Nick Beckstead||Actual date range: December 2015 to February 2016. Exact date, amount, or fraction not known, but it is the donee with the fourth highest amount donated out of six donees in this period.|
|Open Philanthropy Project||1,186,000.00||1||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction||--||Grant accompanied a grant by Elon Musk to FLI for the same purpose. See also the March 2015 blog post https://www.openphilanthropy.org/blog/open-philanthropy-project-update-global-catastrophic-risks that describes strategy and developments prior to the grant. An update on the grant was posted in April 2017 at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/update-fli-grant discussing Howie Lempel's and Daniel Dewey's impressions of the grant, of its effect on Open Phil, and of Open Phil's role. Announced: 2015-08-26.|
|EA Giving Group||--||--||Global catastrophic risks||https://docs.google.com/spreadsheets/d/1H2hF3SaO0_QViYq2j1E7mwoz3sDjdf9pdtuBcrq7pRU/edit||Nick Beckstead||Actual date range: December 2014 to December 2015. Exact date, amount, or fraction not known, but it is the donee with the second highest amount donated out of eight donees in this period.|