Foundational Research Institute donations received

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, who also continues to contribute (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of July 2025. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | Germany
Facebook page | FoundationalResearch
Website | https://foundational-research.org/
Donate page | https://foundational-research.org/donate
Transparency and financials page | https://foundational-research.org/transparency
Donation case page | https://foundational-research.org/the-case-for-suffering-focused-ethics/
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Foundational_Research_Institute
Org Watch page | https://orgwatch.issarice.com/?organization=Foundational+Research+Institute
Key people | Max Daniel, Brian Tomasik
Launch date | 2013-07-31

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 4 | 39 | 343 | 5 | 5 | 5 | 39 | 39 | 39 | 327 | 327 | 1,000 | 1,000 | 1,000
(unspecified) | 2 | 5 | 166 | 5 | 5 | 5 | 5 | 5 | 5 | 327 | 327 | 327 | 327 | 327
Suffering-focused philosophy | 2 | 39 | 520 | 39 | 39 | 39 | 39 | 39 | 39 | 1,000 | 1,000 | 1,000 | 1,000 | 1,000
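
The percentile columns are coarse because there are only four donations. One simple rule reproduces every entry in the table above: report the smallest donation at or above the given fraction of the sorted list, rounding the result to a whole number of dollars. The sketch below is an illustrative reconstruction of that rule, not the portal's actual code (which lives in the GitHub repository mentioned at the top); the donation amounts are copied from the full donations list at the bottom of this page.

    import math

    # Donation amounts in current USD, from the full donations list below.
    donations = [326.66, 39.44, 1000.00, 5.00]

    def percentile(values, p):
        """Smallest value at or above the p-th fraction of the sorted list
        (an inverted-ECDF / lower-value rule)."""
        ordered = sorted(values)
        rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-indexed rank
        return ordered[rank - 1]

    print("Count:", len(donations))                        # 4
    print("Mean:", round(sum(donations) / len(donations)))  # 343
    print("Median:", round(percentile(donations, 50)))      # 39
    for p in (0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100):
        print(f"{p}th percentile:", round(percentile(donations, p)))

The same rule, applied to the two donations in each cause-area row, also reproduces the per-cause rows above.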

Donation amounts by donor and year for donee Foundational Research Institute

Donor | Total | 2016 | 2014
Nicholas Link | 1,000.00 | 1,000.00 | 0.00
Pablo Stafforini | 331.66 | 326.66 | 5.00
Vidur Kapur | 39.44 | 39.44 | 0.00
Total | 1,371.10 | 1,366.10 | 5.00
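
The table above is a pivot of the individual donations by donor and year, with a per-donor total and a grand-total row. A minimal sketch of that rollup (illustrative only; the donation tuples are copied from the full donations list at the bottom of this page):

    from collections import defaultdict

    # (donor, year, amount in current USD), from the full donations list below.
    records = [
        ("Pablo Stafforini", 2016, 326.66),
        ("Vidur Kapur",      2016, 39.44),
        ("Nicholas Link",    2016, 1000.00),
        ("Pablo Stafforini", 2014, 5.00),
    ]

    # Pivot: donor -> year -> summed amount.
    pivot = defaultdict(lambda: defaultdict(float))
    for donor, year, amount in records:
        pivot[donor][year] += amount

    years = sorted({year for _, year, _ in records}, reverse=True)  # [2016, 2014]
    # Donors sorted by descending total, as in the table above.
    for donor in sorted(pivot, key=lambda d: -sum(pivot[d].values())):
        total = sum(pivot[donor].values())
        cells = " ".join(f"{pivot[donor][y]:>8.2f}" for y in years)
        print(f"{donor:<20} {total:>8.2f} {cells}")
    grand = [sum(pivot[d][y] for d in pivot) for y in years]
    print(f"{'Total':<20} {sum(grand):>8.2f} " + " ".join(f"{g:>8.2f}" for g in grand))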

Full list of documents in reverse chronological order (7 documents)

Each document below is listed with its title (URL linked on the site), publication date, author, publisher, affected donors/donees/influencers, document scope, cause area, and notes.

Title: 2019 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2019-12-19 | Author: Larks | Publisher: Effective Altruism Forum
Affected donors: Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund
Affected donees: Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; Ought; OpenAI; AI Safety Camp; Future of Life Institute; AI Impacts; Global Priorities Institute; Foundational Research Institute; Median Group; Center for Security and Emerging Technology; Leverhulme Centre for the Future of Intelligence; Berkeley Existential Risk Initiative; AI Pulse
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.

Title: 2018 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2018-12-17 | Author: Larks | Publisher: Effective Altruism Forum
Affected donors: Larks
Affected donees: Machine Intelligence Research Institute; Future of Humanity Institute; Center for Human-Compatible AI; Centre for the Study of Existential Risk; Global Catastrophic Risk Institute; Global Priorities Institute; Australian National University; Berkeley Existential Risk Initiative; Ought; AI Impacts; OpenAI; Effective Altruism Foundation; Foundational Research Institute; Median Group; Convergence Analysis
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs. far safety research, other existential risks, financial reserves, donation matching, poor-quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."

Title: Effective Altruism Foundation update: Plans for 2018 and room for more funding (GW, IR)
Publication date: 2017-12-15 | Author: Jonas Vollmer | Publisher: Effective Altruism Foundation
Affected donees: Effective Altruism Foundation; Raising for Effective Giving; Foundational Research Institute; Wild-Animal Suffering Research
Document scope: Donee donation case
Cause area: Effective altruism/movement growth/s-risk reduction
Notes: The document describes the 2018 plan and room for more funding of the Effective Altruism Foundation. Subsidiaries include Raising for Effective Giving, Foundational Research Institute, and Wild-Animal Suffering Research. Also cross-posted at https://ea-foundation.org/blog/our-plans-for-2018/ (own blog).

Title: Fear and Loathing at Effective Altruism Global 2017
Publication date: 2017-08-16 | Author: Scott Alexander | Publisher: Slate Star Codex
Affected donors: Open Philanthropy; GiveWell
Affected donees: Centre for Effective Altruism; Center for Effective Global Action; Raising for Effective Giving; 80,000 Hours; Wild-Animal Suffering Research; Qualia Research Institute; Foundational Research Institute
Document scope: Miscellaneous commentary
Notes: Scott Alexander describes his experience at Effective Altruism Global 2017. He describes how the effective altruism movement has both the formal-looking, "suits" people who are in charge of large amounts of money, and the "weirdos" who are toying around with ideas that seem strange and are not mainstream even within effective altruism. However, he feels that rather than being two separate groups, the two groups blend into and overlap with each other. He sees this as a sign that the effective altruism movement is composed of genuinely good people who are looking to make a difference, and explains why he thinks they are succeeding.

Title: Introducing CEA’s Guiding Principles
Publication date: 2017-03-07 | Author: William MacAskill | Publisher: Centre for Effective Altruism
Affected donors: Effective Altruism Foundation
Affected donees: Rethink Charity; Centre for Effective Altruism; 80,000 Hours; Animal Charity Evaluators; Charity Science; Effective Altruism Foundation; Foundational Research Institute; Future of Life Institute; Raising for Effective Giving; The Life You Can Save
Document scope: Miscellaneous commentary
Cause area: Effective altruism
Notes: William MacAskill outlines CEA's understanding of the guiding principles of effective altruism: commitment to others, scientific mindset, openness, integrity, and collaborative spirit. The post also lists other organizations that voice their support for these definitions and guiding principles, including: .impact, 80,000 Hours, Animal Charity Evaluators, Charity Science, Effective Altruism Foundation, Foundational Research Institute, Future of Life Institute, Raising for Effective Giving, and The Life You Can Save. The following individuals are also listed as voicing their support for the definition and guiding principles: Elie Hassenfeld of GiveWell and the Open Philanthropy Project, Holden Karnofsky of GiveWell and the Open Philanthropy Project, Toby Ord of the Future of Humanity Institute, Nate Soares of the Machine Intelligence Research Institute, and Peter Singer. William MacAskill worked on the document with Julia Wise, and also expresses gratitude to Rob Bensinger and Hilary Mayhew for their comments and wording suggestions. The post also briefly mentions an advisory panel set up by Julia Wise, and links to https://forum.effectivealtruism.org/posts/mdMyPRSSzYgk7X45K/advisory-panel-at-cea (GW, IR) for more detail.

Title: CEA Staff Donation Decisions 2016
Publication date: 2016-12-06 | Author: Sam Deere | Publisher: Centre for Effective Altruism
Affected donors: William MacAskill; Michelle Hutchinson; Tara MacAulay; Alison Woodman; Seb Farquhar; Hauke Hillebrandt; Marinella Capriati; Sam Deere; Max Dalton; Larissa Hesketh-Rowe; Michael Page; Stefan Schubert; Pablo Stafforini; Amy Labenz
Affected donees: Centre for Effective Altruism; 80,000 Hours; Against Malaria Foundation; Schistosomiasis Control Initiative; Animal Charity Evaluators; Charity Science Health; New Incentives; Project Healthy Children; Deworm the World Initiative; Machine Intelligence Research Institute; StrongMinds; Future of Humanity Institute; Future of Life Institute; Centre for the Study of Existential Risk; Effective Altruism Foundation; Sci-Hub; Vote.org; The Humane League; Foundational Research Institute
Document scope: Periodic donation list documentation
Notes: Centre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.

Title: My Cause Selection: Michael Dickens
Publication date: 2015-09-15 | Author: Michael Dickens | Publisher: Effective Altruism Forum
Affected donors: Michael Dickens
Affected donees: Machine Intelligence Research Institute; Future of Humanity Institute; Centre for the Study of Existential Risk; Future of Life Institute; Open Philanthropy; Animal Charity Evaluators; Animal Ethics; Foundational Research Institute; Giving What We Can; Charity Science; Raising for Effective Giving
Document scope: Single donation documentation
Cause area: Animal welfare, AI risk, Effective altruism
Notes: Explanation by Dickens of his giving choice for 2015. After some consideration, he narrows the choice to three organizations: MIRI, ACE, and REG, and finally chooses REG due to its weighted donation multiplier.

Full list of donations in reverse chronological order (4 donations)

[Graph: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations]
Donor | Amount (current USD) | Amount rank (out of 4) | Donation date | Cause area | URL | Influencer | Notes
Pablo Stafforini | 326.66 | 2 | 2016-06-01 | -- | http://www.stafforini.com/blog/donations/ | -- | --
Vidur Kapur | 39.44 | 3 | 2016 | Suffering-focused philosophy | https://github.com/peterhurford/ea-data/ | -- | Currency info: donation given as 30.00 GBP (conversion done on 2017-08-05 via Fixer.io).
Nicholas Link | 1,000.00 | 1 | 2016 | Suffering-focused philosophy | https://github.com/peterhurford/ea-data/ | -- | Currency info: donation given as 1,000.00 USD (conversion done on 2017-08-05 via Fixer.io).
Pablo Stafforini | 5.00 | 4 | 2014-07-09 | -- | http://www.stafforini.com/blog/donations/ | -- | Reward in exchange for feedback.
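
The "Currency info" notes in the table above record that non-USD amounts were converted to USD using Fixer.io rates for 2017-08-05. Below is a hedged sketch of such a historical lookup. It assumes Fixer.io's current data.fixer.io endpoint, whose free tier serves only EUR-based rates (hence the cross rate), and ACCESS_KEY is a hypothetical placeholder; the 2017 conversions may well have used the older, keyless api.fixer.io endpoint instead.

    import requests

    ACCESS_KEY = "YOUR_FIXER_API_KEY"  # hypothetical placeholder, not a real key

    def gbp_to_usd(amount_gbp, date="2017-08-05"):
        """Convert GBP to USD at a historical rate via Fixer.io.

        The free tier only serves EUR-based rates, so the GBP->USD
        cross rate is derived from EUR->GBP and EUR->USD.
        """
        resp = requests.get(
            f"http://data.fixer.io/api/{date}",
            params={"access_key": ACCESS_KEY, "symbols": "GBP,USD"},
            timeout=10,
        )
        rates = resp.json()["rates"]
        return amount_gbp * rates["USD"] / rates["GBP"]

    # Vidur Kapur's donation: 30.00 GBP -> about 39.44 USD at 2017-08-05 rates.
    print(round(gbp_to_usd(30.00), 2))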