Effective Altruism Funds: Long-Term Future Fund donations made to Machine Intelligence Research Institute

This is an online portal with information on publicly announced donations (or donations shared with permission) that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

Item | Value
Country | United Kingdom
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | Centre for Effective Altruism
Website | https://app.effectivealtruism.org/funds/far-future
Donations URL | https://app.effectivealtruism.org/
Regularity with which donor updates donations data | irregular
Regularity with which Donations List Website updates donations data (after donor update) | irregular
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)

Brief history: This is one of four Effective Altruism Funds that are a program of the Centre for Effective Altruism (CEA). The creation of the funds was inspired by the success of the EA Giving Group donor-advised fund run by Nick Beckstead, and also by the donor lottery run in December 2016 by Paul Christiano and Carl Shulman (see https://forum.effectivealtruism.org/posts/WvPEitTCM8ueYPeeH/donor-lotteries-demonstration-and-faq (GW, IR) for more). EA Funds were introduced on 2017-02-09 in the post https://forum.effectivealtruism.org/posts/a8eng4PbME85vdoep/introducing-the-ea-funds (GW, IR) and launched in the post https://forum.effectivealtruism.org/posts/iYoSAXhodpxJFwdQz/ea-funds-beta-launch (GW, IR) on 2017-02-28. The first round of allocations was announced at https://forum.effectivealtruism.org/posts/MsaS8JKrR8nnxyPkK/update-on-effective-altruism-funds (GW, IR) on 2017-04-20. The funds' allocation information appears to have next been updated in November 2017; see https://www.facebook.com/groups/effective.altruists/permalink/1606722932717391/ for more. This particular fund was previously called the Far Future Fund; it was renamed the Long-Term Future Fund to more accurately reflect its focus.

Brief notes on broad donor philosophy and major focus areas: As the name suggests, the Fund's focus area is activities that could significantly affect the long-term future. Historically, the Fund has focused on areas such as AI safety and epistemic institutions, though it has also made grants related to biosecurity and other global catastrophic risks. At inception, the Fund had Nick Beckstead of Open Philanthropy as its sole manager. Beckstead stepped down in August 2018, and in October 2018 the post https://forum.effectivealtruism.org/posts/yYHKRgLk9ufjJZn23/announcing-new-ea-funds-management-teams (GW, IR) announced a new management team for the Fund, comprising chair Matt Fallshaw and team members Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu, with advisors Nick Beckstead and Jonas Vollmer.

Notes on grant decision logistics: Money from the Fund is supposed to be granted about thrice a year, with the target months being November, February, and June. Actual grant months may differ from the target months. The amount of money granted in each decision cycle depends on the amount of money available in the Fund as well as on the available donation opportunities. Grant applications can be submitted at any time; applications received before a grant round's deadline are considered in that round.

Notes on grant publication logistics: Grant details are published on the EA Funds website, and linked to from the Fund page. Each grant is accompanied by a brief description of the grantee's work (and hence, the intended use of funds) as well as reasons the grantee was considered impressive. In April 2019, the write-up for each grant at https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl had just one author (rather than group authorship), likely the management team member who did the most work on that particular grant. Grant write-ups vary greatly in length; in April 2019, the write-ups by Oliver Habryka were the most thorough.

Notes on grant financing: Money in the Long-Term Future Fund only includes funds explicitly donated for that Fund. In each grant round, the amount of money that can be allocated is limited by the balance available in the fund at that time.

This entity is also a donee.

Full donor page for donor Effective Altruism Funds: Long-Term Future Fund

Basic donee information

Item | Value
Country | United States
Facebook page | MachineIntelligenceResearchInstitute
Website | https://intelligence.org
Donate page | https://intelligence.org/donate/
Donors list page | https://intelligence.org/topdonors/
Transparency and financials page | https://intelligence.org/transparency/
Donation case page | https://forum.effectivealtruism.org/posts/EKfjh5W7PkykLM7eG/miri-update-and-fundraising-case-1
Twitter username | MIRIBerkeley
Wikipedia page | https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
Charity Navigator page | https://www.charitynavigator.org/index.cfm?bay=search.profile&ein=582565917
Guidestar page | https://www.guidestar.org/profile/58-2565917
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute
Org Watch page | https://orgwatch.issarice.com/?organization=Machine+Intelligence+Research+Institute
Key people | Eliezer Yudkowsky, Nate Soares, Luke Muehlhauser
Launch date | 2000

This entity is also a donor.

Full donee page for donee Machine Intelligence Research Institute

Donor–donee relationship

Item | Value

Donor–donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 4 | 50,000 | 169,749 | 40,000 | 40,000 | 40,000 | 50,000 | 50,000 | 50,000 | 100,000 | 100,000 | 488,994 | 488,994 | 488,994
AI safety | 4 | 50,000 | 169,749 | 40,000 | 40,000 | 40,000 | 50,000 | 50,000 | 50,000 | 100,000 | 100,000 | 488,994 | 488,994 | 488,994
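For reference, the summary statistics above can be recomputed from the four underlying donation amounts ($40,000, $50,000, $100,000, and $488,994). The following is a minimal sketch, assuming a nearest-rank percentile convention; the site's exact convention is not documented on this page, but the displayed values are consistent with this one.

import math

# The four donations from this donor to MIRI, in current USD.
donations = sorted([40_000, 50_000, 100_000, 488_994])

def nearest_rank_percentile(sorted_values, p):
    # Assumed convention (nearest rank): smallest value such that at least
    # p percent of the data is less than or equal to it.
    n = len(sorted_values)
    rank = max(1, math.ceil(p / 100 * n))  # 1-based rank
    return sorted_values[rank - 1]

mean = sum(donations) / len(donations)
print(f"Mean: {mean:,.1f}")  # 169,748.5; the table displays the rounded value 169,749
print(f"Median (50th percentile): {nearest_rank_percentile(donations, 50):,}")  # 50,000
for p in (10, 20, 30, 40, 60, 70, 80, 90):
    print(f"{p}th percentile: {nearest_rank_percentile(donations, p):,}")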

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area | Number of donations | Total | 2020 | 2019 | 2018
AI safety (filter this donor) | 4 | 678,994.00 | 100,000.00 | 50,000.00 | 528,994.00
Total | 4 | 678,994.00 | 100,000.00 | 50,000.00 | 528,994.00
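As a consistency check on the table above: the 2018 column (528,994.00) is the sum of the two 2018 grants listed later on this page (488,994.00 + 40,000.00 = 528,994.00), and the overall total is 100,000.00 + 50,000.00 + 528,994.00 = 678,994.00.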

Graph of spending by cause area and year (incremental, not cumulative)

[Graph not included in this text version.]

Graph of spending by cause area and year (cumulative)

[Graph not included in this text version.]

Full list of documents in reverse chronological order (3 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors, donees, and influencers | Document scope | Cause area | Notes
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | Publication date: 2021-12-23 | Author: Larks | Publisher: Effective Altruism Forum | Affected donors, donees, and influencers: Larks, Effective Altruism Funds: Long-Term Future Fund, Survival and Flourishing Fund, FTX Future Fund, Future of Humanity Institute, Future of Humanity Institute, Centre for the Governance of AI, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Google Deepmind, Anthropic, Alignment Research Center, Redwood Research, Ought, AI Impacts, Global Priorities Institute, Center on Long-Term Risk, Centre for Long-Term Resilience, Rethink Priorities, Convergence Analysis, Stanford Existential Risk Initiative, Effective Altruism Funds: Long-Term Future Fund, Berkeley Existential Risk Initiative, 80,000 Hours, Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, i.e., where the organization would like to see funds go, other than to itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | Publication date: 2020-12-21 | Author: Larks | Publisher: Effective Altruism Forum | Affected donors, donees, and influencers: Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Berkeley Existential Risk Initiative, Ought, Global Priorities Institute, Center on Long-Term Risk, Center for Security and Emerging Technology, AI Impacts, Leverhulme Centre for the Future of Intelligence, AI Safety Camp, Future of Life Institute, Convergence Analysis, Median Group, AI Pulse, 80,000 Hours, Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | Publication date: 2019-12-19 | Author: Larks | Publisher: Effective Altruism Forum | Affected donors, donees, and influencers: Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse, Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.

Full list of donations in reverse chronological order (4 donations)

Graph of all donations (with known year of donation), showing the timeframe of donations

[Graph not included in this text version.]
Amount (current USD) | Amount rank (out of 4) | Donation date | Cause area | URL | Influencer | Notes
100,000.00 | Amount rank: 2 | Donation date: 2020-04-14 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations | Influencers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave

Intended use of funds (category): Organizational general support

Other notes: In the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ MIRI mentions the grant along with a $7.7 million grant from the Open Philanthropy Project and a $300,000 grant from Berkeley Existential Risk Initiative. Percentage of total donor spend in the corresponding batch of donations: 20.48%.
50,000.00 | Amount rank: 3 | Donation date: 2019-03-20 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted a grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Grant investigator and influencer Oliver Habryka believes that MIRI is making real progress in its approach of "creating a fundamental piece of theory that helps humanity to understand a wide range of powerful phenomena." He notes that MIRI started work on the alignment problem long before it became cool, which gives him more confidence that they will do the right thing and that even their seemingly weird actions may be justified in ways that are not yet obvious. He also thinks that both the research team and the ops staff are quite competent.

Donor reason for donating that amount (rather than a bigger or smaller amount): Habryka offers the following reasons for giving a grant of just $50,000, which is small relative to the grantee's budget: (1) MIRI is in a solid position funding-wise, and marginal use of money may be lower-impact. (2) There is a case for investing in helping grow a larger and more diverse set of organizations, as opposed to putting money into a few stable and well-funded organizations.
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Oliver Habryka writes: "I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). Despite his positive assessment, Habryka recommends a relatively small grant to MIRI, because MIRI is already relatively well-funded and is not heavily bottlenecked on funding. He nonetheless decides to grant some amount, and says he will think more about the marginal value of further funding before the next grant round.
40,000.00 | Amount rank: 4 | Donation date: 2018-11-29 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Influencers: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Donation process: Donee submitted a grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and to MIRI's 2018 fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/ saying "According to their fundraiser post, MIRI believes it will be able to find productive uses for additional funding, and gives examples of ways additional funding was used to support their work this year."

Donor reason for selecting the donee: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and says "We believe that this research represents one promising approach to AI alignment research."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor retrospective of the donation: The Long-Term Future Fund would make a similarly sized grant ($50,000) in its next grant round in April 2019, suggesting that it was satisfied with the outcome of the grant

Other notes: Percentage of total donor spend in the corresponding batch of donations: 41.88%.
488,994.00 | Amount rank: 1 | Donation date: 2018-08-14 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/july-2018-long-term-future-fund-grants | Influencer: Nick Beckstead

Donation process: The grant from the EA Long-Term Future Fund is part of a final set of grant decisions made by Nick Beckstead (granting $526,000 from the EA Meta Fund and $917,000 from the EA Long-Term Future Fund) as he transitioned out of managing both funds. Due to time constraints, Beckstead relied primarily on the investigation of the organization that the Open Philanthropy Project conducted for its 2017 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017

Intended use of funds (category): Organizational general support

Intended use of funds: Beckstead writes "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare), due to a sense that (i) their work is otherwise much less funding constrained than it used to be, and (ii) spending like this would better reflect the value of staff time and increase staff satisfaction. However, I also told them that I was open to them using these funds to accomplish this objective indirectly (e.g. through salary increases) or using the funds for another purpose if that seemed better to them."

Donor reason for selecting the donee: The grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 for Beckstead's opinion of the donee.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "The amounts I’m granting out to different organizations are roughly proportional to the number of staff they have, with some skew towards MIRI that reflects greater EA Funds donor interest in the Long-Term Future Fund." Also: "I think a number of these organizations could qualify for the criteria of either the Long-Term Future Fund or the EA Community Fund because of their dual focus on EA and longtermism, which is part of the reason that 80,000 Hours is receiving a grant from each fund."
Percentage of total donor spend in the corresponding batch of donations: 53.32%
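As a rough consistency check on this percentage: the donation process note above mentions that Beckstead granted about $917,000 from the EA Long-Term Future Fund in this final set of decisions, and 488,994 / 917,000 ≈ 53.3%, in line with the 53.32% figure (the $917,000 is itself a rounded amount).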

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of this round of grants, which is in turn determined by the need for Beckstead to grant out the money before handing over management of the fund

Donor retrospective of the donation: Even after fund management moved to a new team, the EA Long-Term Future Fund would continue making grants to MIRI.