Effective Altruism Funds: Long-Term Future Fund donations made to Machine Intelligence Research Institute

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donor information

We do not have any donor information for the donor Effective Altruism Funds: Long-Term Future Fund in our system.

This entity is also a donee.

Full donor page for donor Effective Altruism Funds: Long-Term Future Fund

Basic donee information

Country: United States
Facebook page: MachineIntelligenceResearchInstitute
Website: https://intelligence.org
Donate page: https://intelligence.org/donate/
Donors list page: https://intelligence.org/topdonors/
Transparency and financials page: https://intelligence.org/transparency/
Donation case page: http://effective-altruism.com/ea/12n/miri_update_and_fundraising_case/
Twitter username: MIRIBerkeley
Wikipedia page: https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute
Open Philanthropy Project grant review: http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
Charity Navigator page: https://www.charitynavigator.org/index.cfm?bay=search.profile&ein=582565917
Guidestar page: https://www.guidestar.org/profile/58-2565917
Timelines wiki page: https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute
Org Watch page: https://orgwatch.issarice.com/?organization=Machine+Intelligence+Research+Institute
Key people: Eliezer Yudkowsky, Nate Soares, Luke Muehlhauser
Launch date: 2000

This entity is also a donor.

Full donee page for donee Machine Intelligence Research Institute

Donor–donee relationship


Donor–donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 4 | 50,000 | 169,749 | 40,000 | 40,000 | 40,000 | 50,000 | 50,000 | 50,000 | 100,000 | 100,000 | 488,994 | 488,994 | 488,994
AI safety | 4 | 50,000 | 169,749 | 40,000 | 40,000 | 40,000 | 50,000 | 50,000 | 50,000 | 100,000 | 100,000 | 488,994 | 488,994 | 488,994
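For reference, every cell in the table above can be reproduced from the four donation amounts using a nearest-rank percentile rule; that rule is our inference from the table values, not a documented choice of the portal. A minimal sketch in Python, illustrating the arithmetic rather than the portal's actual code:

```python
import math

# The four donations from this donor to this donee, in current USD
# (taken from the donation list further down this page).
amounts = sorted([40_000, 50_000, 100_000, 488_994])

def percentile_nearest_rank(values, p):
    """Nearest-rank percentile: the smallest value with at least p%
    of the (sorted) data at or below it."""
    rank = max(1, math.ceil(p / 100 * len(values)))
    return values[rank - 1]

print(f"Count:  {len(amounts)}")
print(f"Mean:   {sum(amounts) / len(amounts):,.1f}")        # 169,748.5; shown rounded as 169,749
print(f"Median: {percentile_nearest_rank(amounts, 50):,}")  # 50,000
for p in range(10, 100, 10):
    print(f"{p}th percentile: {percentile_nearest_rank(amounts, p):,}")
print(f"Minimum: {amounts[0]:,}  Maximum: {amounts[-1]:,}")
```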

Donation amounts by cause area and year

Note: The cause area classification used here may not match the one used by the donor in all cases.

Cause area | Number of donations | Total | 2020 | 2019 | 2018
AI safety | 4 | 678,994.00 | 100,000.00 | 50,000.00 | 528,994.00
Total | 4 | 678,994.00 | 100,000.00 | 50,000.00 | 528,994.00
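The per-year totals are simple sums over the four donations listed later on this page. A minimal illustrative sketch of that aggregation (again, not the portal's actual code):

```python
from collections import defaultdict

# (donation date, amount in current USD), from the donation list below.
donations = [
    ("2020-04-14", 100_000.00),
    ("2019-04-07", 50_000.00),
    ("2018-11-29", 40_000.00),
    ("2018-08-14", 488_994.00),
]

totals_by_year = defaultdict(float)
for date, amount in donations:
    totals_by_year[date[:4]] += amount  # group by the year prefix of the ISO date

for year in sorted(totals_by_year, reverse=True):
    print(f"{year}: {totals_by_year[year]:,.2f}")
# 2020: 100,000.00
# 2019: 50,000.00
# 2018: 528,994.00
```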

Graph of spending by cause area and year (incremental, not cumulative)

(Interactive graph omitted from this text version.)

Graph of spending by cause area and year (cumulative)

(Interactive graph omitted from this text version.)

Full list of donations in reverse chronological order (4 donations)

Donation 1 (of 4):
Amount (current USD): 100,000.00
Amount rank (out of 4): 2
Donation date: 2020-04-14
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/3waQ7rp3Bfy4Lwr5sZP9TP
Influencer: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave

Intended use of funds (category): Organizational general support

Other notes: In the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ MIRI mentions the grant alongside a $7.7 million grant from the Open Philanthropy Project and a $300,000 grant from the Berkeley Existential Risk Initiative. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Donation 2 (of 4):
Amount (current USD): 50,000.00
Amount rank (out of 4): 3
Donation date: 2019-04-07
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl
Influencer: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted a grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Grant investigator and influencer Oliver Habryka believes that MIRI is making real progress in its approach of "creating a fundamental piece of theory that helps humanity to understand a wide range of powerful phenomena". He notes that MIRI started work on the alignment problem long before it became cool, which gives him more confidence that it will do the right thing and that even its seemingly weird actions may be justified in ways that are not yet obvious. He also thinks that both the research team and the ops staff are quite competent.

Donor reason for donating that amount (rather than a bigger or smaller amount): Habryka offers the following reasons for giving a grant of just $50,000, which is small relative to the grantee's budget: (1) MIRI is in a solid position funding-wise, so the marginal use of money may be lower-impact; (2) there is a case for investing in growing a larger and more diverse set of organizations, as opposed to putting money into a few stable and well-funded organizations.
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Oliver Habryka writes: "I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round."

Other notes: The grant reasoning, written up by Oliver Habryka, is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions. Despite his positive assessment, Habryka recommends a relatively small grant because MIRI is already relatively well-funded and not heavily bottlenecked on funding; he plans to think more about the marginal value of funding MIRI before the next grant round.
Donation 3 (of 4):
Amount (current USD): 40,000.00
Amount rank (out of 4): 4
Donation date: 2018-11-29
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi
Influencer: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Donation process: Donee submitted a grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and to MIRI's 2018 fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/ saying "According to their fundraiser post, MIRI believes it will be able to find productive uses for additional funding, and gives examples of ways additional funding was used to support their work this year."

Donor reason for selecting the donee: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and says "We believe that this research represents one promising approach to AI alignment research."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor retrospective of the donation: The Long-Term Future Fund would make a similarly sized grant ($50,000) in its next grant round in April 2019, suggesting that it was satisfied with the outcome of this grant.
Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Donation 4 (of 4):
Amount (current USD): 488,994.00
Amount rank (out of 4): 1
Donation date: 2018-08-14
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/6g4f7iae5Ok6K6YOaAiyK0
Influencer: Nick Beckstead

Donation process: The grant from the EA Long-Term Future Fund is part of a final set of grant decisions made by Nick Beckstead (granting $526,000 from the EA Meta Fund and $917,000 from the EA Long-Term Future Fund) as he transitioned out of managing both funds. Due to time constraints, Beckstead relied primarily on the investigation of the organization that the Open Philanthropy Project did when making its 2017 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017

Intended use of funds (category): Organizational general support

Intended use of funds: Beckstead writes "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare), due to a sense that (i) their work is otherwise much less funding constrained than it used to be, and (ii) spending like this would better reflect the value of staff time and increase staff satisfaction. However, I also told them that I was open to them using these funds to accomplish this objective indirectly (e.g. through salary increases) or using the funds for another purpose if that seemed better to them."

Donor reason for selecting the donee: The grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 for Beckstead's opinion of the donee.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "The amounts I’m granting out to different organizations are roughly proportional to the number of staff they have, with some skew towards MIRI that reflects greater EA Funds donor interest in the Long-Term Future Fund." Also: "I think a number of these organizations could qualify for the criteria of either the Long-Term Future Fund or the EA Community Fund because of their dual focus on EA and longtermism, which is part of the reason that 80,000 Hours is receiving a grant from each fund."
Percentage of total donor spend in the corresponding batch of donations: 53.32%
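(As an arithmetic cross-check of ours, not a claim from the grant page: 488,994 / 917,000 ≈ 53.3%, so the stated percentage is consistent with the $917,000 Long-Term Future Fund total mentioned in the donation process note above.)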

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of this round of grants, which is in turn determined by the need for Beckstead to grant out the money before handing over management of the fund

Donor retrospective of the donation: Even after fund management moved to a new team, the EA Long-Term Future Fund would continue making grants to MIRI.