Effective Altruism Funds: Long-Term Future Fund donations made to Center for Human-Compatible AI

This is an online portal with information on donations that were announced publicly (or shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, tutorial in README, request for feedback to EA Forum.


Basic donor information

Item | Value
Country | United Kingdom
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | Centre for Effective Altruism
Website | https://app.effectivealtruism.org/funds/far-future
Donations URL | https://app.effectivealtruism.org/
Regularity with which donor updates donations data | irregular
Regularity with which Donations List Website updates donations data (after donor update) | irregular
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)

Brief history: This is one of four Effective Altruism Funds that are a program of the Centre for Effective Altruism (CEA). The creation of the funds was inspired by the success of the EA Giving Group donor-advised fund run by Nick Beckstead, and also by the donor lottery run in December 2016 by Paul Christiano and Carl Shulman (see https://forum.effectivealtruism.org/posts/WvPEitTCM8ueYPeeH/donor-lotteries-demonstration-and-faq (GW, IR) for more). EA Funds were introduced on 2017-02-09 in the post https://forum.effectivealtruism.org/posts/a8eng4PbME85vdoep/introducing-the-ea-funds (GW, IR) and launched in the post https://forum.effectivealtruism.org/posts/iYoSAXhodpxJFwdQz/ea-funds-beta-launch (GW, IR) on 2017-02-28. The first round of allocations was announced at https://forum.effectivealtruism.org/posts/MsaS8JKrR8nnxyPkK/update-on-effective-altruism-funds (GW, IR) on 2017-04-20. The fund allocation information appears to have next been updated in November 2017; see https://www.facebook.com/groups/effective.altruists/permalink/1606722932717391/ for more. This particular fund was previously called the Far Future Fund; it was renamed to the Long-Term Future Fund to more accurately reflect its focus.

Brief notes on broad donor philosophy and major focus areas: As the name suggests, the Fund's focus area is activities that could significantly affect the long-term future. Historically, the Fund has focused on areas such as AI safety and epistemic institutions, though it has also made grants related to biosecurity and other global catastrophic risks. At inception, the Fund had Nick Beckstead of Open Philanthropy as its sole manager. Beckstead stepped down in August 2018; in October 2018, https://forum.effectivealtruism.org/posts/yYHKRgLk9ufjJZn23/announcing-new-ea-funds-management-teams (GW, IR) announced a new management team for the Fund, comprising chair Matt Fallshaw; team members Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu; and advisors Nick Beckstead and Jonas Vollmer.

Notes on grant decision logistics: Money from the fund is supposed to be granted about thrice a year, with the target months being November, February, and June. Actual grant months may differ from the target months. The amount of money granted with each decision cycle depends on the amount of money available in the Fund as well as on the available donation opportunities. Grant applications can be submitted any time; any submitted applications will be considered prior to the next grant round (each grant round has a deadline by which applications must be submitted to be considered).

Notes on grant publication logistics: Grant details are published on the EA Funds website, and linked to from the Fund page. Each grant is accompanied by a brief description of the grantee's work (and hence, the intended use of funds) as well as reasons the grantee was considered impressive. In April 2019, the write-up for each grant at https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl had just one author (rather than group authorship), likely the management team member who did the most work on that particular grant. Grant write-ups vary greatly in length; in April 2019, the write-ups by Oliver Habryka were the most thorough.

Notes on grant financing: Money in the Long-Term Future Fund only includes funds explicitly donated for that Fund. In each grant round, the amount of money that can be allocated is limited by the balance available in the fund at that time.

This entity is also a donee.

Full donor page for donor Effective Altruism Funds: Long-Term Future Fund

Basic donee information

Item | Value
Country | United States
Website | https://humancompatible.ai/
Donate page | http://humancompatible.ai/get-involved#supporter
Wikipedia page | https://en.wikipedia.org/wiki/Center_for_Human-Compatible_Artificial_Intelligence
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Center_for_Human-Compatible_AI
Org Watch page | https://orgwatch.issarice.com/?organization=Center+for+Human-Compatible+AI
Key people | Stuart Russell, Bart Selman, Michael Wellman, Andrew Critch
Launch date | 2016-08
Notes | Received a $5.5 million grant from the Open Philanthropy Project at the time of founding, with a 50% probability estimate that it would be a respected AI-related organization in two years

Full donee page for donee Center for Human-Compatible AI

Donor–donee relationship


Donor–donee donation statistics

All amounts are in current USD.

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 2 | 48,000 | 61,500 | 48,000 | 48,000 | 48,000 | 48,000 | 48,000 | 48,000 | 75,000 | 75,000 | 75,000 | 75,000 | 75,000
AI safety | 2 | 48,000 | 61,500 | 48,000 | 48,000 | 48,000 | 48,000 | 48,000 | 48,000 | 75,000 | 75,000 | 75,000 | 75,000 | 75,000
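
Note on reading the percentile columns: the Median column equals the 50th-percentile column (48,000, the smaller donation) rather than the midpoint of the two donations (61,500, which appears as the Mean), which is consistent with a nearest-rank percentile convention. Below is a minimal sketch of that computation, assuming the nearest-rank convention; the convention and the helper nearest_rank_percentile are illustrative assumptions, not the site's documented formula.

    import math

    # The two donation amounts for this donor-donee pair (current USD),
    # taken from the table above.
    amounts = sorted([48000, 75000])

    def nearest_rank_percentile(values, p):
        # Assumed nearest-rank convention: the smallest value such that
        # at least p% of the data is less than or equal to it.
        k = max(1, math.ceil(p / 100 * len(values)))
        return values[k - 1]

    mean = sum(amounts) / len(amounts)             # 61500.0 (Mean column)
    median = nearest_rank_percentile(amounts, 50)  # 48000 (Median column)
    # 10th-50th percentiles give 48000; 60th-90th give 75000, as in the table.
    percentiles = [nearest_rank_percentile(amounts, p) for p in range(10, 100, 10)]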

Donation amounts by cause area and year


Note: Cause area classification used here may not match that used by donor for all cases.

Cause area | Number of donations | Total | 2021 | 2020
AI safety | 2 | 123,000.00 | 48,000.00 | 75,000.00
Total | 2 | 123,000.00 | 48,000.00 | 75,000.00

Graph of spending by cause area and year (incremental, not cumulative): [graph not reproduced in this text version]

Graph of spending by cause area and year (cumulative): [graph not reproduced in this text version]

Full list of documents in reverse chronological order (3 documents)

2021 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2021-12-23 | Author: Larks | Publisher: Effective Altruism Forum
Affected donors: Larks; Effective Altruism Funds: Long-Term Future Fund; Survival and Flourishing Fund; FTX Future Fund
Affected donees: Future of Humanity Institute; Centre for the Governance of AI; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Google Deepmind; Anthropic; Alignment Research Center; Redwood Research; Ought; AI Impacts; Global Priorities Institute; Center on Long-Term Risk; Centre for Long-Term Resilience; Rethink Priorities; Convergence Analysis; Stanford Existential Risk Initiative; Effective Altruism Funds: Long-Term Future Fund; Berkeley Existential Risk Initiative; 80,000 Hours
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area | Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to: where each organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.

2020 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2020-12-21 | Author: Larks | Publisher: Effective Altruism Forum
Affected donors: Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund
Affected donees: Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Berkeley Existential Risk Initiative; Ought; Global Priorities Institute; Center on Long-Term Risk; Center for Security and Emerging Technology; AI Impacts; Leverhulme Centre for the Future of Intelligence; AI Safety Camp; Future of Life Institute; Convergence Analysis; Median Group; AI Pulse; 80,000 Hours
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area | Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.

2019 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2019-12-19 | Author: Larks | Publisher: Effective Altruism Forum
Affected donors: Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund
Affected donees: Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; Ought; OpenAI; AI Safety Camp; Future of Life Institute; AI Impacts; Global Priorities Institute; Foundational Research Institute; Median Group; Center for Security and Emerging Technology; Leverhulme Centre for the Future of Intelligence; Berkeley Existential Risk Initiative; AI Pulse
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area | Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag and lists the hashtags at the beginning of the document.

Full list of donations in reverse chronological order (2 donations)

Graph of all donations (with known year of donation), showing the timeframe of donations: [graph not reproduced in this text version]
Amount (current USD): 48,000.00
Amount rank (out of 2): 2
Donation date: 2021-04-01
Cause area: AI safety
URL: https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants
Influencer: Evan Hubinger, Oliver Habryka, Asya Bergal, Adam Gleave, Daniel Eth, Ozzie Gooen

Donation process: Donee submitted grant application through the application form for the April 2021 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "hiring research engineers to support CHAI’s technical research projects." "This grant is to support Cody Wild and Steven Wang in their work assisting CHAI as research engineers, funded through BERI."

Donor reason for selecting the donee: Grant investigator and main influencer Evan Hubinger writes: "Overall, I have a very high opinion of CHAI’s ability to produce good alignment researchers—Rohin Shah, Adam Gleave, Daniel Filan, Michael Dennis, etc.—and I think it would be very unfortunate if those researchers had to spend a lot of their time doing non-alignment-relevant engineering work. Thus, I think there is a very strong case for making high-quality research engineers available to help CHAI students run ML experiments. [...] both Cody and Steven have already been working with CHAI doing exactly this sort of work; when we spoke to Adam Gleave early in the evaluation process, he seems to have found their work to be positive and quite helpful. Thus, the risk of this grant hurting rather than helping CHAI researchers seems very minimal, and the case for it seems quite strong overall, given our general excitement about CHAI."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; a grant of $75,000 for a similar purpose was made to the grantee in the September 2020 round, so the timing is likely partly determined by the need to renew funding for the people (Cody Wild and Steven Wang) funded through the previous grant.

Other notes: The grant page says: "Adam Gleave [one of the fund managers] did not participate in the voting or final discussion around this grant." The EA Forum post https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 5.15%.
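
As a sanity check on the percentage figure (an inference, not something stated on the grant page): if the percentage is this donation's amount divided by the total granted in the corresponding batch, the batch total works out to roughly 48,000 / 0.0515 ≈ 932,000 USD, and conversely 48,000 / 932,000 ≈ 5.15%.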
Amount (current USD): 75,000.00
Amount rank (out of 2): 1
Donation date: 2020-09-03
Cause area: AI safety
URL: https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants#center-for-human-compatible-ai-75000
Influencer: Oliver Habryka, Adam Gleave, Asya Bergal, Matt Wage, Helen Toner

Donation process: Donee submitted grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support "hiring a research engineer to support CHAI’s technical research projects."

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka gives these reasons for the grant: "Over the last few years, CHAI has hosted a number of people who I think have contributed at a very high quality level to the AI alignment problem, most prominently Rohin Shah [...] I've also found engaging with Andrew Critch's thinking on AI alignment quite valuable, and I am hopeful about more work from Stuart Russell [...] the specific project that CHAI is requesting money for seems also quite reasonable to me. [...] it seems quite important for them to be able to run engineering-heavy machine learning projects, for which it makes sense to hire research engineers to assist with the associated programming tasks. The reports we've received from students at CHAI also suggest that past engineer hiring has been valuable and has enabled students at CHAI to do substantially better work."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Oliver Habryka writes: "Having thought more recently about CHAI as an organization and its place in the ecosystem of AI alignment, I am currently uncertain about its long-term impact and where it is going, and I eventually plan to spend more time thinking about the future of CHAI. So I think it's not that unlikely (~20%) that I might change my mind on the level of positive impact I'd expect from future grants like this. However, I think this holds less for the other Fund members who were also in favor of this grant, so I don't think my uncertainty is much evidence about how LTFF will think about future grants to CHAI."

Donor retrospective of the donation: A later grant round https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants includes a $48,000 grant from the LTFF to CHAI for a similar purpose, suggesting continued satisfaction and a continued positive assessment of the grantee.

Other notes: Adam Gleave, though on the grantmaking team, recused himself from discussions around this grant since he is a Ph.D. student at CHAI. Grant investigator and main influencer Oliver Habryka includes a few concerns: "Rohin is leaving CHAI soon, and I'm unsure about CHAI's future impact, since Rohin made up a large fraction of the impact of CHAI in my mind. [...] I also maintain a relatively high level of skepticism about research that tries to embed itself too closely within the existing ML research paradigm. [...] A concrete example of the problems I have seen (chosen for its simplicity more than its importance) is that, on several occasions, I've spoken to authors who, during the publication and peer-review process, wound up having to remove some of their papers' most important contributions to AI alignment. [...] Another concern: Most of the impact that Rohin contributed seemed to be driven more by distillation and field-building work than by novel research. [...] I believe distillation and field-building to be particularly neglected and valuable at the margin. I don't currently see the rest of CHAI engaging in that work in the same way." The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 19.02%.