Effective Altruism Funds: Long-Term Future Fund donations made

This is an online portal with information on donations that were announced publicly (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

Country: United Kingdom
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions): Centre for Effective Altruism
Website: https://app.effectivealtruism.org/funds/far-future
Donations URL: https://app.effectivealtruism.org/
Regularity with which donor updates donations data: irregular
Regularity with which Donations List Website updates donations data (after donor update): irregular
Lag with which donor updates donations data: months
Lag with which Donations List Website updates donations data (after donor update): days
Data entry method on Donations List Website: Manual (no scripts used)

Brief history: This is one of four Effective Altruism Funds, which are a program of the Centre for Effective Altruism (CEA). The creation of the funds was inspired by the success of the EA Giving Group donor-advised fund run by Nick Beckstead, and also by the donor lottery run in December 2016 by Paul Christiano and Carl Shulman (see https://forum.effectivealtruism.org/posts/WvPEitTCM8ueYPeeH/donor-lotteries-demonstration-and-faq (GW, IR) for more). EA Funds were introduced on 2017-02-09 in the post https://forum.effectivealtruism.org/posts/a8eng4PbME85vdoep/introducing-the-ea-funds (GW, IR) and launched on 2017-02-28 in the post https://forum.effectivealtruism.org/posts/iYoSAXhodpxJFwdQz/ea-funds-beta-launch (GW, IR). The first round of allocations was announced at https://forum.effectivealtruism.org/posts/MsaS8JKrR8nnxyPkK/update-on-effective-altruism-funds (GW, IR) on 2017-04-20. The fund allocation information appears to have next been updated in November 2017; see https://www.facebook.com/groups/effective.altruists/permalink/1606722932717391/ for more. This particular fund was previously called the Far Future Fund; it was renamed to the Long-Term Future Fund to more accurately reflect its focus.

Brief notes on broad donor philosophy and major focus areas: As the name suggests, the Fund's focus area is activities that could significantly affect the long-term future. Historically, the Fund has focused on areas such as AI safety and epistemic institutions, though it has also made grants related to biosecurity and other global catastrophic risks. At inception, the Fund had Nick Beckstead of Open Philanthropy as its sole manager. Beckstead stepped down in August 2018; in October 2018, https://forum.effectivealtruism.org/posts/yYHKRgLk9ufjJZn23/announcing-new-ea-funds-management-teams (GW, IR) announced a new management team for the Fund, comprising chair Matt Fallshaw and team members Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu, with advisors Nick Beckstead and Jonas Vollmer.

Notes on grant decision logistics: Money from the Fund is intended to be granted about thrice a year, with the target months being November, February, and June. Actual grant months may differ from the target months. The amount of money granted in each decision cycle depends on the amount of money available in the Fund as well as on the available donation opportunities. Grant applications can be submitted at any time; each grant round has a deadline, and applications submitted by that deadline are considered in that round.

Notes on grant publication logistics: Grant details are published on the EA Funds website, and linked to from the Fund page. Each grant is accompanied by a brief description of the grantee's work (and hence, the intended use of funds) as well as reasons the grantee was considered impressive. In April 2019, the write-up for each grant at https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl had just one author (rather than group authorship), likely the management team member who did the most work on that particular grant. Grant write-ups vary greatly in length; in April 2019, the write-ups by Oliver Habryka were the most thorough.

Notes on grant financing: Money in the Long-Term Future Fund only includes funds explicitly donated for that Fund. In each grant round, the amount of money that can be allocated is limited by the balance available in the fund at that time.

This entity is also a donee.

Donor donation statistics

Cause area Count Median Mean Minimum 10th percentile 20th percentile 30th percentile 40th percentile 50th percentile 60th percentile 70th percentile 80th percentile 90th percentile Maximum
Overall 76 40,000 63,475 1,050 14,838 24,000 28,000 30,000 40,000 50,000 70,000 82,000 135,000 488,994
AI safety 59 39,000 61,726 1,050 12,000 23,000 27,260 30,000 39,000 48,000 60,000 80,000 121,575 488,994
Epistemic institutions 8 30,000 69,003 10,000 10,000 20,000 28,000 30,000 30,000 70,000 70,000 150,000 174,021 174,021
Rationality community 2 20,000 35,000 20,000 20,000 20,000 20,000 20,000 20,000 50,000 50,000 50,000 50,000 50,000
Biosecurity and pandemic preparedness 1 26,250 26,250 26,250 26,250 26,250 26,250 26,250 26,250 26,250 26,250 26,250 26,250 26,250
Effective altruism 4 60,000 75,363 50,000 50,000 50,000 60,000 60,000 60,000 91,450 91,450 100,000 100,000 100,000
Global catastrophic risks 1 70,000 70,000 70,000 70,000 70,000 70,000 70,000 70,000 70,000 70,000 70,000 70,000 70,000
Cause prioritization 1 162,537 162,537 162,537 162,537 162,537 162,537 162,537 162,537 162,537 162,537 162,537 162,537 162,537
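
Since the full dataset behind this portal is available in the GitHub repository, the summary statistics above (count, median, mean, minimum, deciles, and maximum per cause area) can in principle be recomputed from the raw donation amounts. The following Python sketch illustrates one way such a summary could be computed; it is not the portal's actual code, and the field names cause_area and amount are hypothetical placeholders that may not match the repository's schema.

```python
from statistics import mean, median

def nearest_rank_percentile(sorted_amounts, p):
    """Approximate the p-th percentile of an already-sorted list (nearest-rank style)."""
    if not sorted_amounts:
        return None
    idx = round(p / 100 * (len(sorted_amounts) - 1))
    return sorted_amounts[idx]

def summarize_by_cause_area(donations):
    """Group donation records by cause area and compute the columns of the
    statistics table above: count, median, mean, minimum, deciles, maximum.
    Each record is a dict with hypothetical keys 'cause_area' and 'amount'."""
    grouped = {}
    for d in donations:
        grouped.setdefault(d["cause_area"], []).append(d["amount"])
    summary = {}
    for cause, amounts in grouped.items():
        amounts.sort()
        summary[cause] = {
            "count": len(amounts),
            "median": median(amounts),
            "mean": round(mean(amounts)),
            "minimum": amounts[0],
            **{f"{p}th percentile": nearest_rank_percentile(amounts, p)
               for p in range(10, 100, 10)},
            "maximum": amounts[-1],
        }
    return summary

# Example with made-up records:
example = [
    {"cause_area": "AI safety", "amount": 30_000},
    {"cause_area": "AI safety", "amount": 50_000},
    {"cause_area": "Epistemic institutions", "amount": 10_000},
]
print(summarize_by_cause_area(example))
```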

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area Number of donations Number of donees Total 2023 2022 2021 2020 2019 2018 2017
AI safety (filter this donor) 59 39 3,641,847.02 1,024,238.00 923,377.00 293,000.00 250,000.00 571,900.00 564,494.00 14,838.02
Epistemic institutions (filter this donor) 8 6 552,021.00 0.00 0.00 0.00 0.00 358,000.00 194,021.00 0.00
Effective altruism (filter this donor) 4 3 301,450.00 0.00 0.00 0.00 100,000.00 110,000.00 91,450.00 0.00
Cause prioritization (filter this donor) 1 1 162,537.00 0.00 0.00 0.00 0.00 0.00 162,537.00 0.00
Global catastrophic risks (filter this donor) 1 1 70,000.00 0.00 0.00 70,000.00 0.00 0.00 0.00 0.00
Rationality community (filter this donor) 2 2 70,000.00 0.00 0.00 0.00 0.00 70,000.00 0.00 0.00
Biosecurity and pandemic preparedness (filter this donor) 1 1 26,250.00 0.00 0.00 0.00 0.00 26,250.00 0.00 0.00
Total 76 53 4,824,105.02 1,024,238.00 923,377.00 363,000.00 350,000.00 1,136,150.00 1,012,502.00 14,838.02
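
The by-cause-area, by-year totals above are a pivot of the same underlying donation records, obtained by summing amounts over (cause area, year) pairs, with per-row totals being sums across years. Below is a minimal sketch of that aggregation, again assuming hypothetical field names cause_area, year, and amount rather than the repository's actual schema; the example records reuse a few figures from the table above.

```python
from collections import defaultdict

def pivot_by_cause_area_and_year(donations):
    """Sum donation amounts into a {cause_area: {year: total}} pivot plus
    per-cause-area totals, mirroring the cause-area-by-year table above.
    Records use hypothetical keys 'cause_area', 'year', and 'amount'."""
    pivot = defaultdict(lambda: defaultdict(float))
    for d in donations:
        pivot[d["cause_area"]][d["year"]] += d["amount"]
    totals = {cause: sum(year_amounts.values()) for cause, year_amounts in pivot.items()}
    return pivot, totals

# Example using a few figures from the table above:
donations = [
    {"cause_area": "AI safety", "year": 2019, "amount": 571_900.00},
    {"cause_area": "AI safety", "year": 2018, "amount": 564_494.00},
    {"cause_area": "Epistemic institutions", "year": 2019, "amount": 358_000.00},
]
pivot, totals = pivot_by_cause_area_and_year(donations)
print(totals["AI safety"])  # 1136394.0
```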

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area Number of donations Number of donees Total 2023 2022 2021 2020 2019 2018 2017
AI safety/technical research 19 15 1,192,528.00 847,701.00 344,827.00 0.00 0.00 0.00 0.00 0.00
AI safety 15 10 1,075,394.00 0.00 0.00 73,000.00 250,000.00 187,900.00 564,494.00 0.00
AI safety/technical research/talent pipeline 7 2 595,500.00 0.00 415,500.00 85,000.00 0.00 95,000.00 0.00 0.00
Epistemic institutions 5 4 392,021.00 0.00 0.00 0.00 0.00 218,000.00 174,021.00 0.00
AI safety/movement growth 5 3 339,587.00 176,537.00 163,050.00 0.00 0.00 0.00 0.00 0.00
Effective altruism/movement growth/career counseling 2 1 191,450.00 0.00 0.00 0.00 100,000.00 0.00 91,450.00 0.00
Cause prioritization 1 1 162,537.00 0.00 0.00 0.00 0.00 0.00 162,537.00 0.00
Epistemic institutions/forecasting 3 2 160,000.00 0.00 0.00 0.00 0.00 140,000.00 20,000.00 0.00
AI safety/governance 1 1 135,000.00 0.00 0.00 135,000.00 0.00 0.00 0.00 0.00
AI safety/deconfusion research 4 4 110,000.00 0.00 0.00 0.00 0.00 110,000.00 0.00 0.00
AI safety/forecasting 3 3 77,000.00 0.00 0.00 0.00 0.00 77,000.00 0.00 0.00
Global catastrophic risks 1 1 70,000.00 0.00 0.00 70,000.00 0.00 0.00 0.00 0.00
Rationality community 2 2 70,000.00 0.00 0.00 0.00 0.00 70,000.00 0.00 0.00
Effective altruism/government policy 1 1 60,000.00 0.00 0.00 0.00 0.00 60,000.00 0.00 0.00
Effective altruism/long-termism 1 1 50,000.00 0.00 0.00 0.00 0.00 50,000.00 0.00 0.00
AI safety/content creation/video 1 1 39,000.00 0.00 0.00 0.00 0.00 39,000.00 0.00 0.00
AI safety/upskilling 2 2 33,000.00 0.00 0.00 0.00 0.00 33,000.00 0.00 0.00
AI safety/agent foundations 1 1 30,000.00 0.00 0.00 0.00 0.00 30,000.00 0.00 0.00
Biosecurity and pandemic preparedness 1 1 26,250.00 0.00 0.00 0.00 0.00 26,250.00 0.00 0.00
AI safety/other global catastrophic risks 1 1 14,838.02 0.00 0.00 0.00 0.00 0.00 0.00 14,838.02
Classified total 76 53 4,824,105.02 1,024,238.00 923,377.00 363,000.00 350,000.00 1,136,150.00 1,012,502.00 14,838.02
Unclassified total 0 0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Total 76 53 4,824,105.02 1,024,238.00 923,377.00 363,000.00 350,000.00 1,136,150.00 1,012,502.00 14,838.02

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee Cause area Metadata Total 2023 2022 2021 2020 2019 2018 2017
Machine Intelligence Research Institute (filter this donor) AI safety FB Tw WP Site CN GS TW 678,994.00 0.00 0.00 0.00 100,000.00 50,000.00 528,994.00 0.00
Alexander Turner (filter this donor) 436,461.00 405,411.00 1,050.00 0.00 0.00 30,000.00 0.00 0.00
SERI-MATS program (filter this donor) 343,000.00 0.00 343,000.00 0.00 0.00 0.00 0.00 0.00
Center for Applied Rationality (filter this donor) Rationality FB Tw WP Site TW 324,021.00 0.00 0.00 0.00 0.00 150,000.00 174,021.00 0.00
Robert Miles (filter this donor) 297,537.00 176,537.00 82,000.00 0.00 0.00 39,000.00 0.00 0.00
AI Safety Camp (filter this donor) 252,500.00 0.00 72,500.00 85,000.00 0.00 95,000.00 0.00 0.00
80,000 Hours (filter this donor) Career coaching/life guidance FB Tw WP Site 191,450.00 0.00 0.00 0.00 100,000.00 0.00 91,450.00 0.00
Kaarel Hänni, Kay Kozaronek, Walter Laurito, and Georgios Kaklmanos (filter this donor) 167,480.00 167,480.00 0.00 0.00 0.00 0.00 0.00 0.00
Centre for Effective Altruism (filter this donor) Effective altruism/movement growth FB Site 162,537.00 0.00 0.00 0.00 0.00 0.00 162,537.00 0.00
Legal Priorities Project (filter this donor) 135,000.00 0.00 0.00 135,000.00 0.00 0.00 0.00 0.00
Center for Human-Compatible AI (filter this donor) AI safety WP Site TW 123,000.00 0.00 0.00 48,000.00 75,000.00 0.00 0.00 0.00
AI Safety Support (filter this donor) 105,000.00 0.00 80,000.00 25,000.00 0.00 0.00 0.00 0.00
David Udell (filter this donor) 100,000.00 0.00 100,000.00 0.00 0.00 0.00 0.00 0.00
Foretold (filter this donor) 90,000.00 0.00 0.00 0.00 0.00 70,000.00 20,000.00 0.00
AI Impacts (filter this donor) AI safety Site 75,000.00 0.00 0.00 0.00 75,000.00 0.00 0.00 0.00
Conjecture (filter this donor) 72,827.00 0.00 72,827.00 0.00 0.00 0.00 0.00 0.00
Alignment Research Center (filter this donor) 72,000.00 0.00 72,000.00 0.00 0.00 0.00 0.00 0.00
Rethink Priorities (filter this donor) Cause prioritization Site 70,000.00 0.00 0.00 70,000.00 0.00 0.00 0.00 0.00
Metaculus (filter this donor) 70,000.00 0.00 0.00 0.00 0.00 70,000.00 0.00 0.00
AI Safety Hub (filter this donor) 60,000.00 0.00 60,000.00 0.00 0.00 0.00 0.00 0.00
Ought (filter this donor) AI safety Site 60,000.00 0.00 0.00 0.00 0.00 50,000.00 10,000.00 0.00
High Impact Policy Engine (filter this donor) 60,000.00 0.00 0.00 0.00 0.00 60,000.00 0.00 0.00
Anonymous (filter this donor) 59,600.00 59,600.00 0.00 0.00 0.00 0.00 0.00 0.00
Neel Nanda (filter this donor) 52,000.00 52,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Lucius Caviola (filter this donor) 50,000.00 0.00 0.00 0.00 0.00 50,000.00 0.00 0.00
Kocherga (filter this donor) 50,000.00 0.00 0.00 0.00 0.00 50,000.00 0.00 0.00
Jeremy Gillen (filter this donor) 40,000.00 0.00 40,000.00 0.00 0.00 0.00 0.00 0.00
Shahar Avin (filter this donor) 40,000.00 0.00 0.00 0.00 0.00 40,000.00 0.00 0.00
Hoagy Cunningham (filter this donor) 35,300.00 35,300.00 0.00 0.00 0.00 0.00 0.00 0.00
Jonathan Ng (filter this donor) 32,650.00 32,650.00 0.00 0.00 0.00 0.00 0.00 0.00
Quentin Feuillade--Montixi (filter this donor) 32,000.00 32,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Nikhil Kunapuli (filter this donor) 30,000.00 0.00 0.00 0.00 0.00 30,000.00 0.00 0.00
Anand Srinivasan (filter this donor) 30,000.00 0.00 0.00 0.00 0.00 30,000.00 0.00 0.00
David Girardo (filter this donor) 30,000.00 0.00 0.00 0.00 0.00 30,000.00 0.00 0.00
Tegan McCaslin (filter this donor) 30,000.00 0.00 0.00 0.00 0.00 30,000.00 0.00 0.00
Eli Tyre (filter this donor) 30,000.00 0.00 0.00 0.00 0.00 30,000.00 0.00 0.00
Alexander Gietelink Oldenziel (filter this donor) 30,000.00 0.00 0.00 0.00 0.00 30,000.00 0.00 0.00
Effective Altruism Russia (filter this donor) 28,000.00 0.00 0.00 0.00 0.00 28,000.00 0.00 0.00
Ann-Kathrin Dombrowski (filter this donor) 27,260.00 27,260.00 0.00 0.00 0.00 0.00 0.00 0.00
Jacob Lagerros (filter this donor) 27,000.00 0.00 0.00 0.00 0.00 27,000.00 0.00 0.00
Tessa Alexanian (filter this donor) 26,250.00 0.00 0.00 0.00 0.00 26,250.00 0.00 0.00
Matt MacDermott (filter this donor) 24,000.00 24,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Stag Lynn (filter this donor) 23,000.00 0.00 0.00 0.00 0.00 23,000.00 0.00 0.00
AI summer school (filter this donor) 21,000.00 0.00 0.00 0.00 0.00 0.00 21,000.00 0.00
Lauren Lee (filter this donor) 20,000.00 0.00 0.00 0.00 0.00 20,000.00 0.00 0.00
Connor Flexman (filter this donor) 20,000.00 0.00 0.00 0.00 0.00 20,000.00 0.00 0.00
Alexander Siegenfeld (filter this donor) 20,000.00 0.00 0.00 0.00 0.00 20,000.00 0.00 0.00
Effective Altruism Zürich (filter this donor) 17,900.00 0.00 0.00 0.00 0.00 17,900.00 0.00 0.00
Berkeley Existential Risk Initiative (filter this donor) AI safety/other global catastrophic risks Site TW 14,838.02 0.00 0.00 0.00 0.00 0.00 0.00 14,838.02
Shashwat Goel (filter this donor) 12,000.00 12,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Orpheus Lummis (filter this donor) 10,000.00 0.00 0.00 0.00 0.00 10,000.00 0.00 0.00
Roam Research (filter this donor) 10,000.00 0.00 0.00 0.00 0.00 10,000.00 0.00 0.00
AI Safety Unconference (filter this donor) 4,500.00 0.00 0.00 0.00 0.00 0.00 4,500.00 0.00
Total -- -- 4,824,105.02 1,024,238.00 923,377.00 363,000.00 350,000.00 1,136,150.00 1,012,502.00 14,838.02

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Influencer Number of donations Number of donees Total 2023 2022 2021 2020 2019 2018 2017
Nick Beckstead 5 5 931,840.02 0.00 0.00 0.00 0.00 0.00 917,002.00 14,838.02
Oliver Habryka|Alex Zhu|Matt Wage|Helen Toner|Matt Fallshaw 15 15 649,000.00 0.00 0.00 0.00 0.00 649,000.00 0.00 0.00
Oliver Habryka 2 1 335,411.00 335,411.00 0.00 0.00 0.00 0.00 0.00 0.00
Caleb Parikh|Asya Bergal 1 1 316,000.00 0.00 316,000.00 0.00 0.00 0.00 0.00 0.00
Matt Wage|Helen Toner|Oliver Habryka|Adam Gleave 2 2 200,000.00 0.00 0.00 0.00 200,000.00 0.00 0.00 0.00
Thomas Larsen 1 1 167,480.00 167,480.00 0.00 0.00 0.00 0.00 0.00 0.00
Matt Wage|Helen Toner|Matt Fallshaw|Alex Zhu|Oliver Habryka 4 4 166,250.00 0.00 0.00 0.00 0.00 166,250.00 0.00 0.00
Daniel Eth|Asya Bergal|Adam Gleave|Oliver Habryka|Evan Hubinger|Ozzie Gooen 1 1 135,000.00 0.00 0.00 135,000.00 0.00 0.00 0.00 0.00
Alex Zhu|Helen Toner|Matt Fallshaw|Matt Wage|Oliver Habryka 5 5 95,500.00 0.00 0.00 0.00 0.00 0.00 95,500.00 0.00
Alex Zhu|Matt Wage|Helen Toner|Matt Fallshaw|Oliver Habryka 3 3 90,000.00 0.00 0.00 0.00 0.00 90,000.00 0.00 0.00
Oliver Habryka|Asya Bergal|Adam Gleave|Daniel Eth|Evan Hubinger|Ozzie Gooen 1 1 85,000.00 0.00 0.00 85,000.00 0.00 0.00 0.00 0.00
Alex Zhu|Helen Toner|Matt Wage|Oliver Habryka 4 4 83,000.00 0.00 0.00 0.00 0.00 83,000.00 0.00 0.00
Adam Gleave|Oliver Habryka|Asya Bergal|Matt Wage|Helen Toner 1 1 75,000.00 0.00 0.00 0.00 75,000.00 0.00 0.00 0.00
Oliver Habryka|Adam Gleave|Asya Bergal|Matt Wage|Helen Toner 1 1 75,000.00 0.00 0.00 0.00 75,000.00 0.00 0.00 0.00
Caleb Parikh 1 1 72,827.00 0.00 72,827.00 0.00 0.00 0.00 0.00 0.00
Asya Bergal 1 1 72,000.00 0.00 72,000.00 0.00 0.00 0.00 0.00 0.00
Asya Bergal|Adam Gleave|Oliver Habryka|Evan Hubinger 1 1 70,000.00 0.00 0.00 70,000.00 0.00 0.00 0.00 0.00
Helen Toner|Matt Wage|Oliver Habryka|Alex Zhu 1 1 60,000.00 0.00 0.00 0.00 0.00 60,000.00 0.00 0.00
Evan Hubinger|Oliver Habryka|Asya Bergal|Adam Gleave|Daniel Eth|Ozzie Gooen 1 1 48,000.00 0.00 0.00 48,000.00 0.00 0.00 0.00 0.00
Oliver Habryka|Alex Zhu|Matt Wage|Helen Toner 1 1 41,000.00 0.00 0.00 0.00 0.00 41,000.00 0.00 0.00
Matt Wage|Helen Toner|Oliver Habryka|Alex Zhu 1 1 29,000.00 0.00 0.00 0.00 0.00 29,000.00 0.00 0.00
Helen Toner|Matt Wage|Matt Fallshaw|Alex Zhu|Oliver Habryka 1 1 17,900.00 0.00 0.00 0.00 0.00 17,900.00 0.00 0.00
Classified total 54 41 3,815,208.02 502,891.00 460,827.00 338,000.00 350,000.00 1,136,150.00 1,012,502.00 14,838.02
Unclassified total 22 16 1,008,897.00 521,347.00 462,550.00 25,000.00 0.00 0.00 0.00 0.00
Total 76 53 4,824,105.02 1,024,238.00 923,377.00 363,000.00 350,000.00 1,136,150.00 1,012,502.00 14,838.02

Graph of spending by influencer and year (incremental, not cumulative)


Graph of spending by influencer and year (cumulative)


Donation amounts by disclosures and year

Sorry, we couldn't find any disclosures information.

Donation amounts by country and year

If you hover over a cell for a given country and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Country Number of donations Number of donees Total 2022 2019
United States|United Kingdom|Japan 1 1 72,827.00 72,827.00 0.00
United Kingdom 1 1 60,000.00 0.00 60,000.00
Russia 1 1 50,000.00 0.00 50,000.00
Classified total 3 3 182,827.00 72,827.00 110,000.00
Unclassified total 73 50 4,641,278.02 850,550.00 1,026,150.00
Total 76 53 4,824,105.02 923,377.00 1,136,150.00

Graph of spending by country and year (incremental, not cumulative)


Graph of spending by country and year (cumulative)


Full list of documents in reverse chronological order (37 documents)

Title (URL linked) Publication date Author Publisher Affected donors Affected donees Affected influencers Document scope Cause area Notes
Some unfun lessons I learned as a junior grantmaker (GW, IR)2022-05-23Linchuan Zhang Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Miscellaneous commentaryLongtermismThis post, written by a "part-time grantmaker since January 15th or so, first (and currently) as a guest manager on the Long-Term Future Fund" includes a bunch of "useful but not fun lessons" learned from his grantmaking experience. The important lessons: "(1) (Refresher) The top grants are much much better than the marginal grants, even ex ante. (2) All of the grantmakers are extremely busy (3) Most (within-scope) projects are usually rejected for personal factors or person-project fit reasons (4) Some of your most impactful work in grantmaking won’t look like evaluating grants (5) Most grantmakers don’t (and shouldn’t) spend a lot of time on evaluating any given grant. (6) It’s rarely worth your time to give detailed feedback"
2021 AI Alignment Literature Review and Charity Comparison (GW, IR)2021-12-23Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund Future of Humanity Institute Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, which is where the organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
How To Get Into Independent Research On Alignment/Agency (GW, IR)2021-11-18John Wentworth LessWrongEffective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Long-Term Future Fund Miscellaneous commentaryAI safetyJohn Wentworth, an independent AI safety researcher who makes a full-time-equivalent of $90,000 a year and is funded partly by the Long-Term Future Fund (LTFF), explains more about how he does independent research and how he's able to get paid for it. Wentworth would be cited as a success story of the LTFF by Zvi Mowshowitz in https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff#Access_to_Power_and_Money (GW, IR) and his post would be cited by the 2021 AI Alignment Review https://forum.effectivealtruism.org/posts/BNQMyWGCNWDdP2WyG/2021-ai-alignment-literature-review-and-charity-comparison#LTFF__Long_term_future_fund (GW, IR) "[the post] suggests that LTFF has been crucial to enabling the emergence of independent safety researcher as a viable occupation; this seems like a very major positive for the LTFF."
Public reports are now optional for EA Funds grantees (GW, IR)2021-11-12Asya Bergal Jonas Vollmer Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Global Health and Development Fund Miscellaneous commentaryEffective altruism|Animal welfare|AI safety|Global catastrophic risks|Global health and developmentThe blog post says: "Public reports are now explicitly optional for applicants to EA Funds." It further says: "If you are an individual applicant or a new organization, choosing not to have a public report will very rarely affect the chance that we fund you (and we will reach out to anyone for whom it would make a substantial difference). If you are an established organization, choosing not to have a public report may slightly decrease the chance that we fund you."
Things I often tell people about applying to EA Funds (GW, IR)2021-10-26Michael Aird Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Miscellaneous commentaryEffective altruism|Animal welfare|AI safety|Global catastrophic risksMichael Aird, currently serving as a guest manager on one of the EA Funds (the Infrastructure Fund), lists various things he often tells people about applying to the EA Funds: Your application can be quick and unpolished, Your application can leave some questions open, can suggest multiple possible scenarios, and can “start a dialogue”, Maybe think about pilots and proxies, Maybe think big, You can apply for a planning/exploration grant, Encourage other people to apply!, You should consider applying to both EA Funds and other funders, Decisions and transfers can be fast, There’s usually some flexibility in how the money is used, and sometimes a lot of flexibility, It doesn’t matter much whether you select EAIF or LTFF. The post also includes object-level suggestions.
What would you do if you had half a million dollars? (GW, IR)2021-07-17Patrick Brinich-Langlois Effective Altruism ForumDonor lottery Effective Altruism Funds: Long-Term Future Fund Longview Philanthropy Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Longview Philanthropy Effective Altruism Funds: Effective Altruism Infrastructure Fund Request for proposalsPatrick Brinich-Langlois announces that he won the 2020/2021 $500,000 donor lottery, and is looking for where best to donate it. The post suggests that he is inclined to donate the money to an existing grantmaking body, such as the Long-Term Future Fund, Longview Philanthropy, the EA Infrastructure Fund, or the Patient Philanthropy Fund.
You can now apply to EA Funds anytime! (LTFF & EAIF only) (GW, IR)2021-06-17Jonas Vollmer Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Request for proposalsAI safety|Global catastrophic risks|Effective altruismTwo of the Effective Altruism Funds, the Long-Term Future Fund and Infrastructure Fund, announce that they are switching to rolling applications, and they "now aim to evaluate most grants within 21 days of submission (and all grants within 42 days), regardless of when they have been submitted." The post also says: "You can now suggest that we give money to other people, or let us know about ideas for how we could spend our money." And: "We fund student scholarships, career exploration, local groups, entrepreneurial projects, academic teaching buy-outs, top-up funding for poorly paid academics, and many other things. We can make anonymous grants without public reporting."
2018-2019 Long-Term Future Fund Grantees: How did they do? (GW, IR)2021-06-16Nuno Sempere Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Robert Miles Kocherga Third-party coverage of donor strategyAI safety|Global catastrophic risks|Epistemic institutionsNuno Sempere reviews, based on public information, the outcomes of grants made by the Long-Term Future Fund in 2018 and 2019, and publishes a summary table as well as sample evaluations for two grantees (Robert Miles and Kocherga). Detailed evaluations of all grantees are kept private to avoid creating unnecessary hurt and conflict with the negative evaluations, and a separate question post https://forum.effectivealtruism.org/posts/4mgBR5fwJ9AZeugZC/what-should-the-norms-around-privacy-and-evaluation-in-the (GW, IR) asks for guidance on norms around publishing the detailed evaluation.
The Long-Term Future Fund has room for more funding, right now (GW, IR)2021-03-28Asya Bergal Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Long-Term Future Fund Donee donation caseAI safety|Global catastrophic risks|LongtermismIn the blog post, Asya Bergal, fund chair for the Long-Term Future Fund, says that LTFF has significant room for more funding right now. The post says: "The Long-Term Future Fund is on track to approve $1.5M - $2M of grants this round. This is 3 - 4x what we’ve spent in any of our last five grant rounds and most of our current fund balance. [...] I don’t know if my perceived increase in quality applications will persist, but I no longer think it’s implausible for the fund to spend $4M - $8M this year while maintaining our previous bar for funding. This is up from my previous guess of $2M."
EA Funds has appointed new fund managers (GW, IR)2021-03-23Jonas Vollmer Sam Deere Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Status changeEffective altruism|Animal welfare|AI safety|Global catastrophic risks|LongtermismIn this post, Jonas Vollmer and Sam Deere announce a new set of fund managers for three of the EA Funds: Animal Welfare Fund, Long-Term Future Fund, and EA Infrastructure Fund. The post says: "Existing fund managers were given the opportunity to re-apply if they wished, and new candidates were sourced through our networks. We received 66 applications from new candidates. Fund managers were appointed on the basis of their performance on work tests, their past experience in grantmaking or other relevant areas, and formal and informal references. These fund managers have been appointed for a two-year term, after which we will run a similar process again. [...] We’re also experimenting with a new system of guest fund managers, allowing people who might be a good fit to provide input to the fund for a single grant round. [...] We hope that these changes will substantially increase each fund’s capacity to evaluate grants."
EA Funds is more flexible than you might think (GW, IR)2021-03-05Jonas Vollmer Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Broad donor strategyEffective altruism|Animal welfare|AI safety|Global catastrophic risks|LongtermismIn this post, Jonas Vollmer, who leads EA Funds, explains ways that three of the EA Funds (the EA Infrastructure Fund, the Long-Term Future Fund, and the Animal Welfare Fund) are more flexible than many potential applicants might think. Some forms of flexibility listed are: long-term relationships, academic scholarships, teaching buy-outs, organization funding, large grants, off-cycle grants, anonymized grants, and forwarding to other funders.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similar to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
Giving What We Can & EA Funds now operate independently of CEA (GW, IR)2020-12-21Max Dalton Jonas Vollmer Luke Freeman Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Grants Giving What We Can Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Giving What We Can Status changeAnimal welfare|Global health and development|AI safety|Global catastrophic risks|Effective altruism|LongtermismThis cross-post of https://www.centreforeffectivealtruism.org/blog/giving-what-we-can-and-ea-funds-now-operate-independently-of-cea/ announces that Giving What We Can (GWWC), operated by Luke Freeman, and the Effective Altruism Funds (EA Funds), operated by Jonas Vollmer, are now operated independently of the Centre for Effective Altruism. Also, Effective Altruism Grants (EA Grants) is now fully closed. The plan to close it had been announced in April 2020 at https://forum.effectivealtruism.org/posts/SX6vKhRQsFj8AjYrM/brief-update-on-ea-grants (GW, IR) but some existing grant commitments needed to be honored before fully closing the program out. The post includes growth, retention, and content plans for GWWC, and donor satisfaction, donation, and grantmaking data for EA Funds.
Long-Term Future Fund and EA Meta Fund applications open until June 12th (GW, IR)2020-05-15Alex Foster Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Request for proposalsAI safety|Global catastrophic risks|Effective altruismThe blog post links to the application forms for two of the EA Funds (Long-Term Future Fund and Meta Fund) and gives a deadline of June 12, 2020 for the next round for both. It also announces that the minimum allowed grant size is reduced from $10,000 to $5,000.
Brief update on EA Grants (GW, IR)2020-04-21Nicole Ross Centre for Effective AltruismEffective Altruism Grants Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund ShutdownNicole Ross announces that EA Grants is no longer accepting new applications. Grantseekers are encouraged to apply to one of the EA Funds instead; the deadline for applications for the upcoming round for the EA Meta Fund, the Animal Welfare Fund, and the Long-Term Future Fund is June 15, 2020.
Long-Term Future Fund and EA Meta Fund applications open until January 31st (GW, IR)2020-01-06Alex Foster Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Request for proposalsAI safety|Global catastrophic risks|Effective altruismThe blog post links to the application forms for two of the EA Funds (Long-Term Future Fund and Meta Fund) and gives a deadline of January 31, 2020 for the next round for both. It also provides information on the kinds of grants that each fund is interested in giving. The EA Meta Fund description clarifies that projects related to community building should instead seek funding from Effective Altruism Community Building Grants, but local groups doing other kinds of projects can still apply to the EA Meta Fund.
Effective Altruism Funds Project Updates (GW, IR)2019-12-20Sam Deere Effective Altruism FundsEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Broad donor strategyAnimal welfare|Global health and development|AI safety|Global catastrophic risks|Effective altruismThe blog post is by Sam Deere of the Centre for Effective Altruism, who is the project lead for Effective Altruism Funds (EA Funds). The blog post goes over the purpose of EA Funds, structure of fund management teams, the use of the EA Funds platform to directly donate to charities, and the project status and relationship with CEA. Regarding the last point: "Currently EA Funds is a project wholly within the central part of the Centre for Effective Altruism (as opposed to a satellite project housed within the same legal organization, like 80,000 Hours or the Forethought Foundation). However, we’re currently investigating whether this should change. This is largely driven by a divergence in organizational priorities – specifically, that CEA is focusing on building communities and spaces for discussing EA ideas (e.g. local groups, EA Global and related events, and the EA Forum), whereas EA Funds is primarily fundraising-oriented." The post also announces recent updates to the EA Funds website and the launch of a publicly-accessible dashboard for fund statistics https://app.effectivealtruism.org/funds/about/stats
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
EA Meta Fund and Long-Term Future Fund are looking for applications again until October 11th (GW, IR)2019-09-13Denise Melchin Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Request for proposalsAI safety|Global catastrophic risks|Effective altruismThe blog post announces that two of the funds under Effective Altruism Funds, namely the Long-Term Future Fund and the EA Meta Fund, are open for rolling applications. The application window for the current round ends on October 11. This is a followup to a similar post https://forum.effectivealtruism.org/posts/wuKRAX9uD9akupTJ3/long-term-future-fund-and-ea-meta-fund-applications-open (GW, IR) for the previous grant round
Long Term Future Fund and EA Meta Fund applications open until June 28th (GW, IR)2019-06-10Oliver Habryka Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Request for proposalsAI safety|Global catastrophic risks|Effective altruismThe blog post announces that two of the funds under Effective Altruism Funds, namely the Long-Term Future Fund and the EA Meta Fund, are open for rolling applications. The application window for the current round ends on June 28. Response time windows will be 3-4 months (i.e., after the end of the corresponding application cycle). In rare cases, grants may be made out-of-cycle. Grant amounts must be at least $10,000, and will generally be under $100,000. The blog post gives guidelines on the kinds of applications that each fund will accept
Thoughts on the EA Hotel (GW, IR)2019-04-25Oliver Habryka Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Centre for Enabling EA Learning & Research Evaluator review of doneeEffective altruism/housingWith permission from Greg Colbourn of the EA Hotel, Habryka publicly posts the feedback he sent to the EA Hotel, who was rejected from the April 2019 funding round by the Long Term Future Fund. Habryka first lists three reasons he is excited about the Hotel: (a) Providing a safety net, (b) Acting on historical interest, (c) Building high-dedication cultures. He articulates three concrete models of concerns: (1) Initial overeagerness to publicize the EA Hotel (a point he now believes is mostly false, based on Greg Colbourn's response), (2) Significant chance of the EA Hotel culture becoming actively harmful for residents, (3) No good candidate to take charge of long-term logistics of running the hotel. Habryka concludes by saying he thinks all his concerns can be overcome. At the moment, he thinks the hotel should be funded for the next year, but is unsure of whether they should be given money to buy the hotel next door. The comment replies include one by Greg Colbourn, giving his backstory on the media attention (re: (1)) and discussing the situation with (2) and (3). There are also other replies, including one from casebash, who stayed at the hotel for a significant time
This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year (GW, IR)2019-04-17Evan Gaensbauer Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Third-party coverage of donor strategyGlobal catastrophic risks/AI safety/far futureEvan Gaensbauer reviews the grantmaking of the Long Term Future Fund since the management structure change in 2018 (with Nick Beckstead leaving). He uses the term "counterfactually unique" for grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded. Based on that measure, he calculates that 20 of 23, or 87%, of grant recommendations, worth $673,150 of $923,150, or ~73% of the money to be disbursed, are counterfactually unique. After excluding the grants that people have expressed serious concerns about in the comments, he says: "16 of 23, or 69.5%, of grants, worth $535,150 of $923,150, or ~58%, of the money to be disbursed, are counterfactually unique and fit into a more conservative, risk-averse approach that would have ruled out more uncertain or controversial successful grant applicants." He calls these numbers "an extremely significant improvement in the quality and quantity of unique opportunities for grantmaking the Long-Term Future Fund has made since a year ago" and considers the grants and the grant report an overall success. In a reply comment, Milan Griffes thanks him for the comment, which he calls an "audit"
You received almost 100 applications as far as I'm aware, but were able to fund only 23 of them. Some other projects were promising according to you, but you didn't have time to vet them all. What other reasons did you have for rejecting applications? (GW, IR)2019-04-08Risto Uuk Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Third-party coverage of donor strategyThe question by Risto Uuk is answered by Oliver Habryka giving the typical factors that might cause the Long Term Future Fund to reject an application. Of the factors listed, one that generates a lot of discussion is when the fund managers have no way of assessing the applicant without investing significant amounts of time, beyond what they have available. This is considered concerning because it creates a bias toward grantees who are better networked and known to the funders. The need for grant amounts that are big enough to justify the overhead costs leads to further discussion of the overhead costs of the marginal and average grant. Conversation participants include Oliver Habryka, Peter Hurford, Ben Kuhn, Michelle Hutchinson, Jonas Vollmer, John Maxwell IV, Jess Whittlestone, Milan Griffes, Evan Gaensbauer, and others
Major Donation: Long Term Future Fund Application Extended 1 Week (GW, IR)2019-02-16Oliver Habryka Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Long-Term Future Fund Request for proposalsAI safety|Global catastrophic risksThe blog post announces that the EA Long-Term Future Fund has received a large donation, which doubles the amount of money available for granting to ~$1.2 million. It extends the deadline for applications at https://docs.google.com/forms/d/e/1FAIpQLSeDTbCDbnIN11vcgHM3DKq6M0cZ3itAy5GIPK17uvTXcz8ZFA/viewform?usp=sf_link by 1 week, to 2019-02-24 midnight PST. The application form was previously announced at https://forum.effectivealtruism.org/posts/oFeGLaJ5bZBBRbjC9/ea-funds-long-term-future-fund-is-open-to-applications-until (GW, IR) and was supposed to be open until 2019-02-07 for the February 2019 round of grants. Cross-posted to LessWrong at https://www.lesswrong.com/posts/ZKsSuxHWNGiXJBJ9Z/major-donation-long-term-future-fund-application-extended-1 (GW, IR)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th (GW, IR)2019-01-17Oliver Habryka Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Request for proposalsAI safety|Global catastrophic risksCross-posted to LessWrong at https://www.lesswrong.com/posts/dvGE8JSeFHtmHC6Gb/ea-funds-long-term-future-fund-is-open-to-applications-until (GW, IR) The post seeks proposals for the Long-Term Future Fund. Proposals must be submitted by 2019-02-07 at https://docs.google.com/forms/d/e/1FAIpQLSeDTbCDbnIN11vcgHM3DKq6M0cZ3itAy5GIPK17uvTXcz8ZFA/viewform?usp=sf_link to be considered for the round of grants being announced mid-February. From the application, excerpted in the post: "We are particularly interested in small teams and individuals that are trying to get projects off the ground, or that need less money than existing grant-making institutions are likely to give out (i.e. less than ~$100k, but more than $10k). Here are a few examples of project types that we're open to funding an individual or group for (note that this list is not exhaustive)"
Long-Term Future Fund AMA (GW, IR)2018-12-18Helen Toner Oliver Habryka Alex Zhu Matt Fallshaw Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Long-Term Future Fund Donee AMAAI safety|Global catastrophic risksThe post is an Ask Me Anything (AMA) for the Long-Term Future Fund. The questions and answers are in the post comments. Questions are asked by a number of people including Luke Muehlhauser, Josh You, Peter Hurford, Alex Foster, and Robert Jones. Fund managers Oliver Habryka, Matt Fallshaw, Helen Toner, and Alex Zhu respond in the comments. Fund manager Matt Wage does not appear to have participated. Questions cover the amount of time spent evaluating grants, the evaluation criteria, the methods of soliciting grants, and research that would help the team
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) (GW, IR)2018-11-20Oliver Habryka Effective Altruism ForumEffective Altruism Funds: Long-Term Future Fund Request for proposalsAI safety|Global catastrophic risksThe post seeks proposals for the CEA Long-Term Future Fund. Proposals must be submitted by 2018-11-24 at https://docs.google.com/forms/d/e/1FAIpQLSf46ZTOIlv6puMxkEGm6G1FADe5w5fCO3ro-RK6xFJWt7SfaQ/viewform in order to be considered for the round of grants to be announced by the end of November 2018
Announcing new EA Funds management teams (GW, IR)2018-10-27Marek Duda Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Broad donor strategyAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismThe post announces the transition of the Effective Altruism Funds management to teams, with a chair, team members, and advisors. The EA Community Fund is renamed the EA Meta Fund, and has chair Luke Ding and team Denise Melchin, Matt Wage, Alex Foster, and Tara MacAulay, with advisor Nick Beckstead. The long-term future fund has chair Matt Fallshaw, and team Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu, with advisors Nick Beckstead and Jonas Vollmer. The animal welfare fund has chair Lewis Bollard (same as before) and team Jamie Spurgeon, Natalie Cargill, and Toni Adleberg. The global development fund continues to be solely managed by Elie Hassenfeld. The granting schedule will be thrice a year: November, February, and June for all funds except the Global Development Fund, which will be in December, March, and July.
EA Funds - An update from CEA (GW, IR)2018-08-07Marek Duda Centre for Effective AltruismEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Broad donor strategyAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismMarek Duda gives an update on work on the EA Funds donation platform, the departure of Nick Beckstead from managing the EA Community and Long-Term Future Funds, and the experimental creation of "Junior" Funds
The EA Community and Long-Term Future Funds Lack Transparency and Accountability (GW, IR)2018-07-23Evan Gaensbauer Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Evaluator review of doneeAnimal welfare|global health|AI safety|global catastrophic risks|effective altruismEvan Gaensbauer builds on past criticism of the EA Funds by Henry Stanley at http://effective-altruism.com/ea/1k9/ea_funds_hands_out_money_very_infrequently_should/ and http://effective-altruism.com/ea/1mr/how_to_improve_ea_funds/ Gaensbauer notes that the Global Health and Development Fund and the Animal Welfare Fund have done a better job of paying out and announcing payouts. However, the Long-Term Future Fund and EA Community Fund, both managed by Nick Beckstead, have announced only one payout, and have missed their self-imposed date for announcing the remaining payouts. Some comments by Marek Duda of the Centre for Effective Altruism (the parent of EA Funds) are also discussed
How to improve EA Funds (GW, IR)2018-04-04Henry Stanley Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Evaluator review of doneeAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismHenry Stanley echoes thoughts expressed in his previous post http://effective-altruism.com/ea/1k9/ea_funds_hands_out_money_very_infrequently_should/ and argues for regular disbursement, holding funds in interest-bearing assets, and more clarity about fund manager bandwidth. Comments also discuss Effective Altruism Grants
EA Funds hands out money very infrequently - should we be worried? (GW, IR)2018-01-31Henry Stanley Effective Altruism ForumEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Miscellaneous commentaryAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismHenry Stanley expresses concern that the Effective Altruism Funds hands out money very infrequently. Commenters include Peter Hurford (who suggests a percentage-based approach), Elie Hassenfeld, the manager of the global health and development fund, and Evan Gaensbauer, a person well-connected in effective altruist social circles
What is the status of EA funds? They seem pretty dormant2017-12-10Ben West Effective Altruism Facebook groupEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Miscellaneous commentaryAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismBen West, considering whether to donate to the Effective Altruism Funds for his end-of-year donation, wonders whether the Funds are dormant, since no donations from the fund have been announced since April. In the comments, Marek Duda of the Centre for Effective Altruism reports that the Funds pages have been updated to include some recent donations, and West updates his post to note this.
Discussion: Adding New Funds to EA Funds (GW, IR)2017-06-01Kerry Vaughan Centre for Effective AltruismEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Broad donor strategyAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismKerry Vaughan of Effective Altruism Funds discusses the alternatives being considered regarding expanding the number of funds, and asks readers for opinions
Update on Effective Altruism Funds (GW, IR)2017-04-20Kerry Vaughan Centre for Effective AltruismEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Periodic donation list documentationAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismKerry Vaughan provides a progress report on the beta launch of EA Funds, and says it will go on beyond beta. The post includes information on reception of EA Funds so far, money donated to the funds, and fund allocations for the money donated so far
EA Funds Beta Launch (GW, IR)2017-02-28Tara MacAulay Centre for Effective AltruismEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund LaunchAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismTara MacAulay of the Centre for Effective Altruism (CEA), the parent of Effective Altruism Funds, describes the beta launch of the project. CEA will revisit within three months to decide whether to make the EA Funds permanent
Introducing the EA Funds (GW, IR)2017-02-09William MacAskill Centre for Effective AltruismEffective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund Effective Altruism Funds: Effective Altruism Infrastructure Fund Effective Altruism Funds: Long-Term Future Fund Effective Altruism Funds: Animal Welfare Fund Effective Altruism Funds: Global Health and Development Fund LaunchAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismWilliam MacAskill of the Centre for Effective Altruism (CEA) proposes EA Funds, inspired by the Shulman/Christiano donor lottery from 2016-12, while also incorporating elements of the EA Giving Group run by Nick Beckstead

Full list of donations in reverse chronological order (76 donations)

Graph of top 10 donees (for donations with known year of donation) by amount, showing the timeframe of donations

Graph of donations and their timeframes
Donee | Amount (current USD) | Amount rank (out of 76) | Donation date | Cause area | URL | Influencer | Notes
Neel Nanda (Earmark: Arthur Conmy)52,000.00302023-07AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "6 months of funding for Arthur Conmy to work with me (Neel Nanda) on mechanistic interpretability research"

Other notes: Intended funding timeframe in months: 6.
Anonymous (Earmark: SERI-MATS program|Victoria Krakovna|Tom Everitt|Jonathan Richens)52,200.00292023-07AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "6 months MATS extension to work on a paper on (dis)empowerment with Victoria Krakovna, Tom Everitt, Jonathan Richens"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. Intended funding timeframe in months: 6.
Hoagy Cunningham (Earmark: SERI-MATS program)35,300.00422023-07AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "6 month SERI MATS London extension phase for continuing and scaling up the sparse coding project"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. Intended funding timeframe in months: 6.
Ann-Kathrin Dombrowski (Earmark: SERI-MATS program)27,260.00552023-07AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "3-months salary for SERI MATS extention to work on internal concept extraction"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. Intended funding timeframe in months: 3.
Shashwat Goel (Earmark: SERI-MATS program)12,000.00702023-07AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "SERI MATS 3-month extension to study knowledge removal in Language Models"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. Intended funding timeframe in months: 3.
Anonymous (Earmark: SERI-MATS program|Neel Nanda)7,400.00742023-07AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "4-month salary for remote part-time mechanistic interpretability research under Neel Nanda extending SERI MATS research"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. Intended funding timeframe in months: 4.
Robert Miles121,575.0092023-07AI safety/movement growthhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "1yr salary to make videos and podcasts about AI Safety/Alignment, and build a community to help new people get involved"

Other notes: Intended funding timeframe in months: 12.
Alexander Turner40,000.00372023-04AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Conference publication of interpretability and LM-steering results"
Quentin Feuillade--Montixi (Earmark: SERI-MATS program)32,000.00442023-04AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "4 month extension of SERIMats in London, mentored by Janus and Nicholas Kees Dupuis to work on cyborgism"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. The grants database only provides the quarter in which the grant was made; see https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Other_grants_we_made_during_this_period (GW, IR) for the precise month. Intended funding timeframe in months: 4.
Alexander Turner30,000.00452023-04AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "5 months of funding for office space for collaboration on interpretability/model-steering alignment research"

Other notes: Intended funding timeframe in months: 5.
Alexander Turner220,000.0032023-04AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=roundOliver Habryka Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "Year-long salary for shard theory and RL mechanistic interpretability research"

Donor reason for selecting the donee: At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alexander_Turner___220_000___Year_long_stipend_for_shard_theory_and_RL_mechanistic_interpretability_research (GW, IR) grant evaluator Oliver Habryka writes, explaining the reasoning for this grant and other grants made to Turner: "The basic reasoning here is that despite me not feeling that excited about the research directions Alex keeps choosing, within the direction he has chosen, Alex has done quite high-quality work, and also seems to often have interesting and useful contributions in online discussions and private conversations. I also find his work particularly interesting, since I think that within a broad approach I often expected to be fruitless, Alex has produced more interesting insight than I expected. This in itself has made me more interested in further supporting Alex, since someone producing work that shows that I was at least partially wrong about a research direction being not very promising is more important to incentivize than work whose effects I am pretty certain of. [...] In-short, the more recent steering vector work seems like the kind of “obvious thing to try that could maybe help” that I would really like to saturate with work happening in the field, and the work on formalizing power-seeking theorems is also the kind of stuff that seems worth having done."

Donor reason for donating that amount (rather than a bigger or smaller amount): At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alexander_Turner___220_000___Year_long_stipend_for_shard_theory_and_RL_mechanistic_interpretability_research (GW, IR) grant evaluator Oliver Habryka writes, explaining the process for deciding the grant amount for this and other grants to Turner: "Another aspect of this grant that I expect to have somewhat wide-ranging consequences is the stipend level we set on. Some basic principles that have lead me to suggest this stipend level: (1) I have been using the anchor of “industry stipend minus 30%” as a useful heuristic for setting stipend levels for LTFF grants. The goal in that heuristic was to find a relatively objective standard that would allow grantees to think about stipend expectations on their own without requiring a lot of back and forth, while hitting a middle ground in the incentive landscape between salaries being so low that lots of top talent would just go into industry instead of doing impactful work, and avoiding grifter problems with people asking for LTFF grants because they expect they will receive less supervision and can probably get away without a ton of legible progress. (2) In general I think self-employed salaries should be ~20-40% higher, to account for additional costs like health insurance, payroll taxes, administration overhead, and other things that an employer often takes care of."
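
The following minimal Python sketch is an editorial illustration of the arithmetic in the heuristic Habryka describes above (an industry salary anchor reduced by 30%, then increased by roughly 20-40% to account for self-employment costs); the salary figure used is hypothetical and does not come from the grant page.

# Illustrative sketch of the stipend heuristic quoted above; the input salary is a
# made-up placeholder, not a figure from the Long-Term Future Fund.
def stipend_anchor(industry_salary, discount=0.30, self_employment_uplift=0.30):
    base = industry_salary * (1 - discount)       # "industry stipend minus 30%"
    return base * (1 + self_employment_uplift)    # add ~20-40% for self-employment costs

# Example with a hypothetical industry salary of $100,000:
# 100,000 * 0.7 = 70,000; 70,000 * 1.3 = 91,000
print(stipend_anchor(100_000))  # 91000.0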

Other notes: The Long-Term Future Fund makes a number of other related grants to Turner, including a grant at around the same time of $115,411 for a team led by Turner. The grants all share similar reasoning, that https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alexander_Turner___220_000___Year_long_stipend_for_shard_theory_and_RL_mechanistic_interpretability_research (GW, IR) describes. The grants database only provides the quarter in which the grant was made (and seems to be off by one quarter); see https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Other_grants_we_made_during_this_period (GW, IR) for the precise month. Intended funding timeframe in months: 12.
Robert Miles54,962.00282023-03AI safety/movement growthhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Fellows for the AISafety.info Distillation Fellowship, improving our single-point-of-access to AI safety"

Other notes: The grants database only provides the quarter in which the grant was made; see https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Other_grants_we_made_during_this_period (GW, IR) for the precise month.
Matt MacDermott (Earmark: SERI-MATS program)24,000.00612023-03AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "3-month salary for SERI-MATS extension"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. The grants database only provides the quarter in which the grant was made; see https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Other_grants_we_made_during_this_period (GW, IR) for the precise month. Intended funding timeframe in months: 3.
Alexander Turner115,411.00102023-03AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=roundOliver Habryka Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Writing new motivations into a policy network by understanding and controlling its internal decision-influences"; at https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alexander_Turner___220_000___Year_long_stipend_for_shard_theory_and_RL_mechanistic_interpretability_research (GW, IR) (this is mainly about another grant) Oliver Habryka writes: "We also made another grant in 2023 to a team led by Alex Turner for their post on steering vectors for $115,411 (total includes payment to 5 team members, including, without limitation, travel expenses, office space, and stipends)." https://www.lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector (GW, IR) is the linked post.

Donor reason for selecting the donee: At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alexander_Turner___220_000___Year_long_stipend_for_shard_theory_and_RL_mechanistic_interpretability_research (GW, IR) grant evaluator Oliver Habryka writes, explaining the reasoning for this grant and other grants made to Turner: "The basic reasoning here is that despite me not feeling that excited about the research directions Alex keeps choosing, within the direction he has chosen, Alex has done quite high-quality work, and also seems to often have interesting and useful contributions in online discussions and private conversations. I also find his work particularly interesting, since I think that within a broad approach I often expected to be fruitless, Alex has produced more interesting insight than I expected. This in itself has made me more interested in further supporting Alex, since someone producing work that shows that I was at least partially wrong about a research direction being not very promising is more important to incentivize than work whose effects I am pretty certain of. [...] In-short, the more recent steering vector work seems like the kind of “obvious thing to try that could maybe help” that I would really like to saturate with work happening in the field, and the work on formalizing power-seeking theorems is also the kind of stuff that seems worth having done."

Donor reason for donating that amount (rather than a bigger or smaller amount): At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alexander_Turner___220_000___Year_long_stipend_for_shard_theory_and_RL_mechanistic_interpretability_research (GW, IR) grant evaluator Oliver Habryka writes, explaining the process for deciding the grant amount for this and other grants to Turner: "Another aspect of this grant that I expect to have somewhat wide-ranging consequences is the stipend level we set on. Some basic principles that have lead me to suggest this stipend level: (1) I have been using the anchor of “industry stipend minus 30%” as a useful heuristic for setting stipend levels for LTFF grants. The goal in that heuristic was to find a relatively objective standard that would allow grantees to think about stipend expectations on their own without requiring a lot of back and forth, while hitting a middle ground in the incentive landscape between salaries being so low that lots of top talent would just go into industry instead of doing impactful work, and avoiding grifter problems with people asking for LTFF grants because they expect they will receive less supervision and can probably get away without a ton of legible progress. (2) In general I think self-employed salaries should be ~20-40% higher, to account for additional costs like health insurance, payroll taxes, administration overhead, and other things that an employer often takes care of."

Other notes: The Long-Term Future Fund makes a number of other related grants to Turner, including a grant at around the same time of $220,000 for 1 year of stipend coverage. The grants all share similar reasoning, that https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alexander_Turner___220_000___Year_long_stipend_for_shard_theory_and_RL_mechanistic_interpretability_research (GW, IR) describes. The grants database only provides the quarter in which the grant was made; see https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Other_grants_we_made_during_this_period (GW, IR) for the precise month.
Jonathan Ng (Earmark: SERI-MATS program)32,650.00432023-01AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "6-month salary for me to continue the SERI MATS project on expanding the "Discovering Latent Knowledge" paper"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. Intended funding timeframe in months: 6.
Kaarel Hänni, Kay Kozaronek, Walter Laurito, and Georgios Kaklmanos (Earmark: SERI-MATS program)167,480.0052023-01AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=roundThomas Larsen Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications). https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Kaarel_H_nni__Kay_Kozaronek__Walter_Laurito__and_Georgios_Kaklmanos___167_480___Implementing_and_expanding_on_the_research_methods_of_the___Discovering_Latent_Knowledge__paper__ (GW, IR) notes that in this case the grantees also published a blog post with a section https://www.lesswrong.com/posts/bFwigCDMC5ishLz7X/rfc-possible-ways-to-expand-on-discovering-latent-knowledge#Potential_directions (GW, IR) on potential directions to take, and this, along with the grantee's track record, was used for grant evaluation.

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "6-month salary for 4 people to continue their SERI-MATS project on expanding the "Discovering Latent Knowledge" paper"; see https://arxiv.org/abs/2212.03827 for the paper being extended, and https://www.lesswrong.com/posts/bFwigCDMC5ishLz7X/rfc-possible-ways-to-expand-on-discovering-latent-knowledge#Potential_directions (GW, IR) for potential directions that might be funded by the grant.

Donor reason for selecting the donee: At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Kaarel_H_nni__Kay_Kozaronek__Walter_Laurito__and_Georgios_Kaklmanos___167_480___Implementing_and_expanding_on_the_research_methods_of_the___Discovering_Latent_Knowledge__paper__ (GW, IR) the grant evaluator Thomas Larsen says: "My cruxes for this type of grant are: (1) If done successfully, would this project help with alignment? (2) How likely is this team to be successful? My thoughts on (1): I think that Eliciting Latent Knowledge (ELK) is an important subproblem of alignment, and I think it can be directly applied to combat deceptive alignment. [...] I think that this technique might help us detect this danger [in AGI systems], but given that we can't train against it, it doesn't let us actually fix the underlying problem. Thus, the lab will be in the difficult position of continuing on, or having to train against their detection system. I still think that incremental progress on detecting deception is good, because it can help push for a stop in capabilities growth before prematurely continuing to AGI. My thoughts on (2): [...] [The ideas described in the LessWrong post] don't seem amazing, but they seem like reasonable things to try. I expect that the majority of the benefit will come from staring at the model internals and the results of the techniques and then iterating. I hope that this process will churn out more and better ideas. [...] This team did get strong references from Colin Burns and John Wentworth, which makes me a lot more excited about the project. All things considered, I'm excited about giving this team a chance to work on this project, and see how they are doing. I'm also generally enthusiastic about teams trying their hand at alignment research. "

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. Intended funding timeframe in months: 6.
AI Safety Camp (Earmark: Remmelt Ellen)72,500.00212022-12AI safety/technical research/talent pipelinehttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Cover participant stipends for AI Safety Camp Virtual 2023"; see https://www.alignmentforum.org/posts/9AXSrp5MAThZZEfTc/ai-safety-camp-virtual-edition-2023 for the announcement and details; it says: "AI Safety Camp Virtual 8 will be a 3.5-month long online research program from 4 March to 18 June 2023, where participants form teams to work on pre-selected projects."

Donor reason for donating at this time (rather than earlier or later): The grant is made about three months prior to the start of the camp being funded, and about one month before the announcement post https://www.alignmentforum.org/posts/9AXSrp5MAThZZEfTc/ai-safety-camp-virtual-edition-2023 seeking applications.
Intended funding timeframe in months: 4
David Udell (Earmark: SERI-MATS program)100,000.00112022-10AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "One-year full-time salary to work on alignment distillation and conceptual research with Team Shard after SERI MATS"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. The grants database only provides the quarter in which the grant was made; see https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Other_grants_we_made_during_this_period (GW, IR) for the precise month. See https://www.alignmentforum.org/users/david-udell for the grantee's Alignment Forum profile and contributions. Intended funding timeframe in months: 12.
Jeremy Gillen (Earmark: SERI-MATS program)40,000.00372022-10AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "6-month salary to work on the research I started during SERI MATS, solving alignment problems in model based RL"

Other notes: This is one of several grants made by the Long-Term Future Fund to SERI-MATS scholars to work on their SERI-MATS research, including its continuation beyond the original program. The grants database only provides the quarter in which the grant was made; see https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Other_grants_we_made_during_this_period (GW, IR) for the precise month. See https://jezgillen.github.io/ for the grantee's webpage. Intended funding timeframe in months: 6.
SERI-MATS program27,000.00562022-10AI safety/technical research/talent pipelinehttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Scaleing up the number of people working in alignment theory"

Donor reason for selecting the donee: No explicit reason is given for this grant, but the reasoning is likely similar to that provided at https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#SERI_MATS_program___316_000___8_weeks_scholars_program_to_pair_promising_alignment_researchers_with_renowned_mentors___Originally_evaluated_by_Asya_Bergal_ (GW, IR) for another grant of $316,000 made to the same grantee at around the same time.

Other notes: See https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#SERI_MATS_program___316_000___8_weeks_scholars_program_to_pair_promising_alignment_researchers_with_renowned_mentors___Originally_evaluated_by_Asya_Bergal_ (GW, IR) for another grant from the Long-Term Future Fund to the SERI-MATS program for a much larger amount ($316,000) at around the same time.
SERI-MATS program316,000.0022022-10AI safety/technical research/talent pipelinehttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=roundCaleb Parikh Asya Bergal Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications). https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#SERI_MATS_program___316_000___8_weeks_scholars_program_to_pair_promising_alignment_researchers_with_renowned_mentors___Originally_evaluated_by_Asya_Bergal_ (GW, IR) says that the grant evaluator is Caleb Parikh, but that the grant was originally evaluated by Asya Bergal.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "8 weeks scholars program to pair promising alignment researchers with renowned mentors". https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#SERI_MATS_program___316_000___8_weeks_scholars_program_to_pair_promising_alignment_researchers_with_renowned_mentors___Originally_evaluated_by_Asya_Bergal_ (GW, IR) says: "SERI MATS is a program that helps established AI safety researchers find mentees. The program has grown substantially since we first provided funding, and now supports 15 mentors, but at the time, the mentors were Alex Gray, Beth Barnes, Evan Hubinger, John Wentworth, Leo Gao, Mark Xu, and Stuart Armstrong. Mentors took part in the program in Berkeley in a shared office space."

Donor reason for selecting the donee: At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#SERI_MATS_program___316_000___8_weeks_scholars_program_to_pair_promising_alignment_researchers_with_renowned_mentors___Originally_evaluated_by_Asya_Bergal_ (GW, IR) the grant evaluator Caleb Parikh writes: "When SERI MATS was founded, there were very few opportunities for junior researchers to try out doing alignment research. Many opportunities were informal mentorship positions, sometimes set up through cold emails or after connecting at conferences. The program has generally received many more qualified applicants than they have places for, and the vast majority of fellows report a positive experience of the program. I also believe the program has substantially increased the number of alignment research mentorship positions available. I think that SERI MATS is performing a vital role in building the talent pipeline for alignment research. I am a bit confused about why more organisations don’t offer larger internship programs so that the mentors can run their programs ‘in-house’. My best guess is that MATS is much better than most organisations running small internship programs for the first time, particularly in supporting their fellows holistically (often providing accommodation and putting significant effort into the MATS fellows community). One downside of the program relative to an internship at an organisation is that there are fewer natural routes to enter a managed position, though many fellows have gone on to receive LTFF grants for independent projects or continued their mentorship under the same mentor."

Donor retrospective of the donation: The Long-Term Future Fund would make several grants to SERI-MATS scholars, including some grants for the scholars continuing to work on the projects after having finished the SERI-MATS program. This suggests satisfaction with the grant outcome.

Other notes: Another grant of $27,000 is made by the Long-Term Future Fund to the SERI-MATS program, with "Scaleing up the number of people working in alignment theory" as the stated purpose. Intended funding timeframe in months: 2.
Alignment Research Center72,000.00222022-10AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=roundAsya Bergal Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "A research & networking retreat for winners of the Eliciting Latent Knowledge contest". The grant page section https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alignment_Research_Center__54_543__Support_for_a_research___networking_event_for_winners_of_the_Eliciting_Latent_Knowledge_contest (GW, IR) (written by Asya Bergal) gives further detail: "This was funding a research & networking event for the winners of the Eliciting Latent Knowledge contest run in early 2022; the plan for the event was mainly for it to be participant-led, with participants sharing what they were working on and connecting with others, along with professional alignment researchers visiting to share their own work with participants." The LessWrong post https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals (GW, IR) is linked for more detail on the Eliciting Latent Knowledge contest.

Donor reason for selecting the donee: The grant page section https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alignment_Research_Center__54_543__Support_for_a_research___networking_event_for_winners_of_the_Eliciting_Latent_Knowledge_contest (GW, IR) written by Asya Bergal says: "I think the case for this grant is pretty straightforward: the winners of this contest are (presumably) selected for being unusually likely to be able to contribute to problems in AI alignment, and retreats, especially those involving interactions with professionals in the space, have a strong track record of getting people more involved with this work."

Other notes: https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alignment_Research_Center__54_543__Support_for_a_research___networking_event_for_winners_of_the_Eliciting_Latent_Knowledge_contest (GW, IR) gives a grant amount of $54,543, but the grants database gives an amount of $72,000.
Conjecture72,827.00202022-10AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=roundCaleb Parikh Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications). According to https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Conjecture___72_827___Funding_for_a_2_day_workshop_to_connect_alignment_researchers_from_the_US__UK__and_AI_researchers_and_entrepreneurs_from_Japan_ (GW, IR) "Conjecture applied for funding to host a two day AI safety workshop in Japan in collaboration with Araya (a Japanese AI company). [...] Conjecture shared the invite list with me ahead of the event"

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "A 2-day workshop to connect alignment researchers from the US, UK, and AI researchers and entrepreneurs from Japan"; at https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Conjecture___72_827___Funding_for_a_2_day_workshop_to_connect_alignment_researchers_from_the_US__UK__and_AI_researchers_and_entrepreneurs_from_Japan_ (GW, IR) grant evaluator Caleb Parikh writes: "Conjecture applied for funding to host a two day AI safety workshop in Japan in collaboration with Araya (a Japanese AI company). They planned to invite around 40 people, with half of the attendees being AI researchers, and half being alignments researchers from the US and UK. Japanese researchers were generally senior, leading labs, holding postdoc positions in academia, or holding senior technical positions at tech companies." See https://www.lesswrong.com/posts/tAQRxccEDYZY5vxvy/japan-ai-alignment-conference (GW, IR) for the description of the conference, held March 11 and 12, 2023.

Donor reason for selecting the donee: At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Conjecture___72_827___Funding_for_a_2_day_workshop_to_connect_alignment_researchers_from_the_US__UK__and_AI_researchers_and_entrepreneurs_from_Japan_ (GW, IR) grant evaluator Caleb Parikh writes: "To my knowledge, there has been very little AI safety outreach conducted amongst strong academic communities in Asia (e.g. in Japan, Singapore, South Korea …). On the current margin, I am excited about more outreach being done in these countries within ultra-high talent groups. The theory of change for the grant seemed fairly straightforward: encourage talented researchers who are currently working in some area of AI to work on AI safety, and foster collaborations between them and the existing alignment community. Conjecture shared the invite list with me ahead of the event, and I felt good about the set of alignment researchers invited from the UK and US. I looked into the Japanese researchers briefly, but I found it harder to gauge the quality of invites given my lack of familiarity with the Japanese AI scene. I also trust Conjecture to execute operationally competently on events of this type, having assisted other AI safety organisations (such as SERI MATS) in the past. [...] Overall, I thought this grant was pretty interesting, and I think that the ex-ante case for it was pretty solid. I haven’t reviewed the outcomes of this grant yet, but I look forward to reviewing and potentially making more grants in this area."

Donor reason for donating at this time (rather than earlier or later): The timing was likely determined by the planned timing of the conference; as announced at https://www.lesswrong.com/posts/tAQRxccEDYZY5vxvy/japan-ai-alignment-conference (GW, IR), the conference was ultimately held on March 11 and 12, 2023.
Intended funding timeframe in months: 1

Donor retrospective of the donation: At https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Conjecture___72_827___Funding_for_a_2_day_workshop_to_connect_alignment_researchers_from_the_US__UK__and_AI_researchers_and_entrepreneurs_from_Japan_ (GW, IR) grant evaluator Caleb Parikh writes: "Update: Conjecture kindly directed me towards this retrospective and have informed me that some Japanese attendees of their conference are thinking of creating an alignment org." This links to the retrospective https://www.lesswrong.com/posts/Yc6cpGmBieS7ADxcS/japan-ai-alignment-conference-postmortem (GW, IR) written in April 2023 by Conjecture (the grantee) about the conference.

Other notes: Affected countries: United States|United Kingdom|Japan.
AI Safety Hub60,000.00262022-07AI safety/technical researchhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Organising paid internships for promising Oxford students to try out supervised AI Safety research projects this summer"

Other notes: Intended funding timeframe in months: 2.
Robert Miles82,000.00162022-01AI safety/movement growthhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Living expenses during project

Intended use of funds: The grants database gives the following intended use of funds: "1-year salary to make videos and podcasts about AI Safety/Alignment, and to build a community to help new people get involved"

Donor retrospective of the donation: A similar grant in 2023 Q3 from the Long-Term Future Fund to the same grantee (Robert Miles), with the same structure and purpose, suggests satisfaction with the grant outcome.

Other notes: Intended funding timeframe in months: 12.
AI Safety Support80,000.00172022-01AI safety/movement growthhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety"

Donor retrospective of the donation: AI Safety Support would be shut down about 1.5 years later; see https://forum.effectivealtruism.org/posts/Bjr6FXvnKqb37uMPP/shutting-down-ai-safety-support (GW, IR) for details.
Alexander Turner1,050.00762022-01AI safety/movement growthhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Ad campaign for "Optimal Policies Tend To Seek Power" to ML researchers on Twitter"
AI Safety Support (Earmark: JJ Hepburn)25,000.00592021-07AI safetyhttps://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round-- Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "6-month salary for JJ Hepburn to continue providing 1-on-1 support to early AI safety researchers and transition AI safety support"

Donor retrospective of the donation: A followup grant to AI Safety Support about six months later, at the end of the timeframe covered by this grant, suggests continued satisfaction with the grant outcome. AI Safety Support would be shut down about two years later; see https://forum.effectivealtruism.org/posts/Bjr6FXvnKqb37uMPP/shutting-down-ai-safety-support (GW, IR) for details.

Other notes: Intended funding timeframe in months: 6.
Rethink Priorities70,000.00232021-04-01Global catastrophic riskshttps://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grantsAsya Bergal Adam Gleave Oliver Habryka Evan Hubinger Donation process: Donee submitted grant application through the application form for the April 2021 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant is for "Researching global security, forecasting, and public communication." In more detail: "(1) Global security (conflict, arms control, avoiding totalitarianism) (2) Forecasting (estimating existential risk, epistemic challenges to longtermism) (3) Polling / message testing (identifying longtermist policies, figuring out how to talk about longtermism to the public)." The longtermist hires are Linchuan Zhang, David Reinstein, and 50% of Michael Aird.

Donor reason for selecting the donee: Regarding the researchers who would effectively be funded, at https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#rethink-priorities--70000 the grant evaluator Asya Bergal writes: "Rethink’s longtermist team is very new and is proposing work on fairly disparate topics, so I think about funding them similarly to how I would think about funding several independent researchers. Their longtermist hires are Linchuan Zhang, David Reinstein, and 50% of Michael Aird (he will be spending the rest of his time as a Research Scholar at FHI). I’m not familiar with David Reinstein. Michael Aird has produced a lot of writing over the past year, some of which I’ve found useful. I haven’t looked at any written work Linchuan Zhang has produced (and I’m not aware of anything major), but he has a good track record in forecasting, I’ve appreciated some of his EA forum comments, and my impression is that several longtermist researchers I know think he’s smart. Evaluating them as independent researchers, I think they’re both new and promising enough that I’m interested in paying for a year of their time to see what they produce." Regarding the intended uses of funds, Bergal writes: "Broadly, I am most excited about the third of these [polling / message testing], because I think there’s a clear and pressing need for it. I think work in the other two areas could be good, but feels highly dependent on the details (their application only described these broad directions)." Bergal links to https://forum.effectivealtruism.org/posts/h566GT4ECfJAB38af/some-quick-notes-on-effective-altruism?commentId=SD7rcJmY5exTR3aRu (GW, IR) for more context. Bergal also gives specific examples of areas she might be interested in.

Donor reason for donating that amount (rather than a bigger or smaller amount): At https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#rethink-priorities--70000 grant evaluator Asya Bergal writes: "We decided to pay 25% of the budget that Rethink requested, which I guessed was our fair share given Rethink’s other funding opportunities." https://80000hours.org/articles/coordination/#when-deciding-where-to-donate-consider-splitting-or-thresholds is linked for more context on fair share.
Percentage of total donor spend in the corresponding batch of donations: 4.24%
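
As an editorial illustration (not stated on the grant page), the figures above can be back-solved with a short Python snippet: $70,000 being 25% of the requested budget implies a request of about $280,000, and $70,000 being 4.24% of the batch implies a batch total of roughly $1.65 million (rough because the stated percentages are rounded).

# Back-solving the implied totals from the figures stated above; estimates only,
# since the stated percentages are rounded.
grant_amount = 70_000        # USD
share_of_request = 0.25      # "25% of the budget that Rethink requested"
share_of_batch = 0.0424      # "4.24%" of the May 2021 grant batch

implied_request = grant_amount / share_of_request    # ~280,000 USD
implied_batch_total = grant_amount / share_of_batch  # ~1,650,943 USD
print(round(implied_request), round(implied_batch_total))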

Donor reason for donating at this time (rather than earlier or later): The time is shortly after Rethink Priorities started growing its longtermist team, and is a result of Rethink Priorities seeking funding to support the longtermist team's work.
Legal Priorities Project135,000.0082021-04-01AI safety/governancehttps://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grantsDaniel Eth Asya Bergal Adam Gleave Oliver Habryka Evan Hubinger Ozzie Gooen Donation process: Grant selected from a pool of applicants. https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#legal-priorities-project--135000 says: "The Legal Priorities Project (LPP) applied for funding to hire Suzanne Van Arsdale and Renan Araújo to conduct academic legal research, and Alfredo Parra to perform operations work. All have previously been involved with the LPP, and Suzanne and Renan contributed to the LPP’s research agenda."

Intended use of funds (category): Direct project expenses

Intended use of funds: https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#legal-priorities-project--135000 says: "Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization. The Legal Priorities Project (LPP) applied for funding to hire Suzanne Van Arsdale and Renan Araújo to conduct academic legal research, and Alfredo Parra to perform operations work. All have previously been involved with the LPP, and Suzanne and Renan contributed to the LPP’s research agenda."

Donor reason for selecting the donee: https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#legal-priorities-project--135000 (written by Daniel Eth) says: "I’m excited about this grant for reasons related to LPP as an organization, the specific hires they would use the grant for, and the proposed work of the new hires." It goes into considerable further detail regarding the reasons.

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount likely determined based on the amount needed for the intended uses of the grant funds.
Percentage of total donor spend in the corresponding batch of donations: 8.18%
Center for Human-Compatible AI (Earmark: Cody Wild|Steven Wang)48,000.00352021-04-01AI safetyhttps://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grantsEvan Hubinger Oliver Habryka Asya Bergal Adam Gleave Daniel Eth Ozzie Gooen Donation process: Donee submitted grant application through the application form for the April 2021 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "hiring research engineers to support CHAI’s technical research projects." The grant page adds: "This grant is to support Cody Wild and Steven Wang in their work assisting CHAI as research engineers, funded through BERI."

Donor reason for selecting the donee: Grant investigator and main influencer Evan Hubinger writes: "Overall, I have a very high opinion of CHAI’s ability to produce good alignment researchers—Rohin Shah, Adam Gleave, Daniel Filan, Michael Dennis, etc.—and I think it would be very unfortunate if those researchers had to spend a lot of their time doing non-alignment-relevant engineering work. Thus, I think there is a very strong case for making high-quality research engineers available to help CHAI students run ML experiments. [...] both Cody and Steven have already been working with CHAI doing exactly this sort of work; when we spoke to Adam Gleave early in the evaluation process, he seems to have found their work to be positive and quite helpful. Thus, the risk of this grant hurting rather than helping CHAI researchers seems very minimal, and the case for it seems quite strong overall, given our general excitement about CHAI."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; a grant of $75,000 for a similar purpose was made to the grantee in the September 2020 round, so the timing is likely partly determined by the need to renew funding for the people (Cody Wild and Steven Wang) funded through the previous grant.

Other notes: The grant page says: "Adam Gleave [one of the fund managers] did not participate in the voting or final discussion around this grant." The EA Forum post https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 5.15%.
Donee: AI Safety Camp | Amount: $85,000.00 (amount rank 15) | Donation date: 2021-04-01 | Cause area: AI safety/technical research/talent pipeline | URL: https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants | Influencers: Oliver Habryka, Asya Bergal, Adam Gleave, Daniel Eth, Evan Hubinger, Ozzie Gooen

Donation process: Grant selected from a pool of applicants. This particular grantee had received grants in the past, and the grantmaking process was mainly based on soliciting more reviews and feedback from participants in AI Safety Camps funded by past grants.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "running a virtual and physical camp where selected applicants test their fit for AI safety research." Unlike previous grants, no specific date or time is provided for the grant.

Donor reason for selecting the donee: Grant page says: "some alumni of the camp reported very substantial positive benefits from attending the camp, while none of them reported noticing any substantial harmful consequences. [...] all alumni I reached out to thought that the camp was at worst, only a slightly less valuable use of their time than what they would have done instead, so the downside risk seems relatively limited. [...] the need for social events and workshops like this is greater than I previously thought, and that they are in high demand among people new to the AI Alignment field. [...] there is enough demand for multiple programs like this one, which reduces the grant’s downside risk, since it means that AI Safety Camp is not substantially crowding out other similar camps. There also don’t seem to be many similar events to AI Safety Camp right now, which suggests that a better camp would not happen naturally, and makes it seem like a bad idea to further reduce the supply by not funding the camp."

Donor reason for donating that amount (rather than a bigger or smaller amount): No specific reasons are given for the amount, but it is larger than previous grants, possibly reflecting the expanded scope of running both a virtual and a physical camp.
Percentage of total donor spend in the corresponding batch of donations: 5.15%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round as well as possibly by time taken to collect and process feedback from past grant participants. The pausing of in-person camps during the COVID-19 pandemic may also explain the gap since the previous grant.
Donee: AI Impacts | Amount: $75,000.00 (amount rank 18) | Donation date: 2020-09-03 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants#center-for-human-compatible-ai-75000 | Influencers: Adam Gleave, Oliver Habryka, Asya Bergal, Matt Wage, Helen Toner

Donation process: Donee submitted grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Organizational general support

Intended use of funds: Grant for "answering decision-relevant questions about the future of artificial intelligence."

Donor reason for selecting the donee: Grant investigator and main influencer Adam Gleave writes: "Their work has and continues to influence my outlook on how and when advanced AI will develop, and I often see researchers I collaborate with cite their work in conversations. [...] Overall, I would be excited to see more research into better understanding how AI will develop in the future. This research can help funders to decide which projects to support (and when), and researchers to select an impactful research agenda. We are pleased to support AI Impacts' work in this space, and hope this research field will continue to grow."

Donor reason for donating that amount (rather than a bigger or smaller amount): Grant investigator and main influencer Adam Gleave writes: "We awarded a grant of $75,000, approximately one fifth of the AI Impacts budget. We do not expect sharply diminishing returns, so it is likely that at the margin, additional funding to AI Impacts would continue to be valuable. When funding established organizations, we often try to contribute a "fair share" of organizations' budgets based on the Fund's overall share of the funding landscape. This aids coordination with other donors and encourages organizations to obtain funding from diverse sources (which reduces the risk of financial issues if one source becomes unavailable)."
Percentage of total donor spend in the corresponding batch of donations: 19.02%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Intended funding timeframe in months: 12

Other notes: The grant page says: "(Recusal note: Due to working as a contractor for AI Impacts, Asya Bergal recused herself from the discussion and voting surrounding this grant.)" The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the AI Impacts grant.
Donee: Center for Human-Compatible AI | Amount: $75,000.00 (amount rank 18) | Donation date: 2020-09-03 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants#center-for-human-compatible-ai-75000 | Influencers: Oliver Habryka, Adam Gleave, Asya Bergal, Matt Wage, Helen Toner

Donation process: Donee submitted grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support "hiring a research engineer to support CHAI’s technical research projects."

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka gives these reasons for the grant: "Over the last few years, CHAI has hosted a number of people who I think have contributed at a very high quality level to the AI alignment problem, most prominently Rohin Shah [...] I've also found engaging with Andrew Critch's thinking on AI alignment quite valuable, and I am hopeful about more work from Stuart Russell [...] the specific project that CHAI is requesting money for seems also quite reasonable to me. [...] it seems quite important for them to be able to run engineering-heavy machine learning projects, for which it makes sense to hire research engineers to assist with the associated programming tasks. The reports we've received from students at CHAI also suggest that past engineer hiring has been valuable and has enabled students at CHAI to do substantially better work."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Oliver Habryka writes: "Having thought more recently about CHAI as an organization and its place in the ecosystem of AI alignment, I am currently uncertain about its long-term impact and where it is going, and I eventually plan to spend more time thinking about the future of CHAI. So I think it's not that unlikely (~20%) that I might change my mind on the level of positive impact I'd expect from future grants like this. However, I think this holds less for the other Fund members who were also in favor of this grant, so I don't think my uncertainty is much evidence about how LTFF will think about future grants to CHAI."

Donor retrospective of the donation: A later grant round https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants includes a $48,000 grant from the LTFF to CHAI for a similar purpose, suggesting continued satisfaction and a continued positive assessment of the grantee.

Other notes: Adam Gleave, though on the grantmaking team, recused himself from discussions around this grant since he is a Ph.D. student at CHAI. Grant investigator and main influencer Oliver Habryka includes a few concerns: "Rohin is leaving CHAI soon, and I'm unsure about CHAI's future impact, since Rohin made up a large fraction of the impact of CHAI in my mind. [...] I also maintain a relatively high level of skepticism about research that tries to embed itself too closely within the existing ML research paradigm. [...] A concrete example of the problems I have seen (chosen for its simplicity more than its importance) is that, on several occasions, I've spoken to authors who, during the publication and peer-review process, wound up having to remove some of their papers' most important contributions to AI alignment. [...] Another concern: Most of the impact that Rohin contributed seemed to be driven more by distillation and field-building work than by novel research. [...] I believe distillation and field-building to be particularly neglected and valuable at the margin. I don't currently see the rest of CHAI engaging in that work in the same way." The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 19.02%.
Donee: Machine Intelligence Research Institute | Amount: $100,000.00 (amount rank 11) | Donation date: 2020-04-14 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations | Influencers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave

Intended use of funds (category): Organizational general support

Other notes: In the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ MIRI mentions the grant along with a $7.7 million grant from the Open Philanthropy Project and a $300,000 grant from Berkeley Existential Risk Initiative. Percentage of total donor spend in the corresponding batch of donations: 20.48%.
Donee: 80,000 Hours | Amount: $100,000.00 (amount rank 11) | Donation date: 2020-04-14 | Cause area: Effective altruism/movement growth/career counseling | URL: https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations | Influencers: Matt Wage, Helen Toner, Oliver Habryka, Adam Gleave

Intended use of funds (category): Organizational general support

Other notes: Percentage of total donor spend in the corresponding batch of donations: 20.48%.
Donee: AI Safety Camp | Amount: $29,000.00 (amount rank 53) | Donation date: 2019-11-21 | Cause area: AI safety/technical research/talent pipeline | URL: https://funds.effectivealtruism.org/funds/payouts/november-2019-long-term-future-fund-grants | Influencers: Matt Wage, Helen Toner, Oliver Habryka, Alex Zhu

Donation process: Grant selected from a pool of applicants. More details on the grantmaking process were not included in this round.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund the fifth AI Safety Camp. This camp is to be held in Toronto, Canada.

Donor reason for selecting the donee: The grant page says: "This round, I reached out to more past participants and received responses that were, overall, quite positive. I’ve also started thinking that the reference class of things like the AI Safety Camp is more important than I had originally thought."

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount likely determined based on what was requested in the application. It is comparable to the previous grant amounts of $25,000 and $41,000, which were also for running AI Safety Camps.
Percentage of total donor spend in the corresponding batch of donations: 6.22%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round and of when the grantee intends to hold the next AI Safety Camp.
Intended funding timeframe in months: 1

Donor retrospective of the donation: The followup $85,000 grant (2021-04-01), also investigated by Oliver Habryka, would be accompanied by a more positive assessment based on processing more feedback from camp participants.
Donee: Alexander Siegenfeld | Amount: $20,000.00 (amount rank 64) | Donation date: 2019-08-30 | Cause area: AI safety/deconfusion research | URL: https://funds.effectivealtruism.org/funds/payouts/august-2019-long-term-future-fund-grants-and-recommendations | Influencers: Alex Zhu, Helen Toner, Matt Wage, Oliver Habryka

Donation process: Grantee applied through the online application process, and was selected based on review by the fund managers. Alex Zhu was the fund manager most excited about the grant, and responsible for the public write-up.

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant for "Characterizing the properties and constraints of complex systems and their external interactions." Specifically, the grantee's "His goal is to get a better conceptual understanding of multi-level world models by coming up with better formalisms for analyzing complex systems at differing levels of scale, building off of the work of Yaneer Bar-Yam." Also: "Alexander plans to publish a paper on his research; it will be evaluated by researchers at MIRI, helping him decide how best to pursue further work in this area."

Donor reason for selecting the donee: Alex Zhu says in the grant write-up: "I decided to recommend funding to Alexander because I think his research directions are promising, and because I was personally impressed by his technical abilities and his clarity of thought. Tsvi Benson-Tilsen, a MIRI researcher, was also impressed enough by Alexander to recommend that the Fund support him." A conflict of interest is also declared: "Alexander and I have been friends since our undergraduate years at MIT."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: Percentage of total donor spend in the corresponding batch of donations: 4.55%.
Donee: AI Safety Camp | Amount: $41,000.00 (amount rank 36) | Donation date: 2019-08-30 | Cause area: AI safety/technical research/talent pipeline | URL: https://funds.effectivealtruism.org/funds/payouts/august-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner

Donation process: Grantee applied through the online application process, and was selected based on review by the fund managers. Oliver Habryka was the fund manager most excited about the grant, and responsible for the public write-up.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund the 4th AI Safety Camp (AISC) - a research retreat and program for prospective AI safety researchers. From the grant application: "Compared to past iterations, we plan to change the format to include a 3 to 4-day project generation period and team formation workshop, followed by a several-week period of online team collaboration on concrete research questions, a 6 to 7-day intensive research retreat, and ongoing mentoring after the camp. The target capacity is 25 - 30 participants, with projects that range from technical AI safety (majority) to policy and strategy research." The project would later spin off as the AI Safety Research Program https://aisrp.org/

Donor reason for selecting the donee: Habryka, in his grant write-up, says: "I generally think that hackathons and retreats for researchers can be very valuable, allowing for focused thinking in a new environment. I think the AI Safety Camp is held at a relatively low cost, in a part of the world (Europe) where there exist few other opportunities for potential new researchers to spend time thinking about these topics, and some promising people have attended. " He also notes two positive things: (1) The attendees of the second camp all produced an artifact of their research (e.g. an academic writeup or code repository). (2) Changes to the upcoming camp address some concerns raised in feedback on previous camps.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for amount given, but the amount is likely determined by the budget requested by the grantee. For comparison, the amount granted for the previous AI safety camp was $25,000, i.e., a smaller amount. The increased grant size is likely due to the new format of the camp making it longer.
Percentage of total donor spend in the corresponding batch of donations: 9.34%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round as well as intended timing of the 4th AI Safety Camp the grant is for.
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Habryka writes: "I will not fund another one without spending significantly more time investigating the program."

Other notes: Habryka notes: "After signing off on this grant, I found out that, due to overlap between the organizers of the events, some feedback I got about this camp was actually feedback about the Human Aligned AI Summer School, which means that I had even less information than I thought. In April I said I wanted to talk with the organizers before renewing this grant, and I expected to have at least six months between applications from them, but we received another application this round and I ended up not having time for that conversation." The project funded by the grant would later spin off as the AI Safety Research Program https://aisrp.org/ and the page https://aisrp.org/?page_id=116 would include details on the project outputs.
Donee: Stag Lynn | Amount: $23,000.00 (amount rank 62) | Donation date: 2019-08-30 | Cause area: AI safety/upskilling | URL: https://funds.effectivealtruism.org/funds/payouts/august-2019-long-term-future-fund-grants-and-recommendations | Influencers: Alex Zhu, Helen Toner, Matt Wage, Oliver Habryka

Donation process: Grantee applied through the online application process, and was selected based on review by the fund managers. Alex Zhu was the fund manager most excited about the grant, and responsible for the public write-up. Alex Zhu's write-up disclosed a potential conflict of interest because Stag was living with him and helping him with odd jobs. So, comments from Oliver Habryka, another fund manager, are also included.

Intended use of funds (category): Living expenses during project

Intended use of funds: Grantee's "current intention is to spend the next year improving his skills in a variety of areas (e.g. programming, theoretical neuroscience, and game theory) with the goal of contributing to AI safety research, meeting relevant people in the x-risk community, and helping out in EA/rationality related contexts wherever he can (eg, at rationality summer camps like SPARC and ESPR)." Two projects he may pursue include (1) working to implement certificates of impact in the EA/X-risk community, (2) working as an unpaid personal assistant to someone in EA who is sufficiently busy for this form of assistance to be useful, and sufficiently productive for the assistance to be valuable

Donor reason for selecting the donee: Alex Zhu, the fund manager most excited about the grant, writes: "I recommended funding Stag because I think he is smart, productive, and altruistic, has a track record of doing useful work, and will contribute more usefully to reducing existential risk by directly developing his capabilities and embedding himself in the EA community than he would by finishing his undergraduate degree or working a full-time job." Oliver Habryka, another fund manager, writes: "I’ve interacted with Stag in the past and have broadly positive impressions of him, in particular his capacity for independent strategic thinking." He cites Stag's success in Latvian and Galois Mathematics Olympiads, and Stag's contributions to improving ESPR and SPARC, as well as Stag's decision to contribute to those projects, taking this as "another signal of Stag’s talent at selecting and/or improving projects."

Donor reason for donating that amount (rather than a bigger or smaller amount): No amount-specific reason given, but the amount is likely selected to cover a reasonable fraction of living costs for a year
Percentage of total donor spend in the corresponding batch of donations: 5.24%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Intended funding timeframe in months: 12
Donee: Roam Research | Amount: $10,000.00 (amount rank 71) | Donation date: 2019-08-30 | Cause area: Epistemic institutions | URL: https://funds.effectivealtruism.org/funds/payouts/august-2019-long-term-future-fund-grants-and-recommendations | Influencers: Alex Zhu, Helen Toner, Matt Wage, Oliver Habryka

Donation process: Grantee applied through the online application process, and was selected based on review by the fund managers. Alex Zhu was the fund manager most excited about the grant, and responsible for the public write-up.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the continued development of Roam, a web application from Conor White-Sullivan that fills a similar niche to Workflowy. Roam automates the Zettelkasten method, "a note-taking / document-drafting process based on physical index cards." The grant write-up says: "This funding will support Roam’s general operating costs, including expenses for Conor, one employee, and several contractors."

Donor reason for selecting the donee: Fund manager Alex Zhu writes: "On my inside view, if Roam succeeds, an experienced user of the note-taking app Workflowy will get at least as much value switching to Roam as they got from using Workflowy in the first place. (Many EAs, myself included, see Workflowy as an integral part of our intellectual process, and I think Roam might become even more integral than Workflowy.)" He links to Sarah Constantin's posts on Roam: https://www.facebook.com/sarah.constantin.543/posts/242611079943317 and https://srconstantin.posthaven.com/how-to-make-a-memex

Other notes: Percentage of total donor spend in the corresponding batch of donations: 2.28%.
Donee: High Impact Policy Engine | Amount: $60,000.00 (amount rank 26) | Donation date: 2019-08-30 | Cause area: Effective altruism/government policy | URL: https://funds.effectivealtruism.org/funds/payouts/august-2019-long-term-future-fund-grants-and-recommendations | Influencers: Helen Toner, Matt Wage, Oliver Habryka, Alex Zhu

Donation process: Grantee applied through the online application process, and was selected based on review by the fund managers. Helen Toner was the fund manager most excited about the grant, and responsible for the public write-up.

Intended use of funds (category): Direct project expenses

Intended use of funds: According to the grant write-up: "This grant funds part of the cost of a full-time staff member for two years, plus some office and travel costs." Also: "HIPE’s primary activities are researching how to have a positive impact in the UK government; disseminating their findings via workshops, blog posts, etc.; and providing one-on-one support to interested individuals."

Donor reason for selecting the donee: The grant write-up says: "Our reasoning for making this grant is based on our impression that HIPE has already been able to gain some traction as a volunteer organization, and on the fact that they now have the opportunity to place a full-time staff member within the Cabinet Office. [...] The fact that the Cabinet Office is willing to provide desk space and cover part of the overhead cost for the staff member suggests that HIPE is engaging successfully with its core audiences."

Donor reason for donating that amount (rather than a bigger or smaller amount): Explicit calculations for the amount are not included, but the grant write-up says that it funds "part of the cost of a full-time staff member for two years, plus some office and travel costs." At around the same time, the Meta Fund grants $40,000 to HIPE, also to cover these costs. It is likely that the combined $100,000 covers part or all of the cost.
Percentage of total donor spend in the corresponding batch of donations: 13.67%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round, as well as by the opportunity that has been opened by the potential for a two-year job in the UK civil service if HIPE secures funding
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: The write-up says: "HIPE does not yet have robust ways of tracking its impact, but they expressed strong interest in improving their impact tracking over time. We would hope to see a more fleshed-out impact evaluation if we were asked to renew this grant in the future."

Other notes: Helen Toner, the fund manager most excited about the grant and the author of the grant write-up, writes: "I’ll add that I personally see promise in the idea of services that offer career discussion, coaching, and mentoring in more specialized settings. (Other fund members may agree with this, but it was not part of our discussion when deciding whether to make this grant, so I’m not sure.)". Affected countries: United Kingdom.
Donee: Alexander Gietelink Oldenziel | Amount: $30,000.00 (amount rank 45) | Donation date: 2019-08-30 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/august-2019-long-term-future-fund-grants-and-recommendations | Influencers: Alex Zhu, Helen Toner, Matt Wage, Oliver Habryka

Donation process: Grantee applied through the online application process, and was selected based on review by the fund managers. Alex Zhu was the fund manager most excited about the grant, and responsible for the public write-up.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the work of Alexander Gietelink Oldenziel who is interning at the Machine Intelligence Research Institute (MIRI) at the time of the grant. The grant money provides additional resources for the grantee to continue digging deeper into the topics after his internship at MIRI ends (while staying in regular contact with MIRI researchers); the write-up estimates that it will last him 1.5 years.

Donor reason for selecting the donee: The reasons are roughly similar to the Long-Term Future Fund's past reasons for supporting MIRI and its research agenda, as outlined in the April 2019 report (https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations). Also, Alex Zhu says in the grant write-up: "I have also spoken to him in some depth, and was impressed both by his research taste and clarity of thought."

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount chosen to be sufficient to allow the grantee to continue digging into AI safety for 1.5 years after his internship with MIRI ends
Percentage of total donor spend in the corresponding batch of donations: 6.83%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round, and also by the grantee's internship with MIRI coming to an end
Intended funding timeframe in months: 18
Donee: Jacob Lagerros | Amount: $27,000.00 (amount rank 56) | Donation date: 2019-03-20 | Cause area: AI safety/forecasting | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Living expenses during project|Direct project expenses

Intended use of funds: Grant to build a private platform where AI safety and policy researchers have direct access to a base of superforecaster-equivalents. Lagerros previously received two grants to work on the project: a half-time salary from Effective Altruism Grants, and a grant for direct project expenses from Berkeley Existential Risk Initiative.

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka notes the same high-level reasons for the grant as for the similar grants to Anthony Aguirre (Metaculus) and Ozzie Gooen (Foretold); the general reasons are explained in the grant write-up for Gooen. Habryka also mentions that Lagerros has been around the community for 3 years, has done useful work, and has received other funding. Habryka mentions he did not assess the grant in detail; the main reason for granting from the Long-Term Future Fund was logistical complications with other grantmakers (FHI and BERI), who had already vouched for the value of the project.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.92%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) The comments discuss this and the other forecasting grants, and include the question "why are you acting as grant-givers here rather than as special interest investors?" It is also included in a list of potentially concerning grants in a portfolio evaluation comment https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM (GW, IR) by Evan Gaensbauer.
Donee: Nikhil Kunapuli | Amount: $30,000.00 (amount rank 45) | Donation date: 2019-03-20 | Cause area: AI safety/deconfusion research | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Living expenses during project

Intended use of funds: Grantee is doing independent deconfusion research for AI safety. His approach is to develop better foundational understandings of various concepts in AI safety, like safe exploration and robustness to distributional shift, by exploring these concepts in complex systems science and theoretical biology, domains outside of machine learning for which these concepts are also applicable.

Donor reason for selecting the donee: Fund manager Alex Zhu says: "I recommended that we fund Nikhil because I think Nikhil’s research directions are promising, and because I personally learn a lot about AI safety every time I talk with him."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Donor thoughts on making further donations to the donee: Alex Zhu, in his grant write-up, says that the quality of the work will be assessed by researchers at MIRI. Although it is not explicitly stated, it is likely that this evaluation will influence the decision of whether to make further grants

Other notes: The grant reasoning is written up by Alex Zhu and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Donee: Anand Srinivasan | Amount: $30,000.00 (amount rank 45) | Donation date: 2019-03-20 | Cause area: AI safety/deconfusion research | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Living expenses during project

Intended use of funds: Grantee is doing independent deconfusion research for AI safety. His angle of attack is to develop a framework that will allow researchers to make provable claims about what specific AI systems can and cannot do, based off of factors like their architectures and their training processes.

Donor reason for selecting the donee: Grantee worked with main grant influencer Alex Zhu at an enterprise software company that they cofounded. Alex Zhu says in his grant write-up: "I recommended that we fund Anand because I think Anand’s research directions are promising, and I personally learn a lot about AI safety every time I talk with him."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Donor thoughts on making further donations to the donee: Alex Zhu, in his grant write-up, says that the quality of the work will be assessed by researchers at MIRI. Although it is not explicitly stated, it is likely that this evaluation will influence the decision of whether to make further grants

Other notes: The quality of grantee's work will be judged by researchers at the Machine Intelligence Research Institute. The grant reasoning is written up by Alex Zhu and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Donee: Effective Altruism Russia (Earmark: Mikhail Yagudin) | Amount: $28,000.00 (amount rank 54) | Donation date: 2019-03-20 | Cause area: Epistemic institutions | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted). The grant would ultimately not be funded by CEA; while CEA was deciding whether to fund the grant, a private donor stepped in to fund the grant.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to Mikhail Yagudin for Effective Altruism Russia to give copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020.

Donor reason for selecting the donee: In the grant write-up, Oliver Habryka explains his evaluation of the grant as based on three questions: (1) What effects does reading HPMOR have on people? (2) How good of a target group are Math Olympiad winners for these effects? (3) Is the team competent enough to execute on their plan?

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee). The comments include more discussion of the unit economics of the grant, and whether the effective cost of $43/copy is reasonable
Percentage of total donor spend in the corresponding batch of donations: 3.03%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed. The need to secure money in advance of the events for which the money will be used likely affected the timing of the application

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) There is a lot of criticism and discussion of the grant in the comments. The grant would ultimately not be funded by CEA; while CEA was deciding whether to fund the grant, a private donor stepped in to fund the grant.
Donee: David Girardo | Amount: $30,000.00 (amount rank 45) | Donation date: 2019-03-20 | Cause area: AI safety/deconfusion research | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Living expenses during project

Intended use of funds: Grantee is doing independent deconfusion research for AI safety. His angle of attack is to elucidate the ontological primitives for representing hierarchical abstractions, drawing from his experience with type theory, category theory, differential geometry, and theoretical neuroscience.

Donor reason for selecting the donee: The main investigator and influencer for the grant, Alex Zhu, finds the research directions promising. Tsvi Benson-Tilsen, a MIRI researcher, has also recommended that grantee get funding.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Donor thoughts on making further donations to the donee: The quality of the grantee's work will be assessed by researchers at MIRI

Other notes: The grant reasoning is written up by Alex Zhu and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Donee: Machine Intelligence Research Institute | Amount: $50,000.00 (amount rank 31) | Donation date: 2019-03-20 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka believes that MIRI is making real progress in its approach of "creating a fundamental piece of theory that helps humanity to understand a wide range of powerful phenomena." He notes that MIRI started work on the alignment problem long before it became cool, which gives him more confidence that they will do the right thing and that even their seemingly weird actions may be justified in ways that are not yet obvious. He also thinks that both the research team and the ops staff are quite competent.

Donor reason for donating that amount (rather than a bigger or smaller amount): Habryka offers the following reasons for giving a grant of just $50,000, which is small relative to the grantee's budget: (1) MIRI is in a solid position funding-wise, and marginal use of money may be lower-impact. (2) There is a case for investing in helping grow a larger and more diverse set of organizations, as opposed to putting money into a few stable and well-funded organizations.
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Oliver Habryka writes: "I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). Despite his positive assessment of MIRI, Habryka recommends a relatively small grant, because the organization is already relatively well-funded and not heavily bottlenecked on funding; he says he will think more about the marginal value of further funding before the next grant round.
Donee: Tegan McCaslin | Amount: $30,000.00 (amount rank 45) | Donation date: 2019-03-20 | Cause area: AI safety/forecasting | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant for independent research projects relevant to AI forecasting and strategy, including (but not necessarily limited to) some of the following: (1) Does the trajectory of AI capability development match that of biological evolution? (2) How tractable is long-term forecasting? (3) How much compute did evolution use to produce intelligence? (4) Benchmarking AI capabilities against insects. Short doc on (1) and (2) at https://docs.google.com/document/d/1hTLrLXewF-_iJiefyZPF6L677bLrUTo2ziy6BQbxqjs/edit

Donor reason for selecting the donee: Reasons for the grant from Oliver Habryka, the main influencer, include: (1) It's easier to relocate someone who has already demonstrated trust and skills than to find someone completely new, (2.1) It's important to give good researchers runway while they find the right place. Habryka notes: "my brief assessment of Tegan’s work was not the reason why I recommended this grant, and if Tegan asks for a new grant in 6 months to focus on solo research, I will want to spend significantly more time reading her output and talking with her, to understand how these questions were chosen and what precise relation they have to forecasting technological progress in AI."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee). Habryka also mentions that he is interested only in providing limited runway, and would need to assess much more carefully for a more long-term grant
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. However, it is also related to the grantee's situation (she has just quit her job at AI Impacts, and needs financial runway to continue pursuing promising research projects)
Intended funding timeframe in months: 6

Donor thoughts on making further donations to the donee: The grant investigator Oliver Habryka notes: "if Tegan asks for a new grant in 6 months to focus on solo research, I will want to spend significantly more time reading her output and talking with her, to understand how these questions were chosen and what precise relation they have to forecasting technological progress in AI."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) The comments on the post do not discuss this specific grant, but a grant to Lauren Lee that includes somewhat similar reasoning (providing people runway after they leave their jobs, so they can explore better) attracts some criticism.
Donee: Metaculus (Earmark: Anthony Aguirre) | Amount: $70,000.00 (amount rank 23) | Donation date: 2019-03-20 | Cause area: Epistemic institutions/forecasting | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to Anthony Aguirre to expand the Metaculus prediction platform along with its community. Metaculus.com is a fully-functional prediction platform with ~10,000 registered users and >120,000 predictions made to date on more than 1,000 questions. The two major high-priority expansions are: (1) An integrated set of extensions to improve user interaction and information-sharing. This would include private messaging and notifications, private groups, a prediction “following” system to create micro-teams within individual questions, and various incentives and systems for information-sharing. (2) Linking questions into a network. Users would express links between questions, from very simple (“notify me regarding question Y when P(X) changes substantially”) to more complex (“Y happens only if X happens, but not conversely”, etc.). Information can also be gleaned from what users actually do.

Donor reason for selecting the donee: The grant investigator and main influencer, Oliver Habryka, refers to reasoning included in the grant to Ozzie Gooen for Foretold, which was made in the same batch of grants and is described at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). He also lists these reasons for liking Metaculus: (1) Valuable service in the past few years, (2) Cooperation with the X-risk space to get answers to important questions.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee requested $150,000, but Oliver Habryka, the grant investigator, was not confident enough in the grant to recommend the full amount. Some concerns mentioned: (1) Lack of a dedicated full-time resource, (2) Overlap with the Good Judgment Project, which reduces its access to resources and people.
Percentage of total donor spend in the corresponding batch of donations: 7.58%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) The comments discuss this and the other forecasting grants, and include the question "why are you acting as grant-givers here rather than as special interest investors?" It is also included in a list of potentially concerning grants in a portfolio evaluation comment https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM (GW, IR) by Evan Gaensbauer.
Donee: Orpheus Lummis | Amount: $10,000.00 (amount rank 71) | Donation date: 2019-03-20 | Cause area: AI safety/upskilling | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant for upskilling in contemporary AI techniques, deep RL and AI safety, before pursuing a ML PhD. Notable planned subprojects: (1) Engaging with David Krueger’s AI safety reading group at Mila (then known as the Montreal Institute for Learning Algorithms) (2) Starting & maintaining a public index of AI safety papers, to help future literature reviews and to complement https://vkrakovna.wordpress.com/ai-safety-resources/ as a standalone wiki-page (eg at http://aisafetyindex.net ) (3) From-scratch implementation of seminal deep RL algorithms (4) Going through textbooks: Goodfellow Bengio Courville 2016, Sutton Barto 2018 (5) Possibly doing the next AI Safety camp (6) Building a prioritization tool for English Wikipedia using NLP, building on the literature of quality assessment (https://paperpile.com/shared/BZ2jzQ) (7) Studying the AI Alignment literature

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka is impressed with the results of the AI Safety Unconference organized by Lummis after NeurIPS with Long-Term Future Fund money. However, he is not confident in the grant, writing: "I don’t know Orpheus very well, and while I have received generally positive reviews of their work, I haven’t yet had the time to look into any of those reviews in detail, and haven’t seen clear evidence about the quality of their judgment." Habryka also favors more time for self-study and reflection, and is excited about growing the Montreal AI alignment community. Finally, Habryka thinks the grant amount is small and is unlikely to have negative consequences.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee). The small amount is also one reason grant investigator Oliver Habryka is comfortable making the grant despite not investigating thoroughly
Percentage of total donor spend in the corresponding batch of donations: 1.08%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) The comments on the post do not discuss this specific grant.
Donee: Lauren Lee | Amount: $20,000.00 (amount rank 64) | Donation date: 2019-03-20 | Cause area: Rationality community | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted). The grant would ultimately be funded by a private donor after CEA declined to fund the grant due to it not meeting the necessary legal requirements for individual grants.

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant for working to prevent burnout and boost productivity within the EA and X-risk communities. From the grant application: (1) Grant requested to spend the coming year thinking about rationality and testing new projects. (2) The goal is to help individuals and orgs in the x-risk community orient towards and achieve their goals. (A) Training the skill of dependability. (B) Thinking clearly about AI risk. (C) Reducing burnout. (3) Measurable outputs include programs with 1-on-1 sessions with individuals or orgs, X-risk orgs spending time/money on services, writings or talks, workshops with feedback forms, and improved personal effectiveness

Donor reason for selecting the donee: Grant investigator and main influencer Habryka describes his grant reasoning as follows: "In sum, this grant hopefully helps Lauren to recover from burning out, get the new rationality projects she is working on off the ground, potentially identify a good new niche for her to work in (alone or at an existing organization), and write up her ideas for the community."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.17%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Intended funding timeframe in months: 6

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Oliver Habryka qualifies the likelihood of giving another grant as follows: "I think that she should probably aim to make whatever she does valuable enough that individuals and organizations in the community wish to pay her directly for her work. It’s unlikely that I would recommend renewing this grant for another 6 month period in the absence of a relatively exciting new research project/direction, and if Lauren were to reapply, I would want to have a much stronger sense that the projects she was working on were producing lots of value before I decided to recommend funding her again."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) The grant receives criticism in the comments, including 'This is ridiculous, I'm sure she's a great person but please don't use the gift you received to provide sinecures to people "in the community"'. The grant would ultimately be funded by a private donor after CEA declined to fund the grant due to it not meeting the necessary legal requirements for individual grants.
Donee: Kocherga (Earmark: Vyacheslav Matyuhin) | Amount: $50,000.00 (amount rank 31) | Donation date: 2019-03-20 | Cause area: Rationality community | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to Vyacheslav Matyuhin for Kocherga, an offline community hub for rationalists and EAs in Moscow. Kocherga's concrete plans with the grant include: (1) Add 2 more people to the team. (2) Implement a new community-building strategy. (3) Improve the rationality workshops.

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka notes that the Russian rationality community has been successful, with projects such as https://lesswrong.ru (Russian translation of the LessWrong sequences), a kickstarter to distribute copies of HPMOR, and Kocherga, a financially self-sustaining anti-cafe in Moscow that hosts a variety of events for roughly 100 attendees per week. The grant reasoning references the LessWrong post https://www.lesswrong.com/posts/WmfapdnpFfHWzkdXY/rationalist-community-hub-in-moscow-3-years-retrospective (GW, IR) by Kocherga. The grant is being made by the Long-Term Future Fund because the EA Meta Fund decided not to make it.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). Affected countries: Russia.
Connor Flexman | 20,000.00 | 64 | 2019-03-20 | AI safety/forecasting | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to perform independent research in collaboration with John Salvatier

Donor reason for selecting the donee: The grant was originally requested by John Salvatier (who is already funded by an EA Grant) as a grant to Salvatier to hire Flexman to help him. But Oliver Habryka (the primary person on whose recommendation the grant was made) ultimately decided to give the money to Flexman directly, to give him more flexibility to switch if the work with Salvatier does not go well. Despite his reservations, Habryka considers significant negative consequences unlikely. He also says: "I assign some significant probability that this grant can help Connor develop into an excellent generalist researcher of a type that I feel like EA is currently quite bottlenecked on." Habryka notes two further reservations: a potential conflict of interest, because he lives in the same house as the recipient, and a lack of concrete, externally verifiable evidence of competence

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.17%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). Habryka was the primary person on whose recommendation the grant was made. Habryka replies to a comment giving ideas on what independent research Flexman might produce if he stops working with Salvatier.
Eli Tyre | 30,000.00 | 45 | 2019-03-20 | Epistemic institutions | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support projects for rationality and community building interventions. Example projects: facilitating conversations between top people in AI alignment, organizing advanced workshops on double crux, doing independent research projects such as https://www.lesswrong.com/posts/tj8QP2EFdP8p54z6i/historical-mathematicians-exhibit-a-birth-order-effect-too (GW, IR) (evaluating birth order effects in mathematicians), providing new EAs and rationalists with advice and guidance on how to get traction on working on important problems, and helping John Salvatier develop techniques around skill transfer. Grant investigator and main influencer Oliver Habryka writes: "the goal of this grant is to allow [Eli Tyre] to take actions with greater leverage by hiring contractors, paying other community members for services, and paying for other varied expenses associated with his projects."

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka is excited about the projects Tyre is interested in working on, and writes: "Eli has worked on a large variety of interesting and valuable projects over the last few years, many of them too small to have much payment infrastructure, resulting in him doing a lot of work without appropriate compensation. I think his work has been a prime example of picking low-hanging fruit by using local information and solving problems that aren’t worth solving at scale, and I want him to have resources to continue working in this space."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR).
Shahar Avin | 40,000.00 | 37 | 2019-03-20 | AI safety | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Matt Wage, Helen Toner, Matt Fallshaw, Alex Zhu, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: Hiring an academic research assistant and other miscellaneous research expenses, for scaling up scenario role-play for AI strategy research and training.

Donor reason for selecting the donee: Donor writes: "We think positively of Shahar’s past work (for example this report), and multiple people we trust recommended that we fund him." The linked report is https://maliciousaireport.com/

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 4.33%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Tessa Alexanian | 26,250.00 | 58 | 2019-03-20 | Biosecurity and pandemic preparedness | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Matt Wage, Helen Toner, Matt Fallshaw, Alex Zhu, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: A one-day biosecurity summit, immediately following the SynBioBeta industry conference.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.84%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed
Intended funding timeframe in months: 1

Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Effective Altruism Zürich (Earmark: Alex Lintz) | 17,900.00 | 68 | 2019-03-20 | AI safety | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Helen Toner, Matt Wage, Matt Fallshaw, Alex Zhu, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: A two-day workshop by Alex Lintz and collaborators from EA Zürich for effective altruists interested in AI governance careers, with the goals of giving participants background on the space, offering career advice, and building community.

Donor reason for selecting the donee: Donor writes: "We agree with their assessment that this space is immature and hard to enter, and believe their suggested plan for the workshop looks like a promising way to help participants orient to careers in AI governance."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 1.93%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed
Intended funding timeframe in months: 1

Other notes: The grant reasoning is written up by Helen Toner and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Robert Miles | 39,000.00 | 41 | 2019-03-20 | AI safety/content creation/video | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to create video content on AI alignment. Grantee has a YouTube channel at https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg (average 20,000 views per video) and also creates videos for the Computerphile channel https://www.youtube.com/watch?v=3TYT1QfdfsM (often more than 100,000 views per video).

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka favors the grant for these reasons: (1) Grantee explains AI alignment as primarily a technical problem, not a moral or political problem, (2) Grantee does not politicize AI safety, (3) Grantee's goal is to create interest in these problems from future researchers, and not to simply get as large of an audience as possible. Habryka notes that the grantee is the first skilled person in the X-risk community working full-time on producing video content. "Being the very best we have in this skill area, he is able to help the community in a number of novel ways (for example, he’s already helping existing organizations produce videos about their ideas)." In the previous grant round, the grantee had requested funding for a collaboration with RAISE to produce videos for them, but Habryka felt it was better to fund the grantee directly and allow him to decide which organizations he wanted to help with his videos.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 4.22%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor retrospective of the donation: Several followup grants to Robert Miles suggest continued satisfaction with the grant outcome.

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) in addition to being on https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations (the grant page). As of 2023-11-26, this grant is missing from the grants database https://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round but is still listed on the payout page for that batch of grants.
Lucius Caviola | 50,000.00 | 31 | 2019-03-20 | Effective altruism/long-termism | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Matt Wage, Helen Toner, Matt Fallshaw, Alex Zhu, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted). Donee also applied to the EA Meta Fund (another of the Effective Altruism Funds) and the total funding for the donee was split between the funds

Intended use of funds (category): Living expenses during project

Intended use of funds: Part of the costs for a 2-year postdoc at Harvard working with Professor Joshua Greene. Grantee plans to study the psychology of effective altruism and long-termism. The funding from the Long-Term Future Fund is roughly intended to cover the part of the costs that corresponds to the work on long-termism

Donor reason for donating that amount (rather than a bigger or smaller amount): Total funding requested by the donee appears to be $130,000. Of this, $80,000 is provided by the EA Meta Fund in their March 2019 grant round https://funds.effectivealtruism.org/funds/payouts/march-2019-ea-meta-fund-grants to cover the donee's work on effective altruism, while the remaining $50,000 is provided through this grant by the Long-Term Future Fund, and covers the work on long-termism. The reason for splitting funding in this way is not articulated
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed. However, the write-up for the $80,000 grant provided by the EA Meta Fund https://funds.effectivealtruism.org/funds/payouts/march-2019-ea-meta-fund-grants calls the grant a "time-bounded, specific opportunity that requires funding to initiate and explore" and similar reasoning may also apply to the $50,000 Long-Term Future Fund grant
Intended funding timeframe in months: 24

Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Alexander Turner | 30,000.00 | 45 | 2019-03-20 | AI safety/agent foundations | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant for building towards a “Limited Agent Foundations” thesis on mild optimization and corrigibility. Grantee is a third-year computer science PhD student funded by a graduate teaching assistantship; to dedicate more attention to alignment research, he is applying for one or more trimesters of funding (spring term starts April 1).

Donor reason for selecting the donee: In the grant write-up, Oliver Habryka explains that he is excited by (a) Turner's posts to LessWrong reviewing many math textbooks useful for thinking about the alignment problem, (b) Turner not being intimidated by the complexity of the problem, and (c) Turner writing up his thoughts and hypotheses in a clear way, seeking feedback on them early, and making a set of novel contributions to an interesting sub-field of AI Alignment quite quickly (in the form of his work on impact measures, on which he recently collaborated with the DeepMind AI Safety team).

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed
Intended funding timeframe in months: 4

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). The comments on the post do not discuss this specific grant. As of 2023-11-26, the grants database https://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round does not include this grant.
Foretold (Earmark: Ozzie Gooen) | 70,000.00 | 23 | 2019-03-20 | Epistemic institutions/forecasting | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant will mainly be used by Ozzie Gooen to pay programmers to work on Foretold at http://www.foretold.io/, a forecasting application that handles full probability distributions. This includes work on Ken.js, a private version of Wikidata that Gooen has started integrating with Foretold.

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka gives these reasons for the grant, as well as for other forecasting-related grants made to Anthony Aguirre (Metaculus) and Jacob Lagerros: (1) confusion about what constitutes progress and what problems need solving, (2) the need for many people to collaborate and document, (3) low-hanging fruit in designing better online platforms for making intellectual progress (Habryka works on LessWrong 2.0 for that reason, and Gooen has past experience in the space from building Guesstimate), (4) promise and tractability of forecasting platforms in particular (for instance, work by Philip Tetlock and by Robin Hanson), (5) even though some platforms, such as PredictionBook and Guesstimate, did not get the traction they expected, others like the Good Judgment Project have been successful, so one should not overgeneralize from a few failures. In addition, Habryka has a positive impression of Gooen from both in-person interaction and online writing.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 7.58%

Donor reason for donating at this time (rather than earlier or later): Timing determined partly by timing of grant round. Gooen was a recipient of a previous $20,000 grant from the same fund (the EA Long-Term Future Fund) and found the money very helpful. He applied for more money in this round to scale the project up further

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). The comments discuss this and the other forecasting grants, and include the question "why are you acting as grant-givers here rather than as special interest investors?" It is also included in a list of potentially concerning grants in a portfolio evaluation comment https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM (GW, IR) by Evan Gaensbauer.
Center for Applied Rationality | 150,000.00 | 7 | 2019-03-20 | Epistemic institutions | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organizational general support

Intended use of funds: The grant is to help the Center for Applied Rationality (CFAR) survive as an organization for the next few months (i.e., until the next grant round, about 3 months later) without having to scale down operations. CFAR is low on finances because it did not run a 2018 fundraiser; it felt that running a fundraiser would be in bad taste after what it considered a mistake on its part in the Brent Dill situation.

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka thinks CFAR intro workshops have had positive impact in 3 ways: (1) establishing epistemic norms, (2) training, and (3) recruitment into the X-risk network (especially AI safety). He also thinks CFAR faces many challenges, including the departure of many key employees, the difficulty of attracting top talent, and a dilution of its truth-seeking focus. However, he is enthusiastic about joint CFAR/MIRI workshops for programmers, where CFAR provides instructors. His final reason for donating is to avoid CFAR having to scale down due to its funding shortfall because it didn't run the 2018 fundraiser

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant amount, which is the largest in this grant round from the EA Long-Term Future Fund, is chosen to be sufficient for CFAR to continue operating as usual till the next grant round from the EA Long-Term Future Fund (in about 3 months). Habryka further elaborates in https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-recommendations#uhH4ioNbdaFrwGt4e (GW, IR) in reply to Milan Griffes, explaining why the grant is large and unrestricted
Percentage of total donor spend in the corresponding batch of donations: 16.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round, as well as by CFAR's time-sensitive financial situation; the grant round is a few months after the end of 2018, so the shortfall from not running the 2018 fundraiser is starting to hit CFAR's finances
Intended funding timeframe in months: 3

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Oliver Habryka writes: "I didn’t have enough time this grant round to understand how the future of CFAR will play out; the current grant amount seems sufficient to ensure that CFAR does not have to take any drastic action until our next grant round. By the next grant round, I plan to have spent more time learning and thinking about CFAR’s trajectory and future, and to have a more confident opinion about what the correct funding level for CFAR is."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). In the comments, Milan Griffes asks why such a large, unrestricted grant is being made to CFAR despite these concerns, and also what Habryka hopes to learn about CFAR before the next grant round. There are replies from Peter McCluskey and Habryka, with some further comment back-and-forth.
Ought | 50,000.00 | 31 | 2019-03-20 | AI safety | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Matt Wage, Helen Toner, Matt Fallshaw, Alex Zhu, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organizational financial buffer

Intended use of funds: No specific information is shared on how the funds will be used at the margin, but the general description gives an idea: "Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications"

Donor reason for selecting the donee: The donor is explicitly interested in diversifying the funder base for the donee, which currently receives almost all its funding from only two sources and is trying to change that. Otherwise, the reasons are the same as in the previous grant round https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants namely: "We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. [...] Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): In the write-up for the previous grant of $10,000 at https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants the donor says: "Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant." The amount this time is bigger ($50,000), but the general principle likely continues to apply
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): In the previous grant round, donor had said "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." Thus, it makes sense to donate again in this round

Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
AI Safety Camp (Earmark: Johannes Heidecke) | 25,000.00 | 59 | 2019-03-20 | AI safety/technical research/talent pipeline | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund an upcoming camp in Madrid being organized by AI Safety Camp in April 2019. The camp consists of several weeks of online collaboration on concrete research questions, culminating in a 9-day intensive in-person research camp. The goal is to support aspiring researchers of AI alignment to boost themselves into productivity.

Donor reason for selecting the donee: The grant investigator and main influencer Oliver Habryka mentions that: (1) he has a positive impression of the organizers and has received positive feedback from participants in the first two AI Safety Camps; (2) he sees a greater need to improve access to opportunities in AI alignment for people in Europe. Habryka also notes an associated risk that the AI Safety Camp becomes the focal point of the AI safety community in Europe, which could cause problems if the quality of the people involved isn't high. He mentions two more specific concerns: (a) organizing long in-person events is hard, and can lead to conflict, as the last two camps did; (b) people who don't get along with the organizers may find themselves shut out of the AI safety network.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee).
Percentage of total donor spend in the corresponding batch of donations: 2.71%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of the camp (which is scheduled for April 2019; the grant is being made around the same time) as well as the timing of the grant round.
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Habryka writes: "I would want to engage with the organizers a fair bit more before recommending a renewal of this grant."

Donor retrospective of the donation: The August 2019 grant round would include a $41,000 grant to AI Safety Camp for the next camp, with some format changes. However, in the write-up for that grant round, Habryka says "In April I said I wanted to talk with the organizers before renewing this grant, and I expected to have at least six months between applications from them, but we received another application this round and I ended up not having time for that conversation." Also: "I will not fund another one without spending significantly more time investigating the program."

Other notes: Grantee in the grant document is listed as Johannes Heidecke, but the grant is for the AI Safety Camp. The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). The grant decision was coordinated with Effective Altruism Grants (specifically, Nicole Ross of CEA), which had considered also making a grant to the camp. Effective Altruism Grants ultimately decided against making the grant, and the Long-Term Future Fund made it instead. Nicole Ross, in the evaluation by EA Grants, mentions the same concerns that Habryka does: interpersonal conflict and people being shut out of the AI safety community if they don't get along with the camp organizers.
AI Safety Unconference | 4,500.00 | 75 | 2018-11-29 | AI safety | https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Orpheus Lummis and Vaughn DiMarco are organizing an unconference on AI Alignment on the last day of the NeurIPS conference, with the goal of facilitating networking and research on AI Alignment among a diverse audience of AI researchers with and without safety backgrounds. Based on interaction with the organizers and some participants, the donor feels this project is worth funding. However, the donee is still not sure if the unconference will be held, so the grant is conditional on the donee deciding to proceed. The grant would fully fund the request. Percentage of total donor spend in the corresponding batch of donations: 4.71%.
AI summer school | 21,000.00 | 63 | 2018-11-29 | AI safety | https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Grant to fund the second year of a summer school on AI safety, aiming to familiarize potential researchers with interesting technical problems in the field. Last year’s iteration of this event appears to have gone well, per https://www.lesswrong.com/posts/bXLi3n2jrfqRwoSTH/human-aligned-ai-summer-school-a-summary (GW, IR) and private information available to donor. Donor believes that well-run education efforts of this kind are valuable (where “well-run” refers to the quality of the intellectual content, the participants, and the logistics of the event), and feels confident enough that this particular effort will be well-run. Percentage of total donor spend in the corresponding batch of donations: 21.99%.
Machine Intelligence Research Institute | 40,000.00 | 37 | 2018-11-29 | AI safety | https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and to MIRI's 2018 fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/ saying "According to their fundraiser post, MIRI believes it will be able to find productive uses for additional funding, and gives examples of ways additional funding was used to support their work this year."

Donor reason for selecting the donee: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and says "We believe that this research represents one promising approach to AI alignment research."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor retrospective of the donation: The Long-Term Future Fund would make a similarly sized grant ($50,000) in its next grant round in April 2019, suggesting that it was satisfied with the outcome of the grant

Other notes: Percentage of total donor spend in the corresponding batch of donations: 41.88%.
Ought | 10,000.00 | 71 | 2018-11-29 | AI safety | https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient

Intended use of funds (category): Organizational general support

Intended use of funds: Grantee is a nonprofit aiming to implement AI alignment concepts in real-world applications.

Donor reason for selecting the donee: The grant page says: "We believe that Ought's approach is interesting and worth trying, and that they have a strong team. [...] Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant."
Percentage of total donor spend in the corresponding batch of donations: 10.47%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: The grant page says "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." This suggests that Ought will be considered for future grant rounds

Donor retrospective of the donation: The Long-Term Future Fund would make a $50,000 grant to Ought in the April 2019 grant round, suggesting that this grant would be considered a success
Foretold (Earmark: Ozzie Gooen) | 20,000.00 | 64 | 2018-11-29 | Epistemic institutions/forecasting | https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient

Intended use of funds (category): Organizational general support

Intended use of funds: Ozzie Gooen plans to build an online community of EA forecasters, researchers, and data scientists to predict variables of interest to the EA community. Ozzie proposed using the platform to answer a range of questions, including examples like “How many Google searches will there be for reinforcement learning in 2020?” or “How many plan changes will 80,000 hours cause in 2020?”, and using the results to help EA organizations and individuals prioritize. The grant funds the project's basic setup and initial testing. The community and tool would later be launched under the name Foretold, available at https://www.foretold.io/

Donor reason for selecting the donee: The grant decision was made based on past success by Ozzie Gooen with Guesstimate https://www.getguesstimate.com/ as well as belief both in the broad value of the project and the specifics of the project plan.

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount likely determined by the specifics of the project plan and the scope of this round of funding, namely, the project's basic setup and initial testing.
Percentage of total donor spend in the corresponding batch of donations: 20.94%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round, and also by the donee's desire to start the project

Donor retrospective of the donation: The Long-Term Future Fund would make a followup grant of $70,000 to Foretold in the April 2019 grant round https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations; see also https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) for more detail
Machine Intelligence Research Institute | 488,994.00 | 1 | 2018-08-14 | AI safety | https://funds.effectivealtruism.org/funds/payouts/july-2018-long-term-future-fund-grants | Nick Beckstead

Donation process: The grant from the EA Long-Term Future Fund is part of a final set of grant decisions being made by Nick Beckstead (granting $526,000 from the EA Meta Fund and $917,000 from the EA Long-Term Future Fund) as he transitions out of managing both funds. Due to time constraints, Beckstead primarily relies on investigation of the organization done by the Open Philanthropy Project when making its 2017 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017

Intended use of funds (category): Organizational general support

Intended use of funds: Beckstead writes "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare), due to a sense that (i) their work is otherwise much less funding constrained than it used to be, and (ii) spending like this would better reflect the value of staff time and increase staff satisfaction. However, I also told them that I was open to them using these funds to accomplish this objective indirectly (e.g. through salary increases) or using the funds for another purpose if that seemed better to them."

Donor reason for selecting the donee: The grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 for Beckstead's opinion of the donee.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "The amounts I’m granting out to different organizations are roughly proportional to the number of staff they have, with some skew towards MIRI that reflects greater EA Funds donor interest in the Long-Term Future Fund." Also: "I think a number of these organizations could qualify for the criteria of either the Long-Term Future Fund or the EA Community Fund because of their dual focus on EA and longtermism, which is part of the reason that 80,000 Hours is receiving a grant from each fund."
Percentage of total donor spend in the corresponding batch of donations: 53.32%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of this round of grants, which is in turn determined by the need for Beckstead to grant out the money before handing over management of the fund

Donor retrospective of the donation: Even after fund management moved to a new team, the EA Long-Term Future Fund would continue making grants to MIRI.
Center for Applied Rationality | 174,021.00 | 4 | 2018-08-14 | Epistemic institutions | https://funds.effectivealtruism.org/funds/payouts/july-2018-long-term-future-fund-grants | Nick Beckstead

Donation process: The grant from the EA Long-Term Future Fund is part of a final set of grant decisions being made by Nick Beckstead (granting $526,000 from the EA Meta Fund and $917,000 from the EA Long-Term Future Fund) as he transitions out of managing both funds. Due to time constraints, Beckstead primarily relies on investigation of the organization done by the Open Philanthropy Project when making its 2018 grant https://www.openphilanthropy.org/giving/grants/center-applied-rationality-general-support-2018

Intended use of funds (category): Organizational general support

Intended use of funds: Beckstead writes "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare), due to a sense that (i) their work is otherwise much less funding constrained than it used to be, and (ii) spending like this would better reflect the value of staff time and increase staff satisfaction. However, I also told them that I was open to them using these funds to accomplish this objective indirectly (e.g. through salary increases) or using the funds for another purpose if that seemed better to them."

Donor reason for selecting the donee: The grant page references https://www.openphilanthropy.org/giving/grants/center-applied-rationality-general-support-2018 for Beckstead's opinion of the donee.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "The amounts I’m granting out to different organizations are roughly proportional to the number of staff they have, with some skew towards MIRI that reflects greater EA Funds donor interest in the Long-Term Future Fund." Also: "I think a number of these organizations could qualify for the criteria of either the Long-Term Future Fund or the EA Community Fund because of their dual focus on EA and longtermism, which is part of the reason that 80,000 Hours is receiving a grant from each fund."
Percentage of total donor spend in the corresponding batch of donations: 18.98%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of this round of grants, which is in turn determined by the need for Beckstead to grant out the money before handing over management of the fund

Donor retrospective of the donation: Even after fund management moved to a new team, the EA Long-Term Future Fund would continue making grants to CFAR.
80,000 Hours | 91,450.00 | 14 | 2018-08-14 | Effective altruism/movement growth/career counseling | https://funds.effectivealtruism.org/funds/payouts/july-2018-long-term-future-fund-grants | Nick Beckstead

Donation process: The grant from the EA Long-Term Future Fund is part of a final set of grant decisions being made by Nick Beckstead (granting $526,000 from the EA Meta Fund and $917,000 from the EA Long-Term Future Fund) as he transitions out of managing both funds. Due to time constraints, Beckstead primarily relies on investigation of the organization done by the Open Philanthropy Project when making its 2017 grant https://www.openphilanthropy.org/giving/grants/80000-hours-general-support and 2018 renewal https://www.openphilanthropy.org/giving/grants/80000-hours-general-support-2018

Intended use of funds (category): Organizational general support

Intended use of funds: Beckstead writes "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare), due to a sense that (i) their work is otherwise much less funding constrained than it used to be, and (ii) spending like this would better reflect the value of staff time and increase staff satisfaction. However, I also told them that I was open to them using these funds to accomplish this objective indirectly (e.g. through salary increases) or using the funds for another purpose if that seemed better to them."

Donor reason for selecting the donee: The grant page references https://www.openphilanthropy.org/giving/grants/80000-hours-general-support-2018 for Beckstead's opinion of the donee. This grant page is short, and in turn links to https://www.openphilanthropy.org/giving/grants/80000-hours-general-support which has a detailed Case for the grant section https://www.openphilanthropy.org/giving/grants/80000-hours-general-support#Case_for_the_grant that praises 80,000 Hours' track record in terms of impact-adjusted significant plan changes (IASPCs)

Donor reason for donating that amount (rather than a bigger or smaller amount): Beckstead is also recommending funding from the EA Meta Fund of $75,818 for 80,000 Hours. The grant page says "The amounts I’m granting out to different organizations are roughly proportional to the number of staff they have, with some skew towards MIRI that reflects greater EA Funds donor interest in the Long-Term Future Fund." Also: "I think a number of these organizations could qualify for the criteria of either the Long-Term Future Fund or the EA Community Fund because of their dual focus on EA and longtermism, which is part of the reason that 80,000 Hours is receiving a grant from each fund."
Percentage of total donor spend in the corresponding batch of donations: 9.97%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of this round of grants, which is in turn determined by the need for Beckstead to grant out the money before handing over management of the fund

Donor retrospective of the donation: Even after fund management moved to a new team, the EA Meta Fund would continue making grants to 80,000 Hours. In fact, 80,000 Hours would receive grant money in each of the three subsequent grant rounds. However, the EA Long-Term Future Fund would make no further grants to 80,000 Hours, suggesting that the new management team did not continue to endorse 80,000 Hours as a Long-Term Future Fund grantee
Centre for Effective Altruism | 162,537.00 | 6 | 2018-08-14 | Cause prioritization | https://funds.effectivealtruism.org/funds/payouts/july-2018-long-term-future-fund-grants | Nick Beckstead

Donation process: The grant from the EA Long-Term Future Fund is part of a final set of grant decisions being made by Nick Beckstead (granting $526,000 from the EA Meta Fund and $917,000 from the EA Long-Term Future Fund) as he transitions out of managing both funds. Due to time constraints, Beckstead primarily relies on investigation of the organization done by the Open Philanthropy Project when making its 2018 grant https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2018

Intended use of funds (category): Direct project expenses

Intended use of funds: Beckstead writes "CEA will use the LTF funding to support a new project whose objective is to expand global priorities research in academia, especially related to issues around longtermism."

Donor reason for selecting the donee: The grant page references https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2018 for Beckstead's opinion of the donee. This grant page is short, and in turn links to https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support for a more in-depth review of CEA

Donor reason for donating that amount (rather than a bigger or smaller amount): Beckstead is also recommending funding from the EA Meta Fund of $56,061 for CEA, which is for organizational general support (whereas the LTF grant is to support a new project on longtermism). Beckstead writes: "The amounts I’m granting out to different organizations are roughly proportional to the number of staff they have, with some skew towards MIRI that reflects greater EA Funds donor interest in the Long-term Future Fund."
Percentage of total donor spend in the corresponding batch of donations: 17.72%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of this round of grants, which is in turn determined by the need for Beckstead to grant out the money before handing over management of the fund

Donor retrospective of the donation: After the transition to the new management team, the EA Long-Term Future Fund would make no further grants to CEA, suggesting that the new management team did not continue to endorse CEA as a Long-Term Future Fund grantee
Berkeley Existential Risk Initiative | 14,838.02 | 69 | 2017-03-20 | AI safety/other global catastrophic risks | https://funds.effectivealtruism.org/funds/payouts/march-2017-berkeley-existential-risk-initiative-beri | Nick Beckstead

Donation process: The grant page says that Nick Beckstead, the fund manager, learned that Andrew Critch was starting up BERI and needed $50,000. Beckstead determined that this would be the best use of the money in the Long-Term Future Fund.

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "It is a new initiative providing various forms of support to researchers working on existential risk issues (administrative, expert consultations, technical support). It works as a non-profit entity, independent of any university, so that it can help multiple organizations and to operate more swiftly than would be possible within a university context."

Donor reason for selecting the donee: Nick Beckstead gives these reasons on the grant page: the basic idea makes sense to him; he is confident in Critch's ability to make it happen; supporting people to try out reasonable ideas and learn from how they unfold seems valuable; and he sees a natural role for himself as a "first funder" for such opportunities, with confidence that other competing funders would have good counterfactual uses of their money.

Donor reason for donating that amount (rather than a bigger or smaller amount): The requested amount was $50,000, and at the time of grant, the fund only had $14,838.02. So, all the fund money was granted. Beckstead donated the remainder of the funding via the EA Giving Group and a personal donor-advised fund.
Percentage of total donor spend in the corresponding batch of donations: 100.00%

Donor reason for donating at this time (rather than earlier or later): The timing of BERI starting up and the launch of the Long-Term Future Fund closely matched, leading to this grant happening when it did.

Donor retrospective of the donation: BERI would become successful and get considerable funding from Jaan Tallinn in the coming months, validating the grant. The Long-Term Future Fund would not make any further grants to BERI.

Similarity to other donors

Sorry, we couldn't find any similar donors.