Effective Altruism Funds donations made (filtered to cause areas matching AI safety)

This is an online portal with information on donations that were announced publicly (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donor information

Item | Value
Country | United Kingdom
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | Centre for Effective Altruism
Website | https://app.effectivealtruism.org/
Donations URL | https://app.effectivealtruism.org/
Regularity with which donor updates donations data | irregular
Regularity with which Donations List Website updates donations data (after donor update) | irregular
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)

Brief history: The funds are a program of the Centre for Effective Altruism (CEA). Their creation was inspired by the success of the EA Giving Group donor-advised fund run by Nick Beckstead, and also by the donor lottery run in December 2016 by Paul Christiano and Carl Shulman (see http://effective-altruism.com/ea/14d/donor_lotteries_demonstration_and_faq/ for more). EA Funds were introduced on 2017-02-09 (http://effective-altruism.com/ea/174/introducing_the_ea_funds/) and launched on 2017-02-28 (http://effective-altruism.com/ea/17v/ea_funds_beta_launch/). The first round of allocations was announced on 2017-04-20 (http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/). The fund allocation information appears to have next been updated in November 2017; see https://www.facebook.com/groups/effective.altruists/permalink/1606722932717391/ for more

Brief notes on broad donor philosophy and major focus areas: There are four EA Funds, each with its own focus area and own fund managers: Global Health and Development (Elie Hassenfeld of GiveWell), Animal Welfare (Lewis Bollard of the Open Philanthropy Project as Chair, Toni Adleberg of Animal Charity Evaluators, Natalie Cargill of Effective Giving, and Jamie Spurgeon of Animal Charity Evaluators), Long Term Future (Matt Fallshaw as Chair, Oliver Habryka, Helen Toner, Matt Wage, and Alex Zhu; Nick Beckstead and Jonas Vollmer serve as advisors), and EA Community (also known as EA Meta) (Luke Ding as Chair, Alex Foster, Tara Mac Aulay, Denise Melchin, and Matt Wage; Nick Beckstead serves as advisor)

Notes on grant decision logistics: Grants are decided separately within each of the four funds, by the managers of that fund. Allocation of the money may take about a month after the grant decision. Fund managers generally allocate multiple grants together, drawing on the pool of money collected over the preceding few months. For all funds except the Global Health and Development Fund, the target months for making grant decisions are November, February, and June. For the Global Health and Development Fund, the target months are December, March, and July. Actual grant decision months may be one or two months later than the target months

Notes on grant publication logistics: Grant details are published on the EA Funds website and linked from the page for the specific Fund. Grants allocated together are generally published together on a single page. Grants from the Global Health and Development Fund (managed by Elie Hassenfeld of GiveWell) are usually of two types: (1) Grants that are also GiveWell Incubation Grants; these are cross-posted to the GiveWell Incubation Grants page on GiveWell's site, but are listed on the Donations List Website with Effective Altruism Funds as the sole donor. (2) Grants that are decided along with, and similarly to, GiveWell discretionary regranting

Notes on grant financing: Finances for each of the funds are maintained separately: individual donors can donate to a specific fund, or to all funds in a specific proportion specified by them. Only money explicitly donated to a fund can be granted out from that fund. Other money of the Centre for Effective Altruism (CEA) is not granted out through the Funds

This entity is also a donee.

Donor donation statistics

Cause area Count Median Mean Minimum 10th percentile 20th percentile 30th percentile 40th percentile 50th percentile 60th percentile 70th percentile 80th percentile 90th percentile Maximum
Overall 20 30,000 50,412 4,500 10,000 14,838 20,000 25,000 30,000 30,000 30,000 40,000 50,000 488,994
AI safety 20 30,000 50,412 4,500 10,000 14,838 20,000 25,000 30,000 30,000 30,000 40,000 50,000 488,994
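The summary row above can be recomputed from the individual donation amounts in the full donation list further down the page. The sketch below is purely illustrative (the portal's own code lives in the GitHub repository mentioned at the top and may compute things differently); it assumes a nearest-rank percentile convention, which appears consistent with the published figures.

```python
# Illustrative sketch only (not the portal's actual code): recompute the
# donor donation statistics row from a list of individual donation amounts,
# assuming a nearest-rank percentile convention.
import math

def nearest_rank_percentile(sorted_amounts, p):
    """Value at rank ceil(p/100 * n) in the sorted list (nearest-rank method)."""
    n = len(sorted_amounts)
    rank = max(1, math.ceil(p * n / 100))
    return sorted_amounts[rank - 1]

def summary_row(amounts):
    xs = sorted(amounts)
    n = len(xs)
    median = xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2
    row = {"Count": n, "Median": median, "Mean": sum(xs) / n,
           "Minimum": xs[0], "Maximum": xs[-1]}
    for p in range(10, 100, 10):
        row[f"{p}th percentile"] = nearest_rank_percentile(xs, p)
    return row

# Feeding in the 20 AI safety donation amounts listed below should give
# Count 20, Median 30,000, Mean ~50,412, Minimum 4,500, Maximum 488,994.
```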

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the classification used by the donor in all cases.

Cause area Number of donations Number of donees Total 2019 2018 2017
AI safety 20 17 1,008,232.02 428,900.00 564,494.00 14,838.02
Total 20 17 1,008,232.02 428,900.00 564,494.00 14,838.02

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area Number of donations Number of donees Total 2019 2018 2017
AI safety 10 7 747,394.00 182,900.00 564,494.00 0.00
AI safety/deconfusion research 3 3 90,000.00 90,000.00 0.00 0.00
AI safety/forecasting 3 3 77,000.00 77,000.00 0.00 0.00
AI safety/content creation/video 1 1 39,000.00 39,000.00 0.00 0.00
AI safety/agent foundations 1 1 30,000.00 30,000.00 0.00 0.00
AI safety/other global catastrophic risks 1 1 14,838.02 0.00 0.00 14,838.02
AI safety/upskilling 1 1 10,000.00 10,000.00 0.00 0.00
Classified total 20 17 1,008,232.02 428,900.00 564,494.00 14,838.02
Unclassified total 0 0 0.00 0.00 0.00 0.00
Total 20 17 1,008,232.02 428,900.00 564,494.00 14,838.02
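As a rough illustration of how a pivot like this can be derived from the flat donation list at the bottom of the page (a sketch with hypothetical field names, not the portal's actual schema or code), donations can be grouped by subcause area and year, with donations lacking a subcause classification tallied separately as unclassified:

```python
# Illustrative sketch (hypothetical field names, not the portal's actual code):
# total a flat list of donations by (subcause area, year).
from collections import defaultdict

donations = [
    # (donee, subcause_area, year, amount_usd) -- a few rows from the list below
    ("Machine Intelligence Research Institute", "AI safety", 2019, 50000.00),
    ("Tegan McCaslin", "AI safety/forecasting", 2019, 30000.00),
    ("Berkeley Existential Risk Initiative",
     "AI safety/other global catastrophic risks", 2017, 14838.02),
]

totals = defaultdict(float)   # (subcause area, year) -> total amount
donees = defaultdict(set)     # subcause area -> distinct donees
for donee, subcause, year, amount in donations:
    key = subcause if subcause else "Unclassified"
    totals[(key, year)] += amount
    donees[key].add(donee)

for (subcause, year), amount in sorted(totals.items()):
    print(f"{subcause}\t{year}\t{amount:,.2f}\t({len(donees[subcause])} donee(s))")
```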

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee Cause area Metadata Total 2019 2018 2017
Machine Intelligence Research Institute AI safety FB Tw WP Site CN GS TW 578,994.00 50,000.00 528,994.00 0.00
Ought AI safety Site 60,000.00 50,000.00 10,000.00 0.00
Shahar Avin 40,000.00 40,000.00 0.00 0.00
Robert Miles 39,000.00 39,000.00 0.00 0.00
Tegan McCaslin 30,000.00 30,000.00 0.00 0.00
Alex Turner 30,000.00 30,000.00 0.00 0.00
Anand Srinivasan 30,000.00 30,000.00 0.00 0.00
David Girardo 30,000.00 30,000.00 0.00 0.00
Nikhil Kunapuli 30,000.00 30,000.00 0.00 0.00
Jacob Lagerros 27,000.00 27,000.00 0.00 0.00
AI Safety Camp 25,000.00 25,000.00 0.00 0.00
AI summer school 21,000.00 0.00 21,000.00 0.00
Connor Flexman 20,000.00 20,000.00 0.00 0.00
Effective Altruism Zürich 17,900.00 17,900.00 0.00 0.00
Berkeley Existential Risk Initiative AI safety/other global catastrophic risks Site TW 14,838.02 0.00 0.00 14,838.02
Orpheus Lummis 10,000.00 10,000.00 0.00 0.00
AI Safety Unconference 4,500.00 0.00 4,500.00 0.00
Total -- -- 1,008,232.02 428,900.00 564,494.00 14,838.02

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

If you hover over a cell for a given influencer and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Influencer Number of donations Number of donees Total 2019 2018 2017
Nick Beckstead 2 2 503,832.02 0.00 488,994.00 14,838.02
Oliver Habryka|Alex Zhu|Matt Wage|Helen Toner|Matt Fallshaw 8 8 231,000.00 231,000.00 0.00 0.00
Alex Zhu|Matt Wage|Helen Toner|Matt Fallshaw|Oliver Habryka 3 3 90,000.00 90,000.00 0.00 0.00
Matt Wage|Helen Toner|Matt Fallshaw|Alex Zhu|Oliver Habryka 2 2 90,000.00 90,000.00 0.00 0.00
Alex Zhu|Helen Toner|Matt Fallshaw|Matt Wage|Oliver Habryka 4 4 75,500.00 0.00 75,500.00 0.00
Helen Toner|Matt Wage|Matt Fallshaw|Alex Zhu|Oliver Habryka 1 1 17,900.00 17,900.00 0.00 0.00
Classified total 20 17 1,008,232.02 428,900.00 564,494.00 14,838.02
Unclassified total 0 0 0.00 0.00 0.00 0.00
Total 20 17 1,008,232.02 428,900.00 564,494.00 14,838.02

Graph of spending by influencer and year (incremental, not cumulative)


Graph of spending by influencer and year (cumulative)


Donation amounts by disclosures and year

Sorry, we couldn't find any disclosures information.

Donation amounts by country and year

Sorry, we couldn't find any country information.

Full list of documents in reverse chronological order (25 documents)

Title (URL linked) Publication date Author Publisher Affected donors Affected donees Document scope Cause area Notes
Long Term Future Fund and EA Meta Fund applications open until June 28th2019-06-10Oliver Habryka Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Request for proposalsAI safety|Global catastrophic risks|Effective altruismThe blog post announces that two of the funds under Effective Altruism Funds, namely the Long Term Future Fund and the EA Meta Fund, are open for rolling applications. The application window for the current round ends on June 28. Response time windows will be 3-4 months (i.e., after the end of the corresponding application cycle). In rare cases, grants may be made out-of-cycle. Grant amounts must be at least $10,000, and will generally be under $100,000. The blog post gives guidelines on the kinds of applications that each fund will accept
80,000 Hours Annual Review – December 20182019-05-07Benjamin Todd 80,000 HoursOpen Philanthropy Project Berkeley Existential Risk Initiative Effective Altruism Funds 80,000 Hours Donee periodic updateEffective altruism/movement growth/career counselingThis blog post is the annual self-review by 80,000 Hours, originally written in December 2018. Publication was deferred because 80,000 Hours was waiting to hear back on the status of some large grants (in particular, one from the Open Philanthropy Project), but most of the content is still from the December 2018 draft. The post goes into detail about 80,000 Hours' progress in 2018, impact and plan changes, and future expansion plans. Funding gaps are discussed (the funding gap for 2019 is $400,000, and further money will be saved for 2020 and 2021). Grants from the Open Philanthropy Project, BERI, and the Effective Altruism Funds (EA Meta Fund) are mentioned
Thoughts on the EA Hotel2019-04-25Oliver Habryka Effective Altruism ForumEffective Altruism Funds EA Hotel Evaluator review of doneeEffective altruism/housingWith permission from Greg Colbourn of the EA Hotel, Habryka publicly posts the feedback he sent to the EA Hotel, which was rejected from the April 2019 funding round by the Long Term Future Fund. Habryka first lists three reasons he is excited about the Hotel: (a) Providing a safety net, (b) Acting on historical interest, (c) Building high-dedication cultures. He articulates three concrete models of concern: (1) Initial overeagerness to publicize the EA Hotel (a point he now believes is mostly false, based on Greg Colbourn's response), (2) Significant chance of the EA Hotel culture becoming actively harmful for residents, (3) No good candidate to take charge of long-term logistics of running the hotel. Habryka concludes by saying he thinks all his concerns can be overcome. At the moment, he thinks the hotel should be funded for the next year, but is unsure whether they should be given money to buy the hotel next door. The comment replies include one by Greg Colbourn, giving his backstory on the media attention (re: (1)) and discussing the situation with (2) and (3). There are also other replies, including one from casebash, who stayed at the hotel for a significant time
This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year2019-04-17Evan Gaensbauer Effective Altruism ForumEffective Altruism Funds Third-party coverage of donor strategyGlobal catastrophic risks/AI safety/far futureEvan Gaensbauer reviews the grantmaking of the Long Term Future Fund since the management structure change in 2018 (with Nick Beckstead leaving). He uses the jargon "counterfactually unique" for grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded. Based on that measure, he calculates that 20 of 23 grant recommendations (87%), worth $673,150 of $923,150 (~73% of the money to be disbursed), are counterfactually unique. After excluding the grants that people have expressed serious concerns about in the comments, he says: "16 of 23, or 69.5%, of grants, worth $535,150 of $923,150, or ~58%, of the money to be disbursed, are counterfactually unique and fit into a more conservative, risk-averse approach that would have ruled out more uncertain or controversial successful grant applicants." He calls these numbers "an extremely significant improvement in the quality and quantity of unique opportunities for grantmaking the Long-Term Future Fund has made since a year ago" and considers the grants and the grant report an overall success. In a reply comment, Milan Griffes thanks him for the comment, which he calls an "audit"
You received almost 100 applications as far as I'm aware, but were able to fund only 23 of them. Some other projects were promising according to you, but you didn't have time to vet them all. What other reasons did you have for rejecting applications?2019-04-08Risto Uuk Effective Altruism ForumEffective Altruism Funds Third-party coverage of donor strategyThe question by Risto Uuk is answered by Oliver Habryka giving the typical factors that might cause the Long Term Future Fund to reject an application. Of the factors listed, one that generates a lot of discussion is when the fund managers have no way of assessing the applicant without investing significant amounts of time, beyond what they have available. This is considered concerning because it creates a bias toward grantees who are better networked and known to the funders. The need for grant amounts that are big enough to justify the overhead costs leads to further discussion of the overhead costs of the marginal and average grant. Conversation participants include Oliver Habryka, Peter Hurford, Ben Kuhn, Michelle Hutchinson, Jonas Vollmer, John Maxwell IV, Jess Whittlestone, Milan Griffes, Evan Gaensbauer, and others
Major Donation: Long Term Future Fund Application Extended 1 Week2019-02-16Oliver Habryka Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Request for proposalsAI safety|Global catastrophic risksThe blog post announces that the EA Long-Term Future Fund has received a large donation, which doubles the amount of money available for granting to ~$1.2 million. It extends the deadline for applications at https://docs.google.com/forms/d/e/1FAIpQLSeDTbCDbnIN11vcgHM3DKq6M0cZ3itAy5GIPK17uvTXcz8ZFA/viewform?usp=sf_link by 1 week, to 2019-02-24 midnight PST. The application form was previously announced at https://forum.effectivealtruism.org/posts/oFeGLaJ5bZBBRbjC9/ea-funds-long-term-future-fund-is-open-to-applications-until and was supposed to be open until 2019-02-07 for the February 2019 round of grants. Cross-posted to LessWrong at https://www.lesswrong.com/posts/ZKsSuxHWNGiXJBJ9Z/major-donation-long-term-future-fund-application-extended-1
My 2018 donations2019-01-20Ben Kuhn Ben Kuhn Effective Altruism Funds Ben Kuhn donor-advised fund GiveWell GiveWell top charities Ben Kuhn donor-advised fund Effective Altruism Funds Periodic donation list documentationGlobal health and developmentKuhn describes his decision to allocate his donation amount ($70,000, calculated as 50% of his income for the year) between GiveWell, GiveWell top charities, and his own donor-advised fund managed by Fidelity. Kuhn also discusses how he has been out of the loop of the latest developments in effective altruism, which is part of the reason his grants for this year are so boring. However, he is happy with recent management changes and increased grantmaking activity from the Effective Altruism Funds, and they are currently his default choice of where to allocate money from his donor-advised fund in 2019, if he does not find a better donation target. Kuhn also discusses some logistical aspects of his donation, such as: need to make some of his 2018 donations in 2019 and use of the donor-advised fund to channel his donation to GiveWell
EA Funds: Long-Term Future fund is open to applications until Feb. 7th2019-01-17Oliver Habryka Effective Altruism ForumEffective Altruism Funds Request for proposalsAI safety|Global catastrophic risksCross-posted to LessWrong at https://www.lesswrong.com/posts/dvGE8JSeFHtmHC6Gb/ea-funds-long-term-future-fund-is-open-to-applications-until The post seeks proposals for the Long-Term Future Fund. Proposals must be submitted by 2019-02-07 at https://docs.google.com/forms/d/e/1FAIpQLSeDTbCDbnIN11vcgHM3DKq6M0cZ3itAy5GIPK17uvTXcz8ZFA/viewform?usp=sf_link to be considered for the round of grants being announced mid-February. From the application, excerpted in the post: "We are particularly interested in small teams and individuals that are trying to get projects off the ground, or that need less money than existing grant-making institutions are likely to give out (i.e. less than ~$100k, but more than $10k). Here are a few examples of project types that we're open to funding an individual or group for (note that this list is not exhaustive)"
EA Meta Fund: we are open to applications2019-01-05Denise Melchin Effective Altruism ForumEffective Altruism Funds Request for proposalsEffective altruismThe post announces the existence of a form at https://docs.google.com/forms/d/e/1FAIpQLSeID5kjD9zsvlwgqB3hlX54EINg6_pY6sYl4hm7s-bYuDGiwA/viewform through which one can apply for consideration for receiving a grant from the EA Meta Fund. Submissions made by midnight GMT on January 20 will be considered for the grant distribution to be announced in mid-February, but applications made after this date will be considered for future rounds
EA Meta Fund AMA: 20th Dec 20182018-12-19Alex Foster Denise Melchin Matt Wage Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Donee AMAEffective altruismThe post is an Ask Me Anything (AMA) for the Effective Altruism Meta Fund. The questions and answers are in the post comments. Questions are asked by a number of people including alexherwix, Luke Muehlhauser, Tee Barnett, and Peter Hurford. Answers are provided by Denise Melchin, Matt Wage, and Alex Foster, three of the five people managing the fund. The other two, Luke Ding and Tara MacAulay, do not post any comment replies, but are referenced in some of the replies. The questions include how the meta fund sees its role, how much time they expect to spend allocating grants, what sort of criteria they use for evaluating opportunities, and what data inform their decisions
Animal Welfare Fund AMA2018-12-19Jamie Spurgeon Lewis Bollard Natalie Cargill Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Donee AMAAnimal welfareThe post is an Ask Me Anything (AMA) for the Animal Welfare Fund. The questions and answers are in the post comments. Questions are asked by a number of people including Tee Barnett, Peter Hurford, Halstead, Josh You, and Kevin Watkinson. Answers are provided by Lewis Bollard, Jamie Spurgeon, and Natalie Cargill, three of the four managers of the fund. The fourth manager, Toni Adleberg, does not participate directly, but is referenced in the other answers. Questions cover the risks of lack of diversity due to dominance by the Open Philanthropy Project and Animal Charity Evaluators, plans for learning from the seemingly "hits-based giving" approach, the relation with the Effective Animal Advocacy Fund managed by ACE, the amount of time spent evaluating grants, criteria for evaluating grants, and research that would help the team.
Long-Term Future Fund AMA2018-12-18Helen Toner Oliver Habryka Alex Zhu Matt Fallshaw Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Donee AMAAI safety|Global catastrophic risksThe post is an Ask Me Anything (AMA) for the Long-Term Future Fund. The questions and answers are in the post comments. Questions are asked by a number of people including Luke Muehlhauser, Josh You, Peter Hurford, Alex Foster, and Robert Jones. Fund managers Oliver Habryka, Matt Fallshaw, Helen Toner, and Alex Zhu respond in the comments. Fund manager Matt Wage does not appear to have participated. Questions cover the amount of time spent evaluating grants, the evaluation criteria, the methods of soliciting grants, and research that would help the team
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)2018-11-20Oliver Habryka Effective Altruism ForumEffective Altruism Funds Request for proposalsAI safety|Global catastrophic risksThe post seeks proposals for the CEA Long-Term Future Fund. Proposals must be submitted by 2018-11-24 at https://docs.google.com/forms/d/e/1FAIpQLSf46ZTOIlv6puMxkEGm6G1FADe5w5fCO3ro-RK6xFJWt7SfaQ/viewform in order to be considered for the round of grants to be announced by the end of November 2018
Announcing new EA Funds management teams2018-10-27Marek Duda Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Broad donor strategyAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismThe post announces the transition of the Effective Altruism Funds management to teams, with a chair, team members, and advisors. The EA Community Fund is renamed the EA Meta Fund, and has chair Luke Ding and team Denise Melchin, Matt Wage, Alex Foster, and Tara MacAulay, with advisor Nick Beckstead. The long-term future fund has chair Matt Fallshaw, and team Helen Toner, Oliver Habryka, Matt Wage, and Alex Zhu, with advisors Nick Beckstead and Jonas Vollmer. The animal welfare fund has chair Lewis Bollard (same as before) and team Jamie Spurgeon, Natalie Cargill, and Toni Adleberg. The global development fund continues to be solely managed by Elie Hassenfeld. The granting schedule will be thrice a year: November, February, and June for all funds except the Global Development Fund, which will be in December, March, and July.
EA Funds - An update from CEA2018-08-07Marek Duda Centre for Effective AltruismEffective Altruism Funds Effective Altruism Funds Broad donor strategyAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismMarek Duda gives an update on work on the EA Funds donation platform, the departure of Nick Beckstead from managing the EA Community and Long-Term Future Funds, and the experimental creation of "Junior" Funds
The EA Community and Long-Term Future Funds Lack Transparency and Accountability2018-07-23Evan Gaensbauer Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Evaluator review of doneeAnimal welfare|global health|AI safety|global catastrophic risks|effective altruismEvan Gaensbauer builds on past criticism of the EA Funds by Henry Stanley at http://effective-altruism.com/ea/1k9/ea_funds_hands_out_money_very_infrequently_should/ and http://effective-altruism.com/ea/1mr/how_to_improve_ea_funds/ Gaensbauer notes that the Global Health and Development Fund and the Animal Welfare Fund have done a better job of paying out and announcing payouts. However, the Long-Term Future Fund and EA Community Fund, both managed by Nick Beckstead, have announced only one payout, and have missed their self-imposed date for announcing the remaining payouts. Some comments by Marek Duda of the Centre for Effective Altruism (the parent of EA Funds) are also discussed
Update on Partnerships with External Donors2018-05-16Holden Karnofsky Open Philanthropy ProjectOpen Philanthropy Project Future Justice Fund Accountable Justice Action Fund Effective Altruism Funds Accountable Justice Action Fund Effective Altruism Funds Miscellaneous commentaryCriminal justice reform,Animal welfareThe Open Philanthropy Project describes how it works with donors other than Good Ventures (the foundation under Dustin Moskovitz and Cari Tuna that accounts for almost all Open Phil grantmaking). The blog post reiterates that the long-term goal is to inform many different funders, but that is not a short-term priority because the Open Philanthropy Project is not moving enough money to even achieve the total spend that Good Ventures is willing to go up to. The post mentions that Chloe Cockburn, the program officer for criminal justice reform, is working with other funders in criminal justice reform, and they have created a separate vehicle, the Accountable Justice Action Fund, to pool resources. Also, Mike and Kaitlyn Krieger, who previously worked with the Open Philanthropy Project, now have their own criminal justice-focused Future Justice Fund, and are getting help from Cockburn to allocate money from the fund. For causes outside of criminal justice reform, the role of Effective Altruism Funds (whose grantmaking is managed by Open Philanthropy Project staff members) is mentioned. Also, Lewis Bollard is said to have moved ~10% as much money through advice to other donors as he has moved through the Open Philanthropy Project
How to improve EA Funds2018-04-04Henry Stanley Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Evaluator review of doneeAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismHenry Stanley echoes thoughts expressed in his previous post http://effective-altruism.com/ea/1k9/ea_funds_hands_out_money_very_infrequently_should/ and argues for regular disbursement, holding funds in interest-bearing assets, and more clarity about fund manager bandwidth. Comments also discuss Effective Altruism Grants
Where, why and how I donated in 20172018-02-01Ben Kuhn Ben Kuhn Open Philanthropy Project Effective Altruism Funds Effective Altruism Grants GiveWell GiveWell top charities EA Giving Group Effective Altruism Funds Periodic donation list documentationGlobal health and developmentKuhn describes his decision to allocate his donation amount ($60,000, calculated as 50% of his income for the year) between GiveWell, GiveWell top charities, and his own donor-advised fund managed by Fidelity. Kuhn also discusses the Open Philanthropy Project, EA Funds, and EA Grants, and the EA Giving Group he donated to the previous year
EA Funds hands out money very infrequently - should we be worried?2018-01-31Henry Stanley Effective Altruism ForumEffective Altruism Funds Effective Altruism Funds Miscellaneous commentaryAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismHenry Stanley expresses concern that the Effective Altruism Funds hands out money very infrequently. Commenters include Peter Hurford (who suggests a percentage-based approach), Elie Hassenfeld, the manager of the global health and development fund, and Evan Gaensbauer, a person well-connected in effective altruist social circles
What is the status of EA funds? They seem pretty dormant2017-12-10Ben West Effective Altruism Facebook groupEffective Altruism Funds Effective Altruism Funds Miscellaneous commentaryAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismBen West, considering whether to donate to the Effective Altruism Funds for his end-of-year donation, asks whether the Funds are dormant, since no donations from the funds have been announced since April. In the comments, Marek Duda of the Centre for Effective Altruism reports that the Funds pages have been updated to include some recent donations, and West updates his post to note this
Discussion: Adding New Funds to EA Funds2017-06-01Kerry Vaughan Centre for Effective AltruismEffective Altruism Funds Broad donor strategyAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismKerry Vaughan of Effective Altruism Funds discusses the alternatives being considered regarding expanding the number of funds, and asks readers for opinions
Update on Effective Altruism Funds2017-04-20Kerry Vaughan Centre for Effective AltruismEffective Altruism Funds Effective Altruism Funds Periodic donation list documentationAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismKerry Vaughan provides a progress report on the beta launch of EA Funds, and says it will go on beyond beta. The post includes information on reception of EA Funds so far, money donated to the funds, and fund allocations for the money donated so far
EA Funds Beta Launch2017-02-28Tara MacAulay Centre for Effective AltruismEffective Altruism Funds Effective Altruism Funds LaunchAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismTara MacAulay of the Centre for Effective Altruism (CEA), the parent of Effective Altruism Funds, describes the beta launch of the project. CEA will revisit within three months to decide whether to make the EA Funds permanent
Introducing the EA Funds2017-02-09William MacAskill Centre for Effective AltruismEffective Altruism Funds Effective Altruism Funds LaunchAnimal welfare|Global health|AI safety|Global catastrophic risks|Effective altruismWilliam MacAskill of the Centre for Effective Altruism (CEA) proposes EA Funds, inspired by the Shulman/Christiano donor lottery from 2016-12, while also incorporating elements of the EA Giving Group run by Nick Beckstead

Full list of donations in reverse chronological order (20 donations)

Donee | Amount (current USD) | Amount rank (out of 20) | Donation date | Cause area | URL | Influencer | Notes
Effective Altruism Zürich (Earmark: Alex Lintz) | 17,900.00 | 16 | 2019-04-07 | AI safety | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Helen Toner, Matt Wage, Matt Fallshaw, Alex Zhu, Oliver Habryka | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: A two-day workshop by Alex Lintz and collaborators from EA Zürich for effective altruists interested in AI governance careers, with the goals of giving participants background on the space, offering career advice, and building community.

Donor reason for selecting the donee: Donor writes: "We agree with their assessment that this space is immature and hard to enter, and believe their suggested plan for the workshop looks like a promising way to help participants orient to careers in AI governance."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 1.93%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed
Intended funding timeframe in months: 1

Other notes: The grant reasoning is written up by Helen Toner and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions but the comments on the post do not discuss this specific grant.
Shahar Avin | 40,000.00 | 4 | 2019-04-07 | AI safety | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Matt Wage, Helen Toner, Matt Fallshaw, Alex Zhu, Oliver Habryka | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: Hiring an academic research assistant and other miscellaneous research expenses, for scaling up scenario role-play for AI strategy research and training.

Donor reason for selecting the donee: Donor writes: "We think positively of Shahar’s past work (for example this report), and multiple people we trust recommended that we fund him." The linked report is https://maliciousaireport.com/

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 4.33%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions but the comments on the post do not discuss this specific grant.
Ought | 50,000.00 | 2 | 2019-04-07 | AI safety | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Matt Wage, Helen Toner, Matt Fallshaw, Alex Zhu, Oliver Habryka | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organization financial buffer

Intended use of funds: No specific information is shared on how the funds will be used at the margin, but the general description gives an idea: "Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications"

Donor reason for selecting the donee: Donor is explicitly interested in diversifying the funder base for the donee, which currently receives almost all its funding from only two sources and is trying to change that. Otherwise, the reason is the same as for the previous round of funding https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi namely: "We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. [...] Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): In write-up for previous grant at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi of $10,000, donor says: "Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant." The amount this time is bigger ($50,000) but the general principle likely continues to apply
Percentage of total donor spend in the corresponding batch of donations: 5.42%
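(As a worked check, this appears consistent with the ~$923,150 total for the April 2019 grant round mentioned in Evan Gaensbauer's review above: 50,000 / 923,150 ≈ 5.42%.)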

Donor reason for donating at this time (rather than earlier or later): In the previous grant round, donor had said "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." Thus, it makes sense to donate again in this round

Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions but the comments on the post do not discuss this specific grant.
Nikhil Kunapuli | 30,000.00 | 7 | 2019-04-07 | AI safety/deconfusion research | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw, Oliver Habryka | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grantee is doing independent deconfusion research for AI safety. His approach is to develop better foundational understandings of various concepts in AI safety, like safe exploration and robustness to distributional shift, by exploring these concepts in complex systems science and theoretical biology, domains outside of machine learning for which these concepts are also applicable.

Donor reason for selecting the donee: Fund manager Alex Zhu says: "I recommended that we fund Nikhil because I think Nikhil’s research directions are promising, and because I personally learn a lot about AI safety every time I talk with him."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Donor thoughts on making further donations to the donee: Alex Zhu, in his grant write-up, says that the quality of the work will be assessed by researchers at MIRI. Although it is not explicitly stated, it is likely that this evaluation will influence the decision of whether to make further grants

Other notes: The grant reasoning is written up by Alex Zhu and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions but the comments on the post do not discuss this specific grant.
Anand Srinivasan | 30,000.00 | 7 | 2019-04-07 | AI safety/deconfusion research | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw, Oliver Habryka | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grantee is doing independent deconfusion research for AI safety. His angle of attack is to develop a framework that will allow researchers to make provable claims about what specific AI systems can and cannot do, based off of factors like their architectures and their training processes.

Donor reason for selecting the donee: Grantee worked with main grant influencer Alex Zhu at an enterprise software company that they cofounded. Alex Zhu says in his grant write-up: "I recommended that we fund Anand because I think Anand’s research directions are promising, and I personally learn a lot about AI safety every time I talk with him."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Donor thoughts on making further donations to the donee: Alex Zhu, in his grant write-up, says that the quality of the work will be assessed by researchers at MIRI. Although it is not explicitly stated, it is likely that this evaluation will influence the decision of whether to make further grants

Other notes: The quality of grantee's work will be judged by researchers at the Machine Intelligence Research Institute. The grant reasoning is written up by Alex Zhu and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions but the comments on the post do not discuss this specific grant.
Alex Turner | 30,000.00 | 7 | 2019-04-07 | AI safety/agent foundations | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant for building towards a “Limited Agent Foundations” thesis on mild optimization and corrigibility. Grantee is a third-year computer science PhD student funded by a graduate teaching assistantship; to dedicate more attention to alignment research, he is applying for one or more trimesters of funding (spring term starts April 1).

Donor reason for selecting the donee: In the grant write-up, Oliver Habryka explains that he is excited by (a) Turner's posts to LessWrong reviewing many math textbooks useful for thinking about the alignment problem, (b) Turner not being intimidated by the complexity of the problem, and (c) Turner writing up his thoughts and hypotheses in a clear way, seeking feedback on them early, and making a set of novel contributions to an interesting sub-field of AI Alignment quite quickly (in the form of his work on impact measures, on which he recently collaborated with the DeepMind AI Safety team).

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed
Intended funding timeframe in months: 4

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions The comments on the post do not discuss this specific grant.
David Girardo | 30,000.00 | 7 | 2019-04-07 | AI safety/deconfusion research | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw, Oliver Habryka | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grantee is doing independent deconfusion research for AI safety. His angle of attack is to elucidate the ontological primitives for representing hierarchical abstractions, drawing from his experience with type theory, category theory, differential geometry, and theoretical neuroscience.

Donor reason for selecting the donee: The main investigator and influencer for the grant, Alex Zhu, finds the research directions promising. Tsvi Benson-Tilsen, a MIRI researcher, has also recommended that grantee get funding.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. No specific timing-related considerations are discussed

Donor thoughts on making further donations to the donee: The quality of the grantee's work will be assessed by researchers at MIRI

Other notes: The grant reasoning is written up by Alex Zhu and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions but the comments on the post do not discuss this specific grant.
Tegan McCaslin | 30,000.00 | 7 | 2019-04-07 | AI safety/forecasting | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant for independent research projects relevant to AI forecasting and strategy, including (but not necessarily limited to) some of the following: (1) Does the trajectory of AI capability development match that of biological evolution? (2) How tractable is long-term forecasting? (3) How much compute did evolution use to produce intelligence? (4) Benchmarking AI capabilities against insects. A short doc on (1) and (2) is at https://docs.google.com/document/d/1hTLrLXewF-_iJiefyZPF6L677bLrUTo2ziy6BQbxqjs/edit

Donor reason for selecting the donee: Reasons for the grant from Oliver Habryka, the main influencer, include: (1) It's easier to relocate someone who has already demonstrated trust and skills than to find someone completely new, (2.1) It's important to give good researchers runway while they find the right place. Habryka notes: "my brief assessment of Tegan’s work was not the reason why I recommended this grant, and if Tegan asks for a new grant in 6 months to focus on solo research, I will want to spend significantly more time reading her output and talking with her, to understand how these questions were chosen and what precise relation they have to forecasting technological progress in AI."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee). Habryka also mentions that he is interested only in providing limited runway, and would need to assess much more carefully for a more long-term grant
Percentage of total donor spend in the corresponding batch of donations: 3.25%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round. However, it is also related to the grantee's situation (she has just quit her job at AI Impacts, and needs financial runway to continue pursuing promising research projects)
Intended funding timeframe in months: 6

Donor thoughts on making further donations to the donee: The grant investigator Oliver Habryka notes: "if Tegan asks for a new grant in 6 months to focus on solo research, I will want to spend significantly more time reading her output and talking with her, to understand how these questions were chosen and what precise relation they have to forecasting technological progress in AI."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions The comments on the post do not discuss this specific grant, but a grant to Lauren Lee that includes somewhat similar reasoning (providing people runway after they leave their jobs, so they can explore better) attracts some criticism.
Machine Intelligence Research Institute | 50,000.00 | 2 | 2019-04-07 | AI safety | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Grant investigator and influencer Oliver Habryka believes that MIRI is making real progress in its approach of "creating a fundamental piece of theory that helps humanity to understand a wide range of powerful phenomena". He notes that MIRI started work on the alignment problem long before it became cool, which gives him more confidence that they will do the right thing, and that even their seemingly weird actions may be justified in ways that are not yet obvious. He also thinks that both the research team and the ops staff are quite competent

Donor reason for donating that amount (rather than a bigger or smaller amount): Habryka offers the following reasons for giving a grant of just $50,000, which is small relative to the grantee's budget: (1) MIRI is in a solid position funding-wise, and marginal use of money may be lower-impact. (2) There is a case for investing in helping grow a larger and more diverse set of organizations, as opposed to putting money in a few stable and well-funded organizations.
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Oliver Habryka writes: "I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions Despite his overall positive view of MIRI, Habryka recommends a relatively small grant, because MIRI is already relatively well-funded and is not heavily bottlenecked on funding; he nonetheless decides to grant some amount, and says he will think more about the marginal value of additional funding for MIRI before the next funding round.
AI Safety Camp (Earmark: Johannes Heidecke) | 25,000.00 | 13 | 2019-04-07 | AI safety | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organizational general support

Intended use of funds: Grant to fund an upcoming camp in Madrid being organized by AI Safety Camp in April 2019. The camp consists of several weeks of online collaboration on concrete research questions, culminating in a 9-day intensive in-person research camp. The goal is to support aspiring researchers of AI alignment to boost themselves into productivity.

Donor reason for selecting the donee: The grant investigator and main influencer Oliver Habryka mentions that: (1) He has a positive impression of the organizers and has received positive feedback from participants in the first two AI Safety Camps. (2) A greater need to improve access to opportunities in AI alignment for people in Europe. Habryka also mentions an associated greater risk of making the AI Safety Camp the focal point of the AI safety community in Europe, which could cause problems if the quality of the people involved isn't high. He mentions two more specific concerns: (a) Organizing long in-person events is hard, and can lead to conflict, as the last two camps did. (b) People who don't get along with the organizers may find themselves shut out of the AI safety network

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.71%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of the camp (which is scheduled for April 2019; the grant is being made around the same time) as well as the timing of the grant round
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Habryka writes: "I would want to engage with the organizers a fair bit more before recommending a renewal of this grant"

Other notes: The grantee in the grant document is listed as Johannes Heidecke, but the grant is for the AI Safety Camp. The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions The grant decision was coordinated with Effective Altruism Grants (specifically, Nicole Ross of CEA), which had also considered making a grant to the camp. Effective Altruism Grants ultimately decided against making the grant, and the Long Term Future Fund made it instead. Nicole Ross, in the evaluation by EA Grants, mentions the same concerns that Habryka does: interpersonal conflict and people being shut out of the AI safety community if they don't get along with the camp organizers.
Robert Miles | 39,000.00 | 6 | 2019-04-07 | AI safety/content creation/video | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to create video content on AI alignment. Grantee has a YouTube channel at https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg (average 20,000 views per video) and also creates videos for the Computerphile channel https://www.youtube.com/watch?v=3TYT1QfdfsM&t=2s (often more than 100,000 views per video)

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka favors the grant for these reasons: (1) Grantee explains AI alignment as primarily a technical problem, not a moral or political problem, (2) Grantee does not politicize AI safety, (3) Grantee's goal is to create interest in these problems from future researchers, and not to simply get as large of an audience as possible. Habryka notes that the grantee is the first skilled person in the X-risk community working full-time on producing video content. "Being the very best we have in this skill area, he is able to help the community in a number of novel ways (for example, he’s already helping existing organizations produce videos about their ideas)." In the previous grant round, the grantee had requested funding for a collaboration with RAISE to produce videos for them, but Habryka felt it was better to fund the grantee directly and allow him to decide which organizations he wanted to help with his videos

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 4.22%
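As an aside on how these batch percentages are computed: each figure is the grant amount divided by the total disbursed in the corresponding grant round. The round total is not stated in these entries; the minimal sketch below (Python) assumes a total of roughly 923,150 USD for the April 2019 Long Term Future Fund round, a figure inferred from the published percentages rather than quoted here.

    # Minimal sketch of the batch-percentage calculation.
    # ROUND_TOTAL_USD is an assumption inferred from the published percentages,
    # not a number stated in this entry.
    ROUND_TOTAL_USD = 923_150.00  # assumed April 2019 Long Term Future Fund round total

    def batch_percentage(grant_amount_usd: float, round_total_usd: float = ROUND_TOTAL_USD) -> float:
        """Return the grant's share of the grant round, as a percentage."""
        return 100.0 * grant_amount_usd / round_total_usd

    print(f"{batch_percentage(39_000.00):.2f}%")  # Robert Miles grant -> approximately 4.22%

The same calculation reproduces the other percentages in this round (e.g., 27,000.00 gives about 2.92% and 10,000.00 gives about 1.08%), which is what makes the assumed round total plausible.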

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions.
Donee: Jacob Lagerros | Amount: 27,000.00 | Amount rank: 12 | Donation date: 2019-04-07 | Cause area: AI safety/forecasting | URL: https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Influencer: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project|Direct project expenses

Intended use of funds: Grant to build a private platform where AI safety and policy researchers have direct access to a base of superforecaster-equivalents. Lagerros previously received two grants to work on the project: a half-time salary from Effective Altruism Grants, and a grant for direct project expenses from Berkeley Existential Risk Initiative.

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka notes the same high-level reasons for the grant as for similar grants to Anthony Aguirre (Metaculus) and Ozzie Gooen (Foretold); the general reasons are explained in the grant writeup for Gooen. Habryka also mentions that Lagerros has been around the community for 3 years, has done useful work, and has received other funding. Habryka mentions he did not assess the grant in detail; the main reason for granting from the Long Term Future Fund was logistical complications with other grantmakers (FHI and BERI), who had already vouched for the value of the project

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.92%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions The comments discuss this and the other forecasting grants, and include the question "why are you acting as grant-givers here rather than as special interest investors?" It is also included in a list of potentially concerning grants in a portfolio evaluation comment https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#d4YHzSJnNWmyxf6HM by Evan Gaensbauer.
Donee: Orpheus Lummis | Amount: 10,000.00 | Amount rank: 18 | Donation date: 2019-04-07 | Cause area: AI safety/upskilling | URL: https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Influencer: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant for upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD. Notable planned subprojects: (1) engaging with David Krueger’s AI safety reading group at the Montreal Institute for Learning Algorithms, (2) starting and maintaining a public index of AI safety papers, to help future literature reviews and to complement https://vkrakovna.wordpress.com/ai-safety-resources/ as a standalone wiki-page (e.g., at http://aisafetyindex.net ), (3) from-scratch implementation of seminal deep RL algorithms, (4) going through textbooks (Goodfellow, Bengio & Courville 2016; Sutton & Barto 2018), (5) possibly doing the next AI Safety Camp, (6) building a prioritization tool for English Wikipedia using NLP, building on the literature of quality assessment (https://paperpile.com/shared/BZ2jzQ), and (7) studying the AI alignment literature

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka is impressed with the results of the AI Safety Unconference organized by Lummis after NeurIPS with Long Term Future Fund money. However, he is not confident of the grant, writing: "I don’t know Orpheus very well, and while I have received generally positive reviews of their work, I haven’t yet had the time to look into any of those reviews in detail, and haven’t seen clear evidence about the quality of their judgment." Habryka also favors more time for self-study and reflection, and is excited about growing the Montreal AI alignment community. Finally, Habryka thinks the grant amount is small and is unlikely to have negative consequences

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee). The small amount is also one reason grant investigator Oliver Habryka is comfortable making the grant despite not investigating thoroughly
Percentage of total donor spend in the corresponding batch of donations: 1.08%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions The comments on the post do not discuss this specific grant.
Donee: Connor Flexman | Amount: 20,000.00 | Amount rank: 15 | Donation date: 2019-04-07 | Cause area: AI safety/forecasting | URL: https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Influencer: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Living expenses during research project

Intended use of funds: Grant to perform independent research in collaboration with John Salvatier

Donor reason for selecting the donee: The grant was originally requested by John Salvatier (who is already funded by an EA Grant), as a grant to Salvatier to hire Flexman to help him. But Oliver Habryka (the primary person on whose recommendation the grant was made) ultimately decided to give the money to Flexman directly, to give him more flexibility to switch if the work with Salvatier does not go well. Habryka has two reservations: a potential conflict of interest, because he lives in the same house as the recipient, and the lack of concrete, externally verifiable evidence of competence. Despite these reservations, Habryka considers significant negative consequences unlikely, and says: "I assign some significant probability that this grant can help Connor develop into an excellent generalist researcher of a type that I feel like EA is currently quite bottlenecked on."

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.17%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions Habryka was the primary person on whose recommendation the grant was made. Habryka replies to a comment giving ideas on what independent research Flexman might produce if he stops working with Salvatier.
Donee: Machine Intelligence Research Institute | Amount: 40,000.00 | Amount rank: 4 | Donation date: 2018-11-29 | Cause area: AI safety | URL: https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi | Influencer: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Grant made from the Long-Term Future Fund. Donor believes that the new research directions outlined by donee at https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ are promising, and donee fundraising post suggests it could productively absorb additional funding. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Donee: Ought | Amount: 10,000.00 | Amount rank: 18 | Donation date: 2018-11-29 | Cause area: AI safety | URL: https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi | Influencer: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Grant made to implement AI alignment concepts in real-world applications. Donee seems more hiring-constrained than fundraising-constrained, hence only a small amount, but donor does believe that donee has a promising approach. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Donee: AI summer school | Amount: 21,000.00 | Amount rank: 14 | Donation date: 2018-11-29 | Cause area: AI safety | URL: https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi | Influencer: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Grant to fund the second year of a summer school on AI safety, aiming to familiarize potential researchers with interesting technical problems in the field. Last year’s iteration of this event appears to have gone well, per https://www.lesswrong.com/posts/bXLi3n2jrfqRwoSTH/human-aligned-ai-summer-school-a-summary and private information available to donor. Donor believes that well-run education efforts of this kind are valuable (where “well-run” refers to the quality of the intellectual content, the participants, and the logistics of the event), and feels confident enough that this particular effort will be well-run. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Donee: AI Safety Unconference | Amount: 4,500.00 | Amount rank: 20 | Donation date: 2018-11-29 | Cause area: AI safety | URL: https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi | Influencer: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Orpheus Lummis and Vaughn DiMarco are organizing an unconference on AI Alignment on the last day of the NeurIPS conference, with the goal of facilitating networking and research on AI Alignment among a diverse audience of AI researchers with and without safety backgrounds. Based on interaction with the organizers and some participants, the donor feels this project is worth funding. However, the donee is still not sure whether the unconference will be held, so the grant is conditional on the donee deciding to proceed. The grant would fully fund the request. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Donee: Machine Intelligence Research Institute | Amount: 488,994.00 | Amount rank: 1 | Donation date: 2018-08-14 | Cause area: AI safety | URL: https://app.effectivealtruism.org/funds/far-future/payouts/6g4f7iae5Ok6K6YOaAiyK0 | Influencer: Nick Beckstead

Grant made from the Long-Term Future Fund. Beckstead recommended that the grantee spend the money to save time and increase productivity of employees (for instance, by subsidizing childcare or electronics). Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Donee: Berkeley Existential Risk Initiative | Amount: 14,838.02 | Amount rank: 17 | Donation date: 2017-04 | Cause area: AI safety/other global catastrophic risks | URL: https://app.effectivealtruism.org/funds/far-future/payouts/OzIQqsVacUKw0kEuaUGgI | Influencer: Nick Beckstead

Grant discussed, along with reasoning, at http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ Grantee approached Nick Beckstead with a grant proposal asking for 50,000 USD. Beckstead granted all the money that had already been donated to the Far Future Fund, and made up the remainder via the EA Giving Group and some personal funds. Percentage of total donor spend in the corresponding batch of donations: 100.00%.

Similarity to other donors

Sorry, we couldn't find any similar donors.