Effective Altruism Funds: Long-Term Future Fund donations made to AI Safety Camp

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or that have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donor information

We do not have any donor information for the donor Effective Altruism Funds: Long-Term Future Fund in our system.

This entity is also a donee.

Full donor page for donor Effective Altruism Funds: Long-Term Future Fund

Basic donee information

We do not have any donee information for the donee AI Safety Camp in our system.

Full donee page for donee AI Safety Camp

Donor–donee relationship


Donor–donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 3 | 29,000 | 31,667 | 25,000 | 25,000 | 25,000 | 25,000 | 29,000 | 29,000 | 29,000 | 41,000 | 41,000 | 41,000 | 41,000
AI safety | 3 | 29,000 | 31,667 | 25,000 | 25,000 | 25,000 | 25,000 | 29,000 | 29,000 | 29,000 | 41,000 | 41,000 | 41,000 | 41,000
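
As a sanity check, the summary statistics above can be recomputed from the three underlying grant amounts ($25,000, $29,000, and $41,000). The percentile columns are consistent with the nearest-rank method; that methodology is an assumption on our part, since the page does not document how its percentiles are computed. A minimal Python sketch:

    import math
    import statistics

    # The three grants from this donor to this donee (current USD).
    amounts = sorted([25_000, 29_000, 41_000])

    def nearest_rank_percentile(sorted_values, p):
        # Nearest-rank method: the smallest value whose cumulative
        # share of the sample is at least p percent.
        k = max(1, math.ceil(p / 100 * len(sorted_values)))
        return sorted_values[k - 1]

    print(sum(amounts))                     # 95000, matching the totals below
    print(statistics.median(amounts))       # 29000
    print(round(statistics.mean(amounts)))  # 31667
    for p in range(10, 100, 10):
        print(p, nearest_rank_percentile(amounts, p))
    # 10th-30th -> 25000; 40th-60th -> 29000; 70th-90th -> 41000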

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the donor's own classification in all cases.

Cause area | Number of donations | Total | 2019
AI safety (filter this donor) | 3 | 95,000.00 | 95,000.00
Total | 3 | 95,000.00 | 95,000.00

Skipping spending graph as there is less than one year's worth of donations.

Full list of donations in reverse chronological order (3 donations)

Each entry below lists the amount (current USD), the amount rank (out of 3), the donation date, the cause area, the payout URL, the influencers, and notes; a short sketch after the list shows how the batch percentages imply round totals.

Amount (current USD): 29,000.00
Amount rank (out of 3): 2
Donation date: 2019-11-21
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/60MJaGYoLb0zGlIZxuCMPg
Influencers: Matt Wage, Helen Toner, Oliver Habryka, Alex Zhu

Donation process: Grant selected from a pool of applicants. The write-up for this round did not include further details on the grantmaking process.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund the fifth AI Safety Camp. This camp is to be held in Toronto, Canada.

Donor reason for selecting the donee: The grant page says: "This round, I reached out to more past participants and received responses that were, overall, quite positive. I’ve also started thinking that the reference class of things like the AI Safety Camp is more important than I had originally thought."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was likely determined by what was requested in the application. It is comparable to the previous grant amounts of $25,000 and $41,000, which also funded AI Safety Camps.
Percentage of total donor spend in the corresponding batch of donations: 6.22%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round and of when the grantee intends to hold the next AI Safety Camp.
Intended funding timeframe in months: 1

Amount (current USD): 25,000.00
Amount rank (out of 3): 3
Donation date: 2019-04-07
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl
Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted a grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund an upcoming camp in Madrid being organized by AI Safety Camp in April 2019. The camp consists of several weeks of online collaboration on concrete research questions, culminating in a 9-day intensive in-person research camp. The goal is to support aspiring researchers of AI alignment to boost themselves into productivity.

Donor reason for selecting the donee: The grant investigator and main influencer Oliver Habryka mentions that: (1) he has a positive impression of the organizers and has received positive feedback from participants in the first two AI Safety Camps; (2) there is a greater need to improve access to opportunities in AI alignment for people in Europe. Habryka also mentions an associated risk: making the AI Safety Camp the focal point of the AI safety community in Europe could cause problems if the quality of the people involved isn't high. He mentions two more specific concerns: (a) organizing long in-person events is hard and can lead to conflict, as happened at the last two camps; (b) people who don't get along with the organizers may find themselves shut out of the AI safety network.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee).
Percentage of total donor spend in the corresponding batch of donations: 2.71%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of the camp (which is scheduled for April 2019; the grant is being made around the same time) as well as the timing of the grant round.
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Habryka writes: "I would want to engage with the organizers a fair bit more before recommending a renewal of this grant."

Donor retrospective of the donation: The August 2019 grant round would include a $41,000 grant to AI Safety Camp for the next camp, with some format changes. However, in the write-up for that grant round, Habryka says: "In April I said I wanted to talk with the organizers before renewing this grant, and I expected to have at least six months between applications from them, but we received another application this round and I ended up not having time for that conversation." Also: "I will not fund another one without spending significantly more time investigating the program."

Other notes: The grantee in the grant document is listed as Johannes Heidecke, but the grant is for the AI Safety Camp. The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions The grant decision was coordinated with Effective Altruism Grants (specifically, Nicole Ross of CEA), which had also considered making a grant to the camp. Effective Altruism Grants ultimately decided against making the grant, and the Long-Term Future Fund made it instead. Nicole Ross, in the evaluation by EA Grants, mentions the same concerns that Habryka does: interpersonal conflict, and people being shut out of the AI safety community if they don't get along with the camp organizers.

Amount (current USD): 41,000.00
Amount rank (out of 3): 1
Donation date: 2019-04-07
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl
Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner

Donation process: Grantee applied through the online application process and was selected based on review by the fund managers. Oliver Habryka was the fund manager most excited about the grant, and he was responsible for the public write-up.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund the 4th AI Safety Camp (AISC) - a research retreat and program for prospective AI safety researchers. From the grant application: "Compared to past iterations, we plan to change the format to include a 3 to 4-day project generation period and team formation workshop, followed by a several-week period of online team collaboration on concrete research questions, a 6 to 7-day intensive research retreat, and ongoing mentoring after the camp. The target capacity is 25 - 30 participants, with projects that range from technical AI safety (majority) to policy and strategy research."

Donor reason for selecting the donee: Habryka, in his grant write-up, says: "I generally think that hackathons and retreats for researchers can be very valuable, allowing for focused thinking in a new environment. I think the AI Safety Camp is held at a relatively low cost, in a part of the world (Europe) where there exist few other opportunities for potential new researchers to spend time thinking about these topics, and some promising people have attended." He also notes two positive things: (1) the attendees of the second camp all produced an artifact of their research (e.g., an academic write-up or code repository); (2) changes to the upcoming camp address some concerns raised in feedback on previous camps.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but it was likely determined by the budget requested by the grantee. For comparison, the amount granted for the previous AI Safety Camp was $25,000, a smaller amount. The increased grant size is likely due to the new format making the camp longer.
Percentage of total donor spend in the corresponding batch of donations: 9.34%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of the grant round as well as the intended timing of the 4th AI Safety Camp that the grant funds.
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Habryka writes: "I will not fund another one without spending significantly more time investigating the program."

Other notes: Habryka notes: "After signing off on this grant, I found out that, due to overlap between the organizers of the events, some feedback I got about this camp was actually feedback about the Human Aligned AI Summer School, which means that I had even less information than I thought. In April I said I wanted to talk with the organizers before renewing this grant, and I expected to have at least six months between applications from them, but we received another application this round and I ended up not having time for that conversation."
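
As a closing cross-check, the "percentage of total donor spend in the corresponding batch of donations" figures in the three entries above can be inverted to estimate the total size of each grant round. The implied totals below are approximations (the published percentages are rounded to two decimal places) and are not stated anywhere on this page:

    # Back out the implied size of each grant round from the published
    # percentage-of-batch figures; results are approximate because the
    # percentages are rounded.
    grants = [
        (29_000, 6.22),  # 2019-11-21 grant
        (25_000, 2.71),  # 2019-04-07 grant of $25,000
        (41_000, 9.34),  # grant of $41,000
    ]
    for amount, pct in grants:
        implied_total = amount / (pct / 100)
        print(f"${amount:,} at {pct}% implies a round total of ~${implied_total:,.0f}")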