AI Safety Camp donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly or shared with permission. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donee information

We do not have any donee information for AI Safety Camp in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 17 | 4,708 | 43,214 | 48 | 59 | 3,667 | 3,680 | 3,793 | 4,708 | 29,000 | 35,000 | 72,500 | 130,000 | 290,000
AI safety | 17 | 4,708 | 43,214 | 48 | 59 | 3,667 | 3,680 | 3,793 | 4,708 | 29,000 | 35,000 | 72,500 | 130,000 | 290,000
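
For reference, these summary statistics can be recomputed from the 17 donation amounts in the full donation list below. A minimal sketch, assuming the portal uses the nearest-rank method for percentiles (that method is consistent with the values shown but is not confirmed anywhere on this page):

```python
import math
import statistics

# The 17 donation amounts (current USD) from the full donation list below.
amounts = [
    290000.00, 130000.00, 85000.00, 72500.00, 41000.00, 35000.00,
    29000.00, 25000.00, 4707.60, 4036.77, 3793.15, 3680.00,
    3667.00, 3667.00, 3484.80, 58.85, 48.25,
]

def nearest_rank_percentile(sorted_values, p):
    """p-th percentile via the nearest-rank method: the ceil(p/100 * n)-th smallest value."""
    k = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[k - 1]

values = sorted(amounts)
print("Count:", len(values))                       # 17
print("Median:", round(statistics.median(values))) # 4708
print("Mean:", round(statistics.mean(values)))     # 43214
print("Minimum:", round(values[0]))                # 48
for p in range(10, 100, 10):
    print(f"{p}th percentile:", round(nearest_rank_percentile(values, p)))
print("Maximum:", round(values[-1]))               # 290000
```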

Donation amounts by donor and year for donee AI Safety Camp

Donor | Total | 2022 | 2021 | 2020 | 2019 | 2018
FTX Future Fund | 290,000.00 | 290,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Effective Altruism Funds: Long-Term Future Fund | 252,500.00 | 72,500.00 | 85,000.00 | 0.00 | 95,000.00 | 0.00
Jaan Tallinn | 130,000.00 | 0.00 | 130,000.00 | 0.00 | 0.00 | 0.00
Survival and Flourishing Projects | 35,000.00 | 0.00 | 35,000.00 | 0.00 | 0.00 | 0.00
Lotta and Claes Linsefors | 4,707.60 | 0.00 | 0.00 | 0.00 | 0.00 | 4,707.60
Greg Colbourn | 4,036.77 | 0.00 | 0.00 | 0.00 | 0.00 | 4,036.77
Machine Intelligence Research Institute | 3,793.15 | 0.00 | 0.00 | 0.00 | 0.00 | 3,793.15
Michael Pokorny | 3,680.00 | 0.00 | 0.00 | 3,680.00 | 0.00 | 0.00
Luke Stebbing | 3,667.00 | 0.00 | 0.00 | 3,667.00 | 0.00 | 0.00
Simon Möller | 3,667.00 | 0.00 | 0.00 | 3,667.00 | 0.00 | 0.00
Centre for Effective Altruism | 3,484.80 | 0.00 | 0.00 | 0.00 | 0.00 | 3,484.80
Karol Kubicki | 58.85 | 0.00 | 0.00 | 0.00 | 0.00 | 58.85
Tom McGrath | 48.25 | 0.00 | 0.00 | 0.00 | 0.00 | 48.25
Total | 734,643.42 | 362,500.00 | 250,000.00 | 11,014.00 | 95,000.00 | 16,129.42

Full list of documents in reverse chronological order (2 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund | Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Berkeley Existential Risk Initiative; Ought; Global Priorities Institute; Center on Long-Term Risk; Center for Security and Emerging Technology; AI Impacts; Leverhulme Centre for the Future of Intelligence; AI Safety Camp; Future of Life Institute; Convergence Analysis; Median Group; AI Pulse; 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Larks | Effective Altruism Forum | Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund | Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; Ought; OpenAI; AI Safety Camp; Future of Life Institute; AI Impacts; Global Priorities Institute; Foundational Research Institute; Median Group; Center for Security and Emerging Technology; Leverhulme Centre for the Future of Intelligence; Berkeley Existential Risk Initiative; AI Pulse | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.

Full list of donations in reverse chronological order (17 donations)

Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations

Donor | Amount (current USD) | Amount rank (out of 17) | Donation date | Cause area | URL | Influencer | Notes
Effective Altruism Funds: Long-Term Future Fund | 72,500.00 | 4 | 2022-12 | AI safety/technical research/talent pipeline | https://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round | -- | Donation process: The general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Cover participant stipends for AI Safety Camp Virtual 2023"; see https://www.alignmentforum.org/posts/9AXSrp5MAThZZEfTc/ai-safety-camp-virtual-edition-2023 for the announcement and details; it says: "AI Safety Camp Virtual 8 will be a 3.5-month long online research program from 4 March to 18 June 2023, where participants form teams to work on pre-selected projects."

Donor reason for donating at this time (rather than earlier or later): The grant is made about three months prior to the start of the camp being funded, and about one month before the announcement post https://www.alignmentforum.org/posts/9AXSrp5MAThZZEfTc/ai-safety-camp-virtual-edition-2023 seeking applications.
Intended funding timeframe in months: 4
FTX Future Fund | 290,000.00 | 1 | 2022-06 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "partially support the salaries for AI Safety Camp’s two directors and to support logistical expenses at its physical camp."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Jaan Tallinn | 130,000.00 | 2 | 2021-10 | AI safety | https://survivalandflourishing.fund/sff-2021-h2-recommendations | Survival and Flourishing Fund; Beth Barnes; Oliver Habryka; Zvi Mowshowitz | Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective. (A toy illustration of marginal-utility-based allocation appears after this entry's notes.)

Intended use of funds (category): Direct project expenses

Intended use of funds: It is likely (though not explicitly stated) that the grant funds the upcoming six-month virtual AI Safety Camp from January to June 2022.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the first one with a grant to the grantee.
Intended funding timeframe in months: 6

Other notes: The grant is made via Rethink Charity. Percentage of total donor spend in the corresponding batch of donations: 1.47%; announced: 2021-11-20.
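
As a rough aid to intuition for the quoted S-process description, here is a toy greedy allocator that spends a budget in fixed increments, always on the application whose next increment has the highest declared marginal utility. This is an illustrative simplification only, not SFF's actual algorithm: it ignores multiple recommenders, funder deference to recommenders, and the negotiated adjustments mentioned in the quote, and the application names and utility tables below are invented.

```python
# Toy sketch of marginal-utility-based grant allocation (NOT SFF's actual S-process).
# Each application declares a table of (funding threshold, marginal utility) pairs:
# the marginal utility of an extra dollar drops once funding passes each threshold.

def marginal_utility(table, funded_so_far):
    """Return the declared marginal utility of the next dollar, given funding so far."""
    for threshold, utility in table:
        if funded_so_far < threshold:
            return utility
    return 0.0  # fully funded; no further declared value

def allocate(budget, applications, step=1000):
    """Greedily allocate `budget` in `step`-sized chunks to the highest-marginal-utility application."""
    grants = {name: 0 for name in applications}
    while budget >= step:
        # Pick the application whose next chunk is most valuable right now.
        best = max(applications, key=lambda n: marginal_utility(applications[n], grants[n]))
        if marginal_utility(applications[best], grants[best]) <= 0:
            break  # nothing left worth funding
        grants[best] += step
        budget -= step
    return grants

# Hypothetical applications and utility tables, purely for illustration.
applications = {
    "Research camp": [(50_000, 10.0), (120_000, 4.0)],
    "Fellowship":    [(80_000, 7.0), (150_000, 2.0)],
    "Ops support":   [(30_000, 6.0)],
}
print(allocate(300_000, applications))
# -> {'Research camp': 120000, 'Fellowship': 150000, 'Ops support': 30000}
```
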
Survival and Flourishing Projects | 35,000.00 | 6 | 2021-07 | AI safety | http://survivalandflourishing.org/sfp-2021-q2 | -- | Donation process: http://survivalandflourishing.org/saf-2021-q2-announcement says: "SAF is hosting a competition for individuals seeking a fixed amount of funding for projects benefitting SAF’s mission. We expect this competition to allocate between $300k and $350k in total funding across all winners, similar to our last round. [..] Winning applicants will be selected by a Selection Committee comprising at least three people appointed by SAF’s Project Director and Advisors. The Selection Committee will rank-order projects and award funding and project contracts to the top-ranked projects falling within our budget for this round. We may adjust our total budget for the round based on the quality of applications."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "facilitating two editions where selected applicants prioritise and test their fit for AI x-safety research."

Donor reason for donating that amount (rather than a bigger or smaller amount): While reasons for the amounts of individual grants are not given, SAF provides guidance on the allowed ranges. Amounts must be between $5,000 and $200,000, for a time frame of up to 2 years. The amount is also capped at $300,000 × (the applicant’s percentage time commitment to the project) × (the duration of the project in years); see the illustrative sketch after this entry's notes.

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of the 2021 Q2 grant round.
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: While no thoughts on followup donations are included for individual grants, SAF says on the grant page that it usually does not fund ongoing work, making followup grants unlikely. However, AI Safety Camp is an ongoing series of camps, so other camps in the series may be funded in the future, possibly with the funding going to individuals other than this round's recipient (Remmelt Ellen).
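
For intuition on the cap described in the entry above, here is a minimal worked sketch. The 50% time commitment and one-year duration are made-up inputs for illustration, not figures from the actual grant:

```python
def max_saf_request(time_commitment_fraction, duration_years):
    """Upper bound on an SAF request under the stated guidance: at most $200,000,
    and at most $300,000 x (time commitment) x (duration in years).
    Requests must also be at least $5,000 and run for at most 2 years."""
    assert 0 < time_commitment_fraction <= 1.0
    assert 0 < duration_years <= 2
    return min(200_000, 300_000 * time_commitment_fraction * duration_years)

# Hypothetical example: someone working half-time on a one-year project could
# request at most $150,000; the actual $35,000 grant fits well under that cap.
print(max_saf_request(0.5, 1))  # 150000.0
```
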
Effective Altruism Funds: Long-Term Future Fund | 85,000.00 | 3 | 2021-04-01 | AI safety/technical research/talent pipeline | https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants | Oliver Habryka; Asya Bergal; Adam Gleave; Daniel Eth; Evan Hubinger; Ozzie Gooen | Donation process: Grant selected from a pool of applicants. This particular grantee had received grants in the past, and the grantmaking process was mainly based on soliciting more reviews and feedback from participants in AI Safety Camps funded by past grants.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "running a virtual and physical camp where selected applicants test their fit for AI safety research." Unlike previous grants, no specific date or timeframe is provided for the camp being funded.

Donor reason for selecting the donee: Grant page says: "some alumni of the camp reported very substantial positive benefits from attending the camp, while none of them reported noticing any substantial harmful consequences. [...] all alumni I reached out to thought that the camp was at worst, only a slightly less valuable use of their time than what they would have done instead, so the downside risk seems relatively limited. [...] the need for social events and workshops like this is greater than I previously thought, and that they are in high demand among people new to the AI Alignment field. [...] there is enough demand for multiple programs like this one, which reduces the grant’s downside risk, since it means that AI Safety Camp is not substantially crowding out other similar camps. There also don’t seem to be many similar events to AI Safety Camp right now, which suggests that a better camp would not happen naturally, and makes it seem like a bad idea to further reduce the supply by not funding the camp."

Donor reason for donating that amount (rather than a bigger or smaller amount): No specific reasons are given for the amount, but it is larger than previous grants, possibly reflecting the expanded scope of running both a virtual and a physical camp.
Percentage of total donor spend in the corresponding batch of donations: 5.15%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round as well as possibly by time taken to collect and process feedback from past grant participants. The pausing of in-person camps during the COVID-19 pandemic may also explain the gap since the previous grant.
Michael Pokorny | 3,680.00 | 12 | 2020-02 | AI safety | -- | -- | This donation was made along with similarly sized donations by Luke Stebbing and Simon Möller. Information on the three donations was communicated by the donee (with donor consent) to Donations List Website maintainer Vipul Naik in January 2022.
Simon Möller | 3,667.00 | 13 | 2020-02 | AI safety | -- | -- | This donation was made along with similarly sized donations by Luke Stebbing and Michael Pokorny. Information on the three donations was communicated by the donee (with donor consent) to Donations List Website maintainer Vipul Naik in January 2022.
Luke Stebbing | 3,667.00 | 13 | 2020-02 | AI safety | -- | -- | This donation was made along with similarly sized donations by Simon Möller and Michael Pokorny. Information on the three donations was communicated by the donee (with donor consent) to Donations List Website maintainer Vipul Naik in January 2022.
Effective Altruism Funds: Long-Term Future Fund | 29,000.00 | 7 | 2019-11-21 | AI safety/technical research/talent pipeline | https://funds.effectivealtruism.org/funds/payouts/november-2019-long-term-future-fund-grants | Matt Wage; Helen Toner; Oliver Habryka; Alex Zhu | Donation process: Grant selected from a pool of applicants. More details on the grantmaking process were not included in this round.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund the fifth AI Safety Camp. This camp is to be held in Toronto, Canada.

Donor reason for selecting the donee: The grant page says: "This round, I reached out to more past participants and received responses that were, overall, quite positive. I’ve also started thinking that the reference class of things like the AI Safety Camp is more important than I had originally thought."

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount likely determined based on what was requested in the application. It is comparable to previous grant amounts of $25,000 and $41,000, which were also for running an AI Safety Camp.
Percentage of total donor spend in the corresponding batch of donations: 6.22%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round and of when the grantee intends to hold the next AI Safety Camp.
Intended funding timeframe in months: 1

Donor retrospective of the donation: The followup $85,000 grant (2021-04-01), also investigated by Oliver Habryka, would be accompanied by a more positive assessment based on processing more feedback from camp participants.
Effective Altruism Funds: Long-Term Future Fund | 41,000.00 | 5 | 2019-08-30 | AI safety/technical research/talent pipeline | https://funds.effectivealtruism.org/funds/payouts/august-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka; Alex Zhu; Matt Wage; Helen Toner | Donation process: Grantee applied through the online application process, and was selected based on review by the fund managers. Oliver Habryka was the fund manager most excited about the grant, and responsible for the public write-up.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund the 4th AI Safety Camp (AISC) - a research retreat and program for prospective AI safety researchers. From the grant application: "Compared to past iterations, we plan to change the format to include a 3 to 4-day project generation period and team formation workshop, followed by a several-week period of online team collaboration on concrete research questions, a 6 to 7-day intensive research retreat, and ongoing mentoring after the camp. The target capacity is 25 - 30 participants, with projects that range from technical AI safety (majority) to policy and strategy research." The project would later spin off as the AI Safety Research Program https://aisrp.org/

Donor reason for selecting the donee: Habryka, in his grant write-up, says: "I generally think that hackathons and retreats for researchers can be very valuable, allowing for focused thinking in a new environment. I think the AI Safety Camp is held at a relatively low cost, in a part of the world (Europe) where there exist few other opportunities for potential new researchers to spend time thinking about these topics, and some promising people have attended." He also notes two positive things: (1) The attendees of the second camp all produced an artifact of their research (e.g. an academic writeup or code repository). (2) Changes to the upcoming camp address some concerns raised in feedback on previous camps.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for amount given, but the amount is likely determined by the budget requested by the grantee. For comparison, the amount granted for the previous AI safety camp was $25,000, i.e., a smaller amount. The increased grant size is likely due to the new format of the camp making it longer.
Percentage of total donor spend in the corresponding batch of donations: 9.34%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round as well as intended timing of the 4th AI Safety Camp the grant is for.
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Habryka writes: "I will not fund another one without spending significantly more time investigating the program."

Other notes: Habryka notes: "After signing off on this grant, I found out that, due to overlap between the organizers of the events, some feedback I got about this camp was actually feedback about the Human Aligned AI Summer School, which means that I had even less information than I thought. In April I said I wanted to talk with the organizers before renewing this grant, and I expected to have at least six months between applications from them, but we received another application this round and I ended up not having time for that conversation." The project funded by the grant would later spin off as the AI Safety Research Program https://aisrp.org/ and the page https://aisrp.org/?page_id=116 would include details on the project outputs.
Effective Altruism Funds: Long-Term Future Fund | 25,000.00 | 8 | 2019-03-20 | AI safety/technical research/talent pipeline | https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Oliver Habryka; Alex Zhu; Matt Wage; Helen Toner; Matt Fallshaw | Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to fund an upcoming camp in Madrid being organized by AI Safety Camp in April 2019. The camp consists of several weeks of online collaboration on concrete research questions, culminating in a 9-day intensive in-person research camp. The goal is to support aspiring researchers of AI alignment to boost themselves into productivity.

Donor reason for selecting the donee: The grant investigator and main influencer Oliver Habryka mentions that: (1) He has a positive impression of the organizers and has received positive feedback from participants in the first two AI Safety Camps. (2) He sees a greater need to improve access to opportunities in AI alignment for people in Europe. Habryka also mentions an associated risk: making the AI Safety Camp the focal point of the AI safety community in Europe could cause problems if the quality of the people involved isn't high. He mentions two more specific concerns: (a) Organizing long in-person events is hard, and can lead to conflict, as happened at the last two camps. (b) People who don't get along with the organizers may find themselves shut out of the AI safety network.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee).
Percentage of total donor spend in the corresponding batch of donations: 2.71%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of the camp (which is scheduled for April 2019; the grant is being made around the same time) as well as the timing of the grant round.
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Habryka writes: "I would want to engage with the organizers a fair bit more before recommending a renewal of this grant."

Donor retrospective of the donation: The August 2019 grant round would include a $41,000 grant to AI Safety Camp for the next camp, with some format changes. However, in the write-up for that grant round, Habryka says: "In April I said I wanted to talk with the organizers before renewing this grant, and I expected to have at least six months between applications from them, but we received another application this round and I ended up not having time for that conversation." Also: "I will not fund another one without spending significantly more time investigating the program."

Other notes: The grantee in the grant document is listed as Johannes Heidecke, but the grant is for the AI Safety Camp. The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). The grant decision was coordinated with Effective Altruism Grants (specifically, Nicole Ross of CEA), which had also considered making a grant to the camp. Effective Altruism Grants ultimately decided against making the grant, and the Long-Term Future Fund made it instead. Nicole Ross, in the evaluation by EA Grants, mentions the same concerns that Habryka does: interpersonal conflict, and people being shut out of the AI safety community if they don't get along with the camp organizers.
Centre for Effective Altruism | 3,484.80 | 15 | 2018-06-07 | AI safety | https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards (GW, IR) | -- | The actual donation probably happened sometime between February and June 2018. Currency info: donation given as 2,961.00 EUR (conversion done on 2018-06-08 via Bloomberg).
Karol Kubicki | 58.85 | 16 | 2018-06-07 | AI safety | https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards (GW, IR) | -- | The actual donation probably happened sometime between February and June 2018. Currency info: donation given as 50.00 EUR (conversion done on 2018-06-08 via Bloomberg).
Tom McGrath | 48.25 | 17 | 2018-06-07 | AI safety | https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards (GW, IR) | -- | The actual donation probably happened sometime between February and June 2018. Currency info: donation given as 41.00 EUR (conversion done on 2018-06-08 via Bloomberg).
Machine Intelligence Research Institute | 3,793.15 | 11 | 2018-06-07 | AI safety | https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards (GW, IR) | -- | The actual donation probably happened sometime between February and June 2018. Currency info: donation given as 3,223.00 EUR (conversion done on 2018-06-08 via Bloomberg).
Greg Colbourn | 4,036.77 | 10 | 2018-06-07 | AI safety | https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards (GW, IR) | -- | The actual donation probably happened sometime between February and June 2018. Currency info: donation given as 3,430.00 EUR (conversion done on 2018-06-08 via Bloomberg).
Lotta and Claes Linsefors | 4,707.60 | 9 | 2018-06-07 | AI safety | https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards (GW, IR) | -- | The actual donation probably happened sometime between February and June 2018. Currency info: donation given as 4,000.00 EUR (conversion done on 2018-06-08 via Bloomberg).
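
The six 2018-06-07 donations above were given in EUR and converted to USD using a Bloomberg rate from 2018-06-08. The exact rate is not stated on this page; the listed amounts are consistent with a rate of roughly 1.1769 USD per EUR. The sketch below uses that inferred rate (an assumption backed out from the listed EUR/USD pairs) with half-up rounding to cents, and reproduces the USD figures shown:

```python
from decimal import Decimal, ROUND_HALF_UP

# Inferred rate: not stated on this page; backed out from the listed EUR/USD pairs.
USD_PER_EUR = Decimal("1.1769")

eur_amounts = {
    "Centre for Effective Altruism": Decimal("2961.00"),
    "Karol Kubicki": Decimal("50.00"),
    "Tom McGrath": Decimal("41.00"),
    "Machine Intelligence Research Institute": Decimal("3223.00"),
    "Greg Colbourn": Decimal("3430.00"),
    "Lotta and Claes Linsefors": Decimal("4000.00"),
}

for donor, eur in eur_amounts.items():
    usd = (eur * USD_PER_EUR).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    print(f"{donor}: {eur} EUR -> {usd} USD")
# Output matches the USD figures above:
# 3484.80, 58.85, 48.25, 3793.15, 4036.77, 4707.60
```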