AI Safety Camp donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (to share without caveats, please check with Vipul Naik first). We expect to complete the first round of development by the end of December 2019. See the about page for more details.

Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

- Basic donee information
- Donee donation statistics
- Donation amounts by donor and year for donee AI Safety Camp
- Full list of donations in reverse chronological order

Basic donee information

We do not have any information for the donee AI Safety Camp in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 7 | 3,793 | 5,876 | 48 | 48 | 59 | 3,485 | 3,485 | 3,793 | 4,037 | 4,037 | 4,708 | 25,000 | 25,000
AI safety | 7 | 3,793 | 5,876 | 48 | 48 | 59 | 3,485 | 3,485 | 3,793 | 4,037 | 4,037 | 4,708 | 25,000 | 25,000
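
The percentile columns above appear consistent with the nearest-rank method (an inference from the numbers; the site does not state its method). A minimal Python sketch that reproduces the summary row from the seven donation amounts on this page, up to the table's rounding:

```python
import math

# Donation amounts (current USD) to AI Safety Camp, copied from this page.
amounts = sorted([25000.00, 4707.60, 4036.77, 3793.15, 3484.80, 58.84, 48.25])

def nearest_rank_percentile(sorted_values, p):
    """Nearest-rank percentile: the smallest value such that at least
    p percent of the data is less than or equal to it."""
    rank = math.ceil(p / 100 * len(sorted_values))  # 1-based rank
    return sorted_values[max(rank, 1) - 1]

count = len(amounts)                              # 7
mean = sum(amounts) / count                       # ~5,875.63, shown as 5,876
median = nearest_rank_percentile(amounts, 50)     # 3,793.15, shown as 3,793
p90 = nearest_rank_percentile(amounts, 90)        # 25,000.00
```

Under this method the 90th percentile of only seven values already lands on the maximum, which matches the table.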

Donation amounts by donor and year for donee AI Safety Camp

Donor | Total | 2019 | 2018
Effective Altruism Funds | 25,000.00 | 25,000.00 | 0.00
Lotta and Claes Linsefors | 4,707.60 | 0.00 | 4,707.60
Greg Colbourn | 4,036.77 | 0.00 | 4,036.77
Machine Intelligence Research Institute | 3,793.15 | 0.00 | 3,793.15
Centre for Effective Altruism | 3,484.80 | 0.00 | 3,484.80
Karol Kubicki | 58.84 | 0.00 | 58.84
Tom McGrath | 48.25 | 0.00 | 48.25
Total | 41,129.41 | 25,000.00 | 16,129.41
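
The row and column totals above are simple sums over the underlying donation list. A quick Python sketch of the aggregation, with the donation data copied from this page:

```python
from collections import defaultdict

# (donor, year, amount in current USD) tuples, copied from this page.
donations = [
    ("Effective Altruism Funds", 2019, 25000.00),
    ("Lotta and Claes Linsefors", 2018, 4707.60),
    ("Greg Colbourn", 2018, 4036.77),
    ("Machine Intelligence Research Institute", 2018, 3793.15),
    ("Centre for Effective Altruism", 2018, 3484.80),
    ("Karol Kubicki", 2018, 58.84),
    ("Tom McGrath", 2018, 48.25),
]

by_year = defaultdict(float)   # column totals
by_donor = defaultdict(float)  # row totals
for donor, year, amount in donations:
    by_year[year] += amount
    by_donor[donor] += amount

grand_total = sum(by_year.values())  # 41,129.41
```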

Full list of donations in reverse chronological order (7 donations)

Donor: Effective Altruism Funds
Amount (current USD): 25,000.00
Amount rank (out of 7): 1
Donation date: 2019-04-07
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl
Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: The donee submitted a grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Organizational general support

Intended use of funds: Grant to fund an upcoming camp in Madrid being organized by AI Safety Camp in April 2019. The camp consists of several weeks of online collaboration on concrete research questions, culminating in a 9-day intensive in-person research camp. The goal is to help aspiring AI alignment researchers boost themselves into productivity.

Donor reason for selecting the donee: The grant investigator and main influencer, Oliver Habryka, mentions that: (1) he has a positive impression of the organizers and has received positive feedback from participants in the first two AI Safety Camps; (2) he sees a greater need to improve access to opportunities in AI alignment for people in Europe. Habryka also mentions an associated risk: making the AI Safety Camp the focal point of the AI safety community in Europe could cause problems if the quality of the people involved isn't high. He mentions two more specific concerns: (a) organizing long in-person events is hard and can lead to conflict, as the last two camps did; (b) people who don't get along with the organizers may find themselves shut out of the AI safety network.

Donor reason for donating that amount (rather than a bigger or smaller amount): Likely to be the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee)
Percentage of total donor spend in the corresponding batch of donations: 2.71%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of the camp (which is scheduled for April 2019; the grant is being made around the same time) as well as the timing of the grant round
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Habryka writes: "I would want to engage with the organizers a fair bit more before recommending a renewal of this grant"

Other notes: The grantee in the grant document is listed as Johannes Heidecke, but the grant is for the AI Safety Camp. The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions . The grant decision was coordinated with Effective Altruism Grants (specifically, Nicole Ross of CEA), which had also considered making a grant to the camp; Effective Altruism Grants ultimately decided against making the grant, and the Long Term Future Fund made it instead. Nicole Ross, in the evaluation by EA Grants, mentions the same concerns that Habryka does: interpersonal conflict, and people being shut out of the AI safety community if they don't get along with the camp organizers.
Donor: Lotta and Claes Linsefors
Amount (current USD): 4,707.60
Amount rank (out of 7): 2
Donation date: 2018-06-07
Cause area: AI safety
URL: https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards
Influencers: --
Notes: The actual donation probably happened sometime between February and June 2018.

Donor: Greg Colbourn
Amount (current USD): 4,036.77
Amount rank (out of 7): 3
Donation date: 2018-06-07
Cause area: AI safety
URL: https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards
Influencers: --
Notes: The actual donation probably happened sometime between February and June 2018.

Donor: Machine Intelligence Research Institute
Amount (current USD): 3,793.15
Amount rank (out of 7): 4
Donation date: 2018-06-07
Cause area: AI safety
URL: https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards
Influencers: --
Notes: The actual donation probably happened sometime between February and June 2018.

Donor: Centre for Effective Altruism
Amount (current USD): 3,484.80
Amount rank (out of 7): 5
Donation date: 2018-06-07
Cause area: AI safety
URL: https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards
Influencers: --
Notes: The actual donation probably happened sometime between February and June 2018.

Donor: Karol Kubicki
Amount (current USD): 58.84
Amount rank (out of 7): 6
Donation date: 2018-06-07
Cause area: AI safety
URL: https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards
Influencers: --
Notes: The actual donation probably happened sometime between February and June 2018.

Donor: Tom McGrath
Amount (current USD): 48.25
Amount rank (out of 7): 7
Donation date: 2018-06-07
Cause area: AI safety
URL: https://www.lesswrong.com/posts/KerENNLyiqQ5ew7Kz/the-first-ai-safety-camp-and-onwards
Influencers: --
Notes: The actual donation probably happened sometime between February and June 2018.