AI Impacts donations received

This is an online portal with information on donations that were publicly announced (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United States
Website | https://aiimpacts.org/
Donate page | https://aiimpacts.org/donate/
Donors list page | https://aiimpacts.org/donate/
Open Philanthropy Project grant review | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support
Org Watch page | https://orgwatch.issarice.com/?organization=AI+Impacts
Key people | Katja Grace
Launch date | 2014

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 6 | 32,000 | 41,657 | 5,000 | 5,000 | 24,632 | 24,632 | 32,000 | 32,000 | 39,000 | 49,310 | 49,310 | 100,000 | 100,000
-- | 1 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000
AI safety | 5 | 39,000 | 48,988 | 24,632 | 24,632 | 24,632 | 32,000 | 32,000 | 39,000 | 39,000 | 49,310 | 49,310 | 100,000 | 100,000
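
The percentile rows above appear to use a nearest-rank convention: each percentile is an actual donation amount rather than an interpolated value (e.g. the 50th percentile of the six overall donations is 32,000, the lower of the two middle values, not the interpolated 35,500). Below is a minimal sketch that reproduces the table under that assumption; the function and variable names are illustrative, not from the portal's codebase, and the amounts are copied from the donation list further down this page.

```python
import math

def nearest_rank_percentile(amounts, p):
    """Return the p-th percentile of `amounts` using the nearest-rank
    method: the value at 1-indexed rank ceil(p/100 * n) in sorted order."""
    ordered = sorted(amounts)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# The six donations to AI Impacts recorded on this page (current USD).
donations = [5000.00, 24632.22, 32000.00, 39000.00, 49310.00, 100000.00]

print(len(donations))                          # Count: 6
print(sum(donations) / len(donations))         # Mean: 41657.04 (shown as 41,657)
print(nearest_rank_percentile(donations, 50))  # Median: 32000.0
for p in range(10, 100, 10):                   # 10th through 90th percentiles
    print(p, nearest_rank_percentile(donations, p))
```

Run against the five AI safety donations (i.e. excluding the 5,000 donor lottery donation, whose cause area is blank), the same sketch reproduces the "AI safety" row, including the 39,000 median and 48,988 mean.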

Donation amounts by donor and year for donee AI Impacts

Donor | Total | 2018 | 2017 | 2016 | 2015
Open Philanthropy Project | 132,000.00 | 100,000.00 | 0.00 | 32,000.00 | 0.00
Future of Life Institute | 49,310.00 | 0.00 | 0.00 | 0.00 | 49,310.00
Anonymous | 39,000.00 | 39,000.00 | 0.00 | 0.00 | 0.00
Effective Altruism Grants | 24,632.22 | 0.00 | 24,632.22 | 0.00 | 0.00
Donor lottery | 5,000.00 | 5,000.00 | 0.00 | 0.00 | 0.00
Total | 249,942.22 | 144,000.00 | 24,632.22 | 32,000.00 | 49,310.00
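
The cross-tabulation above can be reproduced from the donation list at the bottom of this page by summing amounts per donor per year and sorting donors by total, descending. Here is a minimal sketch under those assumptions; the data literals are copied from this page (years taken from the donation dates), and the code is illustrative rather than the portal's actual implementation.

```python
from collections import defaultdict

# (donor, year, amount) triples from the "Full list of donations" below.
donations = [
    ("Donor lottery", 2018, 5000.00),
    ("Anonymous", 2018, 39000.00),
    ("Open Philanthropy Project", 2018, 100000.00),
    ("Effective Altruism Grants", 2017, 24632.22),
    ("Open Philanthropy Project", 2016, 32000.00),
    ("Future of Life Institute", 2015, 49310.00),
]

# Sum amounts per donor per year.
totals = defaultdict(lambda: defaultdict(float))
for donor, year, amount in donations:
    totals[donor][year] += amount

# Print one row per donor, largest total first, matching the table layout.
for donor, by_year in sorted(totals.items(), key=lambda kv: -sum(kv[1].values())):
    row_total = sum(by_year.values())
    cells = " | ".join(f"{by_year.get(y, 0):,.2f}" for y in (2018, 2017, 2016, 2015))
    print(f"{donor} | {row_total:,.2f} | {cells}")
```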

Full list of documents in reverse chronological order (9 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
2018 AI Alignment Literature Review and Charity Comparison | 2018-12-17 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs. far safety research, other existential risks, financial reserves, donation matching, poor-quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
2017 Donor Lottery Report | 2018-11-12 | Adam Gleave | Effective Altruism Forum | Donor lottery | Alliance to Feed the Earth in Disasters, Global Catastrophic Risk Institute, AI Impacts, Wild-Animal Suffering Research | Single donation documentation | -- | A write-up that documents Adam Gleave’s decision process for where he donated the money for the 2017 donor lottery. (Adam won one of the two blocks of $100,000 for 2017.)
Occasional update July 5 2018 | 2018-07-05 | Katja Grace | AI Impacts | Open Philanthropy Project, Anonymous | AI Impacts | Donee periodic update | AI safety | Katja Grace gives an update on the situation at AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, it mentions a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation, as well as team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn.
2017 AI Safety Literature Review and Charity Comparison | 2017-12-20 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, AI Impacts, Center for Human-Compatible AI, Center for Applied Rationality, Future of Life Institute, 80,000 Hours | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
2016 AI Risk Literature Review and Charity Comparison | 2016-12-13 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, OpenAI, Center for Human-Compatible AI, Future of Life Institute, Centre for the Study of Existential Risk, Leverhulme Centre for the Future of Intelligence, Global Catastrophic Risk Institute, Global Priorities Project, AI Impacts, Xrisks Institute, X-Risks Net, Center for Applied Rationality, 80,000 Hours, Raising for Effective Giving | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part but notes the absence of comparable information on the many other organizations. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
Peter McCluskey's favorite charities | 2015-12-06 | Peter McCluskey | -- | Peter McCluskey | Center for Applied Rationality, Future of Humanity Institute, AI Impacts, GiveWell, GiveWell top charities, Future of Life Institute, Centre for Effective Altruism, Brain Preservation Foundation, Multidisciplinary Association for Psychedelic Studies, Electronic Frontier Foundation, Methuselah Mouse Prize, SENS Research Foundation, Foresight Institute | Evaluator consolidated recommendation list | -- | The page discusses the favorite charities of Peter McCluskey and his opinion on their current room for more funding in light of their financial situations and expansion plans.
Recently at AI Impacts | 2015-11-24 | Katja Grace | AI Impacts | -- | AI Impacts | Donee periodic update | AI safety | Katja Grace blogs with an update on new hires (Stephanie Zolayvar and John Salvatier) and new projects: the AI progress survey, AI researcher interviews, and bounty submissions.
Supporting AI Impacts | 2015-05-21 | Katja Grace | AI Impacts | -- | AI Impacts | Donee donation case | AI safety | The blog post announces that AI Impacts now has a donations page at http://aiimpacts.org/donate/
The AI Impacts Blog | 2015-01-09 | Katja Grace | AI Impacts | -- | AI Impacts | Launch | AI safety | The blog post announces the launch of the AI Impacts website and new blog, with Katja Grace and Paul Christiano as its authors. The post is also referenced by the Machine Intelligence Research Institute (MIRI), the nonprofit that de facto fiscally sponsors AI Impacts, at https://intelligence.org/2015/01/11/improved-ai-impacts-website/

Full list of donations in reverse chronological order (6 donations)

Donor | Amount (current USD) | Amount rank (out of 6) | Donation date | Cause area | URL | Influencer | Notes
Donor lottery | 5,000.00 | 6 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report | Adam Gleave | The blog post explaining the donation contains extensive discussion of AI Impacts. Highlight: "I have found Katja's output in the past to be insightful, so I am excited at ensuring she remains funded. Tegan has less of a track record but based on the output so far I believe she is also worth funding. However, I believe AI Impacts has adequate funding for both of their current employees. Additional contributions would therefore do a combination of increasing their runway and supporting new hires. I am pessimistic about AI Impacts room for growth. This is primarily as I view recruitment in this area being difficult. The ideal candidate would be a cross between an OpenPhil research analyst and a technical AI or strategy researcher. This is a rare skill set with high opportunity cost. Moreover, AI Impacts has had issues with employee retention, with many individuals that have previously worked leaving for other organisations." In terms of the prioritization relative to other grantees: "I ranked GCRI above AI Impacts as AI Impacts core staff are adequately funded, and I am sceptical of their ability to recruit additional qualified staff members. I would favour AI Impacts over GCRI if they had qualified candidates they wanted to hire but were bottlenecked on funding. However, my hunch is that in such a situation they would be able to readily raise funding, although it may be that having an adequate funding reserve would substantially simplify recruitment. [...] If I had an additional $100k to donate, I would first check AI Impacts current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there." Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Anonymous | 39,000.00 | 3 | 2018-07-05 | AI safety | https://aiimpacts.org/occasional-update-july-5-2018/ | -- | To support several projects from AI Impacts’s list of promising research projects: https://aiimpacts.org/promising-research-projects/.
Open Philanthropy Project | 100,000.00 | 1 | 2018-06 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 | Daniel Dewey | Discretionary grant via the Machine Intelligence Research Institute. AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence. Renewal of December 2016 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support. Announced: 2018-06-28.
Effective Altruism Grants | 24,632.22 | 5 | 2017-09-29 | AI safety | https://docs.google.com/spreadsheets/d/1iBy--zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk | -- | Empirical research on AI forecasting with AI Impacts. This grant will cover an additional part-time employee. See http://effective-altruism.com/ea/1fc/effective_altruism_grants_project_update/ for more context about the grant program.
Open Philanthropy Project | 32,000.00 | 4 | 2016-12 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support | -- | Grant for work on strategic questions related to potential risks from advanced artificial intelligence. Announced: 2017-02-02.
Future of Life Institute | 49,310.00 | 2 | 2015-09-01 | AI safety | https://futureoflife.org/AI/2015awardees#Grace | -- | A project grant. Project title: Katja Grace.