AI Impacts donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

- Basic donee information
- Donee donation statistics
- Donation amounts by donor and year for donee AI Impacts
- Full list of documents in reverse chronological order (13 documents)
- Full list of donations in reverse chronological order (21 donations)

Basic donee information

Item | Value
Country | United States
Website | https://aiimpacts.org/
Donate page | https://aiimpacts.org/donate/
Donors list page | https://aiimpacts.org/donate/
Open Philanthropy Project grant review | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support
Org Watch page | https://orgwatch.issarice.com/?organization=AI+Impacts
Key people | Katja Grace
Launch date | 2014

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 21 | 70,000 | 118,802 | 5,000 | 20,000 | 30,000 | 39,000 | 49,310 | 70,000 | 82,000 | 150,000 | 179,000 | 250,000 | 546,000
AI safety | 20 | 70,000 | 124,492 | 5,000 | 20,000 | 30,000 | 39,000 | 49,310 | 70,000 | 82,000 | 150,000 | 179,000 | 250,000 | 546,000
-- | 1 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000
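
The table above can be recomputed from the individual donation amounts in the full donation list further down the page. Below is a minimal Python sketch (not part of the site's tooling) for the "Overall" row; it assumes a "lower order statistic" percentile convention, which matches the published figures.

```python
# Minimal sketch (assumed, not the site's actual tooling): recompute the
# "Overall" row of the summary table from the 21 donation amounts listed in
# the "Full list of donations" section below. The published percentiles
# appear to follow a "lower order statistic" convention, hence method="lower"
# (numpy >= 1.22).
import numpy as np

amounts = np.array([
    162_000, 179_000, 150_000, 546_000, 364_893, 250_000, 221_000,
    82_000, 75_000, 50_000, 40_000, 20_000, 30_000, 70_000, 5_000,
    39_000, 100_000, 24_632.22, 32_000, 5_000, 49_310,
])

print("Count:  ", len(amounts))                    # 21
print("Mean:   ", round(float(amounts.mean())))    # 118,802
print("Minimum:", amounts.min(), "Maximum:", amounts.max())
for p in (10, 20, 30, 40, 50, 60, 70, 80, 90):
    print(f"{p}th percentile:", np.percentile(amounts, p, method="lower"))
```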

Donation amounts by donor and year for donee AI Impacts

Donor | Total | 2023 | 2022 | 2021 | 2020 | 2019 | 2018 | 2017 | 2016 | 2015
Jaan Tallinn | 1,021,000.00 | 179,000.00 | 546,000.00 | 221,000.00 | 40,000.00 | 30,000.00 | 0.00 | 0.00 | 0.00 | 5,000.00
Open Philanthropy | 696,893.00 | 150,000.00 | 364,893.00 | 0.00 | 50,000.00 | 0.00 | 100,000.00 | 0.00 | 32,000.00 | 0.00
FTX Future Fund | 250,000.00 | 0.00 | 250,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Future of Life Institute | 211,310.00 | 162,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 49,310.00
Jed McCaleb | 102,000.00 | 0.00 | 0.00 | 82,000.00 | 20,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Effective Altruism Funds: Long-Term Future Fund | 75,000.00 | 0.00 | 0.00 | 0.00 | 75,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
Survival and Flourishing Fund | 70,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 70,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Anonymous | 39,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 39,000.00 | 0.00 | 0.00 | 0.00
Effective Altruism Grants | 24,632.22 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 24,632.22 | 0.00 | 0.00
Donor lottery | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00
Total | 2,494,835.22 | 491,000.00 | 1,160,893.00 | 303,000.00 | 185,000.00 | 100,000.00 | 144,000.00 | 24,632.22 | 32,000.00 | 54,310.00
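
As a quick arithmetic cross-check on the table above (illustrative only, using the per-donor totals as printed), a few lines of Python confirm that the per-donor totals sum to the reported grand total:

```python
# Illustrative cross-check: the per-donor totals in the table above should
# sum to the reported grand total of 2,494,835.22 (current USD).
donor_totals = {
    "Jaan Tallinn": 1_021_000.00,
    "Open Philanthropy": 696_893.00,
    "FTX Future Fund": 250_000.00,
    "Future of Life Institute": 211_310.00,
    "Jed McCaleb": 102_000.00,
    "Effective Altruism Funds: Long-Term Future Fund": 75_000.00,
    "Survival and Flourishing Fund": 70_000.00,
    "Anonymous": 39_000.00,
    "Effective Altruism Grants": 24_632.22,
    "Donor lottery": 5_000.00,
}
assert round(sum(donor_totals.values()), 2) == 2_494_835.22
```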

Full list of documents in reverse chronological order (13 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
Future Fund June 2022 Update | 2022-06-30 | Nick Beckstead, Leopold Aschenbrenner, Avital Balwit, William MacAskill, Ketan Ramakrishnan | FTX Future Fund | FTX Future Fund | Manifold Markets, ML Safety Scholars Program, Andi Peng, Braden Leach, Thomas Kwa, SecureBio, Ray Amjad, Apollo Academic Surveys, Justin Mares, Longview Philanthropy, Atlas Fellowship, Effective Ideas Blog Prize, Ought, Swift Centre for Applied Forecasting, Federation for American Scientists, Public Editor Project, Quantified Uncertainty Research Institute, Moncef Slaoui, AI Impacts, EA Critiques and Red Teaming Prize | -- | Broad donor strategy | Longtermism, AI safety, Biosecurity and pandemic preparedness, Effective altruism | This lengthy blog post, cross-posted at https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update (GW, IR) to the Effective Altruism Forum, goes into detail regarding the grantmaking of the FTX Future Fund so far, and the lessons learned from this grantmaking. The post reports having made 262 grants and investments, with $132 million in total spend. Three funding models are in use: regranting ($31 million so far), open call ($26 million so far), and staff-led grantmaking ($73 million so far).
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Survival and Flourishing Fund, FTX Future Fund | Future of Humanity Institute, Centre for the Governance of AI, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Google Deepmind, Anthropic, Alignment Research Center, Redwood Research, Ought, AI Impacts, Global Priorities Institute, Center on Long-Term Risk, Centre for Long-Term Resilience, Rethink Priorities, Convergence Analysis, Stanford Existential Risk Initiative, Effective Altruism Funds: Long-Term Future Fund, Berkeley Existential Risk Initiative, 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, which is where the organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Berkeley Existential Risk Initiative, Ought, Global Priorities Institute, Center on Long-Term Risk, Center for Security and Emerging Technology, AI Impacts, Leverhulme Centre for the Future of Intelligence, AI Safety Camp, Future of Life Institute, Convergence Analysis, Median Group, AI Pulse, 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis | -- | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
2017 Donor Lottery Report (GW, IR) | 2018-11-12 | Adam Gleave | Effective Altruism Forum | Donor lottery | Alliance to Feed the Earth in Disasters, Global Catastrophic Risk Institute, AI Impacts, Wild-Animal Suffering Research | -- | Single donation documentation | Global catastrophic risks, AI safety, Animal welfare | The write-up documents Adam Gleave’s decision process for where he donated the money for the 2017 donor lottery. Adam won one of the two blocks of $100,000 for 2017.
Occasional update July 5 2018 | 2018-07-05 | Katja Grace | AI Impacts | Open Philanthropy, Anonymous | AI Impacts | -- | Donee periodic update | AI safety | Katja Grace gives an update on the situation with AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation are mentioned, as are team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing member Michael Wulfsohn.
2017 AI Safety Literature Review and Charity Comparison (GW, IR) | 2017-12-20 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute, Future of Humanity Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, AI Impacts, Center for Human-Compatible AI, Center for Applied Rationality, Future of Life Institute, 80,000 Hours | -- | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR), a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
2016 AI Risk Literature Review and Charity Comparison (GW, IR) | 2016-12-13 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute, Future of Humanity Institute, OpenAI, Center for Human-Compatible AI, Future of Life Institute, Centre for the Study of Existential Risk, Leverhulme Centre for the Future of Intelligence, Global Catastrophic Risk Institute, Global Priorities Project, AI Impacts, Xrisks Institute, X-Risks Net, Center for Applied Rationality, 80,000 Hours, Raising for Effective Giving | -- | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other organizations. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
Peter McCluskey's favorite charities | 2015-12-06 | Peter McCluskey | -- | Peter McCluskey | Center for Applied Rationality, Future of Humanity Institute, AI Impacts, GiveWell, GiveWell top charities, Future of Life Institute, Centre for Effective Altruism, Brain Preservation Foundation, Multidisciplinary Association for Psychedelic Studies, Electronic Frontier Foundation, Methuselah Mouse Prize, SENS Research Foundation, Foresight Institute | -- | Evaluator consolidated recommendation list | -- | The page discusses the favorite charities of Peter McCluskey and his opinion on their current room for more funding in light of their financial situation and expansion plans.
Recently at AI Impacts | 2015-11-24 | Katja Grace | AI Impacts | -- | AI Impacts | -- | Donee periodic update | AI safety | Katja Grace blogs with an update on new hires (Stephanie Zolayvar and John Salvatier) and new projects: the AI progress survey, AI researcher interviews, and bounty submissions.
Supporting AI Impacts | 2015-05-21 | Katja Grace | AI Impacts | -- | AI Impacts | -- | Donee donation case | AI safety | The blog post announces that AI Impacts now has a donations page at http://aiimpacts.org/donate/
The AI Impacts Blog | 2015-01-09 | Katja Grace | AI Impacts | -- | AI Impacts | -- | Launch | AI safety | The blog post announces the launch of the AI Impacts website and new blog, with Katja Grace and Paul Christiano as its authors. The post is also referenced by the Machine Intelligence Research Institute (MIRI), the nonprofit that de facto fiscally sponsors AI Impacts, at https://intelligence.org/2015/01/11/improved-ai-impacts-website/

Full list of donations in reverse chronological order (21 donations)

[Graph: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations]

Donor | Amount (current USD) | Amount rank (out of 21) | Donation date | Cause area | URL | Influencer | Notes
Future of Life Institute | 162,000.00 | 6 | 2023-10 | AI safety/strategy | https://survivalandflourishing.fund/sff-2023-h2-recommendations | Survival and Flourishing Fund, Nathan Labenz, Michael Page | Donation process: Part of the Survival and Flourishing Fund's 2023 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios. In each simulation, Recommenders specify a marginal value function for funding each application, and an algorithm calculates a table of grant recommendations by taking turns distributing funding recommendations from each Recommender in succession, using their marginal value functions to prioritize. The Recommenders then discuss their evaluations and update the simulation with their new opinions, using approval voting to prioritize discussion topics, until the end of the last meeting when their inputs are finalized. Similarly, funders specify and adjust different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." (An illustrative sketch of this style of allocation appears after the full donation list at the end of this page.)

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's tenth grant round and the fifth with a grant to this grantee.

Other notes: A grant recommendation to the same grantee of $179,000 is also made for funder Jaan Tallinn as part of the same SFF round.
Jaan Tallinn | 179,000.00 | 5 | 2023-10 | AI safety/strategy | https://survivalandflourishing.fund/sff-2023-h2-recommendations | Survival and Flourishing Fund, Nathan Labenz, Michael Page | Donation process: Part of the Survival and Flourishing Fund's 2023 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios. In each simulation, Recommenders specify a marginal value function for funding each application, and an algorithm calculates a table of grant recommendations by taking turns distributing funding recommendations from each Recommender in succession, using their marginal value functions to prioritize. The Recommenders then discuss their evaluations and update the simulation with their new opinions, using approval voting to prioritize discussion topics, until the end of the last meeting when their inputs are finalized. Similarly, funders specify and adjust different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's tenth grant round and the fifth with a grant to this grantee.

Other notes: This includes a speculation grant of $72,000; so the additional granted amount is $107,000. The grant is made completely via SFF rather than via Lightspeed Grants, unlike other grants in this round. A grant recommendation to the same grantee of $162,000 is also made for funder Future of Life Institute as part of the same SFF round.
Open Philanthropy | 150,000.00 | 7 | 2023-08 | AI safety/strategy | https://www.openphilanthropy.org/grants/ai-impacts-expert-survey-on-progress-in-ai/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support an expert survey on progress in artificial intelligence. AI Impacts works to answer questions about the future of artificial intelligence."

Other notes: AI Impacts previously did expert surveys on the state of AI, including https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/ in 2016 and (a rerun) https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/ in 2022. This survey is likely a followup/rerun of those surveys.
Jaan Tallinn | 546,000.00 | 1 | 2022-12-06 | AI safety/strategy | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Nick Hay, Alyssa Vance, Scott Garrabrant | Donation process: Part of the Survival and Flourishing Fund's 2022 H2 grants https://survivalandflourishing.fund/sff-2022-h2-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal value functions. Recommenders specified a marginal value function for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's eighth grant round and the fourth with a grant to this grantee.

Donor retrospective of the donation: The followup grant recommendations for AI Impacts in https://survivalandflourishing.fund/sff-2023-h2-recommendations (about a year later) suggest continued satisfaction with the grant outcome.
Open Philanthropy | 364,893.00 | 2 | 2022-06 | AI safety/strategy | https://www.openphilanthropy.org/grants/ai-impacts-general-support/ | -- | Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. AI Impacts works on strategic questions related to advanced artificial intelligence."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/ai-impacts-expert-survey-on-progress-in-ai/ suggests continued satisfaction with the grantee.
FTX Future Fund | 250,000.00 | 3 | 2022-06 | AI safety/forecasting | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support rerunning the highly-cited survey “When Will AI Exceed Human Performance? Evidence from AI Experts” from 2016, analysis, and publication of results."
Jaan Tallinn | 221,000.00 | 4 | 2021-07-23 | AI safety | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants https://survivalandflourishing.fund/sff-2021-h1-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round. Grants to AI Impacts had been made in the second and third grant rounds.

Donor retrospective of the donation: The followup grant recommendation for AI Impacts in the 2022 H2 round https://survivalandflourishing.fund/sff-2022-h2-recommendations suggests continued satisfaction with the grant outcome.

Other notes: The grant round also includes a grant from Jed McCaleb ($82,000) to the same grantee (AI Impacts). Percentage of total donor spend in the corresponding batch of donations: 2.32%.
Jed McCaleb | 82,000.00 | 9 | 2021-04 | AI safety | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round. Grants to AI Impacts had been made in the second and third grant rounds.

Other notes: The grant round also includes a grant from Jaan Tallinn ($221,000) to the same grantee. Percentage of total donor spend in the corresponding batch of donations: 33.74%.
Open Philanthropy | 50,000.00 | 12 | 2020-11 | AI safety/strategy | https://www.openphilanthropy.org/grants/ai-impacts-general-support-2020/ | Tom Davidson, Ajeya Cotra | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewal in 2022 https://www.openphilanthropy.org/grants/ai-impacts-general-support/ (for a much larger amount) suggests continued satisfaction with the grantee.
Effective Altruism Funds: Long-Term Future Fund | 75,000.00 | 10 | 2020-09-03 | AI safety | https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants#center-for-human-compatible-ai-75000 | Adam Gleave, Oliver Habryka, Asya Bergal, Matt Wage, Helen Toner | Donation process: Donee submitted grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Organizational general support

Intended use of funds: Grant for "answering decision-relevant questions about the future of artificial intelligence."

Donor reason for selecting the donee: Grant investigator and main influencer Adam Gleave writes: "Their work has and continues to influence my outlook on how and when advanced AI will develop, and I often see researchers I collaborate with cite their work in conversations. [...] Overall, I would be excited to see more research into better understanding how AI will develop in the future. This research can help funders to decide which projects to support (and when), and researchers to select an impactful research agenda. We are pleased to support AI Impacts' work in this space, and hope this research field will continue to grow."

Donor reason for donating that amount (rather than a bigger or smaller amount): Grant investigator and main influencer Adam Gleave writes: "We awarded a grant of $75,000, approximately one fifth of the AI Impacts budget. We do not expect sharply diminishing returns, so it is likely that at the margin, additional funding to AI Impacts would continue to be valuable. When funding established organizations, we often try to contribute a "fair share" of organizations' budgets based on the Fund's overall share of the funding landscape. This aids coordination with other donors and encourages organizations to obtain funding from diverse sources (which reduces the risk of financial issues if one source becomes unavailable)."
Percentage of total donor spend in the corresponding batch of donations: 19.02%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Intended funding timeframe in months: 12

Other notes: The grant page says: "(Recusal note: Due to working as a contractor for AI Impacts, Asya Bergal recused herself from the discussion and voting surrounding this grant.)" The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the AI Impacts grant.
Jaan Tallinn | 40,000.00 | 14 | 2020-06-12 | AI safety | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Alex Zhu, Andrew Critch, Jed McCaleb, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; grants to AI Impacts had also been made in the second round in 2019 Q4.

Other notes: The grant round also includes a grant from Jed McCaleb ($20,000) to the same grantee (AI Impacts). Although the Survival and Flourishing Fund also participates as a funder in the round, it had no direct grants to AI Impacts in the round. Percentage of total donor spend in the corresponding batch of donations: 4.35%.
Jed McCaleb | 20,000.00 | 19 | 2020-04 | AI safety | https://survivalandflourishing.fund/sff-2020-h1-recommendations | Survival and Flourishing Fund, Alex Zhu, Andrew Critch, Jed McCaleb, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; grants to AI Impacts had also been made in the second round in 2019 Q4.

Other notes: The grant round also includes a grant from Jaan Tallinn ($40,000) to the same grantee (AI Impacts). Although the Survival and Flourishing Fund also participates as a funder in the round, it has no direct grants to AI Impacts. Percentage of total donor spend in the corresponding batch of donations: 8.00%.
Jaan Tallinn | 30,000.00 | 17 | 2019-12-04 | AI safety | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Alex Flint, Alex Zhu, Andrew Critch, Eric Rogstad, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2019 Q4 grants https://survivalandflourishing.fund/sff-2019-q4-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different Recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this November 2019 round of grants is SFF's second round.

Donor retrospective of the donation: Continued grants (such as https://survivalandflourishing.fund/sff-2020-h1-recommendations in 2020 H1) suggest continued satisfaction with the grantee.

Other notes: The grant round also includes a grant from the Survival and Flourishing Fund ($70,000) to the same grantee (AI Impacts). Percentage of total donor spend in the corresponding batch of donations: 2.76%; announced: 2019-12-15.
Survival and Flourishing Fund | 70,000.00 | 11 | 2019-12-04 | AI safety/strategy | https://jaan.online/philanthropy/donations.html | Alex Flint, Alex Zhu, Andrew Critch, Eric Rogstad, Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2019 Q4 grants https://survivalandflourishing.fund/sff-2019-q4-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different Recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this November 2019 round of grants is SFF's second round.

Other notes: The grant round also includes a grant from Jaan Tallinn ($30,000) to the same grantee (AI Impacts). Percentage of total donor spend in the corresponding batch of donations: 7.61%; announced: 2019-12-15.
Donor lottery | 5,000.00 | 20 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report (GW, IR) | Adam Gleave | The blog post explaining the donation contains extensive discussion of AI Impacts. Highlight: "I have found Katja's output in the past to be insightful, so I am excited at ensuring she remains funded. Tegan has less of a track record but based on the output so far I believe she is also worth funding. However, I believe AI Impacts has adequate funding for both of their current employees. Additional contributions would therefore do a combination of increasing their runway and supporting new hires. I am pessimistic about AI Impacts room for growth. This is primarily as I view recruitment in this area being difficult. The ideal candidate would be a cross between an OpenPhil research analyst and a technical AI or strategy researcher. This is a rare skill set with high opportunity cost. Moreover, AI Impacts has had issues with employee retention, with many individuals that have previously worked leaving for other organisations." In terms of the prioritization relative to other grantees: "I ranked GCRI above AI Impacts as AI Impacts core staff are adequately funded, and I am sceptical of their ability to recruit additional qualified staff members. I would favour AI Impacts over GCRI if they had qualified candidates they wanted to hire but were bottlenecked on funding. However, my hunch is that in such a situation they would be able to readily raise funding, although it may be that having an adequate funding reserve would substantially simplify recruitment. [...] If I had an additional $100k to donate, I would first check AI Impacts current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there." Percentage of total donor spend in the corresponding batch of donations: 5.00%.
Anonymous | 39,000.00 | 15 | 2018-07-05 | AI safety | https://aiimpacts.org/occasional-update-july-5-2018/ | -- | To support several projects from AI Impacts’s list of promising research projects https://aiimpacts.org/promising-research-projects/.
Open Philanthropy | 100,000.00 | 8 | 2018-06 | AI safety/strategy | https://www.openphilanthropy.org/grants/ai-impacts-general-support-2018/ | Daniel Dewey | Donation process: Discretionary grant

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewals in 2020 https://www.openphilanthropy.org/grants/ai-impacts-general-support-2020/ and 2022 https://www.openphilanthropy.org/grants/ai-impacts-general-support/ suggest continued satisfaction with the grantee, though the amount of the 2020 renewal grant is lower (just $50,000).

Other notes: The grant is via the Machine Intelligence Research Institute. Announced: 2018-06-27.
Effective Altruism Grants | 24,632.22 | 18 | 2017-09-29 | AI safety | https://docs.google.com/spreadsheets/d/1iBy--zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk | -- | Empirical research on AI forecasting with AI Impacts. This grant will cover an additional part-time employee. See http://effective-altruism.com/ea/1fc/effective_altruism_grants_project_update/ for more context about the grant program. Currency info: donation given as 18,385.00 GBP (conversion done on 2017-09-29 via Bloomberg).
Open Philanthropy | 32,000.00 | 16 | 2016-12 | AI safety/strategy | https://www.openphilanthropy.org/grants/ai-impacts-general-support-2016/ | -- | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewals in 2018 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 and 2020 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020 suggest continued satisfaction with the grantee.

Other notes: Announced: 2017-02-02.
Jaan Tallinn | 5,000.00 | 20 | 2015-10-15 | AI safety/strategy | https://jaan.online/philanthropy/donations.html | -- | Intended use of funds (category): Organizational general support
Future of Life Institute | 49,310.00 | 13 | 2015-09-01 | AI safety | https://futureoflife.org/AI/2015awardees#Grace | -- | A project grant. Project title: Katja Grace.
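
Several of the grant notes above quote the Survival and Flourishing Fund's description of its S-process, in which Recommenders specify marginal value functions for each application and an algorithm distributes funding in turns according to those functions. The following is a rough illustrative sketch of that style of allocation, with made-up organizations, budgets, and value functions; it is not SFF's actual implementation.

```python
# Rough illustrative sketch of an S-process-style allocation (NOT SFF's
# actual code). Each recommender has a marginal value function per
# application; funding is handed out in fixed increments, with recommenders
# taking turns directing each increment to whichever application currently
# has the highest (positive) marginal value to them.
from typing import Callable, Dict, List

MarginalValue = Callable[[float], float]  # value of the next dollar, given funding so far

def allocate(recommenders: List[Dict[str, MarginalValue]],
             budgets: List[float],
             step: float = 1_000.0) -> Dict[str, float]:
    grants: Dict[str, float] = {}
    remaining = list(budgets)
    progressed = True
    while progressed:
        progressed = False
        for i, value_fns in enumerate(recommenders):
            if remaining[i] < step:
                continue
            # Best application for this recommender at current funding levels.
            best = max(value_fns, key=lambda a: value_fns[a](grants.get(a, 0.0)))
            if value_fns[best](grants.get(best, 0.0)) <= 0:
                continue  # nothing left that this recommender values
            grants[best] = grants.get(best, 0.0) + step
            remaining[i] -= step
            progressed = True
    return grants

# Hypothetical example: two recommenders with diminishing marginal value.
def diminishing(scale: float) -> MarginalValue:
    return lambda funded: scale - funded

print(allocate(
    recommenders=[{"Org A": diminishing(150_000), "Org B": diminishing(100_000)},
                  {"Org A": diminishing(120_000), "Org B": diminishing(180_000)}],
    budgets=[100_000, 100_000],
))
```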