AI Impacts donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly or shared with permission. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2023. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.


Basic donee information

Item | Value
Country | United States
Website | https://aiimpacts.org/
Donate page | https://aiimpacts.org/donate/
Donors list page | https://aiimpacts.org/donate/
Open Philanthropy Project grant review | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support
Org Watch page | https://orgwatch.issarice.com/?organization=AI+Impacts
Key people | Katja Grace
Launch date | 2014

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 14 | 40,000 | 59,853 | 5,000 | 20,000 | 24,632 | 32,000 | 39,000 | 40,000 | 50,000 | 70,000 | 82,000 | 100,000 | 221,000
-- | 1 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000 | 5,000
AI safety | 13 | 49,310 | 64,072 | 20,000 | 24,632 | 30,000 | 32,000 | 40,000 | 49,310 | 50,000 | 75,000 | 82,000 | 100,000 | 221,000
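The percentile columns in this table are consistent with a nearest-rank method (the smallest value whose rank is at least ⌈p/100 · n⌉). A minimal sketch, assuming that method (the site does not document which percentile definition it uses):

```python
import math

def nearest_rank_percentile(amounts, p):
    """Nearest-rank percentile: smallest value whose rank is at least
    ceil(p/100 * n). This definition reproduces the table above; the
    exact method used by the site is an assumption."""
    xs = sorted(amounts)
    k = max(1, math.ceil(p / 100 * len(xs)))
    return xs[k - 1]

# The 14 donation amounts from the full donation list below.
amounts = [221000, 82000, 50000, 75000, 40000, 20000, 70000,
           30000, 5000, 39000, 100000, 24632.22, 32000, 49310]

print(nearest_rank_percentile(amounts, 50))  # 40000, as in the "Overall" row
print(nearest_rank_percentile(amounts, 90))  # 100000
```

Note that under this definition the 50th percentile (40,000) differs from the usual interpolated median of an even-sized list, which matches the "Median" column shown above.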

Donation amounts by donor and year for donee AI Impacts

Donor Total 2021 2020 2019 2018 2017 2016 2015
Jaan Tallinn 291,000.00 221,000.00 40,000.00 30,000.00 0.00 0.00 0.00 0.00
Open Philanthropy 182,000.00 0.00 50,000.00 0.00 100,000.00 0.00 32,000.00 0.00
Jed McCaleb 102,000.00 82,000.00 20,000.00 0.00 0.00 0.00 0.00 0.00
Effective Altruism Funds: Long-Term Future Fund 75,000.00 0.00 75,000.00 0.00 0.00 0.00 0.00 0.00
Survival and Flourishing Fund 70,000.00 0.00 0.00 70,000.00 0.00 0.00 0.00 0.00
Future of Life Institute 49,310.00 0.00 0.00 0.00 0.00 0.00 0.00 49,310.00
Anonymous 39,000.00 0.00 0.00 0.00 39,000.00 0.00 0.00 0.00
Effective Altruism Grants 24,632.22 0.00 0.00 0.00 0.00 24,632.22 0.00 0.00
Donor lottery 5,000.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00
Total 837,942.22 303,000.00 185,000.00 100,000.00 144,000.00 24,632.22 32,000.00 49,310.00
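As a sanity check, the donor-total column and the per-year total row of this table should both sum to the same grand total; a minimal sketch using the figures above:

```python
# Donor totals (first column of the table above).
donor_totals = {
    "Jaan Tallinn": 291000.00,
    "Open Philanthropy": 182000.00,
    "Jed McCaleb": 102000.00,
    "Effective Altruism Funds: Long-Term Future Fund": 75000.00,
    "Survival and Flourishing Fund": 70000.00,
    "Future of Life Institute": 49310.00,
    "Anonymous": 39000.00,
    "Effective Altruism Grants": 24632.22,
    "Donor lottery": 5000.00,
}

# Per-year totals (bottom row of the table above).
year_totals = {2021: 303000.00, 2020: 185000.00, 2019: 100000.00,
               2018: 144000.00, 2017: 24632.22, 2016: 32000.00, 2015: 49310.00}

# Both margins reconcile to the same grand total of 837,942.22.
grand_total = round(sum(donor_totals.values()), 2)
assert grand_total == round(sum(year_totals.values()), 2) == 837942.22
```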

Full list of documents in reverse chronological order (12 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Ben Hoskin | Effective Altruism Forum | Affected donors: Ben Hoskin; Effective Altruism Funds: Long-Term Future Fund; Survival and Flourishing Fund; FTX Foundation | Affected donees: Future of Humanity Institute; Centre for the Governance of AI; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Google Deepmind; Anthropic; Alignment Research Center; Redwood Research; Ought; AI Impacts; Global Priorities Institute; Center on Long-Term Risk; Centre for Long-Term Resilience; Rethink Priorities; Convergence Analysis; Stanford Existential Risk Initiative; Effective Altruism Funds: Long-Term Future Fund; Berkeley Existential Risk Initiative; 80,000 Hours | Affected influencers: Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to: where each organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Ben Hoskin | Effective Altruism Forum | Affected donors: Ben Hoskin; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund | Affected donees: Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Berkeley Existential Risk Initiative; Ought; Global Priorities Institute; Center on Long-Term Risk; Center for Security and Emerging Technology; AI Impacts; Leverhulme Centre for the Future of Intelligence; AI Safety Camp; Future of Life Institute; Convergence Analysis; Median Group; AI Pulse; 80,000 Hours | Affected influencers: Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Ben Hoskin | Effective Altruism Forum | Affected donors: Ben Hoskin; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund | Affected donees: Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; Ought; OpenAI; AI Safety Camp; Future of Life Institute; AI Impacts; Global Priorities Institute; Foundational Research Institute; Median Group; Center for Security and Emerging Technology; Leverhulme Centre for the Future of Intelligence; Berkeley Existential Risk Initiative; AI Pulse | Affected influencers: Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Ben Hoskin | Effective Altruism Forum | Affected donors: Ben Hoskin | Affected donees: Machine Intelligence Research Institute; Future of Humanity Institute; Center for Human-Compatible AI; Centre for the Study of Existential Risk; Global Catastrophic Risk Institute; Global Priorities Institute; Australian National University; Berkeley Existential Risk Initiative; Ought; AI Impacts; OpenAI; Effective Altruism Foundation; Foundational Research Institute; Median Group; Convergence Analysis | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR) The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
2017 Donor Lottery Report (GW, IR) | 2018-11-12 | Adam Gleave | Effective Altruism Forum | Affected donors: Donor lottery | Affected donees: Alliance to Feed the Earth in Disasters; Global Catastrophic Risk Institute; AI Impacts; Wild-Animal Suffering Research | Document scope: Single donation documentation | Cause area: Global catastrophic risks, AI safety, animal welfare | Notes: The write-up documents Adam Gleave's decision process for where to donate the money from the 2017 donor lottery. Adam won one of the two blocks of $100,000 for 2017.
Occasional update July 5 2018 | 2018-07-05 | Katja Grace | AI Impacts | Affected donors: Open Philanthropy; Anonymous | Affected donees: AI Impacts | Document scope: Donee periodic update | Cause area: AI safety | Notes: Katja Grace gives an update on the situation at AI Impacts, including recent funding received, personnel changes, and recent publicity. In particular, she mentions a $100,000 donation from the Open Philanthropy Project and a $39,000 anonymous donation, as well as team members Tegan McCaslin and Justis Mills, consultant Carl Shulman, and departing team member Michael Wulfsohn.
2017 AI Safety Literature Review and Charity Comparison (GW, IR) | 2017-12-20 | Ben Hoskin | Effective Altruism Forum | Affected donors: Ben Hoskin | Affected donees: Machine Intelligence Research Institute; Future of Humanity Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; AI Impacts; Center for Human-Compatible AI; Center for Applied Rationality; Future of Life Institute; 80,000 Hours | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
2016 AI Risk Literature Review and Charity Comparison (GW, IR) | 2016-12-13 | Ben Hoskin | Effective Altruism Forum | Affected donors: Ben Hoskin | Affected donees: Machine Intelligence Research Institute; Future of Humanity Institute; OpenAI; Center for Human-Compatible AI; Future of Life Institute; Centre for the Study of Existential Risk; Leverhulme Centre for the Future of Intelligence; Global Catastrophic Risk Institute; Global Priorities Project; AI Impacts; Xrisks Institute; X-Risks Net; Center for Applied Rationality; 80,000 Hours; Raising for Effective Giving | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part but notes the absence of comparable information on the many other organizations. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
Peter McCluskey's favorite charities | 2015-12-06 | Peter McCluskey | Affected donors: Peter McCluskey | Affected donees: Center for Applied Rationality; Future of Humanity Institute; AI Impacts; GiveWell; GiveWell top charities; Future of Life Institute; Centre for Effective Altruism; Brain Preservation Foundation; Multidisciplinary Association for Psychedelic Studies; Electronic Frontier Foundation; Methuselah Mouse Prize; SENS Research Foundation; Foresight Institute | Document scope: Evaluator consolidated recommendation list | Notes: The page discusses the favorite charities of Peter McCluskey and his opinion on their current room for more funding in light of their financial situation and expansion plans.
Recently at AI Impacts | 2015-11-24 | Katja Grace | AI Impacts | Affected donees: AI Impacts | Document scope: Donee periodic update | Cause area: AI safety | Notes: Katja Grace blogs an update on new hires (Stephanie Zolayvar and John Salvatier) and new projects: the AI progress survey, AI researcher interviews, and bounty submissions.
Supporting AI Impacts | 2015-05-21 | Katja Grace | AI Impacts | Affected donees: AI Impacts | Document scope: Donee donation case | Cause area: AI safety | Notes: The blog post announces that AI Impacts now has a donations page at http://aiimpacts.org/donate/
The AI Impacts Blog | 2015-01-09 | Katja Grace | AI Impacts | Affected donees: AI Impacts | Document scope: Launch | Cause area: AI safety | Notes: The blog post announces the launch of the AI Impacts website and new blog, with Katja Grace and Paul Christiano as its authors. The post is also referenced by the Machine Intelligence Research Institute (MIRI), the nonprofit that de facto fiscally sponsors AI Impacts, at https://intelligence.org/2015/01/11/improved-ai-impacts-website/

Full list of donations in reverse chronological order (14 donations)

[Graph omitted: top 10 donors by amount, showing the timeframe of donations]
Donor | Amount (current USD) | Amount rank (out of 14) | Donation date | Cause area | URL | Influencer | Notes
Jaan Tallinn | 221,000.00 | 1 | 2021-04 | AI safety | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Influencers: Survival and Flourishing Fund; Ben Hoskin; Katja Grace; Oliver Habryka; Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round. Grants to AI Impacts had been made in the second and third grant rounds.

Other notes: The grant round also includes a grant from Jed McCaleb ($82,000) to the same grantee (AI Impacts). Percentage of total donor spend in the corresponding batch of donations: 2.32%.
Jed McCaleb | 82,000.00 | 3 | 2021-04 | AI safety | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Influencers: Survival and Flourishing Fund; Ben Hoskin; Katja Grace; Oliver Habryka; Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round. Grants to AI Impacts had been made in the second and third grant rounds.

Other notes: The grant round also includes a grant from Jaan Tallinn ($221,000) to the same grantee. Percentage of total donor spend in the corresponding batch of donations: 33.74%.
Open Philanthropy | 50,000.00 | 6 | 2020-11 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020 | Influencers: Tom Davidson; Ajeya Cotra | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."
Effective Altruism Funds: Long-Term Future Fund | 75,000.00 | 4 | 2020-09-03 | AI safety | https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants#center-for-human-compatible-ai-75000 | Influencers: Adam Gleave; Oliver Habryka; Asya Bergal; Matt Wage; Helen Toner | Donation process: Donee submitted a grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Organizational general support

Intended use of funds: Grant for "answering decision-relevant questions about the future of artificial intelligence."

Donor reason for selecting the donee: Grant investigator and main influencer Adam Gleave writes: "Their work has and continues to influence my outlook on how and when advanced AI will develop, and I often see researchers I collaborate with cite their work in conversations. [...] Overall, I would be excited to see more research into better understanding how AI will develop in the future. This research can help funders to decide which projects to support (and when), and researchers to select an impactful research agenda. We are pleased to support AI Impacts' work in this space, and hope this research field will continue to grow."

Donor reason for donating that amount (rather than a bigger or smaller amount): Grant investigator and main influencer Adam Gleave writes: "We awarded a grant of $75,000, approximately one fifth of the AI Impacts budget. We do not expect sharply diminishing returns, so it is likely that at the margin, additional funding to AI Impacts would continue to be valuable. When funding established organizations, we often try to contribute a "fair share" of organizations' budgets based on the Fund's overall share of the funding landscape. This aids coordination with other donors and encourages organizations to obtain funding from diverse sources (which reduces the risk of financial issues if one source becomes unavailable)."
Percentage of total donor spend in the corresponding batch of donations: 19.02%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Intended funding timeframe in months: 12

Other notes: The grant page says: "(Recusal note: Due to working as a contractor for AI Impacts, Asya Bergal recused herself from the discussion and voting surrounding this grant.)" The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the AI Impacts grant.
Jaan Tallinn | 40,000.00 | 8 | 2020-06-12 | AI safety | https://jaan.online/philanthropy/donations.html | Influencers: Survival and Flourishing Fund; Alex Zhu; Andrew Critch; Jed McCaleb; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; grants to AI Impacts had also been made in the second round in 2019 Q4.

Other notes: The grant round also includes a grant from Jed McCaleb ($20,000) to the same grantee (AI Impacts). Although the Survival and Flourishing Fund also participates as a funder in the round, it had no direct grants to AI Impacts in the round. Percentage of total donor spend in the corresponding batch of donations: 4.35%.
Jed McCaleb | 20,000.00 | 13 | 2020-04 | AI safety | https://survivalandflourishing.fund/sff-2020-h1-recommendations | Influencers: Survival and Flourishing Fund; Alex Zhu; Andrew Critch; Jed McCaleb; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; grants to AI Impacts had also been made in the second round in 2019 Q4.

Other notes: The grant round also includes a grant from Jaan Tallinn ($40,000) to the same grantee (AI Impacts). Although the Survival and Flourishing Fund also participates as a funder in the round, it had no direct grants to AI Impacts in the round. Percentage of total donor spend in the corresponding batch of donations: 8.00%.
Survival and Flourishing Fund | 70,000.00 | 5 | 2019-12-04 | AI safety | https://jaan.online/philanthropy/donations.html | Influencers: Alex Flint; Alex Zhu; Andrew Critch; Eric Rogstad; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2019 Q4 grants https://survivalandflourishing.fund/sff-2019-q4-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different Recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this November 2019 round of grants is SFF's second round.

Other notes: The grant round also includes a grant from Jaan Tallinn ($30,000) to the same grantee (AI Impacts). Percentage of total donor spend in the corresponding batch of donations: 7.61%; announced: 2019-12-15.
Jaan Tallinn | 30,000.00 | 11 | 2019-12-04 | AI safety | https://jaan.online/philanthropy/donations.html | Influencers: Survival and Flourishing Fund; Alex Flint; Alex Zhu; Andrew Critch; Eric Rogstad; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2019 Q4 grants https://survivalandflourishing.fund/sff-2019-q4-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different Recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this November 2019 round of grants is SFF's second round.

Donor retrospective of the donation: Continued grants (such as https://survivalandflourishing.fund/sff-2020-h1-recommendations in 2020 H1) suggest continued satisfaction with the grantee.

Other notes: The grant round also includes a grant from the Survival and Flourishing Fund ($70,000) to the same grantee (AI Impacts). Percentage of total donor spend in the corresponding batch of donations: 2.76%; announced: 2019-12-15.
Donor lottery | 5,000.00 | 14 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report (GW, IR) | Influencer: Adam Gleave | Notes: The blog post explaining the donation contains extensive discussion of AI Impacts. Highlight: "I have found Katja's output in the past to be insightful, so I am excited at ensuring she remains funded. Tegan has less of a track record but based on the output so far I believe she is also worth funding. However, I believe AI Impacts has adequate funding for both of their current employees. Additional contributions would therefore do a combination of increasing their runway and supporting new hires. I am pessimistic about AI Impacts room for growth. This is primarily as I view recruitment in this area being difficult. The ideal candidate would be a cross between an OpenPhil research analyst and a technical AI or strategy researcher. This is a rare skill set with high opportunity cost. Moreover, AI Impacts has had issues with employee retention, with many individuals that have previously worked leaving for other organisations." In terms of the prioritization relative to other grantees: "I ranked GCRI above AI Impacts as AI Impacts core staff are adequately funded, and I am sceptical of their ability to recruit additional qualified staff members. I would favour AI Impacts over GCRI if they had qualified candidates they wanted to hire but were bottlenecked on funding. However, my hunch is that in such a situation they would be able to readily raise funding, although it may be that having an adequate funding reserve would substantially simplify recruitment. [...] If I had an additional $100k to donate, I would first check AI Impacts current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there." Percentage of total donor spend in the corresponding batch of donations: 5.00%.
Anonymous | 39,000.00 | 9 | 2018-07-05 | AI safety | https://aiimpacts.org/occasional-update-july-5-2018/ | -- | Notes: To support several projects from AI Impacts' list of promising research projects: https://aiimpacts.org/promising-research-projects/.
Open Philanthropy | 100,000.00 | 2 | 2018-06 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 | Influencer: Daniel Dewey | Donation process: Discretionary grant

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: The renewal in 2020 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020 suggests continued satisfaction with the grantee, though the amount of the renewal grant is lower (just $50,000).

Other notes: The grant is via the Machine Intelligence Research Institute. Announced: 2018-06-27.
Effective Altruism Grants | 24,632.22 | 12 | 2017-09-29 | AI safety | https://docs.google.com/spreadsheets/d/1iBy--zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk | -- | Notes: Empirical research on AI forecasting with AI Impacts. This grant will cover an additional part-time employee. See http://effective-altruism.com/ea/1fc/effective_altruism_grants_project_update/ for more context about the grant program. Currency info: donation given as 18,385.00 GBP (conversion done on 2017-09-29 via Bloomberg).
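The GBP→USD rate used for this entry is not stated explicitly, but it can be recovered from the two recorded amounts; a minimal sketch (the rate below is derived from the figures above, not quoted from Bloomberg):

```python
gbp = 18385.00   # amount as originally given
usd = 24632.22   # amount as recorded in this database

# Implied conversion rate on 2017-09-29.
implied_rate = usd / gbp
print(round(implied_rate, 4))  # 1.3398 USD per GBP

# Converting back reproduces the recorded USD amount to the cent.
assert abs(gbp * implied_rate - usd) < 0.01
```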
Open Philanthropy | 32,000.00 | 10 | 2016-12 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support | -- | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewals in 2018 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2018 and 2020 https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support-2020 suggest continued satisfaction with the grantee.

Other notes: Announced: 2017-02-02.
Future of Life Institute | 49,310.00 | 7 | 2015-09-01 | AI safety | https://futureoflife.org/AI/2015awardees#Grace | -- | Notes: A project grant; the awardee listed on the FLI page is Katja Grace.