Center for Human-Compatible AI donations received

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donee information
Donee donation statistics
Donation amounts by donor and year for donee Center for Human-Compatible AI
Full list of documents in reverse chronological order (11 documents)
Full list of donations in reverse chronological order (7 donations)

Basic donee information

Item | Value
Country | United States
Website | https://humancompatible.ai/
Donate page | http://humancompatible.ai/get-involved#supporter
Wikipedia page | https://en.wikipedia.org/wiki/Center_for_Human-Compatible_Artificial_Intelligence
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Center_for_Human-Compatible_AI
Org Watch page | https://orgwatch.issarice.com/?organization=Center+for+Human-Compatible+AI
Key people | Stuart Russell, Bart Selman, Michael Wellman, Andrew Critch
Launch date | 2016-08
Notes | Received a $5.5 million grant from the Open Philanthropy Project at the time of founding, with Open Philanthropy estimating a 50% probability that it would be a respected AI-related organization in two years

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 7 | 200,000 | 2,578,971 | 20,000 | 20,000 | 48,000 | 75,000 | 75,000 | 200,000 | 799,000 | 799,000 | 5,555,550 | 11,355,246 | 11,355,246
AI safety | 7 | 200,000 | 2,578,971 | 20,000 | 20,000 | 48,000 | 75,000 | 75,000 | 200,000 | 799,000 | 799,000 | 5,555,550 | 11,355,246 | 11,355,246
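
For reference, the summary statistics above can be reproduced from the seven donation amounts listed further down this page. The site's exact percentile convention is not stated; the following is a minimal Python sketch, assuming a nearest-rank percentile convention, which matches the figures shown in the table:

```python
import math

# The seven CHAI donations listed on this page, in current USD.
amounts = sorted([20000, 48000, 75000, 200000, 799000, 5555550, 11355246])

def nearest_rank_percentile(sorted_values, p):
    """Smallest value such that at least p% of the data is at or below it."""
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))  # 1-indexed rank
    return sorted_values[rank - 1]

print("count  :", len(amounts))                               # 7
print("mean   :", round(sum(amounts) / len(amounts)))         # 2578971
print("median :", nearest_rank_percentile(amounts, 50))       # 200000
print("minimum:", amounts[0])                                 # 20000
for p in range(10, 100, 10):
    print(f"{p}th percentile:", nearest_rank_percentile(amounts, p))
print("maximum:", amounts[-1])                                # 11355246
```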

Donation amounts by donor and year for donee Center for Human-Compatible AI

Donor | Total | 2021 | 2020 | 2019 | 2016
Open Philanthropy | 17,110,796.00 | 11,355,246.00 | 0.00 | 200,000.00 | 5,555,550.00
Jaan Tallinn | 819,000.00 | 819,000.00 | 0.00 | 0.00 | 0.00
Effective Altruism Funds: Long-Term Future Fund | 123,000.00 | 48,000.00 | 75,000.00 | 0.00 | 0.00
Total | 18,052,796.00 | 12,222,246.00 | 75,000.00 | 200,000.00 | 5,555,550.00

Full list of documents in reverse chronological order (11 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors/donees/influencers | Document scope | Cause area | Notes
(My understanding of) What Everyone in Technical Alignment is Doing and Why (GW, IR) | 2022-08-28 | Thomas Larsen, Eli | LessWrong | Fund for Alignment Research Aligned AI Alignment Research Center Anthropic Center for AI Safety Center for Human-Compatible AI Center on Long-Term Risk Conjecture DeepMind Encultured Future of Humanity Institute Machine Intelligence Research Institute OpenAI Ought Redwood Research | Review of current state of cause area | AI safety | This post, cross-posted between LessWrong and the Alignment Forum, goes into detail on the authors' understanding of various research agendas and the organizations pursuing them.
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund Future of Humanity Institute Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to: where each organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Britain’s youngest self-made billionaire is giving away his fortune — to people who don’t exist yet. The case for charity to benefit the far future. | 2019-05-29 | Kelsey Piper | Vox | Ben Delo Center for Human-Compatible AI Forethought Foundation for Global Priorities Research | Broad donor strategy | Global catastrophic risks|Effective altruism | Kelsey Piper reports that Ben Delo, co-founder of cryptocurrency trading platform BitMEX, has signed the Giving Pledge, and will focus his giving on the long term. Piper includes arguments in favor of Delo's choice of focus, and describes the role the effective altruism movement played in influencing him.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Larks | Effective Altruism Forum | Larks Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR) The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017 | 2017-12-21 | Holden Karnofsky | Open Philanthropy | Jaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform | Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
2017 AI Safety Literature Review and Charity Comparison (GW, IR) | 2017-12-20 | Larks | Effective Altruism Forum | Larks Machine Intelligence Research Institute Future of Humanity Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk AI Impacts Center for Human-Compatible AI Center for Applied Rationality Future of Life Institute 80,000 Hours | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
AI: a Reason to Worry, and to Donate | 2017-12-10 | Jacob Falkovich | Jacob Falkovich | Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds | Single donation documentation | AI safety | Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
Changes in funding in the AI safety field | 2017-02-01 | Sebastian Farquhar | Centre for Effective Altruism | Machine Intelligence Research Institute Center for Human-Compatible AI Leverhulme Centre for the Future of Intelligence Future of Life Institute Future of Humanity Institute OpenAI MIT Media Lab | Review of current state of cause area | AI safety | The post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/
2016 AI Risk Literature Review and Charity Comparison (GW, IR) | 2016-12-13 | Larks | Effective Altruism Forum | Larks Machine Intelligence Research Institute Future of Humanity Institute OpenAI Center for Human-Compatible AI Future of Life Institute Centre for the Study of Existential Risk Leverhulme Centre for the Future of Intelligence Global Catastrophic Risk Institute Global Priorities Project AI Impacts Xrisks Institute X-Risks Net Center for Applied Rationality 80,000 Hours Raising for Effective Giving | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."

Full list of donations in reverse chronological order (7 donations)

Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations

Donor | Amount (current USD) | Amount rank (out of 7) | Donation date | Cause area | URL | Influencer | Notes
Effective Altruism Funds: Long-Term Future Fund | 48,000.00 | 6 | 2021-04-01 | AI safety | https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants | Evan Hubinger, Oliver Habryka, Asya Bergal, Adam Gleave, Daniel Eth, Ozzie Gooen | Donation process: Donee submitted grant application through the application form for the April 2021 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "hiring research engineers to support CHAI’s technical research projects." "This grant is to support Cody Wild and Steven Wang in their work assisting CHAI as research engineers, funded through BERI."

Donor reason for selecting the donee: Grant investigator and main influencer Evan Hubinger writes: "Overall, I have a very high opinion of CHAI’s ability to produce good alignment researchers—Rohin Shah, Adam Gleave, Daniel Filan, Michael Dennis, etc.—and I think it would be very unfortunate if those researchers had to spend a lot of their time doing non-alignment-relevant engineering work. Thus, I think there is a very strong case for making high-quality research engineers available to help CHAI students run ML experiments. [...] both Cody and Steven have already been working with CHAI doing exactly this sort of work; when we spoke to Adam Gleave early in the evaluation process, he seems to have found their work to be positive and quite helpful. Thus, the risk of this grant hurting rather than helping CHAI researchers seems very minimal, and the case for it seems quite strong overall, given our general excitement about CHAI."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; a grant of $75,000 for a similar purpose was made to the grantee in the September 2020 round, so the timing is likely partly determined by the need to renew funding for the people (Cody Wild and Steven Wang) funded through the previous grant.

Other notes: The grant page says: "Adam Gleave [one of the fund managers] did not participate in the voting or final discussion around this grant." The EA Forum post https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 5.15%.
Jaan Tallinn | 20,000.00 | 7 | 2021-03-12 | AI safety | https://jaan.online/philanthropy/donations.html | -- | Donation process: Although most of Jaan Tallinn's public grantmaking during this period is through the Survival and Flourishing Fund's process (with https://survivalandflourishing.fund/ having the details), this particular grant is not made through the SFF process. However, it is made shortly after a much larger 2020 H2 grant through the SFF process, so it may simply be a top-up of that grant.

Intended use of funds (category): Organizational general support
Jaan Tallinn | 799,000.00 | 3 | 2021-01-12 | AI safety | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund, Oliver Habryka, Eric Rogstad | Donation process: Part of the Survival and Flourishing Fund's 2020 H2 grants https://survivalandflourishing.fund/sff-2020-h2-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." (An illustrative toy sketch of this kind of allocation appears below, after this donation's notes.)

Intended use of funds (category): Organizational general support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount recommended by the S-process is $779,000, but the actual grant amount is $799,000 ($20,000 higher).

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fourth grant round and the first with grants to this grantee.

Other notes: Although the Survival and Flourishing Fund and Jed McCaleb also participate in this grant round as funders, neither of them makes any grants to this grantee.
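
The S-process is described above only at a high level. The following is a minimal toy sketch of the general idea, assuming a greedy allocation of a funder's budget guided by recommender-supplied marginal utility functions and funder-specified deference weights; all recommender names, applications, utility functions, weights, and amounts below are hypothetical illustrations, not SFF's actual data or implementation.

```python
# Toy sketch of the S-process idea quoted above (hypothetical; not SFF's actual
# implementation). Recommenders supply marginal utility functions for funding each
# application; the funder supplies deference weights over recommenders; the budget
# is then allocated step by step to the application whose next dollar has the
# highest combined marginal utility. All names and numbers are made up.

STEP = 1_000  # allocate in $1,000 increments

# recommender -> application -> marginal utility of the next dollar,
# given the amount already granted to that application
marginal_utility = {
    "Recommender 1": {
        "Application A": lambda granted: 10.0 / (1 + granted / 100_000),
        "Application B": lambda granted: 4.0 / (1 + granted / 200_000),
    },
    "Recommender 2": {
        "Application A": lambda granted: 6.0 / (1 + granted / 150_000),
        "Application B": lambda granted: 8.0 / (1 + granted / 250_000),
    },
}

# Funder's hypothetical weights for deferring to each recommender.
deference = {"Recommender 1": 0.6, "Recommender 2": 0.4}

def combined_marginal_utility(application, granted):
    """Deference-weighted sum of the recommenders' marginal utilities."""
    return sum(deference[r] * marginal_utility[r][application](granted)
               for r in marginal_utility)

def allocate(budget):
    grants = {"Application A": 0, "Application B": 0}
    while budget >= STEP:
        best = max(grants, key=lambda a: combined_marginal_utility(a, grants[a]))
        grants[best] += STEP
        budget -= STEP
    return grants

print(allocate(1_000_000))  # split of a hypothetical $1,000,000 budget across the two applications
```
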
Open Philanthropy | 11,355,246.00 | 1 | 2021-01 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2021 | Nick Beckstead | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "The multi-year commitment and increased funding will enable CHAI to expand its research and student training related to potential risks from advanced artificial intelligence."

Other notes: This is a renewal of the original founding grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai made in August 2016. Intended funding timeframe in months: 60.
Effective Altruism Funds: Long-Term Future Fund | 75,000.00 | 5 | 2020-09-03 | AI safety | https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants#center-for-human-compatible-ai-75000 | Oliver Habryka, Adam Gleave, Asya Bergal, Matt Wage, Helen Toner | Donation process: Donee submitted grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support "hiring a research engineer to support CHAI’s technical research projects."

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka gives these reasons for the grant: "Over the last few years, CHAI has hosted a number of people who I think have contributed at a very high quality level to the AI alignment problem, most prominently Rohin Shah [...] I've also found engaging with Andrew Critch's thinking on AI alignment quite valuable, and I am hopeful about more work from Stuart Russell [...] the specific project that CHAI is requesting money for seems also quite reasonable to me. [...] it seems quite important for them to be able to run engineering-heavy machine learning projects, for which it makes sense to hire research engineers to assist with the associated programming tasks. The reports we've received from students at CHAI also suggest that past engineer hiring has been valuable and has enabled students at CHAI to do substantially better work."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Oliver Habryka writes: "Having thought more recently about CHAI as an organization and its place in the ecosystem of AI alignment, I am currently uncertain about its long-term impact and where it is going, and I eventually plan to spend more time thinking about the future of CHAI. So I think it's not that unlikely (~20%) that I might change my mind on the level of positive impact I'd expect from future grants like this. However, I think this holds less for the other Fund members who were also in favor of this grant, so I don't think my uncertainty is much evidence about how LTFF will think about future grants to CHAI."

Donor retrospective of the donation: A later grant round https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants includes a $48,000 grant from the LTFF to CHAI for a similar purpose, suggesting continued satisfaction and a continued positive assessment of the grantee.

Other notes: Adam Gleave, though on the grantmaking team, recused himself from discussions around this grant since he is a Ph.D. student at CHAI. Grant investigator and main influencer Oliver Habryka includes a few concerns: "Rohin is leaving CHAI soon, and I'm unsure about CHAI's future impact, since Rohin made up a large fraction of the impact of CHAI in my mind. [...] I also maintain a relatively high level of skepticism about research that tries to embed itself too closely within the existing ML research paradigm. [...] A concrete example of the problems I have seen (chosen for its simplicity more than its importance) is that, on several occasions, I've spoken to authors who, during the publication and peer-review process, wound up having to remove some of their papers' most important contributions to AI alignment. [...] Another concern: Most of the impact that Rohin contributed seemed to be driven more by distillation and field-building work than by novel research. [...] I believe distillation and field-building to be particularly neglected and valuable at the margin. I don't currently see the rest of CHAI engaging in that work in the same way." The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 19.02%.
Open Philanthropy | 200,000.00 | 4 | 2019-11 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 | Daniel Dewey | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "CHAI plans to use these funds to support graduate student and postdoc research."

Other notes: Open Phil makes a $705,000 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 to the Berkeley Existential Risk Initiative (BERI) at the same time (November 2019) to collaborate with CHAI. Intended funding timeframe in months: 24; announced: 2019-12-20.
Open Philanthropy | 5,555,550.00 | 2 | 2016-08 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai | -- | Donation process: The grant page section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process says: "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."

Intended use of funds (category): Organizational general support

Intended use of funds: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding says: "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year." The funding from Open Phil will be used toward this budget. An earlier section of the grant page says that the Center's research topics will include value alignment, value functions defined by partially observable and partially defined terms, the structure of human value systems, and conceptual questions including the properties of ideal value systems.

Donor reason for selecting the donee: The grant page gives these reasons: (1) "We expect the existence of the Center to make it much easier for researchers interested in exploring AI safety to discuss and learn about the topic, and potentially consider focusing their careers on it." (2) "The Center may allow researchers already focused on AI safety to dedicate more of their time to the topic and produce higher-quality research." (3) "We hope that the existence of a well-funded academic center at a major university will solidify the place of this work as part of the larger fields of machine learning and artificial intelligence." Also, counterfactual impact: "Professor Russell would not plan to announce a new Center of this kind without substantial additional funding. [...] We are not aware of other potential [substantial] funders, and we believe that having long-term support in place is likely to make it easier for Professor Russell to recruit for the Center."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is based on budget estimates in https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Budget_and_room_for_more_funding "Professor Russell estimates that the Center could, if funded fully, spend between $1.5 million and $2 million in its first year and later increase its budget to roughly $7 million per year."

Donor reason for donating at this time (rather than earlier or later): Timing seems to have been determined by the time it took to work out the details of the new center after Open Phil decided to make AI safety a major priority in 2016. According to https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai#Our_process "We have discussed the possibility of a grant to support Professor Russell’s work several times with him in the past. Following our decision earlier this year to make this focus area a major priority for 2016, we began to discuss supporting a new academic center at UC Berkeley in more concrete terms."
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 in November 2019, the five-year renewal https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2021 in January 2021, as well as many grants to the Berkeley Existential Risk Initiative (BERI) to collaborate with the grantee, suggest that Open Phil continues to think highly of the grantee and stands by its original reasoning.

Other notes: Note that the grant recipient in the Open Phil database has been listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.