Centre for the Governance of AI donations received

This is an online portal with information on donations that were announced publicly (or that have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

We do not have any donee information for Centre for the Governance of AI in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 5 | 450,000 | 811,466 | 19,200 | 19,200 | 19,200 | 50,532 | 50,532 | 450,000 | 450,000 | 1,000,000 | 1,000,000 | 2,537,600 | 2,537,600
AI safety | 5 | 450,000 | 811,466 | 19,200 | 19,200 | 19,200 | 50,532 | 50,532 | 450,000 | 450,000 | 1,000,000 | 1,000,000 | 2,537,600 | 2,537,600
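
These statistics can be reproduced from the five donation amounts listed further down this page. The following is a minimal illustrative sketch, assuming a nearest-rank ("inverted CDF") percentile definition; with only five data points this assumption reproduces the row above, but the portal's own computation method is not documented here.

```python
import math
import statistics

# The five donation amounts (current USD) from the donation list further down this page.
amounts = sorted([19_200, 50_532, 450_000, 1_000_000, 2_537_600])

def nearest_rank_percentile(sorted_values, p):
    """Nearest-rank percentile: the smallest value such that at least p percent
    of the data is less than or equal to it (an assumed method, not confirmed)."""
    k = max(1, math.ceil(p * len(sorted_values) / 100))
    return sorted_values[k - 1]

print("Count:  ", len(amounts))                          # 5
print("Mean:   ", round(statistics.mean(amounts)))       # 811466
print("Median: ", statistics.median(amounts))            # 450000
print("Minimum:", amounts[0], " Maximum:", amounts[-1])  # 19200  2537600
for p in range(10, 100, 10):
    print(f"{p}th percentile:", nearest_rank_percentile(amounts, p))
```

Note that with only five donations, the 10th and 20th percentiles coincide with the minimum and the 90th percentile coincides with the maximum under this definition.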

Donation amounts by donor and year for donee Centre for the Governance of AI

Donor | Total | 2023 | 2022 | 2021 | 2020
Open Philanthropy | 4,057,332.00 | 1,000,000.00 | 69,732.00 | 2,537,600.00 | 450,000.00
Total | 4,057,332.00 | 1,000,000.00 | 69,732.00 | 2,537,600.00 | 450,000.00
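
As a cross-check, the per-year figures above can be recomputed from the five individual donations listed later on this page; a minimal sketch (dates and amounts copied from the donation list):

```python
from collections import defaultdict

# (donation date, amount in current USD) for the five donations listed further down this page.
donations = [
    ("2023-05", 1_000_000.00),
    ("2022-09", 50_532.00),
    ("2022-09", 19_200.00),
    ("2021-12", 2_537_600.00),
    ("2020-05", 450_000.00),
]

totals_by_year = defaultdict(float)
for date, amount in donations:
    totals_by_year[date[:4]] += amount

for year in sorted(totals_by_year, reverse=True):
    print(year, f"{totals_by_year[year]:,.2f}")
print("Total", f"{sum(totals_by_year.values()):,.2f}")  # 4,057,332.00
```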

Full list of documents in reverse chronological order (2 documents)

Title (URL linked): 2021 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2021-12-23
Author: Larks
Publisher: Effective Altruism Forum
Affected donors: Larks; Effective Altruism Funds: Long-Term Future Fund; Survival and Flourishing Fund; FTX Future Fund; Future of Humanity Institute
Affected donees: Future of Humanity Institute; Centre for the Governance of AI; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Google Deepmind; Anthropic; Alignment Research Center; Redwood Research; Ought; AI Impacts; Global Priorities Institute; Center on Long-Term Risk; Centre for Long-Term Resilience; Rethink Priorities; Convergence Analysis; Stanford Existential Risk Initiative; Effective Altruism Funds: Long-Term Future Fund; Berkeley Existential Risk Initiative; 80,000 Hours; Survival and Flourishing Fund
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to: where each organization would like to see funds go, other than to itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.

Title (URL linked): Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) (GW, IR)
Publication date: 2021-12-14
Author: Zvi Mowshowitz
Publisher: LessWrong
Affected donors: Survival and Flourishing Fund; Jaan Tallinn; Jed McCaleb; The Casey and Family Foundation
Affected donees: Effective Altruism Funds: Long-Term Future Fund; Center on Long-Term Risk; Alliance to Feed the Earth in Disasters; The Centre for Long-Term Resilience; Lightcone Infrastructure; Effective Altruism Funds: Infrastructure Fund; Centre for the Governance of AI; Ought; New Science Research; Berkeley Existential Risk Initiative; AI Objectives Institute; Topos Institute; Emergent Ventures India; European Biostasis Foundation; Laboratory for Social Minds; PrivateARPA; Charter Cities Institute; Survival and Flourishing Fund
Affected influencers: Beth Barnes; Oliver Habryka; Zvi Mowshowitz
Document scope: Miscellaneous commentary
Cause area: Longtermism|AI safety|Global catastrophic risks
Notes: In this lengthy post, Zvi Mowshowitz, who was one of the recommenders for the Survival and Flourishing Fund's 2021 H2 grant round based on the S-process, describes his experience with the process, his impressions of several of the grantees, and implications for what kinds of grant applications are most likely to succeed. Zvi says that the grant round suffered from the problem of Too Much Money (TMM): there was far more money than any individual recommender felt comfortable granting, and just about enough money for the combined preferences of all recommenders, which meant that any recommender could unilaterally push a particular grantee through. The post has several other observations and attracts several comments.

Full list of donations in reverse chronological order (5 donations)

[Graph: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations]

Donor: Open Philanthropy
Amount (current USD): 1,000,000.00
Amount rank (out of 5): 2
Donation date: 2023-05
Cause area: AI safety/governance
URL: https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-general-support-2/
Influencer: --
Notes: Intended use of funds (category): Organizational general support

Intended use of funds: Grant "to the Centre for the Governance of AI (GovAI) for general support. GovAI conducts research on AI governance and works to develop a talent pipeline for those interested in entering the field."

Donor: Open Philanthropy
Amount (current USD): 50,532.00
Amount rank (out of 5): 4
Donation date: 2022-09
Cause area: AI safety/governance
URL: https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-compute-strategy-workshop/
Influencer: --
Notes: Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a workshop bringing together compute experts from several subfields, such as large-model infrastructure, ASIC design, and governance, to discuss compute governance ideas that could reduce existential risk from artificial intelligence."

Donor: Open Philanthropy
Amount (current USD): 19,200.00
Amount rank (out of 5): 5
Donation date: 2022-09
Cause area: AI safety/governance
URL: https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-research-assistant/
Influencer: --
Notes: Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a new research assistant."

Donor: Open Philanthropy
Amount (current USD): 2,537,600.00
Amount rank (out of 5): 1
Donation date: 2021-12
Cause area: AI safety/governance
URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-field-building
Influencer: Luke Muehlhauser
Notes: Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support activities related to building the field of AI governance research. GovAI intends to use this funding to conduct AI governance research and to develop a talent pipeline for those interested in entering the field."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-research-assistant/ and https://www.openphilanthropy.org/grants/centre-for-the-governance-of-ai-general-support-2/ suggest continued satisfaction with the grantee.

Other notes: Grant made via the Centre for Effective Altruism. Intended funding timeframe in months: 24.

Donor: Open Philanthropy
Amount (current USD): 450,000.00
Amount rank (out of 5): 3
Donation date: 2020-05
Cause area: AI safety/governance
URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-general-support
Influencer: Committee for Effective Altruism Support
Notes: Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "GovAI intends to use these funds to support the visit of two senior researchers and a postdoc researcher."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" but does not link to specific past writeups (Open Phil has not previously made grants directly to GovAI).

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support, but individual votes and reasoning are not public.

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/gov-ai-field-building (December 2021) suggests continued satisfaction with the grantee.

Other notes: Grant made via the Berkeley Existential Risk Initiative.