Center for Security and Emerging Technology donations received

This is an online portal with information on donations that were announced publicly (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations was seeded with an initial collation by Issa Rice, who continues to contribute (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.


Basic donee information

We do not have any donee information for Center for Security and Emerging Technology in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 4 | 8,000,000 | 26,312,500 | 3,330,000 | 3,330,000 | 3,330,000 | 8,000,000 | 8,000,000 | 8,000,000 | 38,920,000 | 38,920,000 | 55,000,000 | 55,000,000 | 55,000,000
Biosecurity and pandemic preparedness | 1 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000 | 3,330,000
AI safety | 2 | 8,000,000 | 23,460,000 | 8,000,000 | 8,000,000 | 8,000,000 | 8,000,000 | 8,000,000 | 8,000,000 | 38,920,000 | 38,920,000 | 38,920,000 | 38,920,000 | 38,920,000
Security | 1 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000 | 55,000,000
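
Every value in the table above can be recomputed from the four donation amounts listed further down this page. The portal does not state its percentile convention here, but a nearest-rank rule (take the value at 1-based index ceil(p/100 * n) of the sorted amounts) reproduces each entry, including the median of 8,000,000 in the "Overall" row. The sketch below is a minimal illustration under that assumption; the function and variable names are hypothetical and not taken from the site's codebase.

```python
import math

# The four donation amounts (current USD) from the full list of donations below.
donations = [3_330_000, 8_000_000, 38_920_000, 55_000_000]


def nearest_rank_percentile(sorted_values, p):
    """Return the p-th percentile using the nearest-rank convention.

    The value at 1-based index ceil(p/100 * n) of the sorted list; no interpolation.
    """
    n = len(sorted_values)
    rank = max(1, math.ceil(p / 100 * n))
    return sorted_values[rank - 1]


values = sorted(donations)
row = {
    "Count": len(values),
    "Median": nearest_rank_percentile(values, 50),  # 8,000,000
    "Mean": sum(values) / len(values),              # 26,312,500
    "Minimum": values[0],                           # 3,330,000
    "Maximum": values[-1],                          # 55,000,000
}
for p in range(10, 100, 10):
    row[f"{p}th percentile"] = nearest_rank_percentile(values, p)

print(row)
```

Running the same computation on the two AI safety donations (8,000,000 and 38,920,000) reproduces that row as well: the mean of 23,460,000 and the step from 8,000,000 at the 50th percentile to 38,920,000 at the 60th.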

Donation amounts by donor and year for donee Center for Security and Emerging Technology

Donor | Total | 2021 | 2019
Open Philanthropy | 105,250,000.00 | 50,250,000.00 | 55,000,000.00
Total | 105,250,000.00 | 50,250,000.00 | 55,000,000.00
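
These totals can be cross-checked against the four individual donations listed later on this page: the two August 2021 grants and the January 2021 grant sum to the 2021 figure, and the single January 2019 grant accounts for the 2019 figure. A minimal sketch of the aggregation follows (the tuple layout is illustrative, not the portal's actual data format):

```python
from collections import defaultdict

# (donor, year, amount in current USD) for the four donations listed below.
donations = [
    ("Open Philanthropy", 2021, 38_920_000),
    ("Open Philanthropy", 2021, 3_330_000),
    ("Open Philanthropy", 2021, 8_000_000),
    ("Open Philanthropy", 2019, 55_000_000),
]

# Sum donation amounts by year of donation.
totals_by_year = defaultdict(int)
for donor, year, amount in donations:
    totals_by_year[year] += amount

print(dict(totals_by_year))          # {2021: 50250000, 2019: 55000000}
print(sum(totals_by_year.values()))  # 105250000
```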

Full list of documents in reverse chronological order (4 documents)

Title (URL linked): 2020 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2020-12-21
Author: Larks
Publisher: Effective Altruism Forum
Affected donors: Larks|Effective Altruism Funds: Long-Term Future Fund|Open Philanthropy|Survival and Flourishing Fund
Affected donees: Future of Humanity Institute|Center for Human-Compatible AI|Machine Intelligence Research Institute|Global Catastrophic Risk Institute|Centre for the Study of Existential Risk|OpenAI|Berkeley Existential Risk Initiative|Ought|Global Priorities Institute|Center on Long-Term Risk|Center for Security and Emerging Technology|AI Impacts|Leverhulme Centre for the Future of Intelligence|AI Safety Camp|Future of Life Institute|Convergence Analysis|Median Group|AI Pulse|80,000 Hours
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.

Title (URL linked): 2019 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2019-12-19
Author: Larks
Publisher: Effective Altruism Forum
Affected donors: Larks|Effective Altruism Funds: Long-Term Future Fund|Open Philanthropy|Survival and Flourishing Fund
Affected donees: Future of Humanity Institute|Center for Human-Compatible AI|Machine Intelligence Research Institute|Global Catastrophic Risk Institute|Centre for the Study of Existential Risk|Ought|OpenAI|AI Safety Camp|Future of Life Institute|AI Impacts|Global Priorities Institute|Foundational Research Institute|Median Group|Center for Security and Emerging Technology|Leverhulme Centre for the Future of Intelligence|Berkeley Existential Risk Initiative|AI Pulse
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag and lists the hashtags at the beginning of the document.

Title (URL linked): Questions We Ask Ourselves Before Making a Grant
Publication date: 2019-08-06
Author: Michael Levine
Publisher: Open Philanthropy
Affected donors: Open Philanthropy|Sandler Foundation
Affected donees: Center for Security and Emerging Technology|University of Washington (Institute for Protein Design)
Document scope: Broad donor strategy
Notes: Michael Levine describes some guidance that the Open Philanthropy Project has put together for program officers on questions to consider before making a grant. This complements guidance published three years ago about internal grant writeups: https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process

Title (URL linked): Important But Neglected: Why an Effective Altruist Funder Is Giving Millions to AI Security
Publication date: 2019-03-20
Author: Tate Williams
Publisher: Inside Philanthropy
Affected donors: Open Philanthropy
Affected donees: Center for Security and Emerging Technology
Document scope: Third-party coverage of donor strategy
Cause area: AI safety|Biosecurity and pandemic preparedness|Global catastrophic risks|Security
Notes: The article focuses on grantmaking by the Open Philanthropy Project in the areas of global catastrophic risks and security, particularly in AI safety and biosecurity and pandemic preparedness. It includes quotes from Luke Muehlhauser, Senior Research Analyst at the Open Philanthropy Project and the investigator for the $55 million grant https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology to the Center for Security and Emerging Technology (CSET). Muehlhauser was previously Executive Director at the Machine Intelligence Research Institute. It also includes a quote from Holden Karnofsky, who sees the early interest of effective altruists in AI safety as prescient. The CSET grant is discussed in the context of the Open Philanthropy Project's hits-based giving approach, as well as the interest in the policy space in better understanding of safety and governance issues related to technology and AI.

Full list of donations in reverse chronological order (4 donations)

[Graph omitted: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations.]
Donor | Amount (current USD) | Amount rank (out of 4) | Donation date | Cause area | URL | Influencer | Notes
Open Philanthropy | 38,920,000.00 | 2 | 2021-08 | AI safety | https://www.openphilanthropy.org/grants/center-for-security-and-emerging-technology-general-support-august-2021/ | Luke Muehlhauser | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "CSET is a think tank, incubated by our January 2019 support, dedicated to policy analysis at the intersection of national and international security and emerging technologies. This funding is intended to augment our original support for CSET, particularly for its work on security and artificial intelligence."

Other notes: Intended funding timeframe in months: 36.
Open Philanthropy | 3,330,000.00 | 4 | 2021-08 | Biosecurity and pandemic preparedness | https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/center-security-and-emerging-technology-biosecurity-research | Andrew Snyder-Beattie | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a project investigating the extent and risks of dual-use research in the biosciences."

Donor reason for selecting the donee: The grant page says: "The hope is that the results of this project will better inform policymakers and other stakeholders of the security implications of such research."

Other notes: Intended funding timeframe in months: 36.
Open Philanthropy | 8,000,000.00 | 3 | 2021-01 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support | Luke Muehlhauser | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says "This funding is intended to augment our original support for CSET, particularly for its work on the intersection of security and artificial intelligence."

Donor retrospective of the donation: The follow-up grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support-august-2021 for a much larger amount suggests continued satisfaction with the grantee.
Open Philanthropy | 55,000,000.00 | 1 | 2019-01 | Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safety | https://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology | Luke Muehlhauser | Intended use of funds (category): Organizational general support

Intended use of funds: Grant via Georgetown University for the Center for Security and Emerging Technology (CSET), a new think tank led by Jason Matheny, formerly of IARPA, dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET plans to provide nonpartisan technical analysis and advice related to emerging technologies and their security implications to the government, key media outlets, and other stakeholders.

Donor reason for selecting the donee: Open Phil thinks that one of the key factors in whether AI is broadly beneficial for society is whether policymakers are well-informed and well-advised about the nature of AI’s potential benefits, potential risks, and how these relate to potential policy actions. As AI grows more powerful, calls for government to play a more active role are likely to increase, and government funding and regulation could affect the benefits and risks of AI. Thus: "Overall, we feel that ensuring high-quality and well-informed advice to policymakers over the long run is one of the most promising ways to increase the benefits and reduce the risks from advanced AI, and that the team put together by CSET is uniquely well-positioned to provide such advice." Despite risks and uncertainty, the grant is described as worthwhile under Open Phil's hits-based giving framework.

Donor reason for donating that amount (rather than a bigger or smaller amount): The large amount over an extended period (5 years) is explained at https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant "In the case of the new Center for Security and Emerging Technology, we think it will take some time to develop expertise on key questions relevant to policymakers and want to give CSET the commitment necessary to recruit key people, so we provided a five-year grant."

Donor reason for donating at this time (rather than earlier or later): Likely determined by the timing of the grantee's planned launch; more timing details are not discussed.
Intended funding timeframe in months: 60

Other notes: Donee is entered as Center for Security and Emerging Technology rather than as Georgetown University for consistency with future grants directly to the organization once it is set up. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. The grant is discussed in the broader context of the Open Philanthropy Project's giving in global catastrophic risks and AI safety in the Inside Philanthropy article https://www.insidephilanthropy.com/home/2019/3/22/why-this-effective-altruist-funder-is-giving-millions-to-ai-security. Announced: 2019-02-28.