Study and Training Related to AI Policy Careers donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

We do not have any information in our system for the donee Study and Training Related to AI Policy Careers.

Donee donation statistics

Cause area Count Median Mean Minimum 10th percentile 20th percentile 30th percentile 40th percentile 50th percentile 60th percentile 70th percentile 80th percentile 90th percentile Maximum
Overall 1 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420
AI safety 1 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420 594,420

Donation amounts by donor and year for donee Study and Training Related to AI Policy Careers

Donor Total 2020
Open Philanthropy 594,420.00 594,420.00
Total 594,420.00 594,420.00

Full list of documents in reverse chronological order (0 documents)

There are no documents associated with this donee.

Full list of donations in reverse chronological order (1 donation)

Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations

Donor: Open Philanthropy
Amount (current USD): 594,420.00
Amount rank (out of 1): 1
Donation date: 2020-03
Cause area: AI safety/governance/talent pipeline
URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/study-and-training-related-to-ai-policy-careers
Influencer: Luke Muehlhauser

Donation process: This is a scholarship program run by Open Philanthropy. Applications were sought at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers, with the last date for applications being 2019-10-15.

Intended use of funds (category): Living expenses during project

Intended use of funds: The grant provides "flexible support to enable individuals to pursue and explore careers in artificial intelligence policy." Recipients include Emefa Agawu, Karson Elmgren, Matthew Gentzel, Becca Kagan, and Benjamin Mueller. The ways that specific recipients intend to use the funds are not described, but https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#examples gives general guidance on the kinds of uses Open Philanthropy was expecting to see when it opened applications.

Donor reason for selecting the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#goal says: "The goal of this program is to provide flexible support that empowers exceptional people who are interested in positively affecting the long-run effects of transformative AI via careers in AI policy, which we see as an important and neglected issue." https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#appendix provides links to Open Philanthropy's other writing on the importance of the issue.

Donor reason for donating that amount (rather than a bigger or smaller amount): https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/funding-AI-policy-careers#summary says: "There is neither a maximum nor a minimum number of applications we intend to fund; rather, we intend to fund the applications that seem highly promising to us."

Donor reason for donating at this time (rather than earlier or later): The timing was likely determined by the time taken to review all applications after the close of applications on 2019-10-15.

Donor retrospective of the donation: As of early 2022, there do not appear to have been further rounds of grantmaking from Open Philanthropy for this purpose.

Other notes: Open Philanthropy runs a related fellowship program, the Open Phil AI Fellowship, which announces new grants on an annual cadence, though individual grants are often multi-year. The Open Phil AI Fellowship grantees are mostly people working on technical AI safety, whereas this grant is focused on AI policy work. Moreover, the Open Phil AI Fellowship targets graduate-level research, whereas this grant targets study and training.