This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2025. See the about page for more details.

Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
We do not have any donee information for Center for AI Safety in our system.
Donation amounts by cause area (all amounts in current USD):

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 3 | 4,025,729 | 3,539,576 | 1,433,000 | 1,433,000 | 1,433,000 | 1,433,000 | 4,025,729 | 4,025,729 | 4,025,729 | 5,160,000 | 5,160,000 | 5,160,000 | 5,160,000 |
AI safety | 3 | 4,025,729 | 3,539,576 | 1,433,000 | 1,433,000 | 1,433,000 | 1,433,000 | 4,025,729 | 4,025,729 | 4,025,729 | 5,160,000 | 5,160,000 | 5,160,000 | 5,160,000 |
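For readers who want to reproduce the row above, here is a minimal sketch that recomputes the statistics from the three grant amounts listed further down this page. It assumes the nearest-rank percentile method, which matches the figures shown; the portal's actual computation method is not documented on this page.

```python
import math

# The three grant amounts from the donations list below (current USD).
amounts = sorted([1_433_000, 4_025_729, 5_160_000])

def percentile_nearest_rank(data, p):
    """p-th percentile via the nearest-rank method: the ceil(p/100 * n)-th
    smallest value (an assumption; the site does not state its method)."""
    k = max(1, math.ceil(p / 100 * len(data)))
    return data[k - 1]

print("count:", len(amounts))                            # 3
print("mean:", round(sum(amounts) / len(amounts)))       # 3539576
print("median:", percentile_nearest_rank(amounts, 50))   # 4025729
for p in range(10, 100, 10):
    print(f"{p}th percentile:", percentile_nearest_rank(amounts, p))
print("min:", amounts[0], "max:", amounts[-1])           # 1433000, 5160000
```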
Donation amounts by donor and year (all amounts in current USD):

Donor | Total | 2023 | 2022 |
---|---|---|---|
Open Philanthropy (filter this donee) | 10,618,729.00 | 5,458,729.00 | 5,160,000.00 |
Total | 10,618,729.00 | 5,458,729.00 | 5,160,000.00 |
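As a quick arithmetic check on the totals above, the following sketch sums the three grants by year. The per-grant years are taken from the year columns of this table, since the donations list below does not carry full dates.

```python
# Per-grant amounts and years, as reflected in the tables on this page.
grants = [
    ("Open Philanthropy", 2023, 4_025_729.00),  # general support (2023)
    ("Open Philanthropy", 2023, 1_433_000.00),  # CAIS Philosophy Fellowship
    ("Open Philanthropy", 2022, 5_160_000.00),  # general support (2022)
]

by_year = {}
for _donor, year, amount in grants:
    by_year[year] = by_year.get(year, 0.0) + amount

print(by_year)                       # {2023: 5458729.0, 2022: 5160000.0}
print(sum(a for _, _, a in grants))  # 10618729.0
```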
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
(My understanding of) What Everyone in Technical Alignment is Doing and Why (GW, IR) | 2022-08-28 | Thomas Larsen, Eli Lifland | LessWrong | Fund for Alignment Research | Aligned AI, Alignment Research Center, Anthropic, Center for AI Safety, Center for Human-Compatible AI, Center on Long-Term Risk, Conjecture, DeepMind, Encultured, Future of Humanity Institute, Machine Intelligence Research Institute, OpenAI, Ought, Redwood Research | -- | Review of current state of cause area | AI safety | This post, cross-posted between LessWrong and the Alignment Forum, goes into detail on the authors' understanding of various research agendas and the organizations pursuing them. |
[Graph not reproduced here: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations.]
Donor | Amount (current USD) | Amount rank (out of 3) | Donation date | Cause area | URL | Influencer | Notes |
---|---|---|---|---|---|---|---|
Open Philanthropy | 4,025,729.00 | 2 | 2023 | AI safety/technical research/movement growth | https://www.openphilanthropy.org/grants/center-for-ai-safety-general-support-2023/ | -- | Intended use of funds (category): Organizational general support. Intended use of funds: Grant "for general support. The Center for AI Safety works on research, field-building, and advocacy to reduce existential risks from artificial intelligence." |
Open Philanthropy | 1,433,000.00 | 3 | 2023 | AI safety/technical research/strategy | https://www.openphilanthropy.org/grants/center-for-ai-safety-philosophy-fellowship/ | -- | Intended use of funds (category): Direct project expenses. Intended use of funds: Grant "to support the CAIS Philosophy Fellowship, which is a research fellowship that will support philosophers researching topics related to AI safety. This grant also supported a workshop on adversarial robustness, as well as prizes for safety-related competitions at the 2022 NeurIPS conference." Links: https://philosophy.safe.ai/ for the CAIS Philosophy Fellowship, https://eccv22-arow.github.io/ for the workshop, and https://trojandetection.ai/ and https://neurips2022.mlsafety.org/ for the prizes. |
Open Philanthropy | 5,160,000.00 | 1 | 2022 | AI safety/technical research/movement growth | https://www.openphilanthropy.org/grants/center-for-ai-safety-general-support/ | -- | Intended use of funds (category): Organizational general support. Intended use of funds: Grant "for general support. The Center for AI Safety does technical research and field-building aimed at reducing catastrophic and existential risks from artificial intelligence." Donor retrospective of the donation: the follow-up general support grant https://www.openphilanthropy.org/grants/center-for-ai-safety-general-support-2023/ in 2023 for a similar amount suggests continued satisfaction with the grantee. |