AI Safety Support donations received

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024.

See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, tutorial in README, request for feedback to EA Forum.

Table of contents

Basic donee information

We do not have any donee information for the donee AI Safety Support in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 7 | 200,000 | 361,245 | 25,000 | 25,000 | 42,000 | 80,000 | 80,000 | 200,000 | 200,000 | 200,000 | 443,716 | 1,538,000 | 1,538,000
AI safety | 7 | 200,000 | 361,245 | 25,000 | 25,000 | 42,000 | 80,000 | 80,000 | 200,000 | 200,000 | 200,000 | 443,716 | 1,538,000 | 1,538,000
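For readers who want to reproduce these statistics, here is a minimal sketch (not the site's actual code) that recomputes the table above from the seven donation amounts listed further down this page. It assumes the nearest-rank method for percentiles, which matches the values shown.

```python
# Minimal sketch (not the site's actual code): recompute the donee donation
# statistics from the seven donation amounts listed on this page, assuming
# nearest-rank percentiles over the sorted amounts.
import math
import statistics

amounts = [443716, 1538000, 200000, 42000, 80000, 25000, 200000]  # current USD

def nearest_rank_percentile(sorted_values, p):
    """Return the p-th percentile using the nearest-rank method."""
    rank = math.ceil(p / 100 * len(sorted_values))  # 1-based rank
    return sorted_values[max(rank, 1) - 1]

sorted_amounts = sorted(amounts)
print("Count:", len(amounts))
print("Mean:", round(statistics.mean(amounts)))   # 361245
print("Median:", statistics.median(amounts))      # 200000
for p in range(10, 100, 10):
    print(f"{p}th percentile:", nearest_rank_percentile(sorted_amounts, p))
print("Minimum:", sorted_amounts[0], "Maximum:", sorted_amounts[-1])
```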

Donation amounts by donor and year for donee AI Safety Support

Donor | Total | 2023 | 2022 | 2021
Open Philanthropy | 2,023,716.00 | 443,716.00 | 1,580,000.00 | 0.00
FTX Future Fund | 200,000.00 | 0.00 | 200,000.00 | 0.00
Jaan Tallinn | 200,000.00 | 0.00 | 0.00 | 200,000.00
Effective Altruism Funds: Long-Term Future Fund | 105,000.00 | 0.00 | 80,000.00 | 25,000.00
Total | 2,528,716.00 | 443,716.00 | 1,860,000.00 | 225,000.00
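As a cross-check on the totals above, here is a minimal sketch (not the site's actual code) that rebuilds the donor-by-year table from the individual donations listed in the next section.

```python
# Minimal sketch (not the site's actual code): rebuild the donor-by-year
# totals from the individual donations listed further down the page.
from collections import defaultdict

# (donor, year, amount in current USD), taken from the donation list below
donations = [
    ("Open Philanthropy", 2023, 443716.00),
    ("Open Philanthropy", 2022, 1538000.00),
    ("FTX Future Fund", 2022, 200000.00),
    ("Open Philanthropy", 2022, 42000.00),
    ("Effective Altruism Funds: Long-Term Future Fund", 2022, 80000.00),
    ("Effective Altruism Funds: Long-Term Future Fund", 2021, 25000.00),
    ("Jaan Tallinn", 2021, 200000.00),
]

by_donor_year = defaultdict(float)
by_donor = defaultdict(float)
for donor, year, amount in donations:
    by_donor_year[(donor, year)] += amount
    by_donor[donor] += amount

# Print one row per donor, largest total first, with per-year columns.
for donor, total in sorted(by_donor.items(), key=lambda kv: -kv[1]):
    row = [f"{by_donor_year[(donor, y)]:,.2f}" for y in (2023, 2022, 2021)]
    print(donor, "|", f"{total:,.2f}", "|", " | ".join(row))
print("Total |", f"{sum(by_donor.values()):,.2f}")
```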

Full list of documents in reverse chronological order (0 documents)

There are no documents associated with this donee.

Full list of donations in reverse chronological order (7 donations)

Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations

Graph of donations and their timeframes
Donor | Amount (current USD) | Amount rank (out of 7) | Donation date | Cause area | URL | Influencer | Notes
Open Philanthropy | 443,716.00 | 2 | 2023-04 | AI safety/technical research | https://www.openphilanthropy.org/grants/ai-safety-support-situational-awareness-research/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Three grants "to support research led by Owain Evans to evaluate whether machine learning models have situational awareness. These grants were made to AI Safety Support, Effective Ventures Foundation USA, and the Berkeley Existential Risk Initiative, and will support salaries, office space, and compute for this research project."

Other notes: Both the Open Philanthropy website and the donations list website list the grantee as AI Safety Support, but this is actually a combination of three grants, one each to "AI Safety Support, Effective Ventures Foundation USA, and the Berkeley Existential Risk Initiative"; the grants are listed under a single donee for simplicity and due to system limitations.
Open Philanthropy | 1,538,000.00 | 1 | 2022-11 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/ai-safety-support-seri-mats-program/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [AI Safety Support's] collaboration with Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with in-person alignment research communities."

Other notes: See also the companion grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ to Berkeley Existential Risk Initiative and the grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ to Conjecture for the London-based extension.
FTX Future Fund | 200,000.00 | 3 | 2022-05 | AI safety/talent pipeline | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for general funding for community building and managing the talent pipeline for AI alignment researchers. AI Safety Support’s work includes one-on-one coaching, events, and research training programs."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Open Philanthropy | 42,000.00 | 6 | 2022-04 | AI safety/strategy | https://www.openphilanthropy.org/grants/ai-safety-support-research-on-trends-in-machine-learning/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to scale up a research group, led by Jaime Sevilla, which studies trends in machine learning."

Donor retrospective of the donation: The later grants https://www.openphilanthropy.org/grants/ai-safety-support-seri-mats-program/ and https://www.openphilanthropy.org/grants/ai-safety-support-situational-awareness-research/ from Open Philanthropy, though made with slightly different goals, suggest continued satisfaction with the grantee.
Effective Altruism Funds: Long-Term Future Fund | 80,000.00 | 5 | 2022-01 | AI safety/movement growth | https://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round | -- | Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "Free health coaching to optimize the health and wellbeing, and thus capacity/productivity, of those working on AI safety"

Donor retrospective of the donation: AI Safety Support would be shut down about 1.5 years later; see https://forum.effectivealtruism.org/posts/Bjr6FXvnKqb37uMPP/shutting-down-ai-safety-support (GW, IR) for details.
Effective Altruism Funds: Long-Term Future Fund | 25,000.00 | 7 | 2021-07 | AI safety | https://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round | -- | Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "6-month salary for JJ Hepburn to continue providing 1-on-1 support to early AI safety researchers and transition AI safety support"

Donor retrospective of the donation: A followup grant to AI Safety Support about six months later, at the end of the timeframe covered by this grant, suggests continued satisfaction with the grant outcome. AI Safety Support would be shut down about two years later; see https://forum.effectivealtruism.org/posts/Bjr6FXvnKqb37uMPP/shutting-down-ai-safety-support (GW, IR) for details.

Other notes: Intended funding timeframe in months: 6.
Jaan Tallinn | 200,000.00 | 3 | 2021-04 | AI safety | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund, Ben Hoskin, Katja Grace, Oliver Habryka, Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." (A toy sketch of this kind of marginal-utility allocation appears after this entry.)

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the first with a grant to the grantee.

Other notes: Although Jed McCaleb also participates as a funder in this grant round, he does not make any grants to this grantee. Percentage of total donor spend in the corresponding batch of donations: 2.10%.
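The S-process is described above only at a high level. Purely as a toy illustration (not the actual S-process, and with hypothetical, made-up applications and utility curves), the sketch below shows how decreasing marginal utility functions for funding each application can be turned into grant amounts by repeatedly funding whichever application values the next dollar chunk most.

```python
# Toy illustration only, NOT the actual S-process: a greedy allocation of one
# funder's budget under decreasing marginal utility, to give a flavor of how
# "marginal utility functions for funding each application" can drive grant
# amounts. Names, numbers, and utility curves below are made up.
def allocate(budget, marginal_utility, step=50_000):
    """Greedily fund the application with the highest marginal utility for
    each successive chunk of `step` dollars until the budget runs out.

    marginal_utility: dict mapping application -> function that takes the
    amount already allocated and returns the utility of the next chunk.
    """
    allocation = {app: 0 for app in marginal_utility}
    while budget >= step:
        # Pick the application whose next chunk has the highest marginal utility.
        best = max(marginal_utility,
                   key=lambda app: marginal_utility[app](allocation[app]))
        if marginal_utility[best](allocation[best]) <= 0:
            break  # no application values additional funding
        allocation[best] += step
        budget -= step
    return allocation

# Hypothetical, made-up marginal utility curves for two applications.
curves = {
    "Application A": lambda funded: 10 - funded / 100_000,  # declines quickly
    "Application B": lambda funded: 6 - funded / 400_000,   # declines slowly
}
print(allocate(1_000_000, curves))
```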