Alignment Research Center donations received

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024.

See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donee information

We do not have any donee information for the donee Alignment Research Center in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 6 | 1,250,000 | 1,168,833 | 72,000 | 72,000 | 265,000 | 265,000 | 1,250,000 | 1,250,000 | 1,401,000 | 1,846,000 | 1,846,000 | 2,179,000 | 2,179,000
AI safety | 6 | 1,250,000 | 1,168,833 | 72,000 | 72,000 | 265,000 | 265,000 | 1,250,000 | 1,250,000 | 1,401,000 | 1,846,000 | 1,846,000 | 2,179,000 | 2,179,000
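For readers who want to reproduce the statistics above, the following is a minimal Python sketch using the six donation amounts listed further down this page. It assumes a nearest-rank percentile convention, which matches the values shown; the portal's actual computation method may differ.

```python
import math

# The six underlying donation amounts (current USD) from the donation list below.
amounts = [72_000, 265_000, 1_250_000, 1_401_000, 1_846_000, 2_179_000]

def nearest_rank_percentile(sorted_values, p):
    """Nearest-rank percentile: smallest value with at least p% of the data at or below it.
    (Assumed convention; it reproduces the table above for these six values.)"""
    k = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[k - 1]

values = sorted(amounts)
print("Count:", len(values))
print("Mean:", round(sum(values) / len(values)))                        # 1168833
print("Median (50th percentile):", nearest_rank_percentile(values, 50))  # 1250000
for p in range(10, 100, 10):
    print(f"{p}th percentile:", nearest_rank_percentile(values, p))
print("Minimum:", values[0], "Maximum:", values[-1])
```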

Donation amounts by donor and year for donee Alignment Research Center

Donor | Total | 2023 | 2022
Jaan Tallinn | 4,025,000.00 | 1,846,000.00 | 2,179,000.00
Open Philanthropy | 1,515,000.00 | 0.00 | 1,515,000.00
Future of Life Institute | 1,401,000.00 | 1,401,000.00 | 0.00
Effective Altruism Funds: Long-Term Future Fund | 72,000.00 | 0.00 | 72,000.00
Total | 7,013,000.00 | 3,247,000.00 | 3,766,000.00
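As a cross-check, the totals above follow directly from the six donations in the full list below. The following is a minimal aggregation sketch; the tuples are transcribed from this page, and the field layout is illustrative rather than the portal's actual schema.

```python
from collections import defaultdict

# The six donations listed further down this page: (donor, year, amount in current USD).
donations = [
    ("Future of Life Institute", 2023, 1_401_000),
    ("Jaan Tallinn", 2023, 1_846_000),
    ("Jaan Tallinn", 2022, 2_179_000),
    ("Open Philanthropy", 2022, 1_250_000),
    ("Effective Altruism Funds: Long-Term Future Fund", 2022, 72_000),
    ("Open Philanthropy", 2022, 265_000),
]

# Sum amounts by donor and year to reproduce the table above.
by_donor_year = defaultdict(lambda: defaultdict(int))
for donor, year, amount in donations:
    by_donor_year[donor][year] += amount

for donor, years in by_donor_year.items():
    total = sum(years.values())
    print(f"{donor}: total {total:,}; 2023: {years.get(2023, 0):,}; 2022: {years.get(2022, 0):,}")

grand_total = sum(amount for _, _, amount in donations)
print(f"Grand total: {grand_total:,}")  # 7,013,000
```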

Full list of documents in reverse chronological order (2 documents)

Title (URL linked): (My understanding of) What Everyone in Technical Alignment is Doing and Why (GW, IR)
Publication date: 2022-08-28
Author: Thomas Larsen, Eli
Publisher: LessWrong
Affected donees: Fund for Alignment Research, Aligned AI, Alignment Research Center, Anthropic, Center for AI Safety, Center for Human-Compatible AI, Center on Long-Term Risk, Conjecture, DeepMind, Encultured, Future of Humanity Institute, Machine Intelligence Research Institute, OpenAI, Ought, Redwood Research
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: This post, cross-posted between LessWrong and the Alignment Forum, goes into detail on the authors' understanding of various research agendas and the organizations pursuing them.

Title (URL linked): 2021 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2021-12-23
Author: Larks
Publisher: Effective Altruism Forum
Affected donors: Larks, Effective Altruism Funds: Long-Term Future Fund, Survival and Flourishing Fund, FTX Future Fund
Affected donees: Future of Humanity Institute, Centre for the Governance of AI, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Google Deepmind, Anthropic, Alignment Research Center, Redwood Research, Ought, AI Impacts, Global Priorities Institute, Center on Long-Term Risk, Centre for Long-Term Resilience, Rethink Priorities, Convergence Analysis, Stanford Existential Risk Initiative, Effective Altruism Funds: Long-Term Future Fund, Berkeley Existential Risk Initiative, 80,000 Hours, Survival and Flourishing Fund
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to: where the organization would like to see funds go, other than to itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.

Full list of donations in reverse chronological order (6 donations)

[Graph omitted: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations.]

Each donation entry below lists the following fields: Donor; Amount (current USD); Amount rank (out of 6); Donation date; Cause area; URL; Influencer; followed by notes on the donation process, intended use of funds, and donor reasoning where available.
Donor: Future of Life Institute | Amount (current USD): 1,401,000.00 | Amount rank (out of 6): 3 | Donation date: 2023-04 | Cause area: AI safety/technical research | URL: https://futureoflife.org/grant-program/2023-grants/ | Influencer: Survival and Flourishing Fund; Olle Häggström; Steve Omohundro; Daniel Kokotajlo

Donation process: Part of the Survival and Flourishing Fund's 2023 H1 grants https://survivalandflourishing.fund/sff-2023-h1-recommendations based on the S-process (simulation process), which "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios. In each simulation, Recommenders specify a marginal value function for funding each application, and an algorithm calculates a table of grant recommendations by taking turns distributing funding recommendations from each Recommender in succession, using their marginal value functions to prioritize. The Recommenders then discuss their evaluations and update the simulation with their new opinions, using approval voting to prioritize discussion topics, until the end of the last meeting when their inputs are finalized. Similarly, funders specify and adjust different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." (A simplified, illustrative sketch of this kind of allocation appears after this entry.)

Intended use of funds (category): Direct project expenses

Intended use of funds: https://futureoflife.org/grant-program/2023-grants/ says: "Support for the Alignment Research Center (ARC) Evaluation (Evals) Team. Evals is a new team at ARC building capability evaluations (and in the future, alignment evaluations) for advanced ML models. The goals of the project are to improve our understanding of what alignment danger is going to look like, understand how far away we are from dangerous AI, and create metrics that labs can make commitments around."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's ninth grant round and the second with a grant to this grantee.

Other notes: In this grant round, there are two funders (Jaan Tallinn and the Future of Life Institute), and the page https://survivalandflourishing.fund/sff-2023-h1-recommendations does not provide a per-funder breakdown of the total grant amount of $3,247,000 to this grantee. The amount actually granted by FLI is taken from https://futureoflife.org/grant-program/2023-grants/.
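To make the quoted S-process description more concrete, here is a toy sketch of a marginal-value-function allocation of the general kind described: recommenders take turns directing small funding increments to whichever application they value most at the margin. This is purely illustrative; it is not SFF's actual algorithm or code, and all names, value functions, and amounts are made up.

```python
# Toy sketch of an S-process-style allocation (illustrative only; not SFF's actual code).
# Each recommender supplies a marginal value function per application; funding is handed
# out in small increments, with recommenders taking turns, each increment going to the
# application the current recommender values most at the margin.

def allocate(budget, recommenders, applications, step=1000):
    """recommenders: dict name -> {application: marginal value as a function of amount funded so far}."""
    grants = {app: 0 for app in applications}
    names = list(recommenders)
    remaining = budget
    i = 0
    while remaining > 0:
        rec = recommenders[names[i % len(names)]]  # recommenders take turns
        # Fund the application this recommender values most at the margin right now.
        best = max(applications, key=lambda app: rec[app](grants[app]))
        grants[best] += step
        remaining -= step
        i += 1
    return grants

# Example with made-up diminishing marginal value functions.
recommenders = {
    "R1": {"A": lambda x: 10 / (1 + x / 50_000), "B": lambda x: 6 / (1 + x / 100_000)},
    "R2": {"A": lambda x: 4 / (1 + x / 50_000),  "B": lambda x: 9 / (1 + x / 100_000)},
}
print(allocate(200_000, recommenders, ["A", "B"]))
```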
Donor: Jaan Tallinn | Amount (current USD): 1,846,000.00 | Amount rank (out of 6): 2 | Donation date: 2023-04 | Cause area: AI safety/technical research | URL: https://survivalandflourishing.fund/sff-2023-h1-recommendations | Influencer: Survival and Flourishing Fund; Olle Häggström; Steve Omohundro; Daniel Kokotajlo

Donation process: Part of the Survival and Flourishing Fund's 2023 H1 grants based on the S-process (simulation process), which "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios. In each simulation, Recommenders specify a marginal value function for funding each application, and an algorithm calculates a table of grant recommendations by taking turns distributing funding recommendations from each Recommender in succession, using their marginal value functions to prioritize. The Recommenders then discuss their evaluations and update the simulation with their new opinions, using approval voting to prioritize discussion topics, until the end of the last meeting when their inputs are finalized. Similarly, funders specify and adjust different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Direct project expenses

Intended use of funds: https://futureoflife.org/grant-program/2023-grants/ says: "Support for the Alignment Research Center (ARC) Evaluation (Evals) Team. Evals is a new team at ARC building capability evaluations (and in the future, alignment evaluations) for advanced ML models. The goals of the project are to improve our understanding of what alignment danger is going to look like, understand how far away we are from dangerous AI, and create metrics that labs can make commitments around."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's ninth grant round and the second with a grant to this grantee.

Other notes: In this grant round, there are two funders (Jaan Tallinn and the Future of Life Institute), and the page https://survivalandflourishing.fund/sff-2023-h1-recommendations does not provide a per-funder breakdown of the total grant amount of $3,247,000. The amount granted by FLI ($1,401,000) is taken from https://futureoflife.org/grant-program/2023-grants/, and the amount granted by Jaan Tallinn is calculated as the difference of the two amounts: $3,247,000 - $1,401,000 = $1,846,000. https://jaan.online/philanthropy/donations.html is expected to eventually include the donation, but as of 2023-11-26 it does not.
Donor: Jaan Tallinn | Amount (current USD): 2,179,000.00 | Amount rank (out of 6): 1 | Donation date: 2022-12-20 | Cause area: AI safety/technical research | URL: https://jaan.online/philanthropy/donations.html | Influencer: Survival and Flourishing Fund; Nick Hay; Alyssa Vance; Scott Garrabrant

Donation process: Part of the Survival and Flourishing Fund's 2022 H2 grants https://survivalandflourishing.fund/sff-2022-h2-recommendations based on the S-process (simulation process), which "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal value functions. Recommenders specified a marginal value function for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's eighth grant round and the first with a grant to this grantee.

Donor retrospective of the donation: The grant recommendation in the future grant round https://survivalandflourishing.fund/sff-2023-h1-recommendations suggests continued satisfaction with the grantee.
Donor: Open Philanthropy | Amount (current USD): 1,250,000.00 | Amount rank (out of 6): 4 | Donation date: 2022-11 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/grants/alignment-research-center-general-support-november-2022/ | Influencer: --

Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. The Alignment Research Center conducts research on how to align AI with human interests, with a focus on techniques that could be adopted in existing machine learning systems and effectively scale up to future systems."

Donor reason for selecting the donee: While no reason is specified in the grant page, it's worth noting that the founder of the donee organization, Paul Christiano, has previously been a technical advisor to Open Philanthropy, and has been affiliated with multiple organizations (Machine Intelligence Research Institute, OpenAI, and Ought) that have previously received funding from Open Philanthropy for AI safety. These past connections may have influenced the grant.

Donor reason for donating at this time (rather than earlier or later): The grant is made eight months after the previous $265,000 grant https://www.openphilanthropy.org/grants/alignment-research-center-general-support/ and likely reflects the renewal of that now-used-up funding.
Intended funding timeframe in months: 24
Donor: Effective Altruism Funds: Long-Term Future Fund | Amount (current USD): 72,000.00 | Amount rank (out of 6): 6 | Donation date: 2022-10 | Cause area: AI safety/technical research | URL: https://funds.effectivealtruism.org/grants?fund=Long-Term%2520Future%2520Fund&sort=round | Influencer: Asya Bergal

Donation process: The process for this particular grant is not available, but the general process is for the grantee to submit an application at https://av20jp3z.paperform.co/?fund=Long-Term%20Future%20Fund and get a response within 3 weeks (for most applications) or 2 months (for all applications).

Intended use of funds (category): Direct project expenses

Intended use of funds: The grants database gives the following intended use of funds: "A research & networking retreat for winners of the Eliciting Latent Knowledge contest." The grant page section https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alignment_Research_Center__54_543__Support_for_a_research___networking_event_for_winners_of_the_Eliciting_Latent_Knowledge_contest (GW, IR) (written by Asya Bergal) gives further detail: "This was funding a research & networking event for the winners of the Eliciting Latent Knowledge contest run in early 2022; the plan for the event was mainly for it to be participant-led, with participants sharing what they were working on and connecting with others, along with professional alignment researchers visiting to share their own work with participants." The LessWrong post https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals (GW, IR) is linked for more detail on the Eliciting Latent Knowledge contest.

Donor reason for selecting the donee: The grant page section https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alignment_Research_Center__54_543__Support_for_a_research___networking_event_for_winners_of_the_Eliciting_Latent_Knowledge_contest (GW, IR) written by Asya Bergal says: "I think the case for this grant is pretty straightforward: the winners of this contest are (presumably) selected for being unusually likely to be able to contribute to problems in AI alignment, and retreats, especially those involving interactions with professionals in the space, have a strong track record of getting people more involved with this work."

Other notes: https://forum.effectivealtruism.org/posts/zZ2vq7YEckpunrQS4/long-term-future-fund-april-2023-grant-recommendations#Alignment_Research_Center__54_543__Support_for_a_research___networking_event_for_winners_of_the_Eliciting_Latent_Knowledge_contest (GW, IR) gives a grant amount of $54,543, but the grants database gives an amount of $72,000.
Donor: Open Philanthropy | Amount (current USD): 265,000.00 | Amount rank (out of 6): 5 | Donation date: 2022-03 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/grants/alignment-research-center-general-support/ | Influencer: --

Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. ARC focuses on developing strategies for AI alignment that can be adopted by industry today and scaled to future machine learning systems."

Donor reason for selecting the donee: While no reason is specified in the grant page, it's worth noting that the founder of the donee organization, Paul Christiano, has previously been a technical advisor to Open Philanthropy, and has been affiliated with multiple organizations (Machine Intelligence Research Institute, OpenAI, and Ought) that have previously received funding from Open Philanthropy for AI safety. These past connections may have influenced the grant.

Donor reason for donating at this time (rather than earlier or later): The grant is made shortly after the announcement by Alignment Research Center of its plans at https://www.alignment.org/blog/early-2022-hiring-round/ to hire beyond its then-current full-time staff of two. As grants are often committed after the internal decision process to make them has concluded, it is possible that the funding for this grant was sought with this round of hiring in mind and was factored into the hiring announcement.

Donor retrospective of the donation: The followup two-year grant https://www.openphilanthropy.org/grants/alignment-research-center-general-support-november-2022/ suggests continued satisfaction with the grantee.