FTX Future Fund donations made (filtered to cause areas matching AI safety)

This is an online portal with information on donations that were announced publicly (or shared with permission) and are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, tutorial in README, request for feedback to EA Forum.

Table of contents

Basic donor information

We do not have any donor information for FTX Future Fund in our system.

Donor donation statistics

Cause area Count Median Mean Minimum 10th percentile 20th percentile 30th percentile 40th percentile 50th percentile 60th percentile 70th percentile 80th percentile 90th percentile Maximum
Overall 24 250,000 600,208 30,000 50,000 100,000 135,000 190,000 250,000 300,000 380,000 800,000 1,500,000 5,000,000
AI safety 19 250,000 566,316 30,000 40,000 95,000 100,000 155,000 250,000 290,000 380,000 600,000 1,500,000 5,000,000
Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change 1 135,000 135,000 135,000 135,000 135,000 135,000 135,000 135,000 135,000 135,000 135,000 135,000 135,000
AI safety|Biosecurity and pandemic preparedness 2 190,000 255,000 190,000 190,000 190,000 190,000 190,000 190,000 320,000 320,000 320,000 320,000 320,000
AI safety|Migration policy 1 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000 1,000,000
Effective altruism|AI safety 1 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000 2,000,000
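
The percentile columns above appear to follow a nearest-rank convention (the smallest donation at or above the given rank), and the mean and median are consistent with the 24 donation amounts listed in the full donation list at the bottom of this page. A minimal Python sketch, with hypothetical helper names, that reproduces the "Overall" row under that assumption:

```python
from math import ceil
from statistics import mean

def nearest_rank_percentile(sorted_amounts, p):
    """p-th percentile by the nearest-rank method: the smallest value
    whose rank is at least p% of the way through the sorted list."""
    k = max(1, ceil(p / 100 * len(sorted_amounts)))
    return sorted_amounts[k - 1]

def summarize(amounts):
    """Reproduce one row of the donor donation statistics table."""
    xs = sorted(amounts)
    row = {"count": len(xs), "mean": round(mean(xs)),
           "minimum": xs[0], "maximum": xs[-1],
           "median": nearest_rank_percentile(xs, 50)}
    for p in range(10, 100, 10):
        row[f"{p}th percentile"] = nearest_rank_percentile(xs, p)
    return row

# The 24 donation amounts (USD) from the full donation list below.
overall = [30_000, 40_000, 50_000, 95_000, 100_000, 100_000, 100_000, 135_000,
           155_000, 190_000, 200_000, 250_000, 280_000, 290_000, 300_000,
           320_000, 380_000, 490_000, 600_000, 800_000, 1_000_000, 1_500_000,
           2_000_000, 5_000_000]
print(summarize(overall))  # matches the "Overall" row above: count 24, median 250,000, mean 600,208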

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the donor's own classification in all cases.

Cause area Number of donations Number of donees Total 2022
AI safety (filter this donor) 19 18 10,760,000.00 10,760,000.00
Effective altruism|AI safety (filter this donor) 1 1 2,000,000.00 2,000,000.00
AI safety|Migration policy (filter this donor) 1 1 1,000,000.00 1,000,000.00
AI safety|Biosecurity and pandemic preparedness (filter this donor) 2 2 510,000.00 510,000.00
Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change (filter this donor) 1 1 135,000.00 135,000.00
Total 24 23 14,405,000.00 14,405,000.00
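
The per-cause figures above are simple aggregations over the full donation list. Below is a minimal sketch of the grouping logic, assuming records of the form (donee, cause area, amount in USD); only a few records from the list at the bottom of this page are shown, and the field layout is an assumption for illustration.

```python
from collections import defaultdict

# A few records from the full donation list below; field layout is an assumption.
donations = [
    ("Ought", "AI safety", 5_000_000),
    ("University of California, Berkeley", "AI safety", 800_000),
    ("University of California, Berkeley", "AI safety", 600_000),
    ("Lightcone Infrastructure", "Effective altruism|AI safety", 2_000_000),
]

groups = defaultdict(lambda: {"donations": 0, "donees": set(), "total": 0})
for donee, cause, amount in donations:
    g = groups[cause]
    g["donations"] += 1          # number of donations
    g["donees"].add(donee)       # distinct donees
    g["total"] += amount         # total USD

for cause, g in sorted(groups.items(), key=lambda kv: -kv[1]["total"]):
    print(f'{cause}: {g["donations"]} donations, '
          f'{len(g["donees"])} donees, {g["total"]:,.2f} USD')
```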

Skipping spending graph as there is at most one year’s worth of donations.

Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area Number of donations Number of donees Total 2022
AI safety 15 14 10,115,000.00 10,115,000.00
Effective altruism|AI safety 1 1 2,000,000.00 2,000,000.00
AI safety|Migration policy/high-skilled migration 1 1 1,000,000.00 1,000,000.00
AI safety|Biosecurity and pandemic preparedness 2 2 510,000.00 510,000.00
AI safety/talent pipeline 3 3 395,000.00 395,000.00
AI safety/forecasting 1 1 250,000.00 250,000.00
Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change 1 1 135,000.00 135,000.00
Classified total 24 23 14,405,000.00 14,405,000.00
Unclassified total 0 0 0.00 0.00
Total 24 23 14,405,000.00 14,405,000.00

Skipping spending graph as there is at most one year’s worth of donations.

Donation amounts by donee and year

Donee Cause area Metadata Total 2022
Ought (filter this donor) AI safety Site 5,000,000.00 5,000,000.00
Lightcone Infrastructure (filter this donor) Epistemic institutions FB WP Site 2,000,000.00 2,000,000.00
Cornell University (filter this donor) FB Tw WP Site 1,500,000.00 1,500,000.00
University of California, Berkeley (filter this donor) FB Tw WP Site 1,400,000.00 1,400,000.00
Federation for American Scientists (filter this donor) 1,000,000.00 1,000,000.00
ML Safety Scholars Program (filter this donor) 490,000.00 490,000.00
University of Cambridge (filter this donor) FB Tw WP Site 380,000.00 380,000.00
Association for Long Term Existence and Resilience (filter this donor) 320,000.00 320,000.00
Brian Christian (filter this donor) 300,000.00 300,000.00
AI Safety Camp (filter this donor) 290,000.00 290,000.00
University of Utah (filter this donor) FB Tw WP Site 280,000.00 280,000.00
AI Impacts (filter this donor) AI safety Site 250,000.00 250,000.00
AI Safety Support (filter this donor) 200,000.00 200,000.00
James Lin (filter this donor) 190,000.00 190,000.00
Berkeley Existential Risk Initiative (filter this donor) AI safety/other global catastrophic risks Site TW 155,000.00 155,000.00
EffiSciences (filter this donor) 135,000.00 135,000.00
Siddharth Hiregowdara (filter this donor) 100,000.00 100,000.00
Columbia University (filter this donor) FB Tw WP Site 100,000.00 100,000.00
Prometheus Science Bowl (filter this donor) 100,000.00 100,000.00
Apart Research (filter this donor) 95,000.00 95,000.00
Trojan Detection Challenge at NeurIPS 2022 (filter this donor) 50,000.00 50,000.00
AI Risk Public Materials Competition (filter this donor) 40,000.00 40,000.00
Evan R. Murphy (filter this donor) 30,000.00 30,000.00
Total -- -- 14,405,000.00 14,405,000.00

Skipping spending graph as there is at most one year’s worth of donations.

Donation amounts by influencer and year

Sorry, we couldn't find any influencer information.

Donation amounts by disclosures and year

Sorry, we couldn't find any disclosures information.

Donation amounts by country and year

If you hover over a cell for a given country and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Country Number of donations Number of donees Total 2022
France 1 1 135,000.00 135,000.00
Classified total 1 1 135,000.00 135,000.00
Unclassified total 23 22 14,270,000.00 14,270,000.00
Total 24 23 14,405,000.00 14,405,000.00

Skipping spending graph as there is at most one year’s worth of donations.

Full list of documents in reverse chronological order (4 documents)

Title (URL linked) Publication date Author Publisher Affected donors Affected donees Affected influencers Document scope Cause area Notes
Announcing the Future Fund’s AI Worldview Prize 2022-09-23 Nick Beckstead Leopold Aschenbrenner Avital Balwit William MacAskill Ketan Ramakrishnan FTX Future Fund FTX Future Fund Request for critiques of donor strategy AI safety In this post, cross-posted as https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize (GW, IR) to the EA Forum, the Future Fund team announces its Worldview Prize, which seeks content that would cause the FTX Future Fund to significantly update its views regarding AI timelines and its perspective on how to approach AI safety.
Future Fund June 2022 Update 2022-06-30 Nick Beckstead Leopold Aschenbrenner Avital Balwit William MacAskill Ketan Ramakrishnan FTX Future Fund FTX Future Fund Manifold Markets ML Safety Scholars Program Andi Peng Braden Leach Thomas Kwa SecureBio Ray Amjad Apollo Academic Surveys Justin Mares Longview Philanthropy Atlas Fellowship Effective Ideas Blog Prize Ought Swift Centre for Applied Forecasting Federation for American Scientists Public Editor Project Quantified Uncertainty Research Institute Moncef Slaoui AI Impacts EA Critiques and Red Teaming Prize Broad donor strategy Longtermism|AI safety|Biosecurity and pandemic preparedness|Effective altruism This lengthy blog post, cross-posted at https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update (GW, IR) to the Effective Altruism Forum, goes into detail on the FTX Future Fund's grantmaking so far and the lessons learned from it. The post reports 262 grants and investments made, with $132 million in total spend. Three funding models are in use: regranting ($31 million so far), open call ($26 million so far), and staff-led grantmaking ($73 million so far).
Some thoughts on recent Effective Altruism funding announcements. It's been a big week in Effective Altruism 2022-03-03 James Ozden Open Philanthropy FTX Future Fund FTX Community Fund FTX Climate Fund Mercy For Animals Charity Entrepreneurship Miscellaneous commentary Longtermism|Animal welfare|Global health and development|AI safety|Climate change In this blog post, cross-posted at https://forum.effectivealtruism.org/posts/Wpr5ssnNW5JPDDPvd/some-thoughts-on-recent-effective-altruism-funding (GW, IR) to the EA Forum, James Ozden discusses recent increases in funding by donors aligned with effective altruism (EA) and makes forecasts for the amount of annual money moved by 2025. Highlights of the post: 1. The entry of the FTX Future Fund is expected to increase the proportion of funds allocated to longtermist causes, bringing it more in line with what EA leaders think it should be (based on the data that https://80000hours.org/2021/08/effective-altruism-allocation-resources-cause-areas/ compiles). 2. Grantmaking capacity needs to be scaled up to match the increase in available funds. 3. The EA movement may need to shift from marginal thinking to coordination dynamics, as its funding amounts are no longer as marginal. 4. Entrepreneurs, founders, and incubators are needed. 6. We need to be more ambitious.
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) 2021-12-23 Larks Effective Altruism Forum Larks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund Future of Humanity Institute Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours Survival and Flourishing Fund Review of current state of cause area AI safety Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to: where each organization would like to see funds go, other than to itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.

Full list of donations in reverse chronological order (24 donations)

Graph of top 10 donees (for donations with known year of donation) by amount, showing the timeframe of donations

Graph of donations and their timeframes
Donee Amount (current USD) Amount rank (out of 24) Donation date Cause area URL Influencer Notes
Evan R. Murphy 30,000.00 24 2022-07 AI safety https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to "support six months of independent research on interpretability and other AI safety topics."
AI Impacts 250,000.00 13 2022-06 AI safety/forecasting https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support rerunning the highly-cited survey “When Will AI Exceed Human Performance? Evidence from AI Experts” from 2016, analysis, and publication of results."
University of California, Berkeley (Earmark: Sergey Levine) 600,000.00 6 2022-06 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a project to study how large language models integrated with offline reinforcement learning pose a risk of machine deception and persuasion."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
AI Safety Camp 290,000.00 11 2022-06 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "partially support the salaries for AI Safety Camp’s two directors and to support logistical expenses at its physical camp."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
James Lin 190,000.00 15 2022-05 AI safety|Biosecurity and pandemic preparedness https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. The grant recipient had written a blog post https://forum.effectivealtruism.org/posts/qoB8MHe94kCEZyswd/i-want-future-perfect-but-for-science-publications (GW, IR) on 2022-03-08 (during the grant application period) describing the idea.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "allow a reputable technology publication to engage 2-5 undergraduate student interns to write about topics including AI safety, alternative proteins, and biosecurity." See https://forum.effectivealtruism.org/posts/qoB8MHe94kCEZyswd/i-want-future-perfect-but-for-science-publications (GW, IR) for the grantee's original vision.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Federation for American Scientists 1,000,000.00 4 2022-05 AI safety|Migration policy/high-skilled migration https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a researcher and research assistant to work on high-skill immigration and AI policy at FAS for three years."

Other notes: Intended funding timeframe in months: 36.
Berkeley Existential Risk Initiative 155,000.00 16 2022-05 AI safety https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft."
Apart Research 95,000.00 21 2022-05 AI safety/talent pipeline https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Organizational general support

Intended use of funds: Grant to "support the creation of an AI Safety organization which will create a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field."
Ought 5,000,000.00 1 2022-05 AI safety https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Organizational general support

Intended use of funds: Grant to "support Ought’s work building Elicit, a language-model based research assistant."

Donor reason for selecting the donee: The grant description says: "This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems."
Trojan Detection Challenge at NeurIPS 2022 50,000.00 22 2022-05 AI safety https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met."

Other notes: Intended funding timeframe in months: 1.
Brian Christian 300,000.00 10 2022-05 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support the completion of a book which explores the nature of human values and the implications for aligning AI with human preferences."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
University of Utah (Earmark: Daniel Brown) 280,000.00 12 2022-05 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research on value alignment in AI systems, practical algorithms for efficient value alignment verification, and user studies and experiments to test these algorithms."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
AI Safety Support 200,000.00 14 2022-05 AI safety/talent pipeline https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for general funding for community building and managing the talent pipeline for AI alignment researchers. AI Safety Support’s work includes one-on-one coaching, events, and research training programs."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
University of Cambridge (Earmark: Gabriel Recchia) 380,000.00 8 2022-05 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research on how to fine-tune GPT-3 models to identify flaws in other fine-tuned language models' arguments for the correctness of their outputs, and to test whether these help nonexpert humans successfully judge such arguments."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Association for Long Term Existence and Resilience 320,000.00 9 2022-05 AI safety|Biosecurity and pandemic preparedness https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support ALTER, an academic research and advocacy organization, which hopes to investigate, demonstrate, and foster useful ways to improve the future in the short term, and to safeguard and improve the long-term trajectory of humanity. The organization's initial focus is building bridges to academia via conferences and grants to find researchers who can focus on AI safety, and on policy for reducing biorisk."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
University of California, Berkeley (Earmark: Anca Dragan) 800,000.00 5 2022-05 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a project to develop interactive AI algorithms for alignment that can uncover the causal features in human reward systems, and thereby help AI systems learn underlying human values that generalize to new situations."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Prometheus Science Bowl 100,000.00 18 2022-05 AI safety/talent pipeline https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support a competition for work on Eliciting Latent Knowledge, an open problem in AI alignment, for talented high school and college students who are participating in Prometheus Science Bowl."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Cornell University (Earmark: Lionel Levine) 1,500,000.00 3 2022-04 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support Prof. Levine, as well as students and collaborators, to work on alignment theory research at the Cornell math department."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).

Other notes: The June 2022 update https://ftxfuturefund.org/future-fund-june-2022-update/ by the FTX Future Fund highlights the grant as one of its example grants.
AI Risk Public Materials Competition 40,000.00 23 2022-04 AI safety https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support two competitions to produce better public materials on the existential risk from AI."
EffiSciences 135,000.00 17 2022-04 Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support EffiSciences’s work promoting high impact research on global priorities (e.g. AI safety, biosecurity, and climate change) among French students and academics, and building up a community of people willing to work on important topics."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made close to the application window for the open call (2022-02-28 to 2022-03-21).

Other notes: Affected countries: France.
ML Safety Scholars Program 490,000.00 7 2022-04 AI safety https://ftxfuturefund.org/our-regrants/ -- Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety."

Other notes: Intended funding timeframe in months: 2.
Columbia University (Earmark: Claudia Shi) 100,000.00 18 2022-04 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support the work of a PhD student [Claudia Shi] working on AI safety at Columbia University."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21).
Intended funding timeframe in months: 36
Siddharth Hiregowdara 100,000.00 18 2022-03 AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=open-call -- Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support the production of high quality materials for learning about AI safety work."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made close to the application window for the open call (2022-02-28 to 2022-03-21).
Lightcone Infrastructure 2,000,000.00 2 2022-02 Effective altruism|AI safety https://ftxfuturefund.org/our-grants/?_funding_stream=ad-hoc -- Donation process: This grant is part of staff-led grantmaking by FTX Future Fund. https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Staff_led_grantmaking_in_more_detail (GW, IR) says: "Unlike the open call and regranting, these grants and investments are not a test of a particular potentially highly scalable funding model. These are projects we funded because we became aware of them and thought they were good ideas."

Intended use of funds (category): Organizational general support

Intended use of funds: Grant to "support Lightcone’s ongoing projects including running the LessWrong forum, hosting conferences and events, and maintaining an office space for Effective Altruist organizations."

Similarity to other donors

Sorry, we couldn't find any similar donors.