This is an online portal with information on publicly announced donations (or donations shared with permission) that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with his continued contributions (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
We do not have any donor information for FTX Future Fund in our system.
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 24 | 250,000 | 600,208 | 30,000 | 50,000 | 100,000 | 135,000 | 190,000 | 250,000 | 300,000 | 380,000 | 800,000 | 1,500,000 | 5,000,000 |
AI safety | 19 | 250,000 | 566,316 | 30,000 | 40,000 | 95,000 | 100,000 | 155,000 | 250,000 | 290,000 | 380,000 | 600,000 | 1,500,000 | 5,000,000 |
Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change | 1 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 | 135,000 |
AI safety|Biosecurity and pandemic preparedness | 2 | 190,000 | 255,000 | 190,000 | 190,000 | 190,000 | 190,000 | 190,000 | 190,000 | 320,000 | 320,000 | 320,000 | 320,000 | 320,000 |
AI safety|Migration policy | 1 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 | 1,000,000 |
Effective altruism|AI safety | 1 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 | 2,000,000 |
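The summary statistics above can be recomputed from the underlying donation amounts. Below is a minimal sketch using hypothetical amounts (the real "Overall" row covers all 24 donations); the exact percentile method the site uses is not documented here, so a nearest-rank definition is assumed.

```python
from statistics import mean, median

# Hypothetical donation amounts in USD; the real "Overall" row uses 24 donations.
amounts = [30_000, 100_000, 250_000, 320_000, 1_000_000]

def percentile(data, p):
    # Nearest-rank percentile: the smallest value with at least p% of the data
    # at or below it (one of several common percentile definitions).
    s = sorted(data)
    k = max(1, -(-p * len(s) // 100))  # ceil(p * n / 100), clamped to at least 1
    return s[k - 1]

summary = {
    "count": len(amounts),
    "median": median(amounts),
    "mean": mean(amounts),
    "min": min(amounts),
    "p90": percentile(amounts, 90),
    "max": max(amounts),
}
print(summary)
```

With a linear-interpolation definition (as used by NumPy's default `percentile`), the intermediate percentiles would differ slightly; the minimum, median, and maximum are the same under either definition.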
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: the cause area classification used here may not match the donor's own classification in all cases.
Cause area | Number of donations | Number of donees | Total | 2022 |
---|---|---|---|---|
AI safety | 19 | 18 | 10,760,000.00 | 10,760,000.00 |
Effective altruism|AI safety | 1 | 1 | 2,000,000.00 | 2,000,000.00 |
AI safety|Migration policy | 1 | 1 | 1,000,000.00 | 1,000,000.00 |
AI safety|Biosecurity and pandemic preparedness | 2 | 2 | 510,000.00 | 510,000.00 |
Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change | 1 | 1 | 135,000.00 | 135,000.00 |
Total | 24 | 23 | 14,405,000.00 | 14,405,000.00 |
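The per-cause-area rollup in the table above is a straightforward group-by over the donation records. Here is a sketch under an assumed record layout of (donee, cause area, amount) tuples; the field layout and the sample records are illustrative, not the repository's actual schema.

```python
from collections import defaultdict

# Hypothetical (donee, cause_area, amount) records; the real data has 24 donations.
donations = [
    ("Ought", "AI safety", 5_000_000),
    ("Lightcone Infrastructure", "Effective altruism|AI safety", 2_000_000),
    ("AI Impacts", "AI safety", 250_000),
]

# Aggregate per cause area: donation count, distinct donees, total amount.
stats = defaultdict(lambda: {"donations": 0, "donees": set(), "total": 0})
for donee, cause, amount in donations:
    row = stats[cause]
    row["donations"] += 1
    row["donees"].add(donee)  # a set, so repeat donees are counted once
    row["total"] += amount

# Emit rows in descending order of total, matching the table's sort order.
for cause, row in sorted(stats.items(), key=lambda kv: -kv[1]["total"]):
    print(cause, row["donations"], len(row["donees"]), row["total"])
```

The "Number of donees" column counts distinct donees per cause area, which is why the overall total (23) can be smaller than the number of donations (24).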
Skipping spending graph as there is at most one year’s worth of donations.
If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
Subcause area | Number of donations | Number of donees | Total | 2022 |
---|---|---|---|---|
AI safety | 15 | 14 | 10,115,000.00 | 10,115,000.00 |
Effective altruism|AI safety | 1 | 1 | 2,000,000.00 | 2,000,000.00 |
AI safety|Migration policy/high-skilled migration | 1 | 1 | 1,000,000.00 | 1,000,000.00 |
AI safety|Biosecurity and pandemic preparedness | 2 | 2 | 510,000.00 | 510,000.00 |
AI safety/talent pipeline | 3 | 3 | 395,000.00 | 395,000.00 |
AI safety/forecasting | 1 | 1 | 250,000.00 | 250,000.00 |
Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change | 1 | 1 | 135,000.00 | 135,000.00 |
Classified total | 24 | 23 | 14,405,000.00 | 14,405,000.00 |
Unclassified total | 0 | 0 | 0.00 | 0.00 |
Total | 24 | 23 | 14,405,000.00 | 14,405,000.00 |
Skipping spending graph as there is at most one year’s worth of donations.
Skipping spending graph as there is at most one year’s worth of donations.
Sorry, we couldn't find any influencer information.
Sorry, we couldn't find any disclosures information.
If you hover over a cell for a given country and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
Country | Number of donations | Number of donees | Total | 2022 |
---|---|---|---|---|
France | 1 | 1 | 135,000.00 | 135,000.00 |
Classified total | 1 | 1 | 135,000.00 | 135,000.00 |
Unclassified total | 23 | 22 | 14,270,000.00 | 14,270,000.00 |
Total | 24 | 23 | 14,405,000.00 | 14,405,000.00 |
Skipping spending graph as there is at most one year’s worth of donations.
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
Announcing the Future Fund’s AI Worldview Prize | 2022-09-23 | Nick Beckstead Leopold Aschenbrenner Avital Balwit William MacAskill Ketan Ramakrishnan | FTX Future Fund | FTX Future Fund | | | Request for critiques of donor strategy | AI safety | In this post, cross-posted to the EA Forum as https://forum.effectivealtruism.org/posts/W7C5hwq7sjdpTdrQF/announcing-the-future-fund-s-ai-worldview-prize (GW, IR), the Future Fund team announces its Worldview Prize, which seeks content that would cause the FTX Future Fund to significantly update its views regarding AI timelines and its perspective on how to approach AI safety. |
Future Fund June 2022 Update | 2022-06-30 | Nick Beckstead Leopold Aschenbrenner Avital Balwit William MacAskill Ketan Ramakrishnan | FTX Future Fund | FTX Future Fund | Manifold Markets ML Safety Scholars Program Andi Peng Braden Leach Thomas Kwa SecureBio Ray Amjad Apollo Academic Surveys Justin Mares Longview Philanthropy Atlas Fellowship Effective Ideas Blog Prize Ought Swift Centre for Applied Forecasting Federation for American Scientists Public Editor Project Quantified Uncertainty Research Institute Moncef Slaoui AI Impacts EA Critiques and Red Teaming Prize | | Broad donor strategy | Longtermism|AI safety|Biosecurity and pandemic preparedness|Effective altruism | This lengthy blog post, cross-posted to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update (GW, IR), goes into detail on the FTX Future Fund's grantmaking so far and the lessons learned from it. The post reports 262 grants and investments made, with $132 million in total spend. Three funding models are in use: regranting ($31 million so far), open call ($26 million so far), and staff-led grantmaking ($73 million so far). |
Some thoughts on recent Effective Altruism funding announcements. It's been a big week in Effective Altruism | 2022-03-03 | James Ozden | | Open Philanthropy FTX Future Fund FTX Community Fund FTX Climate Fund | Mercy For Animals Charity Entrepreneurship | | Miscellaneous commentary | Longtermism|Animal welfare|Global health and development|AI safety|Climate change | In this blog post, cross-posted to the EA Forum at https://forum.effectivealtruism.org/posts/Wpr5ssnNW5JPDDPvd/some-thoughts-on-recent-effective-altruism-funding (GW, IR), James Ozden discusses recent increases in funding by donors aligned with effective altruism (EA) and forecasts the amount of annual money moved by 2025. Highlights of the post: 1. The entry of the FTX Future Fund is expected to bring the proportion of funds allocated to longtermist causes more in line with what EA leaders think it should be (based on the data compiled at https://80000hours.org/2021/08/effective-altruism-allocation-resources-cause-areas/). 2. Grantmaking capacity needs to be scaled up to match the increase in available funds. 3. The EA movement may need to shift from marginal thinking to coordination dynamics, as its funding amounts are no longer marginal. 4. Entrepreneurs, founders, and incubators are needed. 5. We need to be more ambitious. |
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund | Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, i.e., where each organization would like to see funds go, other than itself; the Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final donation decision (rot13'd in the original post) is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure. |
Graph of top 10 donees (for donations with known year of donation) by amount, showing the timeframe of donations
Donee | Amount (current USD) | Amount rank (out of 24) | Cause area | URL | Influencer | Notes |
---|---|---|---|---|---|---|
Evan R. Murphy | 30,000.00 | 24 | AI safety | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Living expenses during project Intended use of funds: Grant to "support six months of independent research on interpretability and other AI safety topics." |
AI Impacts | 250,000.00 | 13 | AI safety/forecasting | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support rerunning the highly-cited survey “When Will AI Exceed Human Performance? Evidence from AI Experts” from 2016, analysis, and publication of results." |
University of California, Berkeley (Earmark: Sergey Levine) | 600,000.00 | 6 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support a project to study how large language models integrated with offline reinforcement learning pose a risk of machine deception and persuasion." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
AI Safety Camp | 290,000.00 | 11 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "partially support the salaries for AI Safety Camp’s two directors and to support logistical expenses at its physical camp." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
James Lin | 190,000.00 | 15 | AI safety|Biosecurity and pandemic preparedness | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. The grant recipient had written a blog post https://forum.effectivealtruism.org/posts/qoB8MHe94kCEZyswd/i-want-future-perfect-but-for-science-publications (GW, IR) on 2022-03-08 (during the grant application period) describing the idea. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "allow a reputable technology publication to engage 2-5 undergraduate student interns to write about topics including AI safety, alternative proteins, and biosecurity." See https://forum.effectivealtruism.org/posts/qoB8MHe94kCEZyswd/i-want-future-perfect-but-for-science-publications (GW, IR) for the grantee's original vision. Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
Federation for American Scientists | 1,000,000.00 | 4 | AI safety|Migration policy/high-skilled migration | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support a researcher and research assistant to work on high-skill immigration and AI policy at FAS for three years." Other notes: Intended funding timeframe in months: 36. |
Berkeley Existential Risk Initiative | 155,000.00 | 16 | AI safety | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support a NeurIPS competition applying human feedback in a non-language-model setting, specifically pretrained models in Minecraft." |
Apart Research | 95,000.00 | 21 | AI safety/talent pipeline | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Organizational general support Intended use of funds: Grant to "support the creation of an AI Safety organization which will create a platform to share AI safety research ideas and educational materials, connect people working on AI safety, and bring new people into the field." |
Ought | 5,000,000.00 | 1 | AI safety | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Organizational general support Intended use of funds: Grant to "support Ought’s work building Elicit, a language-model based research assistant." Donor reason for selecting the donee: The grant description says: "This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems." |
Trojan Detection Challenge at NeurIPS 2022 | 50,000.00 | 22 | AI safety | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support prizes for a trojan detection competition at NeurIPS, which involves identifying whether a deep neural network will suddenly change behavior if certain unknown conditions are met." Other notes: Intended funding timeframe in months: 1. |
Brian Christian | 300,000.00 | 10 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support the completion of a book which explores the nature of human values and the implications for aligning AI with human preferences." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
University of Utah (Earmark: Daniel Brown) | 280,000.00 | 12 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support research on value alignment in AI systems, practical algorithms for efficient value alignment verification, and user studies and experiments to test these algorithms." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
AI Safety Support | 200,000.00 | 14 | AI safety/talent pipeline | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant "for general funding for community building and managing the talent pipeline for AI alignment researchers. AI Safety Support’s work includes one-on-one coaching, events, and research training programs." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
University of Cambridge (Earmark: Gabriel Recchia) | 380,000.00 | 8 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support research on how to fine-tune GPT-3 models to identify flaws in other fine-tuned language models' arguments for the correctness of their outputs, and to test whether these help nonexpert humans successfully judge such arguments." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
Association for Long Term Existence and Resilience | 320,000.00 | 9 | AI safety|Biosecurity and pandemic preparedness | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support ALTER, an academic research and advocacy organization, which hopes to investigate, demonstrate, and foster useful ways to improve the future in the short term, and to safeguard and improve the long-term trajectory of humanity. The organization's initial focus is building bridges to academia via conferences and grants to find researchers who can focus on AI safety, and on policy for reducing biorisk." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
University of California, Berkeley (Earmark: Anca Dragan) | 800,000.00 | 5 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support a project to develop interactive AI algorithms for alignment that can uncover the causal features in human reward systems, and thereby help AI systems learn underlying human values that generalize to new situations." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
Prometheus Science Bowl | 100,000.00 | 18 | AI safety/talent pipeline | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support a competition for work on Eliciting Latent Knowledge, an open problem in AI alignment, for talented high school and college students who are participating in Prometheus Science Bowl." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). |
Cornell University (Earmark: Lionel Levine) | 1,500,000.00 | 3 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support Prof. Levine, as well as students and collaborators, to work on alignment theory research at the Cornell math department." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). Other notes: The June 2022 update https://ftxfuturefund.org/future-fund-june-2022-update/ by the FTX Future Fund highlights the grant as one of its example grants. |
AI Risk Public Materials Competition | 40,000.00 | 23 | AI safety | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support two competitions to produce better public materials on the existential risk from AI." |
EffiSciences | 135,000.00 | 17 | Effective altruism|AI safety|Biosecurity and pandemic preparedness|Climate change | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support EffiSciences’s work promoting high impact research on global priorities (e.g. AI safety, biosecurity, and climate change) among French students and academics, and building up a community of people willing to work on important topics." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made close to the application window for the open call (2022-02-28 to 2022-03-21). Other notes: Affected countries: France. |
ML Safety Scholars Program | 490,000.00 | 7 | AI safety | https://ftxfuturefund.org/our-regrants/ | -- | Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "fund a summer program for up to 100 students to spend 9 weeks studying machine learning, deep learning, and technical topics in safety." Other notes: Intended funding timeframe in months: 2. |
Columbia University (Earmark: Claudia Shi) | 100,000.00 | 18 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support the work of a PhD student [Claudia Shi] working on AI safety at Columbia University." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made shortly after the application window for the open call (2022-02-28 to 2022-03-21). Intended funding timeframe in months: 36 |
Siddharth Hiregowdara | 100,000.00 | 18 | AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=open-call | -- | Donation process: This grant is a result of the Future Fund's open call for applications originally announced on 2022-02-28 at https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) with a deadline of 2022-03-21. Intended use of funds (category): Direct project expenses Intended use of funds: Grant to "support the production of high quality materials for learning about AI safety work." Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of the open call https://forum.effectivealtruism.org/posts/2mx6xrDrwiEKzfgks/announcing-the-future-fund-1 (GW, IR) for applications; the grant is made close to the application window for the open call (2022-02-28 to 2022-03-21). |
Lightcone Infrastructure | 2,000,000.00 | 2 | Effective altruism|AI safety | https://ftxfuturefund.org/our-grants/?_funding_stream=ad-hoc | -- | Donation process: This grant is part of staff-led grantmaking by FTX Future Fund. https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Staff_led_grantmaking_in_more_detail (GW, IR) says: "Unlike the open call and regranting, these grants and investments are not a test of a particular potentially highly scalable funding model. These are projects we funded because we became aware of them and thought they were good ideas." Intended use of funds (category): Organizational general support Intended use of funds: Grant to "support Lightcone’s ongoing projects including running the LessWrong forum, hosting conferences and events, and maintaining an office space for Effective Altruist organizations." |
Sorry, we couldn't find any similar donors.