This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2025. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
We do not have any donee information for Convergence Analysis in our system.
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 4 | 13,000 | 40,000 | 10,000 | 10,000 | 10,000 | 13,000 | 13,000 | 13,000 | 34,000 | 34,000 | 103,000 | 103,000 | 103,000 |
Global catastrophic risks | 2 | 10,000 | 11,500 | 10,000 | 10,000 | 10,000 | 10,000 | 10,000 | 10,000 | 13,000 | 13,000 | 13,000 | 13,000 | 13,000 |
AI safety | 2 | 34,000 | 68,500 | 34,000 | 34,000 | 34,000 | 34,000 | 34,000 | 34,000 | 103,000 | 103,000 | 103,000 | 103,000 | 103,000 |
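The summary statistics above can be reproduced from the four underlying donation amounts listed further down this page. The following is a minimal Python sketch; the nearest-rank (lower-value) percentile convention is an assumption inferred from the table, not something documented by the portal.

```python
import math

# Donation amounts (current USD) to Convergence Analysis listed on this page.
amounts_by_cause = {
    "Overall": [10_000, 13_000, 34_000, 103_000],
    "Global catastrophic risks": [10_000, 13_000],
    "AI safety": [34_000, 103_000],
}

def nearest_rank(sorted_values, p):
    """Smallest value such that at least p% of the data is at or below it."""
    n = len(sorted_values)
    index = max(math.ceil(p / 100 * n), 1) - 1
    return sorted_values[index]

for cause, values in amounts_by_cause.items():
    values = sorted(values)
    summary = {
        "Count": len(values),
        "Median": nearest_rank(values, 50),
        "Mean": sum(values) / len(values),
        "Minimum": min(values),
        **{f"{p}th percentile": nearest_rank(values, p) for p in range(10, 100, 10)},
        "Maximum": max(values),
    }
    print(cause, summary)
```

Run as-is, this reproduces each row of the table, including the convention of reporting the lower of the two middle values as the median for even-sized samples.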
Donor | Total | 2021 | 2020 |
---|---|---|---|
Jaan Tallinn (filter this donee) | 160,000.00 | 150,000.00 | 10,000.00 |
Total | 160,000.00 | 150,000.00 | 10,000.00 |
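As a cross-check on the per-year totals above, here is a minimal Python sketch; the year assigned to each donation follows the SFF grant round described in the donation notes further down the page.

```python
# Donations to Convergence Analysis listed on this page, with the year of the
# corresponding SFF grant round (2021 H2, 2021 H1, 2021 H1, 2020 H1).
donations = [
    ("Jaan Tallinn", 2021, 34_000),
    ("Jaan Tallinn", 2021, 103_000),
    ("Jaan Tallinn", 2021, 13_000),
    ("Jaan Tallinn", 2020, 10_000),
]

totals_by_year = {}
for donor, year, amount in donations:
    totals_by_year[year] = totals_by_year.get(year, 0) + amount

print(totals_by_year)                # {2021: 150000, 2020: 10000}
print(sum(totals_by_year.values()))  # 160000
```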
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Larks | Effective Altruism Forum | Larks; Effective Altruism Funds: Long-Term Future Fund; Survival and Flourishing Fund; FTX Future Fund | Future of Humanity Institute; Centre for the Governance of AI; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Google Deepmind; Anthropic; Alignment Research Center; Redwood Research; Ought; AI Impacts; Global Priorities Institute; Center on Long-Term Risk; Centre for Long-Term Resilience; Rethink Priorities; Convergence Analysis; Stanford Existential Risk Initiative; Effective Altruism Funds: Long-Term Future Fund; Berkeley Existential Risk Initiative; 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, i.e., where each organization would like to see funds go, other than to itself; the Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure. |
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund | Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Berkeley Existential Risk Initiative; Ought; Global Priorities Institute; Center on Long-Term Risk; Center for Security and Emerging Technology; AI Impacts; Leverhulme Centre for the Future of Intelligence; AI Safety Camp; Future of Life Institute; Convergence Analysis; Median Group; AI Pulse; 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint. |
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute; Future of Humanity Institute; Center for Human-Compatible AI; Centre for the Study of Existential Risk; Global Catastrophic Risk Institute; Global Priorities Institute; Australian National University; Berkeley Existential Risk Initiative; Ought; AI Impacts; OpenAI; Effective Altruism Foundation; Foundational Research Institute; Median Group; Convergence Analysis | | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues." |
[Graph: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations.]
Donor | Amount (current USD) | Amount rank (out of 4) | Donation date | Cause area | URL | Influencer | Notes |
---|---|---|---|---|---|---|---|
Jaan Tallinn | 34,000.00 | 2 | | AI safety | https://survivalandflourishing.fund/sff-2021-h2-recommendations | Survival and Flourishing Fund; Beth Barnes; Oliver Habryka; Zvi Mowshowitz | Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants, based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective; a toy sketch of this kind of marginal-utility allocation appears after this table. Intended use of funds (category): Direct project expenses. Intended use of funds: Grant to support "Research on AI & International Relations". Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the third one with grants to the grantee. Other notes: Although Jed McCaleb and The Casey and Family Foundation also participate as funders in this grant round, they do not make any grants to the grantee. Zvi Mowshowitz, one of the recommenders in the grant round, writes a post https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) about the round that does not seem to mention this grant. Percentage of total donor spend in the corresponding batch of donations: 3.84%; announced: 2021-11-20. |
Jaan Tallinn | 103,000.00 | 1 | | AI safety | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund; Ben Hoskin; Katja Grace; Oliver Habryka; Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants, based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." Intended use of funds (category): Direct project expenses. Intended use of funds: Grant to support "Convergence: Project AI Clarity". Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the second with a grant to the grantee. Other notes: The grant round also includes a $13,000 grant to Convergence Analysis for Convergence. Although Jed McCaleb also participates as a funder in this grant round, he does not make any grants to this grantee. Percentage of total donor spend in the corresponding batch of donations: 10.83%. |
Jaan Tallinn | 13,000.00 | 3 | | Global catastrophic risks | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund; Ben Hoskin; Katja Grace; Oliver Habryka; Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants, based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." Intended use of funds (category): Direct project expenses. Intended use of funds: Grant to support "Convergence". Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round and the second with a grant to the grantee. Other notes: The grant round also includes a $103,000 grant to Convergence Analysis for Convergence: Project AI Clarity. Although Jed McCaleb also participates as a funder in this grant round, he does not make any grants to this grantee. Percentage of total donor spend in the corresponding batch of donations: 1.37%. |
Jaan Tallinn | 10,000.00 | 4 | | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund; Alex Zhu; Andrew Critch; Jed McCaleb; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and was open until 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." Intended use of funds (category): Organizational general support. Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round and the first with a grant to the grantee. However, a previous grant to Modeling Cooperation had been made via the grantee (Convergence Analysis). Other notes: Although the Survival and Flourishing Fund and Jed McCaleb also participate as funders in this grant round, neither of them makes a grant to the grantee. Percentage of total donor spend in the corresponding batch of donations: 1.09%. |
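The S-process descriptions quoted in the donation notes above center on marginal utility functions: each application gets a function describing how valuable each additional dollar of funding would be, and the process favors applications that at least one recommender values highly at the margin. The sketch below is a toy greedy allocator in that spirit only; it is not the actual SFF implementation, and the project names, budget, step size, and utility functions are hypothetical.

```python
import heapq

def allocate(budget, marginal_utility, step=1_000):
    """Toy greedy allocation: repeatedly give the next `step` dollars to the
    application with the highest marginal utility at its current funding level."""
    funded = {name: 0 for name in marginal_utility}
    # Max-heap keyed on (negated) current marginal utility.
    heap = [(-mu(0), name) for name, mu in marginal_utility.items()]
    heapq.heapify(heap)
    remaining = budget
    while remaining >= step and heap:
        neg_mu, name = heapq.heappop(heap)
        if -neg_mu <= 0:
            break  # no application has positive marginal utility left
        funded[name] += step
        remaining -= step
        heapq.heappush(heap, (-marginal_utility[name](funded[name]), name))
    return funded

# Hypothetical applications with declining marginal utility per extra dollar funded.
applications = {
    "Project A": lambda x: max(0.0, 1.0 - x / 120_000),
    "Project B": lambda x: max(0.0, 0.8 - x / 40_000),
    "Project C": lambda x: max(0.0, 0.5 - x / 20_000),
}

print(allocate(200_000, applications))
```

In the actual S-process, as described in the quoted text, funders additionally specify utility functions for deferring to each recommender, which determines how much weight each recommender's marginal utility functions receive; that layer is omitted from this sketch.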