This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2023. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
|Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions)||Survival and Flourishing|
|Best overview URL||https://survivalandflourishing.fund/|
|Page on philosophy informing donations||https://survivalandflourishing.fund/|
|Grant application process page||https://survivalandflourishing.fund/|
|Regularity with which donor updates donations data||semiannual refresh|
|Regularity with which Donations List Website updates donations data (after donor update)||irregular|
|Lag with which donor updates donations data||days|
|Lag with which Donations List Website updates donations data (after donor update)||months|
|Data entry method on Donations List Website||Manual (no scripts used)|
Brief history: The Survival and Flourishing Fund (SFF) started operation in the third quarter of 2019 as a way to continue the grantmaking portion of Berkeley Existential Risk Initiative (BERI) so that BERI could focus on its core mission. While SFF initially had some funds of its own to allocate, by 2021 it had spent those funds, and its main role became running an S-process (simulation process) to help participating donors allocate their own funds, rather than SFF holding or distributing the funds itself. SFF has used the moniker "virtual fund" to describe itself. Participating donors have included Jaan Tallinn (the biggest donor), Jed McCaleb, and David Marble (via The Casey and Family Foundation). https://www.youtube.com/watch?v=jWivz6KidkI has an overview of the S-process and some of SFF's history.
Brief notes on broad donor philosophy and major focus areas: SFF's general focus is "to bring financial support to organizations working to improve humanity’s long-term prospects for survival and flourishing." Three things to note: (1) "organizations": SFF makes grants to organizations; there is another organization Survival and Flourishing (SAF) that makes grants to individuals. (2) "long-term prospects": SFF's funding is mainly evaluated based on longtermist principles. (3) "survival and flourishing": the word "survival" hints at the salience given to global catastrophic risks. With that said, not all grantees may see themselves as longtermist organizations.
Notes on grant decision logistics: The first grant round (2019 Q3) was off-cycle and unusual; since then, each grant round has used an S-process (simulation process) that aggregates the opinions of multiple recommenders across grantees and allows multiple funders to decide how much weight to place on each recommender. https://www.youtube.com/watch?v=jWivz6KidkI describes the S-process in detail. The S-process-based grant rounds are run twice a year. For each grant round, applications open to grantees in the first quarter of the half-year, and the S-process to decide the grants happens in the second quarter, with results usually made public around the middle-to-end of the fifth month. The set of recommenders as well as the set of participating funders can vary between grant rounds.
Notes on grant publication logistics: The grant recommendations for each grant round are published on a page dedicated to the grant round (for instance, https://survivalandflourishing.fund/sff-2021-h2-recommendations for the second half of 2021). The full list of grant recommendations is also added to the home page, possibly with some lag. SFF does not include information on whether and when the grant was actually made; the donor Jaan Tallinn publishes information on actual grantmaking (updated annually) at https://jaan.online/philanthropy/ and this generally matches SFF's recommendations.
Notes on grant financing: In 2019 and 2020, SFF had funds of its own that it was granting via the grant rounds. Starting in 2019 Q4, it also began using the grant rounds to make funding recommendations to other funders. Starting in 2021 H1, it no longer had funds of its own, so it was only making recommendations to other funders, i.e., acting as a "virtual fund." Funders who have participated in SFF's grant rounds include Jaan Tallinn, Jed McCaleb, and David Marble (allocating money for The Casey and Family Foundation). Grants for each of these, including ones made via SFF's S-process, are listed on their respective donor pages on this site.
Miscellaneous notes: In https://www.youtube.com/watch?v=jWivz6KidkI Andrew Critch describes the S-process in more detail. The process involves the use of marginal value functions, several rounds of discussion between recommenders, funders making a decision on the weight to allocate to each recommender, a multi-step allocation rotating between funders to avoid bystander effects and pile-on effects between funders, and funders ultimately having the final say in whether/how much to fund, with the outcome of the S-process being only a recommendation.
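The weighted aggregation at the core of the S-process can be sketched in code. The following is a toy simplification under stated assumptions (the real S-process adds several rounds of discussion, a multi-step rotation between funders, and funder discretion over the final amounts; the organization names, dollar figures, and marginal value functions below are purely illustrative, not actual SFF data). Each recommender supplies, per grantee, a marginal value function giving the value of the next dollar as a function of dollars already granted; a funder assigns a weight to each recommender; the budget is then allocated greedily to whichever grantee currently has the highest weighted marginal value.

```python
# Toy sketch of an S-process-style allocation (hypothetical simplification).
# recommenders: {name: {grantee: marginal_value_fn}}; weights: {name: float}.

def allocate(budget, recommenders, weights, step=10_000):
    """Greedily allocate `budget` in `step`-sized chunks to whichever
    grantee has the highest weighted marginal value at its current
    funding level (ties broken alphabetically for determinism)."""
    grantees = sorted({g for mvs in recommenders.values() for g in mvs})
    granted = {g: 0 for g in grantees}
    for _ in range(budget // step):
        def weighted_mv(g):
            # Sum each recommender's marginal value, scaled by the
            # funder's weight; recommenders with no opinion contribute 0.
            return sum(w * recommenders[name].get(g, lambda x: 0)(granted[g])
                       for name, w in weights.items())
        best = max(grantees, key=weighted_mv)
        granted[best] += step
    return granted

# Illustrative example: two recommenders with diminishing marginal value
# functions, and a funder who weights recommender A twice as much as B.
recommenders = {
    "A": {"Org1": lambda x: 100 / (1 + x / 50_000),
          "Org2": lambda x: 60 / (1 + x / 50_000)},
    "B": {"Org2": lambda x: 120 / (1 + x / 50_000)},
}
weights = {"A": 2.0, "B": 1.0}
print(allocate(200_000, recommenders, weights))
# → {'Org1': 90000, 'Org2': 110000}
```

Note the design choice: because marginal values diminish as a grantee accumulates funding, the greedy loop naturally spreads money across grantees instead of dumping the whole budget on the single top-ranked organization, which mirrors the counterfactual-delegation simulations described above.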
This entity is also a donee.
Full donor page for donor Survival and Flourishing Fund
|Donors list page||https://aiimpacts.org/donate/|
|Open Philanthropy Project grant review||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support|
|Org Watch page||https://orgwatch.issarice.com/?organization=AI+Impacts|
|Key people||Katja Grace|
Full donee page for donee AI Impacts
|Cause area||Count||Median||Mean||Minimum||10th percentile||20th percentile||30th percentile||40th percentile||50th percentile||60th percentile||70th percentile||80th percentile||90th percentile||Maximum|
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: Cause area classification used here may not match that used by donor for all cases.
|Cause area||Number of donations||Total||2019|
|AI safety (filter this donor)||1||70,000.00||70,000.00|
Skipping spending graph as there is less than one year's worth of donations.
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Affected influencers||Document scope||Cause area||Notes|
|2021 AI Alignment Literature Review and Charity Comparison (GW, IR)||2021-12-23||Ben Hoskin||Effective Altruism Forum||Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Foundation||Future of Humanity Institute Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours||Survival and Flourishing Fund||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, which is where the organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. 
His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.|
|2020 AI Alignment Literature Review and Charity Comparison (GW, IR)||2020-12-21||Ben Hoskin||Effective Altruism Forum||Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund||Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours||Survival and Flourishing Fund||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.|
|2019 AI Alignment Literature Review and Charity Comparison (GW, IR)||2019-12-19||Ben Hoskin||Effective Altruism Forum||Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund||Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse||Survival and Flourishing Fund||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.|
Graph of all donations, showing the timeframe of donations
|Amount (current USD)||Amount rank (out of 1)||Donation date||Cause area||URL||Influencer||Notes|
|70,000.00||1||||AI safety||https://jaan.online/philanthropy/donations.html||Alex Flint Alex Zhu Andrew Critch Eric Rogstad Oliver Habryka||Donation process: Part of the Survival and Flourishing Fund's 2019 Q4 grants https://survivalandflourishing.fund/sff-2019-q4-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different Recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."
Intended use of funds (category): Organizational general support
Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this November 2019 round of grants is SFF's second round.
Other notes: The grant round also includes a grant from Jaan Tallinn ($30,000) to the same grantee (AI Impacts). Percentage of total donor spend in the corresponding batch of donations: 7.61%; announced: 2019-12-15.