Center on Long-Term Risk donations received

This is an online portal with information on publicly announced donations (or donations shared with permission) that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. The current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of March 2023. See the about page for more details.

Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

- Basic donee information
- Donee donation statistics
- Donation amounts by donor and year for donee Center on Long-Term Risk
- Full list of documents in reverse chronological order
- Full list of donations in reverse chronological order

Basic donee information

We do not have any donee information for Center on Long-Term Risk in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 1 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000
Global catastrophic risks | 1 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000 | 1,218,000

Donation amounts by donor and year for donee Center on Long-Term Risk

Donor | Total | 2021
Jaan Tallinn | 1,218,000.00 | 1,218,000.00
Total | 1,218,000.00 | 1,218,000.00

Full list of documents in reverse chronological order (3 documents)

Each document entry below lists: Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
Title (URL linked): 2021 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2021-12-23
Author: Ben Hoskin
Publisher: Effective Altruism Forum
Affected donors: Ben Hoskin | Effective Altruism Funds: Long-Term Future Fund | Survival and Flourishing Fund | FTX Foundation
Affected donees: Future of Humanity Institute | Centre for the Governance of AI | Center for Human-Compatible AI | Machine Intelligence Research Institute | Global Catastrophic Risk Institute | Centre for the Study of Existential Risk | OpenAI | Google Deepmind | Anthropic | Alignment Research Center | Redwood Research | Ought | AI Impacts | Global Priorities Institute | Center on Long-Term Risk | Centre for Long-Term Resilience | Rethink Priorities | Convergence Analysis | Stanford Existential Risk Initiative | Effective Altruism Funds: Long-Term Future Fund | Berkeley Existential Risk Initiative | 80,000 Hours
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to: where each organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
Title (URL linked): Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) (GW, IR)
Publication date: 2021-12-14
Author: Zvi Mowshowitz
Publisher: LessWrong
Affected donors: Survival and Flourishing Fund | Jaan Tallinn | Jed McCaleb | The Casey and Family Foundation
Affected donees: Effective Altruism Funds: Long-Term Future Fund | Center on Long-Term Risk | Alliance to Feed the Earth in Disasters | The Centre for Long-Term Resilience | Lightcone Infrastructure | Effective Altruism Funds: Infrastructure Fund | Centre for the Governance of AI | Ought | New Science Research | Berkeley Existential Risk Initiative | AI Objectives Institute | Topos Institute | Emergent Ventures India | European Biostasis Foundation | Laboratory for Social Minds | PrivateARPA | Charter Cities Institute
Affected influencers: Survival and Flourishing Fund | Beth Barnes | Oliver Habryka | Zvi Mowshowitz
Document scope: Miscellaneous commentary
Cause area: Longtermism | AI safety | Global catastrophic risks
Notes: In this lengthy post, Zvi Mowshowitz, who was one of the recommenders for the Survival and Flourishing Fund's 2021 H2 grant round based on the S-process, describes his experience with the process, his impressions of several of the grantees, and the implications for what kinds of grant applications are most likely to succeed. Zvi says that the grant round suffered from the problem of Too Much Money (TMM): there was way more money than any individual recommender felt comfortable granting, and just about enough money for the combined preferences of all recommenders, which meant that any recommender could unilaterally push a particular grantee through. The post has several other observations and attracts several comments.
Title (URL linked): 2020 AI Alignment Literature Review and Charity Comparison (GW, IR)
Publication date: 2020-12-21
Author: Ben Hoskin
Publisher: Effective Altruism Forum
Affected donors: Ben Hoskin | Effective Altruism Funds: Long-Term Future Fund | Open Philanthropy | Survival and Flourishing Fund
Affected donees: Future of Humanity Institute | Center for Human-Compatible AI | Machine Intelligence Research Institute | Global Catastrophic Risk Institute | Centre for the Study of Existential Risk | OpenAI | Berkeley Existential Risk Initiative | Ought | Global Priorities Institute | Center on Long-Term Risk | Center for Security and Emerging Technology | AI Impacts | Leverhulme Centre for the Future of Intelligence | AI Safety Camp | Future of Life Institute | Convergence Analysis | Median Group | AI Pulse | 80,000 Hours
Affected influencers: Survival and Flourishing Fund
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.

Full list of donations in reverse chronological order (1 donation)

[Graph: top 10 donors by amount, showing the timeframe of their donations]

Donor: Jaan Tallinn
Amount (current USD): 1,218,000.00
Amount rank (out of 1): 1
Donation date: 2021-10
Cause area: Global catastrophic risks
URL: https://survivalandflourishing.fund/sff-2021-h2-recommendations
Influencer: Survival and Flourishing Fund | Beth Barnes | Oliver Habryka | Zvi Mowshowitz

Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective; a simplified illustrative sketch of the allocation idea appears at the end of this page.

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Zvi Mowshowitz, one of the recommenders, writes in https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) "I was excited by the detailed contents of what they are working on, relative to the baseline the applications set for excitement, but their focus on s-risks was concerning to me. I don’t want to have the debate on this, but I consider concerns about s-risks a bigger thing to be concerned about right now than actual s-risks. They do have a reasonable plan to mitigate the risk of concern about s-risk, and are saying many of the right things when asked, so I came around to it being worth proceeding."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the first one with a grant to the grantee.

Other notes: The other two funders in this SFF grant round (Jed McCaleb and The Casey and Family Foundation) do not make grants to the Center on Long-Term Risk. Percentage of total donor spend in the corresponding batch of donations: 13.75%; announced: 2021-11-20.
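
The "Donation process" notes above describe the S-process at a high level: recommenders specify marginal utility functions for funding each application, and money tends to flow toward applications that at least one recommender values highly. The sketch below is a minimal, hypothetical illustration of that core idea under stated assumptions; the function greedy_allocate, the organization names, the utility curves, and the chunk size are invented for illustration and are not SFF's actual implementation, data, or parameters.

```python
# Minimal, hypothetical sketch of an S-process-style allocation step.
# Illustrative only; not SFF's actual code, utility curves, or parameters.
from typing import Callable, Dict


def greedy_allocate(
    budget: float,
    marginal_utility: Dict[str, Callable[[float], float]],
    step: float = 10_000.0,
) -> Dict[str, float]:
    """Allocate `budget` in `step`-sized chunks, each time to the application
    whose next chunk has the highest marginal utility under one recommender's
    stated utility functions."""
    allocation = {name: 0.0 for name in marginal_utility}
    remaining = budget
    while remaining > 0:
        # Find the application whose next dollar is currently most valuable.
        best = max(marginal_utility,
                   key=lambda name: marginal_utility[name](allocation[name]))
        if marginal_utility[best](allocation[best]) <= 0:
            break  # no application values additional funding
        grant = min(step, remaining)
        allocation[best] += grant
        remaining -= grant
    return allocation


# Hypothetical diminishing-returns curves: the value of the next dollar falls
# as an application receives more funding.
curves = {
    "Org A": lambda funded: 1.0 / (1.0 + funded / 500_000),
    "Org B": lambda funded: 0.6 / (1.0 + funded / 2_000_000),
}

print(greedy_allocate(1_000_000, curves))
```

In the process as quoted above, funders also specify utility functions for deferring to each recommender and can make final adjustments; this sketch omits those layers and models only a single recommender's budget slice, which is why an application that excites even one recommender can end up funded.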