This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of March 2023. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
|Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions)||Centre for the Study of Existential Risk|
|Best overview URL||https://jaan.online/philanthropy/|
|Regularity with which donor updates donations data||annual refresh|
|Regularity with which Donations List Website updates donations data (after donor update)||irregular|
|Lag with which donor updates donations data||months|
|Lag with which Donations List Website updates donations data (after donor update)||months|
|Data entry method on Donations List Website||Manual (no scripts used)|
|Org Watch page||https://orgwatch.issarice.com/?person=Jaan+Tallinn|
Brief history: Tallinn is a co-founder of Skype and Kazaa and one of the earliest wealthy supporters of organizations working in AI safety, along with Peter Thiel. In 2011, he had a conversation with Holden Karnofsky sharing his thoughts on AI safety and in particular the work of the Singularity Institute (SI), the former name of the Machine Intelligence Research Institute. See https://groups.yahoo.com/neo/groups/givewell/conversations/topics/287 and https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) for details. Tallinn played a significant role in financing the Berkeley Existential Risk Initiative (BERI)'s grantmaking operations, and later in funding the Survival and Flourishing Fund (SFF). In 2020, Tallinn prepared a philanthropy pledge https://jaan.online/philanthropy/ for his grantmaking over the next five years, and also indicated a plan to switch more to making direct grants using SFF's S-process, rather than giving funds to organizations such as BERI and SFF for regranting.
Brief notes on broad donor philosophy and major focus areas: https://jaan.online/philanthropy/ says: "the primary purpose of my philanthropy is to reduce existential risks to humanity from advanced technologies, such as AI. i currently believe that this cause scores the highest according to the framework used in effective altruism: (1) importance [...] (2) tractability [...] (3) neglectedness. [...] i'm likely to pass on all other opportunities — especially popular ones, like supporting education, healthcare, arts, and various social causes. [...] i'm considering (as of 2020) a few exceptions — eg, donating to more neglected climate interventions [...] i should also mention that i'm especially fond of software projects as philanthropic targets [...]"
Notes on grant decision logistics: Tallinn plans to use the Survival and Flourishing Fund (SFF)'s S-process (simulation process) to direct most of his grantmaking, as described e.g. at http://survivalandflourishing.fund/sff-2019-q4-recommendations and other grant rounds; a sketch of the general flavor of the allocation mechanism follows this paragraph. He may also make one-off direct grants (at most $100,000 per grant) for time-sensitive funding needs, but encourages grantees to also apply for the next SFF grant round. Tallinn has historically donated money to BERI and SFF for regranting, but does not expect to make similar donations for regranting in the future. Tallinn may also engage in small amounts of individual regranting and individual gifts.
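The toy sketch below (in Python) illustrates the general flavor of an S-process-style allocation; it is not SFF's actual implementation, and the organization names, dollar figures, scalar deference weights, piecewise-constant representation of marginal utility functions, and greedy increment-by-increment allocation are all assumptions made for illustration. The property it reproduces is the one quoted in the grant notes below: money flows to whatever at least one (deference-weighted) recommender values most at the margin, rather than requiring consensus among recommenders.

```python
# Toy sketch of an S-process-style allocation. Not SFF's actual code;
# all names, numbers, and the piecewise-constant utility representation
# are illustrative assumptions.

BUDGET = 1_000_000   # funder's budget for the round (assumed)
STEP = 10_000        # allocate in small increments to approximate marginal reasoning

# Each recommender specifies, per application, tranches of
# (dollar amount, utility per dollar), in decreasing order of utility.
marginal_utility = {
    "recommender_A": {
        "org_1": [(200_000, 9.0), (300_000, 4.0)],
        "org_2": [(500_000, 6.0)],
    },
    "recommender_B": {
        "org_1": [(100_000, 7.0)],
        "org_2": [(400_000, 8.0), (200_000, 2.0)],
    },
}

# The funder's deference to each recommender, reduced here to a scalar
# weight (the real process uses utility functions over delegation amounts
# and simulates many counterfactual delegation scenarios).
deference = {"recommender_A": 1.0, "recommender_B": 0.8}

granted = {"org_1": 0, "org_2": 0}

def marginal(rec: str, org: str) -> float:
    """Utility per dollar of the next dollar to `org` under `rec`'s tranches,
    given how much the org has already been granted in total."""
    already = granted[org]
    for size, mu in marginal_utility[rec][org]:
        if already < size:
            return mu
        already -= size
    return 0.0  # all tranches exhausted

spent = 0
while spent < BUDGET:
    # Send the next increment wherever *some* recommender, weighted by the
    # funder's deference, sees the highest marginal utility. This is why the
    # process "favors funding things that at least one recommender is excited
    # to fund" rather than requiring consensus.
    value, org = max(
        (deference[rec] * marginal(rec, org), org)
        for rec in deference
        for org in granted
    )
    if value <= 0:
        break  # no recommender sees positive value in further grants
    granted[org] += STEP
    spent += STEP

print(granted)
```

With the assumed numbers, the sketch first fills recommender_A's high-value tranche for org_1, then switches to org_2 once recommender_B's deference-weighted marginal utility there dominates, ending at roughly an even split; changing any one recommender's tranches redirects money without any other recommender's agreement.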
Notes on grant financing: Tallinn donates his own money, but not always directly; in most cases (particularly when donating to US-based nonprofits) he donates money via (donor-advised funds managed by) Founders Pledge or Silicon Valley Community Foundation. He has also made direct gifts in cryptocurrency when not donating to US nonprofits.
Full donor page for donor Jaan Tallinn
|Donors list page||https://ought.org/about|
|Open Philanthropy Project grant review||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support|
|Org Watch page||https://orgwatch.issarice.com/?organization=Ought|
|Key people||Andreas Stuhlmüller|
Full donee page for donee Ought
|Cause area||Count||Median||Mean||Minimum||10th percentile||20th percentile||30th percentile||40th percentile||50th percentile||60th percentile||70th percentile||80th percentile||90th percentile||Maximum|
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: Cause area classification used here may not match that used by donor for all cases.
|Cause area||Number of donations||Total||2021|
|AI safety (filter this donor)||1||542,000.00||542,000.00|
Skipping spending graph as there is less than one year's worth of donations.
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Affected influencers||Document scope||Cause area||Notes|
|Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) (GW, IR)||2021-12-14||Zvi Mowshowitz||LessWrong||Survival and Flourishing Fund, Jaan Tallinn, Jed McCaleb, The Casey and Family Foundation||Effective Altruism Funds: Long-Term Future Fund, Center on Long-Term Risk, Alliance to Feed the Earth in Disasters, The Centre for Long-Term Resilience, Lightcone Infrastructure, Effective Altruism Funds: Infrastructure Fund, Centre for the Governance of AI, Ought, New Science Research, Berkeley Existential Risk Initiative, AI Objectives Institute, Topos Institute, Emergent Ventures India, European Biostasis Foundation, Laboratory for Social Minds, PrivateARPA, Charter Cities Institute||Survival and Flourishing Fund, Beth Barnes, Oliver Habryka, Zvi Mowshowitz||Miscellaneous commentary||Longtermism, AI safety, Global catastrophic risks||In this lengthy post, Zvi Mowshowitz, who was one of the recommenders for the Survival and Flourishing Fund's 2021 H2 grant round based on the S-process, describes his experience with the process, his impressions of several of the grantees, and implications for what kinds of grant applications are most likely to succeed. Zvi says that the grant round suffered from the problem of Too Much Money (TMM); there was way more money than any individual recommender felt comfortable granting, and just about enough money for the combined preferences of all recommenders, which meant that any recommender could unilaterally push a particular grantee through. The post has several other observations and attracts several comments.|
Graph of all donations, showing the timeframe of donations
|Amount (current USD)||Amount rank (out of 1)||Donation date||Cause area||URL||Influencer||Notes|
|542,000.00||1||--||AI safety||https://survivalandflourishing.fund/sff-2021-h2-recommendations||Survival and Flourishing Fund, Beth Barnes, Oliver Habryka, Zvi Mowshowitz||Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective.
Intended use of funds (category): Organizational general support
Donor reason for selecting the donee: Zvi Mowshowitz, one of the recommenders, writes in https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR): "Ought was a weird case, where I had the strong initial instinct that Ought, as I understood it, was doing a net harmful thing. [...] A lot of others' positivity seemed to reflect knowing the people involved, whereas I don’t know them at all. A lot of support seemed to come down to People Doing Thing being present, and faith that those people would look for net positive things and to avoid net bad things generally, and that they had an active eye towards AI Safety. [...] I wouldn’t be surprised to learn this was net harmful, but there was enough disagreement and upside in various ways that I concluded that my expectation was positive, so I no longer felt the need to actively try to stop others from funding."
Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the second one with a grant to the grantee.
Other notes: The other two funders in this SFF grant round (Jed McCaleb and The Casey and Family Foundation) did not make grants to Ought. In https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) Zvi Mowshowitz, one of the recommenders in the grant round, writes about his evaluation of Ought's agenda: "They are using GPT-3 to assist in research, to do things like generate questions to ask, or classify data, or do whatever else GPT-3 can do. The goal is to make research easier. However, because it’s good at the things GPT-3 is good at, this is going to be a much bigger deal for those looking to do performative science or publish papers or keep dumping more compute into the same systems over and over again, than it will help those trying to do something genuinely new and valuable. The hard part where one actually thinks isn’t being sped up, while the rest of the process is. Oh no. [...] I read a comment on LessWrong by Jessica Taylor questioning why one of MIRI’s latest plans wasn’t strictly worse than Ought [...] This frames the whole thing on a meta-level as a way to test a theory of how to build an aligned AI. As per Paul’s theory as I understand it, if you can (1) break up a given task into subcomponents and then (2) solve each subcomponent while (3) ensuring each subcomponent is aligned then that could solve the alignment problem with regard to the larger task, so testing to see what types of things can usefully be split into machine tasks, and whether those tasks can be solved, would be some sort of exploration in that direction under some theories. I notice I have both the ‘yeah sure I guess maybe’ instinct here and the mostly-integrated inner-Eliezer-style reaction that very strongly thinks that this represents fundamental confusion and is wrong. In any case, it’s another perspective, and Paul specifically is excited by this path." Percentage of total donor spend in the corresponding batch of donations: 6.12%; announced: 2021-11-20.
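As a rough consistency check (inferred arithmetic, not stated in the source): if this $542,000 grant is 6.12% of the batch, Tallinn's total spend in the batch works out to about $542,000 / 0.0612 ≈ $8.86 million.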