Jaan Tallinn donations made to AI Objectives Institute

This is an online portal with information on donations that were announced publicly (or have been shared with permission) that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site) but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, tutorial in README, request for feedback to EA Forum.


Basic donor information

Item | Value
Country | United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | Centre for the Study of Existential Risk
Wikipedia page | https://en.wikipedia.org/wiki/Jaan_Tallinn
Best overview URL | https://jaan.online/philanthropy/
Facebook username | jaan.tallinn
Website | https://jaan.online/
Donations URL | https://jaan.online/philanthropy/
LessWrong username | jaan
Regularity with which donor updates donations data | annual refresh
Regularity with which Donations List Website updates donations data (after donor update) | irregular
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | months
Data entry method on Donations List Website | Manual (no scripts used)
Org Watch page | https://orgwatch.issarice.com/?person=Jaan+Tallinn

Brief history: Tallinn is a co-founder of Skype and Kazaa and, along with Peter Thiel, one of the earliest wealthy supporters of organizations working on AI safety. In 2011, he had a conversation with Holden Karnofsky sharing his thoughts on AI safety and in particular the work of the Singularity Institute (SI), the former name of the Machine Intelligence Research Institute; see https://groups.yahoo.com/neo/groups/givewell/conversations/topics/287 and https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) for details. Tallinn played a significant role in financing the Berkeley Existential Risk Initiative (BERI)'s grantmaking operations and later in funding the Survival and Flourishing Fund (SFF). In 2020, Tallinn published a philanthropy pledge at https://jaan.online/philanthropy/ covering his grantmaking for the next five years, and also indicated a plan to switch more to making direct grants using SFF's S-process, rather than giving funds to organizations such as BERI and SFF for regranting.

Brief notes on broad donor philosophy and major focus areas: https://jaan.online/philanthropy/ says: "the primary purpose of my philanthropy is to reduce existential risks to humanity from advanced technologies, such as AI. i currently believe that this cause scores the highest according to the framework used in effective altruism: (1) importance [...] (2) tractability [...] (3) neglectedness. [...] i'm likely to pass on all other opportunities — especially popular ones, like supporting education, healthcare, arts, and various social causes. [...] i'm considering (as of 2020) a few exceptions — eg, donating to more neglected climate interventions [...] i should also mention that i'm especially fond of software projects as philanthropic targets [...]"

Notes on grant decision logistics: Tallinn plans to use the Survival and Flourishing Fund (SFF)'s S-process (simulation process) to direct most of his grantmaking, as described at e.g. http://survivalandflourishing.fund/sff-2019-q4-recommendations and in the writeups of other grant rounds. He may also make one-off direct grants (at most $100,000 per grant) for funding needs that are time-sensitive, but encourages such grantees to also apply for the next SFF grant round. Tallinn has historically donated money to BERI and SFF for regranting, but does not expect to make similar regranting donations in the future. He may also engage in small amounts of individual regranting and individual gifts.

Notes on grant financing: Tallinn donates his own money, but not always directly; in most cases (particularly when donating to US-based nonprofits) he donates money via (donor-advised funds managed by) Founders Pledge or Silicon Valley Community Foundation. He has also made direct gifts in cryptocurrency when not donating to US nonprofits.

Full donor page for donor Jaan Tallinn

Basic donee information

We do not have any donee information for the donee AI Objectives Institute in our system.

Full donee page for donee AI Objectives Institute

Donor–donee relationship

Item | Value

Donor–donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 1 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000
AI safety | 1 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000

Donation amounts by cause area and year


Note: Cause area classification used here may not match that used by donor for all cases.

Cause area | Number of donations | Total | 2021
AI safety | 1 | 485,000.00 | 485,000.00
Total | 1 | 485,000.00 | 485,000.00

Skipping spending graph as there is at most one year’s worth of donations.

Full list of documents in reverse chronological order (1 document)

Title (URL linked): Zvi's Thoughts on the Survival and Flourishing Fund (SFF) (GW, IR)
Publication date: 2021-12-14
Author: Zvi Mowshowitz
Publisher: LessWrong
Affected donors: Survival and Flourishing Fund | Jaan Tallinn | Jed McCaleb | The Casey and Family Foundation
Affected donees: Effective Altruism Funds: Long-Term Future Fund | Center on Long-Term Risk | Alliance to Feed the Earth in Disasters | The Centre for Long-Term Resilience | Lightcone Infrastructure | Effective Altruism Funds: Infrastructure Fund | Centre for the Governance of AI | Ought | New Science Research | Berkeley Existential Risk Initiative | AI Objectives Institute | Topos Institute | Emergent Ventures India | European Biostasis Foundation | Laboratory for Social Minds | PrivateARPA | Charter Cities Institute
Affected influencers: Survival and Flourishing Fund | Beth Barnes | Oliver Habryka | Zvi Mowshowitz
Document scope: Miscellaneous commentary
Cause area: Longtermism | AI safety | Global catastrophic risks
Notes: In this lengthy post, Zvi Mowshowitz, who was one of the recommenders for the Survival and Flourishing Fund's 2021 H2 grant round based on the S-process, describes his experience with the process, his impressions of several of the grantees, and implications for what kinds of grant applications are most likely to succeed. Zvi says that the grant round suffered from the problem of Too Much Money (TMM): there was way more money than any individual recommender felt comfortable granting, and just about enough money for the combined preferences of all recommenders, which meant that any recommender could unilaterally push a particular grantee through. The post has several other observations and attracts several comments.

Full list of donations in reverse chronological order (1 donation)

Graph of all donations (with known year of donation), showing the timeframe of donations

Amount (current USD): 485,000.00
Amount rank (out of 1): 1
Donation date: 2021-10
Cause area: AI safety
URL: https://survivalandflourishing.fund/sff-2021-h2-recommendations
Influencer: Survival and Flourishing Fund | Beth Barnes | Oliver Habryka | Zvi Mowshowitz

Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective.
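The S-process is described above only in qualitative terms. As a rough illustration of the "marginal utility function" idea, here is a minimal Python sketch of a greedy allocation over declining marginal-utility schedules. It is a simplification for illustration only, not SFF's actual implementation: all organization names, dollar figures, and function names below are invented, and the sketch omits the step where funders specify how much to defer to each recommender.

```python
# Illustrative sketch of an S-process-style allocation (hypothetical; not
# SFF's actual code). Each application has a declining marginal-utility
# schedule, and the funder's budget is allocated greedily, one increment at
# a time, to whichever application currently offers the highest marginal
# utility per dollar.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class UtilitySchedule:
    """Piecewise-constant marginal utility: utilities[i] is the marginal
    utility (per dollar) of the i-th increment of `step` dollars."""
    step: float
    utilities: List[float]
    granted: int = 0  # number of increments already funded

    def current_marginal_utility(self) -> float:
        if self.granted >= len(self.utilities):
            return 0.0
        return self.utilities[self.granted]


def allocate(budget: float, schedules: Dict[str, UtilitySchedule]) -> Dict[str, float]:
    """Greedily allocate `budget` across applications.

    `schedules` maps application name -> UtilitySchedule (in a fuller model,
    already weighted by how much the funder defers to the recommender who
    specified it). Returns application name -> dollars granted.
    """
    grants = {name: 0.0 for name in schedules}
    remaining = budget
    while remaining > 0:
        # Pick the application with the highest current marginal utility.
        name, sched = max(schedules.items(),
                          key=lambda kv: kv[1].current_marginal_utility())
        if sched.current_marginal_utility() <= 0:
            break  # no application has positive marginal utility left
        amount = min(sched.step, remaining)
        grants[name] += amount
        sched.granted += 1
        remaining -= amount
    return grants


if __name__ == "__main__":
    # Hypothetical round: two applications, a $300,000 budget, $50,000 increments.
    schedules = {
        "Org A": UtilitySchedule(step=50_000, utilities=[10, 8, 3, 1]),
        "Org B": UtilitySchedule(step=50_000, utilities=[9, 7, 6, 2]),
    }
    print(allocate(300_000, schedules))
    # -> {'Org A': 150000.0, 'Org B': 150000.0}
```

Note how, consistent with the quoted description, an application that any one schedule rates highly at the margin gets funded early, even if other schedules rate it poorly; agreement across recommenders is not required.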

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the first one with grants to the grantee.

Other notes: Grant made via Foresight Institute. Although Jed McCaleb and The Casey and Family Foundation also participate as funders in this grant round, they do not make any grants to AI Objectives Institute in this round. In https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff#AI_Safety_Paper_Production (GW, IR), Zvi Mowshowitz, one of the recommenders in the grant round, expresses his reservations: "Then there’s the people who think the ‘AI Safety’ risk is that things will be insufficiently ‘democratic,’ too ‘capitalist’ or ‘biased’ or otherwise not advance their particular agendas. They care about, in Eliezer’s terminology from Twitter, which monkey gets the poisoned banana first. To the extent that they redirect attention, that’s harmful. [...] I do feel the need to mention one organization here, AIObjectives@Foresight, because they’re the only organization that got funding that I view as an active negative. I strongly objected to the decision to fund them, and would have used my veto on an endorsement if I’d retained the right to veto. I do see that they are doing some amount of worthwhile research into ‘how to make AIs do what humans actually want’ but given what else is on their agenda, I view their efforts as strongly net-harmful, and I’m quite sad that they got money. Some others seemed to view this concern more as a potential ‘poisoning the well’ concern that the cause area would become associated with such political focus, whereas I was object-level concerned about the agenda, and in giving leverage over important things to people who are that wrong about very important things and focused on making the world match their wrong views." Percentage of total donor spend in the corresponding batch of donations: 5.48%; announced: 2021-11-20.