AI Objectives Institute donations received

This is an online portal with information on donations that were publicly announced (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of July 2024. See the about page for more details.

Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

We do not have any donee information for AI Objectives Institute in our system.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 1 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000
AI safety | 1 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000 | 485,000
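
For reference, here is a minimal sketch (in plain Python; this is not the portal's actual code) of how a row of the table above could be computed. With only one recorded donation, the median, mean, minimum, maximum, and every percentile all collapse to the same value, which is why every column reads 485,000.

```python
import statistics

donations = [485_000]  # all recorded donations to this donee, in current USD

def percentile(data, p):
    """p-th percentile; with a single data point every percentile is that
    point (statistics.quantiles needs at least two data points)."""
    if len(data) == 1:
        return data[0]
    return statistics.quantiles(data, n=100)[p - 1]

row = {
    "Count": len(donations),
    "Median": statistics.median(donations),
    "Mean": statistics.mean(donations),
    "Minimum": min(donations),
    **{f"{p}th percentile": percentile(donations, p) for p in range(10, 100, 10)},
    "Maximum": max(donations),
}
print(row)  # every statistic equals 485000 for a single donation
```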

Donation amounts by donor and year for donee AI Objectives Institute

Donor | Total | 2021
Jaan Tallinn | 485,000.00 | 485,000.00
Total | 485,000.00 | 485,000.00

Full list of documents in reverse chronological order (1 document)

Title (URL linked): Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) (GW, IR)
Publication date: 2021-12-14
Author: Zvi Mowshowitz
Publisher: LessWrong
Affected donors: Survival and Flourishing Fund | Jaan Tallinn | Jed McCaleb | The Casey and Family Foundation
Affected donees: Effective Altruism Funds: Long-Term Future Fund | Center on Long-Term Risk | Alliance to Feed the Earth in Disasters | The Centre for Long-Term Resilience | Lightcone Infrastructure | Effective Altruism Funds: Infrastructure Fund | Centre for the Governance of AI | Ought | New Science Research | Berkeley Existential Risk Initiative | AI Objectives Institute | Topos Institute | Emergent Ventures India | European Biostasis Foundation | Laboratory for Social Minds | PrivateARPA | Charter Cities Institute
Affected influencers: Survival and Flourishing Fund | Beth Barnes | Oliver Habryka | Zvi Mowshowitz
Document scope: Miscellaneous commentary
Cause area: Longtermism | AI safety | Global catastrophic risks
Notes: In this lengthy post, Zvi Mowshowitz, who was one of the recommenders for the Survival and Flourishing Fund's 2021 H2 grant round based on the S-process, describes his experience with the process, his impressions of several of the grantees, and implications for what kinds of grant applications are most likely to succeed. Zvi says that the grant round suffered from the problem of Too Much Money (TMM); there was way more money than any individual recommender felt comfortable granting, and just about enough money for the combined preferences of all recommenders, which meant that any recommender could unilaterally push a particular grantee through. The post has several other observations and attracts several comments.

Full list of donations in reverse chronological order (1 donation)

[Graph omitted: top 10 donors (for donations with known year of donation) by amount, showing the timeframes of their donations]
Donor: Jaan Tallinn
Amount (current USD): 485,000.00
Amount rank (out of 1): 1
Donation date: 2021-10
Cause area: AI safety
URL: https://survivalandflourishing.fund/sff-2021-h2-recommendations
Influencer: Survival and Flourishing Fund | Beth Barnes | Oliver Habryka | Zvi Mowshowitz
Notes: Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective; a toy sketch of the marginal-utility allocation idea appears after the notes below.

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the first one with grants to the grantee.

Other notes: Grant made via Foresight Institute. Although Jed McCaleb and The Casey and Family Foundation also participate as funders in this grant round, they do not make any grants to AI Objectives Institute in this round. In https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff#AI_Safety_Paper_Production (GW, IR), Zvi Mowshowitz, one of the recommenders in the grant round, expresses his reservations: "Then there’s the people who think the ‘AI Safety’ risk is that things will be insufficiently ‘democratic,’ too ‘capitalist’ or ‘biased’ or otherwise not advance their particular agendas. They care about, in Eliezer’s terminology from Twitter, which monkey gets the poisoned banana first. To the extent that they redirect attention, that’s harmful. [...] I do feel the need to mention one organization here, AIObjectives@Foresight, because they’re the only organization that got funding that I view as an active negative. I strongly objected to the decision to fund them, and would have used my veto on an endorsement if I’d retained the right to veto. I do see that they are doing some amount of worthwhile research into ‘how to make AIs do what humans actually want’ but given what else is on their agenda, I view their efforts as strongly net-harmful, and I’m quite sad that they got money. Some others seemed to view this concern more as a potential ‘poisoning the well’ concern that the cause area would become associated with such political focus, whereas I was object-level concerned about the agenda, and in giving leverage over important things to people who are that wrong about very important things and focused on making the world match their wrong views." Percentage of total donor spend in the corresponding batch of donations: 5.48%; announced: 2021-11-20.
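
The S-process description quoted in the notes above amounts to allocating money according to decreasing marginal utility curves, with funders deferring to recommenders. The following is a toy sketch of that idea under stated assumptions: it is not SFF's actual code, and all organization names, budgets, step sizes, and utility curves are invented for illustration (the real process also involves iterative adjustment of the curves and funder-level final tweaks, which are omitted here).

```python
def greedy_allocate(budget, curves, step=1_000):
    """Allocate `budget` in `step`-dollar increments, always funding the
    application whose next dollar has the highest marginal utility.
    `curves[app]` is a decreasing piecewise-constant marginal utility
    curve given as (funding_threshold, utility_per_dollar) pairs."""
    def marginal_utility(app, funded):
        for threshold, utility in curves[app]:
            if funded < threshold:
                return utility
        return 0.0

    grants = {app: 0 for app in curves}
    while budget >= step:
        best = max(grants, key=lambda a: marginal_utility(a, grants[a]))
        if marginal_utility(best, grants[best]) <= 0:
            break  # no application values further funding
        grants[best] += step
        budget -= step
    return grants

# Each recommender specifies their own curves; the funder then divides a
# budget across recommenders (a fixed 50/50 split here, standing in for
# the funder's "utility functions for deferring to each Recommender").
recommender_curves = {
    "recommender_1": {"org_a": [(100_000, 5.0), (float("inf"), 0.5)],
                      "org_b": [(float("inf"), 0.1)]},
    "recommender_2": {"org_a": [(float("inf"), 0.0)],
                      "org_b": [(300_000, 4.0), (float("inf"), 1.0)]},
}

funder_budget = 500_000
totals = {}
for curves in recommender_curves.values():
    for app, amount in greedy_allocate(funder_budget // 2, curves).items():
        totals[app] = totals.get(app, 0) + amount
print(totals)  # {'org_a': 250000, 'org_b': 250000}
```

Because each recommender's curves are consulted separately, org_b ends up well funded even though only recommender_2 is excited about it, which matches the quoted design goal of favoring "things that at least one recommender is excited to fund." Separately, the 5.48% share stated above implies that Jaan Tallinn's total spend in this batch of donations was roughly $8.85 million (485,000 / 0.0548).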