Global Priorities Institute donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if you share a link to this site or to any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik first). We expect to complete the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

  * Basic donee information
  * Donee donation statistics
  * Donation amounts by donor and year
  * Full list of documents in reverse chronological order
  * Full list of donations in reverse chronological order

Basic donee information

Item | Value
Country | United Kingdom
Website | https://globalprioritiesinstitute.org/
Open Philanthropy Project grant review | https://www.openphilanthropy.org/giving/grants/global-priorities-institute-general-support
Key people | Hilary Greaves, William MacAskill
Launch date | 2017-12

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 2 | 14,000 | 1,344,142 | 14,000 | 14,000 | 14,000 | 14,000 | 14,000 | 14,000 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284
Cause prioritization | 2 | 14,000 | 1,344,142 | 14,000 | 14,000 | 14,000 | 14,000 | 14,000 | 14,000 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284 | 2,674,284
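
These summary statistics can be reproduced from the two donations listed at the bottom of this page. The portal's exact percentile convention is not documented here; the sketch below (illustrative Python, not the portal's actual code) assumes a nearest-rank definition, which reproduces the values shown.

```python
import math

# The two donations to GPI from the "Full list of donations" section (current USD).
amounts = sorted([14_000.00, 2_674_284.00])

def nearest_rank_percentile(sorted_values, p):
    # Smallest value such that at least p% of the data lies at or below it.
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

count = len(amounts)                                   # 2
mean = sum(amounts) / count                            # 1,344,142.00
median = nearest_rank_percentile(amounts, 50)          # 14,000.00
minimum, maximum = amounts[0], amounts[-1]             # 14,000.00 and 2,674,284.00
percentiles = {p: nearest_rank_percentile(amounts, p) for p in range(10, 100, 10)}
# 10th-50th percentiles come out to 14,000; 60th-90th come out to 2,674,284, matching the table.
```

With only two data points the percentile convention matters: a linear-interpolation definition (the default in many statistics libraries) would give intermediate values rather than the step pattern in the table above.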

Donation amounts by donor and year for donee Global Priorities Institute

Donor | Total | 2018
Open Philanthropy | 2,674,284.00 | 2,674,284.00
Effective Altruism Funds: Effective Altruism Infrastructure Fund | 14,000.00 | 14,000.00
Total | 2,688,284.00 | 2,688,284.00
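
The donor-by-year table above is a straightforward pivot of the donation list at the bottom of this page. A minimal sketch of that aggregation (hypothetical illustration, not the portal's actual implementation):

```python
from collections import defaultdict

# (donor, donation year, amount in current USD) for the two donations listed below.
donations = [
    ("Open Philanthropy", 2018, 2_674_284.00),
    ("Effective Altruism Funds: Effective Altruism Infrastructure Fund", 2018, 14_000.00),
]

by_donor_year = defaultdict(float)    # cells of the table: (donor, year) -> amount
totals_by_donor = defaultdict(float)  # the "Total" column
for donor, year, amount in donations:
    by_donor_year[(donor, year)] += amount
    totals_by_donor[donor] += amount

grand_total = sum(totals_by_donor.values())  # 2,688,284.00, the "Total" row
```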

Full list of documents in reverse chronological order (10 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2021 AI Alignment Literature Review and Charity Comparison (GW, IR)2021-12-23Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund Future of Humanity Institute Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to, i.e., where each organization would like to see funds go, other than to itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work either are already well-funded or do not provide sufficient disclosure.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for Alex Flint's discussion of some aspects of the post.
Update on the Global Priorities Institute's (GPI) activities (GW, IR)2019-12-24Hilary Greaves Global Priorities InstituteOpen Philanthropy Global Priorities Institute Donee periodic updateCause prioritizationThe Global Priorities Institute shares a short annual report, also available at https://globalprioritiesinstitute.org/global-priorities-institute-annual-report-2018-19/ on its website. In addition, the post contains links for following GPI's research and current opportunities. The annual report has three sections: (1) Research (agenda focused on "longtermism") (2) Academic outreach (various two-day workshops and the Early Career Conference Programme (ECCP)) (3) Current team and growth ambitions (plans to expand, helped by £2.5m from the Open Philanthropy Project and £3m from other private donors; fundraising is ongoing).
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 20192019-12-18Holden Karnofsky Open PhilanthropyChloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought Donation suggestion listCriminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safetyContinuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating to. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support", i.e., organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents its views.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR)2018-12-17Larks Effective Altruism ForumLarks Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
Announcing the new Forethought Foundation for Global Priorities Research (GW, IR)2018-12-04William MacAskill Effective Altruism Forum Forethought Foundation for Global Priorities Research Global Priorities Institute Centre for Effective Altruism LaunchCause prioritizationThe blog post announces the launch of the Forethought Foundation for Global Priorities Research. The planned total budget for 2019 and 2020 is £1.12 million to £1.47 million, and a breakdown is provided in the post. The project will be incubated by the Centre for Effective Altruism, and its work is intended to complement the work of the Global Priorities Institute.
Updates from the Global Priorities Institute and how to get involved (GW, IR)2018-11-14Global Priorities Institute Effective Altruism Forum Global Priorities Institute Donee periodic updateCause prioritizationThe blog post gives an update on how the Global Priorities Institute has been doing, including its officially becoming an institute within Oxford University (see https://www.campaign.ox.ac.uk/news/new-global-priorities-institute-opens). It also includes abstracts of GPI's current working papers.
Job opportunity at the Future of Humanity Institute and Global Priorities Institute2018-04-01Hayden Win Effective Altruism Forum Global Priorities Institute Future of Humanity Institute Job advertisementAI safetyThe blog post advertises a Senior Administrator position that would be shared between the Future of Humanity Institute and the Global Priorities Institute.
New releases: Global Priorities Institute research agenda and posts we’re hiring for2017-12-14Michelle Hutchinson Global Priorities Institute Global Priorities Institute Donee periodic updateCause prioritizationHutchinson reports on the progress and plans for the Global Priorities Institute, housed at Oxford University, and also describes the posts it is hiring for.

Full list of donations in reverse chronological order (2 donations)

[Graph of donations and their timeframes: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations]

Donor | Amount (current USD) | Amount rank (out of 2) | Donation date | Cause area | URL | Influencer | Notes
Effective Altruism Funds: Effective Altruism Infrastructure Fund | 14,000.00 | 2 | 2018-11-29 | Cause prioritization | https://app.effectivealtruism.org/funds/ea-community/payouts/2dyBJqJBSIq6sAGU6gMYQW | Luke Ding, Alex Foster, Denise Melchin, Matt Wage, Tara MacAulay | Grant write-up notes that the donee (Global Priorities Institute) is solving an important problem of spreading EA-style ideas in the academic and policy worlds, and has shown impressive progress in its first year. The write-up concludes: "This grant is expected to contribute to GPI’s plans to grow its research team, particularly in economics, in order to publish a strong initial body of papers that defines their research focus. GPI also plans to sponsor DPhil students engaging in global priorities research at the University of Oxford through scholarships and prizes, and to give close support to early career academics with its summer visiting program." Percentage of total donor spend in the corresponding batch of donations: 10.85%.
Open Philanthropy | 2,674,284.00 | 1 | 2018-02 | Cause prioritization | https://www.openphilanthropy.org/giving/grants/global-priorities-institute-general-support | Nick Beckstead | Grant of £2,051,232 over five years (estimated at $2,674,284, depending upon currency conversion rates at the time of annual installments) via Americans for Oxford for general support. GPI is an interdisciplinary research center at the University of Oxford that conducts foundational research to inform the decision-making of individuals and institutions seeking to do as much good as possible. GPI intends to use this funding to support global priorities research, specifically: to hire three early-career, non-tenured research fellows with expertise in philosophy or economics, as well as two operations staff; to secure a larger office space to accommodate them; to host visiting researchers; and to hold seminars which address global priorities research topics. Announced: 2018-05-21.
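
Two figures in the rows above can be sanity-checked with quick arithmetic: the GBP-to-USD rate implied by the Open Philanthropy grant, and the size of the payout batch implied by the 10.85% share for the EA Infrastructure Fund grant. This is illustrative back-of-the-envelope arithmetic, not data taken from the portal:

```python
# Open Philanthropy grant: £2,051,232 recorded as an estimated $2,674,284.
gbp_amount = 2_051_232
usd_estimate = 2_674_284
implied_rate = usd_estimate / gbp_amount   # ~1.30 USD per GBP implied by the estimate

# EA Infrastructure Fund grant: $14,000, stated as 10.85% of the donor's spend in that batch.
grant_amount = 14_000.00
batch_share = 0.1085
implied_batch_total = grant_amount / batch_share   # ~129,000 USD across the 2018-11-29 payout batch
```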