Global Priorities Institute donations received
This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
Table of contents
Basic donee information
Donation amounts by donor and year for donee Global Priorities Institute
|Open Philanthropy Project|
|Effective Altruism Funds|
Full list of documents in reverse chronological order (5 documents)
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Document scope||Cause area||Notes|
|2018 AI Alignment Literature Review and Charity Comparison||2018-12-17||Ben Hoskin ||Effective Altruism Forum||Ben Hoskin ||Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis ||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."|
|Announcing the new Forethought Foundation for Global Priorities Research||2018-12-04||William MacAskill ||Effective Altruism Forum|| ||Forethought Foundation for Global Priorities Research Global Priorities Institute Centre for Effective Altruism ||Launch||Cause prioritization||The blog post announces the launch of the Forethought Foundation for Global Priorities Research. The planned total budget for 2019 and 2020 is £1.12 million to £1.47 million, and a breakdown is provided in the post. The project will be incubated by the Centre for Effective Altruism, and its work is intended to complement the work of the Global Priorities Institute.|
|Updates from the Global Priorities Institute and how to get involved||2018-11-14||Global Priorities Institute ||Effective Altruism Forum|| ||Global Priorities Institute ||Donee periodic update||Cause prioritization||The blog post gives an update on how the Global Priorities Institute has been doing, including its officially becoming an institute within Oxford University https://www.campaign.ox.ac.uk/news/new-global-priorities-institute-opens It also includes abstracts of GPI's current working papers.|
|Job opportunity at the Future of Humanity Institute and Global Priorities Institute||2018-04-01||Hayden Win ||Effective Altruism Forum|| ||Global Priorities Institute Future of Humanity Institute ||Job advertisement||AI safety||The blog post advertises a Senior Administrator position that would be shared between the Future of Humanity Institute and the Global Priorities Institute.|
|New releases: Global Priorities Institute research agenda and posts we’re hiring for||2017-12-14||Michelle Hutchinson ||Global Priorities Institute|| ||Global Priorities Institute ||Donee periodic update||Cause prioritization||Hutchinson reports on the progress and plans of the Global Priorities Institute, housed at Oxford University, and also describes the posts it is hiring for.|
Full list of donations in reverse chronological order (2 donations)
|Donor||Amount (current USD)||Donation date||Cause area||URL||Influencer||Notes|
|Effective Altruism Funds||14,000.00||2018-11-29||Cause prioritization||https://app.effectivealtruism.org/funds/ea-community/payouts/2dyBJqJBSIq6sAGU6gMYQW||Luke Ding Alex Foster Denise Melchin Matt Wage Tara MacAulay ||Grant write-up notes that donee (Global Priorities Institute) is solving an important problem of spreading EA-style ideas in the academic and policy worlds, and has shown impressive progress in its first year. The write-up concludes: "This grant is expected to contribute to GPI’s plans to grow its research team, particularly in economics, in order to publish a strong initial body of papers that defines their research focus. GPI also plans to sponsor DPhil students engaging in global priorities research at the University of Oxford through scholarships and prizes, and to give close support to early career academics with its summer visiting program." Percentage of total donor spend in the corresponding batch of donations: 10.85%.
|Open Philanthropy Project||2,674,284.00||2018-02||Cause prioritization||https://www.openphilanthropy.org/giving/grants/global-priorities-institute-general-support||Nick Beckstead ||Grant of £2,051,232 over five years (estimated at $2,674,284, depending upon currency conversion rates at the time of annual installments) via Americans for Oxford for general support. GPI is an interdisciplinary research center at the University of Oxford that conducts foundational research to inform the decision-making of individuals and institutions seeking to do as much good as possible. GPI intends to use this funding to support global priorities research, specifically: to hire three early-career, non-tenured research fellows with expertise in philosophy or economics, as well as two operations staff; to secure a larger office space to accommodate them; to host visiting researchers; and to hold seminars which address global priorities research topics. Announced: 2018-05-21.