Ought donations received

This is an online portal with information on donations that were announced publicly (or shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with his continued contributions (see his commits and the contract work page listing all financially compensated contributions to the site); however, all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized. If sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of December 2019.

See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United States
Website | https://ought.org
Donors list page | https://ought.org/about
Open Philanthropy Project grant review | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support
Org Watch page | https://orgwatch.issarice.com/?organization=Ought
Key people | Andreas Stuhlmüller

Donation amounts by donor and year for donee Ought

Donor | Total | 2018
Open Philanthropy Project | 525,000.00 | 525,000.00
Effective Altruism Funds | 10,000.00 | 10,000.00
Total | 535,000.00 | 535,000.00
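
The totals above can be reproduced from the underlying per-donation records. Below is a minimal Python sketch, assuming the data is available as a list of records with donor, amount, and ISO-style date fields; the field names here are hypothetical, and the actual schema lives in the GitHub repository mentioned above.

```python
from collections import defaultdict

# Hypothetical records mirroring the two donations listed on this page;
# the real data and schema live in the GitHub repository mentioned above.
donations = [
    {"donor": "Open Philanthropy Project", "amount": 525_000.00, "date": "2018-05"},
    {"donor": "Effective Altruism Funds", "amount": 10_000.00, "date": "2018-11-29"},
]

# Aggregate into donor -> {year: subtotal, "Total": total}.
totals = defaultdict(lambda: defaultdict(float))
for d in donations:
    year = d["date"][:4]  # dates are ISO-style strings, year first
    totals[d["donor"]][year] += d["amount"]
    totals[d["donor"]]["Total"] += d["amount"]

for donor, by_year in sorted(totals.items()):
    print(f"{donor}: {dict(by_year)}")
# Effective Altruism Funds: {'2018': 10000.0, 'Total': 10000.0}
# Open Philanthropy Project: {'2018': 525000.0, 'Total': 525000.0}
```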

Full list of documents in reverse chronological order (3 documents)

Title: EA orgs are trying to fundraise ~$10m - $16m
Publication date: 2019-01-06
Author: Hauke Hillebrandt
Publisher: Effective Altruism Forum
Affected donees: Centre for Effective Altruism, Effective Altruism Foundation, Machine Intelligence Research Institute, Forethought Foundation for Global Priorities Research, Sentience Institute, Alliance to Feed the Earth in Disasters, Global Catastrophic Risk Institute, Rethink Priorities, EA Hotel, 80,000 Hours, Rethink Charity
Document scope: Miscellaneous commentary
Notes: The blog post links to and discusses the spreadsheet https://docs.google.com/spreadsheets/d/10zU6gp_H_zuvlZ2Vri-epSK0_urbcmdS-5th3mXQGXM/edit, which tabulates various organizations and their fundraising targets, along with quotes and links to fundraising posts. The blog post itself makes three points, the last of which is that the EA community is relatively more funding-constrained again.

Title: 2018 AI Alignment Literature Review and Charity Comparison
Publication date: 2018-12-17
Author: Ben Hoskin
Publisher: Effective Altruism Forum
Affected donors: Ben Hoskin
Affected donees: Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis
Document scope: Review of current state of cause area
Cause area: AI safety
Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison. This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison. The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs. far safety research, other existential risks, financial reserves, donation matching, poor-quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."

Title: Announcing the new Forethought Foundation for Global Priorities Research
Publication date: 2018-12-04
Author: William MacAskill
Publisher: Effective Altruism Forum
Affected donees: Forethought Foundation for Global Priorities Research, Global Priorities Institute, Centre for Effective Altruism
Document scope: Launch
Cause area: Cause prioritization
Notes: The blog post announces the launch of the Forethought Foundation for Global Priorities Research. The planned total budget for 2019 and 2020 is £1.12 million to £1.47 million, and a breakdown is provided in the post. The project will be incubated by the Centre for Effective Altruism, and its work is intended to complement the work of the Global Priorities Institute.

Full list of donations in reverse chronological order (2 donations)

Donor: Effective Altruism Funds
Amount (current USD): 10,000.00
Donation date: 2018-11-29
Cause area: AI safety
URL: https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi
Influencers: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka
Notes: Grant made to implement AI alignment concepts in real-world applications. The donee seems more hiring-constrained than fundraising-constrained, hence the small amount, but the donor does believe the donee has a promising approach. Percentage of total donor spend in the corresponding batch of donations: 10.47% (see the sketch after this list).

Donor: Open Philanthropy Project
Amount (current USD): 525,000.00
Donation date: 2018-05
Cause area: AI safety
URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support
Influencer: Daniel Dewey
Notes: The grantee has a mission to "leverage machine learning to help people think." Ought plans to conduct research on deliberation and amplification, a concept the donor considers relevant to AI alignment. The funding, combined with another grant from Open Philanthropy Project technical advisor Paul Christiano, is intended to allow Ought to hire up to three new staff members and provide one to three years of support for Ought's work, depending on how quickly they hire. Announced: 2018-05-31.
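
The EA Funds entry reports the grant as 10.47% of its payout batch. The batch total is not stated on this page, so the sketch below back-calculates it from the reported percentage; the roughly $95,500 figure is an inference for illustration, not a number from the source.

```python
def batch_share(amount: float, batch_total: float) -> float:
    """Percentage of a donation batch that a single grant represents."""
    return 100.0 * amount / batch_total

# Hypothetical batch total, back-calculated from the reported 10.47%:
# 10,000 / 0.1047 is roughly 95,500 (current USD).
print(round(batch_share(10_000.00, 95_500.00), 2))  # -> 10.47
```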