Center for Human-Compatible AI donations received

This is an online portal with information on donations that were announced publicly (or shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with his continued contributions (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details.

Table of contents

Basic donee information

Item | Value
Country | United States
Website | http://humancompatible.ai/
Donate page | http://humancompatible.ai/get-involved#supporter
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Center_for_Human-Compatible_AI
Org Watch page | https://orgwatch.issarice.com/?organization=Center+for+Human-Compatible+AI
Key people | Stuart Russell, Bart Selman, Michael Wellman, Andrew Critch
Launch date | 2016-08
Notes | Received a $5.5 million grant from the Open Philanthropy Project at the time of founding; Open Phil estimated a 50% probability that it would be a respected AI-related organization within two years

Donation amounts by donor and year for donee Center for Human-Compatible AI

Donor | Total | 2016
Open Philanthropy Project (filter this donee) | 5,555,550.00 | 5,555,550.00
Total | 5,555,550.00 | 5,555,550.00

Full list of documents in reverse chronological order (2 documents)

Document 1
Title (URL linked): Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017
Publication date: 2017-12-21
Author: Holden Karnofsky
Publisher: Open Philanthropy Project
Affected donors: Jaime Yassif, Chloe Cockburn, Lewis Bollard, Nick Beckstead, Daniel Dewey
Affected donees: Center for International Security and Cooperation, Johns Hopkins Center for Health Security, Good Call, Court Watch NOLA, Compassion in World Farming USA, Wild-Animal Suffering Research, Effective Altruism Funds, Donor lottery, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Berkeley Existential Risk Initiative, Centre for Effective Altruism, 80,000 Hours, Alliance to Feed the Earth in Disasters
Document scope: Donation suggestion list
Cause area: Animal welfare, AI risk, Biosecurity and pandemic preparedness, Effective altruism, Criminal justice reform
Notes: Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.

Document 2
Title (URL linked): AI: a Reason to Worry, and to Donate
Publication date: 2017-12-10
Author: Jacob Falkovich
Publisher: Jacob Falkovich
Affected donees: Machine Intelligence Research Institute, Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, Effective Altruism Funds
Document scope: Single donation documentation
Cause area: AI safety
Notes: Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).

Full list of donations in reverse chronological order (1 donation)

Donor: Open Philanthropy Project
Amount (current USD): 5,555,550.00
Donation date: 2016-08
Cause area: AI safety
URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai
Influencer: --
Notes: Grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence and https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity and states a 50% chance that two years from the grant date, the Center will be spending at least $2 million a year, and will be considered by one or more of our relevant technical advisors to have a reasonably good reputation in the field. Note that the grant recipient in the Open Phil database is listed as UC Berkeley, but we have written it as the name of the center for easier cross-referencing. Announced: 2016-08-29.