This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik.

Current data is preliminary and has not been completely vetted and normalized. If you share a link to this site or to any page on it, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2025. See the about page for more details.

Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
Item | Value |
---|---|
Country | United States |
Facebook username | Berkeley-Existential-Risk-Initiative-1875638366085846 |
Website | http://existence.org/ |
Data entry method on Donations List Website | Manual (no scripts used) |
Org Watch page | https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative |
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 33 | 50,000 | 129,031 | 1,900 | 10,000 | 20,000 | 25,000 | 50,000 | 50,000 | 100,000 | 100,000 | 200,000 | 300,000 | 800,000 |
AI safety | 33 | 50,000 | 129,031 | 1,900 | 10,000 | 20,000 | 25,000 | 50,000 | 50,000 | 100,000 | 100,000 | 200,000 | 300,000 | 800,000 |
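As a rough illustration of how the summary statistics in the table above relate to the underlying donation amounts, here is a minimal Python sketch. The `amounts` list is a placeholder rather than the actual 33 donation amounts in the dataset, and the percentile convention used to generate the table may differ from the one shown here.

```python
# Minimal sketch (not the site's actual code) of computing the kind of
# summary statistics shown above from a list of donation amounts in
# current USD. The amounts below are placeholders, not the real dataset.
from statistics import mean, median, quantiles

amounts = [1_900, 10_000, 20_000, 50_000, 100_000, 300_000, 800_000]  # placeholder values

print(f"Count:  {len(amounts)}")
print(f"Mean:   {mean(amounts):,.0f}")
print(f"Median: {median(amounts):,.0f}")
print(f"Min:    {min(amounts):,}   Max: {max(amounts):,}")

# 10th through 90th percentiles; the exact interpolation rule used by the
# site may differ from the "inclusive" method used here.
for pct, value in zip(range(10, 100, 10), quantiles(amounts, n=10, method="inclusive")):
    print(f"{pct}th percentile: {value:,.0f}")
```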
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: Cause area classification used here may not match that used by donor for all cases.
Cause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017 |
---|---|---|---|---|---|---|---|
AI safety (filter this donor) | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Total | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Graph of spending by cause area and year (incremental, not cumulative)
Graph of spending by cause area and year (cumulative)
If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
Subcause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017 |
---|---|---|---|---|---|---|---|
AI safety | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Classified total | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Unclassified total | 0 | 0 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Total | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Graph of spending by subcause area and year (incremental, not cumulative)
Graph of spending by subcause area and year (cumulative)
Donee | Cause area | Metadata | Total | 2020 | 2019 | 2018 | 2017 |
---|---|---|---|---|---|---|---|
Center for Applied Rationality (filter this donor) | Rationality | FB Tw WP Site TW | 1,200,000.00 | 0.00 | 0.00 | 1,100,000.00 | 100,000.00 |
Machine Intelligence Research Institute (filter this donor) | AI safety | FB Tw WP Site CN GS TW | 1,100,000.00 | 300,000.00 | 600,000.00 | 0.00 | 200,000.00 |
Future of Life Institute (filter this donor) | AI safety/other global catastrophic risks | FB Tw WP Site | 500,000.00 | 0.00 | 50,000.00 | 300,000.00 | 150,000.00 |
UC Berkeley Foundation (filter this donor) | | | 400,000.00 | 0.00 | 0.00 | 400,000.00 | 0.00 |
Centre for the Study of Existential Risk (filter this donor) | | | 200,000.00 | 0.00 | 0.00 | 200,000.00 | 0.00 |
Lightcone Infrastructure (filter this donor) | Epistemic institutions | FB WP Site | 150,000.00 | 0.00 | 0.00 | 150,000.00 | 0.00 |
Ben Goldhaber (filter this donor) | | | 150,000.00 | 0.00 | 0.00 | 150,000.00 | 0.00 |
Stephanie Zolayvar (filter this donor) | | | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00 |
Baeo Maltinsky (filter this donor) | | | 55,000.00 | 0.00 | 0.00 | 55,000.00 | 0.00 |
Colleen McKenzie (filter this donor) | | | 51,000.00 | 0.00 | 0.00 | 51,000.00 | 0.00 |
Theiss Research (filter this donor) | | | 50,000.00 | 0.00 | 0.00 | 50,000.00 | 0.00 |
Institute for Philosophical Research (filter this donor) | | | 50,000.00 | 0.00 | 0.00 | 50,000.00 | 0.00 |
Roxanne Heston (filter this donor) | | | 49,532.00 | 0.00 | 0.00 | 49,532.00 | 0.00 |
Association for the Advancement of Artificial Intelligence (filter this donor) | | | 45,000.00 | 0.00 | 0.00 | 25,000.00 | 20,000.00 |
Miles Brundage (filter this donor) | | | 32,500.00 | 0.00 | 0.00 | 32,500.00 | 0.00 |
Zoe Cremer (filter this donor) | | | 25,200.00 | 0.00 | 0.00 | 25,200.00 | 0.00 |
David Manheim (filter this donor) | | | 25,000.00 | 0.00 | 0.00 | 25,000.00 | 0.00 |
Bryce Hidysmith (filter this donor) | | | 20,500.00 | 0.00 | 0.00 | 20,500.00 | 0.00 |
Luca Rade (filter this donor) | | | 20,400.00 | 0.00 | 0.00 | 20,400.00 | 0.00 |
Sebastian Farquhar (filter this donor) | | | 12,000.00 | 0.00 | 0.00 | 12,000.00 | 0.00 |
Cambridge in America (filter this donor) | | | 10,000.00 | 0.00 | 0.00 | 10,000.00 | 0.00 |
Jessica Taylor (filter this donor) | | | 10,000.00 | 0.00 | 0.00 | 10,000.00 | 0.00 |
Jordan Alexander (filter this donor) | | | 1,900.00 | 0.00 | 0.00 | 1,900.00 | 0.00 |
Total | -- | -- | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Graph of spending by donee and year (incremental, not cumulative)
Graph of spending by donee and year (cumulative)
Sorry, we couldn't find any influencer information.
Sorry, we couldn't find any disclosures information.
Sorry, we couldn't find any country information.
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
New grants from the Open Philanthropy Project and BERI | 2019-04-01 | Rob Bensinger | Machine Intelligence Research Institute | Open Philanthropy, Berkeley Existential Risk Initiative | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI announces two new grants it has received: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/. The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support; the post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team, as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/. |
A model I use when making plans to reduce AI x-risk (GW, IR) | 2018-01-18 | Ben Pace | LessWrong | Berkeley Existential Risk Initiative | | | Review of current state of cause area | AI safety | The author describes his implicit model of AI risk, with four parts: (1) Alignment is hard. (2) Getting alignment right accounts for most of the variance in whether an AGI system will be positive for humanity. (3) Our current epistemic state regarding AGI timelines will continue until we are close (less than 2 years) to having AGI. (4) Given timeline uncertainty, it is best to spend marginal effort on plans that assume/work in shorter timelines. There is a lot of discussion in the comments. |
Announcing BERI Computing Grants | 2017-12-01 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | Donee periodic update | AI safety/other global catastrophic risks | |
Forming an engineering team | 2017-10-25 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | | Donee periodic update | AI safety/other global catastrophic risks | |
What we’re thinking about as we grow - ethics, oversight, and getting things done | 2017-10-19 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | Donee periodic update | AI safety/other global catastrophic risks | Outlines BERI's approach to growth and "ethics" (transparency, oversight, trust, etc.). |
BERI's semi-annual report, August | 2017-09-12 | Rebecca Raible | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | Donee periodic update | AI safety/other global catastrophic risks | A blog post announcing BERI's semi-annual report. |
Graph of top 10 donees (for donations with known year of donation) by amount, showing the timeframe of donations
Donee | Amount (current USD) | Amount rank (out of 33) | Donation date | Cause area | URL | Influencer | Notes |
---|---|---|---|---|---|---|---|
Machine Intelligence Research Institute | 300,000.00 | 4 | | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support. Other notes: The grant is mentioned by MIRI in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ along with a large grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 from the Open Philanthropy Project. The post says: "at the time of our 2019 fundraiser, we expected to receive a grant from BERI in early 2020, and incorporated this into our reserves estimates. However, we predicted the grant size would be $600k; now that we know the final grant amount, that estimate should be $300k lower." |
Future of Life Institute | 50,000.00 | 17 | | AI safety | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | |
Machine Intelligence Research Institute | 600,000.00 | 2 | | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support. Donor retrospective of the donation: BERI would make a further grant to MIRI, indicating continued confidence in the grantee. The followup grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Other notes: This grant is also discussed by the Machine Intelligence Research Institute (the grant recipient) at https://intelligence.org/2017/11/08/major-grant-open-phil/ along with a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 from the Open Philanthropy Project. Announced: 2019-05-09. |
Sebastian Farquhar | 12,000.00 | 29 | | AI safety | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | “to attend conferences and purchase compute for experiments related to his PhD research on uncertainty modeling in neural networks”. |
Association for the Advancement of Artificial Intelligence | 20,000.00 | 27 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “for the support of the 2019 conference on AI, Ethics, and Society”. |
Ben Goldhaber | 150,000.00 | 8 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support his project (co-lead by Jacob Lagerros) to bring x-risk-relevant questions to popular prediction platforms”. |
Jessica Taylor | 10,000.00 | 30 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to work on her research in AI alignment and other areas”. |
Stephanie Zolayvar | 100,000.00 | 10 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to train herself in circling and host circles for people who are promising contributors to reducing x-risk”. |
Cambridge in America | 10,000.00 | 30 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “for the support of the Leverhulme Centre for the Future of Intelligence”. |
Roxanne Heston | 49,532.00 | 21 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to work on a variety of AI policy projects in Washington, D.C.”. |
Baeo Maltinsky | 55,000.00 | 15 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to further his research on AI and technology trends”. |
Zoe Cremer | 25,200.00 | 23 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support her as a visiting fellow at CFI, where she will research disagreements about the amount and kind of structure required for AGI”. |
Colleen McKenzie | 51,000.00 | 16 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support her research on AI timelines and the processes that produce technical and scientific progress”. |
David Manheim | 25,000.00 | 24 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to research aspects of Goodhart’s law, focusing on multi-agent dynamics”. Probably related to https://arxiv.org/pdf/1803.04585.pdf. |
Bryce Hidysmith | 20,500.00 | 25 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to analyze global risks from technology through a geopolitical lens”. |
Jordan Alexander | 1,900.00 | 33 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to host several meetings at Stanford EA and the Stanford Transhumanist Association”. |
Luca Rade | 20,400.00 | 26 | | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to research the implications of coarse-graining by an agent in a complex environment for AI alignment”. |
Institute for Philosophical Research | 50,000.00 | 17 | | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. |
Lightcone Infrastructure | 150,000.00 | 8 | | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Given via the Center for Applied Rationality. To support the development of LessWrong 2.0. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. |
UC Berkeley Foundation | 400,000.00 | 3 | | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Intended to support Andrew Critch’s work at the Center for Human-Compatible AI. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. |
Center for Applied Rationality | 300,000.00 | 4 | | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html and at http://www.rationality.org/resources/updates/2018/august-newsletter (CFAR lists $600,000 over two years as the amount). |
Theiss Research | 50,000.00 | 17 | | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Intended to support Carrick Flynn’s research with Allan Dafoe at the Future of Humanity Institute. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. |
Future of Life Institute | 300,000.00 | 4 | | AI safety | https://web.archive.org/web/20180905034853/http://existence.org/organization-grants/ https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. |
Centre for the Study of Existential Risk | 200,000.00 | 7 | | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support; grant via Cambridge in America. |
Association for the Advancement of Artificial Intelligence | 5,000.00 | 32 | | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Grant in support of the 2018 Spring Symposium on AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents. See announcement at http://existence.org/2018/04/07/activity-update-february-and-march-2018.html. |
Center for Applied Rationality | 800,000.00 | 1 | | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support for the purpose of a permanent venue for CFAR workshops. See announcement at http://existence.org/2018/02/08/activity-update-january-2018.html. |
Miles Brundage | 32,500.00 | 22 | | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants | -- | |
Machine Intelligence Research Institute | 100,000.00 | 10 | | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support. Donor retrospective of the donation: BERI would make two further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Other notes: Announced: 2018-01-11. |
Center for Applied Rationality | 100,000.00 | 10 | | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | See announcement at http://existence.org/2018/01/11/activity-update-december-2017.html. |
Association for the Advancement of Artificial Intelligence | 20,000.00 | 27 | | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | To support the conference on Artificial Intelligence, Ethics, and Society. See announcement at http://existence.org/2017/12/01/activity-update-november-2017.html. |
Future of Life Institute | 50,000.00 | 17 | | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support. See announcement at http://existence.org/2017/11/03/activity-update-october-2017.html. |
Future of Life Institute | 100,000.00 | 10 | | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | |
Machine Intelligence Research Institute | 100,000.00 | 10 | | AI safety | http://existence.org/grants | -- | Intended use of funds (category): Organizational general support. Donor reason for selecting the donee: The announcement page says: "Broadly, we believe these groups [Machine Intelligence Research Institute and Future of Life Institute] to have done good work in the past for reducing existential risk and wish to support their continued efforts." Donor reason for donating at this time (rather than earlier or later): This is one of two opening grants made by BERI to begin its grants program. Donor thoughts on making further donations to the donee: The grant page says: "Over the next few months, we may write more about our reasoning behind these and other grants." It further outlines the kinds of organizations that BERI will be granting to in the short run. Donor retrospective of the donation: BERI would make three further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Other notes: Announced: 2017-09-25. |
Sorry, we couldn't find any similar donors.