This is an online portal with information on donations that were publicly announced (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The donations repository was seeded with an initial collation by Issa Rice, who continues to contribute (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if you share a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of March 2023. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
Item | Value |
---|---|
Country | United States |
Facebook username | Berkeley-Existential-Risk-Initiative-1875638366085846 |
Website | http://existence.org/ |
Data entry method on Donations List Website | Manual (no scripts used) |
Org Watch page | https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative |
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 47 | 50,000 | 113,659 | 1,900 | 10,000 | 20,500 | 25,000 | 32,500 | 50,000 | 86,580 | 100,000 | 150,000 | 300,000 | 800,000 |
AI safety | 33 | 50,000 | 129,031 | 1,900 | 10,000 | 20,000 | 25,000 | 50,000 | 50,000 | 100,000 | 100,000 | 200,000 | 300,000 | 800,000 |
Existential risks | 4 | 25,000 | 38,500 | 4,000 | 4,000 | 4,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 100,000 | 100,000 | 100,000 |
Rationality community | 1 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 | 24,000 |
Existential risk | 6 | 25,000 | 42,656 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 44,353 | 50,000 | 50,000 | 86,580 | 86,580 |
Rationality improvement | 1 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 |
Unclassified | 1 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 | 200,000 |
Career planning | 1 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 | 350,000 |
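For readers who want to reproduce these statistics from the raw donation amounts, here is a minimal sketch in Python. The helper names are ours, and the nearest-rank percentile method is an assumption (it happens to reproduce the "Existential risk" row above, but the site may compute these columns differently):

```python
import math
from statistics import mean, median

def percentile(sorted_amounts, p):
    """Nearest-rank percentile: the smallest value such that at least
    p percent of the data lies at or below it."""
    k = max(1, math.ceil(p / 100 * len(sorted_amounts)))
    return sorted_amounts[k - 1]

def summarize(amounts):
    """Compute the summary statistics shown in the table above for one cause area."""
    xs = sorted(amounts)
    row = {"Count": len(xs), "Median": median(xs), "Mean": round(mean(xs)),
           "Minimum": xs[0], "Maximum": xs[-1]}
    for p in range(10, 100, 10):
        row[f"{p}th percentile"] = percentile(xs, p)
    return row

# The six "Existential risk" donations listed further down this page:
# sorted, they are 25,000 / 25,000 / 25,000 / 44,353 / 50,000 / 86,580.
# Nearest-rank gives median 25,000, mean 42,656, 60th percentile 44,353,
# and 90th percentile 86,580, matching that row of the table.
print(summarize([50000, 25000, 25000, 86580, 25000, 44353]))
```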
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: Cause area classification used here may not match that used by donor for all cases.
Cause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017 |
---|---|---|---|---|---|---|---|
AI safety (filter this donor) | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Career planning (filter this donor) | 1 | 1 | 350,000.00 | 0.00 | 350,000.00 | 0.00 | 0.00 |
Existential risk (filter this donor) | 6 | 5 | 255,933.00 | 0.00 | 211,580.00 | 44,353.00 | 0.00 |
Unclassified (filter this donor) | 1 | 1 | 200,000.00 | 0.00 | 200,000.00 | 0.00 | 0.00 |
Existential risks (filter this donor) | 4 | 4 | 154,000.00 | 0.00 | 0.00 | 129,000.00 | 25,000.00 |
Rationality improvement (filter this donor) | 1 | 1 | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00 |
Rationality community (filter this donor) | 1 | 1 | 24,000.00 | 0.00 | 0.00 | 24,000.00 | 0.00 |
Total | 47 | 34 | 5,341,965.00 | 300,000.00 | 1,411,580.00 | 3,135,385.00 | 495,000.00 |
Graph of spending by cause area and year (incremental, not cumulative)
Graph of spending by cause area and year (cumulative)
If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
Subcause area | Number of donations | Number of donees | Total | 2020 | 2019 | 2018 | 2017 |
---|---|---|---|---|---|---|---|
AI safety | 33 | 23 | 4,258,032.00 | 300,000.00 | 650,000.00 | 2,838,032.00 | 470,000.00 |
Career planning | 1 | 1 | 350,000.00 | 0.00 | 350,000.00 | 0.00 | 0.00 |
Existential risk | 6 | 5 | 255,933.00 | 0.00 | 211,580.00 | 44,353.00 | 0.00 |
Existential risks | 4 | 4 | 154,000.00 | 0.00 | 0.00 | 129,000.00 | 25,000.00 |
Rationality improvement | 1 | 1 | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00 |
Rationality community | 1 | 1 | 24,000.00 | 0.00 | 0.00 | 24,000.00 | 0.00 |
Classified total | 46 | 33 | 5,141,965.00 | 300,000.00 | 1,211,580.00 | 3,135,385.00 | 495,000.00 |
Unclassified total | 1 | 1 | 200,000.00 | 0.00 | 200,000.00 | 0.00 | 0.00 |
Total | 47 | 34 | 5,341,965.00 | 300,000.00 | 1,411,580.00 | 3,135,385.00 | 495,000.00 |
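The classified/unclassified split above is simple arithmetic over the donation rows: a donation with a nonempty cause-area label counts toward the classified total for its year, the rest toward the unclassified total, and the two sum to the overall total. A minimal sketch (the field names `amount`, `cause_area`, and `year` are hypothetical, not the site's actual schema):

```python
from collections import defaultdict

def totals_by_year(donations):
    """Split yearly totals into classified (nonempty cause area) and unclassified."""
    classified, unclassified = defaultdict(float), defaultdict(float)
    for d in donations:
        bucket = classified if d["cause_area"] else unclassified
        bucket[d["year"]] += d["amount"]
    return classified, unclassified

# Example consistent with the table above: the only unclassified donation
# is the $200,000 grant to the Survival and Flourishing Fund in 2019.
donations = [
    {"amount": 200_000.0, "cause_area": "", "year": 2019},
    {"amount": 350_000.0, "cause_area": "Career planning", "year": 2019},
]
classified, unclassified = totals_by_year(donations)
assert unclassified[2019] == 200_000.0
assert classified[2019] == 350_000.0
```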
Graph of spending by subcause area and year (incremental, not cumulative)
Graph of spending by subcause area and year (cumulative)
Donee | Cause area | Metadata | Total | 2020 | 2019 | 2018 | 2017 |
---|---|---|---|---|---|---|---|
Center for Applied Rationality (filter this donor) | Rationality | FB Tw WP Site TW | 1,200,000.00 | 0.00 | 0.00 | 1,100,000.00 | 100,000.00 |
Machine Intelligence Research Institute (filter this donor) | AI safety | FB Tw WP Site CN GS TW | 1,100,000.00 | 300,000.00 | 600,000.00 | 0.00 | 200,000.00 |
Future of Life Institute (filter this donor) | AI safety/other global catastrophic risks | FB Tw WP Site | 500,000.00 | 0.00 | 50,000.00 | 300,000.00 | 150,000.00 |
UC Berkeley Foundation (filter this donor) | | | 400,000.00 | 0.00 | 0.00 | 400,000.00 | 0.00 |
80,000 Hours (filter this donor) | Career coaching/life guidance | FB Tw WP Site | 350,000.00 | 0.00 | 350,000.00 | 0.00 | 0.00 |
Lightcone Infrastructure (filter this donor) | Rationality improvement | FB WP Site | 250,000.00 | 0.00 | 0.00 | 250,000.00 | 0.00 |
Centre for the Study of Existential Risk (filter this donor) | | | 200,000.00 | 0.00 | 0.00 | 200,000.00 | 0.00 |
Survival and Flourishing Fund (filter this donor) | | | 200,000.00 | 0.00 | 200,000.00 | 0.00 | 0.00 |
Ben Goldhaber (filter this donor) | | | 150,000.00 | 0.00 | 0.00 | 150,000.00 | 0.00 |
Stephanie Zolayvar (filter this donor) | | | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00 |
Justin Shovelain (filter this donor) | | | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00 |
Stefan Schubert (filter this donor) | | | 86,580.00 | 0.00 | 86,580.00 | 0.00 | 0.00 |
Institute for Philosophical Research (filter this donor) | | | 75,000.00 | 0.00 | 0.00 | 50,000.00 | 25,000.00 |
Leverage Research (filter this donor) | | | 75,000.00 | 0.00 | 75,000.00 | 0.00 | 0.00 |
Baeo Maltinsky (filter this donor) | | | 55,000.00 | 0.00 | 0.00 | 55,000.00 | 0.00 |
Colleen McKenzie (filter this donor) | | | 51,000.00 | 0.00 | 0.00 | 51,000.00 | 0.00 |
Theiss Research (filter this donor) | | | 50,000.00 | 0.00 | 0.00 | 50,000.00 | 0.00 |
Roxanne Heston (filter this donor) | | | 49,532.00 | 0.00 | 0.00 | 49,532.00 | 0.00 |
Association for the Advancement of Artificial Intelligence (filter this donor) | | | 45,000.00 | 0.00 | 0.00 | 25,000.00 | 20,000.00 |
Lucius Caviola (filter this donor) | | | 44,353.00 | 0.00 | 0.00 | 44,353.00 | 0.00 |
Miles Brundage (filter this donor) | | | 32,500.00 | 0.00 | 0.00 | 32,500.00 | 0.00 |
Zoe Cremer (filter this donor) | | | 25,200.00 | 0.00 | 0.00 | 25,200.00 | 0.00 |
Alliance to Feed the Earth in Disasters (filter this donor) | | | 25,000.00 | 0.00 | 25,000.00 | 0.00 | 0.00 |
Global Catastrophic Risk Institute (filter this donor) | Global catastrophic risks | FB Tw Site | 25,000.00 | 0.00 | 0.00 | 25,000.00 | 0.00 |
David Manheim (filter this donor) | | | 25,000.00 | 0.00 | 0.00 | 25,000.00 | 0.00 |
Americans for Oxford (filter this donor) | | | 25,000.00 | 0.00 | 25,000.00 | 0.00 | 0.00 |
Berkeley REACH (filter this donor) | Rationality community | FB Site | 24,000.00 | 0.00 | 0.00 | 24,000.00 | 0.00 |
Bryce Hidysmith (filter this donor) | | | 20,500.00 | 0.00 | 0.00 | 20,500.00 | 0.00 |
Luca Rade (filter this donor) | | | 20,400.00 | 0.00 | 0.00 | 20,400.00 | 0.00 |
Sebastian Farquhar (filter this donor) | | | 12,000.00 | 0.00 | 0.00 | 12,000.00 | 0.00 |
Cambridge in America (filter this donor) | | | 10,000.00 | 0.00 | 0.00 | 10,000.00 | 0.00 |
Jessica Taylor (filter this donor) | | | 10,000.00 | 0.00 | 0.00 | 10,000.00 | 0.00 |
Effective Altruism Sweden (filter this donor) | | | 4,000.00 | 0.00 | 0.00 | 4,000.00 | 0.00 |
Jordan Alexander (filter this donor) | | | 1,900.00 | 0.00 | 0.00 | 1,900.00 | 0.00 |
Total | -- | -- | 5,341,965.00 | 300,000.00 | 1,411,580.00 | 3,135,385.00 | 495,000.00 |
Graph of spending by donee and year (incremental, not cumulative)
Graph of spending by donee and year (cumulative)
Sorry, we couldn't find any influencer information.
Sorry, we couldn't find any disclosures information.
Sorry, we couldn't find any country information.
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
ALLFED 2019 Annual Report and Fundraising Appeal (GW, IR) | 2019-11-23 | Aron Mill | Alliance to Feed the Earth in Disasters | Berkeley Existential Risk Initiative, Donor lottery, Effective Altruism Grants, Open Philanthropy | Alliance to Feed the Earth in Disasters, Future of Humanity Institute | | Donee donation case | Alternative foods | Aron Mill provides a summary of the work of the Alliance to Feed the Earth in Disasters (ALLFED) in 2019. He lists key supporters as well as partners that ALLFED worked with during the year. The blog post then makes a fundraising appeal and a case for donating to ALLFED. Sections of the blog post include: (1) research output, (2) preparedness and alliance-building, (3) ALLFED team, (4) current projects, and (5) projects in need of funding. |
The Future of Grant-making Funded by Jaan Tallinn at BERI | 2019-08-25 | Board of Directors | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative, Jaan Tallinn | | | Broad donor strategy | | In the blog post, BERI announces that it will no longer handle grantmaking for Jaan Tallinn. The grantmaking is being handed to "one or more other teams and/or processes that are separate from BERI." Andrew Critch will be working on the handoff. BERI will complete administration of grants already committed to. |
80,000 Hours Annual Review – December 2018 | 2019-05-07 | Benjamin Todd | 80,000 Hours | Open Philanthropy, Berkeley Existential Risk Initiative, Effective Altruism Funds | 80,000 Hours | | Donee periodic update | Effective altruism/movement growth/career counseling | This blog post is the annual self-review by 80,000 Hours, originally written in December 2018. Publication was deferred because 80,000 Hours was waiting to hear back on the status of some large grants (in particular, one from the Open Philanthropy Project), but most of the content is still from the December 2018 draft. The post goes into detail about 80,000 Hours' progress in 2018, impact and plan changes, and future expansion plans. Funding gaps are discussed (the funding gap for 2019 is $400,000, and further money will be saved for 2020 and 2021). Grants from the Open Philanthropy Project, BERI, and the Effective Altruism Funds (EA Meta Fund) are mentioned. |
New grants from the Open Philanthropy Project and BERI | 2019-04-01 | Rob Bensinger | Machine Intelligence Research Institute | Open Philanthropy, Berkeley Existential Risk Initiative | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI announces two grants to it, including a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the three-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/. The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support, and the post notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee as well. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team, as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/. |
Seeking Testimonials - IPR, Leverage, and Paradigm | 2018-11-15 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Leverage Research, Institute for Philosophical Research, Paradigm Academy | | Request for reviews of donee | Rationality improvement | In the blog post, Andrew Critch of BERI talks about plans to make grants to Leverage Research and the Institute for Philosophical Research (IPR). Critch says that IPR, Leverage, and Paradigm Academy are three related organizations that BERI internally refers to as ILP. In light of community skepticism about ILP, Critch announces that BERI is inviting feedback on these organizations through a feedback form until December 20. He also explains what sort of feedback will be taken more seriously by BERI. The post was also announced on the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/fvvRZMJJ7g4gzXjSH/seeking-information-on-three-potential-grantee-organizations (GW, IR) on 2018-12-09. |
A model I use when making plans to reduce AI x-risk (GW, IR) | 2018-01-18 | Ben Pace | LessWrong | Berkeley Existential Risk Initiative | | | Review of current state of cause area | AI safety | The author describes his implicit model of AI risk, with four parts: (1) alignment is hard; (2) getting alignment right accounts for most of the variance in whether an AGI system will be positive for humanity; (3) our current epistemic state regarding AGI timelines will continue until we are close (under two years) to having AGI; and (4) given timeline uncertainty, it is best to spend marginal effort on plans that assume or work in shorter timelines. There is a lot of discussion in the comments. |
Announcing BERI Computing Grants | 2017-12-01 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | Donee periodic update | AI safety/other global catastrophic risks | |
Forming an engineering team | 2017-10-25 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | | Donee periodic update | AI safety/other global catastrophic risks | |
What we’re thinking about as we grow - ethics, oversight, and getting things done | 2017-10-19 | Andrew Critch | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | Donee periodic update | AI safety/other global catastrophic risks | Outlines BERI's approach to growth and "ethics" (transparency, oversight, trust, etc.). |
BERI's semi-annual report, August | 2017-09-12 | Rebecca Raible | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | Berkeley Existential Risk Initiative | | Donee periodic update | AI safety/other global catastrophic risks | A blog post announcing BERI's semi-annual report. |
Graph of top 10 donees by amount, showing the timeframe of donations
Donee | Amount (current USD) | Amount rank (out of 47) | Cause area | URL | Influencer | Notes |
---|---|---|---|---|---|---|
Machine Intelligence Research Institute | 300,000.00 | 5 | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support. Other notes: The grant is mentioned by MIRI in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ along with a large grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 from the Open Philanthropy Project. The post says: "at the time of our 2019 fundraiser, we expected to receive a grant from BERI in early 2020, and incorporated this into our reserves estimates. However, we predicted the grant size would be $600k; now that we know the final grant amount, that estimate should be $300k lower." |
Survival and Flourishing Fund | 200,000.00 | 8 | -- | http://existence.org/tallinn-grants-future/ | -- | Handoff of funds donated by Jaan Tallinn, so that grantmaking for these funds can be handled by the Survival and Flourishing Fund. This is because BERI no longer wants to manage grantmaking for these funds. | |
80,000 Hours | 350,000.00 | 4 | Career planning | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | The grant is mentioned by the recipient, 80,000 Hours, in a blog post https://80000hours.org/2019/05/annual-review-dec-2018/ along with grants from the Open Philanthropy Project and the Effective Altruism Funds Meta Fund. | |
Future of Life Institute | 50,000.00 | 22 | AI safety | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | ||
Leverage Research | 50,000.00 | 22 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | ||
Americans for Oxford (Earmark: Yarin Gal) | 25,000.00 | 31 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | ||
Alliance to Feed the Earth in Disasters | 25,000.00 | 31 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | ||
Machine Intelligence Research Institute | 600,000.00 | 2 | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support. Donor retrospective of the donation: BERI would make a further grant to MIRI, indicating continued confidence in the grantee. The followup grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Other notes: This grant is also discussed by the Machine Intelligence Research Institute (the grant recipient) at https://intelligence.org/2017/11/08/major-grant-open-phil/ along with a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 from the Open Philanthropy Project. Announced: 2019-05-09. |
Stefan Schubert | 86,580.00 | 19 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | Research on the psychology of existential risk. | |
Leverage Research | 25,000.00 | 31 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | The grant follows a solicitation of feedback on Leverage and related organizations at http://existence.org/testimonials-ipr-leverage-paradigm/. |
Lucius Caviola | 44,353.00 | 28 | Existential risk | http://web.archive.org/web/20190623203105/http://existence.org/grants/ | -- | Research on the psychology of existential risk. | |
Sebastian Farquhar | 12,000.00 | 42 | AI safety | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | “to attend conferences and purchase compute for experiments related to his PhD research on uncertainty modeling in neural networks”. | |
Justin Shovelain | 100,000.00 | 12 | Existential risks | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | “to lead and develop Convergence Analysis, a new group focused on x-risk strategy research”. | |
Berkeley REACH (Earmark: Sarah Spikes) | 24,000.00 | 37 | Rationality community | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | Grant “to implement improvements in support of the Rationality and Effective Altruism Community Hub (REACH)”. BERI support is mentioned on the REACH Patreon page: "In October 2018, BERI (Berkeley Existential Risk Initiative) awarded a grant of $24k toward REACH operations for 2018-2019. So far, $10k of that has been dispersed as salary for the REACH Manager, Sarah “Stardust” Spikes." |
Effective Altruism Sweden | 4,000.00 | 46 | Existential risks | https://web.archive.org/web/20181107040049/http://existence.org/grants-list/ | -- | “to support Markus Stoor’s project to coordinate two follow-up lunch-to-lunch meetings in Sweden for x-risk-focused individuals”. | |
Association for the Advancement of Artificial Intelligence | 20,000.00 | 40 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “for the support of the 2019 conference on AI, Ethics, and Society”. | |
Jessica Taylor | 10,000.00 | 43 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to work on her research in AI alignment and other areas”. | |
Ben Goldhaber | 150,000.00 | 10 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support his project (co-lead by Jacob Lagerros) to bring x-risk-relevant questions to popular prediction platforms”. | |
Stephanie Zolayvar | 100,000.00 | 12 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to train herself in circling and host circles for people who are promising contributors to reducing x-risk”. | |
Roxanne Heston | 49,532.00 | 27 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to work on a variety of AI policy projects in Washington, D.C.”. | |
Baeo Maltinsky | 55,000.00 | 20 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to further his research on AI and technology trends”. | |
Zoe Cremer | 25,200.00 | 30 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support her as a visiting fellow at CFI, where she will research disagreements about the amount and kind of structure required for AGI”. | |
Colleen McKenzie | 51,000.00 | 21 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to support her research on AI timelines and the processes that produce technical and scientific progress”. | |
Cambridge in America | 10,000.00 | 43 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “for the support of the Leverhulme Centre for the Future of Intelligence”. |
David Manheim | 25,000.00 | 31 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to research aspects of Goodhart’s law, focusing on multi-agent dynamics”. Probably related to https://arxiv.org/pdf/1803.04585.pdf. | |
Luca Rade | 20,400.00 | 39 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to research the implications of coarse-graining by an agent in a complex environment for AI alignment”. | |
Jordan Alexander | 1,900.00 | 47 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to host several meetings at Stanford EA and the Stanford Transhumanist Association”. | |
Bryce Hidysmith | 20,500.00 | 38 | AI safety | https://web.archive.org/web/20181019174535/http://existence.org/grants-list/ | -- | “to analyze global risks from technology through a geopolitical lens”. | |
Institute for Philosophical Research | 50,000.00 | 22 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. | |
Lightcone Infrastructure | 150,000.00 | 10 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Given via the Center for Applied Rationality. To support the development of LessWrong 2.0. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. | |
Center for Applied Rationality | 300,000.00 | 5 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html and at http://www.rationality.org/resources/updates/2018/august-newsletter (CFAR lists $600,000 over two years as the amount). | |
UC Berkeley Foundation | 400,000.00 | 3 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Intended to support Andrew Critch’s work at the Center for Human-compatible AI. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. | |
Theiss Research | 50,000.00 | 22 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Intended to support Carrick Flynn’s research with Allan DaFoe at the Future of Humanity Institute. Announced at http://existence.org/2018/09/18/activity-update-july-and-august-2018.html. | |
Future of Life Institute | 300,000.00 | 5 | AI safety | https://web.archive.org/web/20180905034853/http://existence.org/organization-grants/ https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support. | |
Centre for the Study of Existential Risk | 200,000.00 | 8 | AI safety | https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support; grant via Cambridge in America. |
Association for the Advancement of Artificial Intelligence | 5,000.00 | 45 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | Grant in support of the 2018 Spring Symposium on AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents. See announcement at http://existence.org/2018/04/07/activity-update-february-and-march-2018.html. | |
Lightcone Infrastructure | 100,000.00 | 12 | Rationality improvement | https://web.archive.org/web/20180731180958/http://existence.org:80/grants | -- | Grant via the Center for Applied Rationality for the support of activities to develop and improve the website LessWrong 2.0. See announcement at http://existence.org/2018/04/07/activity-update-february-and-march-2018.html. | |
Center for Applied Rationality | 800,000.00 | 1 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | General support for the purpose of a permanent venue for CFAR workshops. See announcement at http://existence.org/2018/02/08/activity-update-january-2018.html. | |
Global Catastrophic Risk Institute | 25,000.00 | 31 | Existential risks | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support; grant via Social and Environmental Entrepreneurs. | |
Miles Brundage | 32,500.00 | 29 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants | -- | ||
Machine Intelligence Research Institute | 100,000.00 | 12 | AI safety | http://existence.org/grants/ | -- | Intended use of funds (category): Organizational general support. Donor retrospective of the donation: BERI would make two further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Other notes: Announced: 2018-01-11. |
Center for Applied Rationality | 100,000.00 | 12 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | See announcement at http://existence.org/2018/01/11/activity-update-december-2017.html. | |
Association for the Advancement of Artificial Intelligence | 20,000.00 | 40 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | To support the conference on Artificial Intelligence, Ethics, and Society. See announcement at http://existence.org/2017/12/01/activity-update-november-2017.html. | |
Institute for Philosophical Research | 25,000.00 | 31 | Existential risks | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | ||
Future of Life Institute | 100,000.00 | 12 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | ||
Future of Life Institute | 50,000.00 | 22 | AI safety | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support. See announcement at http://existence.org/2017/11/03/activity-update-october-2017.html. | |
Machine Intelligence Research Institute | 100,000.00 | 12 | AI safety | http://existence.org/grants | -- | Intended use of funds (category): Organizational general support. Donor reason for selecting the donee: The announcement page says: "Broadly, we believe these groups [Machine Intelligence Research Institute and Future of Life Institute] to have done good work in the past for reducing existential risk and wish to support their continued efforts." Donor reason for donating at this time (rather than earlier or later): This is one of two opening grants made by BERI to begin its grants program. Donor thoughts on making further donations to the donee: The grant page says: "Over the next few months, we may write more about our reasoning behind these and other grants." It further outlines the kinds of organizations that BERI will be granting to in the short run. Donor retrospective of the donation: BERI would make three further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund. Other notes: Announced: 2017-09-25. |
Sorry, we couldn't find any similar donors.