Open Philanthropy donations made to Berkeley Existential Risk Initiative

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

Item | Value
Country | United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | GiveWell, Good Ventures
Best overview URL | https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username | openphilanthropy
Website | https://www.openphilanthropy.org/
Donations URL | https://www.openphilanthropy.org/giving/grants
Twitter username | open_phil
PredictionBook username | OpenPhilUnofficial
Page on philosophy informing donations | https://www.openphilanthropy.org/about/vision-and-values
Grant application process page | https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data | continuous updates
Regularity with which Donations List Website updates donations data (after donor update) | continuous updates
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)
Org Watch page | https://orgwatch.issarice.com/?organization=Open+Philanthropy

Brief history: Open Philanthropy (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began making strong progress in 2013, and formally separated from GiveWell as the "Open Philanthropy Project" in June 2017. In 2020, it started going by "Open Philanthropy", dropping "Project" from the name.

Brief notes on broad donor philosophy and major focus areas: Open Philanthropy is focused on openness in two ways: being open to ideas about cause selection, and being open in explaining what it is doing. It has endorsed "hits-based giving" and works on AI risk, biosecurity and pandemic preparedness, and other global catastrophic risks, as well as criminal justice reform (United States), animal welfare, and some other areas.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for more questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been included for grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" for which the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Philanthropy than it does for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, but the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but are approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment, and is the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). Lags are typically a few months between decision and grant, and a few months between grant and announcement, largely due to time spent on grant writeup approval.

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york was made by Cari Tuna personally. Although the majority of grants are financed by the Open Philanthropy Project Fund, the source of financing is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed financing is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. Whether a grant is multi-year, and how its amount is distributed across years, is not always explicitly stated on the grant page; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Some grants to universities are labeled "gifts"; this is a classification made by the donee, reflecting the different levels of bureaucratic overhead and funder control associated with grants versus gifts; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information.

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy database are not listed on Donations List Website as being under Open Philanthropy. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was included. All other grants publicly disclosed by Open Philanthropy that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by Open Philanthropy are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding

Full donor page for donor Open Philanthropy

Basic donee information

Item | Value
Country | United States
Website | http://existence.org/
Donate page | http://existence.org/donating/
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Berkeley_Existential_Risk_Initiative
Org Watch page | https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative
Key people | Andrew Critch, Gina Stuessy, Michael Keenan
Launch date | 2017-02
Notes | Launched to provide fast-moving support to existing existential risk organizations. Works closely with Machine Intelligence Research Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, and Future of Humanity Institute. People working at it are closely involved with MIRI and the Center for Applied Rationality.

This entity is also a donor.

Full donee page for donee Berkeley Existential Risk Initiative

Donor–donee relationship

Item | Value

Donor–donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 16 | 195,000 | 421,906 | 30,000 | 35,000 | 70,000 | 100,000 | 150,000 | 195,000 | 210,000 | 403,890 | 705,000 | 1,126,160 | 2,047,268
AI safety | 16 | 195,000 | 421,906 | 30,000 | 35,000 | 70,000 | 100,000 | 150,000 | 195,000 | 210,000 | 403,890 | 705,000 | 1,126,160 | 2,047,268
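
The summary statistics above can be reproduced from the 16 grant amounts in the donation list further down this page. The following is a minimal sketch in Python; the percentile method (nearest-rank, i.e. the value at rank ceil(p/100 * n)) is assumed here rather than documented on the site, but it matches every value in the table, including the listed median, which under this method equals the 50th percentile (195,000) rather than the mean of the two middle values.

```python
import math

# The 16 grant amounts (current USD) from the donation list below, sorted ascending.
amounts = sorted([
    70_000, 70_000, 35_000, 100_000, 2_047_268, 30_000, 1_008_127, 210_000,
    140_050, 1_126_160, 195_000, 210_000, 150_000, 705_000, 250_000, 403_890,
])

def nearest_rank_percentile(sorted_values, p):
    """Value at rank ceil(p/100 * n), with rank floored at 1 (assumed method)."""
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

print(f"count = {len(amounts)}")
print(f"mean = {sum(amounts) / len(amounts):,.0f}")   # 421,906, as in the table
print(f"minimum = {amounts[0]:,}, maximum = {amounts[-1]:,}")
for p in range(10, 100, 10):
    print(f"{p}th percentile: {nearest_rank_percentile(amounts, p):,}")
```

The Overall and AI safety rows coincide because all 16 donations are classified under AI safety.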

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area | Number of donations | Total | 2023 | 2022 | 2021 | 2020 | 2019 | 2017
AI safety (filter this donor) | 16 | 6,750,495.00 | 175,000.00 | 4,661,605.00 | 405,000.00 | 150,000.00 | 955,000.00 | 403,890.00
Total | 16 | 6,750,495.00 | 175,000.00 | 4,661,605.00 | 405,000.00 | 150,000.00 | 955,000.00 | 403,890.00
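
The by-year totals above can similarly be recomputed by grouping the donations by the year of the donation date. Here is a minimal sketch, assuming the dates and amounts taken from the donation list further down this page are the full set of 16 donations; running it reproduces the row above, including the overall total of 6,750,495.00.

```python
from collections import defaultdict

# (donation date, amount in current USD) pairs taken from the donation list below.
donations = [
    ("2023-10", 70_000), ("2023-09", 70_000), ("2023-07", 35_000),
    ("2022-11", 100_000), ("2022-11", 2_047_268), ("2022-06", 30_000),
    ("2022-04", 1_008_127), ("2022-04", 210_000), ("2022-04", 140_050),
    ("2022-02", 1_126_160), ("2021-11", 195_000), ("2021-03", 210_000),
    ("2020-01", 150_000), ("2019-11", 705_000), ("2019-01", 250_000),
    ("2017-07", 403_890),
]

totals_by_year = defaultdict(int)
for date, amount in donations:
    totals_by_year[date[:4]] += amount   # group by the year prefix of the date

for year in sorted(totals_by_year, reverse=True):
    print(f"{year}: {totals_by_year[year]:,.2f}")
print(f"Total: {sum(totals_by_year.values()):,.2f}")
```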

Graph of spending by cause area and year (incremental, not cumulative)

[Graph omitted from this text version]

Graph of spending by cause area and year (cumulative)

[Graph omitted from this text version]

Full list of documents in reverse chronological order (5 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similar to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Committee for Effective Altruism Support2019-02-27Open PhilanthropyOpen Philanthropy Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute Broad donor strategyEffective altruism|AI safetyThe document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community" including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose size was determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
Staff Members’ Personal Donations for Giving Season 20172017-12-18Holden Karnofsky Open PhilanthropyHolden Karnofsky Alexander Berger Nick Beckstead Helen Toner Claire Zabel Lewis Bollard Ajeya Cotra Morgan Davis Michael Levine GiveWell top charities GiveWell GiveDirectly EA Giving Group Berkeley Existential Risk Initiative Effective Altruism Funds Sentience Institute Encompass The Humane League The Good Food Institute Mercy For Animals Compassion in World Farming USA Animal Equality Donor lottery Against Malaria Foundation GiveDirectly Periodic donation list documentationOpen Philanthropy Project staff members describe where they are donating this year, and the considerations that went into the donation decision. By policy, amounts are not disclosed. This is the first standalone blog post of this sort by the Open Philanthropy Project; in previous years, the corresponding donations were documented in the GiveWell staff members donation post.

Full list of donations in reverse chronological order (16 donations)

Graph of all donations (with known year of donation), showing the timeframe of donations

Graph of donations and their timeframes
Amount (current USD) | Amount rank (out of 16) | Donation date | Cause area | URL | Influencer | Notes
70,000.00 | 13 | 2023-10 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-university-collaboration-program/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support [BERI's] university collaboration program. Selected applicants become eligible for support and services from BERI that would be difficult or impossible to obtain through normal university channels. BERI will use these funds to increase the size of its 2024 cohort." The page https://existence.org/2023/07/27/trial-collaborations-2023.html is linked.

Other notes: Intended funding timeframe in months: 12.
70,000.00 | 13 | 2023-09 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-scalable-oversight-dataset/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant " to support the creation of a scalable oversight dataset. The purpose of the dataset is to collect questions that non-experts can’t answer even with the internet at their disposal; these kinds of questions can be used to test how well AI systems can lead humans to the right answers without misleading them."
35,000.00 | 15 | 2023-07 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-lab-retreat/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a retreat for Anca Dragan’s BAIR lab group, where members will discuss potential risks from advanced artificial intelligence."

Other notes: Intended funding timeframe in months: 1.
100,000.00 | 12 | 2022-11 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support-2/ | -- | Intended use of funds (category): Organizational general support

Intended use of funds: Grant "for general support. BERI seeks to reduce existential risks to humanity by providing services and support to university-based research groups, including the Center for Human-Compatible AI at the University of California, Berkeley."
2,047,268.00 | 1 | 2022-11 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support their collaboration with the Stanford Existential Risks Initiative (SERI) on SERI’s Machine Learning Alignment Theory Scholars (MATS) program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community. This grant will support the MATS program’s third cohort."

Donor reason for donating at this time (rather than earlier or later): The grant is made in time for the third cohort of the SERI-MATS program; this is the cohort being funded by the grant.
Intended funding timeframe in months: 6

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ for the London-based extension suggests continued satisfaction with this funded program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year. See also the companion grants: https://www.openphilanthropy.org/grants/ai-safety-support-seri-mats-program/ to AI Safety Support and https://www.openphilanthropy.org/grants/conjecture-seri-mats-2023/ to Conjecture for the London-based extension.
30,000.00 | 16 | 2022-06 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-language-model-alignment-research/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a project led by Professor Samuel Bowman of New York University to develop a dataset and accompanying methods for language model alignment research."

Other notes: Intended funding timeframe in months: 36.
1,008,127.00 | 3 | 2022-04 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Berkeley Existential Risk Initiative to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the second cohort of the SERI Machine Learning Alignment Theory Scholars (MATS) Program. MATS is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment, and connect them with the Berkeley alignment research community."

Donor reason for donating at this time (rather than earlier or later): The grant is made in time for the second cohort of the SERI-MATS program; this is the cohort being funded by the grant.
Intended funding timeframe in months: 6

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the third cohort of the SERI-MATS program suggests the donor's continued satisfaction with the SERI-MATS program. The grant https://www.openphilanthropy.org/grants/conjecture-seri-mats-program-in-london/ for the London-based extension of this cohort (the second cohort) likewise suggests the donor's satisfaction with the program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year.
210,000.00 | 7 | 2022-04 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-ai-standards-2022/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work on the development and implementation of AI safety standards that may reduce potential risks from advanced artificial intelligence."

Donor reason for donating at this time (rather than earlier or later): The grant is made at the same time as the companion grant https://www.openphilanthropy.org/grants/center-for-long-term-cybersecurity-ai-standards-2022/ to the Center for Long-Term Cybersecurity (CLTC), via the University of California, Berkeley.

Other notes: There is a companion grant https://www.openphilanthropy.org/grants/center-for-long-term-cybersecurity-ai-standards-2022/ to the Center for Long-Term Cybersecurity (CLTC), via the University of California, Berkeley.
140,050.00 | 11 | 2022-04 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-david-krueger-collaboration/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the Berkeley Existential Risk Initiative to support its collaboration with Professor David Krueger."

Other notes: The grant page says: "The grant amount was updated in August 2023."
1,126,160.00 | 2 | 2022-02 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2022/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. BERI will use the funding to facilitate the creation of an in-house compute cluster for CHAI’s use, purchase compute resources, and hire a part-time system administrator to help manage the cluster."
195,000.00 | 9 | 2021-11 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program-2/ | -- | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support its collaboration with the Stanford Existential Risks Initiative (SERI) on the SERI ML Alignment Theory Scholars (MATS) Program. MATS is a two-month program where students will research problems related to AI alignment while supervised by a mentor."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ and https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ for the second and third cohorts of the SERI-MATS program suggest the donor's continued satisfaction with the SERI-MATS program.

Other notes: See https://www.serimats.org/program for details of the program including its timeline. Although the research phase of the timeline is just two months, the application process, training phase, and extension phase together make up about half a year. Intended funding timeframe in months: 6.
210,000.00 | 7 | 2021-03 | AI safety/technical research/talent pipeline | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-summer-fellowships/ | Claire Zabel | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to provide stipends for the Stanford Existential Risks Initiative (SERI) summer research fellowship program."

Donor retrospective of the donation: The multiple later grants https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program-2/ https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-seri-mats-program/ and https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-machine-learning-alignment-theory-scholars/ from Open Philanthropy to BERI for the SERI-MATS program, a successor of sorts to this program, suggest satisfaction with the outcome of this grant.

Other notes: Intended funding timeframe in months: 2.
150,000.00 | 10 | 2020-01 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support/ | Claire Zabel | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "BERI seeks to reduce existential risks to humanity, and collaborates with other long-termist organizations, including the Center for Human-Compatible AI at UC Berkeley. This funding is intended to help BERI establish new collaborations."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-general-support-2/ suggests continued satisfaction with the grantee.
705,000.00 | 4 | 2019-11 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2019/ | Daniel Dewey | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2022/ from Open Philanthropy to BERI for the same purpose (CHAI collaboration) suggests satisfaction with the outcome of the grant.

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 24; announced: 2019-12-13.
250,000.00 | 6 | 2019-01 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-ml-engineers/ | Daniel Dewey | Donation process: The grant page describes the donation decision as being based on "conversations with various professors and students"

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/grants/uc-berkeley-center-for-human-compatible-ai-2016/ for the launch of CHAI and previous grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/ to collaborate with CHAI. Announced: 2019-03-04.
403,890.00 | 5 | 2017-07 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/ | Daniel Dewey | Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant (https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai). The funding is intended to help BERI hire contractors and part-time employees to help CHAI, such as web development and coordination support, research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff. More details are in the grant proposal https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx

Other notes: Announced: 2017-09-28.