Daniel Dewey money moved

This is an online portal with information on donations that were publicly announced (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, who also continues to contribute (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

- Full list of documents in reverse chronological order
- Full list of donations in reverse chronological order
- Donation amounts by donee and year
- Donation amounts by donor and year for influencer Daniel Dewey

This entity is also a donee.

Full list of documents in reverse chronological order (1 document)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
My current thoughts on MIRI’s highly reliable agent design work (GW, IR) | 2017-07-07 | Daniel Dewey | Effective Altruism Forum | Open Philanthropy | Machine Intelligence Research Institute | | Evaluator review of donee | AI safety | Post discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions, which could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin.

Full list of donations in reverse chronological order (34 donations)

Donor | Donee | Amount (current USD) | Donation date | Cause area | URL | Notes
Open Philanthropy | University of Cambridge | 250,000.00 | 2021-04 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-cambridge-david-krueger | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor David Krueger’s machine learning research."

Other notes: Grant made via Cambridge in America. Intended funding timeframe in months: 48.
Open Philanthropy | Open Phil AI Fellowship | 1,300,000.00 | 2021-04 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class | Donation process: According to the grant page: "These [five] fellows were selected from 397 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarship support to five machine learning researchers over five years.

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): An explicit reason for the amount is not specified, and the total amount is lower than in previous years, but the amount per researcher ($260,000) is a little higher than in previous years. It's likely that the amount per researcher is determined first and the total amount is the sum of these.
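A quick arithmetic check of this reading (our calculation, not stated on the grant page):

$\$1{,}300{,}000 \div 5 = \$260{,}000$ per fellow, matching the per-researcher figure above.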

Donor reason for donating at this time (rather than earlier or later): This is the fourth annual set of grants, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/open-phil-ai-fellowship-2022-class/ confirms that the program would continue.

Other notes: The initial grant page only listed four of the five fellows and an amount of $1,000,000. The fifth fellow, Tan Zhi-Xuan, was added later and the amount was increased to $1,300,000.
Open Philanthropy | Massachusetts Institute of Technology | 1,430,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Aleksandr Madry on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | University of California, Berkeley | 330,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Dawn Song on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | University of California, Berkeley | 330,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor David Wagner on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | Berryville Institute of Machine Learning | 150,000.00 | 2021-01 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berryville-institute-of-machine-learning | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research led by Gary McGraw on machine learning security. The research will focus on building a taxonomy of known attacks on machine learning, exploring a hypothesis of representation and machine learning risk, and performing an architectural risk analysis of machine learning systems."

Donor reason for selecting the donee: The grant page says: "Our potential risks from advanced artificial intelligence team hopes that the research will help advance the field of machine learning security."
Open Philanthropy | University of Toronto | 520,000.00 | 2020-12 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-toronto-machine-learning-research | Donation process: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research on understanding, predicting, and controlling machine learning systems, led by Professor Chris Maddison, a former Open Phil AI Fellow. This funding is intended to enable three students and a postdoctoral researcher to work with Professor Maddison on the research."

Donor reason for selecting the donee: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Other notes: Intended funding timeframe in months: 48.
Open Philanthropy | Smitha Milli | 370.00 | 2020-10 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/smitha-milli-participatory-approaches-machine-learning-workshop | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Participatory Approaches to Machine Learning, a virtual workshop held during the 2020 International Conference on Machine Learning."

Donor reason for selecting the donee: The donee had previously been a recipient of the Open Phil AI Fellowship, and that relationship likely helped pave the way for this grant.

Donor reason for donating that amount (rather than a bigger or smaller amount): No specific reasons are given for the amount; this is an unusually small grant size by the donor's standards. The amount is likely determined by the limited funding needs of the grantee.

Donor reason for donating at this time (rather than earlier or later): The 2020 International Conference on Machine Learning was held in July 2020, so this grant appears to have been made after the workshop it supported had already taken place. No details on timing are provided.
Intended funding timeframe in months: 1
Open Philanthropy | International Conference on Learning Representations | 3,500.00 | 2020-05 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ICLR-machine-learning-paper-awards | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to the International Conference on Learning Representations to provide awards for the best papers submitted as part of the “Towards Trustworthy Machine Learning” virtual workshop.
Open Philanthropy | Open Phil AI Fellowship | 2,300,000.00 | 2020-05 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class | Donation process: According to the grant page: "These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarship support to ten machine learning researchers over five years.

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests." In a comment reply https://forum.effectivealtruism.org/posts/DXqxeg3zj6NefR9ZQ/open-philanthropy-our-progress-in-2019-and-plans-for-2020#BCvuhRCg9egAscpyu (GW, IR) on the Effective Altruism Forum, grant investigator Catherine Olsson writes: "But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is comparable to the total amount of the 2019 fellowship grants, though it is distributed among a slightly larger pool of people.
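As a rough per-fellow comparison (our arithmetic, using the figures on the two grant pages):

$\$2{,}300{,}000 \div 10 = \$230{,}000$ per fellow for the 2020 class, versus $\$2{,}325{,}000 \div 8 \approx \$290{,}625$ per fellow for the 2019 class.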

Donor reason for donating at this time (rather than earlier or later): This is the third annual set of grants, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirms that the program would continue.

Other notes: Announced: 2020-05-12.
Open Philanthropy | World Economic Forum | 50,000.00 | 2020-04 | AI safety/governance | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/world-economic-forum-global-ai-council-workshop | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a workshop hosted by the Global AI Council and co-developed with the Center for Human-Compatible AI at UC Berkeley. The workshop will facilitate the development of AI policy recommendations that could lead to future economic prosperity, and is part of a series of workshops examining solutions to maximize economic productivity and human wellbeing."

Other notes: Intended funding timeframe in months: 1.
Open Philanthropy | Stanford University | 6,500.00 | 2020-01 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-ai-safety-seminar | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant "is intended to fund the travel costs for experts on AI safety to present at the [AI safety] seminar [led by Dorsa Sadigh]."

Other notes: Intended funding timeframe in months: 1.
Open Philanthropy | Press Shop | 17,000.00 | 2020-01 | AI safety/movement growth | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/press-shop-human-compatible | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to the publicity firm Press Shop to support expenses related to publicizing Professor Stuart Russell’s book Human Compatible: Artificial Intelligence and the Problem of Control."

Donor reason for selecting the donee: The grant page links this grant to past support for the Center for Human-Compatible AI (CHAI) where Russell is director, so the reason for the grant is likely similar to reasons for that past support. Grant pages: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019

Donor reason for donating at this time (rather than earlier or later): The grant is made shortly after the release of the book (book release date: October 8, 2019) so the timing is likely related to the release date.
Open Philanthropy | Center for Human-Compatible AI | 200,000.00 | 2019-11 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says "CHAI plans to use these funds to support graduate student and postdoc research."

Other notes: Open Phil makes a $705,000 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 to the Berkeley Existential Risk Initiative (BERI) at the same time (November 2019) to collaborate with CHAI. Intended funding timeframe in months: 24; announced: 2019-12-20.
Open Philanthropy | Ought | 1,000,000.00 | 2019-11 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 made on the recommendation of the Committee for Effective Altruism Support suggests that Open Phil would continue to have a high opinion of the work of Ought.

Other notes: Intended funding timeframe in months: 24; announced: 2020-02-14.
Open Philanthropy | University of California, Berkeley | 1,111,000.00 | 2019-11 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-research-2019 | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "This funding will allow Professor Steinhardt to fund students to work on robustness, value learning, aggregating preferences, and other areas of machine learning."

Other notes: This is the third year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Jacob Steinhardt) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 36; announced: 2020-02-19.
Open Philanthropy | Berkeley Existential Risk Initiative | 705,000.00 | 2019-11 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2019/ | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support continued work with the Center for Human-Compatible AI (CHAI) at UC Berkeley. This includes one year of support for machine learning researchers hired by BERI, and two years of support for CHAI."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-collaboration-2022/ from Open Philanthropy to BERI for the same purpose (CHAI collaboration) suggests satisfaction with the outcome of the grant.

Other notes: Open Phil makes a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai-2019 to the Center for Human-Compatible AI at the same time (November 2019). Intended funding timeframe in months: 24; announced: 2019-12-13.
Open Philanthropy | Future of Life Institute | 100,000.00 | 2019-10 | Global catastrophic risks | https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-life-institute-general-support-2019 | Intended use of funds (category): Organizational general support

Other notes: Announced: 2019-11-18.
Open Philanthropy | Open Phil AI Fellowship | 2,325,000.00 | 2019-05 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class | Donation process: According to the grant page: "These fellows were selected from more than 175 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarship support to eight machine learning researchers over five years.

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is about double the amount of the 2018 grant, although the number of people supported is just one more (8 instead of 7). No explicit comparison of grant amounts is made on the grant page.
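Worked out (our arithmetic, not stated on the grant page):

$\$2{,}325{,}000 \div \$1{,}135{,}000 \approx 2.05$, while per-fellow support rose from $\$1{,}135{,}000 \div 7 \approx \$162{,}000$ (2018) to $\$2{,}325{,}000 \div 8 \approx \$290{,}625$ (2019).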

Donor reason for donating at this time (rather than earlier or later): This is the second annual set of grants, decided through an annual application process, with the announcement made in May/June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020) and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirm that the program would continue. Among the grantees, Smitha Milli would receive further support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/smitha-milli-participatory-approaches-machine-learning-workshop from Open Philanthropy, indicating continued confidence in the grantee.

Other notes: Announced: 2019-05-17.
Open Philanthropy | Berkeley Existential Risk Initiative | 250,000.00 | 2019-01 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-chai-ml-engineers/ | Donation process: The grant page describes the donation decision as being based on "conversations with various professors and students".

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).

Donor reason for selecting the donee: The grant page says: "Based on conversations with various professors and students, we believe CHAI could make more progress with more engineering support."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berkeley-existential-risk-initiative-chai-collaboration-2019 suggests that the donor would continue to stand behind the reasoning for the grant.

Other notes: Follows previous support https://www.openphilanthropy.org/grants/uc-berkeley-center-for-human-compatible-ai-2016/ for the launch of CHAI and previous grant https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/ to collaborate with CHAI. Announced: 2019-03-04.
Open Philanthropy | University of California, Berkeley | 1,145,000.00 | 2018-11 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-california-berkeley-artificial-intelligence-safety-research-2018 | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "for machine learning researchers Pieter Abbeel and Aviv Tamar to study uses of generative models for robustness and interpretability. This funding will allow Mr. Abbeel and Mr. Tamar to fund PhD students and summer undergraduates to work on classifiers, imitation learning systems, and reinforcement learning systems."

Other notes: This is the second year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It continues an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Pieter Abbeel) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 36; announced: 2018-12-11.
Open Philanthropy | Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai | 2,351.00 | 2018-11 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/study-robustness-machine-learning-models | Donation process: The grant page says: "This project was supported through a contractor agreement. While we typically do not publish pages for contractor agreements, we occasionally opt to do so."

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to reimburse technology costs for their efforts to study the robustness of machine learning models, especially robustness to unforeseen adversaries."

Donor reason for selecting the donee: The grant page says "We believe this will accelerate progress in adversarial, worst-case robustness in machine learning."
Open Philanthropy | GoalsRL | 7,500.00 | 2018-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/goals-rl-workshop-on-goal-specifications-for-reinforcement-learning | Discretionary grant to offset travel, registration, and other expenses associated with attending the GoalsRL 2018 workshop on goal specifications for reinforcement learning. The workshop was organized by Ashley Edwards, a recent computer science PhD candidate interested in reward learning. Announced: 2018-10-05.
Open Philanthropy | Stanford University | 100,000.00 | 2018-07 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramer | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer."

Donor reason for selecting the donee: The grant page gives three reasons: (1) Florian Tramer is a very strong PhD student, (2) excellent machine learning security work is important for AI safety, (3) increased funding in areas relevant to AI safety, like machine learning security, is expected to lead to more long-term benefits for AI safety.

Other notes: Grant is structured as an unrestricted "gift" to Stanford University Computer Science. Announced: 2018-09-06.
Open Philanthropy | AI Impacts | 100,000.00 | 2018-06 | AI safety/strategy | https://www.openphilanthropy.org/grants/ai-impacts-general-support-2018/ | Donation process: Discretionary grant

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "AI Impacts plans to use this grant to work on strategic questions related to potential risks from advanced artificial intelligence."

Donor retrospective of the donation: Renewals in 2020 https://www.openphilanthropy.org/grants/ai-impacts-general-support-2020/ and 2022 https://www.openphilanthropy.org/grants/ai-impacts-general-support/ suggest continued satisfaction with the grantee, though the amount of the 2020 renewal grant is lower (just $50,000).

Other notes: The grant is via the Machine Intelligence Research Institute. Announced: 2018-06-27.
Open Philanthropy | Ought | 525,000.00 | 2018-05 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support | Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Proposed_activities "Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today." It also links to Ought's description of its approach at https://ought.org/approach Also, the budget section https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Budget says: "Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher."

Donor reason for selecting the donee: The case for the grant includes: (a) Open Phil considers research on deliberation and amplification important for AI safety; (b) Paul Christiano is excited by Ought's approach, and Open Phil trusts his judgment; (c) Ought's plan appears flexible, and Open Phil thinks Andreas Stuhlmüller is ready to notice and respond to any problems by adjusting his plans; (d) Open Phil has indications that Ought is well-run and has a reasonable chance of success.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but the grant is combined with another grant from Open Philanthropy Project technical advisor Paul Christiano.

Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Key_questions_for_follow-up lists some questions for follow-up.

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 suggest that Open Phil would continue to have a high opinion of Ought.

Other notes: Intended funding timeframe in months: 36; announced: 2018-05-30.
Open Philanthropy | Open Phil AI Fellowship | 1,135,000.00 | 2018-05 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-fellows-program-2018 | Donation process: According to the grant page: "These fellows were selected from more than 180 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research"

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarship support to seven machine learning researchers over five years.

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests."

Donor reason for donating at this time (rather than earlier or later): This is the first annual set of grants, decided through an annual application process.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The corresponding grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2019-class (2019), https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class (2020), and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirm that these grants would be made annually. Among the grantees, Chris Maddison would continue receiving support from Open Philanthropy in the form of support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-toronto-machine-learning-research for his students, indicating continued endorsement of his work.

Other notes: Announced: 2018-05-31.
Open Philanthropy | Stanford University | 6,771.00 | 2018-04 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learning | Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the Neural Information Processing Systems (NIPS) workshop “Machine Learning and Computer Security” at https://nips.cc/Conferences/2017/Schedule?showEvent=8775

Donor reason for selecting the donee: No specific reasons are included on the grant page, but several of the presenters at the previous year's (2017) workshop would have their research funded by Open Philanthropy, including Jacob Steinhardt, Percy Liang, and Dawn Song.

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was likely determined by the cost of running the workshop. The original amount of $2,539 was updated in June 2020 to $6,771.

Donor reason for donating at this time (rather than earlier or later): The timing was likely determined by the timing of the conference.
Intended funding timeframe in months: 1

Other notes: Announced: 2018-04-18.
Open Philanthropy | AI Scholarships | 159,000.00 | 2018-02 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-scholarships-2018 | Discretionary grant; total across grants to two artificial intelligence researchers, both over two years. The funding is intended to be used for the students’ tuition, fees, living expenses, and travel during their respective degree programs, and is part of an overall effort to grow the field of technical AI safety by supporting value-aligned and qualified early-career researchers. Recipients are Dmitrii Krasheninnikov, master’s degree, University of Amsterdam, and Michael Cohen, master’s degree, Australian National University. Announced: 2018-07-26.
Open Philanthropy | University of California, Berkeley | 1,450,016.00 | 2017-10 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-ai-safety-levine-dragan | Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "The work will be led by Professors Sergey Levine and Anca Dragan, who will each devote approximately 20% of their time to the project, with additional assistance from four graduate students. They initially intend to focus their research on how objective misspecification can produce subtle or overt undesirable behavior in robotic systems, though they have the flexibility to adjust their focus during the grant period." The project narrative is at https://www.openphilanthropy.org/files/Grants/UC_Berkeley/Levine_Dragan_Project_Narrative_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our broad goals for this funding are to encourage top researchers to work on AI alignment and safety issues in order to build a pipeline for young researchers; to support progress on technical problems; and to generally support the growth of this area of study."

Other notes: This is the first year that Open Phil makes a grant for AI safety research to the University of California, Berkeley (excluding the founding grant for the Center for Human-Compatible AI). It would begin an annual tradition of multi-year grants to the University of California, Berkeley announced in October/November, though the researchers would be different each year. Note that the grant is to UC Berkeley, but at least one of the researchers (Anca Dragan) is affiliated with the Center for Human-Compatible AI. Intended funding timeframe in months: 48; announced: 2017-10-20.
Open Philanthropy | Berkeley Existential Risk Initiative | 403,890.00 | 2017-07 | AI safety/technical research | https://www.openphilanthropy.org/grants/berkeley-existential-risk-initiative-core-support-and-chai-collaboration/ | Donation process: BERI submitted a grant proposal at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work with the Center for Human-Compatible AI (CHAI) at UC Berkeley, to which the Open Philanthropy Project provided a two-year founding grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai The funding is intended to help BERI hire contractors and part-time employees to help CHAI, such as web development and coordination support, research engineers, software developers, or research illustrators. This funding is also intended to help support BERI’s core staff. More details are in the grant proposal: https://www.openphilanthropy.org/files/Grants/BERI/BERI_Grant_Proposal_2017.pdf

Donor reason for selecting the donee: The grant page says: "Our impression is that it is often difficult for academic institutions to flexibly spend funds on technical, administrative, and other support services. We currently see BERI as valuable insofar as it can provide CHAI with these types of services, and think it’s plausible that BERI will be able to provide similar help to other academic institutions in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grantee submitted a budget for the CHAI collaboration project at https://www.openphilanthropy.org/files/Grants/BERI/BERI_Budget_for_CHAI_Collaboration_2017.xlsx

Other notes: Announced: 2017-09-28.
Open Philanthropy | Stanford University | 1,337,600.00 | 2017-05 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang | Donation process: The grant is the result of a proposal written by Percy Liang. The writing of the proposal was funded by a previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant made in March 2017. The proposal was reviewed by two of Open Phil's technical advisors, who both felt largely positive about the proposed research directions.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant is intended to fund about 20% of Percy Liang's time as well as about three graduate students. Liang expects to focus on a subset of these topics: robustness against adversarial attacks on ML systems, verification of the implementation of ML systems, calibrated/uncertainty-aware ML, and natural language supervision.

Donor reason for selecting the donee: The grant page says: "Both [technical advisors who reviewed the grant proposal] felt largely positive about the proposed research directions and recommended to Daniel that Open Philanthropy make this grant, despite some disagreements [...]."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is likely determined by the grant proposal details; it covers about 20% of Percy Liang's time as well as about three graduate students.

Donor reason for donating at this time (rather than earlier or later): The timing is likely determined by the timing of the grant proposal being ready.
Intended funding timeframe in months: 48

Donor thoughts on making further donations to the donee: The grant page says: "At the end of the grant period, we will decide whether to renew our support based on our technical advisors’ evaluation of Professor Liang’s work so far, his proposed next steps, and our assessment of how well his research program has served as a pipeline for students entering the field. We are optimistic about the chances of renewing our support. We think the most likely reason we might choose not to renew would be if Professor Liang decides that AI alignment research isn’t a good fit for him or for his students."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/stanford-university-ai-alignment-research-2021/ suggests satisfaction with the grant outcome.

Other notes: Announced: 2017-09-26.
Open Philanthropy | Stanford University | 25,000.00 | 2017-03 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant | Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to enable Professor Liang to spend significant time engaging in our process to determine whether to provide his research group with a much larger grant." The larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang would be made.

Donor thoughts on making further donations to the donee: The grant is a planning grant intended to help Percy Liang write up a proposal for a bigger grant.

Donor retrospective of the donation: The bigger proposal whose writing was funded by this grant would lead to a bigger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang in May 2017.

Other notes: Announced: 2017-09-26.
Open Philanthropy | Distill | 25,000.00 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/distill-prize-clarity-machine-learning-general-support | Grant covers $25,000 out of a total $125,000 USD initial endowment for the Distill prize https://distill.pub/prize/ administered by the Open Philanthropy Project. Other contributors to the endowment include Chris Olah, Greg Brockman, Jeff Dean, and DeepMind. The Open Philanthropy Project grant page says: "Without our funding, we estimate that there is a 60% chance that the prize would be administered at the same level of quality, a 30% chance that it would be administered at lower quality, and a 10% chance that it would not move forward at all. We believe that our assistance in administering the prize will also be of significant help to Distill." Announced: 2017-08-11.

Donation amounts by donee and year

Donee | Donors influenced | Cause area | Metadata | Total | 2021 | 2020 | 2019 | 2018 | 2017
Open Phil AI Fellowship | Open Philanthropy (filter this donor) | | | 7,060,000.00 | 1,300,000.00 | 2,300,000.00 | 2,325,000.00 | 1,135,000.00 | 0.00
University of California, Berkeley | Open Philanthropy (filter this donor) | | FB Tw WP Site | 4,366,016.00 | 660,000.00 | 0.00 | 1,111,000.00 | 1,145,000.00 | 1,450,016.00
Ought | Open Philanthropy (filter this donor) | AI safety | Site | 1,525,000.00 | 0.00 | 0.00 | 1,000,000.00 | 525,000.00 | 0.00
Stanford University | Open Philanthropy (filter this donor) | | FB Tw WP Site | 1,475,871.00 | 0.00 | 6,500.00 | 0.00 | 106,771.00 | 1,362,600.00
Massachusetts Institute of Technology | Open Philanthropy (filter this donor) | | FB Tw WP Site | 1,430,000.00 | 1,430,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Berkeley Existential Risk Initiative | Open Philanthropy (filter this donor) | AI safety/other global catastrophic risks | Site TW | 1,358,890.00 | 0.00 | 0.00 | 955,000.00 | 0.00 | 403,890.00
University of Toronto | Open Philanthropy (filter this donor) | | FB Tw WP Site | 520,000.00 | 0.00 | 520,000.00 | 0.00 | 0.00 | 0.00
University of Cambridge | Open Philanthropy (filter this donor) | | FB Tw WP Site | 250,000.00 | 250,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Center for Human-Compatible AI | Open Philanthropy (filter this donor) | AI safety | WP Site TW | 200,000.00 | 0.00 | 0.00 | 200,000.00 | 0.00 | 0.00
AI Scholarships | Open Philanthropy (filter this donor) | | | 159,000.00 | 0.00 | 0.00 | 0.00 | 159,000.00 | 0.00
Berryville Institute of Machine Learning | Open Philanthropy (filter this donor) | | | 150,000.00 | 150,000.00 | 0.00 | 0.00 | 0.00 | 0.00
AI Impacts | Open Philanthropy (filter this donor) | AI safety | Site | 100,000.00 | 0.00 | 0.00 | 0.00 | 100,000.00 | 0.00
Future of Life Institute | Open Philanthropy (filter this donor) | AI safety/other global catastrophic risks | FB Tw WP Site | 100,000.00 | 0.00 | 0.00 | 100,000.00 | 0.00 | 0.00
World Economic Forum | Open Philanthropy (filter this donor) | | FB Tw WP Site | 50,000.00 | 0.00 | 50,000.00 | 0.00 | 0.00 | 0.00
Distill | Open Philanthropy (filter this donor) | AI capabilities/AI safety | Tw Site | 25,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 25,000.00
Press Shop | Open Philanthropy (filter this donor) | | | 17,000.00 | 0.00 | 17,000.00 | 0.00 | 0.00 | 0.00
GoalsRL | Open Philanthropy (filter this donor) | AI safety | Site | 7,500.00 | 0.00 | 0.00 | 0.00 | 7,500.00 | 0.00
International Conference on Learning Representations | Open Philanthropy (filter this donor) | | | 3,500.00 | 0.00 | 3,500.00 | 0.00 | 0.00 | 0.00
Daniel Kang|Jacob Steinhardt|Yi Sun|Alex Zhai | Open Philanthropy (filter this donor) | | | 2,351.00 | 0.00 | 0.00 | 0.00 | 2,351.00 | 0.00
Smitha Milli | Open Philanthropy (filter this donor) | | | 370.00 | 0.00 | 370.00 | 0.00 | 0.00 | 0.00
Total | -- | -- | -- | 18,800,498.00 | 3,790,000.00 | 2,897,370.00 | 5,691,000.00 | 3,180,622.00 | 3,241,506.00

Graph of spending by donee and year (incremental, not cumulative)

[Graph omitted]

Graph of spending by donee and year (cumulative)

[Graph omitted]

Donation amounts by donor and year for influencer Daniel Dewey

Donor | Donees | Total | 2021 | 2020 | 2019 | 2018 | 2017
Open Philanthropy (filter this donee) | AI Impacts (filter this donee), AI Scholarships (filter this donee), Berkeley Existential Risk Initiative (filter this donee), Berryville Institute of Machine Learning (filter this donee), Center for Human-Compatible AI (filter this donee), Daniel Kang (filter this donee), Jacob Steinhardt (filter this donee), Yi Sun (filter this donee), Alex Zhai (filter this donee), Distill (filter this donee), Future of Life Institute (filter this donee), GoalsRL (filter this donee), International Conference on Learning Representations (filter this donee), Massachusetts Institute of Technology (filter this donee), Open Phil AI Fellowship (filter this donee), Ought (filter this donee), Press Shop (filter this donee), Smitha Milli (filter this donee), Stanford University (filter this donee), University of California, Berkeley (filter this donee), University of Cambridge (filter this donee), University of Toronto (filter this donee), World Economic Forum (filter this donee) | 18,800,498.00 | 3,790,000.00 | 2,897,370.00 | 5,691,000.00 | 3,180,622.00 | 3,241,506.00
Total | -- | 18,800,498.00 | 3,790,000.00 | 2,897,370.00 | 5,691,000.00 | 3,180,622.00 | 3,241,506.00

Graph of spending by donee and year (incremental, not cumulative)

[Graph omitted]

Graph of spending by donee and year (cumulative)

[Graph omitted]