Catherine Olsson money moved

This is an online portal with information on donations that were announced publicly (or shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

This entity is also a donor.

Full list of documents in reverse chronological order (0 documents)

There are no documents associated with this influencer.

Full list of donations in reverse chronological order (15 donations)

Donor | Donee | Amount (current USD) | Donation date | Cause area | URL | Notes
Jaan Tallinn | Charter Cities Institute | 261,000.00 | 2022-06-23 | Alternate governance/charter cities | https://jaan.online/philanthropy/donations.html

Donation process: Part of the Survival and Flourishing Fund's 2022 H1 grants https://survivalandflourishing.fund/sff-2022-h1-recommendations, based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal value functions. Recommenders specified a marginal value function for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's seventh grant round.

Other notes: The grantee is recorded as "Center for Innovative Governance Research" in Jaan Tallinn's donation log; this is the organization's formal name.
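
The S-process quoted above is an optimization over simulated allocations rather than a simple vote. Below is a minimal sketch of the core greedy idea under simplifying assumptions of ours (each application's marginal value is a decreasing step function, and dollars flow to wherever the current marginal value is highest); the recommender weighting and funder adjustments that the real process includes are omitted, and this is not SFF's actual implementation.

```python
# A toy S-process-style allocator (illustrative only; not SFF's actual code).
# Each application has a decreasing list of marginal values, one per `step`
# dollars of funding. Dollars go greedily to the highest current marginal value.
import heapq

def allocate(budget, marginal_values, step=1_000):
    # Max-heap via negated values: (-marginal value, applicant, next step index).
    heap = [(-values[0], name, 0) for name, values in marginal_values.items() if values]
    heapq.heapify(heap)
    grants = {name: 0 for name in marginal_values}
    while budget >= step and heap:
        neg_value, name, i = heapq.heappop(heap)
        grants[name] += step
        budget -= step
        if i + 1 < len(marginal_values[name]):
            heapq.heappush(heap, (-marginal_values[name][i + 1], name, i + 1))
    return grants

# Hypothetical applications and values, for illustration:
values = {"Org A": [9, 7, 2], "Org B": [8, 3, 1]}
print(allocate(5_000, values))  # -> {'Org A': 3000, 'Org B': 2000}
```

With decreasing marginal values, this greedy loop maximizes total modeled value for the budget, which is why the process centers on eliciting marginal value functions rather than single point estimates.
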
Jaan Tallinn | Redwood Research | 1,274,000.00 | 2022-06-16 | AI safety/technical research | https://survivalandflourishing.fund/sff-2022-h1-recommendations

Donation process: Part of the Survival and Flourishing Fund's 2022 H1 grants https://survivalandflourishing.fund/sff-2022-h1-recommendations, based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal value functions. Recommenders specified a marginal value function for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different value functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's seventh grant round and the first with a grant to this grantee.

Donor retrospective of the donation: A later round https://survivalandflourishing.fund/sff-2023-h1-recommendations includes a grant recommendation to Redwood Research for $1,098,000, suggesting satisfaction with the outcome of this grant.
Open Philanthropy | Stanford University | 330,792.00 | 2021-08 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Dimitris Tsipras on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
Open Philanthropy | Stanford University | 330,792.00 | 2021-08 | AI safety/technical research | https://www.openphilanthropy.org/grants/stanford-university-adversarial-robustness-research-shibani-santurkar/

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Shibani Santurkar on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
Open Philanthropy | University of Southern California | 320,000.00 | 2021-08 | AI safety/technical research | https://www.openphilanthropy.org/grants/university-of-southern-california-adversarial-robustness-research/

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Robin Jia on adversarial robustness and out-of-distribution generalization as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36
Open Philanthropy | Carnegie Mellon University | 330,000.00 | 2021-05 | AI safety | https://www.openphilanthropy.org/grants/carnegie-mellon-university-adversarial-robustness-research/

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support Professor Zico Kolter on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes grants earlier and later in the year to early-stage researchers at UC Berkeley, University of Tübingen, Stanford University, and University of Southern California.

Other notes: Intended funding timeframe in months: 36.
Open Philanthropy | University of California, Berkeley | 330,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Dawn Song on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | University of California, Berkeley | 330,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor David Wagner on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | Massachusetts Institute of Technology | 1,430,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Aleksander Madry on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | University of Tübingen | 590,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/grants/university-of-tubingen-robustness-research-wieland-brendel/

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support early-career research by Wieland Brendel on robustness as a means to improve AI safety."

Donor reason for selecting the donee: Open Phil made five grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research for "adversarial robustness research" in January and February 2021, around the time of this grant. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating at this time (rather than earlier or later): Open Phil made five grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research for "adversarial robustness research" in January and February 2021, around the time of this grant. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | University of Tübingen | 300,000.00 | 2021-02 | AI safety/technical research | https://www.openphilanthropy.org/grants/university-of-tubingen-adversarial-robustness-research-matthias-hein/

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research by Professor Matthias Hein on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-santa-cruz-xie-adversarial-robustness https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36
Open Philanthropy | University of California, Santa Cruz | 265,000.00 | 2021-01 | AI safety/technical research | https://www.openphilanthropy.org/grants/uc-santa-cruz-adversarial-robustness-research/

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says the grant is "to support early-career research by Cihang Xie on adversarial robustness as a means to improve AI safety."

Donor reason for selecting the donee: This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes three other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song made at the same time as well as grants later in the year to early-stage researchers at Carnegie Mellon University, Stanford University, and University of Southern California.

Donor reason for donating at this time (rather than earlier or later): This is one of five grants made by the donor for "adversarial robustness research" in January and February 2021, all with the same grant investigators (Catherine Olsson and Daniel Dewey) except the Santa Cruz grant that had Olsson and Nick Beckstead. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-tuebingen-adversarial-robustness-hein https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/mit-adversarial-robustness-research https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-wagner and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-adversarial-robustness-song are the four other grants. It looks like the donor became interested in funding this research topic at this time.
Intended funding timeframe in months: 36

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/university-of-california-santa-cruz-adversarial-robustness-research-2023/ to support the same research leader and research agenda suggests satisfaction with the grant outcome.
Open Philanthropy | Berryville Institute of Machine Learning | 150,000.00 | 2021-01 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/berryville-institute-of-machine-learning

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research led by Gary McGraw on machine learning security. The research will focus on building a taxonomy of known attacks on machine learning, exploring a hypothesis of representation and machine learning risk, and performing an architectural risk analysis of machine learning systems."

Donor reason for selecting the donee: The grant page says: "Our potential risks from advanced artificial intelligence team hopes that the research will help advance the field of machine learning security."
Open Philanthropy | University of Toronto | 520,000.00 | 2020-12 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-toronto-machine-learning-research

Donation process: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says: "[the grant is] to support research on understanding, predicting, and controlling machine learning systems, led by Professor Chris Maddison, a former Open Phil AI Fellow. This funding is intended to enable three students and a postdoctoral researcher to work with Professor Maddison on the research."

Donor reason for selecting the donee: The researcher (Chris Maddison) whose students' work is to be funded with this grant had previously been an Open Phil AI Fellow while pursuing his DPhil in 2018. The past connection and subsequent academic progress of the researcher (now an assistant professor) may have been factors, but the grant page has no details on the decision process.

Other notes: Intended funding timeframe in months: 48.
Open Philanthropy | Open Phil AI Fellowship | 2,300,000.00 | 2020-05 | AI safety/technical research | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2020-class

Donation process: According to the grant page: "These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research."

Intended use of funds (category): Living expenses during project

Intended use of funds: Grant to provide scholarships to ten machine learning researchers over five years

Donor reason for selecting the donee: According to the grant page: "The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence. We plan to host gatherings once or twice per year where fellows can get to know one another, learn about each other’s work, and connect with other researchers who share their interests." In a comment reply https://forum.effectivealtruism.org/posts/DXqxeg3zj6NefR9ZQ/open-philanthropy-our-progress-in-2019-and-plans-for-2020#BCvuhRCg9egAscpyu (GW, IR) on the Effective Altruism Forum, grant investigator Catherine Olsson writes: "But the short answer is I think the key pieces to keep in mind are to view the fellowship as 1) a community, not just individual scholarships handed out, and as such also 2) a multi-year project, built slowly."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is comparable to the total amount of the 2019 fellowship grants, though it is distributed among a slightly larger pool of people.

Donor reason for donating at this time (rather than earlier or later): This is the third annual set of grants, decided through an annual application process, with the announcement made between April and June each year. The timing may have been chosen to sync with the academic year.
Intended funding timeframe in months: 60

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/open-phil-ai-fellowship-2021-class (2021) confirms that the program continued.

Other notes: Announced: 2020-05-12.

Donation amounts by donee and year

Donee | Donors influenced | Cause area | Metadata | Total | 2022 | 2021 | 2020
Open Phil AI Fellowship | Open Philanthropy (filter this donor) | | | 2,300,000.00 | 0.00 | 0.00 | 2,300,000.00
Massachusetts Institute of Technology | Open Philanthropy (filter this donor) | | FB Tw WP Site | 1,430,000.00 | 0.00 | 1,430,000.00 | 0.00
Redwood Research | Jaan Tallinn (filter this donor) | | | 1,274,000.00 | 1,274,000.00 | 0.00 | 0.00
University of Tübingen | Open Philanthropy (filter this donor) | | | 890,000.00 | 0.00 | 890,000.00 | 0.00
Stanford University | Open Philanthropy (filter this donor) | | FB Tw WP Site | 661,584.00 | 0.00 | 661,584.00 | 0.00
University of California, Berkeley | Open Philanthropy (filter this donor) | | FB Tw WP Site | 660,000.00 | 0.00 | 660,000.00 | 0.00
University of Toronto | Open Philanthropy (filter this donor) | | FB Tw WP Site | 520,000.00 | 0.00 | 0.00 | 520,000.00
Carnegie Mellon University | Open Philanthropy (filter this donor) | | FB Tw WP Site | 330,000.00 | 0.00 | 330,000.00 | 0.00
University of Southern California | Open Philanthropy (filter this donor) | | FB Tw WP Site | 320,000.00 | 0.00 | 320,000.00 | 0.00
University of California, Santa Cruz | Open Philanthropy (filter this donor) | | | 265,000.00 | 0.00 | 265,000.00 | 0.00
Charter Cities Institute | Jaan Tallinn (filter this donor) | | | 261,000.00 | 261,000.00 | 0.00 | 0.00
Berryville Institute of Machine Learning | Open Philanthropy (filter this donor) | | | 150,000.00 | 0.00 | 150,000.00 | 0.00
Total | -- | -- | -- | 9,061,584.00 | 1,535,000.00 | 4,706,584.00 | 2,820,000.00
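
The totals in this table (and in the donor-by-year table further down) can be reproduced mechanically from the fifteen donation records listed above. A minimal sketch in Python, using the amounts and dates from this page; the (donor, donee, amount, year) tuple layout is an illustrative simplification, not the portal's actual data schema:

```python
# Reproduces the donee-by-year and donor-by-year totals from the fifteen
# donation records on this page. Years are taken from the donation dates.
from collections import defaultdict

donations = [
    ("Jaan Tallinn", "Charter Cities Institute", 261_000.00, 2022),
    ("Jaan Tallinn", "Redwood Research", 1_274_000.00, 2022),
    ("Open Philanthropy", "Stanford University", 330_792.00, 2021),
    ("Open Philanthropy", "Stanford University", 330_792.00, 2021),
    ("Open Philanthropy", "University of Southern California", 320_000.00, 2021),
    ("Open Philanthropy", "Carnegie Mellon University", 330_000.00, 2021),
    ("Open Philanthropy", "University of California, Berkeley", 330_000.00, 2021),
    ("Open Philanthropy", "University of California, Berkeley", 330_000.00, 2021),
    ("Open Philanthropy", "Massachusetts Institute of Technology", 1_430_000.00, 2021),
    ("Open Philanthropy", "University of Tübingen", 590_000.00, 2021),
    ("Open Philanthropy", "University of Tübingen", 300_000.00, 2021),
    ("Open Philanthropy", "University of California, Santa Cruz", 265_000.00, 2021),
    ("Open Philanthropy", "Berryville Institute of Machine Learning", 150_000.00, 2021),
    ("Open Philanthropy", "University of Toronto", 520_000.00, 2020),
    ("Open Philanthropy", "Open Phil AI Fellowship", 2_300_000.00, 2020),
]

by_donee = defaultdict(float)  # (donee, year) -> total USD
by_donor = defaultdict(float)  # (donor, year) -> total USD
for donor, donee, amount, year in donations:
    by_donee[(donee, year)] += amount
    by_donor[(donor, year)] += amount

# Spot-checks against the tables on this page:
assert by_donee[("University of Tübingen", 2021)] == 890_000.00
assert by_donor[("Open Philanthropy", 2021)] == 4_706_584.00
assert sum(amount for _, _, amount, _ in donations) == 9_061_584.00
```

The two 2020-dated grants (University of Toronto, 2020-12, and the Open Phil AI Fellowship, 2020-05) account for the 2,820,000.00 in the 2020 column.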

Graph of spending by donee and year (incremental, not cumulative): [graph omitted from this text rendering]

Graph of spending by donee and year (cumulative): [graph omitted from this text rendering]

Donation amounts by donor and year for influencer Catherine Olsson

Donor | Donees | Total | 2022 | 2021 | 2020
Open Philanthropy (filter this donee) | Berryville Institute of Machine Learning (filter this donee), Carnegie Mellon University (filter this donee), Massachusetts Institute of Technology (filter this donee), Open Phil AI Fellowship (filter this donee), Stanford University (filter this donee), University of California, Berkeley (filter this donee), University of California, Santa Cruz (filter this donee), University of Southern California (filter this donee), University of Toronto (filter this donee), University of Tübingen (filter this donee) | 7,526,584.00 | 0.00 | 4,706,584.00 | 2,820,000.00
Jaan Tallinn (filter this donee) | Charter Cities Institute (filter this donee), Redwood Research (filter this donee) | 1,535,000.00 | 1,535,000.00 | 0.00 | 0.00
Total | -- | 9,061,584.00 | 1,535,000.00 | 4,706,584.00 | 2,820,000.00

Graph of spending by donor and year (incremental, not cumulative): [graph omitted from this text rendering]

Graph of spending by donor and year (cumulative): [graph omitted from this text rendering]