Open Philanthropy donations made to Stanford University

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (to share without caveats, please check with Vipul Naik first). We expect to complete the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donor information

Country: United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions): GiveWell, Good Ventures
Best overview URL: https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username: openphilanthropy
Website: https://www.openphilanthropy.org/
Donations URL: https://www.openphilanthropy.org/giving/grants
Twitter username: open_phil
PredictionBook username: OpenPhilUnofficial
Page on philosophy informing donations: https://www.openphilanthropy.org/about/vision-and-values
Grant application process page: https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data: continuous updates
Regularity with which Donations List Website updates donations data (after donor update): continuous updates
Lag with which donor updates donations data: months
Lag with which Donations List Website updates donations data (after donor update): days
Data entry method on Donations List Website: Manual (no scripts used)
Org Watch page: https://orgwatch.issarice.com/?organization=Open+Philanthropy

Brief history: Open Philanthropy (Open Phil for short) spun off from GiveWell, starting as GiveWell Labs in 2011, beginning to make strong progress in 2013, and formally separating from GiveWell as the "Open Philanthropy Project" in June 2017. In 2020, it started going by "Open Philanthropy", dropping the word "Project".

Brief notes on broad donor philosophy and major focus areas: Open Philanthropy is focused on openness in two ways: being open to ideas about cause selection, and being open in explaining what it is doing. It has endorsed "hits-based giving" and works in areas including AI risk, biosecurity and pandemic preparedness, other global catastrophic risks, criminal justice reform (United States), animal welfare, and some other areas.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for more questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been named in grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" where the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Philanthropy than for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, but the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but are approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment, and is the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). The lags are a few months between decision and grant, and a few months between grant and announcement, largely due to time spent on grant writeup approval.

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant, https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york, was made by Cari Tuna personally. Although the majority of grants are financed by the Open Philanthropy Project Fund, the source of financing is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed financing is financed through the Open Philanthropy Project Fund. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years; whether a grant is multi-year, and how its amount is distributed across years, are not always explicitly stated on the grant page. Some grants to universities are labeled "gifts"; this is a classification made by the donee, reflecting the different levels of bureaucratic overhead and funder control between grants and gifts. On all of these points, see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information.

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy database are not listed on Donations List Website as being under Open Philanthropy. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are not included here either; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was disclosed. All other grants publicly disclosed by Open Philanthropy that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by Open Philanthropy are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding.

Full donor page for donor Open Philanthropy

Basic donee information

Country: (not listed)
Facebook page: stanford
Website: https://www.stanford.edu/
Twitter username: stanford
Wikipedia page: https://en.wikipedia.org/wiki/Stanford_University
Instagram username: stanford
Org Watch page: https://orgwatch.issarice.com/?organization=Stanford+University

Full donee page for donee Stanford University

Donor–donee relationship

(No items listed.)

Donor–donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 10 | 100,000 | 386,928 | 6,500 | 6,500 | 6,771 | 25,000 | 78,000 | 100,000 | 153,820 | 330,792 | 330,792 | 1,337,600 | 1,500,000
AI safety | 10 | 100,000 | 386,928 | 6,500 | 6,500 | 6,771 | 25,000 | 78,000 | 100,000 | 153,820 | 330,792 | 330,792 | 1,337,600 | 1,500,000
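
The summary statistics above can be cross-checked against the ten donation amounts in the full list below. The following is a minimal sketch, not the site's actual code; in particular, the percentile convention is an assumption (NumPy's 'lower' method happens to reproduce the table's values for this data):

```python
# Minimal sketch (assumed, not the site's code): reproduce the summary
# statistics from the ten donation amounts listed on this page.
# Requires NumPy >= 1.22 for the `method` keyword of np.percentile.
import numpy as np

amounts = np.array([
    153_820, 1_500_000, 78_000, 330_792, 330_792,
    6_500, 100_000, 6_771, 1_337_600, 25_000,
])

print(f"Count: {amounts.size}")              # 10
print(f"Mean: {amounts.mean():,.0f}")        # 386,928
print(f"Minimum: {amounts.min():,.0f}")      # 6,500
print(f"Maximum: {amounts.max():,.0f}")      # 1,500,000
for q in range(10, 100, 10):
    # 'lower' picks an actual donation amount rather than interpolating;
    # this choice is an assumption that matches the table (50th -> 100,000).
    print(f"{q}th percentile: {np.percentile(amounts, q, method='lower'):,.0f}")
```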

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area | Number of donations | Total | 2022 | 2021 | 2020 | 2018 | 2017
AI safety | 10 | 3,869,275.00 | 153,820.00 | 2,239,584.00 | 6,500.00 | 106,771.00 | 1,362,600.00
Total | 10 | 3,869,275.00 | 153,820.00 | 2,239,584.00 | 6,500.00 | 106,771.00 | 1,362,600.00
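
For reference, here is a minimal sketch (not the site's code) of how the by-year totals in the table above follow from the donation dates and amounts in the full list below:

```python
# Minimal sketch (assumed, not the site's code): aggregate the ten donations
# by calendar year; dates are the YYYY-MM strings from the donation list.
from collections import defaultdict

donations = [
    ("2022-07", 153_820.00), ("2021-11", 1_500_000.00),
    ("2021-09", 78_000.00), ("2021-08", 330_792.00),
    ("2021-08", 330_792.00), ("2020-01", 6_500.00),
    ("2018-07", 100_000.00), ("2018-04", 6_771.00),
    ("2017-05", 1_337_600.00), ("2017-03", 25_000.00),
]

totals = defaultdict(float)
for date, amount in donations:
    totals[date[:4]] += amount  # group on the year prefix of YYYY-MM

for year in sorted(totals, reverse=True):
    print(f"{year}: {totals[year]:,.2f}")      # e.g. 2021: 2,239,584.00
print(f"Total: {sum(totals.values()):,.2f}")   # 3,869,275.00
```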

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Full list of documents in reverse chronological order (1 document)

Title (URL linked): How Life Sciences Actually Work: Findings of a Year-Long Investigation (GW, IR)
Publication date: 2019-08-16
Author: Alexey Guzey
Publisher: Effective Altruism Forum
Affected donors: National Institutes of Health; Howard Hughes Medical Institute; Chan Zuckerberg Initiative; Open Philanthropy; Amgen; Life Sciences Research Foundation
Affected donees: Harvard University; Massachusetts Institute of Technology; Stanford University
Affected influencers: (none listed)
Document scope: Review of current state of cause area
Cause area: Biomedical research
Notes: Guzey surveys the current state of biomedical research, primarily in academia in the United States. His work is the result of interviewing about 60 people; Emergent Ventures provided financial support. His takeaways: (1) life science is not slowing down; (2) nothing works the way you would naively think it does (for better or for worse); (3) if you're smart and driven, you'll find a way in; (4) nobody cares if you're a genius; (5) almost all biologists are solo founders, which is probably suboptimal; (6) there's insufficient space for people who just want to be researchers and not managers; (7) peer review is a disaster; (8) nobody agrees on whether big labs are good or bad; (9) senior scientists are bound by their students' incentives; (10) universities seem to maximize their profits, with good research being a side effect; (11) large parts of modern scientific literature are wrong; (12) raising money is very difficult even for famous scientists. Final conclusion: "academia has a lot of problems but it's less broken than it seems from the outside."

Full list of donations in reverse chronological order (10 donations)

Graph of all donations (with known year of donation), showing the timeframe of donations

Each donation record below gives: amount (current USD), amount rank (out of 10), donation date, cause area, URL, influencer, and notes.
Amount: 153,820.00 | Rank: 5/10 | Date: 2022-07 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/grants/stanford-university-ai-alignment-research-barrett-and-viteri/ | Influencer: --
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research on AI alignment by Professor Clark Barrett and Stanford student Scott Viteri."
Amount: 1,500,000.00 | Rank: 1/10 | Date: 2021-11 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/grants/stanford-university-ai-alignment-research-2021/ | Influencer: --
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support research led by Professor Percy Liang on AI safety and alignment."

Donor reason for selecting the donee: The grant page says: "We hope this funding will accelerate progress on technical problems and help to build a pipeline for younger researchers to work on AI alignment."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason is given for the amount. Per year, it is somewhat, but not a lot, higher ($1,500,000 over 3 years = $500,000 per year) than the previous grant https://www.openphilanthropy.org/grants/stanford-university-support-for-percy-liang/ ($1,337,600 over 4 years = $334,400 per year).

Donor reason for donating at this time (rather than earlier or later): No explicit reason is given for the timing. The grant was made right around the end of the timeframe of the previous grant https://www.openphilanthropy.org/grants/stanford-university-support-for-percy-liang/ (a four-year grant made in 2017), also for Percy Liang's research.
Intended funding timeframe in months: 36
Amount: 78,000.00 | Rank: 7/10 | Date: 2021-09 | Cause area: AI safety/strategy | URL: https://www.openphilanthropy.org/grants/stanford-university-ai-index/ | Influencer: --
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to "support the AI Index, which collects and reports data related to artificial intelligence, including data relevant to AI safety and AI ethics." The webpage https://aiindex.stanford.edu/ is linked.
Amount: 330,792.00 | Rank: 3/10 | Date: 2021-08 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras | Influencer: Catherine Olsson
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Dimitris Tsipras on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-santurkar at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
Amount: 330,792.00 | Rank: 3/10 | Date: 2021-08 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/grants/stanford-university-adversarial-robustness-research-shibani-santurkar/ | Influencer: Catherine Olsson
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support early-career research by Shibani Santurkar on adversarial robustness as a means to improve AI safety."

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reasons for the amount are given, but the amount is similar to the amounts for other grants from Open Philanthropy to early-stage researchers in adversarial robustness research. This includes the two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research made around the same time, as well as grants earlier in the year to researchers at Carnegie Mellon University, University of Tübingen, and UC Berkeley.

Donor reason for donating at this time (rather than earlier or later): At around the same time as this grant, Open Philanthropy made two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/university-of-southern-california-adversarial-robustness-research to early-stage researchers in adversarial robustness research.
Intended funding timeframe in months: 36

Other notes: Open Phil made another grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-adversarial-robustness-research-tsipras at the same time, for the same amount and 3-year timeframe, with the same grant investigator, and with the same receiving university.
Amount: 6,500.00 | Rank: 10/10 | Date: 2020-01 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-ai-safety-seminar | Influencer: Daniel Dewey
Intended use of funds (category): Direct project expenses

Intended use of funds: The grant "is intended to fund the travel costs for experts on AI safety to present at the [AI safety] seminar [led by Dorsa Sadigh]."

Other notes: Intended funding timeframe in months: 1.
Amount: 100,000.00 | Rank: 6/10 | Date: 2018-07 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-machine-learning-security-research-dan-boneh-florian-tramer | Influencer: Daniel Dewey
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support machine learning security research led by Professor Dan Boneh and his PhD student, Florian Tramer."

Donor reason for selecting the donee: The grant page gives three reasons: (1) Florian Tramer is a very strong Ph.D. student; (2) excellent machine learning security work is important for AI safety; (3) increased funding in areas relevant to AI safety, like machine learning security, is expected to lead to more long-term benefits for AI safety.

Other notes: Grant is structured as an unrestricted "gift" to Stanford University Computer Science. Announced: 2018-09-06.
Amount: 6,771.00 | Rank: 9/10 | Date: 2018-04 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-nips-workshop-machine-learning | Influencer: Daniel Dewey
Donation process: Discretionary grant

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the Neural Information Processing Systems (NIPS) workshop "Machine Learning and Computer Security" (https://nips.cc/Conferences/2017/Schedule?showEvent=8775).

Donor reason for selecting the donee: No specific reasons are included on the grant page, but several of the presenters at the workshop, held at the previous year's conference (2017), would have their research funded by Open Philanthropy, including Jacob Steinhardt, Percy Liang, and Dawn Song.

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount was likely determined by the cost of running the workshop.

Donor reason for donating at this time (rather than earlier or later): The timing was likely determined by the timing of the conference.
Intended funding timeframe in months: 1

Other notes: The original amount of $2,539 was updated in June 2020 to $6,771. Announced: 2018-04-18.
Amount: 1,337,600.00 | Rank: 2/10 | Date: 2017-05 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang | Influencer: Daniel Dewey
Donation process: The grant is the result of a proposal written by Percy Liang. The writing of the proposal was funded by a previous grant, https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant, made in March 2017. The proposal was reviewed by two of Open Phil's technical advisors, who both felt largely positive about the proposed research directions.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant is intended to fund about 20% of Percy Liang's time as well as about three graduate students. Liang expects to focus on a subset of these topics: robustness against adversarial attacks on ML systems, verification of the implementation of ML systems, calibrated/uncertainty-aware ML, and natural language supervision.

Donor reason for selecting the donee: The grant page says: "Both [technical advisors who reviewed the grant proposal] felt largely positive about the proposed research directions and recommended to Daniel that Open Philanthropy make this grant, despite some disagreements [...]."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is likely determined by the grant proposal details; it covers about 20% of Percy Liang's time as well as about three graduate students.

Donor reason for donating at this time (rather than earlier or later): The timing is likely determined by the timing of the grant proposal being ready.
Intended funding timeframe in months: 48

Donor thoughts on making further donations to the donee: The grant page says: "At the end of the grant period, we will decide whether to renew our support based on our technical advisors’ evaluation of Professor Liang’s work so far, his proposed next steps, and our assessment of how well his research program has served as a pipeline for students entering the field. We are optimistic about the chances of renewing our support. We think the most likely reason we might choose not to renew would be if Professor Liang decides that AI alignment research isn’t a good fit for him or for his students."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/grants/stanford-university-ai-alignment-research-2021/ suggests satisfaction with the grant outcome.

Other notes: Announced: 2017-09-26.
Amount: 25,000.00 | Rank: 8/10 | Date: 2017-03 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-percy-liang-planning-grant | Influencer: Daniel Dewey
Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to enable Professor Liang to spend significant time engaging in our process to determine whether to provide his research group with a much larger grant." The larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang would be made.

Donor thoughts on making further donations to the donee: The grant is a planning grant intended to help Percy Liang write up a proposal for a bigger grant.

Donor retrospective of the donation: The proposal whose writing was funded by this grant led to the larger grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/stanford-university-support-percy-liang in May 2017.

Other notes: Announced: 2017-09-26.