Open Philanthropy donations made to OpenAI

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donor information

Item | Value
Country | United States
Affiliated organizations (current or former; restricted to potential donees or others relevant to donation decisions) | GiveWell, Good Ventures
Best overview URL | https://causeprioritization.org/Open%20Philanthropy%20Project
Facebook username | openphilanthropy
Website | https://www.openphilanthropy.org/
Donations URL | https://www.openphilanthropy.org/giving/grants
Twitter username | open_phil
PredictionBook username | OpenPhilUnofficial
Page on philosophy informing donations | https://www.openphilanthropy.org/about/vision-and-values
Grant application process page | https://www.openphilanthropy.org/giving/guide-for-grant-seekers
Regularity with which donor updates donations data | continuous updates
Regularity with which Donations List Website updates donations data (after donor update) | continuous updates
Lag with which donor updates donations data | months
Lag with which Donations List Website updates donations data (after donor update) | days
Data entry method on Donations List Website | Manual (no scripts used)
Org Watch page | https://orgwatch.issarice.com/?organization=Open+Philanthropy

Brief history: Open Philanthropy (Open Phil for short) spun off from GiveWell: it started as GiveWell Labs in 2011, began to make strong progress in 2013, and formally separated from GiveWell as the "Open Philanthropy Project" in June 2017. In 2020, it started going by "Open Philanthropy", dropping "Project" from its name.

Brief notes on broad donor philosophy and major focus areas: Open Philanthropy is focused on openness in two ways: being open to ideas about cause selection, and being open in explaining what it is doing. It has endorsed "hits-based giving" and works on AI risk, biosecurity and pandemic preparedness, and other global catastrophic risks, as well as criminal justice reform (United States), animal welfare, and some other areas.

Notes on grant decision logistics: See https://www.openphilanthropy.org/blog/our-grantmaking-so-far-approach-and-process for the general grantmaking process and https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant for more questions that grant investigators are encouraged to consider. Every grant has a grant investigator, whom we call the influencer here on Donations List Website; for focus areas that have Program Officers, the grant investigator is usually the Program Officer. The grant investigator has been identified for grants published since around July 2017. Grants usually need approval from an executive; however, some grant investigators have leeway to make "discretionary grants" for which the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more. Note that the term "discretionary grant" means something different for Open Philanthropy than it does for government agencies; see https://www.facebook.com/vipulnaik.r/posts/10213483361534364 for more.

Notes on grant publication logistics: Every publicly disclosed grant has a writeup published at the time of public disclosure, but the writeups vary significantly in length. Grant writeups are usually written by somebody other than the grant investigator, but approved by the grant investigator as well as the grantee. Grants have three dates associated with them: an internal grant decision date (not publicly revealed, but used in some statistics on total grant amounts decided by year), a grant date (which we call the donation date; this is the date of the formal grant commitment and is the published grant date), and a grant announcement date (which we call the donation announcement date; the date the grant is announced to the mailing list and the grant page is made publicly visible). There is typically a lag of a few months between decision and grant, and a few months between grant and announcement, due to time spent on grant writeup approval.

Notes on grant financing: See https://www.openphilanthropy.org/giving/guide-for-grant-seekers or https://www.openphilanthropy.org/about/who-we-are for more information. Grants generally come from the Open Philanthropy Project Fund, a donor-advised fund managed by the Silicon Valley Community Foundation, with most of its money coming from Good Ventures. Some grants are made directly by Good Ventures, and political grants may be made by the Open Philanthropy Action Fund. At least one grant, https://www.openphilanthropy.org/focus/us-policy/criminal-justice-reform/working-families-party-prosecutor-reforms-new-york, was made by Cari Tuna personally. The majority of grants are financed by the Open Philanthropy Project Fund; however, the source of financing of a grant is not always explicitly specified, so it cannot be confidently assumed that a grant with no explicitly listed source is financed through the Open Philanthropy Project Fund; see the comment https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Funding for multi-year grants is usually disbursed annually, and the amounts are often, but not always, equal across years. The fact that a grant is multi-year, or the distribution of the grant amount across years, is not always explicitly stated on the grant page; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information. Some grants to universities are labeled "gifts", but this is a donee classification, based on the different levels of bureaucratic overhead and funder control associated with grants versus gifts; see https://www.openphilanthropy.org/blog/october-2017-open-thread?page=2#comment-462 for more information.

Miscellaneous notes: Most GiveWell-recommended grants made by Good Ventures and listed in the Open Philanthropy database are not listed on Donations List Website under Open Philanthropy. Specifically, GiveWell Incubation Grants are not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=GiveWell+Incubation+Grants with donor GiveWell Incubation Grants), and grants made by Good Ventures to GiveWell top and standout charities are also not included (these are listed at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+top+and+standout+charities with donor Good Ventures/GiveWell top and standout charities). Grants to support GiveWell operations are also not included here; they can be found at https://donations.vipulnaik.com/donor.php?donor=Good+Ventures%2FGiveWell+support with donor "Good Ventures/GiveWell support". The investment https://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/impossible-foods in Impossible Foods is not included because it does not fit our criteria for a donation, and also because no amount was disclosed. All other grants publicly disclosed by Open Philanthropy that are not GiveWell Incubation Grants or GiveWell top and standout charity grants should be included. Grants disclosed by grantees but not yet disclosed by Open Philanthropy are not included; some of them may be listed at https://issarice.com/open-philanthropy-project-non-grant-funding.

Full donor page for donor Open Philanthropy

Basic donee information

Item | Value
Country | United States
Facebook page | openai.research
Website | https://openai.com/
Twitter username | openai
Wikipedia page | https://en.wikipedia.org/wiki/OpenAI
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_OpenAI
Org Watch page | https://orgwatch.issarice.com/?organization=OpenAI
Key people | Sam Altman, Elon Musk, Ilya Sutskever, Ian Goodfellow, Greg Brockman
Launch date | 2015-12-11
Notes | Started by Sam Altman and Elon Musk (with a billion-dollar commitment from Musk) to help with the safe creation of human-level artificial intelligence in an open and robust way

Full donee page for donee OpenAI

Donor–donee relationship

Item | Value

Donor–donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 1 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000
AI safety | 1 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000
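The percentile columns in the table above follow directly from the list of donation amounts for each cause area. Below is a minimal Python sketch of how such a summary row could be computed; it is an illustration under assumed conventions (nearest-rank percentiles), not the portal's actual implementation, and the `summarize` and `percentile` helpers are hypothetical names.

```python
from statistics import mean, median

def percentile(sorted_values, p):
    """Nearest-rank percentile: smallest value with at least p% of the data at or below it."""
    if not sorted_values:
        raise ValueError("no values")
    rank = -(-p * len(sorted_values) // 100)  # ceil(p * n / 100)
    return sorted_values[max(rank - 1, 0)]

def summarize(amounts):
    """Compute the columns of the donor-donee statistics table for one cause area."""
    values = sorted(amounts)
    row = {"Count": len(values), "Median": median(values), "Mean": mean(values), "Minimum": values[0]}
    for p in range(10, 100, 10):
        row[f"{p}th percentile"] = percentile(values, p)
    row["Maximum"] = values[-1]
    return row

# With a single 30,000,000 USD donation, every statistic collapses to that same value.
print(summarize([30_000_000]))
```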

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: Cause area classification used here may not match that used by donor for all cases.

Cause area | Number of donations | Total | 2017
AI safety (filter this donor) | 1 | 30,000,000.00 | 30,000,000.00
Total | 1 | 30,000,000.00 | 30,000,000.00
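The totals in this table (and the donee and donation counts shown in the hover tooltips) are simple aggregations of the donation list by cause area and year. A minimal sketch, assuming hypothetical record fields `cause_area`, `year`, `donee`, and `amount` (these names are illustrative, not the portal's schema):

```python
from collections import defaultdict

def aggregate_by_cause_and_year(donations):
    """Group donations by (cause area, year), tracking total amount, donation count, and distinct donees."""
    cells = defaultdict(lambda: {"total": 0.0, "donations": 0, "donees": set()})
    for d in donations:
        cell = cells[(d["cause_area"], d["year"])]
        cell["total"] += d["amount"]
        cell["donations"] += 1
        cell["donees"].add(d["donee"])
    return cells

# The single grant underlying this table:
cells = aggregate_by_cause_and_year([
    {"cause_area": "AI safety", "year": 2017, "donee": "OpenAI", "amount": 30_000_000.00},
])
for (cause_area, year), cell in sorted(cells.items()):
    print(cause_area, year, cell["total"], cell["donations"], len(cell["donees"]))
```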

Skipping the spending graph, as there is at most one year's worth of donations.

Full list of documents in reverse chronological order (3 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Berkeley Existential Risk Initiative, Ought, Global Priorities Institute, Center on Long-Term Risk, Center for Security and Emerging Technology, AI Impacts, Leverhulme Centre for the Future of Intelligence, AI Safety Camp, Future of Life Institute, Convergence Analysis, Median Group, AI Pulse, 80,000 Hours, Survival and Flourishing Fund | -- | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for a discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse, Survival and Flourishing Fund | -- | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR) | 2019-09-10 | Ryan Carey | Effective Altruism Forum | Founders Pledge, Open Philanthropy | OpenAI, Machine Intelligence Research Institute | -- | Broad donor strategy | AI safety, Global catastrophic risks, Scientific research, Politics | Ryan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low- to mid-prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals.

Full list of donations in reverse chronological order (1 donation)

Graph of all donations (with known year of donation), showing the timeframe of donations

Amount (current USD) | Amount rank (out of 1) | Donation date | Cause area | URL | Influencer | Notes
30,000,000.00 | 1 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support | -- | Donation process: According to the grant page, Section 4 (Our process): "OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI's office, conversations with OpenAI's leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors."

Intended use of funds (category): Organizational general support

Intended use of funds: The funds will be used for general support of OpenAI, disbursed at 10 million USD per year over three years (30 million USD total). The funding is also accompanied by Holden Karnofsky (Open Phil executive director) joining the OpenAI Board of Directors. Karnofsky and one other board member will oversee OpenAI's safety and governance work.

Donor reason for selecting the donee: Open Phil says that, given its interest in AI safety, it is looking to fund and closely partner with organizations that (a) are working to build transformative AI, (b) are advancing the state of the art in AI research, and (c) employ top AI research talent. OpenAI and DeepMind are two such organizations, and OpenAI is particularly appealing due to "our shared values, different starting assumptions and biases, and potential for productive communication." Open Phil is looking to gain the following from the partnership: (i) an improved understanding of AI research, (ii) an improved ability to generically achieve its goals regarding technical AI safety research, and (iii) a better position from which to promote its ideas and goals.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page, Section 2.2 ("A note on why this grant is larger than others we've recommended in this focus area"), explains the reasons for the large grant amount relative to other grants by Open Phil so far. The reasons listed are: (i) the hits-based giving philosophy, described in depth at https://www.openphilanthropy.org/blog/hits-based-giving; (ii) the disproportionately high importance of the cause if transformative AI is developed in the next 20 years, and the likelihood that OpenAI will be very important if that happens; (iii) the benefits of working closely with OpenAI in informing Open Phil's understanding of AI safety; (iv) field-building benefits, including promoting an AI safety culture; and (v) the fact that, since OpenAI has a lot of other funding, Open Phil can grant a large amount without raising the concern of dominating OpenAI's funding.

Donor reason for donating at this time (rather than earlier or later): No specific timing considerations are provided. The timing was likely determined by when OpenAI first approached Open Phil and the time taken for due diligence.
Intended funding timeframe in months: 36

Other notes: External discussions include http://benjaminrosshoffman.com/an-openai-board-seat-is-surprisingly-expensive/ cross-posted to https://www.lesswrong.com/posts/2z5vrsu7BoiWckLby/an-openai-board-seat-is-surprisingly-expensive (GW, IR) (post by Ben Hoffman, attracting comments at both places), https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31.