This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2025. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
Item | Value |
---|---|
Country | United States |
Facebook page | openai.research |
Website | https://openai.com/ |
Twitter username | openai |
Wikipedia page | https://en.wikipedia.org/wiki/OpenAI |
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support |
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_OpenAI |
Org Watch page | https://orgwatch.issarice.com/?organization=OpenAI |
Key people | Sam Altman, Elon Musk, Ilya Sutskever, Ian Goodfellow, Greg Brockman |
Launch date | 2015-12-11 |
Notes | Started by Sam Altman and Elon Musk (with a billion-dollar commitment from Musk) to help with the safe creation of human-level artificial intelligence in an open and robust way |
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 1 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 |
AI safety | 1 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 | 30,000,000 |
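The summary statistics above (count, median, mean, minimum, percentiles, maximum) are computed over the donation amounts in each cause area; with only one 30,000,000 USD donation recorded, every statistic collapses to that single value. Below is a minimal sketch of the computation; it is illustrative only (not the portal's actual code), and the nearest-rank percentile rule is an assumption, though the choice of rule makes no difference when there is a single donation.

```python
# Illustrative sketch, not the portal's actual code: reproduce the summary
# row for a cause area from its list of donation amounts (current USD).
import statistics

def summarize(amounts):
    amounts = sorted(amounts)
    n = len(amounts)

    def percentile(p):
        # Nearest-rank style percentile (assumed rule); with n == 1 every
        # percentile is just the single recorded amount.
        return amounts[max(0, min(n - 1, round(p / 100 * (n - 1))))]

    return {
        "count": n,
        "median": statistics.median(amounts),
        "mean": statistics.mean(amounts),
        "minimum": amounts[0],
        **{f"{p}th percentile": percentile(p) for p in range(10, 100, 10)},
        "maximum": amounts[-1],
    }

# The "AI safety" row above: a single 30,000,000 donation.
print(summarize([30_000_000]))  # every statistic is 30,000,000
```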
Donor | Total | 2017 |
---|---|---|
Open Philanthropy (filter this donee) | 30,000,000.00 | 30,000,000.00 |
Total | 30,000,000.00 | 30,000,000.00 |
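The donor-by-year totals are a straightforward aggregation of the underlying donation records by donor and calendar year. The sketch below is illustrative only (not the portal's actual code); the record fields shown are assumptions for the purpose of the example, while the real dataset lives in the GitHub repository mentioned above.

```python
# Illustrative sketch, not the portal's actual code: derive the donor-by-year
# totals table by grouping donation records on (donor, year).
from collections import defaultdict

# Hypothetical record layout, shown for illustration only.
donations = [
    {"donor": "Open Philanthropy", "donee": "OpenAI",
     "amount_usd": 30_000_000.00, "year": 2017},
]

totals = defaultdict(lambda: defaultdict(float))  # donor -> year -> amount
for record in donations:
    totals[record["donor"]][record["year"]] += record["amount_usd"]

for donor, by_year in totals.items():
    overall = sum(by_year.values())
    yearly = ", ".join(f"{year}: {amount:,.2f}" for year, amount in sorted(by_year.items()))
    print(f"{donor} | total {overall:,.2f} | {yearly}")
```

Running this on the single recorded donation reproduces the row above: Open Philanthropy, 30,000,000.00 total, all of it in 2017.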
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
(My understanding of) What Everyone in Technical Alignment is Doing and Why (GW, IR) | 2022-08-28 | Thomas Larsen, Eli | LessWrong | Fund for Alignment Research | Aligned AI, Alignment Research Center, Anthropic, Center for AI Safety, Center for Human-Compatible AI, Center on Long-Term Risk, Conjecture, DeepMind, Encultured, Future of Humanity Institute, Machine Intelligence Research Institute, OpenAI, Ought, Redwood Research | | Review of current state of cause area | AI safety | This post, cross-posted between LessWrong and the Alignment Forum, goes into detail on the authors' understanding of various research agendas and the organizations pursuing them. |
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Survival and Flourishing Fund, FTX Future Fund | Future of Humanity Institute, Centre for the Governance of AI, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Google Deepmind, Anthropic, Alignment Research Center, Redwood Research, Ought, AI Impacts, Global Priorities Institute, Center on Long-Term Risk, Centre for Long-Term Resilience, Rethink Priorities, Convergence Analysis, Stanford Existential Risk Initiative, Effective Altruism Funds: Long-Term Future Fund, Berkeley Existential Risk Initiative, 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to, i.e., where each organization would like to see funds go, other than to itself; the Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure. |
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Berkeley Existential Risk Initiative, Ought, Global Priorities Institute, Center on Long-Term Risk, Center for Security and Emerging Technology, AI Impacts, Leverhulme Centre for the Future of Intelligence, AI Safety Camp, Future of Life Institute, Convergence Analysis, Median Group, AI Pulse, 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for a discussion of some aspects of the post by Alex Flint. |
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Larks | Effective Altruism Forum | Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund | Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. |
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR) | 2019-09-10 | Ryan Carey | Effective Altruism Forum | Founders Pledge, Open Philanthropy | OpenAI, Machine Intelligence Research Institute | | Broad donor strategy | AI safety, Global catastrophic risks, Scientific research, Politics | Ryan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low-to-mid prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals. |
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis | | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs. far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues." |
Changes in funding in the AI safety field | 2017-02-01 | Sebastian Farquhar | Centre for Effective Altruism | | Machine Intelligence Research Institute, Center for Human-Compatible AI, Leverhulme Centre for the Future of Intelligence, Future of Life Institute, Future of Humanity Institute, OpenAI, MIT Media Lab | | Review of current state of cause area | AI safety | The post reviews AI safety funding from 2014 to 2017 (with projections for 2017). Cross-posted to the EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/ |
2016 AI Risk Literature Review and Charity Comparison (GW, IR) | 2016-12-13 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute, Future of Humanity Institute, OpenAI, Center for Human-Compatible AI, Future of Life Institute, Centre for the Study of Existential Risk, Leverhulme Centre for the Future of Intelligence, Global Catastrophic Risk Institute, Global Priorities Project, AI Impacts, Xrisks Institute, X-Risks Net, Center for Applied Rationality, 80,000 Hours, Raising for Effective Giving | | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part but notes the absence of comparable information on the many other organizations. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute." |
[Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations]
Donor | Amount (current USD) | Amount rank (out of 1) | Donation date | Cause area | URL | Influencer | Notes |
---|---|---|---|---|---|---|---|
Open Philanthropy | 30,000,000.00 | 1 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support | -- | Donation process: According to the grant page, Section 4 (Our process): "OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI's office, conversations with OpenAI's leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors." Intended use of funds (category): Organizational general support. Intended use of funds: The funds will be used for general support of OpenAI, with 10 million USD per year for the next three years. The funding is also accompanied by Holden Karnofsky (Open Phil director) joining the OpenAI Board of Directors; Karnofsky and one other board member will oversee OpenAI's safety and governance work. Donor reason for selecting the donee: Open Phil says that, given its interest in AI safety, it is looking to fund and closely partner with orgs that (a) are working to build transformative AI, (b) are advancing the state of the art in AI research, and (c) employ top AI research talent. OpenAI and DeepMind are two such orgs, and OpenAI is particularly appealing due to "our shared values, different starting assumptions and biases, and potential for productive communication." Open Phil is looking to gain the following from the partnership: (i) improve its understanding of AI research, (ii) improve its ability to generically achieve goals regarding technical AI safety research, and (iii) better position Open Phil to promote its ideas and goals. Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page, Section 2.2 ("A note on why this grant is larger than others we've recommended in this focus area"), explains the reasons for the large grant amount (relative to other grants by Open Phil so far). Reasons listed are: (i) the hits-based giving philosophy, described in depth at https://www.openphilanthropy.org/blog/hits-based-giving, (ii) the disproportionately high importance of the cause if transformative AI is developed in the next 20 years, and the likelihood that OpenAI will be very important if that happens, (iii) the benefits of working closely with OpenAI in informing Open Phil's understanding of AI safety, (iv) field-building benefits, including promoting an AI safety culture, and (v) the fact that, since OpenAI has a lot of other funding, Open Phil can grant a large amount while still not raising the concern of dominating OpenAI's funding. Donor reason for donating at this time (rather than earlier or later): No specific timing considerations are provided; the timing was likely determined by when OpenAI first approached Open Phil and the time taken for due diligence. Intended funding timeframe in months: 36. Other notes: External discussions include http://benjaminrosshoffman.com/an-openai-board-seat-is-surprisingly-expensive/ cross-posted to https://www.lesswrong.com/posts/2z5vrsu7BoiWckLby/an-openai-board-seat-is-surprisingly-expensive (GW, IR) (post by Ben Hoffman, attracting comments at both places), https://twitter.com/Pinboard/status/848009582492360704 (critical tweet with replies), https://www.facebook.com/vipulnaik.r/posts/10211478311489366 (Facebook post by Vipul Naik, with some comments), https://www.facebook.com/groups/effective.altruists/permalink/1350683924987961/ (Facebook post by Alasdair Pearce in the Effective Altruists Facebook group, with some comments), and https://news.ycombinator.com/item?id=14008569 (Hacker News post, with some comments). Announced: 2017-03-31. |