This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
|Donors list page||https://ought.org/about|
|Open Philanthropy Project grant review||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support|
|Org Watch page||https://orgwatch.issarice.com/?organization=Ought|
|Key people||Andreas Stuhlmüller|
|Donor||Total||2020||2019||2018|
|Open Philanthropy Project (filter this donee)||3,118,333.00||1,593,333.00||1,000,000.00||525,000.00|
|Effective Altruism Funds: Long-Term Future Fund (filter this donee)||60,000.00||0.00||50,000.00||10,000.00|
|Jalex Stark (filter this donee)||10,244.00||0.00||10,244.00||0.00|
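The totals in the table above can be recomputed from the individual donations listed later on this page. The following is a minimal Python sketch of that aggregation, with the donation records hard-coded for illustration (the actual portal serves this data from the underlying dataset in the GitHub repository mentioned above; the field layout here is an assumption, not the repository's schema):

```python
from collections import defaultdict

# Illustrative records mirroring the donations listed on this page:
# (donor, donation year, amount in current USD).
donations = [
    ("Open Philanthropy Project", 2020, 1_593_333.00),
    ("Open Philanthropy Project", 2019, 1_000_000.00),
    ("Open Philanthropy Project", 2018, 525_000.00),
    ("Effective Altruism Funds: Long-Term Future Fund", 2019, 50_000.00),
    ("Effective Altruism Funds: Long-Term Future Fund", 2018, 10_000.00),
    ("Jalex Stark", 2019, 10_244.00),
]

# Aggregate the overall total and the per-year breakdown for each donor.
totals = defaultdict(float)
by_year = defaultdict(lambda: defaultdict(float))
for donor, year, amount in donations:
    totals[donor] += amount
    by_year[donor][year] += amount

# Print one row per donor, largest total first, matching the table above.
for donor, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    yearly = " | ".join(f"{year}: {by_year[donor][year]:,.2f}" for year in (2020, 2019, 2018))
    print(f"{donor} | total: {total:,.2f} | {yearly}")
```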
|Title (URL linked)||Publication date||Author||Publisher||Affected donors||Affected donees||Document scope||Cause area||Notes|
|2019 AI Alignment Literature Review and Charity Comparison (GW, IR)||2019-12-19||Ben Hoskin||Effective Altruism Forum||Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Project Survival and Flourishing Fund||Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.|
|Suggestions for Individual Donors from Open Philanthropy Staff - 2019||2019-12-18||Holden Karnofsky||Open Philanthropy Project||Chloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel||National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought||Donation suggestion list||Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safety||Continuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support": organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents its views.|
|2018 AI Alignment Literature Review and Charity Comparison (GW, IR)||2018-12-17||Ben Hoskin||Effective Altruism Forum||Ben Hoskin||Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis||Review of current state of cause area||AI safety||Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."|
|Donor||Amount (current USD)||Amount rank (out of 6)||Cause area||URL||Influencer||Notes|
|Open Philanthropy Project||1,593,333.00||1||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020||Committee for Effective Altruism Support||Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support
Intended use of funds (category): Organizational general support
Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment and to reducing potential risks from advanced artificial intelligence."
Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter"
Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Machine Intelligence Research Institute ($7,703,750), Centre for Effective Altruism ($4,146,795), and 80,000 Hours ($3,457,284)
Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation. Announced: 2020-02-14.
|Open Philanthropy Project||1,000,000.00||2||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019||Daniel Dewey||Intended use of funds (category): Organizational general support
Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment."
Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 made on the recommendation of the Committee for Effective Altruism Support suggests that Open Phil would continue to have a high opinion of the work of Ought. Intended funding timeframe in months: 1; announced: 2020-02-14.
|Effective Altruism Funds: Long-Term Future Fund||50,000.00||4||AI safety||https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl||Matt Wage Helen Toner Matt Fallshaw Alex Zhu Oliver Habryka||Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)
Intended use of funds (category): Organization financial buffer
Intended use of funds: No specific information is shared on how the funds will be used at the margin, but the general description gives an idea: "Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications"
Donor reason for selecting the donee: Donor is explicitly interested in diversifying the funder base for the donee, which currently receives almost all its funding from only two sources and is trying to change that. Otherwise, the reasons are the same as in the last round of funds https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi namely: "We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. [...] Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future."
Donor reason for donating that amount (rather than a bigger or smaller amount): In the write-up for the previous $10,000 grant at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi the donor says: "Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant." The amount this time is bigger ($50,000) but the general principle likely continues to apply
Percentage of total donor spend in the corresponding batch of donations: 5.42% (see the sketch after this table for how these percentage fields relate to batch totals)
Donor reason for donating at this time (rather than earlier or later): In the previous grant round, donor had said "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." Thus, it makes sense to donate again in this round
Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
|Jalex Stark||10,244.00||5||AI safety||http://www.jalexstark.com/donations/2019.html||--||Donation process: The donor writes: "I read their published research program in depth" and links to the summary https://ought.org/updates/2020-01-11-arguments of their latest experiments.
Intended use of funds (category): Organizational general support
Intended use of funds: The donor quotes the donee's self-description: "Ought is a research lab that develops mechanisms for delegating open-ended thinking to advanced machine learning systems. We study this delegation problem using experiments with human participants."
Donor reason for selecting the donee: The donor writes: "I read their published research program in depth, and I find the directions promising. It is one of three places I was willing to leave grad school to be employed at. (The other two were Jane Street, where I currently work, and OpenAI, which is much more well funded than Ought.) I aim to eventually transition to direct technical work on problems that require theoretical contributions from individuals thinking deeply for a long time. With this work I'm aiming to support the efforts of agents that are similar to future-me."
Donor reason for donating that amount (rather than a bigger or smaller amount): The donor writes that the amount was "chosen after the rest of the numbers to make the sum a round number."
Percentage of total donor spend in the corresponding batch of donations: 28.46%
|Effective Altruism Funds: Long-Term Future Fund||10,000.00||6||AI safety||https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi||Alex Zhu Helen Toner Matt Fallshaw Matt Wage Oliver Habryka||Donation process: Donee submitted grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient
Intended use of funds (category): Organizational general support
Intended use of funds: Grantee is a nonprofit aiming to implement AI alignment concepts in real-world applications.
Donor reason for selecting the donee: The grant page says: "We believe that Ought's approach is interesting and worth trying, and that they have a strong team. [...] Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future."
Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant."
Percentage of total donor spend in the corresponding batch of donations: 10.47%
Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Donor thoughts on making further donations to the donee: The grant page says "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." This suggests that Ought will be considered for future grant rounds
Donor retrospective of the donation: The Long-Term Future Fund would make a $50,000 grant to Ought in the April 2019 grant round, suggesting that this grant would be considered a success
|Open Philanthropy Project||525,000.00||3||AI safety||https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support||Daniel Dewey||Intended use of funds (category): Organizational general support
Intended use of funds: The grant page says at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Proposed_activities: "Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today." It also links to https://ought.org/approach. In addition, https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Budget says: "Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher."
Donor reason for selecting the donee: The case for the grant includes: (a) Open Phil considers research on deliberation and amplification important for AI safety; (b) Paul Christiano is excited by Ought's approach, and Open Phil trusts his judgment; (c) Ought's plan appears flexible, and Open Phil thinks Andreas is ready to notice and respond to any problems by adjusting his plans; (d) Open Phil has indications that Ought is well-run and has a reasonable chance of success.
Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but the grant is combined with another grant from Open Philanthropy Project technical advisor Paul Christiano
Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Key_questions_for_follow-up lists some questions for followup
Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 suggest that Open Phil would continue to have a high opinion of Ought. Intended funding timeframe in months: 1; announced: 2018-05-30.
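Several entries above include a "Percentage of total donor spend in the corresponding batch of donations" field. The value is simply the donation amount divided by the donor's total spend in that batch (grant round). The batch totals are not stated on this page, but they can be inferred by inverting the reported percentages, as in this Python sketch (the inferred totals are rounded approximations derived from this page, not figures taken from the grant writeups):

```python
# (label, donation amount in USD, reported share of the donor's batch)
entries = [
    ("Long-Term Future Fund, April 2019 round", 50_000.00, 0.0542),
    ("Jalex Stark, 2019 donations", 10_244.00, 0.2846),
    ("Long-Term Future Fund, November 2018 round", 10_000.00, 0.1047),
]

for label, amount, share in entries:
    inferred_batch_total = amount / share  # total donor spend in that batch
    print(f"{label}: inferred batch total ~ ${inferred_batch_total:,.0f}")
```

For example, the $50,000 grant at 5.42% implies a total of roughly $920,000 for the Long-Term Future Fund's April 2019 round.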