Ought donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United States
Website | https://ought.org
Donors list page | https://ought.org/about
Open Philanthropy Project grant review | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support
Org Watch page | https://orgwatch.issarice.com/?organization=Ought
Key people | Andreas Stuhlmüller

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 9 | 525,000 | 981,175 | 10,000 | 10,000 | 10,244 | 50,000 | 100,000 | 525,000 | 542,000 | 1,000,000 | 1,593,333 | 5,000,000 | 5,000,000
AI safety | 7 | 525,000 | 1,169,797 | 10,000 | 10,000 | 10,244 | 50,000 | 50,000 | 525,000 | 1,000,000 | 1,000,000 | 1,593,333 | 5,000,000 | 5,000,000
Global catastrophic risks | 1 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000 | 100,000
(no cause area specified) | 1 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000 | 542,000
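
The statistics above can be reproduced from the nine individual donation amounts in the "Full list of donations" section further down this page. The following is a minimal sketch, not the portal's actual code; it assumes a floor-index ("lower") percentile rule, which happens to reproduce every value in the "Overall" row.

```python
# A minimal sketch (assumed, not the portal's actual code) of how the "Overall"
# statistics row above can be reproduced from the nine donation amounts listed
# in the "Full list of donations" section of this page.

amounts = sorted([
    5_000_000, 1_593_333, 1_000_000, 542_000, 525_000,
    100_000, 50_000, 10_244, 10_000,
])

def percentile_lower(sorted_values, p):
    """p-th percentile using a floor-index ("lower") rule."""
    idx = int(p / 100 * len(sorted_values))
    return sorted_values[min(idx, len(sorted_values) - 1)]

count = len(amounts)                        # 9
mean = round(sum(amounts) / count)          # 981,175
median = percentile_lower(amounts, 50)      # 525,000
minimum, maximum = amounts[0], amounts[-1]  # 10,000 and 5,000,000
deciles = {p: percentile_lower(amounts, p) for p in range(10, 100, 10)}
print(count, mean, median, minimum, maximum, deciles)
```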

Donation amounts by donor and year for donee Ought

Donor | Total | 2022 | 2021 | 2020 | 2019 | 2018
FTX Future Fund | 5,000,000.00 | 5,000,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Open Philanthropy | 3,118,333.00 | 0.00 | 0.00 | 1,593,333.00 | 1,000,000.00 | 525,000.00
Jaan Tallinn | 542,000.00 | 0.00 | 542,000.00 | 0.00 | 0.00 | 0.00
Survival and Flourishing Fund | 100,000.00 | 0.00 | 0.00 | 0.00 | 100,000.00 | 0.00
Effective Altruism Funds: Long-Term Future Fund | 60,000.00 | 0.00 | 0.00 | 0.00 | 50,000.00 | 10,000.00
Jalex Stark | 10,244.00 | 0.00 | 0.00 | 0.00 | 10,244.00 | 0.00
Total | 8,830,577.00 | 5,000,000.00 | 542,000.00 | 1,593,333.00 | 1,160,244.00 | 535,000.00
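
The donor-by-year table above is a straightforward pivot of the individual donation records listed further down the page. The sketch below is illustrative only (the portal generates this table from its underlying database rather than from hard-coded records) and shows one way to reproduce the same totals.

```python
# Illustrative sketch: derive the donor-by-year totals from flat donation
# records (donor, year, amount), as listed in the "Full list of donations"
# section of this page. Not the portal's actual code.
from collections import defaultdict

donations = [
    ("FTX Future Fund", 2022, 5_000_000.00),
    ("Jaan Tallinn", 2021, 542_000.00),
    ("Open Philanthropy", 2020, 1_593_333.00),
    ("Survival and Flourishing Fund", 2019, 100_000.00),
    ("Open Philanthropy", 2019, 1_000_000.00),
    ("Effective Altruism Funds: Long-Term Future Fund", 2019, 50_000.00),
    ("Jalex Stark", 2019, 10_244.00),
    ("Effective Altruism Funds: Long-Term Future Fund", 2018, 10_000.00),
    ("Open Philanthropy", 2018, 525_000.00),
]

totals = defaultdict(lambda: defaultdict(float))  # donor -> year -> amount
for donor, year, amount in donations:
    totals[donor][year] += amount

# Print donors in descending order of total amount, matching the table above.
for donor, by_year in sorted(totals.items(), key=lambda kv: -sum(kv[1].values())):
    print(f"{donor}: total {sum(by_year.values()):,.2f} {dict(by_year)}")
print(f"Total: {sum(a for _, _, a in donations):,.2f}")  # 8,830,577.00
```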

Full list of documents in reverse chronological order (8 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors/donees/influencers | Document scope | Cause area | Notes
(My understanding of) What Everyone in Technical Alignment is Doing and Why (GW, IR) | Publication date: 2022-08-28 | Author: Thomas Larsen, Eli | Publisher: LessWrong | Affected donors/donees/influencers: Fund for Alignment Research, Aligned AI, Alignment Research Center, Anthropic, Center for AI Safety, Center for Human-Compatible AI, Center on Long-Term Risk, Conjecture, DeepMind, Encultured, Future of Humanity Institute, Machine Intelligence Research Institute, OpenAI, Ought, Redwood Research | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: This post, cross-posted between LessWrong and the Alignment Forum, goes into detail on the authors' understanding of various research agendas and the organizations pursuing them.
Future Fund June 2022 Update | Publication date: 2022-06-30 | Author: Nick Beckstead, Leopold Aschenbrenner, Avital Balwit, William MacAskill, Ketan Ramakrishnan | Publisher: FTX Future Fund | Affected donors/donees/influencers: FTX Future Fund, Manifold Markets, ML Safety Scholars Program, Andi Peng, Braden Leach, Thomas Kwa, SecureBio, Ray Amjad, Apollo Academic Surveys, Justin Mares, Longview Philanthropy, Atlas Fellowship, Effective Ideas Blog Prize, Ought, Swift Centre for Applied Forecasting, Federation for American Scientists, Public Editor Project, Quantified Uncertainty Research Institute, Moncef Slaoui, AI Impacts, EA Critiques and Red Teaming Prize | Document scope: Broad donor strategy | Cause area: Longtermism|AI safety|Biosecurity and pandemic preparedness|Effective altruism | Notes: This lengthy blog post, cross-posted at https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update (GW, IR) to the Effective Altruism Forum, goes into detail regarding the grantmaking of the FTX Future Fund so far, and learnings from this grantmaking. The post reports having made 262 grants and investments, with $132 million in total spend. Three funding models are in use: regranting ($31 million so far), open call ($26 million so far), and staff-led grantmaking ($73 million so far).
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | Publication date: 2021-12-23 | Author: Larks | Publisher: Effective Altruism Forum | Affected donors/donees/influencers: Larks, Effective Altruism Funds: Long-Term Future Fund, Survival and Flourishing Fund, FTX Future Fund, Future of Humanity Institute, Centre for the Governance of AI, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Google Deepmind, Anthropic, Alignment Research Center, Redwood Research, Ought, AI Impacts, Global Priorities Institute, Center on Long-Term Risk, Centre for Long-Term Resilience, Rethink Priorities, Convergence Analysis, Stanford Existential Risk Initiative, Effective Altruism Funds: Long-Term Future Fund, Berkeley Existential Risk Initiative, 80,000 Hours, Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, which is where the organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
Zvi’s Thoughts on the Survival and Flourishing Fund (SFF) (GW, IR) | Publication date: 2021-12-14 | Author: Zvi Mowshowitz | Publisher: LessWrong | Affected donors/donees/influencers: Survival and Flourishing Fund, Jaan Tallinn, Jed McCaleb, The Casey and Family Foundation, Effective Altruism Funds: Long-Term Future Fund, Center on Long-Term Risk, Alliance to Feed the Earth in Disasters, The Centre for Long-Term Resilience, Lightcone Infrastructure, Effective Altruism Funds: Infrastructure Fund, Centre for the Governance of AI, Ought, New Science Research, Berkeley Existential Risk Initiative, AI Objectives Institute, Topos Institute, Emergent Ventures India, European Biostasis Foundation, Laboratory for Social Minds, PrivateARPA, Charter Cities Institute, Survival and Flourishing Fund, Beth Barnes, Oliver Habryka, Zvi Mowshowitz | Document scope: Miscellaneous commentary | Cause area: Longtermism|AI safety|Global catastrophic risks | Notes: In this lengthy post, Zvi Mowshowitz, who was one of the recommenders for the Survival and Flourishing Fund's 2021 H2 grant round based on the S-process, describes his experience with the process, his impressions of several of the grantees, and implications for what kinds of grant applications are most likely to succeed. Zvi says that the grant round suffered from the problem of Too Much Money (TMM); there was way more money than any individual recommender felt comfortable granting, and just about enough money for the combined preferences of all recommenders, which meant that any recommender could unilaterally push a particular grantee through. The post has several other observations and attracts several comments.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | Publication date: 2020-12-21 | Author: Larks | Publisher: Effective Altruism Forum | Affected donors/donees/influencers: Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, OpenAI, Berkeley Existential Risk Initiative, Ought, Global Priorities Institute, Center on Long-Term Risk, Center for Security and Emerging Technology, AI Impacts, Leverhulme Centre for the Future of Intelligence, AI Safety Camp, Future of Life Institute, Convergence Analysis, Median Group, AI Pulse, 80,000 Hours, Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | Publication date: 2019-12-19 | Author: Larks | Publisher: Effective Altruism Forum | Affected donors/donees/influencers: Larks, Effective Altruism Funds: Long-Term Future Fund, Open Philanthropy, Survival and Flourishing Fund, Future of Humanity Institute, Center for Human-Compatible AI, Machine Intelligence Research Institute, Global Catastrophic Risk Institute, Centre for the Study of Existential Risk, Ought, OpenAI, AI Safety Camp, Future of Life Institute, AI Impacts, Global Priorities Institute, Foundational Research Institute, Median Group, Center for Security and Emerging Technology, Leverhulme Centre for the Future of Intelligence, Berkeley Existential Risk Initiative, AI Pulse, Survival and Flourishing Fund | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 2019 | Publication date: 2019-12-18 | Author: Holden Karnofsky | Publisher: Open Philanthropy | Affected donors/donees/influencers: Chloe Cockburn, Jesse Rothman, Michelle Crentsil, Amanda Hungerford, Lewis Bollard, Persis Eskander, Alexander Berger, Chris Somerville, Heather Youngs, Claire Zabel, National Council for Incarcerated and Formerly Incarcerated Women and Girls, Life Comes From It, Worth Rises, Wild Animal Initiative, Sinergia Animal, Center for Global Development, International Refugee Assistance Project, California YIMBY, Engineers Without Borders, 80,000 Hours, Centre for Effective Altruism, Future of Humanity Institute, Global Priorities Institute, Machine Intelligence Research Institute, Ought | Document scope: Donation suggestion list | Cause area: Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safety | Notes: Continuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support", i.e., organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents their views.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | Publication date: 2018-12-17 | Author: Larks | Publisher: Effective Altruism Forum | Affected donors/donees/influencers: Larks, Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis | Document scope: Review of current state of cause area | Cause area: AI safety | Notes: Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."

Full list of donations in reverse chronological order (9 donations)

[Graph: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations]

Donor | Amount (current USD) | Amount rank (out of 9) | Donation date | Cause area | URL | Influencer | Notes
Donor: FTX Future Fund | Amount (current USD): 5,000,000.00 | Amount rank (out of 9): 1 | Donation date: 2022-05 | Cause area: AI safety | URL: https://ftxfuturefund.org/our-regrants/ | Influencer: --

Donation process: The grant is made as part of the Future Fund's regranting program. See https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Regranting_program_in_more_detail (GW, IR) for more detail on the regranting program.

Intended use of funds (category): Organizational general support

Intended use of funds: Grant to "support Ought’s work building Elicit, a language-model based research assistant."

Donor reason for selecting the donee: The grant description says: "This work contributes to research on reducing alignment risk through scaling human supervision via process-based systems."
Donor: Jaan Tallinn | Amount (current USD): 542,000.00 | Amount rank (out of 9): 4 | Donation date: 2021-10 | Cause area: -- | URL: https://survivalandflourishing.fund/sff-2021-h2-recommendations | Influencer: Survival and Flourishing Fund, Beth Barnes, Oliver Habryka, Zvi Mowshowitz

Donation process: Part of the Survival and Flourishing Fund's 2021 H2 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a table of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts. [...] [The] system is designed to generally favor funding things that at least one recommender is excited to fund, rather than things that every recommender is excited to fund." https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) explains the process from a recommender's perspective.

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Zvi Mowshowitz, one of the recommenders, writes in https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) "Ought was a weird case, where I had the strong initial instinct that Ought, as I understood it, was doing a net harmful thing. [...] A lot of others positivity seemed to reflect knowing the people involved, whereas I don’t know them at all. A lot of support seemed to come down to People Doing Thing being present, and faith that those people would look for net positive things and to avoid net bad things generally, and that they had an active eye towards AI Safety. [...] I wouldn’t be surprised to learn this was net harmful, but there was enough disagreement and upside in various ways that I concluded that my expectation was positive, so I no longer felt the need to actively try to stop others from funding."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's sixth grant round and the second one with a grant to the grantee.

Other notes: The other two funders in this SFF grant round (Jed McCaleb and The Casey and Family Foundation) do not make grants to Ought. In https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff (GW, IR) Zvi Mowshowitz, one of the recommenders in the grant round, writes about his evaluation of Ought's agenda: "They are using GPT-3 to assist in research, to do things like generate questions to ask, or classify data, or do whatever else GPT-3 can do. The goal is to make research easier. However, because it’s good at the things GPT-3 is good at, this is going to be a much bigger deal for those looking to do performative science or publish papers or keep dumping more compute into the same systems over and over again, than it will help those trying to do something genuinely new and valuable. The hard part where one actually thinks isn’t being sped up, while the rest of the process is. Oh no. [...] I read a comment on LessWrong by Jessica Taylor questioning why one of MIRI’s latest plans wasn’t strictly worse than Ought [...] This frames the whole thing on a meta-level as a way to test a theory of how to build an aligned AI. As per Paul’s theory as I understand it, if you can (1) break up a given task into subcomponents and then (2) solve each subcomponent while (3) ensuring each subcomponent is aligned then that could solve the alignment problem with regard to the larger task, so testing to see what types of things can usefully be split into machine tasks, and whether those tasks can be solved, would be some sort of exploration in that direction under some theories. I notice I have both the ‘yeah sure I guess maybe’ instinct here and the mostly-integrated inner-Eliezer-style reaction that very strongly thinks that this represents fundamental confusion and is wrong. In any case, it’s another perspective, and Paul specifically is excited by this path.". Percentage of total donor spend in the corresponding batch of donations: 6.12%; announced: 2021-11-20.
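
To make the "marginal utility function" language in the S-process description quoted in the entry above more concrete, here is a toy sketch written for this page. It is not SFF's actual S-process implementation: the organizations and utility curves are invented, and the real process additionally combines multiple recommenders' curves with the funders' weights on those recommenders, which this sketch omits. It only illustrates the core mechanic of allocating a budget by repeatedly funding whichever application currently has the highest marginal utility.

```python
# Toy illustration of budget allocation driven by marginal utility functions.
# NOT the actual SFF S-process; the organizations and curves below are invented.

def allocate(budget, marginal_utility, step=10_000):
    """Greedily allocate `budget` in `step`-sized increments, always funding
    the application whose next increment has the highest marginal utility."""
    grants = {name: 0 for name in marginal_utility}
    remaining = budget
    while remaining >= step:
        best = max(grants, key=lambda n: marginal_utility[n](grants[n]))
        if marginal_utility[best](grants[best]) <= 0:
            break  # no application values further funding
        grants[best] += step
        remaining -= step
    return grants

# Invented, diminishing marginal utility curves (value of the next dollar,
# given the amount already granted), purely to make the sketch runnable.
curves = {
    "Org A": lambda granted: 1.0 - granted / 600_000,
    "Org B": lambda granted: 0.8 - granted / 400_000,
    "Org C": lambda granted: 0.5 - granted / 200_000,
}
print(allocate(1_000_000, curves))
```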
Donor: Open Philanthropy | Amount (current USD): 1,593,333.00 | Amount rank (out of 9): 2 | Donation date: 2020-01 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 | Influencer: Committee for Effective Altruism Support

Donation process: The grant was recommended by the Committee for Effective Altruism Support following its process https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment and to reducing potential risks from advanced artificial intelligence."

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter"

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Machine Intelligence Research Institute ($7,703,750), Centre for Effective Altruism ($4,146,795), and 80,000 Hours ($3,457,284).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation

Other notes: Announced: 2020-02-14.
Donor: Survival and Flourishing Fund | Amount (current USD): 100,000.00 | Amount rank (out of 9): 6 | Donation date: 2019-12-05 | Cause area: Global catastrophic risks | URL: https://jaan.online/philanthropy/donations.html | Influencer: Alex Flint, Alex Zhu, Andrew Critch, Eric Rogstad, Oliver Habryka

Donation process: Part of the Survival and Flourishing Fund's 2019 Q4 grants https://survivalandflourishing.fund/sff-2019-q4-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different Recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this November 2019 round of grants is SFF's second round and the first with a grant to the grantee.

Donor retrospective of the donation: In 2021 H2, Jaan Tallinn would make a grant to Ought based on the SFF's S-process.

Other notes: Jaan Tallinn also participates as a funder in this grant round, but makes no grants to the grantee in this grant round. Percentage of total donor spend in the corresponding batch of donations: 10.87%; announced: 2019-12-15.
Donor: Open Philanthropy | Amount (current USD): 1,000,000.00 | Amount rank (out of 9): 3 | Donation date: 2019-11 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "Ought conducts research on factored cognition, which we consider relevant to AI alignment."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020, made on the recommendation of the Committee for Effective Altruism Support, suggests that Open Phil would continue to have a high opinion of the work of Ought.

Other notes: Intended funding timeframe in months: 24; announced: 2020-02-14.
Donor: Effective Altruism Funds: Long-Term Future Fund | Amount (current USD): 50,000.00 | Amount rank (out of 9): 7 | Donation date: 2019-03-20 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencer: Matt Wage, Helen Toner, Matt Fallshaw, Alex Zhu, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).

Intended use of funds (category): Organizational financial buffer

Intended use of funds: No specific information is shared on how the funds will be used at the margin, but the general description gives an idea: "Ought is a nonprofit aiming to implement AI alignment concepts in real-world applications"

Donor reason for selecting the donee: Donor is explicitly interested in diversifying the funder base for the donee, who currently receives almost all its funding from only two sources and is trying to change that. Otherwise, the reason is the same as in the last round of funds https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants namely "We believe that Ought’s approach is interesting and worth trying, and that they have a strong team. [...] Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): In write-up for previous grant at https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants of $10,000, donor says: "Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant." The amount this time is bigger ($50,000) but the general principle likely continues to apply
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): In the previous grant round, donor had said "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." Thus, it makes sense to donate again in this round

Other notes: The grant reasoning is written up by Matt Wage and is also included in the cross-post of the grant decision to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR) but the comments on the post do not discuss this specific grant.
Donor: Jalex Stark | Amount (current USD): 10,244.00 | Amount rank (out of 9): 8 | Donation date: 2019 | Cause area: AI safety | URL: http://www.jalexstark.com/donations/2019.html | Influencer: --

Donation process: The donor writes: "I read their published research program in depth" and links to the summary https://ought.org/updates/2020-01-11-arguments of their latest experiments.

Intended use of funds (category): Organizational general support

Intended use of funds: The donor quotes the donee's self-description: "Ought is a research lab that develops mechanisms for delegating open-ended thinking to advanced machine learning systems. We study this delegation problem using experiments with human participants."

Donor reason for selecting the donee: The donor writes: "I read their published research program in depth, and I find the directions promising. It is one of three places I was willing to leave grad school to be employed at. (The other two were Jane Street, where I currently work, and OpenAI, which is much more well funded than Ought.) I aim to eventually transition to direct technical work on problems that require theoretical contributions from individuals thinking deeply for a long time. With this work I'm aiming to support the efforts of agents that are similar to future-me."

Donor reason for donating that amount (rather than a bigger or smaller amount): The donor writes that the amount was "chosen after the rest of the numbers to make the sum a round number."
Percentage of total donor spend in the corresponding batch of donations: 28.46%
Donor: Effective Altruism Funds: Long-Term Future Fund | Amount (current USD): 10,000.00 | Amount rank (out of 9): 9 | Donation date: 2018-11-29 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Influencer: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Organizational general support

Intended use of funds: Grantee is a nonprofit aiming to implement AI alignment concepts in real-world applications.

Donor reason for selecting the donee: The grant page says: "We believe that Ought's approach is interesting and worth trying, and that they have a strong team. [...] Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future."

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "Our understanding is that hiring is currently more of a bottleneck for them than funding, so we are only making a small grant."
Percentage of total donor spend in the corresponding batch of donations: 10.47%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: The grant page says "Part of the aim of the grant is to show Ought as an example of the type of organization we are likely to fund in the future." This suggests that Ought will be considered for future grant rounds

Donor retrospective of the donation: The Long-Term Future Fund would make a $50,000 grant to Ought in the April 2019 grant round, suggesting that this grant would be considered a success
Donor: Open Philanthropy | Amount (current USD): 525,000.00 | Amount rank (out of 9): 5 | Donation date: 2018-05 | Cause area: AI safety/technical research | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support | Influencer: Daniel Dewey

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Proposed_activities "Ought will conduct research on deliberation and amplification, aiming to organize the cognitive work of ML algorithms and humans so that the combined system remains aligned with human interests even as algorithms take on a much more significant role than they do today." It also links to https://ought.org/approach. Also, https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Budget says: "Ought intends to use it for hiring and supporting up to four additional employees between now and 2020. The hires will likely include a web developer, a research engineer, an operations manager, and another researcher."

Donor reason for selecting the donee: The case for the grant includes: (a) Open Phil considers research on deliberation and amplification important for AI safety, (b) Paul Christiano is excited by Ought's approach, and Open Phil trusts his judgment, (c) Ought’s plan appears flexible, and Open Phil thinks Andreas Stuhlmüller is ready to notice and respond to any problems by adjusting his plans, and (d) Open Phil has indications that Ought is well-run and has a reasonable chance of success.

Donor reason for donating that amount (rather than a bigger or smaller amount): No explicit reason for the amount is given, but the grant is combined with another grant from Open Philanthropy Project technical advisor Paul Christiano

Donor thoughts on making further donations to the donee: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support#Key_questions_for_follow-up lists some questions for followup

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2019 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ought-general-support-2020 suggest that Open Phil would continue to have a high opinion of Ought

Other notes: Intended funding timeframe in months: 36; announced: 2018-05-30.