Adam Gleave money moved

This is an online portal with information on donations that were announced publicly (or shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details.

Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.


This entity is also a donor.

Full list of documents in reverse chronological order (1 document)

Title (URL linked): 2017 Donor Lottery Report (GW, IR)
Publication date: 2018-11-12
Author: Adam Gleave
Publisher: Effective Altruism Forum
Affected donors: Donor lottery
Affected donees: Alliance to Feed the Earth in Disasters; Global Catastrophic Risk Institute; AI Impacts; Wild-Animal Suffering Research
Document scope: Single donation documentation
Cause area: Global catastrophic risks|AI safety|Animal welfare
Notes: The write-up documents Adam Gleave’s decision process for where he donated the money for the 2017 donor lottery. Adam won one of the two blocks of $100,000 for 2017.

Full list of donations in reverse chronological order (12 donations)

Donor | Donee | Amount (current USD) | Donation date | Cause area | URL | Notes
Effective Altruism Funds: Long-Term Future Fund | Rethink Priorities | 70,000.00 | 2021-04-01 | Global catastrophic risks | https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants

Donation process: Donee submitted grant application through the application form for the April 2021 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: The grant is for "Researching global security, forecasting, and public communication." In more detail: "(1) Global security (conflict, arms control, avoiding totalitarianism) (2) Forecasting (estimating existential risk, epistemic challenges to longtermism) (3) Polling / message testing (identifying longtermist policies, figuring out how to talk about longtermism to the public)." The longtermist hires are Linchuan Zhang, David Reinstein, and 50% of Michael Aird.

Donor reason for selecting the donee: Regarding the researchers who would effectively be funded, the grant evaluator Asya Bergal writes at https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#rethink-priorities--70000: "Rethink’s longtermist team is very new and is proposing work on fairly disparate topics, so I think about funding them similarly to how I would think about funding several independent researchers. Their longtermist hires are Linchuan Zhang, David Reinstein, and 50% of Michael Aird (he will be spending the rest of his time as a Research Scholar at FHI). I’m not familiar with David Reinstein. Michael Aird has produced a lot of writing over the past year, some of which I’ve found useful. I haven’t looked at any written work Linchuan Zhang has produced (and I’m not aware of anything major), but he has a good track record in forecasting, I’ve appreciated some of his EA forum comments, and my impression is that several longtermist researchers I know think he’s smart. Evaluating them as independent researchers, I think they’re both new and promising enough that I’m interested in paying for a year of their time to see what they produce." Regarding the intended uses of funds, Bergal writes: "Broadly, I am most excited about the third of these [polling / message testing], because I think there’s a clear and pressing need for it. I think work in the other two areas could be good, but feels highly dependent on the details (their application only described these broad directions)." Bergal links to https://forum.effectivealtruism.org/posts/h566GT4ECfJAB38af/some-quick-notes-on-effective-altruism?commentId=SD7rcJmY5exTR3aRu (GW, IR) for more context. Bergal also gives specific examples of areas she might be interested in.

Donor reason for donating that amount (rather than a bigger or smaller amount): At https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#rethink-priorities--70000 grant evaluator Asya Bergal writes: "We decided to pay 25% of the budget that Rethink requested, which I guessed was our fair share given Rethink’s other funding opportunities." https://80000hours.org/articles/coordination/#when-deciding-where-to-donate-consider-splitting-or-thresholds is linked for more context on fair share.
Percentage of total donor spend in the corresponding batch of donations: 4.24%

Donor reason for donating at this time (rather than earlier or later): The time is shortly after Rethink Priorities started growing its longtermist team, and is a result of Rethink Priorities seeking funding to support the longtermist team's work.
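As a rough cross-check of the figures in the Rethink Priorities entry above, here is a back-of-envelope sketch; the requested-budget and round-total numbers below are inferred from the stated 25% "fair share" and 4.24% batch percentage, not taken from the grant write-up.

```python
# Back-of-envelope check of figures quoted above (inferred, not official).
grant = 70_000.00

# "We decided to pay 25% of the budget that Rethink requested" implies roughly:
implied_requested_budget = grant / 0.25    # ~ $280,000

# A 4.24% share of the donor's spend in this batch implies roughly:
implied_batch_total = grant / 0.0424       # ~ $1.65 million for the May 2021 LTFF round

print(f"Implied requested budget: ~${implied_requested_budget:,.0f}")
print(f"Implied May 2021 round total: ~${implied_batch_total:,.0f}")
```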
Effective Altruism Funds: Long-Term Future Fund | Legal Priorities Project | 135,000.00 | 2021-04-01 | AI safety/governance | https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants

Donation process: Grant selected from a pool of applicants. https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#legal-priorities-project--135000 says: "The Legal Priorities Project (LPP) applied for funding to hire Suzanne Van Arsdale and Renan Araújo to conduct academic legal research, and Alfredo Parra to perform operations work. All have previously been involved with the LPP, and Suzanne and Renan contributed to the LPP’s research agenda."

Intended use of funds (category): Direct project expenses

Intended use of funds: https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#legal-priorities-project--135000 says: "Hiring staff to carry out longtermist academic legal research and increase the operational capacity of the organization. The Legal Priorities Project (LPP) applied for funding to hire Suzanne Van Arsdale and Renan Araújo to conduct academic legal research, and Alfredo Parra to perform operations work. All have previously been involved with the LPP, and Suzanne and Renan contributed to the LPP’s research agenda."

Donor reason for selecting the donee: https://funds.effectivealtruism.org/payouts/may-2021-long-term-future-fund-grants#legal-priorities-project--135000 (written by Daniel Eth) says: "I’m excited about this grant for reasons related to LPP as an organization, the specific hires they would use the grant for, and the proposed work of the new hires." It goes into considerable further detail regarding the reasons.

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount likely determined based on the amount needed for the intended uses of the grant funds.
Percentage of total donor spend in the corresponding batch of donations: 8.18%
Effective Altruism Funds: Long-Term Future Fund | AI Safety Camp | 85,000.00 | 2021-04-01 | AI safety/technical research/talent pipeline | https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants

Donation process: Grant selected from a pool of applicants. This particular grantee had received grants in the past, and the grantmaking process was mainly based on soliciting more reviews and feedback from participants in AI Safety Camps funded by past grants.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "running a virtual and physical camp where selected applicants test their fit for AI safety research." Unlike previous grants, no specific date or time is provided for the grant.

Donor reason for selecting the donee: Grant page says: "some alumni of the camp reported very substantial positive benefits from attending the camp, while none of them reported noticing any substantial harmful consequences. [...] all alumni I reached out to thought that the camp was at worst, only a slightly less valuable use of their time than what they would have done instead, so the downside risk seems relatively limited. [...] the need for social events and workshops like this is greater than I previously thought, and that they are in high demand among people new to the AI Alignment field. [...] there is enough demand for multiple programs like this one, which reduces the grant’s downside risk, since it means that AI Safety Camp is not substantially crowding out other similar camps. There also don’t seem to be many similar events to AI Safety Camp right now, which suggests that a better camp would not happen naturally, and makes it seem like a bad idea to further reduce the supply by not funding the camp."

Donor reason for donating that amount (rather than a bigger or smaller amount): No specific reasons are given for the amount, but it is larger than previous grants, possibly reflecting the expanded scope of running both a virtual and a physical camp.
Percentage of total donor spend in the corresponding batch of donations: 5.15%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round as well as possibly by time taken to collect and process feedback from past grant participants. The pausing of in-person camps during the COVID-19 pandemic may also explain the gap since the previous grant.
Effective Altruism Funds: Long-Term Future Fund | Center for Human-Compatible AI | 48,000.00 | 2021-04-01 | AI safety | https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants

Donation process: Donee submitted grant application through the application form for the April 2021 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant for "hiring research engineers to support CHAI’s technical research projects." "This grant is to support Cody Wild and Steven Wang in their work assisting CHAI as research engineers, funded through BERI."

Donor reason for selecting the donee: Grant investigator and main influencer Evan Hubinger writes: "Overall, I have a very high opinion of CHAI’s ability to produce good alignment researchers—Rohin Shah, Adam Gleave, Daniel Filan, Michael Dennis, etc.—and I think it would be very unfortunate if those researchers had to spend a lot of their time doing non-alignment-relevant engineering work. Thus, I think there is a very strong case for making high-quality research engineers available to help CHAI students run ML experiments. [...] both Cody and Steven have already been working with CHAI doing exactly this sort of work; when we spoke to Adam Gleave early in the evaluation process, he seems to have found their work to be positive and quite helpful. Thus, the risk of this grant hurting rather than helping CHAI researchers seems very minimal, and the case for it seems quite strong overall, given our general excitement about CHAI."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; a grant of $75,000 for a similar purpose was made to the grantee in the September 2020 round, so the timing is likely partly determined by the need to renew funding for the people (Cody Wild and Steven Wang) funded through the previous grant.

Other notes: The grant page says: "Adam Gleave [one of the fund managers] did not participate in the voting or final discussion around this grant." The EA Forum post https://forum.effectivealtruism.org/posts/diZWNmLRgcbuwmYn4/long-term-future-fund-may-2021-grant-recommendations (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 2.91%.
Effective Altruism Funds: Long-Term Future Fund | Center for Human-Compatible AI | 75,000.00 | 2020-09-03 | AI safety | https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants#center-for-human-compatible-ai-75000

Donation process: Donee submitted grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support "hiring a research engineer to support CHAI’s technical research projects."

Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka gives these reasons for the grant: "Over the last few years, CHAI has hosted a number of people who I think have contributed at a very high quality level to the AI alignment problem, most prominently Rohin Shah [...] I've also found engaging with Andrew Critch's thinking on AI alignment quite valuable, and I am hopeful about more work from Stuart Russell [...] the specific project that CHAI is requesting money for seems also quite reasonable to me. [...] it seems quite important for them to be able to run engineering-heavy machine learning projects, for which it makes sense to hire research engineers to assist with the associated programming tasks. The reports we've received from students at CHAI also suggest that past engineer hiring has been valuable and has enabled students at CHAI to do substantially better work."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Grant investigator and main influencer Oliver Habryka writes: "Having thought more recently about CHAI as an organization and its place in the ecosystem of AI alignment, I am currently uncertain about its long-term impact and where it is going, and I eventually plan to spend more time thinking about the future of CHAI. So I think it's not that unlikely (~20%) that I might change my mind on the level of positive impact I'd expect from future grants like this. However, I think this holds less for the other Fund members who were also in favor of this grant, so I don't think my uncertainty is much evidence about how LTFF will think about future grants to CHAI."

Donor retrospective of the donation: A later grant round https://funds.effectivealtruism.org/funds/payouts/may-2021-long-term-future-fund-grants includes a $48,000 grant from the LTFF to CHAI for a similar purpose, suggesting continued satisfaction and a continued positive assessment of the grantee.

Other notes: Adam Gleave, though on the grantmaking team, recused himself from discussions around this grant since he is a Ph.D. student at CHAI. Grant investigator and main influencer Oliver Habryka includes a few concerns: "Rohin is leaving CHAI soon, and I'm unsure about CHAI's future impact, since Rohin made up a large fraction of the impact of CHAI in my mind. [...] I also maintain a relatively high level of skepticism about research that tries to embed itself too closely within the existing ML research paradigm. [...] A concrete example of the problems I have seen (chosen for its simplicity more than its importance) is that, on several occasions, I've spoken to authors who, during the publication and peer-review process, wound up having to remove some of their papers' most important contributions to AI alignment. [...] Another concern: Most of the impact that Rohin contributed seemed to be driven more by distillation and field-building work than by novel research. [...] I believe distillation and field-building to be particularly neglected and valuable at the margin. I don't currently see the rest of CHAI engaging in that work in the same way." The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the CHAI grant. Percentage of total donor spend in the corresponding batch of donations: 19.02%.
Effective Altruism Funds: Long-Term Future Fund | AI Impacts | 75,000.00 | 2020-09-03 | AI safety | https://funds.effectivealtruism.org/funds/payouts/september-2020-long-term-future-fund-grants

Donation process: Donee submitted grant application through the application form for the September 2020 round of grants from the Long-Term Future Fund, and was selected as a grant recipient.

Intended use of funds (category): Organizational general support

Intended use of funds: Grant for "answering decision-relevant questions about the future of artificial intelligence."

Donor reason for selecting the donee: Grant investigator and main influencer Adam Gleave writes: "Their work has and continues to influence my outlook on how and when advanced AI will develop, and I often see researchers I collaborate with cite their work in conversations. [...] Overall, I would be excited to see more research into better understanding how AI will develop in the future. This research can help funders to decide which projects to support (and when), and researchers to select an impactful research agenda. We are pleased to support AI Impacts' work in this space, and hope this research field will continue to grow."

Donor reason for donating that amount (rather than a bigger or smaller amount): Grant investigator and main influencer Adam Gleave writes: "We awarded a grant of $75,000, approximately one fifth of the AI Impacts budget. We do not expect sharply diminishing returns, so it is likely that at the margin, additional funding to AI Impacts would continue to be valuable. When funding established organizations, we often try to contribute a "fair share" of organizations' budgets based on the Fund's overall share of the funding landscape. This aids coordination with other donors and encourages organizations to obtain funding from diverse sources (which reduces the risk of financial issues if one source becomes unavailable)."
Percentage of total donor spend in the corresponding batch of donations: 19.02%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Intended funding timeframe in months: 12

Other notes: The grant page says: "(Recusal note: Due to working as a contractor for AI Impacts, Asya Bergal recused herself from the discussion and voting surrounding this grant.)" The EA Forum post https://forum.effectivealtruism.org/posts/dgy6m8TGhv4FCn4rx/long-term-future-fund-september-2020-grants (GW, IR) about this grant round attracts comments, but none specific to the AI Impacts grant.
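A similar back-of-envelope reading of the "fair share" reasoning in the AI Impacts entry above; the budget and round-total figures below are inferred from the stated fractions, not given exactly in the write-up.

```python
# Approximate figures implied by the AI Impacts grant write-up (inferred).
grant = 75_000.00

# "approximately one fifth of the AI Impacts budget"
implied_ai_impacts_budget = grant / (1 / 5)   # ~ $375,000

# 19.02% of the donor's spend in the September 2020 batch
implied_batch_total = grant / 0.1902          # ~ $394,000

print(f"Implied AI Impacts budget: ~${implied_ai_impacts_budget:,.0f}")
print(f"Implied September 2020 round total: ~${implied_batch_total:,.0f}")
```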
Effective Altruism Funds: Long-Term Future Fund | 80,000 Hours | 100,000.00 | 2020-04-14 | Effective altruism/movement growth/career counseling | https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations

Intended use of funds (category): Organizational general support

Other notes: Percentage of total donor spend in the corresponding batch of donations: 20.48%.
Effective Altruism Funds: Long-Term Future Fund | Machine Intelligence Research Institute | 100,000.00 | 2020-04-14 | AI safety | https://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendations

Intended use of funds (category): Organizational general support

Other notes: In the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ MIRI mentions the grant along with a $7.7 million grant from the Open Philanthropy Project and a $300,000 grant from Berkeley Existential Risk Initiative. Percentage of total donor spend in the corresponding batch of donations: 20.48%.
Donor lottery | Alliance to Feed the Earth in Disasters | 70,000.00 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report

The blog post explaining the donation contains an extensive discussion of the Alliance to Feed the Earth in Disasters (ALLFED), and also includes a response statement from ALLFED founder David Denkenberger. Gleave writes in the post: "I am somewhat more excited about ALLFED than GCRI since their research agenda seems more directly impactful and there is a clearer pathway for growth. However, I see more downside risks to ALLFED, and in particular would expect GCRI to be in a better position to work productively with governments. ALLFED has a large team of volunteers, which increases reputational risks. I view support for ALLFED at this stage as mostly a test of the tractability of R&D in this area, and to enable them to continue to build relevant collaborations." Earlier in the post, he writes: "If I had an additional $100k to donate, I would first check AI Impacts current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there. Otherwise, I would split it equally between ALLFED and GCRI. In particular, I recommend a proportionally greater allocation to GCRI than I made. My donation to ALLFED increased their 2018 revenue by 50%: although they have capacity to utilize additional funds, I expect there to be some diminishing returns." Percentage of total donor spend in the corresponding batch of donations: 70.00%.
Donor lottery | Global Catastrophic Risk Institute | 20,000.00 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report

The blog post explaining the donation contains an extensive discussion of the Global Catastrophic Risk Institute (GCRI). Highlight: "Overall I am moderately excited about supporting the work of GCRI and in particular Seth Baum. I am pessimistic about room for growth, with recruitment being a major challenge, similar to that faced by AI Impacts. [...] At their current budget level, additional funding is a factor for whether Seth continues to work at GCRI full-time. Accordingly I would recommend donations sufficient to ensure Seth can continue his work. I would encourage donors to consider funding GCRI to scale beyond this, but to first obtain more information regarding their long-term plans and recruitment strategy." Earlier in the post: "If I had an additional $100k to donate, I would first check AI Impacts current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there. Otherwise, I would split it equally between ALLFED and GCRI." Percentage of total donor spend in the corresponding batch of donations: 20.00%.
Donor lottery | AI Impacts | 5,000.00 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report

The blog post explaining the donation contains extensive discussion of AI Impacts. Highlight: "I have found Katja's output in the past to be insightful, so I am excited at ensuring she remains funded. Tegan has less of a track record but based on the output so far I believe she is also worth funding. However, I believe AI Impacts has adequate funding for both of their current employees. Additional contributions would therefore do a combination of increasing their runway and supporting new hires. I am pessimistic about AI Impacts room for growth. This is primarily as I view recruitment in this area being difficult. The ideal candidate would be a cross between an OpenPhil research analyst and a technical AI or strategy researcher. This is a rare skill set with high opportunity cost. Moreover, AI Impacts has had issues with employee retention, with many individuals that have previously worked leaving for other organisations." In terms of the prioritization relative to other grantees: "I ranked GCRI above AI Impacts as AI Impacts core staff are adequately funded, and I am sceptical of their ability to recruit additional qualified staff members. I would favour AI Impacts over GCRI if they had qualified candidates they wanted to hire but were bottlenecked on funding. However, my hunch is that in such a situation they would be able to readily raise funding, although it may be that having an adequate funding reserve would substantially simplify recruitment. [...] If I had an additional $100k to donate, I would first check AI Impacts current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there." Percentage of total donor spend in the corresponding batch of donations: 5.00%.
Donor lottery | Wild-Animal Suffering Research | 5,000.00 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report

The blog post explaining the donation has some discussion of the grantee. Highlight: "Overall I think WASR is a well-run organisation with a clear strategy and a short but encouraging track record. I would encourage those with a near-term animal welfare centric worldview to support them. Under my own worldview, I did not find them competitive with the other organisations, and so recommended a small grant of $5,000." Percentage of total donor spend in the corresponding batch of donations: 5.00%.

Donation amounts by donee and year

Donee | Donors influenced | Cause area | Metadata | Total | 2021 | 2020 | 2018
Legal Priorities Project | Effective Altruism Funds: Long-Term Future Fund | | | 135,000.00 | 135,000.00 | 0.00 | 0.00
Center for Human-Compatible AI | Effective Altruism Funds: Long-Term Future Fund | AI safety | WP Site TW | 123,000.00 | 48,000.00 | 75,000.00 | 0.00
80,000 Hours | Effective Altruism Funds: Long-Term Future Fund | Career coaching/life guidance | FB Tw WP Site | 100,000.00 | 0.00 | 100,000.00 | 0.00
Machine Intelligence Research Institute | Effective Altruism Funds: Long-Term Future Fund | AI safety | FB Tw WP Site CN GS TW | 100,000.00 | 0.00 | 100,000.00 | 0.00
AI Safety Camp | Effective Altruism Funds: Long-Term Future Fund | | | 85,000.00 | 85,000.00 | 0.00 | 0.00
AI Impacts | Donor lottery, Effective Altruism Funds: Long-Term Future Fund | AI safety | Site | 80,000.00 | 0.00 | 75,000.00 | 5,000.00
Alliance to Feed the Earth in Disasters | Donor lottery | | | 70,000.00 | 0.00 | 0.00 | 70,000.00
Rethink Priorities | Effective Altruism Funds: Long-Term Future Fund | Cause prioritization | Site | 70,000.00 | 70,000.00 | 0.00 | 0.00
Global Catastrophic Risk Institute | Donor lottery | Global catastrophic risks | FB Tw Site | 20,000.00 | 0.00 | 0.00 | 20,000.00
Wild-Animal Suffering Research | Donor lottery | | | 5,000.00 | 0.00 | 0.00 | 5,000.00
Total | -- | -- | -- | 788,000.00 | 338,000.00 | 350,000.00 | 100,000.00
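The totals in the table above can be reproduced mechanically from the individual donations listed earlier. Below is a minimal Python sketch of that aggregation; the donation tuples are transcribed from this page, while the live site generates the table from its underlying database rather than from code like this.

```python
from collections import defaultdict

# (donee, year, amount) tuples transcribed from the donation list above.
donations = [
    ("Rethink Priorities", 2021, 70_000),
    ("Legal Priorities Project", 2021, 135_000),
    ("AI Safety Camp", 2021, 85_000),
    ("Center for Human-Compatible AI", 2021, 48_000),
    ("Center for Human-Compatible AI", 2020, 75_000),
    ("AI Impacts", 2020, 75_000),
    ("80,000 Hours", 2020, 100_000),
    ("Machine Intelligence Research Institute", 2020, 100_000),
    ("Alliance to Feed the Earth in Disasters", 2018, 70_000),
    ("Global Catastrophic Risk Institute", 2018, 20_000),
    ("AI Impacts", 2018, 5_000),
    ("Wild-Animal Suffering Research", 2018, 5_000),
]

# Aggregate amounts per donee per year.
by_donee = defaultdict(lambda: defaultdict(int))
for donee, year, amount in donations:
    by_donee[donee][year] += amount

# Print rows sorted by total (descending), breaking ties alphabetically, as in the table.
for donee, years in sorted(by_donee.items(), key=lambda kv: (-sum(kv[1].values()), kv[0])):
    total = sum(years.values())
    per_year = ", ".join(f"{y}: {years.get(y, 0):,}" for y in (2021, 2020, 2018))
    print(f"{donee}: total {total:,} ({per_year})")

print(f"Grand total: {sum(a for _, _, a in donations):,}")  # 788,000
```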

Graph of spending by donee and year (incremental, not cumulative)

Graph of spending by donee and year (cumulative)

Donation amounts by donor and year for influencer Adam Gleave

Donor | Donees | Total | 2021 | 2020 | 2018
Effective Altruism Funds: Long-Term Future Fund | 80,000 Hours, AI Impacts, AI Safety Camp, Center for Human-Compatible AI, Legal Priorities Project, Machine Intelligence Research Institute, Rethink Priorities | 688,000.00 | 338,000.00 | 350,000.00 | 0.00
Donor lottery | AI Impacts, Alliance to Feed the Earth in Disasters, Global Catastrophic Risk Institute, Wild-Animal Suffering Research | 100,000.00 | 0.00 | 0.00 | 100,000.00
Total | -- | 788,000.00 | 338,000.00 | 350,000.00 | 100,000.00

Graph of spending by donor and year (incremental, not cumulative)

Graph of spending by donor and year (cumulative)