Future of Humanity Institute donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United Kingdom
Facebook page | FHIOxford
Website | https://www.fhi.ox.ac.uk/
Donate page | https://www.fhi.ox.ac.uk/support-fhi/
Twitter username | FHIOxford
Wikipedia page | https://en.wikipedia.org/wiki/Future_of_Humanity_Institute
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Future_of_Humanity_Institute
Org Watch page | https://orgwatch.issarice.com/?organization=Future+of+Humanity+Institute
Key people | Nick Bostrom
Launch date | 2005

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 8 | 20,097 | 1,789,586 | 24 | 24 | 24 | 82 | 20,097 | 20,097 | 115,652 | 120,000 | 1,994,000 | 12,066,809 | 12,066,809
Global catastrophic risks | 6 | 82 | 2,034,506 | 24 | 24 | 24 | 24 | 82 | 82 | 20,097 | 120,000 | 120,000 | 12,066,809 | 12,066,809
Biosecurity and pandemic preparedness | 1 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652 | 115,652
AI safety | 1 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000 | 1,994,000
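
The count, mean, median, and percentile columns above can be reproduced from the eight donation amounts in the full donation list at the bottom of this page. Below is a minimal sketch in Python, assuming a nearest-rank percentile rule (the portal's exact convention is not documented on this page, but this rule matches the figures in the table); the amounts are transcribed by hand from the donation list.

```python
import math

# The eight donation amounts (current USD) from the full donation list on this page.
amounts = sorted([12066808.93, 1994000.00, 120000.00, 115652.00,
                  20097.00, 82.16, 23.86, 23.74])

def nearest_rank_percentile(sorted_values, p):
    # Smallest value such that at least p% of the data lies at or below it (nearest-rank rule).
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))  # 1-based rank
    return sorted_values[rank - 1]

count = len(amounts)
mean = sum(amounts) / count                      # ~1,789,586
median = nearest_rank_percentile(amounts, 50)    # 20,097

print(f"count={count} mean={mean:,.0f} median={median:,.0f}")
for p in (10, 20, 30, 40, 50, 60, 70, 80, 90):
    print(f"{p}th percentile: {nearest_rank_percentile(amounts, p):,.0f}")
```

The minimum and maximum columns are simply the smallest and largest of the same eight amounts.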

Donation amounts by donor and year for donee Future of Humanity Institute

Donor | Total | 2018 | 2017 | 2016
Open Philanthropy (filter this donee) | 14,176,460.93 | 12,066,808.93 | 1,994,000.00 | 115,652.00
Gordon Irlam (filter this donee) | 120,000.00 | 0.00 | 0.00 | 120,000.00
Effective Altruism Grants (filter this donee) | 20,097.00 | 0.00 | 20,097.00 | 0.00
Samuel Hilton (filter this donee) | 82.16 | 0.00 | 0.00 | 82.16
Cedric Eveleigh (filter this donee) | 23.86 | 0.00 | 0.00 | 23.86
Marius Hobbhahn (filter this donee) | 23.74 | 0.00 | 0.00 | 23.74
Total | 14,316,687.69 | 12,066,808.93 | 2,014,097.00 | 235,781.76
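
These per-donor totals can likewise be recomputed from the individual donations in the full list below. The following is an illustrative sketch in Python, not the portal's actual generation code; the (donor, year, amount) tuples are transcribed by hand from the donation list.

```python
from collections import defaultdict

# (donor, year, amount in current USD), transcribed from the full donation list below.
donations = [
    ("Open Philanthropy", 2018, 12066808.93),
    ("Open Philanthropy", 2017, 1994000.00),
    ("Open Philanthropy", 2016, 115652.00),
    ("Gordon Irlam", 2016, 120000.00),
    ("Effective Altruism Grants", 2017, 20097.00),
    ("Samuel Hilton", 2016, 82.16),
    ("Cedric Eveleigh", 2016, 23.86),
    ("Marius Hobbhahn", 2016, 23.74),
]

# Accumulate per-donor totals, both overall and by year.
totals = defaultdict(lambda: defaultdict(float))
for donor, year, amount in donations:
    totals[donor][year] += amount
    totals[donor]["Total"] += amount

# Print donors sorted by overall total, matching the table layout above.
for donor in sorted(totals, key=lambda d: -totals[d]["Total"]):
    row = totals[donor]
    print(f"{donor}: total={row['Total']:,.2f}, "
          f"2018={row[2018]:,.2f}, 2017={row[2017]:,.2f}, 2016={row[2016]:,.2f}")
```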

Full list of documents in reverse chronological order (26 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
(My understanding of) What Everyone in Technical Alignment is Doing and Why (GW, IR)2022-08-28Thomas Larsen Eli LessWrongFund for Alignment Research Aligned AI Alignment Research Center Anthropic Center for AI Safety Center for Human-Compatible AI Center on Long-Term Risk Conjecture DeepMind Encultured Future of Humanity Institute Machine Intelligence Research Institute OpenAI Ought Redwood Research Review of current state of cause areaAI safetyThis post, cross-posted between LessWrong and the Alignment Forum, goes into detail on the authors' understanding of various research agendas and the organizations pursuing them.
2021 AI Alignment Literature Review and Charity Comparison (GW, IR)2021-12-23Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, i.e., where each organization would like to see funds go, other than to itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR)2020-12-21Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR)2019-12-19Larks Effective Altruism ForumLarks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 20192019-12-18Holden Karnofsky Open PhilanthropyChloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought Donation suggestion listCriminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safetyContinuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of "Organizations supported by our Committee for Effective Altruism Support", i.e., organizations within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents their views.
ALLFED 2019 Annual Report and Fundraising Appeal (GW, IR)2019-11-23Aron Mill Alliance to Feed the Earth in DisastersBerkeley Existential Risk Initiative Donor lottery Effective Altruism Grants Open Philanthropy Alliance to Feed the Earth in Disasters Future of Humanity Institute Donee donation caseAlternative foodsAron Mill provides a summary of the work of the Alliance to Feed the Earth in Disasters (ALLFED) in 2019. He lists key supporters as well as partners that ALLFED worked with during the year. The blog post then makes a fundraising appeal and case for supporting ALLFED. Sections of the blog post include: (1) research output, (2) preparedness and alliance-building, (3) ALLFED team, (4) current projects, and (5) projects in need of funding.
Problems in effective altruism and existential risk and what to do about them2019-10-16Simon Knutsson Open Philanthropy Effective Altruism Foundation Centre for Effective Altruism Effective Altruism Foundation Future of Humanity Institute Miscellaneous commentaryEffective altruism|Global catastrophic risksSimon Knutsson, a Ph.D. student who previously worked at GiveWell and has, since then, worked on animal welfare and on s-risks, writes about what he sees as problematic dynamics in the effective altruism and x-risk communities. Specifically, he is critical of what he sees as behind-the-scenes coordination work on messaging between many organizations in the space, notably the Open Philanthropy Project and the Effective Altruism Foundation, and the possible use of grant money to pressure EAF into pushing for guidelines for writers to not talk about s-risks in specific ways. He is also critical of what he sees as the one-sided nature of the syllabi and texts produced by the Centre for Effective Altruism (CEA). The author notes that people have had different reactions to his text, with some considering the described behavior unproblematic and others agreeing with him that it is problematic and deserves the spotlight. The post is also shared to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/EescnoaBJsQWz4rii/problems-in-effective-altruism-and-what-to-do-about-them (GW, IR) where it gets a lot of criticism in the comments from people including Peter Hurford and Holly Elmore.
Committee for Effective Altruism Support2019-02-27Open PhilanthropyOpen Philanthropy Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute Broad donor strategyEffective altruism|AI safetyThe document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose sizes were determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
2018 AI Alignment Literature Review and Charity Comparison (GW, IR)2018-12-17Larks Effective Altruism ForumLarks Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR) The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
Job opportunity at the Future of Humanity Institute and Global Priorities Institute2018-04-01Hayden Win Effective Altruism Forum Global Priorities Institute Future of Humanity Institute Job advertisementAI safetyThe blog post advertises a Senior Administrator position that would be shared between the Future of Humanity Institute and the Global Priorities Institute.
Opportunities for individual donors in AI safety (GW, IR)2018-03-12Alex Flint Effective Altruism Forum Machine Intelligence Research Institute Future of Humanity Institute Review of current state of cause areaAI safetyAlex Flint discusses the history of AI safety funding, and suggests some heuristics for individual donors based on what he has seen to be successful in the past.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
2017 AI Safety Literature Review and Charity Comparison (GW, IR)2017-12-20Larks Effective Altruism ForumLarks Machine Intelligence Research Institute Future of Humanity Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk AI Impacts Center for Human-Compatible AI Center for Applied Rationality Future of Life Institute 80,000 Hours Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
AI: a Reason to Worry, and to Donate2017-12-10Jacob Falkovich Jacob Falkovich Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds Single donation documentationAI safetyFalkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
Tom Sittler: current view, Machine Intelligence Research Institute2017-02-08Tom Sittler Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Future of Humanity Institute Evaluator review of doneeAI safetyTom Sittler explains why he considers the Machine Intelligence Research Institute the best donation opportunity. Cites http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/ http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ and mentions Michael Dickens's model as a potential reason to update.
Changes in funding in the AI safety field2017-02-01Sebastian Farquhar Centre for Effective Altruism Machine Intelligence Research Institute Center for Human-Compatible AI Leverhulme Centre for the Future of Intelligence Future of Life Institute Future of Humanity Institute OpenAI MIT Media Lab Review of current state of cause areaAI safetyThe post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/
The effective altruism guide to donating this giving season2016-12-28Robert Wiblin 80,000 Hours Blue Ribbon Study Panel on Biodefense Cool Earth Alliance for Safety and Justice Cosecha Centre for Effective Altruism 80,000 Hours Animal Charity Evaluators Compassion in World Farming USA Against Malaria Foundation Schistosomiasis Control Initiative StrongMinds Ploughshares Fund Machine Intelligence Research Institute Future of Humanity Institute Evaluator consolidated recommendation listBiosecurity and pandemic preparedness,Global health and development,Animal welfare,AI risk,Global catastrophic risks,Effective altruism/movement growthRobert Wiblin draws on a number of annual charity evaluations and reviews, as well as staff donation writeups, from sources such as GiveWell and Animal Charity Evaluators, to provide an "effective altruism guide" for donating in the 2016 giving season.
Where the ACE Staff Members are Giving in 2016 and Why2016-12-23Leah Edgerton Animal Charity EvaluatorsAllison Smith Jacy Reese Toni Adleberg Gina Stuessy Kieran Grieg Eric Herboso Erika Alonso Animal Charity Evaluators Animal Equality Vegan Outreach Act Asia Faunalytics Farm Animal Rights Movement Sentience Politics Direct Action Everywhere The Humane League The Good Food Institute Collectively Free Planned Parenthood Future of Life Institute Future of Humanity Institute GiveDirectly Machine Intelligence Research Institute The Humane Society of the United States Farm Sanctuary StrongMinds Periodic donation list documentationAnimal welfare|AI safety|Global catastrophic risksAnimal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2016. Donation amounts are not disclosed, likely by policy
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20162016-12-14Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policyOpen Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas.
2016 AI Risk Literature Review and Charity Comparison (GW, IR)2016-12-13Larks Effective Altruism ForumLarks Machine Intelligence Research Institute Future of Humanity Institute OpenAI Center for Human-Compatible AI Future of Life Institute Centre for the Study of Existential Risk Leverhulme Centre for the Future of Intelligence Global Catastrophic Risk Institute Global Priorities Project AI Impacts Xrisks Institute X-Risks Net Center for Applied Rationality 80,000 Hours Raising for Effective Giving Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
CEA Staff Donation Decisions 20162016-12-06Sam Deere Centre for Effective AltruismWilliam MacAskill Michelle Hutchinson Tara MacAulay Alison Woodman Seb Farquhar Hauke Hillebrandt Marinella Capriati Sam Deere Max Dalton Larissa Hesketh-Rowe Michael Page Stefan Schubert Pablo Stafforini Amy Labenz Centre for Effective Altruism 80,000 Hours Against Malaria Foundation Schistosomiasis Control Initiative Animal Charity Evaluators Charity Science Health New Incentives Project Healthy Children Deworm the World Initiative Machine Intelligence Research Institute StrongMinds Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Effective Altruism Foundation Sci-Hub Vote.org The Humane League Foundational Research Institute Periodic donation list documentationCentre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.
Some Key Ways in Which I've Changed My Mind Over the Last Several Years2016-09-06Holden Karnofsky Open Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Reasoning supplementAI safetyIn this 16-page Google Doc, Holden Karnofsky, Executive Director of the Open Philanthropy Project, lists three issues he has changed his mind about: (1) AI safety (he considers it more important now), (2) effective altruism community (he takes it more seriously now), and (3) general properties of promising ideas and interventions (he considers feedback loops less necessary than he used to, and finding promising ideas through abstract reasoning more promising). The document is linked to and summarized in the blog post https://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity2016-05-06Holden Karnofsky Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Review of current state of cause areaAI safetyIn this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority.
Where should you donate to have the most impact during giving season 2015?2015-12-24Robert Wiblin 80,000 Hours Against Malaria Foundation Giving What We Can GiveWell AidGrade Effective Altruism Outreach Animal Charity Evaluators Machine Intelligence Research Institute Raising for Effective Giving Center for Applied Rationality Johns Hopkins Center for Health Security Ploughshares Fund Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Charity Science Deworm the World Initiative Schistosomiasis Control Initiative GiveDirectly Evaluator consolidated recommendation listGlobal health and development|Effective altruism/movement growth|Epistemic institutions|Biosecurity and pandemic preparedness|AI risk|Global catastrophic risksRobert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate
Peter McCluskey's favorite charities2015-12-06Peter McCluskey Peter McCluskey Center for Applied Rationality Future of Humanity Institute AI Impacts GiveWell GiveWell top charities Future of Life Institute Centre for Effective Altruism Brain Preservation Foundation Multidisciplinary Association for Psychedelic Studies Electronic Frontier Foundation Methuselah Mouse Prize SENS Research Foundation Foresight Institute Evaluator consolidated recommendation listThe page discusses the favorite charities of Peter McCluskey and his opinion on their current room for more funding in light of their financial situation and expansion plans
My Cause Selection: Michael Dickens2015-09-15Michael Dickens Effective Altruism ForumMichael Dickens Machine Intelligence Research Institute Future of Humanity Institute Centre for the Study of Existential Risk Future of Life Institute Open Philanthropy Animal Charity Evaluators Animal Ethics Foundational Research Institute Giving What We Can Charity Science Raising for Effective Giving Single donation documentationAnimal welfare,AI risk,Effective altruismExplanation by Dickens of giving choice for 2015. After some consideration, narrows choice to three orgs: MIRI, ACE, and REG. Finally chooses REG due to weighted donation multiplier

Full list of donations in reverse chronological order (8 donations)

Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations

Donor | Amount (current USD) | Amount rank (out of 8) | Donation date | Cause area | URL | Influencer | Notes
Open Philanthropy | 12,066,808.93 | 1 | 2018-07 | Global catastrophic risks | https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/future-humanity-institute-work-on-global-catastrophic-risks | Nick Beckstead | Donation process: This is a series of awards totaling £13,428,434 ($16,200,062.78 USD at market rate on September 2, 2019); as of September 18, 2019, $12,066,808.93 of the amount has been allocated

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support work on risks from advanced artificial intelligence, biosecurity and pandemic preparedness, and macrostrategy. The grant page says: "The largest pieces of the omnibus award package will allow FHI to recruit and hire for an education and training program led by Owen Cotton-Barratt, and retain and attract talent in biosecurity research and FHI’s Governance of AI program."

Other notes: Intended funding timeframe in months: 36; announced: 2018-09-01.
Effective Altruism Grants | 20,097.00 | 5 | 2017-09-29 | Global catastrophic risks | https://docs.google.com/spreadsheets/d/1iBy--zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk | -- | Research into biological risk mitigation with the Future of Humanity Institute. See http://effective-altruism.com/ea/1fc/effective_altruism_grants_project_update/ for more context about the grant program. Currency info: donation given as 15,000.00 GBP (conversion done on 2017-09-29 via Bloomberg).
Open Philanthropy | 1,994,000.00 | 2 | 2017-03 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support | -- | Grant for general support. A related grant specifically for biosecurity work was granted in 2016-09, made earlier for logistical reasons. Announced: 2017-03-06.
Open Philanthropy | 115,652.00 | 4 | 2016-09 | Biosecurity and pandemic preparedness | https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/future-humanity-institute-biosecurity-and-pandemic-preparedness | -- | Conceptually part of a larger grant to the recipient, whose primary work area is AI risk reduction. More details in writeup for larger grant at https://www.openphilanthropy.org/focus/global-catastrophic-risks/miscellaneous/future-humanity-institute-general-support. Announced: 2017-03-06.
Marius Hobbhahn | 23.74 | 8 | 2016 | Global catastrophic risks/AI risk/Biosecurity and pandemic preparedness | https://github.com/peterhurford/ea-data/ | -- | Currency info: donation given as 20.00 EUR (conversion done on 2017-08-05 via Fixer.io).
Samuel Hilton | 82.16 | 6 | 2016 | Global catastrophic risks/AI risk/Biosecurity and pandemic preparedness | https://github.com/peterhurford/ea-data/ | -- | Currency info: donation given as 62.50 GBP (conversion done on 2017-08-05 via Fixer.io).
Cedric Eveleigh | 23.86 | 7 | 2016 | Global catastrophic risks/AI risk/Biosecurity and pandemic preparedness | https://github.com/peterhurford/ea-data/ | -- | Currency info: donation given as 30.00 CAD (conversion done on 2017-08-05 via Fixer.io).
Gordon Irlam | 120,000.00 | 3 | 2016 | Global catastrophic risks/AI risk/Biosecurity and pandemic preparedness | https://www.gricf.org/2016-report.html | --
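
The non-USD donations above are reported in current USD after conversion on the stated dates (via Bloomberg or Fixer.io). The sketch below simply back-computes the implied exchange rates from the figures in the list; the rates are outputs of this arithmetic rather than independently sourced quotes.

```python
# (donor, original amount, original currency, USD amount), taken from the donation list above.
conversions = [
    ("Effective Altruism Grants", 15000.00, "GBP", 20097.00),  # converted 2017-09-29 via Bloomberg
    ("Samuel Hilton",                62.50, "GBP",    82.16),  # converted 2017-08-05 via Fixer.io
    ("Marius Hobbhahn",              20.00, "EUR",    23.74),  # converted 2017-08-05 via Fixer.io
    ("Cedric Eveleigh",              30.00, "CAD",    23.86),  # converted 2017-08-05 via Fixer.io
]

for donor, original, currency, usd in conversions:
    implied_rate = usd / original  # USD per unit of original currency
    print(f"{donor}: {original:,.2f} {currency} -> {usd:,.2f} USD "
          f"(implied rate {implied_rate:.4f} USD/{currency})")
```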