2019 AI Alignment Literature Review and Charity Comparison | 2019-12-19 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund | Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison. This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison. The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his assessment of all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag and lists the hashtags at the beginning of the document. |
2018 AI Alignment Literature Review and Charity Comparison | 2018-12-17 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis | | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison. This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison. The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor-quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues." |
Effective Altruism Foundation update: Plans for 2018 and room for more funding | 2017-12-15 | Jonas Vollmer | Effective Altruism Foundation | | Effective Altruism Foundation Raising for Effective Giving Foundational Research Institute Wild-Animal Suffering Research | | Donee donation case | Effective altruism/movement growth/s-risk reduction | The document describes the Effective Altruism Foundation's plans for 2018 and its room for more funding. Subsidiaries include Raising for Effective Giving, Foundational Research Institute, and Wild-Animal Suffering Research. Also cross-posted at https://ea-foundation.org/blog/our-plans-for-2018/ (own blog)
Fear and Loathing at Effective Altruism Global 2017 | 2017-08-16 | Scott Alexander | Slate Star Codex | Open Philanthropy | GiveWell Centre for Effective Altruism Center for Effective Global Action Raising for Effective Giving 80,000 Hours Wild-Animal Suffering Research Qualia Research Institute Foundational Research Institute | | Miscellaneous commentary | | Scott Alexander describes his experience at Effective Altruism Global 2017. He observes that the effective altruism movement has both the formal-looking "suits", who are in charge of large amounts of money, and the "weirdos", who toy with ideas that seem strange and are not mainstream even within effective altruism. However, he feels that rather than being two separate groups, the two blend into and overlap with each other. He sees this as a sign that the effective altruism movement is composed of genuinely good people who are looking to make a difference, and explains why he thinks they are succeeding.
Introducing CEA’s Guiding Principles | 2017-03-07 | William MacAskill | Centre for Effective Altruism | Effective Altruism Foundation | Rethink Charity Centre for Effective Altruism 80,000 Hours Animal Charity Evaluators Charity Science Effective Altruism Foundation Foundational Research Institute Future of Life Institute Raising for Effective Giving The Life You Can Save | | Miscellaneous commentary | Effective altruism | William MacAskill outlines CEA's understanding of the guiding principles of effective altruism: commitment to others, scientific mindset, openness, integrity, and collaborative spirit. The post also lists other organizations that voice their support for the definition and guiding principles, including: .impact, 80,000 Hours, Animal Charity Evaluators, Charity Science, Effective Altruism Foundation, Foundational Research Institute, Future of Life Institute, Raising for Effective Giving, and The Life You Can Save. The following individuals are also listed as voicing their support for the definition and guiding principles: Elie Hassenfeld of GiveWell and the Open Philanthropy Project, Holden Karnofsky of GiveWell and the Open Philanthropy Project, Toby Ord of the Future of Humanity Institute, Nate Soares of the Machine Intelligence Research Institute, and Peter Singer. William MacAskill worked on the document with Julia Wise, and also expresses gratitude to Rob Bensinger and Hilary Mayhew for their comments and wording suggestions. The post also briefly mentions an advisory panel set up by Julia Wise, and links to https://forum.effectivealtruism.org/posts/mdMyPRSSzYgk7X45K/advisory-panel-at-cea for more detail.
CEA Staff Donation Decisions 2016 | 2016-12-06 | Sam Deere | Centre for Effective Altruism | William MacAskill Michelle Hutchinson Tara MacAulay Alison Woodman Seb Farquhar Hauke Hillebrandt Marinella Capriati Sam Deere Max Dalton Larissa Hesketh-Rowe Michael Page Stefan Schubert Pablo Stafforini Amy Labenz | Centre for Effective Altruism 80,000 Hours Against Malaria Foundation Schistosomiasis Control Initiative Animal Charity Evaluators Charity Science Health New Incentives Project Healthy Children Deworm the World Initiative Machine Intelligence Research Institute StrongMinds Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Effective Altruism Foundation Sci-Hub Vote.org The Humane League Foundational Research Institute | | Periodic donation list documentation | | Centre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed. |
My Cause Selection: Michael Dickens | 2015-09-15 | Michael Dickens | Effective Altruism Forum | Michael Dickens | Machine Intelligence Research Institute Future of Humanity Institute Centre for the Study of Existential Risk Future of Life Institute Open Philanthropy Animal Charity Evaluators Animal Ethics Foundational Research Institute Giving What We Can Charity Science Raising for Effective Giving | | Single donation documentation | Animal welfare, AI risk, Effective altruism | Explanation by Dickens of his giving choice for 2015. After some consideration, he narrows the choice to three organizations: MIRI, ACE, and REG. He ultimately chooses REG due to its weighted donation multiplier.