This is an online portal with information on donations of interest to Vipul Naik that were announced publicly or shared with permission. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of July 2024.

See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 480 | 6,000 | 100,150 | 0 | 130 | 650 | 2,000 | 5,000 | 6,000 | 9,990 | 12,500 | 20,518 | 55,103 | 15,592,829 |
AI safety | 476 | 6,000 | 92,584 | 0 | 150 | 800 | 2,000 | 5,000 | 6,000 | 10,000 | 12,550 | 20,518 | 55,103 | 15,592,829 |
FIXME | 1 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 |
(unspecified) | 3 | 25 | 1,333,967 | 20 | 20 | 20 | 20 | 25 | 25 | 25 | 4,001,855 | 4,001,855 | 4,001,855 | 4,001,855 |
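The summary statistics above (count, median, mean, and percentiles of individual donation amounts) can be recomputed from the underlying data in the GitHub repository. The following is a minimal sketch, not the portal's actual code: it assumes the donation amounts are available as a plain list of USD numbers, and it uses a nearest-rank percentile convention, which may differ slightly from the convention the portal itself uses.

```python
import statistics

def summarize(amounts):
    """Summary statistics in the same shape as the table above:
    count, median, mean, minimum, 10th-90th percentiles, maximum."""
    xs = sorted(amounts)
    n = len(xs)

    def percentile(p):
        # Nearest-rank style index; the portal's exact convention may differ.
        return xs[min(n - 1, max(0, round(p / 100 * (n - 1))))]

    return {
        "count": n,
        "median": statistics.median(xs),
        "mean": statistics.mean(xs),
        "minimum": xs[0],
        **{f"{p}th percentile": percentile(p) for p in range(10, 100, 10)},
        "maximum": xs[-1],
    }

# Hypothetical example amounts, not actual portal data:
print(summarize([20, 25, 5000, 6000, 12500]))
```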
Donor | Total | 2021 | 2020 | 2019 | 2018 | 2017 | 2016 | 2015 | 2014 | 2013 | 2012 | 2011 | 2009 | 2008 | 2007 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Anonymous MIRI cryptocurrency donor | 16,599,378.00 | 15,592,829.00 | 0.00 | 0.00 | 0.00 | 1,006,549.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Open Philanthropy | 14,756,250.00 | 0.00 | 7,703,750.00 | 2,652,500.00 | 150,000.00 | 3,750,000.00 | 500,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Vitalik Buterin | 4,803,990.50 | 4,001,854.50 | 0.00 | 0.00 | 802,136.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Thiel Foundation | 1,627,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 250,000.00 | 250,000.00 | 27,000.00 | 570,000.00 | 0.00 | 255,000.00 | 150,000.00 | 125,000.00 |
Jaan Tallinn | 1,447,500.00 | 0.00 | 843,000.00 | 0.00 | 0.00 | 60,500.00 | 80,000.00 | 0.00 | 100,000.00 | 100,000.00 | 264,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Berkeley Existential Risk Initiative | 1,100,000.00 | 0.00 | 300,000.00 | 600,000.00 | 0.00 | 200,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Effective Altruism Funds: Long-Term Future Fund | 678,994.00 | 0.00 | 100,000.00 | 50,000.00 | 528,994.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jed McCaleb | 671,137.00 | 0.00 | 40,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 631,137.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Loren Merritt | 525,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 25,000.00 | 115,000.00 | 0.00 | 0.00 | 245,000.00 | 130,000.00 | 10,000.00 | 0.00 | 0.00 | 0.00 |
Edwin Evans | 475,080.00 | 0.00 | 0.00 | 0.00 | 35,550.00 | 60,000.00 | 40,000.00 | 0.00 | 50,030.00 | 52,500.00 | 237,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Richard Schwall | 419,495.00 | 0.00 | 0.00 | 0.00 | 65,189.00 | 46,698.00 | 30,000.00 | 0.00 | 106,608.00 | 10,000.00 | 161,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Christian Calderon | 367,574.00 | 0.00 | 0.00 | 0.00 | 0.00 | 367,574.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Blake Borgeson | 350,470.00 | 0.00 | 0.00 | 0.00 | 10.00 | 0.00 | 300,000.00 | 50,460.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Investling Group | 309,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 65,000.00 | 24,000.00 | 220,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Future of Life Institute | 250,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 250,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Raising for Effective Giving | 204,167.00 | 0.00 | 0.00 | 0.00 | 0.00 | 204,167.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jonathan Weissman | 171,290.00 | 0.00 | 0.00 | 0.00 | 20,000.00 | 40,000.00 | 20,000.00 | 0.00 | 20,010.00 | 30,280.00 | 41,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Brian Cartmell | 146,700.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 700.00 | 146,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Scott Dickey | 130,520.00 | 0.00 | 0.00 | 0.00 | 3,000.00 | 33,000.00 | 11,000.00 | 10,000.00 | 11,020.00 | 14,500.00 | 48,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Eric Rogstad | 120,236.00 | 0.00 | 0.00 | 0.00 | 19,000.00 | 101,236.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Ben Hoskin | 94,209.00 | 0.00 | 0.00 | 0.00 | 31,481.00 | 30,000.00 | 32,728.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Ethan Dickinson | 93,408.00 | 0.00 | 0.00 | 0.00 | 0.00 | 25,400.00 | 20,518.00 | 12,000.00 | 35,490.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Peter Scott | 80,000.00 | 0.00 | 0.00 | 0.00 | 30,000.00 | 50,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Sebastian Hagen | 74,587.00 | 0.00 | 0.00 | 0.00 | 22,384.00 | 10,851.00 | 12,085.00 | 12,113.00 | 17,154.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Buck Shlegeris | 72,679.00 | 0.00 | 0.00 | 0.00 | 5.00 | 72,674.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Marius van Voorden | 71,461.00 | 0.00 | 0.00 | 0.00 | 0.00 | 59,251.00 | 0.00 | 0.00 | 7,210.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Leif K-Brooks | 67,216.00 | 0.00 | 0.00 | 0.00 | 0.00 | 67,216.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Alexei Andreev | 64,605.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 525.00 | 0.00 | 23,280.00 | 16,000.00 | 24,800.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Chris Haley | 60,250.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 250.00 | 60,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Guy Srinivasan | 58,310.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 6,910.00 | 8,400.00 | 43,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Gordon Irlam | 55,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 20,000.00 | 10,000.00 | 10,000.00 | 10,000.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Henrik Jonsson | 54,525.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1.00 | 36,975.00 | 0.00 | 17,549.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Mikko Rauhala | 53,745.00 | 0.00 | 0.00 | 0.00 | 7,200.00 | 13,200.00 | 6,000.00 | 0.00 | 170.00 | 2,575.00 | 24,600.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Blume | 51,755.00 | 0.00 | 0.00 | 0.00 | 1,400.00 | 5,425.00 | 15,000.00 | 4,000.00 | 5,140.00 | 9,990.00 | 10,800.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Mihaly Barasz | 51,623.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 24,073.00 | 12,550.00 | 15,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Alan Chang | 51,050.00 | 0.00 | 0.00 | 0.00 | 33,050.00 | 18,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Luke Stebbing | 50,500.00 | 0.00 | 0.00 | 0.00 | 3,000.00 | 18,050.00 | 10,500.00 | 9,650.00 | 9,300.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Misha Gurevich | 50,370.00 | 0.00 | 0.00 | 0.00 | 1,500.00 | 9,000.00 | 6,000.00 | 4,500.00 | 5,520.00 | 7,550.00 | 16,300.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Brandon Reinhart | 50,050.00 | 0.00 | 0.00 | 0.00 | 0.00 | 25,050.00 | 15,000.00 | 0.00 | 0.00 | 0.00 | 10,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Scott Worley | 50,005.00 | 0.00 | 0.00 | 0.00 | 9,486.00 | 21,687.00 | 5,488.00 | 13,344.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Marcello Herreshoff | 49,110.00 | 0.00 | 0.00 | 0.00 | 0.00 | 12,000.00 | 12,000.00 | 12,560.00 | 12,550.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jason Joachim | 44,100.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00 | 44,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Cohen | 39,359.00 | 0.00 | 0.00 | 0.00 | 0.00 | 9,977.00 | 29,382.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Scott Siskind | 38,500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 29,000.00 | 2,037.00 | 30.00 | 7,433.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Robin Powell | 37,560.00 | 0.00 | 0.00 | 0.00 | 1,000.00 | 11,200.00 | 200.00 | 0.00 | 1,810.00 | 2,350.00 | 21,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Austin Peña | 37,517.00 | 0.00 | 0.00 | 0.00 | 26,554.00 | 10,963.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Max Kesin | 37,000.00 | 0.00 | 0.00 | 0.00 | 10,000.00 | 20,420.00 | 6,580.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Sadowsky | 34,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 9,000.00 | 25,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Gustav Simonsson | 33,285.00 | 0.00 | 0.00 | 0.00 | 33,285.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Nathaniel Soares | 33,230.00 | 0.00 | 0.00 | 0.00 | 0.00 | 10.00 | 0.00 | 0.00 | 33,220.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Kelsey Piper | 30,730.00 | 0.00 | 0.00 | 0.00 | 30,730.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Quinn Maurmann | 30,575.00 | 0.00 | 0.00 | 0.00 | 30,575.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jesse Liptrap | 29,590.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 10,490.00 | 0.00 | 19,100.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jeremy Schlatter | 28,711.00 | 0.00 | 0.00 | 0.00 | 150.00 | 150.00 | 4,000.00 | 1.00 | 310.00 | 15,000.00 | 9,100.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Tomer Kagan | 26,500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 16,500.00 | 10,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Brian Tomasik | 26,010.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 2,000.00 | 10.00 | 0.00 | 0.00 | 0.00 | 12,000.00 | 12,000.00 | 0.00 |
Patrick LaVictoire | 25,885.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 20,885.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Paul Crowley | 25,850.00 | 0.00 | 0.00 | 0.00 | 13,400.00 | 12,450.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Zvi Mowshowitz | 25,010.00 | 0.00 | 0.00 | 0.00 | 10,000.00 | 10,000.00 | 0.00 | 5,010.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Gary Basin | 25,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 25,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Martine Rothblatt | 25,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 25,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Mick Porter | 24,810.00 | 0.00 | 0.00 | 0.00 | 1,200.00 | 9,200.00 | 2,400.00 | 4,000.00 | 8,010.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Liron Shapira | 24,750.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 15,100.00 | 0.00 | 9,650.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Johan Edström | 23,680.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,700.00 | 300.00 | 0.00 | 2,250.00 | 7,730.00 | 7,700.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Ethan Sterling | 23,418.00 | 0.00 | 0.00 | 0.00 | 23,418.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Elliot Glaysher | 23,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 23,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Mike Anderson | 23,000.00 | 0.00 | 0.00 | 0.00 | 23,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Janos Kramar | 22,811.00 | 0.00 | 0.00 | 0.00 | 0.00 | 200.00 | 541.00 | 800.00 | 4,870.00 | 5,600.00 | 10,800.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Rolf Nelson | 21,810.00 | 0.00 | 0.00 | 0.00 | 100.00 | 0.00 | 0.00 | 0.00 | 1,710.00 | 0.00 | 20,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Bruno Parga | 21,743.00 | 0.00 | 0.00 | 0.00 | 10,382.00 | 11,361.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Kevin Fischer | 21,230.00 | 0.00 | 0.00 | 0.00 | 1,000.00 | 2,000.00 | 0.00 | 3,780.00 | 4,270.00 | 4,280.00 | 5,900.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Pasha Kamyshev | 20,200.00 | 0.00 | 0.00 | 0.00 | 20,200.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Survival and Flourishing Fund | 20,000.00 | 0.00 | 20,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Victoria Krakovna | 19,867.00 | 0.00 | 0.00 | 0.00 | 19,867.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Nicolas Tarleton | 19,559.00 | 0.00 | 0.00 | 0.00 | 0.00 | 2,000.00 | 0.00 | 0.00 | 200.00 | 9,659.00 | 7,700.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Simon Sáfár | 19,162.00 | 0.00 | 0.00 | 0.00 | 3,131.00 | 16,031.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jai Dhyani | 19,156.00 | 0.00 | 0.00 | 0.00 | 7,751.00 | 11,405.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Sergejs Silko | 18,200.00 | 0.00 | 0.00 | 0.00 | 18,200.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
John Salvatier | 17,718.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 9,110.00 | 2,000.00 | 10.00 | 0.00 | 6,598.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Stanley Pecavar | 17,450.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 17,450.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Benjamin Goldhaber | 17,250.00 | 0.00 | 0.00 | 0.00 | 0.00 | 17,250.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Leopold Bauernfeind | 16,900.00 | 0.00 | 0.00 | 0.00 | 0.00 | 16,900.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Wolf Tivy | 16,758.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 16,758.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
The Maurice Amado Foundation | 16,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 16,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Sergio Tarrero | 15,220.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 620.00 | 0.00 | 14,600.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Donald King | 15,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 9,000.00 | 0.00 | 6,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Quixey | 15,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 15,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Stephan T. Lavavej | 15,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 15,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Tran Bao Trung | 14,379.00 | 0.00 | 0.00 | 0.00 | 0.00 | 14,379.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Aleksei Riikonen | 14,372.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 130.00 | 0.00 | 242.00 | 14,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
William Morgan | 13,571.00 | 0.00 | 0.00 | 0.00 | 0.00 | 1,000.00 | 400.00 | 0.00 | 0.00 | 5,171.00 | 7,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
James Mazur | 13,127.00 | 0.00 | 0.00 | 0.00 | 675.00 | 12,452.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michal Pokorný | 13,000.00 | 0.00 | 0.00 | 0.00 | 1,000.00 | 12,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Eric Lin | 12,870.00 | 0.00 | 0.00 | 0.00 | 12,870.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Ames | 12,500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 12,500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Roy Ames | 12,500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 12,500.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Sam Eisenstat | 12,356.00 | 0.00 | 0.00 | 0.00 | 0.00 | 12,356.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Benjamin Hoffman | 12,332.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00 | 0.00 | 3.00 | 12,229.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Emma Borhanian | 12,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 12,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Plotz | 11,960.00 | 0.00 | 0.00 | 0.00 | 11,960.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Joshua Fox | 11,934.00 | 0.00 | 0.00 | 0.00 | 360.00 | 2,040.00 | 1,310.00 | 105.00 | 490.00 | 1,529.00 | 6,100.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Louie Helm | 11,930.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 270.00 | 1,260.00 | 10,400.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Kenn Hamm | 11,472.00 | 0.00 | 0.00 | 0.00 | 11,472.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Stephanie Zolayvar | 11,247.00 | 0.00 | 0.00 | 0.00 | 0.00 | 11,247.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jean-Philippe Sugarbroad | 11,200.00 | 0.00 | 0.00 | 0.00 | 0.00 | 11,200.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Robert and Judith Babcock | 11,100.00 | 0.00 | 0.00 | 0.00 | 11,100.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Ryan Carey | 10,172.00 | 0.00 | 0.00 | 0.00 | 5,086.00 | 5,086.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Riley Goodside | 10,049.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 3,949.00 | 6,100.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Adam Weissman | 10,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 10,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Gil Elbaz | 10,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Alex Schell | 9,575.00 | 0.00 | 0.00 | 0.00 | 9,575.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Luke Titmus | 8,837.00 | 0.00 | 0.00 | 0.00 | 0.00 | 8,837.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Xerxes Dotiwalla | 8,700.00 | 0.00 | 0.00 | 0.00 | 1,350.00 | 7,350.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Daniel Nelson | 8,147.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 70.00 | 2,077.00 | 6,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Paul Rhodes | 8,025.00 | 0.00 | 0.00 | 0.00 | 1,024.00 | 869.00 | 61.00 | 122.00 | 430.00 | 5,519.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Laura and Chris Soares | 7,510.00 | 0.00 | 0.00 | 0.00 | 0.00 | 7,510.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Nhat Anh Phan | 7,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 7,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Paul Christiano | 7,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 7,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Alex Edelman | 6,932.00 | 0.00 | 0.00 | 0.00 | 1,800.00 | 5,132.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Nader Chehab | 6,786.00 | 0.00 | 0.00 | 0.00 | 6,786.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
James Douma | 6,430.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 550.00 | 0.00 | 5,880.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Cliff & Stephanie Hyra | 6,208.00 | 0.00 | 0.00 | 0.00 | 6,208.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Andrew Hay | 6,201.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 6,201.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Raymond Arnold | 5,920.00 | 0.00 | 0.00 | 0.00 | 500.00 | 3,420.00 | 2,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Tobias Dänzer | 5,734.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,734.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Bryan Dana | 5,599.00 | 0.00 | 0.00 | 0.00 | 70.00 | 5,529.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Phil Hazelden | 5,559.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,559.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Robert and Gery Ruddick | 5,500.00 | 0.00 | 0.00 | 0.00 | 5,500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jacob Falkovich | 5,415.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,065.00 | 300.00 | 50.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Frank Adamek | 5,250.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,250.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Giles Edkins | 5,145.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 145.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Daniel Weinand | 5,000.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Daniel Ziegler | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Jeff Bone | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Joshua Looks | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Kevin R. Fischer | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Thomas Jackson | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Tuxedage John Adams | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Robert Yaman | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 5,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Patrick Brinich-Langlois | 3,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 3,000.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
JP Addison | 500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Vipul Naik | 500.00 | 0.00 | 0.00 | 0.00 | 500.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Tim Bakker | 474.72 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 474.72 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Kyle Bogosian | 385.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 135.00 | 250.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Nick Brown | 199.28 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 119.57 | 79.71 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Mathieu Roy | 198.85 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 198.85 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Johannes Gätjen | 118.68 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 118.68 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Aaron Gertler | 100.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Peter Hurford | 90.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 90.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
William Grunow | 74.98 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 37.49 | 37.49 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Vegard Blindheim | 63.39 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 63.39 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Alexandre Zani | 50.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 50.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Henry Cooksley | 38.12 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 38.12 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Akhil Jalan | 31.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 31.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Pablo Stafforini | 25.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 25.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Dello-Iacovo | 20.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 20.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Michael Dickens | 20.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 20.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Gwern Branwen | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
Total | 48,071,776.52 | 19,594,683.50 | 9,006,750.00 | 3,302,500.00 | 2,145,164.00 | 6,754,495.00 | 1,316,133.14 | 689,164.88 | 1,607,877.00 | 669,931.00 | 2,421,078.00 | 10,000.00 | 267,000.00 | 162,000.00 | 125,000.00 |
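Since each donor's Total cell should equal the sum of its per-year cells, and the bottom Total row should equal the column sums, the table lends itself to a simple consistency check. Below is a hedged sketch, assuming the rows have already been parsed into donor names and lists of yearly amounts (the parsing itself is not shown); it is not part of the portal's actual code.

```python
def check_row(donor, total, yearly, tol=0.01):
    """Flag a donor row whose Total cell disagrees with the sum of its year cells."""
    if abs(total - sum(yearly)) > tol:
        print(f"{donor}: total {total:,.2f} != year sum {sum(yearly):,.2f}")

# Example using the Thiel Foundation row from the table above
# (year columns 2021 through 2007, in header order):
check_row(
    "Thiel Foundation",
    1_627_000.00,
    [0, 0, 0, 0, 0, 0, 250_000, 250_000, 27_000, 570_000, 0, 255_000, 150_000, 125_000],
)
```

When a row is consistent, the check prints nothing.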
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
(My understanding of) What Everyone in Technical Alignment is Doing and Why (GW, IR) | 2022-08-28 | Thomas Larsen Eli | LessWrong | Fund for Alignment Research | Aligned AI Alignment Research Center Anthropic Center for AI Safety Center for Human-Compatible AI Center on Long-Term Risk Conjecture DeepMind Encultured Future of Humanity Institute Machine Intelligence Research Institute OpenAI Ought Redwood Research | | Review of current state of cause area | AI safety | This post, cross-posted between LessWrong and the Alignment Forum, goes into detail on the authors' understanding of various research agendas and the organizations pursuing them. |
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Future Fund | Future of Humanity Institute Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to, which is where the organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he's looking for a research assistant to help with next year's post given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure. |
Our all-time largest donation, and major crypto support from Vitalik Buterin | 2021-05-13 | Colm Ó Riain | Machine Intelligence Research Institute | Anonymous Vitalik Buterin | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI announces two major donations: one (MIRI's largest donation to date) from an anonymous donor donating $15.6 million ($2.5 million per year from 2021 to 2024 and an additional $5.6 million in 2025), and 1050 ETH ($4,378,159) from Vitalik Buterin. |
2020 Updates and Strategy | 2020-12-21 | Malo Bourgon | Machine Intelligence Research Institute | | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI provides a general update and includes thoughts on strategy. On the strategy front, MIRI says that it is moving away from the strategy it announced in its 2017 post https://intelligence.org/2017/04/30/2017-updates-and-strategy/ that involved “seeking entirely new low-level foundations for optimization,” “endeavoring to figure out parts of cognition that can be very transparent as cognition,” and “experimenting with some specific alignment problems.” MIRI also provides updated thoughts on remote work during the COVID-19 pandemic and the possibility of relocating its office from Berkeley. |
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund | Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint. |
Our 2019 Fundraiser Review | 2020-02-13 | Colm Ó Riain | Machine Intelligence Research Institute | | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI gives an update on its 2019 fundraiser. It goes into several reasons for the total amount of money raised in the fundraiser ($601,120) being less than the amounts raised in 2017 and 2018. Reasons listed include: (1) lower value of cryptocurrency than in 2017, (2) nondisclosed-by-default policy making it harder for potential donors to evaluate research, (3) changes to US tax law in 2018 that may encourage donation bunching, (4) fewer counterfactual matching opportunities for donations, (5) possible donor perception of diminishing returns on marginal donations, (6) variation driven by fluctuation in amounts from larger donors, (7) former earning-to-give donors moving to direct work, (8) urgent needs for funds expressed by MIRI to donors in previous years, causing a front-loading of donations in those years. The post ends by saying that although the fundraiser raised less than expected, MIRI appreciates the donor support and that they will be able to pursue the majority of their growth plans. |
MIRI paid in to Epstein's network of social legitimacy | 2019-12-24 | | | Jeffrey Epstein | Machine Intelligence Research Institute | | Miscellaneous commentary | AI safety | The blog post discusses donations made by convicted sex offender Jeffrey Epstein to MIRI, and the ethics of MIRI accepting the money. It links to https://projects.propublica.org/nonprofits/display_990/582565917/2011_01_EO%2F58-2565917_990_200912 for proof of donation; it includes a screenshot of a tweet by Eliezer that no longer seems available. The post suggests a criterion for accepting "bad" money: if after accepting it MIRI can make sure that it confers no additional social legitimacy to the donor. |
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Larks | Effective Altruism Forum | Larks Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund | Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. |
Suggestions for Individual Donors from Open Philanthropy Staff - 2019 | 2019-12-18 | Holden Karnofsky | Open Philanthropy | Chloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel | National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought | | Donation suggestion list | Criminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safety | Continuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes "Organizations supported by our Committee for Effective Altruism Support", a list of organizations that are within the purview of the Committee for Effective Altruism Support. The section is approved by the committee and represents their views. |
MIRI’s 2019 Fundraiser | 2019-12-02 | Malo Bourgon | Machine Intelligence Research Institute | | Machine Intelligence Research Institute | | Donee donation case | AI safety | MIRI announces its 2019 fundraiser, with a target of $1 million for fundraising. The blog post describes MIRI's projected budget, and provides more details on MIRI's activities in 2019, including (1) workshops and scaling up, and (2) research and write-ups. Regarding research, the blog post reaffirms continuation of the nondisclosure-by-default policy announced in 2018 at https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ The post is link-posted to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/LCApwxbdX4njYzgdr/miri-s-2019-fundraiser (GW, IR) |
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR) | 2019-09-10 | Ryan Carey | Effective Altruism Forum | Founders Pledge Open Philanthropy | OpenAI Machine Intelligence Research Institute | | Broad donor strategy | AI safety|Global catastrophic risks|Scientific research|Politics | Ryan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low-mid prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals. |
New grants from the Open Philanthropy Project and BERI | 2019-04-01 | Rob Bensinger | Machine Intelligence Research Institute | Open Philanthropy Berkeley Existential Risk Initiative | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI announces two grants to it: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (of a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the 3-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/ The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/ |
Committee for Effective Altruism Support | 2019-02-27 | Open Philanthropy | Open Philanthropy | | Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute | | Broad donor strategy | Effective altruism|AI safety | The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community" including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose size was determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019 |
Our 2018 Fundraiser Review | 2019-02-11 | Colm Ó Riain | Machine Intelligence Research Institute | | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI gives an update on its 2018 fundraiser. Key topics discussed include four types of donation matching programs that MIRI benefited from: (1) WeTrust Spring's ETH-matching event, (2) Facebook's Giving Tuesday event with https://donations.fb.com/giving-tuesday/ linked to, (3) Double Up Drive challenge, (4) Corporate matching. |
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR) | 2019-01-06 | Avi Norowitz | Effective Altruism Forum | Avi Norowitz William Kiely | Against Malaria Foundation Malaria Consortium GiveWell Effective Altruism Funds Alliance to Feed the Earth in Disasters Effective Animal Advocacy Fund The Humane League The Good Food Institute Animal Charity Evaluators Machine Intelligence Research Institute Faunalytics Wild-Animal Suffering Research GiveDirectly Center for Applied Rationality Effective Altruism Foundation Cool Earth Schistosomiasis Control Initiative New Harvest Evidence Action Centre for Effective Altruism Animal Equality Compassion in World Farming USA Innovations for Poverty Action Global Catastrophic Risk Institute Future of Life Institute Animal Charity Evaluators Recommended Charity Fund Sightsavers The Life You Can Save One Step for Animals Helen Keller International 80,000 Hours Berkeley Existential Risk Initiative Vegan Outreach Encompass Iodine Global Network Otwarte Klatki Charity Science Mercy For Animals Coalition for Rainforest Nations Fistula Foundation Sentience Institute Better Eating International Forethought Foundation for Global Priorities Research Raising for Effective Giving Clean Air Task Force The END Fund | | Miscellaneous commentary | | The blog post describes an effort by a number of donors coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched till 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched. |
EA orgs are trying to fundraise ~$10m - $16m (GW, IR) | 2019-01-06 | Hauke Hillebrandt | Effective Altruism Forum | | Centre for Effective Altruism Effective Altruism Foundation Machine Intelligence Research Institute Forethought Foundation for Global Priorities Research Sentience Institute Alliance to Feed the Earth in Disasters Global Catastrophic Risk Institute Rethink Priorities EA Hotel 80,000 Hours Rethink Charity | | Miscellaneous commentary | | The blog post links to and discusses the spreadsheet https://docs.google.com/spreadsheets/d/10zU6gp_H_zuvlZ2Vri-epSK0_urbcmdS-5th3mXQGXM/edit which tabulates various organizations and their fundraising targets, along with quotes and links to fundraising posts. The blog post itself has three points, the last of which is that the EA community is relatively more funding-constrained again. |
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis | | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR) The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues." |
MIRI’s 2018 Fundraiser | 2018-11-26 | Malo Bourgon | Machine Intelligence Research Institute | Dan Smith Aaron Merchak Matt Ashton Stephen Chidwick | Machine Intelligence Research Institute | | Donee donation case | AI safety | MIRI announces its 2018 end-of-year fundraising, with Target 1 of $500,000 and Target 2 of $1,200,000. It provides an overview of its 2019 budget and plans to explain the values it has worked out for Target 1 and Target 2. The post also mentions a matching opportunity sponsored by professional poker players Dan Smith, Aaron Merchak, Matt Ashton, and Stephen Chidwick, in partnership with Raising for Effective Giving (REG), which provides matching for donations to MIRI and REG up to $20,000. The post is referenced by Effective Altruism Funds in their grant write-up for a $40,000 grant to MIRI, at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi |
My 2018 donations (GW, IR) | 2018-11-23 | Vipul Naik | Effective Altruism Forum | Vipul Naik | GiveWell top charities Machine Intelligence Research Institute Donor lottery | | Periodic donation list documentation | Global health and development|AI safety | The blog post describes an allocation of $2,000 to GiveWell for regranting to top charities, and $500 each to MIRI and the $500,000 donor lottery. The latter two donations are influenced by Issa Rice, who describes his reasoning at https://issarice.com/donation-history#section-3 Vipul Naik's post explains the reason for donating now rather than earlier or later, the reason for donating this amount, and the selection of recipients. The post is also cross-posted at https://vipulnaik.com/blog/my-2018-donations/ and https://github.com/vipulnaik/working-drafts/blob/master/eaf/my-2018-donations.md |
2018 Update: Our New Research Directions | 2018-11-22 | Nate Soares | Machine Intelligence Research Institute | | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI executive director Nate Soares explains the new research directions being followed by MIRI, and how they differ from the original Agent Foundations agenda. The post also talks about how MIRI is being cautious in terms of sharing technical details of its research, until there is greater internal clarity on what findings need to be developed further, and what findings should be shared with what group. The post ends with guidance for people interested in joining the MIRI team to further the technical agenda. The post is referenced by Effective Altruism Funds in their grant write-up for a $40,000 grant to MIRI, at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi The nondisclosure-by-default section of the post is also referenced by Larks in https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison#MIRI__The_Machine_Intelligence_Research_Institute (GW, IR) and also cited by him as one of the reasons he is not donating to MIRI this year (general considerations related to this are described at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison#Openness (GW, IR) in the same post). Issa Rice also references these concerns in his donation decision write-up for 2018 at https://issarice.com/donation-history#section-3 but nonetheless decides to allocate $500 to MIRI. |
Opportunities for individual donors in AI safety (GW, IR) | 2018-03-12 | Alex Flint | Effective Altruism Forum | | Machine Intelligence Research Institute Future of Humanity Institute | | Review of current state of cause area | AI safety | Alex Flint discusses the history of AI safety funding, and suggests some heuristics for individual donors based on what he has seen to be successful in the past. |
Fundraising success! | 2018-01-10 | Malo Bourgon | Machine Intelligence Research Institute | | Machine Intelligence Research Institute | | Donee periodic update | AI safety | MIRI announces the success of its fundraiser, providing information on its top donors, and thanking everybody who contributed. |
Where the ACE Staff Members Are Giving in 2017 and Why | 2017-12-26 | Allison Smith | Animal Charity Evaluators | Jon Bockman Allison Smith Toni Adleberg Sofia Davis-Fogel Kieran Greig Jamie Spurgeon Erika Alonso Eric Herboso Gina Stuessy | Animal Charity Evaluators The Good Food Institute Vegan Outreach A Well-Fed World Better Eating International Encompass Direct Action Everywhere Animal Charity Evaluators Recommended Charity Fund Against Malaria Foundation Animal Equality The Nonhuman Rights Project AnimaNaturalis Internacional The Humane League GiveDirectly Food Empowerment Project Mercy For Animals New Harvest StrongMinds Centre for Effective Altruism Effective Altruism Funds Machine Intelligence Research Institute Donor lottery Sentience Institute Wild-Animal Suffering Research | | Periodic donation list documentation | Animal welfare|AI safety|Global health and development|Effective altruism | Continuing an annual tradition started in 2016, Animal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2017. Donation amounts are not disclosed, likely by policy. |
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2017 | 2017-12-21 | Holden Karnofsky | Open Philanthropy | Jaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey | Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters | | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reform | Open Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups. |
2017 AI Safety Literature Review and Charity Comparison (GW, IR) | 2017-12-20 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute Future of Humanity Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk AI Impacts Center for Human-Compatible AI Center for Applied Rationality Future of Life Institute 80,000 Hours | | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts." |
I Vouch For MIRI | 2017-12-17 | Zvi Mowshowitz | Zvi Mowshowitz | | Machine Intelligence Research Institute | | Single donation documentation | AI safety | Mowshowitz explains why he made his $10,000 donation to MIRI, and makes the case for others to support MIRI. He believes that MIRI understands the hardness of the AI safety problem, is focused on building solutions for the long term, and has done humanity a great service through its work on functional decision theory. |
MIRI 2017 Fundraiser and Strategy Update (GW, IR) | 2017-12-15 | Malo Bourgon | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee donation case | AI safety | MIRI provides an update on its fundraiser and its strategy in a general-interest forum for people interested in effective altruism. They say the fundraiser is already going quite well, but believe they can still use marginal funds well to expand more. | ||
End-of-the-year matching challenge! | 2017-12-14 | Rob Bensinger | Machine Intelligence Research Institute | Christian Calderon Marius van Voorden | Machine Intelligence Research Institute | Donee donation case | AI safety | MIRI gives an update on how its fundraising efforts are going, noting that it has met its first fundraising target, listing two major donations (Christian Calderon: $367,574 and Marius van Voorden: $59K), and highlighting the 2017 charity drive where donations up to $1 million to a list of charities including MIRI will be matched. | |
AI: a Reason to Worry, and to Donate | 2017-12-10 | Jacob Falkovich | Jacob Falkovich | Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds | Single donation documentation | AI safety | Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund). | |
MIRI’s 2017 Fundraiser | 2017-12-01 | Malo Bourgon | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee donation case | AI safety | Document provides cumulative target amounts for 2017 fundraiser ($625,000 Target 1, $850,000 Target 2, $1,250,000 Target 3) along with what MIRI expects to accomplish at each target level. Funds raised from the Open Philanthropy Project and an anonymous cryptocurrency donor (see https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/ for more) are identified as reasons for the greater financial security and more long-term and ambitious planning. | ||
Claim: if you work in an AI alignment org funded by donations, you should not own much cryptocurrency, since much of your salary comes from people who do | 2017-11-18 | Daniel Filan | Machine Intelligence Research Institute | Miscellaneous commentary | AI safety | The post by Daniel Filan claims that organizations working in AI risk get a large share of their donations from cryptocurrency investors, so their fundraising success is tied to the success of cryptocurrency. For better diversification, therefore, people working at such organizations should not own cryptocurrency. The post has a number of comments from Malo Bourgon of the Machine Intelligence Research Institute, which was receiving a lot of money from cryptocurrency investors in the months surrounding the post date.
Superintelligence Risk Project: Conclusion | 2017-09-15 | Jeff Kaufman | Machine Intelligence Research Institute | Review of current state of cause area | AI safety | This is the concluding post (with links to all earlier posts) of a month-long investigation by Jeff Kaufman into AI risk. Kaufman investigates by reading the work of, and talking with, both people who work in AI risk reduction and people who work on machine learning and AI in industry and academia, but are not directly involved with safety. His conclusion is that there likely should continue to be some work on AI risk reduction, and this should be respected by people working on AI. He is not confident about how the current level and type of work on AI risk compares with the optimal level and type of such work | |||
A major grant from the Open Philanthropy Project | 2017-09-08 | Malo Bourgon | Machine Intelligence Research Institute | Open Philanthropy | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces that it has received a three-year grant at $1.25 million per year from the Open Philanthropy Project, and links to the announcement from Open Phil at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and notes "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding." | |
I’ve noticed that this misconception is still floating around | 2017-08-30 | Rob Bensinger | Machine Intelligence Research Institute | Reasoning supplement | AI safety | The post pushes back on a popular misconception that the reason to focus on AI risk is that it is low-probability but high-impact; in fact, MIRI researchers assign a medium-to-high probability to AI risk in the medium-term future.
My current thoughts on MIRI’s highly reliable agent design work (GW, IR) | 2017-07-07 | Daniel Dewey | Effective Altruism Forum | Open Philanthropy | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Post discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin. | |
Updates to the research team, and a major donation | 2017-07-04 | Malo Bourgon | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces a surprise $1.01 million donation from an Ethereum cryptocurrency investor (2017-05-30) as well as updates related to team and fundraising. | ||
Four quantiative models, aggregation, and final decision | 2017-05-20 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | 80,000 Hours Animal Charity Evaluators Machine Intelligence Research Institute StrongMinds | Single donation documentation | Effective altruism/career advice | The post describes how the Oxford Prioritisation Project compared its four finalists (80,000 Hours, Animal Charity Evaluators, Machine Intelligence Research Institute, and StrongMinds) by building quantitative models for each, including modeling of uncertainties. Based on these quantitative models, 80,000 Hours was chosen as the winner. Also posted to http://effective-altruism.com/ea/1ah/four_quantiative_models_aggregation_and_final/ for comments | |
A model of the Machine Intelligence Research Institute | 2017-05-20 | Sindy Li | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | The post describes a quantitative model of the Machine Intelligence Research Institute, available at https://www.getguesstimate.com/models/8789 on Guesstimate. Also posted to http://effective-altruism.com/ea/1ae/a_model_of_the_machine_intelligence_research/ for comments | |
2017 Updates and Strategy | 2017-04-30 | Rob Bensinger | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI provides updates on its progress as an organization and outlines its strategy and budget for the coming year. A key update is that recent developments in AI have led MIRI to slightly increase its estimate of the probability of AGI arriving before 2035. MIRI has also been in touch with researchers at FAIR, DeepMind, and OpenAI. | |
AI Safety: Is it worthwhile for us to look further into donating into AI research? | 2017-03-11 | Qays Langan-Dathi | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Review of current state of cause area | AI safety | The post concludes: "In conclusion my answer to my main point is, yes. There is a good chance that AI risk prevention is the most cost effective focus area for saving the most amount of lives with or without regarding future human lives." | |
Final decision: Version 0 | 2017-03-01 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Against Malaria Foundation Machine Intelligence Research Institute The Good Food Institute StrongMinds | Reasoning supplement | Version 0 of a decision process for what charity to grant 10,000 UK pounds to. Result was a tie between Machine Intelligence Research Institute and StrongMinds. See http://effective-altruism.com/ea/187/oxford_prioritisation_project_version_0/ for a cross-post with comments | |
Konstantin Sietzy: current view, StrongMinds | 2017-02-21 | Konstantin Sietzy | Oxford Prioritisation Project | Oxford Prioritisation Project | StrongMinds Machine Intelligence Research Institute | Evaluator review of donee | Mental health | Konstantin Sietzy explains why StrongMinds is the best charity in his view. Also lists Machine Intelligence Research Institute as the runner-up | |
Daniel May: current view, Machine Intelligence Research Institute | 2017-02-15 | Daniel May | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Daniel May evaluates the Machine Intelligence Research Institute and describes his reasons for considering it the best donation opportunity | |
Tom Sittler: current view, Machine Intelligence Research Institute | 2017-02-08 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute Future of Humanity Institute | Evaluator review of donee | AI safety | Tom Sittler explains why he considers the Machine Intelligence Research Institute the best donation opportunity. Cites http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/ http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ and mentions Michael Dickens's model as a potential reason to update | |
Changes in funding in the AI safety field | 2017-02-01 | Sebastian Farquhar | Centre for Effective Altruism | Machine Intelligence Research Institute Center for Human-Compatible AI Leverhulme Centre for the Future of Intelligence Future of Life Institute Future of Humanity Institute OpenAI MIT Media Lab | Review of current state of cause area | AI safety | The post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/ | ||
Belief status: off-the-cuff thoughts! | 2017-01-19 | Vipul Naik | Machine Intelligence Research Institute | Reasoning supplement | AI safety | The post argues that (lack of) academic endorsement of the work done by MIRI should not be an important factor in evaluating MIRI, offering three reasons. Commenters include Rob Bensinger, Research Communications Manager at MIRI. | |||
The effective altruism guide to donating this giving season | 2016-12-28 | Robert Wiblin | 80,000 Hours | Blue Ribbon Study Panel on Biodefense Cool Earth Alliance for Safety and Justice Cosecha Centre for Effective Altruism 80,000 Hours Animal Charity Evaluators Compassion in World Farming USA Against Malaria Foundation Schistosomiasis Control Initiative StrongMinds Ploughshares Fund Machine Intelligence Research Institute Future of Humanity Institute | Evaluator consolidated recommendation list | Biosecurity and pandemic preparedness|Global health and development|Animal welfare|AI risk|Global catastrophic risks|Effective altruism/movement growth | Robert Wiblin draws on a number of annual charity evaluations and reviews, as well as staff donation writeups, from sources such as GiveWell and Animal Charity Evaluators, to provide an "effective altruism guide" for 2016 giving season donations | |
Where the ACE Staff Members are Giving in 2016 and Why | 2016-12-23 | Leah Edgerton | Animal Charity Evaluators | Allison Smith Jacy Reese Toni Adleberg Gina Stuessy Kieran Greig Eric Herboso Erika Alonso | Animal Charity Evaluators Animal Equality Vegan Outreach ACTAsia Faunalytics Farm Animal Rights Movement Sentience Politics Direct Action Everywhere The Humane League The Good Food Institute Collectively Free Planned Parenthood Future of Life Institute Future of Humanity Institute GiveDirectly Machine Intelligence Research Institute The Humane Society of the United States Farm Sanctuary StrongMinds | Periodic donation list documentation | Animal welfare|AI safety|Global catastrophic risks | Animal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2016. Donation amounts are not disclosed, likely by policy | |
Suggestions for Individual Donors from Open Philanthropy Project Staff - 2016 | 2016-12-14 | Holden Karnofsky | Open Philanthropy | Jaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead | Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund | Donation suggestion list | Animal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policy | Open Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas. | |
2016 AI Risk Literature Review and Charity Comparison (GW, IR) | 2016-12-13 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute Future of Humanity Institute OpenAI Center for Human-Compatible AI Future of Life Institute Centre for the Study of Existential Risk Leverhulme Centre for the Future of Intelligence Global Catastrophic Risk Institute Global Priorities Project AI Impacts Xrisks Institute X-Risks Net Center for Applied Rationality 80,000 Hours Raising for Effective Giving | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute." | |
EAs write about where they give | 2016-12-09 | Julia Wise | Effective Altruism Forum | Blake Borgeson Eva Vivalt Ben Kuhn Alexander Gordon-Brown and Denise Melchin Elizabeth Van Nostrand | Machine Intelligence Research Institute Center for Applied Rationality AidGrade Charity Science: Health 80,000 Hours Centre for Effective Altruism Tostan | Periodic donation list documentation | Global health and development|AI risk | Julia Wise got submissions from multiple donors about their donation plans and put them together in a single post. The goal was to cover people outside of organizations that publish such posts for their employees | |
CEA Staff Donation Decisions 2016 | 2016-12-06 | Sam Deere | Centre for Effective Altruism | William MacAskill Michelle Hutchinson Tara MacAulay Alison Woodman Seb Farquhar Hauke Hillebrandt Marinella Capriati Sam Deere Max Dalton Larissa Hesketh-Rowe Michael Page Stefan Schubert Pablo Stafforini Amy Labenz | Centre for Effective Altruism 80,000 Hours Against Malaria Foundation Schistosomiasis Control Initiative Animal Charity Evaluators Charity Science Health New Incentives Project Healthy Children Deworm the World Initiative Machine Intelligence Research Institute StrongMinds Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Effective Altruism Foundation Sci-Hub Vote.org The Humane League Foundational Research Institute | Periodic donation list documentation | Centre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed. | ||
Why I'm donating to MIRI this year (GW, IR) | 2016-11-30 | Owen Cotton-Barratt | Owen Cotton-Barratt | Machine Intelligence Research Institute | Single donation documentation | AI safety | Cotton-Barratt's primary interest is in existential risk. He cites conflict of interest and other reasons for not donating to his own employer, the Centre for Effective Altruism. He notes disagreements with MIRI, citing http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#research but highlights the need for epistemic humility. | |
Crunch time!! The 2016 fundraiser for the AI safety group I work at, MIRI, is going a lot slower than expected | 2016-10-25 | Rob Bensinger | Machine Intelligence Research Institute | Donee donation case | AI safety | Rob Bensinger, Research Communications Director at MIRI, takes to his personal Facebook to ask people to chip in for the MIRI fundraiser, which is going slower than he and MIRI expected, and may not meet its target. The final comment by Bensinger notes that $582,316 out of the target of $750,000 was raised, and that about $260k of that was raised after his post, so he credits the final push for helping MIRI move closer to its fundraising goals. | |||
Ask MIRI Anything (AMA) (GW, IR) | 2016-10-11 | Rob Bensinger | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee AMA | AI safety | Rob Bensinger, the Research Communications Manager at MIRI, hosts an Ask Me Anything (AMA) on the Effective Altruism Forum during the October 2016 Fundraiser. | ||
MIRI’s 2016 Fundraiser | 2016-09-16 | Nate Soares | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee donation case | AI safety | MIRI announces its 2016 fundraiser; unlike previous years, when it conducted two fundraisers, it is running just one this time, in the fall. | |
Machine Intelligence Research Institute — General Support | 2016-09-06 | Open Philanthropy | Open Philanthropy | Open Philanthropy | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Open Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email. | |
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF) | 2016-09-06 | Open Philanthropy | Open Philanthropy | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Reviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06). | ||
Some Key Ways in Which I've Changed My Mind Over the Last Several Years | 2016-09-06 | Holden Karnofsky | Open Philanthropy | Machine Intelligence Research Institute Future of Humanity Institute | Reasoning supplement | AI safety | In this 16-page Google Doc, Holden Karnofsky, Executive Director of the Open Philanthropy Project, lists three issues he has changed his mind about: (1) AI safety (he considers it more important now), (2) effective altruism community (he takes it more seriously now), and (3) general properties of promising ideas and interventions (he considers feedback loops less necessary than he used to, and finding promising ideas through abstract reasoning more promising). The document is linked to and summarized in the blog post https://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about | ||
Here are the biggest things I got wrong in my attempts at effective altruism over the last ~3 years. | 2016-05-24 | Buck Shlegeris | Buck Shlegeris Open Philanthropy | Vegan Outreach Machine Intelligence Research Institute | Broad donor strategy | Global health|Animal welfare|AI safety | Buck Shlegeris, reflecting on his past three years as an effective altruist, identifies three mistakes he made: (1) "I thought leafleting about factory farming was more effective than GiveWell top charities. [...] I probably made this mistake because of emotional bias. I was frustrated by people who advocated for global poverty charities for dumb reasons. [...] I thought that if they really had that belief, they should either save their money just in case we found a great intervention for animals in the future, or donate it to the people who were trying to find effective animal right interventions. I think that this latter argument was correct, but I didn't make it exclusively." (2) "In 2014 and early 2015, I didn't pay as much attention to OpenPhil as I should have. [...] Being wrong about OpenPhil's values is forgivable, but what was really dumb is that I didn't realize how incredibly important it was to my life plan that I understand OpenPhil's values." (3) "I wish I'd thought seriously about donating to MIRI sooner. [...] Like my error #2, this is an example of failing to realize that when there's an unknown which is extremely important to my plans but I'm very unsure about it and haven't really seriously thought about it, I should probably try to learn more about it." | |
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity | 2016-05-06 | Holden Karnofsky | Open Philanthropy | Open Philanthropy | Machine Intelligence Research Institute Future of Humanity Institute | Review of current state of cause area | AI safety | In this blog post, which the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority. | |
Concerning MIRI’s Place in the EA Movement | 2016-02-17 | Ozy Brennan | Thing of Things | Machine Intelligence Research Institute | Miscellaneous commentary | AI safety | The post does not directly evaluate MIRI, but highlights the importance of object-level evaluation of the quality and value of the work done by MIRI. Also thanks MIRI, LessWrong, and Yudkowsky for contributions to the growth of the effective altruist movement. | ||
Where should you donate to have the most impact during giving season 2015? | 2015-12-24 | Robert Wiblin | 80,000 Hours | Against Malaria Foundation Giving What We Can GiveWell AidGrade Effective Altruism Outreach Animal Charity Evaluators Machine Intelligence Research Institute Raising for Effective Giving Center for Applied Rationality Johns Hopkins Center for Health Security Ploughshares Fund Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Charity Science Deworm the World Initiative Schistosomiasis Control Initiative GiveDirectly | Evaluator consolidated recommendation list | Global health and development|Effective altruism/movement growth|Epistemic institutions|Biosecurity and pandemic preparedness|AI risk|Global catastrophic risks | Robert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate | ||
My Cause Selection: Michael Dickens | 2015-09-15 | Michael Dickens | Effective Altruism Forum | Michael Dickens | Machine Intelligence Research Institute Future of Humanity Institute Centre for the Study of Existential Risk Future of Life Institute Open Philanthropy Animal Charity Evaluators Animal Ethics Foundational Research Institute Giving What We Can Charity Science Raising for Effective Giving | Single donation documentation | Animal welfare|AI risk|Effective altruism | Dickens explains his giving choice for 2015. After some consideration, he narrows the choice to three organizations: MIRI, ACE, and REG, and finally chooses REG due to its weighted donation multiplier | |
MIRI Fundraiser: Why now matters (GW, IR) | 2015-07-24 | Nate Soares | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee donation case | AI safety | Cross-posted at LessWrong and on the MIRI blog at https://intelligence.org/2015/07/20/why-now-matters/ -- this post occurs just two months after Soares takes over as MIRI Executive Director. It is a followup to https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/ | ||
MIRI’s 2015 Summer Fundraiser! | 2015-07-17 | Nate Soares | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee donation case | AI safety | MIRI announces its summer fundraiser and links to a number of documents to help donors evaluate it. This is the first fundraiser under new Executive Director Nate Soares, just a couple of months after he assumed office. | |
Tumblr on MIRI | 2014-10-07 | Scott Alexander | Slate Star Codex | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | The blog post is structured as a response to recent criticism of MIRI on Tumblr, but is mainly a guardedly positive assessment of MIRI. In particular, it highlights the important role played by MIRI in elevating the profile of AI risk, citing attention from Stephen Hawking, Elon Musk, Gary Drescher, Max Tegmark, Stuart Russell, and Peter Thiel. | ||
How does MIRI Know it Has a Medium Probability of Success? (GW, IR) | 2013-08-01 | Peter Hurford | LessWrong | Machine Intelligence Research Institute | Miscellaneous commentary | AI safety | In this bleg, Peter Hurford asks why MIRI thinks it has a medium probability of success at achieving the goal of friendly AI (and avoiding unfriendly AI). The post attracts multiple comments from Eliezer Yudkowsky, Carl Shulman, Wei Dai, and others. | ||
Earning to Give vs. Altruistic Career Choice Revisited (GW, IR) | 2013-06-01 | Jonah Sinick | Jonah Sinick William MacAskill Eliezer Yudkowsky | Against Malaria Foundation Machine Intelligence Research Institute GiveWell Maximum Impact Fund | Miscellaneous commentary | Global health|AI safety | Jonah Sinick gives a number of arguments against the view that earning to give is likely to be the most socially valuable path. For contrast, he considers direct work in nonprofits and in other high-impact careers. He talks about the value of direct feedback and the significant difference between what a skilled person and a less skilled person can accomplish with direct work. Sinick draws extensively on his experience working at GiveWell where he evaluated the cost-effectiveness of charities. | |
Evaluating the feasibility of SI's plan (GW, IR) | 2013-01-10 | Joshua Fox | LessWrong | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | This blog post, co-authored with Kaj Sotala, gives a simplified description of the plan being followed by the Singularity Institute (SI), the former name of the Machine Intelligence Research Institute (MIRI). It is critical of SI for focusing on its "perfect" friendly AI, and suggests that more focus be given to improving the safety of existing systems in development, such as OpenCog. In a reply comment, Eliezer Yudkowsky notes that the "heuristic safety" approach the blog post suggests focusing on is difficult, that people overestimate the feasibility of heuristic safety ideas, and that trying for a safety approach that seems highly likely to succeed is the best way to guard against safety approaches that are doomed to fail. There is further discussion in the comments from Wei Dai, Gwern, and a cryptography researcher. | |
Thoughts on the Singularity Institute (SI) (GW, IR) | 2012-05-11 | Holden Karnofsky | LessWrong | Open Philanthropy | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Post discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute. This evaluation would be the starting point for the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI, but Karnofsky and the Open Philanthropy Project would later update in favor of AI safety in general and MIRI in particular; this evolution is described in https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit | |
SIAI - An Examination (GW, IR) | 2011-05-02 | Brandon Reinhart | LessWrong | Brandon Reinhart | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Post discussing initial investigation into the Singularity Institute for Artificial Intelligence (SIAI), the former name of Machine Intelligence Research Institute (MIRI), with the intent of deciding whether to donate. Final takeaway is that it was a worthy donation target, though no specific donation is announced in the post. See http://lesswrong.com/r/discussion/lw/5fo/siai_fundraising/ for an earlier draft of the post (along with a number of comments that were incorporated into the official version). | |
Singularity Institute for Artificial Intelligence | 2011-04-30 | Holden Karnofsky | GiveWell | Open Philanthropy | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | In this email thread on the GiveWell mailing list, Holden Karnofsky gives his views on the Singularity Institute for Artificial Intelligence (SIAI), the former name for the Machine Intelligence Research Institute (MIRI). The reply emails include a discussion of how much weight to give to, and what to learn from, the support for MIRI by Peter Thiel, a wealthy early MIRI backer. In the final email in the thread, Holden Karnofsky includes an audio recording with Jaan Tallinn, another wealthy early MIRI backer. This analysis likely influences the review https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) published by Karnofsky the following year, as well as the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI. | |
The Singularity Institute's Scary Idea (and Why I Don't Buy It) | 2010-10-29 | Ben Goertzel | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Ben Goertzel, who previously worked as Director of Research at MIRI (then called the Singularity Institute for Artificial Intelligence (SIAI)) articulates its "Scary Idea" and explains why he does not believe in it. His articulation of the Scary Idea: "If I or anybody else actively trying to build advanced AGI succeeds, we're highly likely to cause an involuntary end to the human race." | |||
Funding safe AGI | 2009-08-03 | Shane Legg | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Shane Legg, who had previously received a $10,000 grant from the Singularity Institute for Artificial Intelligence (SIAI) and would go on to co-found DeepMind, talks about SIAI and AI safety. He says that, probably, nobody knows how to deal with the problem of constructing a safe AGI, but SIAI is, in relative terms, the best. However, he provides some suggestions on how it could encourage and monitor AI development more closely rather than trying to build everything on its own. SIAI would later change its name to the Machine Intelligence Research Institute (MIRI). | |||
'Technology Is at the Center' Entrepreneur and philanthropist Peter Thiel on liberty and scientific progress | 2008-05-01 | Ronald Bailey | Reason Magazine | Peter Thiel | Machine Intelligence Research Institute Methuselah Foundation | Broad donor strategy | AI safety|Scientific research/longevity research | In an interview with Ronald Bailey, the science correspondent of Reason Magazine, Peter Thiel talks about his political ideology of libertarianism as well as his philanthropic activities. He talks about two areas in which he is donating heavily: accelerating a safe technological singularity (through donations to the Singularity Institute) and anti-aging research (through donations to the Methuselah Foundation).
Graph of top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations