Machine Intelligence Research Institute donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United States
Facebook page | MachineIntelligenceResearchInstitute
Website | https://intelligence.org
Donate page | https://intelligence.org/donate/
Donors list page | https://intelligence.org/topdonors/
Transparency and financials page | https://intelligence.org/transparency/
Donation case page | http://effective-altruism.com/ea/12n/miri_update_and_fundraising_case/
Twitter username | MIRIBerkeley
Wikipedia page | https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
Charity Navigator page | https://www.charitynavigator.org/index.cfm?bay=search.profile&ein=582565917
Guidestar page | https://www.guidestar.org/profile/58-2565917
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute
Org Watch page | https://orgwatch.issarice.com/?organization=Machine+Intelligence+Research+Institute
Key people | Eliezer Yudkowsky, Nate Soares, Luke Muehlhauser
Launch date | 2000

This entity is also a donor.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 467 | 6,000 | 38,381 | 0 | 120 | 600 | 1,810 | 4,239 | 6,000 | 9,575 | 12,000 | 20,000 | 46,133 | 3,750,000
AI safety | 464 | 6,000 | 38,629 | 0 | 145 | 650 | 2,000 | 4,280 | 6,000 | 9,600 | 12,000 | 20,000 | 46,133 | 3,750,000
FIXME | 1 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20
 | 2 | 20 | 23 | 20 | 20 | 20 | 20 | 20 | 20 | 25 | 25 | 25 | 25 | 25
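For reference, summary rows like those in the table above can be derived from the raw per-donation amounts. A minimal sketch, assuming a nearest-rank percentile convention (the site's exact percentile method is not documented here, so this is one plausible choice, not the portal's actual code):

```python
from statistics import mean, median

def donation_stats(amounts):
    """Summary statistics in the style of the donee donation statistics table.

    Returns count, median, mean, minimum, 10th-90th percentiles, and maximum.
    """
    xs = sorted(amounts)
    n = len(xs)

    def pct(p):
        # Nearest-rank percentile: index floor(p/100 * n), clamped to range.
        k = max(0, min(n - 1, int(p / 100 * n)))
        return xs[k]

    return {
        "count": n,
        "median": median(xs),
        "mean": round(mean(xs)),
        "minimum": xs[0],
        **{f"{p}th percentile": pct(p) for p in range(10, 100, 10)},
        "maximum": xs[-1],
    }
```

For example, `donation_stats([0, 120, 600, 6000, 3750000])` reports a count of 5, minimum 0, median 600, and maximum 3,750,000.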

Donation amounts by donor and year for donee Machine Intelligence Research Institute

Donor Total 2019 2018 2017 2016 2015 2014 2013 2012 2011 2009 2008
Open Philanthropy Project (filter this donee) 6,512,500.00 2,112,500.00 150,000.00 3,750,000.00 500,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Thiel Foundation (filter this donee) 1,627,000.00 0.00 0.00 0.00 0.00 250,000.00 250,000.00 27,000.00 1,100,000.00 0.00 0.00 0.00
Vitalik Buterin (filter this donee) 802,136.00 0.00 802,136.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Berkeley Existential Risk Initiative (filter this donee) 800,000.00 600,000.00 0.00 200,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jed McCaleb (filter this donee) 631,137.00 0.00 0.00 0.00 0.00 0.00 631,137.00 0.00 0.00 0.00 0.00 0.00
Jaan Tallinn (filter this donee) 604,500.00 0.00 0.00 60,500.00 80,000.00 0.00 100,000.00 100,000.00 264,000.00 0.00 0.00 0.00
Effective Altruism Funds (filter this donee) 578,994.00 50,000.00 528,994.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Loren Merritt (filter this donee) 525,000.00 0.00 0.00 25,000.00 115,000.00 0.00 0.00 245,000.00 130,000.00 10,000.00 0.00 0.00
Edwin Evans (filter this donee) 475,080.00 0.00 35,550.00 60,000.00 40,000.00 0.00 50,030.00 52,500.00 237,000.00 0.00 0.00 0.00
Richard Schwall (filter this donee) 419,495.00 0.00 65,189.00 46,698.00 30,000.00 0.00 106,608.00 10,000.00 161,000.00 0.00 0.00 0.00
Christian Calderon (filter this donee) 367,574.00 0.00 0.00 367,574.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Blake Borgeson (filter this donee) 350,470.00 0.00 10.00 0.00 300,000.00 50,460.00 0.00 0.00 0.00 0.00 0.00 0.00
Investling Group (filter this donee) 309,000.00 0.00 0.00 0.00 0.00 0.00 65,000.00 24,000.00 220,000.00 0.00 0.00 0.00
Future of Life Institute (filter this donee) 250,000.00 0.00 0.00 0.00 0.00 250,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Raising for Effective Giving (filter this donee) 204,167.00 0.00 0.00 204,167.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jonathan Weissman (filter this donee) 171,290.00 0.00 20,000.00 40,000.00 20,000.00 0.00 20,010.00 30,280.00 41,000.00 0.00 0.00 0.00
Brian Cartmell (filter this donee) 146,700.00 0.00 0.00 0.00 0.00 0.00 0.00 700.00 146,000.00 0.00 0.00 0.00
Scott Dickey (filter this donee) 130,520.00 0.00 3,000.00 33,000.00 11,000.00 10,000.00 11,020.00 14,500.00 48,000.00 0.00 0.00 0.00
Eric Rogstad (filter this donee) 120,236.00 0.00 19,000.00 101,236.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Ben Hoskin (filter this donee) 94,209.00 0.00 31,481.00 30,000.00 32,728.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Ethan Dickinson (filter this donee) 93,408.00 0.00 0.00 25,400.00 20,518.00 12,000.00 35,490.00 0.00 0.00 0.00 0.00 0.00
Peter Scott (filter this donee) 80,000.00 0.00 30,000.00 50,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Sebastian Hagen (filter this donee) 74,587.00 0.00 22,384.00 10,851.00 12,085.00 12,113.00 17,154.00 0.00 0.00 0.00 0.00 0.00
Buck Shlegeris (filter this donee) 72,679.00 0.00 5.00 72,674.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Marius van Voorden (filter this donee) 71,461.00 0.00 0.00 59,251.00 0.00 0.00 7,210.00 5,000.00 0.00 0.00 0.00 0.00
Leif K-Brooks (filter this donee) 67,216.00 0.00 0.00 67,216.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Alexei Andreev (filter this donee) 64,605.00 0.00 0.00 0.00 525.00 0.00 23,280.00 16,000.00 24,800.00 0.00 0.00 0.00
Chris Haley (filter this donee) 60,250.00 0.00 0.00 0.00 0.00 0.00 0.00 250.00 60,000.00 0.00 0.00 0.00
Guy Srinivasan (filter this donee) 58,310.00 0.00 0.00 0.00 0.00 0.00 6,910.00 8,400.00 43,000.00 0.00 0.00 0.00
Gordon Irlam (filter this donee) 55,000.00 0.00 0.00 20,000.00 10,000.00 10,000.00 10,000.00 5,000.00 0.00 0.00 0.00 0.00
Henrik Jonsson (filter this donee) 54,525.00 0.00 0.00 0.00 0.00 1.00 36,975.00 0.00 17,549.00 0.00 0.00 0.00
Mikko Rauhala (filter this donee) 53,745.00 0.00 7,200.00 13,200.00 6,000.00 0.00 170.00 2,575.00 24,600.00 0.00 0.00 0.00
Michael Blume (filter this donee) 51,755.00 0.00 1,400.00 5,425.00 15,000.00 4,000.00 5,140.00 9,990.00 10,800.00 0.00 0.00 0.00
Mihaly Barasz (filter this donee) 51,623.00 0.00 0.00 0.00 0.00 0.00 24,073.00 12,550.00 15,000.00 0.00 0.00 0.00
Alan Chang (filter this donee) 51,050.00 0.00 33,050.00 18,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Luke Stebbing (filter this donee) 50,500.00 0.00 3,000.00 18,050.00 10,500.00 9,650.00 9,300.00 0.00 0.00 0.00 0.00 0.00
Misha Gurevich (filter this donee) 50,370.00 0.00 1,500.00 9,000.00 6,000.00 4,500.00 5,520.00 7,550.00 16,300.00 0.00 0.00 0.00
Brandon Reinhart (filter this donee) 50,050.00 0.00 0.00 25,050.00 15,000.00 0.00 0.00 0.00 10,000.00 0.00 0.00 0.00
Scott Worley (filter this donee) 50,005.00 0.00 9,486.00 21,687.00 5,488.00 13,344.00 0.00 0.00 0.00 0.00 0.00 0.00
Marcello Herreshoff (filter this donee) 49,110.00 0.00 0.00 12,000.00 12,000.00 12,560.00 12,550.00 0.00 0.00 0.00 0.00 0.00
Jason Joachim (filter this donee) 44,100.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 44,000.00 0.00 0.00 0.00
Michael Cohen (filter this donee) 39,359.00 0.00 0.00 9,977.00 29,382.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Scott Siskind (filter this donee) 38,500.00 0.00 0.00 29,000.00 2,037.00 30.00 7,433.00 0.00 0.00 0.00 0.00 0.00
Robin Powell (filter this donee) 37,560.00 0.00 1,000.00 11,200.00 200.00 0.00 1,810.00 2,350.00 21,000.00 0.00 0.00 0.00
Austin Peña (filter this donee) 37,517.00 0.00 26,554.00 10,963.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Max Kesin (filter this donee) 37,000.00 0.00 10,000.00 20,420.00 6,580.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Sadowsky (filter this donee) 34,000.00 0.00 0.00 0.00 9,000.00 25,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Gustav Simonsson (filter this donee) 33,285.00 0.00 33,285.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nathaniel Soares (filter this donee) 33,230.00 0.00 0.00 10.00 0.00 0.00 33,220.00 0.00 0.00 0.00 0.00 0.00
Kelsey Piper (filter this donee) 30,730.00 0.00 30,730.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Quinn Maurmann (filter this donee) 30,575.00 0.00 30,575.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jesse Liptrap (filter this donee) 29,590.00 0.00 0.00 0.00 0.00 0.00 10,490.00 0.00 19,100.00 0.00 0.00 0.00
Jeremy Schlatter (filter this donee) 28,711.00 0.00 150.00 150.00 4,000.00 1.00 310.00 15,000.00 9,100.00 0.00 0.00 0.00
Tomer Kagan (filter this donee) 26,500.00 0.00 0.00 0.00 0.00 0.00 0.00 16,500.00 10,000.00 0.00 0.00 0.00
Brian Tomasik (filter this donee) 26,010.00 0.00 0.00 0.00 0.00 2,000.00 10.00 0.00 0.00 0.00 12,000.00 12,000.00
Patrick LaVictoire (filter this donee) 25,885.00 0.00 5,000.00 20,885.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Paul Crowley (filter this donee) 25,850.00 0.00 13,400.00 12,450.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Zvi Mowshowitz (filter this donee) 25,010.00 0.00 10,000.00 10,000.00 0.00 5,010.00 0.00 0.00 0.00 0.00 0.00 0.00
Martine Rothblatt (filter this donee) 25,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 25,000.00 0.00 0.00 0.00
Gary Basin (filter this donee) 25,000.00 0.00 0.00 0.00 0.00 0.00 25,000.00 0.00 0.00 0.00 0.00 0.00
Mick Porter (filter this donee) 24,810.00 0.00 1,200.00 9,200.00 2,400.00 4,000.00 8,010.00 0.00 0.00 0.00 0.00 0.00
Liron Shapira (filter this donee) 24,750.00 0.00 0.00 0.00 0.00 0.00 15,100.00 0.00 9,650.00 0.00 0.00 0.00
Johan Edström (filter this donee) 23,680.00 0.00 0.00 5,700.00 300.00 0.00 2,250.00 7,730.00 7,700.00 0.00 0.00 0.00
Ethan Sterling (filter this donee) 23,418.00 0.00 23,418.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Elliot Glaysher (filter this donee) 23,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23,000.00 0.00 0.00 0.00
Mike Anderson (filter this donee) 23,000.00 0.00 23,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Janos Kramar (filter this donee) 22,811.00 0.00 0.00 200.00 541.00 800.00 4,870.00 5,600.00 10,800.00 0.00 0.00 0.00
Rolf Nelson (filter this donee) 21,810.00 0.00 100.00 0.00 0.00 0.00 1,710.00 0.00 20,000.00 0.00 0.00 0.00
Bruno Parga (filter this donee) 21,743.00 0.00 10,382.00 11,361.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Kevin Fischer (filter this donee) 21,230.00 0.00 1,000.00 2,000.00 0.00 3,780.00 4,270.00 4,280.00 5,900.00 0.00 0.00 0.00
Pasha Kamyshev (filter this donee) 20,200.00 0.00 20,200.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Victoria Krakovna (filter this donee) 19,867.00 0.00 19,867.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nicolas Tarleton (filter this donee) 19,559.00 0.00 0.00 2,000.00 0.00 0.00 200.00 9,659.00 7,700.00 0.00 0.00 0.00
Simon Sáfár (filter this donee) 19,162.00 0.00 3,131.00 16,031.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jai Dhyani (filter this donee) 19,156.00 0.00 7,751.00 11,405.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Sergejs Silko (filter this donee) 18,200.00 0.00 18,200.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
John Salvatier (filter this donee) 17,718.00 0.00 0.00 0.00 9,110.00 2,000.00 10.00 0.00 6,598.00 0.00 0.00 0.00
Stanley Pecavar (filter this donee) 17,450.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17,450.00 0.00 0.00 0.00
Benjamin Goldhaber (filter this donee) 17,250.00 0.00 0.00 17,250.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Leopold Bauernfeind (filter this donee) 16,900.00 0.00 0.00 16,900.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Wolf Tivy (filter this donee) 16,758.00 0.00 0.00 0.00 0.00 0.00 16,758.00 0.00 0.00 0.00 0.00 0.00
The Maurice Amado Foundation (filter this donee) 16,000.00 0.00 0.00 16,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Sergio Tarrero (filter this donee) 15,220.00 0.00 0.00 0.00 0.00 0.00 620.00 0.00 14,600.00 0.00 0.00 0.00
Stephan T. Lavavej (filter this donee) 15,000.00 0.00 0.00 0.00 0.00 0.00 15,000.00 0.00 0.00 0.00 0.00 0.00
Donald King (filter this donee) 15,000.00 0.00 0.00 0.00 0.00 0.00 9,000.00 0.00 6,000.00 0.00 0.00 0.00
Quixey (filter this donee) 15,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 15,000.00 0.00 0.00 0.00
Tran Bao Trung (filter this donee) 14,379.00 0.00 0.00 14,379.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Aleksei Riikonen (filter this donee) 14,372.00 0.00 0.00 0.00 0.00 130.00 0.00 242.00 14,000.00 0.00 0.00 0.00
William Morgan (filter this donee) 13,571.00 0.00 0.00 1,000.00 400.00 0.00 0.00 5,171.00 7,000.00 0.00 0.00 0.00
James Mazur (filter this donee) 13,127.00 0.00 675.00 12,452.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michal Pokorný (filter this donee) 13,000.00 0.00 1,000.00 12,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Eric Lin (filter this donee) 12,870.00 0.00 12,870.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Ames (filter this donee) 12,500.00 0.00 0.00 0.00 0.00 0.00 0.00 12,500.00 0.00 0.00 0.00 0.00
Michael Roy Ames (filter this donee) 12,500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12,500.00 0.00 0.00 0.00
Sam Eisenstat (filter this donee) 12,356.00 0.00 0.00 12,356.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Benjamin Hoffman (filter this donee) 12,332.00 0.00 0.00 100.00 0.00 3.00 12,229.00 0.00 0.00 0.00 0.00 0.00
Emma Borhanian (filter this donee) 12,000.00 0.00 0.00 12,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Plotz (filter this donee) 11,960.00 0.00 11,960.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Joshua Fox (filter this donee) 11,934.00 0.00 360.00 2,040.00 1,310.00 105.00 490.00 1,529.00 6,100.00 0.00 0.00 0.00
Louie Helm (filter this donee) 11,930.00 0.00 0.00 0.00 0.00 0.00 270.00 1,260.00 10,400.00 0.00 0.00 0.00
Kenn Hamm (filter this donee) 11,472.00 0.00 11,472.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Stephanie Zolayvar (filter this donee) 11,247.00 0.00 0.00 11,247.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jean-Philippe Sugarbroad (filter this donee) 11,200.00 0.00 0.00 11,200.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Robert and Judith Babcock (filter this donee) 11,100.00 0.00 11,100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Ryan Carey (filter this donee) 10,172.00 0.00 5,086.00 5,086.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Riley Goodside (filter this donee) 10,049.00 0.00 0.00 0.00 0.00 0.00 0.00 3,949.00 6,100.00 0.00 0.00 0.00
Gil Elbaz (filter this donee) 10,000.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 5,000.00 0.00 0.00 0.00
Adam Weissman (filter this donee) 10,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 10,000.00 0.00 0.00 0.00
Alex Schell (filter this donee) 9,575.00 0.00 9,575.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Luke Titmus (filter this donee) 8,837.00 0.00 0.00 8,837.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Xerxes Dotiwalla (filter this donee) 8,700.00 0.00 1,350.00 7,350.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Daniel Nelson (filter this donee) 8,147.00 0.00 0.00 0.00 0.00 0.00 70.00 2,077.00 6,000.00 0.00 0.00 0.00
Paul Rhodes (filter this donee) 8,025.00 0.00 1,024.00 869.00 61.00 122.00 430.00 5,519.00 0.00 0.00 0.00 0.00
Laura and Chris Soares (filter this donee) 7,510.00 0.00 0.00 7,510.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nhat Anh Phan (filter this donee) 7,000.00 0.00 0.00 7,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Paul Christiano (filter this donee) 7,000.00 0.00 0.00 0.00 0.00 7,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Alex Edelman (filter this donee) 6,932.00 0.00 1,800.00 5,132.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nader Chehab (filter this donee) 6,786.00 0.00 6,786.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
James Douma (filter this donee) 6,430.00 0.00 0.00 0.00 0.00 0.00 550.00 0.00 5,880.00 0.00 0.00 0.00
Cliff & Stephanie Hyra (filter this donee) 6,208.00 0.00 6,208.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Andrew Hay (filter this donee) 6,201.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 6,201.00 0.00 0.00 0.00
Raymond Arnold (filter this donee) 5,920.00 0.00 500.00 3,420.00 2,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Tobias Dänzer (filter this donee) 5,734.00 0.00 0.00 5,734.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Bryan Dana (filter this donee) 5,599.00 0.00 70.00 5,529.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Phil Hazelden (filter this donee) 5,559.00 0.00 0.00 5,559.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Robert and Gery Ruddick (filter this donee) 5,500.00 0.00 5,500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jacob Falkovich (filter this donee) 5,415.00 0.00 0.00 5,065.00 300.00 50.00 0.00 0.00 0.00 0.00 0.00 0.00
Frank Adamek (filter this donee) 5,250.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,250.00 0.00 0.00 0.00
Giles Edkins (filter this donee) 5,145.00 0.00 0.00 0.00 0.00 0.00 0.00 145.00 5,000.00 0.00 0.00 0.00
Daniel Ziegler (filter this donee) 5,000.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Thomas Jackson (filter this donee) 5,000.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00
Daniel Weinand (filter this donee) 5,000.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Tuxedage John Adams (filter this donee) 5,000.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00
Jeff Bone (filter this donee) 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00
Robert Yaman (filter this donee) 5,000.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Kevin R. Fischer (filter this donee) 5,000.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Joshua Looks (filter this donee) 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00
Patrick Brinich-Langlois (filter this donee) 3,000.00 0.00 0.00 3,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Vipul Naik (filter this donee) 500.00 0.00 500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
JP Addison (filter this donee) 500.00 0.00 0.00 0.00 500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Tim Bakker (filter this donee) 474.72 0.00 0.00 0.00 474.72 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Kyle Bogosian (filter this donee) 385.00 0.00 0.00 0.00 135.00 250.00 0.00 0.00 0.00 0.00 0.00 0.00
Nick Brown (filter this donee) 199.28 0.00 0.00 0.00 119.57 79.71 0.00 0.00 0.00 0.00 0.00 0.00
Mathieu Roy (filter this donee) 198.85 0.00 0.00 0.00 198.85 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Johannes Gätjen (filter this donee) 118.68 0.00 0.00 0.00 0.00 118.68 0.00 0.00 0.00 0.00 0.00 0.00
Aaron Gertler (filter this donee) 100.00 0.00 0.00 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00 0.00
Peter Hurford (filter this donee) 90.00 0.00 0.00 0.00 0.00 0.00 90.00 0.00 0.00 0.00 0.00 0.00
William Grunow (filter this donee) 74.98 0.00 0.00 0.00 37.49 37.49 0.00 0.00 0.00 0.00 0.00 0.00
Vegard Blindheim (filter this donee) 63.39 0.00 0.00 0.00 63.39 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Alexandre Zani (filter this donee) 50.00 0.00 0.00 0.00 50.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Henry Cooksley (filter this donee) 38.12 0.00 0.00 0.00 38.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Akhil Jalan (filter this donee) 31.00 0.00 0.00 0.00 31.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Pablo Stafforini (filter this donee) 25.00 0.00 0.00 0.00 0.00 0.00 0.00 25.00 0.00 0.00 0.00 0.00
Michael Dickens (filter this donee) 20.00 0.00 0.00 0.00 20.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Dello-Iacovo (filter this donee) 20.00 0.00 0.00 0.00 0.00 20.00 0.00 0.00 0.00 0.00 0.00 0.00
Gwern Branwen (filter this donee) 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Total 17,923,794.02 2,762,500.00 2,145,164.00 5,747,946.00 1,316,133.14 689,164.88 1,607,877.00 669,931.00 2,951,078.00 10,000.00 12,000.00 12,000.00

Full list of documents in reverse chronological order (53 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
New grants from the Open Philanthropy Project and BERI | 2019-04-01 | Rob Bensinger | Machine Intelligence Research Institute | Open Philanthropy Project, Berkeley Existential Risk Initiative | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces two grants made to it: a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the three-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/ The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/
Committee for Effective Altruism Support | 2019-02-27 | Open Philanthropy Project | Open Philanthropy Project | | Centre for Effective Altruism, Berkeley Existential Risk Initiative, Center for Applied Rationality, Machine Intelligence Research Institute, Future of Humanity Institute | Broad donor strategy | Effective altruism, AI safety | The document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of and deeply embedded in the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit "votes" on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose size was determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
EA orgs are trying to fundraise ~$10m - $16m | 2019-01-06 | Hauke Hillebrandt | Effective Altruism Forum | | Centre for Effective Altruism, Effective Altruism Foundation, Machine Intelligence Research Institute, Forethought Foundation for Global Priorities Research, Sentience Institute, Alliance to Feed the Earth in Disasters, Global Catastrophic Risk Institute, Rethink Priorities, EA Hotel, 80,000 Hours, Rethink Charity | Miscellaneous commentary | | The blog post links to and discusses the spreadsheet https://docs.google.com/spreadsheets/d/10zU6gp_H_zuvlZ2Vri-epSK0_urbcmdS-5th3mXQGXM/edit which tabulates various organizations and their fundraising targets, along with quotes and links to fundraising posts. The blog post itself makes three points, the last of which is that the EA community is relatively more funding-constrained again
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective | 2019-01-06 | Avi Norowitz | Effective Altruism Forum | Avi Norowitz, William Kiely | Against Malaria Foundation, Malaria Consortium, GiveWell, Effective Altruism Funds, Alliance to Feed the Earth in Disasters, Effective Animal Advocacy Fund, The Humane League, The Good Food Institute, Animal Charity Evaluators, Machine Intelligence Research Institute, Faunalytics, Wild-Animal Suffering Research, GiveDirectly, Center for Applied Rationality, Effective Altruism Foundation, Cool Earth, Schistosomiasis Control Initiative, New Harvest, Evidence Action, Centre for Effective Altruism, Animal Equality, Compassion in World Farming USA, Innovations for Poverty Action, Global Catastrophic Risk Institute, Future of Life Institute, Animal Charity Evaluators Recommended Charity Fund, Sightsavers, The Life You Can Save, One Step for Animals, Helen Keller International, 80,000 Hours, Berkeley Existential Risk Initiative, Vegan Outreach, Encompass, Iodine Global Network, Otwarte Klatki, Charity Science, Mercy For Animals, Coalition for Rainforest Nations, Fistula Foundation, Sentience Institute, Better Eating International, Forethought Foundation for Global Priorities Research, Raising for Effective Giving, Clean Air Task Force, The END Fund | Miscellaneous commentary | | The blog post describes an effort by a number of donors, coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched until 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched
2018 AI Alignment Literature Review and Charity Comparison | 2018-12-17 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin | Machine Intelligence Research Institute, Future of Humanity Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, Global Catastrophic Risk Institute, Global Priorities Institute, Australian National University, Berkeley Existential Risk Initiative, Ought, AI Impacts, OpenAI, Effective Altruism Foundation, Foundational Research Institute, Median Group, Convergence Analysis | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
MIRI’s 2018 Fundraiser | 2018-11-26 | Malo Bourgon | Machine Intelligence Research Institute | Dan Smith, Aaron Merchak, Matt Ashton, Stephen Chidwick | Machine Intelligence Research Institute | Donee donation case | AI safety | MIRI announces its 2018 end-of-year fundraiser, with Target 1 of $500,000 and Target 2 of $1,200,000. It provides an overview of its 2019 budget and plans to explain the values it has worked out for Target 1 and Target 2. The post also mentions a matching opportunity sponsored by professional poker players Dan Smith, Aaron Merchak, Matt Ashton, and Stephen Chidwick, in partnership with Raising for Effective Giving (REG), which provides matching for donations to MIRI and REG up to $20,000. The post is referenced by Effective Altruism Funds in their grant write-up for a $40,000 grant to MIRI, at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi
My 2018 donations | 2018-11-23 | Vipul Naik | Effective Altruism Forum | Vipul Naik | GiveWell top charities, Machine Intelligence Research Institute, Donor lottery | Periodic donation list documentation | Global health and development, AI safety | The blog post describes an allocation of $2,000 to GiveWell for regranting to top charities, and $500 each to MIRI and the $500,000 donor lottery. The latter two donations are influenced by Issa Rice, who describes his reasoning at https://issarice.com/donation-history#section-3 Vipul Naik's post explains the reason for donating now rather than earlier or later, the reason for donating this amount, and the selection of recipients. The post is also cross-posted at https://vipulnaik.com/blog/my-2018-donations/ and https://github.com/vipulnaik/working-drafts/blob/master/eaf/my-2018-donations.md
2018 Update: Our New Research Directions | 2018-11-22 | Nate Soares | Machine Intelligence Research Institute | | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI executive director Nate Soares explains the new research directions being followed by MIRI, and how they differ from the original Agent Foundations agenda. The post also talks about how MIRI is being cautious in terms of sharing technical details of its research, until there is greater internal clarity on what findings need to be developed further, and what findings should be shared with what group. The post ends with guidance for people interested in joining the MIRI team to further the technical agenda. The post is referenced by Effective Altruism Funds in their grant write-up for a $40,000 grant to MIRI, at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi The nondisclosure-by-default section of the post is also referenced by Ben Hoskin in https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison#MIRI__The_Machine_Intelligence_Research_Institute and also cited by him as one of the reasons he is not donating to MIRI this year (general considerations related to this are described at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison#Openness in the same post). Issa Rice also references these concerns in his donation decision write-up for 2018 at https://issarice.com/donation-history#section-3 but nonetheless decides to allocate $500 to MIRI
Opportunities for individual donors in AI safety | 2018-03-12 | Alex Flint | Effective Altruism Forum | | Machine Intelligence Research Institute, Future of Humanity Institute | Review of current state of cause area | AI safety | Alex Flint discusses the history of AI safety funding, and suggests some heuristics for individual donors based on what he has seen to be successful in the past.
Fundraising success!2018-01-10Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces the success of its fundraiser, providing information on its top donors, and thanking everybody who contributed
Where the ACE Staff Members Are Giving in 2017 and Why2017-12-26Allison Smith Animal Charity EvaluatorsJon Bockman Allison Smith Toni Adleberg Sofia Davis-Fogel Kieran Greig Jamie Spurgeon Erika Alonso Eric Herboso Gina Stuessy Animal Charity Evaluators The Good Food Institute Vegan Outreach A Well-Fed World Better Eating International Encompass Direct Action Everywhere Animal Charity Evaluators Recommended Charity Fund Against Malaria Foundation Animal Equality The Nonhuman Rights Project AnimaNaturalis Internacional The Humane League GiveDirectly Food Empowerment Project Mercy For Animals New Harvest StrongMinds Centre for Effective Altruism Effective Altruism Funds Machine Intelligence Research Institute Donor lottery Sentience Institute Wild-Animal Suffering Research Periodic donation list documentationAnimal welfare|AI safety|Global health and development|Effective altruismAnimal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2017. Donation amounts are not disclosed, likely by policy
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open Philanthropy ProjectJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups
2017 AI Safety Literature Review and Charity Comparison2017-12-20Ben Hoskin Effective Altruism ForumBen Hoskin Machine Intelligence Research Institute Future of Humanity Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk AI Impacts Center for Human-Compatible AI Center for Applied Rationality Future of Life Institute 80,000 Hours Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
I Vouch For MIRI2017-12-17Zvi Mowshowitz Zvi Mowshowitz Machine Intelligence Research Institute Single donation documentationAI safetyMowshowitz explains why he made his $10,000 donation to MIRI, and makes the case for others to support MIRI. He believes that MIRI understands the hardness of the AI safety problem, is focused on building solutions for the long term, and has done humanity a great service through its work on functional decision theory
MIRI 2017 Fundraiser and Strategy Update2017-12-15Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyMIRI provides an update on its fundraiser and its strategy in a general-interest forum for people interested in effective altruism. They say the fundraiser is already going quite well, but believe they can still use marginal funds well to expand more
End-of-the-year matching challenge!2017-12-14Rob Bensinger Machine Intelligence Research InstituteChristian Calderon Marius van Voorden Machine Intelligence Research Institute Donee donation caseAI safetyMIRI gives an update on how its fundraising efforts are going, noting that it has met its first fundraising target, listing two major donations (Christian Calderon: $367,574 and Marius van Voorden: $59K), and highlighting the 2017 charity drive where donations up to $1 million to a list of charities including MIRI will be matched
AI: a Reason to Worry, and to Donate2017-12-10Jacob Falkovich Jacob Falkovich Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds Single donation documentationAI safetyFalkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund)
MIRI’s 2017 Fundraiser2017-12-01Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyDocument provides cumulative target amounts for 2017 fundraiser ($625,000 Target 1, $850,000 Target 2, $1,250,000 Target 3) along with what MIRI expects to accomplish at each target level. Funds raised from the Open Philanthropy Project and an anonymous cryptocurrency donor (see https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/ for more) are identified as reasons for the greater financial security and more long-term and ambitious planning
Claim: if you work in an AI alignment org funded by donations, you should not own much cryptocurrency, since much of your salary comes from people who do2017-11-18Daniel Filan Machine Intelligence Research Institute Miscellaneous commentaryAI safetyThe post by Daniel Filan claims that organizations working in AI risk get a large share of their donations from cryptocurrency investors, so their fundraising success is tied to the success of cryptocurrency. For better diversification, therefore, people working at such organizations should not own cryptocurrency. The post has a number of comments from Malo Bourgon of the Machine Intelligence Research Institute, which is receiving a lot of money from cryptocurrency investors in the months surrounding the post date
Superintelligence Risk Project: Conclusion2017-09-15Jeff Kaufman Machine Intelligence Research Institute Review of current state of cause areaAI safetyThis is the concluding post (with links to all earlier posts) of a month-long investigation by Jeff Kaufman into AI risk. Kaufman investigates by reading the work of, and talking with, both people who work in AI risk reduction and people who work on machine learning and AI in industry and academia, but are not directly involved with safety. His conclusion is that there likely should continue to be some work on AI risk reduction, and this should be respected by people working on AI. He is not confident about how the current level and type of work on AI risk compares with the optimal level and type of such work
I’ve noticed that this misconception is still floating around2017-08-30Rob Bensinger Facebook Machine Intelligence Research Institute Reasoning supplementAI safetyPost notes an alleged popular misconception that the reason to focus on AI risk is that it is low-probability but high-impact, but MIRI researchers assign a medium-to-high probability of AI risk in the medium-term future
My current thoughts on MIRI’s highly reliable agent design work2017-07-07Daniel Dewey Effective Altruism ForumOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin
Updates to the research team, and a major donation2017-07-04Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces a surprise $1.01 million donation from an Ethereum cryptocurrency investor (2017-05-30) as well as updates related to team and fundraising
Four quantitative models, aggregation, and final decision2017-05-20Tom Sittler Oxford Prioritisation ProjectOxford Prioritisation Project 80,000 Hours Animal Charity Evaluators Machine Intelligence Research Institute StrongMinds Single donation documentationEffective altruism/career adviceThe post describes how the Oxford Prioritisation Project compared its four finalists (80,000 Hours, Animal Charity Evaluators, Machine Intelligence Research Institute, and StrongMinds) by building quantitative models for each, including modeling of uncertainties. Based on these quantitative models, 80,000 Hours was chosen as the winner. Also posted to http://effective-altruism.com/ea/1ah/four_quantiative_models_aggregation_and_final/ for comments
A model of the Machine Intelligence Research Institute2017-05-20Sindy Li Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyThe post describes a quantitative model of the Machine Intelligence Research Institute, available at https://www.getguesstimate.com/models/8789 on Guesstimate. Also posted to http://effective-altruism.com/ea/1ae/a_model_of_the_machine_intelligence_research/ for comments
2017 Updates and Strategy2017-04-30Rob Bensinger Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI provides updates on its progress as an organization and outlines its strategy and budget for the coming year. Key update is that recent developments in AI have made them increase the probability of AGI before 2035 by a little bit. MIRI has also been in touch with researchers at FAIR, DeepMind, and OpenAI
AI Safety: Is it worthwhile for us to look further into donating into AI research?2017-03-11Qays Langan-Dathi Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Review of current state of cause areaAI safetyThe post concludes: "In conclusion my answer to my main point is, yes. There is a good chance that AI risk prevention is the most cost effective focus area for saving the most amount of lives with or without regarding future human lives."
Final decision: Version 02017-03-01Tom Sittler Oxford Prioritisation ProjectOxford Prioritisation Project Against Malaria Foundation Machine Intelligence Research Institute The Good Food Institute StrongMinds Reasoning supplementVersion 0 of a decision process for what charity to grant 10,000 UK pounds to. Result was a tie between Machine Intelligence Research Institute and StrongMinds. See http://effective-altruism.com/ea/187/oxford_prioritisation_project_version_0/ for a cross-post with comments
Konstantin Sietzy: current view, StrongMinds2017-02-21Konstantin Sietzy Oxford Prioritisation ProjectOxford Prioritisation Project StrongMinds Machine Intelligence Research Institute Evaluator review of doneeMental healthKonstantin Sietzy explains why StrongMinds is the best charity in his view. Also lists Machine Intelligence Research Institute as the runner-up
Daniel May: current view, Machine Intelligence Research Institute2017-02-15Daniel May Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyDaniel May evaluates the Machine Intelligence Research Institute and describes his reasons for considering it the best donation opportunity
Tom Sittler: current view, Machine Intelligence Research Institute2017-02-08Tom Sittler Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Future of Humanity Institute Evaluator review of doneeAI safetyTom Sittler explains why he considers the Machine Intelligence Research Institute the best donation opportunity. Cites http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/ http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ and mentions Michael Dickens' model as a potential reason to update
Changes in funding in the AI safety field2017-02-01Sebastian Farquhar Centre for Effective Altruism Machine Intelligence Research Institute Center for Human-Compatible AI Leverhulme Centre for the Future of Intelligence Future of Life Institute Future of Humanity Institute OpenAI MIT Media Lab Review of current state of cause areaAI safetyThe post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/
Belief status: off-the-cuff thoughts!2017-01-19Vipul Naik Facebook Machine Intelligence Research Institute Reasoning supplementAI safetyThe post argues that (lack of) academic endorsement of the work done by MIRI should not be an important factor in evaluating MIRI, offering three reasons. Commenters include Rob Bensinger, Research Communications Manager at MIRI
The effective altruism guide to donating this giving season2016-12-28Robert Wiblin 80,000 Hours Blue Ribbon Study Panel on Biodefense Cool Earth Alliance for Safety and Justice Cosecha Centre for Effective Altruism 80,000 Hours Animal Charity Evaluators Compassion in World Farming USA Against Malaria Foundation Schistosomiasis Control Initiative StrongMinds Ploughshares Fund Machine Intelligence Research Institute Future of Humanity Institute Evaluator consolidated recommendation listBiosecurity and pandemic preparedness,Global health and development,Animal welfare,AI risk,Global catastrophic risks,Effective altruism/movement growthRobert Wiblin draws on a number of annual charity evaluations and reviews, as well as staff donation writeups, from sources such as GiveWell and Animal Charity Evaluators, to provide an "effective altruism guide" for 2016 Giving Season donation
Where the ACE Staff Members are Giving in 2016 and Why2016-12-23Leah Edgerton Animal Charity EvaluatorsAllison Smith Jacy Reese Toni Adleberg Gina Stuessy Kieran Greig Eric Herboso Erika Alonso Animal Charity Evaluators Animal Equality Vegan Outreach Act Asia Faunalytics Farm Animal Rights Movement Sentience Politics Direct Action Everywhere The Humane League The Good Food Institute Collectively Free Planned Parenthood Future of Life Institute Future of Humanity Institute GiveDirectly Machine Intelligence Research Institute The Humane Society of the United States Farm Sanctuary StrongMinds Periodic donation list documentationAnimal welfare|AI safety|Global catastrophic risksAnimal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2016. Donation amounts are not disclosed, likely by policy
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20162016-12-14Holden Karnofsky Open Philanthropy ProjectJaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policyOpen Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas
2016 AI Risk Literature Review and Charity Comparison2016-12-13Ben Hoskin Effective Altruism ForumBen Hoskin Machine Intelligence Research Institute Future of Humanity Institute OpenAI Center for Human-Compatible AI Future of Life Institute Centre for the Study of Existential Risk Leverhulme Centre for the Future of Intelligence Global Catastrophic Risk Institute Global Priorities Project AI Impacts Xrisks Institute X-Risks Net Center for Applied Rationality 80,000 Hours Raising for Effective Giving Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
EAs write about where they give2016-12-09Julia Wise Effective Altruism ForumBlake Borgeson Eva Vivalt Ben Kuhn Alexander Gordon-Brown and Denise Melchin Elizabeth Van Nostrand Machine Intelligence Research Institute Center for Applied Rationality AidGrade Charity Science: Health 80,000 Hours Centre for Effective Altruism Tostan Periodic donation list documentationGlobal health and development, AI riskJulia Wise got submissions from multiple donors about their donation plans and put them together in a single post. The goal was to cover people outside of organizations that publish such posts for their employees
CEA Staff Donation Decisions 20162016-12-06Sam Deere Centre for Effective AltruismWilliam MacAskill Michelle Hutchinson Tara MacAulay Alison Woodman Seb Farquhar Hauke Hillebrandt Marinella Capriati Sam Deere Max Dalton Larissa Hesketh-Rowe Michael Page Stefan Schubert Pablo Stafforini Amy Labenz Centre for Effective Altruism 80,000 Hours Against Malaria Foundation Schistosomiasis Control Initiative Animal Charity Evaluators Charity Science Health New Incentives Project Healthy Children Deworm the World Initiative Machine Intelligence Research Institute StrongMinds Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Effective Altruism Foundation Sci-Hub Vote.org The Humane League Foundational Research Institute Periodic donation list documentationCentre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.
Why I'm donating to MIRI this year2016-11-30Owen Cotton-Barratt Owen Cotton-Barratt Machine Intelligence Research Institute Single donation documentationAI safetyPrimary interest is in existential risk. Cited CoI and other reasons for not donating to own employer, Centre for Effective Altruism. Notes disagreements with MIRI, citing http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#research but highlights need for epistemic humility
Crunch time!! The 2016 fundraiser for the AI safety group I work at, MIRI, is going a lot slower than expected2016-10-25Rob Bensinger Facebook Machine Intelligence Research Institute Donee donation caseAI safetyRob Bensinger, Research Communications Director at MIRI, takes to his personal Facebook to ask people to chip in for the MIRI fundraiser, which is going slower than he and MIRI expected, and may not meet its target. The final comment by Bensinger notes that $582,316 out of the target of $750,000 was raised, and that about $260k of that was raised after his post, so he credits the final push for helping MIRI move closer to its fundraising goals
Ask MIRI Anything (AMA)2016-10-11Rob Bensinger Machine Intelligence Research Institute Machine Intelligence Research Institute Donee AMAAI safetyRob Bensinger, the Research Communications Manager at MIRI, hosts an Ask Me Anything (AMA) on the Effective Altruism Forum during the October 2016 Fundraiser
MIRI’s 2016 Fundraiser2016-09-16Nate Soares Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyMIRI announces its 2016 fundraiser. Unlike previous years, when it conducted two fundraisers, this year it is conducting just one, in the fall
Machine Intelligence Research Institute — General Support2016-09-06Open Philanthropy Project Open Philanthropy ProjectOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyOpen Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF)2016-09-06Open Philanthropy ProjectOpen Philanthropy Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyReviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06)
Concerning MIRI’s Place in the EA Movement2016-02-17Ozy Brennan Thing of Things Machine Intelligence Research Institute Miscellaneous commentaryAI safetyThe post does not directly evaluate MIRI, but highlights the importance of object-level evaluation of the quality and value of the work done by MIRI. Also thanks MIRI, LessWrong, and Yudkowsky for contributions to the growth of the effective altruist movement
Where should you donate to have the most impact during giving season 2015?2015-12-24Robert Wiblin 80,000 Hours Against Malaria Foundation Giving What We Can GiveWell AidGrade Effective Altruism Outreach Animal Charity Evaluators Machine Intelligence Research Institute Raising for Effective Giving Center for Applied Rationality Johns Hopkins Center for Health Security Ploughshares Fund Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Charity Science Deworm the World Initiative Schistosomiasis Control Initiative GiveDirectly Evaluator consolidated recommendation listGlobal health and development,Effective altruism/movement growth,Rationality improvement,Biosecurity and pandemic preparedness,AI risk,Global catastrophic risksRobert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate
My Cause Selection: Michael Dickens2015-09-15Michael Dickens Effective Altruism ForumMichael Dickens Machine Intelligence Research Institute Future of Humanity Institute Centre for the Study of Existential Risk Future of Life Institute Open Philanthropy Project Animal Charity Evaluators Animal Ethics Foundational Research Institute Giving What We Can Charity Science Raising for Effective Giving Single donation documentationAnimal welfare,AI risk,Effective altruismExplanation by Dickens of giving choice for 2015. After some consideration, narrows choice to three orgs: MIRI, ACE, and REG. Finally chooses REG due to weighted donation multiplier
MIRI Fundraiser: Why now matters2015-07-24Nate Soares Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyCross-posted at LessWrong and on the MIRI blog at https://intelligence.org/2015/07/20/why-now-matters/ -- this post occurs just two months after Soares takes over as MIRI Executive Director. It is a followup to https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/
MIRI’s 2015 Summer Fundraiser!2015-07-17Nate Soares Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyMIRI announces its summer fundraiser and links to a number of documents to help donors evaluate it. This is the first fundraiser under new Executive Director Nate Soares, just a couple months after he assumed office
Tumblr on MIRI2014-10-07Scott Alexander Slate Star Codex Machine Intelligence Research Institute Evaluator review of doneeAI safetyThe blog post is structured as a response to recent criticism of MIRI on Tumblr, but is mainly a guardedly positive assessment of MIRI. In particular, it highlights the important role played by MIRI in elevating the profile of AI risk, citing attention from Stephen Hawking, Elon Musk, Gary Drescher, Max Tegmark, Stuart Russell, and Peter Thiel.
Thoughts on the Singularity Institute (SI)2012-05-11Holden Karnofsky LessWrong Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute
SIAI - An Examination2011-05-02Brandon Reinhart LessWrongBrandon Reinhart Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discussing initial investigation into the Singularity Institute for Artificial Intelligence (SIAI), the former name of Machine Intelligence Research Institute (MIRI), with the intent of deciding whether to donate. Final takeaway is that it was a worthy donation target, though no specific donation is announced in the post. See http://lesswrong.com/r/discussion/lw/5fo/siai_fundraising/ for an earlier draft of the post (along with a number of comments that were incorporated into the official version)

Full list of donations in reverse chronological order (467 donations)

DonorAmount (current USD)Amount rank (out of 467)Donation dateCause areaURLInfluencerNotes
Effective Altruism Funds50,000.00422019-04-07AI safetyhttps://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvlOliver Habryka Alex Zhu Matt Wage Helen Toner Matt Fallshaw Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Grant investigation and influencer Oliver Habryka believes that MIRI is making real progress in its approach of "creating a fundamental piece of theory that helps humanity to understand a wide range of powerful phenomena." He notes that MIRI started work on the alignment problem long before it became cool, which gives him more confidence that they will do the right thing and that even their seemingly weird actions may be justified in ways that are not yet obvious. He also thinks that both the research team and ops staff are quite competent

Donor reason for donating that amount (rather than a bigger or smaller amount): Habryka offers the following reasons for giving a grant of just $50,000, which is small relative to the grantee budget: (1) MIRI is in a solid position funding-wise, and marginal use of money may be lower-impact. (2) There is a case for investing in helping grow a larger and more diverse set of organizations, as opposed to putting money in a few stable and well-funded organizations.
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Oliver Habryka writes: "I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions . Despite his positive assessment, Habryka recommends a relatively small grant because MIRI is already relatively well-funded and not heavily bottlenecked on funding; he plans to think more about the marginal value of further funding before the next grant round.
Berkeley Existential Risk Initiative600,000.0062019-02-26AI safetyhttp://web.archive.org/web/20190623203105/http://existence.org/grants/ -- This grant is also discussed by the Machine Intelligence Research Institute (the grant recipient) at https://intelligence.org/2017/11/08/major-grant-open-phil/ along with a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 from the Open Philanthropy Project.
Open Philanthropy Project2,112,500.0022019-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019Claire Zabel Committee for Effective Altruism Support Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.

Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by the Committee for Effective Altruism Support, made at the same time and therefore likely drawing from the same pool of funds, are to the Center for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The amount of $2,112,500 is split across two years, which works out to ~$1.06 million per year. https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of the three-year $1.25 million/year support announced in October 2017, and that the resulting total of $2.31 million represents Open Phil's full intended funding for MIRI for 2019; however, the ~$1.06 million for 2020 is a lower bound, and Open Phil may grant more for 2020 later.

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end. (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements since the original grant, so a suitable funding amount may now be larger. (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round.
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant

Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-02.
Vipul Naik500.003802018-12-22AI safetyhttps://forum.effectivealtruism.org/posts/dznyZNkAQMNq6HtXf/my-2018-donationsIssa Rice Donation decided on by Issa Rice. The blog post explaining the donation says: "For each of the years 2017 and 2018, I had given Issa the option of assigning $500 of my money to charitable causes of his choosing (with no strict requirement that these be recognized as charities). In 2017, Issa deferred the use of the money, so he had $1,000 to allocate. Issa ultimately decided to allocate 50% of the $1,000 (i.e., $500) to the $500,000 EA Donor Lottery, and another 50% to the Machine Intelligence Research Institute (MIRI).". Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Effective Altruism Funds40,000.00522018-11-29AI safetyhttps://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSiAlex Zhu Helen Toner Matt Fallshaw Matt Wage Oliver Habryka Grant made from the Long-Term Future Fund. The donor believes that the new research directions outlined by the donee at https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ are promising, and the donee's fundraising post suggests it could productively absorb additional funding. Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Effective Altruism Funds488,994.0082018-08-14AI safetyhttps://app.effectivealtruism.org/funds/far-future/payouts/6g4f7iae5Ok6K6YOaAiyK0Nick Beckstead Grant made from the Long-Term Future Fund. Beckstead recommended that the grantee spend the money to save time and increase productivity of employees (for instance, by subsidizing childcare or electronics). Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Open Philanthropy Project150,000.00202018-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-programClaire Zabel Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."

Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-28.
Robin Powell600.003742018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Ethan Sterling23,418.00792018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Ryan Carey5,086.002542018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Gustav Simonsson33,285.00572018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Scott Dickey2,000.003112018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Scott Worley9,486.001882018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Jai Dhyani6,100.002222018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
James Mazur475.003842018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Sebastian Hagen6,000.002252018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Sergejs Silko18,200.001002018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Simon Sáfár3,131.002902018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Jeremy Schlatter150.004162018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Victoria Krakovna2,801.003032018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Jonathan Weissman20,000.00892018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Blake Borgeson10.004562018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Joshua Fox240.004082018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Kelsey Piper30,730.00612018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Kenn Hamm11,472.001452018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Luke Stebbing3,000.002922018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Zvi Mowshowitz10,000.001662018AI safetyhttps://intelligence.org/topcontributors/--
Max Kesin10,000.001662018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Michael Blume1,100.003532018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Michael Plotz4,000.002822018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Michal Pokorný1,000.003562018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Mick Porter800.003672018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Mike Anderson23,000.00812018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Mikko Rauhala7,200.002092018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Alan Chang17,000.001072018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Alex Edelman1,800.003282018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Misha Gurevich1,500.003362018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Nader Chehab6,786.002162018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Pasha Kamyshev20,200.00862018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Paul Crowley7,400.002062018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Bruno Parga10,382.001642018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Bryan Dana70.004342018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Buck Shlegeris5.004622018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Paul Rhodes1,017.003552018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Peter Scott30,000.00632018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Cliff & Stephanie Hyra1,000.003562018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Quinn Maurmann3,000.002922018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Daniel Weinand5,000.002592018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Raymond Arnold250.004032018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Richard Schwall59,639.00352018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Edwin Evans35,550.00552018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Robert and Gery Ruddick5,500.002452018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Eric Lin1,850.003262018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Eric Rogstad19,000.00972018AI safetyhttps://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/--
Robin Powell400.003882018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Rolf Nelson100.004242018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Scott Dickey1,000.003562018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Jai Dhyani1,651.003312018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
James Mazur200.004092018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Sebastian Hagen16,384.001122018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Ben Hoskin31,481.00602018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/-- See post http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/ assessing research. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts.".
Victoria Krakovna17,066.001062018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Vitalik Buterin802,136.0042018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Xerxes Dotiwalla1,350.003432018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Joshua Fox120.004212018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Kevin Fischer1,000.003562018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Michael Blume300.003992018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Michael Plotz7,960.002002018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Mick Porter400.003882018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Alan Chang16,050.001132018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Alex Schell9,575.001872018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Austin Peña26,554.00712018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Patrick LaVictoire5,000.002592018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Paul Crowley6,000.002252018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Paul Rhodes7.004612018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Cliff & Stephanie Hyra5,208.002502018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Quinn Maurmann27,575.00692018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Raymond Arnold250.004032018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Richard Schwall5,550.002422018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Eric Lin11,020.001502018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Robert and Judith Babcock11,100.001492018AI safetyhttps://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/--
Berkeley Existential Risk Initiative100,000.00252017-12-28AI safetyhttps://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/-- See blog announcement at http://existence.org/2018/01/11/activity-update-december-2017.html.
Robin Powell400.003882017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Ethan Dickinson17,400.001042017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Ryan Carey5,086.002542017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Scott Dickey8,000.001992017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Scott Siskind10,000.001662017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Jacob Falkovich5,065.002562017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/-- $350 subtracted because it is recorded in EA Survey data.
Jai Dhyani11,405.001462017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Scott Worley5,500.002452017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
James Mazur825.003662017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Simon Sáfár3,131.002902017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Stephanie Zolayvar11,247.001482017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Jeremy Schlatter150.004162017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Tobias Dänzer5,734.002382017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Tran Bao Trung1,240.003462017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Xerxes Dotiwalla7,350.002072017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Jonathan Weissman20,000.00892017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Joshua Fox360.003942017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Kevin R. Fischer5,000.002592017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Laura and Chris Soares7,510.002032017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Leopold Bauernfeind16,900.001092017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Luke Stebbing7,500.002042017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Luke Titmus8,837.001952017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Marius van Voorden59,251.00362017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Zvi Mowshowitz10,000.001662017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/-- The document https://thezvi.wordpress.com/2017/12/17/i-vouch-for-miri/ explains that he believes that MIRI understands the hardness of the AI safety problem, is focused on building solutions for the long term, and has done humanity a great service through its work on functional decision theory.
Max Kesin10,000.001662017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Mick Porter1,200.003472017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Mikko Rauhala7,200.002092017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Alex Edelman5,132.002532017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Misha Gurevich1,500.003362017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Austin Peña1,034.003542017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Paul Crowley12,450.001342017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Bruno Parga11,361.001472017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Bryan Dana66.004362017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Buck Shlegeris72,674.00312017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Paul Rhodes22.004512017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Christian Calderon367,574.0092017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Raising for Effective Giving29,140.00682017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Raymond Arnold3,420.002882017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/-- $2,000 subtracted because it is recorded in EA Survey data.
Richard Schwall46,698.00462017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Edwin Evans30,000.00632017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Eric Rogstad55,103.00382017AI safetyhttps://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/--
Patrick Brinich-Langlois3,000.002922017-12-10AI safetyhttps://www.patbl.com/misc/other/donations/--
Tran Bao Trung4,239.002812017AI safetyhttps://web.archive.org/web/20171003083300/https://intelligence.org/topcontributors/--
Loren Merritt25,000.00722017-10AI safetyhttp://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/-- Total amount donated by Loren Merritt to MIRI as of 2017-12-23 is $525,000. The amount listed as of October 2017 (see http://web.archive.org/web/20171003083300/https://intelligence.org/topcontributors/) was $500,000, so the extra $25,000 was donated between those months.
Open Philanthropy Project3,750,000.0012017-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017Nick Beckstead Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI around the end of the one-year timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design suggests that work on the review had been going on well before the grant renewal date

Intended use of funds (category): Organizational general expenses

Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."

Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach."

Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36

Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."

Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/. Open Phil's statement that, due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
Ethan Dickinson3,000.002922017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Sam Eisenstat12,356.001352017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Scott Dickey3,000.002922017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
James Mazur1,617.003322017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Simon Sáfár1,000.003562017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Tran Bao Trung8,900.001942017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Joshua Fox360.003942017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Leif K-Brooks67,216.00322017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Max Kesin5,000.002592017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Michal Pokorný12,000.001382017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Mick Porter1,200.003472017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Alan Chang18,000.001012017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Misha Gurevich1,500.003362017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Nhat Anh Phan7,000.002122017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Austin Peña9,929.001822017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Patrick LaVictoire20,885.00842017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Bryan Dana5,463.002482017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Paul Rhodes789.003702017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Phil Hazelden5,559.002412017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Berkeley Existential Risk Initiative100,000.00252017-09-13AI safetyhttps://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/--
Ethan Dickinson2,000.003112017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Scott Dickey2,000.003112017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Scott Worley5,677.002392017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
James Mazur10,010.001652017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
The Maurice Amado Foundation16,000.001142017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Ben Hoskin5,000.002592017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/-- See post http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ assessing research. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute.".
Joshua Fox360.003942017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Max Kesin2,000.003112017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Michael Blume375.003932017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Mick Porter2,000.003112017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
| Donor | Amount (USD) | Rank | Date | Cause area | Source | Notes |
|---|---|---|---|---|---|---|
| Misha Gurevich | 1,500.00 | 336 | 2017 | AI safety | https://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/ | -- |
| Nathaniel Soares | 10.00 | 456 | 2017 | AI safety | https://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/ | -- |
| Benjamin Goldhaber | 3,625.00 | 287 | 2017 | AI safety | https://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/ | -- |
| Brandon Reinhart | 5,000.00 | 259 | 2017 | AI safety | https://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/ | -- |
| Paul Rhodes | 58.00 | 438 | 2017 | AI safety | https://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/ | -- |
| Daniel Ziegler | 5,000.00 | 259 | 2017 | AI safety | https://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/ | -- |
| Eric Rogstad | 46,133.00 | 47 | 2017 | AI safety | https://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/ | -- |
| Ethan Dickinson | 3,000.00 | 292 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Scott Dickey | 3,000.00 | 292 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Sebastian Hagen | 409.00 | 387 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Simon Sáfár | 1,000.00 | 356 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Janos Kramar | 200.00 | 409 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Jean-Philippe Sugarbroad | 200.00 | 409 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Ben Hoskin | 5,000.00 | 259 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | See the post http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ assessing the research. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute." |
| Johan Edström | 2,000.00 | 311 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Joshua Fox | 360.00 | 394 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Misha Gurevich | 1,500.00 | 336 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Benjamin Goldhaber | 1,375.00 | 342 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Peter Scott | 50,000.00 | 42 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Raising for Effective Giving | 175,027.00 | 17 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Robin Powell | 10,400.00 | 162 | 2017 | AI safety | https://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/ | -- |
| Scott Dickey | 17,000.00 | 107 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Scott Siskind | 19,000.00 | 97 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Jaan Tallinn | 60,500.00 | 34 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Scott Worley | 10,510.00 | 158 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Sebastian Hagen | 10,442.00 | 161 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Simon Sáfár | 10,900.00 | 155 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Jean-Philippe Sugarbroad | 11,000.00 | 152 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Ben Hoskin | 20,000.00 | 89 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | See the post http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ assessing the research. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute." |
| Johan Edström | 3,700.00 | 286 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| William Morgan | 1,000.00 | 356 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Jonathan Weissman | 20,000.00 | 89 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Joshua Fox | 600.00 | 374 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Kevin Fischer | 2,000.00 | 311 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Luke Stebbing | 10,550.00 | 157 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Marcello Herreshoff | 12,000.00 | 138 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Max Kesin | 3,420.00 | 288 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Michael Blume | 5,050.00 | 257 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Michael Cohen | 9,977.00 | 181 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Mick Porter | 4,800.00 | 278 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Mikko Rauhala | 6,000.00 | 225 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Misha Gurevich | 3,000.00 | 292 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Benjamin Goldhaber | 12,250.00 | 136 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Nicolas Tarleton | 2,000.00 | 311 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Benjamin Hoffman | 100.00 | 424 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Brandon Reinhart | 20,050.00 | 87 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Edwin Evans | 30,000.00 | 63 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Emma Borhanian | 12,000.00 | 138 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Robin Powell | 400.00 | 388 | 2017 | AI safety | https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | -- |
| Gordon Irlam | 20,000.00 | 89 | 2017 | AI safety | https://www.gricf.org/2017-report.html | -- |
| Donor | Amount (USD) | Rank | Date | Cause area | Source | Notes |
|---|---|---|---|---|---|---|
| Open Philanthropy Project | 500,000.00 | 7 | 2016-08 | AI safety | https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support | See details below. |

Donation process: The grant page describes the process in Section 1, "Background and Process": "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design”. [...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The linked document is https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf The page continues: "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research. In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs."

Intended use of funds (category): Organizational general expenses

Intended use of funds: The grant page, Section 3.1, "Budget and room for more funding", says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."

Donor reason for selecting the donee: The grant page, Section 3.2, "Case for the grant", gives five reasons: (1) uncertainty about the technical assessment (i.e., despite the negative technical assessment, there is a chance that MIRI's work is high-potential); (2) increasing research supply and diversity in the important but neglected AI safety space; (3) potential for improvement of MIRI's research program; (4) recognition of MIRI's early articulation of the value alignment problem; (5) other considerations: (a) MIRI's role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) a "participation grant" for time spent in the evaluation process, and (e) a grant in advance of a potential need for significant consulting help from MIRI on AI safety.

Donor reason for donating that amount (rather than a bigger or smaller amount): The most Open Phil would have given MIRI was $1.5 million per year; it recommended a partial amount because of reservations described on the grant page in Section 2, "Our impression of MIRI’s Agent Foundations research": (1) an assessment that the research is not likely relevant to reducing risks from advanced AI, especially risks from transformative AI in the next 20 years; (2) MIRI had not made much progress toward its agenda, with internal and external reviewers describing the work as technically nontrivial but unimpressive, comparable to what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."

Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed.
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: Section 4, "Plans for follow-up", says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."

Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (not an official Open Phil statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the grant amount and the three-year commitment are both more positive for MIRI than the expectations at the time of the original grant.

Other notes: The grant page links to the commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf. The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
| Donor | Amount (USD) | Rank | Date | Cause area | Source | Notes |
|---|---|---|---|---|---|---|
| Sebastian Hagen | 6,085.00 | 224 | 2016 | AI safety | https://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php | -- |
| Joshua Fox | 560.00 | 376 | 2016 | AI safety | https://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php | -- |
| Misha Gurevich | 2,000.00 | 311 | 2016 | AI safety | https://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php | -- |
| Alexei Andreev | 525.00 | 379 | 2016 | AI safety | https://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php | -- |
| Brandon Reinhart | 15,000.00 | 119 | 2016 | AI safety | https://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php | -- |
| Paul Rhodes | 26.00 | 449 | 2016 | AI safety | https://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php | -- |
| Joshua Fox | 100.00 | 424 | 2016 | AI safety | https://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php | -- |
| Max Kesin | 6,580.00 | 218 | 2016 | AI safety | https://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php | -- |
| Michael Cohen | 29,382.00 | 67 | 2016 | AI safety | https://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php | -- |
| Misha Gurevich | 1,000.00 | 356 | 2016 | AI safety | https://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php | -- |
| Paul Rhodes | 5.00 | 462 | 2016 | AI safety | https://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php | -- |
| Ethan Dickinson | 20,518.00 | 85 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Scott Dickey | 11,000.00 | 152 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Scott Siskind | 2,037.00 | 310 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Jaan Tallinn | 80,000.00 | 30 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Scott Worley | 5,488.00 | 247 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Sebastian Hagen | 6,000.00 | 225 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Janos Kramar | 541.00 | 378 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Jeremy Schlatter | 4,000.00 | 282 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Ben Hoskin | 32,728.00 | 59 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Johan Edström | 300.00 | 399 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| William Morgan | 400.00 | 388 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| John Salvatier | 9,110.00 | 190 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Jonathan Weissman | 20,000.00 | 89 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Joshua Fox | 650.00 | 372 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Luke Stebbing | 10,500.00 | 159 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Marcello Herreshoff | 12,000.00 | 138 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Michael Blume | 15,000.00 | 119 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Mick Porter | 2,400.00 | 305 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Mikko Rauhala | 6,000.00 | 225 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Misha Gurevich | 3,000.00 | 292 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Paul Rhodes | 30.00 | 447 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Richard Schwall | 30,000.00 | 63 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Edwin Evans | 40,000.00 | 52 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Robin Powell | 200.00 | 409 | 2016 | AI safety | https://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php | -- |
| Gordon Irlam | 10,000.00 | 166 | 2016 | AI safety | https://www.gricf.org/2016-report.html | -- |
| Henry Cooksley | 38.12 | 443 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| William Grunow | 37.49 | 444 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Raymond Arnold | 2,000.00 | 311 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Jacob Falkovich | 300.00 | 399 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Blake Borgeson | 300,000.00 | 10 | 2016 | AI safety | https://intelligence.org/2016/08/05/miri-strategy-update-2016/ | Second-biggest donation in the history of MIRI; see also http://effective-altruism.com/ea/14u/eas_write_about_where_they_give/ for more context on overall donations for the year. Percentage of total donor spend in the corresponding batch of donations: 100.00%. |
| Akhil Jalan | 31.00 | 446 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Kyle Bogosian | 135.00 | 419 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Vegard Blindheim | 63.39 | 437 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Loren Merritt | 115,000.00 | 21 | 2016 | AI safety | http://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ | Total amount donated up to this point is listed as $500,000. The amount listed as of November 2016 at http://web.archive.org/web/20161118163935/https://intelligence.org/topdonors/ is $385,000. The additional $115,000 was likely donated at the end of 2016. |
| Robert Yaman | 5,000.00 | 259 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Nick Brown | 119.57 | 422 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Alexandre Zani | 50.00 | 439 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Michael Dickens | 20.00 | 452 | 2016-01 | -- | http://mdickens.me/donations/small.html | -- |
| JP Addison | 500.00 | 380 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Michael Sadowsky | 9,000.00 | 192 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Mathieu Roy | 198.85 | 414 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Tim Bakker | 474.72 | 385 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Michael Dello-Iacovo | 20.00 | 452 | 2015-12-16 | FIXME | http://www.michaeldello.com/donations-log/ | Affected regions: FIXME; affected countries: FIXME. |
| Future of Life Institute | 250,000.00 | 11 | 2015-09-01 | AI safety | https://futureoflife.org/AI/2015awardees#Fallenstein | A project grant. Project title: Benja Fallenstein. It appears that this was added to the MIRI top donors page with an amount of $250,252 on 2017-02-04: see https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ for more. |
| Scott Dickey | 3,000.00 | 292 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Sebastian Hagen | 6,113.00 | 221 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Joshua Fox | 20.00 | 452 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Kevin Fischer | 840.00 | 365 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Michael Blume | 2,000.00 | 311 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Mick Porter | 1,200.00 | 347 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Misha Gurevich | 1,000.00 | 356 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Paul Rhodes | 8.00 | 460 | 2015 | AI safety | https://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php | -- |
| Scott Dickey | 3,000.00 | 292 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Henrik Jonsson | 1.00 | 465 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Scott Siskind | 30.00 | 447 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Scott Worley | 13,344.00 | 129 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Sebastian Hagen | 6,000.00 | 225 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Jeremy Schlatter | 1.00 | 465 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Thiel Foundation | 250,000.00 | 11 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Joshua Fox | 40.00 | 442 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Kevin Fischer | 1,680.00 | 330 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Luke Stebbing | 9,650.00 | 184 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Marcello Herreshoff | 6,560.00 | 219 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Michael Blume | 2,000.00 | 311 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Mick Porter | 1,200.00 | 347 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Misha Gurevich | 2,000.00 | 311 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Benjamin Hoffman | 3.00 | 464 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Paul Rhodes | 14.00 | 455 | 2015 | AI safety | https://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php | -- |
| Scott Dickey | 4,000.00 | 282 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Janos Kramar | 800.00 | 367 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| John Salvatier | 2,000.00 | 311 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Joshua Fox | 45.00 | 441 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Kevin Fischer | 1,260.00 | 344 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Marcello Herreshoff | 6,000.00 | 225 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Zvi Mowshowitz | 5,010.00 | 258 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Mick Porter | 1,600.00 | 333 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Misha Gurevich | 1,500.00 | 336 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Aleksei Riikonen | 130.00 | 420 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Paul Christiano | 7,000.00 | 212 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Paul Rhodes | 100.00 | 424 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Ethan Dickinson | 12,000.00 | 138 | 2015 | AI safety | https://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php | -- |
| Gordon Irlam | 10,000.00 | 166 | 2015 | AI safety | https://www.gricf.org/2015-report.html | -- |
| William Grunow | 37.49 | 444 | 2015 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Jacob Falkovich | 50.00 | 439 | 2015 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Blake Borgeson | 50,460.00 | 40 | 2015 | AI safety | https://intelligence.org/topdonors/ | Derived by subtracting the known 2016 donation of $300,000 from the listed total donation amount. |
| Brian Tomasik | 2,000.00 | 311 | 2015 | AI safety | -- | Influencer: Luke Muehlhauser. A thank-you for a helpful conversation with outgoing director Luke Muehlhauser; information conveyed via private communication and published with permission. |
| Kyle Bogosian | 250.00 | 403 | 2015 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Johannes Gätjen | 118.68 | 423 | 2015 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Nick Brown | 79.71 | 433 | 2015 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Michael Sadowsky | 25,000.00 | 72 | 2015 | AI safety | https://github.com/peterhurford/ea-data/ | -- |
| Rolf Nelson | 1,710.00 | 329 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Gary Basin | 25,000.00 | 72 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Scott Dickey | 11,020.00 | 150 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Guy Srinivasan | 6,910.00 | 214 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Henrik Jonsson | 36,975.00 | 54 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Scott Siskind | 7,433.00 | 205 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Investling Group | 65,000.00 | 33 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Jaan Tallinn | 100,000.00 | 25 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Sebastian Hagen | 17,154.00 | 105 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| James Douma | 550.00 | 377 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Janos Kramar | 4,870.00 | 277 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Sergio Tarrero | 620.00 | 373 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Stephan T. Lavavej | 15,000.00 | 119 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Jed McCaleb | 631,137.00 | 5 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Jeremy Schlatter | 310.00 | 398 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Thiel Foundation | 250,000.00 | 11 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Thomas Jackson | 5,000.00 | 259 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Jesse Liptrap | 10,490.00 | 160 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Tuxedage John Adams | 5,000.00 | 259 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Johan Edström | 2,250.00 | 308 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| John Salvatier | 10.00 | 456 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Wolf Tivy | 16,758.00 | 110 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Jonathan Weissman | 20,010.00 | 88 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Joshua Fox | 490.00 | 383 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Kevin Fischer | 4,270.00 | 280 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Liron Shapira | 15,100.00 | 118 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Louie Helm | 270.00 | 402 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Luke Stebbing | 9,300.00 | 189 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Marcello Herreshoff | 12,550.00 | 130 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Marius van Voorden | 7,210.00 | 208 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Michael Blume | 5,140.00 | 252 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Mick Porter | 8,010.00 | 198 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Mihaly Barasz | 24,073.00 | 77 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Mikko Rauhala | 170.00 | 415 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Misha Gurevich | 5,520.00 | 243 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Alexei Andreev | 23,280.00 | 80 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Nathaniel Soares | 33,220.00 | 58 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Nicolas Tarleton | 200.00 | 409 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Benjamin Hoffman | 12,229.00 | 137 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Paul Rhodes | 430.00 | 386 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Daniel Nelson | 70.00 | 434 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Donald King | 9,000.00 | 192 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Richard Schwall | 106,608.00 | 24 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Edwin Evans | 50,030.00 | 41 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Robin Powell | 1,810.00 | 327 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Ethan Dickinson | 35,490.00 | 56 | 2014 | AI safety | http://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/ | -- |
| Aaron Gertler | 100.00 | 424 | 2014-08-01 | AI safety | https://aarongertler.net/donations-all-years/ | Monthly donation. |
| Peter Hurford | 90.00 | 432 | 2014-05-06 | AI safety | http://peterhurford.com/other/donations.html | -- |
| Brian Tomasik | 10.00 | 456 | 2014 | AI safety | -- | Part of a donation drive; information conveyed via private communication and published with permission. |
| Gordon Irlam | 10,000.00 | 166 | 2014 | AI safety | https://www.gricf.org/2014-report.html | -- |
| Gil Elbaz | 5,000.00 | 259 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Scott Dickey | 14,500.00 | 126 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Giles Edkins | 145.00 | 418 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Guy Srinivasan | 8,400.00 | 197 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Investling Group | 24,000.00 | 78 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Jaan Tallinn | 100,000.00 | 25 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Janos Kramar | 5,600.00 | 240 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Jason Joachim | 100.00 | 424 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Jeremy Schlatter | 15,000.00 | 119 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Tomer Kagan | 16,500.00 | 111 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Johan Edström | 7,730.00 | 201 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| William Morgan | 5,171.00 | 251 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Jonathan Weissman | 30,280.00 | 62 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Joshua Fox | 1,529.00 | 335 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Kevin Fischer | 4,280.00 | 279 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Louie Helm | 1,260.00 | 344 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Marius van Voorden | 5,000.00 | 259 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Michael Ames | 12,500.00 | 132 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Michael Blume | 9,990.00 | 180 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Mihaly Barasz | 12,550.00 | 130 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Mikko Rauhala | 2,575.00 | 304 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Misha Gurevich | 7,550.00 | 202 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Aleksei Riikonen | 242.00 | 407 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Alexei Andreev | 16,000.00 | 114 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Nicolas Tarleton | 9,659.00 | 183 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Paul Rhodes | 5,519.00 | 244 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Brian Cartmell | 700.00 | 371 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Chris Haley | 250.00 | 403 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Daniel Nelson | 2,077.00 | 309 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Edwin Evans | 52,500.00 | 39 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Riley Goodside | 3,949.00 | 285 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Robin Powell | 2,350.00 | 306 | 2013 | AI safety | http://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/ | -- |
| Pablo Stafforini | 25.00 | 450 | 2013-08-29 | -- | http://www.stafforini.com/blog/donations/ | -- |
Thiel Foundation27,000.00702013AI safetyhttps://web.archive.org/web/20130115144542/http://singularity.org/topdonors/--
Richard Schwall10,000.001662013AI safetyhttps://web.archive.org/web/20130115144542/http://singularity.org/topdonors/--
Loren Merritt245,000.00142013AI safetyhttp://web.archive.org/web/20140403110808/http://intelligence.org/topdonors/-- Total amount donated up to this point is listed as $385,000. Of this, $140,000 is accounted for by explicitly disclosed donations; the remainder is approximately attributed to 2013.
Gordon Irlam5,000.002592013AI safetyhttps://www.gricf.org/2013-report.html--
Loren Merritt20,000.00892012-12-07AI safetyhttp://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/7zt4-- Donation is announced in response to the post http://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/ for the MIRI 2012 winter fundraiser.
Investling Group18,000.001012012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
William Morgan1,200.003472012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
Aleksei Riikonen14,000.001272012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
Alexei Andreev24,800.00762012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
Gil Elbaz5,000.002592012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Giles Edkins5,000.002592012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Guy Srinivasan43,000.00502012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Henrik Jonsson1,549.003342012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Investling Group11,000.001522012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jaan Tallinn109,000.00232012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Janos Kramar1,200.003472012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jason Joachim44,000.00492012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Stanley Pecavar17,450.001032012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jeff Bone5,000.002592012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jeremy Schlatter9,100.001912012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Tomer Kagan10,000.001662012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jesse Liptrap100.004242012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Johan Edström800.003672012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
William Morgan5,800.002372012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
John Salvatier6,598.002172012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Joshua Fox100.004242012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Kevin Fischer5,900.002352012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Liron Shapira9,650.001842012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Martine Rothblatt25,000.00722012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Michael Blume10,800.001562012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Michael Roy Ames12,500.001322012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Mihaly Barasz15,000.001192012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Mikko Rauhala8,600.001962012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Misha Gurevich2,300.003072012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Nicolas Tarleton500.003802012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Brandon Reinhart10,000.001662012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/-- See 2011 LessWrong post announcing reasoning for first MIRI donation: http://lesswrong.com/lw/5il/siai_an_examination/.
Brian Cartmell46,000.00482012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Chris Haley50,000.00422012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Daniel Nelson6,000.002252012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Quixey15,000.001192012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Edwin Evans57,000.00372012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Rolf Nelson20,000.00892012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Frank Adamek5,250.002492012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Scott Dickey48,000.00452012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Henrik Jonsson16,000.001142012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Investling Group191,000.00152012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Jaan Tallinn155,000.00192012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
James Douma5,880.002362012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Janos Kramar9,600.001862012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Sergio Tarrero14,600.001252012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Thiel Foundation1,100,000.0032012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Jesse Liptrap19,000.00972012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Johan Edström6,900.002152012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Jonathan Weissman41,000.00512012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Joshua Fox6,000.002252012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Joshua Looks5,000.002592012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Louie Helm10,400.001622012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Mikko Rauhala16,000.001142012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Misha Gurevich14,000.001272012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Adam Weissman10,000.001662012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Andrew Hay6,201.002202012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Nicolas Tarleton7,200.002092012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Brian Cartmell100,000.00252012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Chris Haley10,000.001662012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Donald King6,000.002252012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Edwin Evans180,000.00162012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Richard Schwall161,000.00182012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Riley Goodside6,100.002222012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Elliot Glaysher23,000.00812012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Robin Powell21,000.00832012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Loren Merritt110,000.00222012AI safetyhttp://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/7zt4-- Donation is announced in response to the post http://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/ for the MIRI 2012 winter fundraiser.
Loren Merritt10,000.001662011-08-25AI safetyhttp://lesswrong.com/lw/78s/help_fund_lukeprog_at_siai/4p1x-- Donation is announced in response to the post http://lesswrong.com/lw/78s/help_fund_lukeprog_at_siai/ by Eliezer Yudkowsky asking for help to fund Luke Muehlhauser at MIRI (then called SIAI, the Singularity Institute for Artificial Intelligence.
Gwern Branwen----2009-09-22AI safetyhttps://www.lesswrong.com/posts/XSqYe5Rsqq4TR7ryL/the-finale-of-the-ultimate-meta-mega-crossover#6T24xqkJotPQTJL5R-- Gwern mentions that he is a past donor to MIRI in this discussion thread, but gives neither a date nor an amount.
Brian Tomasik12,000.001382009AI safety/suffering reductionhttp://reducing-suffering.org/my-donations-past-and-present/-- Donation earmarked for suffering reduction work that would not have happened counterfactually; at the time, the organization was called the Singularity Institute for Artificial Intelligence (SIAI). Employer match: Microsoft matched 12,000.00; Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Brian Tomasik12,000.001382008AI safetyhttp://reducing-suffering.org/my-donations-past-and-present/-- Percentage of total donor spend in the corresponding batch of donations: 100.00%.