Machine Intelligence Research Institute donations received

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of March 2023. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Basic donee information

Item | Value
Country | United States
Facebook page | MachineIntelligenceResearchInstitute
Website | https://intelligence.org
Donate page | https://intelligence.org/donate/
Donors list page | https://intelligence.org/topdonors/
Transparency and financials page | https://intelligence.org/transparency/
Donation case page | https://forum.effectivealtruism.org/posts/EKfjh5W7PkykLM7eG/miri-update-and-fundraising-case-1
Twitter username | MIRIBerkeley
Wikipedia page | https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute
Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support
Charity Navigator page | https://www.charitynavigator.org/index.cfm?bay=search.profile&ein=582565917
Guidestar page | https://www.guidestar.org/profile/58-2565917
Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute
Org Watch page | https://orgwatch.issarice.com/?organization=Machine+Intelligence+Research+Institute
Key people | Eliezer Yudkowsky, Nate Soares, Luke Muehlhauser
Launch date | 2000

This entity is also a donor.

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 480 | 6,000 | 100,150 | 0 | 130 | 650 | 2,000 | 5,000 | 6,000 | 9,990 | 12,500 | 20,518 | 55,103 | 15,592,829
AI safety | 476 | 6,000 | 92,584 | 0 | 150 | 800 | 2,000 | 5,000 | 6,000 | 10,000 | 12,550 | 20,518 | 55,103 | 15,592,829
FIXME | 1 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20 | 20
 | 3 | 25 | 1,333,967 | 20 | 20 | 20 | 20 | 25 | 25 | 25 | 4,001,855 | 4,001,855 | 4,001,855 | 4,001,855
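The summary columns above can be reproduced from the raw per-donation amounts in the underlying data. A minimal sketch in Python, assuming a nearest-rank percentile method (the portal's actual interpolation scheme may differ):

```python
from statistics import mean, median

def donation_stats(amounts):
    """Summary statistics in the style of the donee statistics table.

    The percentile method here (nearest-rank) is an assumption; it matches
    the table's shape but not necessarily its exact interpolation.
    """
    xs = sorted(amounts)
    n = len(xs)

    def pct(p):
        # Nearest-rank percentile over the sorted amounts.
        k = max(0, min(n - 1, round(p / 100 * (n - 1))))
        return xs[k]

    return {
        "count": n,
        "median": median(xs),
        "mean": mean(xs),
        "minimum": xs[0],
        **{f"p{p}": pct(p) for p in range(10, 100, 10)},
        "maximum": xs[-1],
    }

# The three donations in the blank-cause-area row above.
stats = donation_stats([20, 25, 4_001_855])
```

For that three-donation row this reproduces count 3, median 25, mean ~1,333,967, minimum 20, and maximum 4,001,855, matching the table.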

Donation amounts by donor and year for donee Machine Intelligence Research Institute

Donor Total 2021 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2009 2008 2007
Anonymous MIRI cryptocurrency donor 16,599,378.00 15,592,829.00 0.00 0.00 0.00 1,006,549.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Open Philanthropy 14,756,250.00 0.00 7,703,750.00 2,652,500.00 150,000.00 3,750,000.00 500,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Vitalik Buterin 4,803,990.50 4,001,854.50 0.00 0.00 802,136.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Thiel Foundation 1,627,000.00 0.00 0.00 0.00 0.00 0.00 0.00 250,000.00 250,000.00 27,000.00 570,000.00 0.00 255,000.00 150,000.00 125,000.00
Jaan Tallinn 1,447,500.00 0.00 843,000.00 0.00 0.00 60,500.00 80,000.00 0.00 100,000.00 100,000.00 264,000.00 0.00 0.00 0.00 0.00
Berkeley Existential Risk Initiative 1,100,000.00 0.00 300,000.00 600,000.00 0.00 200,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Effective Altruism Funds: Long-Term Future Fund 678,994.00 0.00 100,000.00 50,000.00 528,994.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jed McCaleb 671,137.00 0.00 40,000.00 0.00 0.00 0.00 0.00 0.00 631,137.00 0.00 0.00 0.00 0.00 0.00 0.00
Loren Merritt 525,000.00 0.00 0.00 0.00 0.00 25,000.00 115,000.00 0.00 0.00 245,000.00 130,000.00 10,000.00 0.00 0.00 0.00
Edwin Evans 475,080.00 0.00 0.00 0.00 35,550.00 60,000.00 40,000.00 0.00 50,030.00 52,500.00 237,000.00 0.00 0.00 0.00 0.00
Richard Schwall 419,495.00 0.00 0.00 0.00 65,189.00 46,698.00 30,000.00 0.00 106,608.00 10,000.00 161,000.00 0.00 0.00 0.00 0.00
Christian Calderon 367,574.00 0.00 0.00 0.00 0.00 367,574.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Blake Borgeson 350,470.00 0.00 0.00 0.00 10.00 0.00 300,000.00 50,460.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Investling Group 309,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 65,000.00 24,000.00 220,000.00 0.00 0.00 0.00 0.00
Future of Life Institute 250,000.00 0.00 0.00 0.00 0.00 0.00 0.00 250,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Raising for Effective Giving 204,167.00 0.00 0.00 0.00 0.00 204,167.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jonathan Weissman 171,290.00 0.00 0.00 0.00 20,000.00 40,000.00 20,000.00 0.00 20,010.00 30,280.00 41,000.00 0.00 0.00 0.00 0.00
Brian Cartmell 146,700.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 700.00 146,000.00 0.00 0.00 0.00 0.00
Scott Dickey 130,520.00 0.00 0.00 0.00 3,000.00 33,000.00 11,000.00 10,000.00 11,020.00 14,500.00 48,000.00 0.00 0.00 0.00 0.00
Eric Rogstad 120,236.00 0.00 0.00 0.00 19,000.00 101,236.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Ben Hoskin 94,209.00 0.00 0.00 0.00 31,481.00 30,000.00 32,728.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Ethan Dickinson 93,408.00 0.00 0.00 0.00 0.00 25,400.00 20,518.00 12,000.00 35,490.00 0.00 0.00 0.00 0.00 0.00 0.00
Peter Scott 80,000.00 0.00 0.00 0.00 30,000.00 50,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Sebastian Hagen 74,587.00 0.00 0.00 0.00 22,384.00 10,851.00 12,085.00 12,113.00 17,154.00 0.00 0.00 0.00 0.00 0.00 0.00
Buck Shlegeris 72,679.00 0.00 0.00 0.00 5.00 72,674.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Marius van Voorden 71,461.00 0.00 0.00 0.00 0.00 59,251.00 0.00 0.00 7,210.00 5,000.00 0.00 0.00 0.00 0.00 0.00
Leif K-Brooks 67,216.00 0.00 0.00 0.00 0.00 67,216.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Alexei Andreev 64,605.00 0.00 0.00 0.00 0.00 0.00 525.00 0.00 23,280.00 16,000.00 24,800.00 0.00 0.00 0.00 0.00
Chris Haley 60,250.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 250.00 60,000.00 0.00 0.00 0.00 0.00
Guy Srinivasan 58,310.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 6,910.00 8,400.00 43,000.00 0.00 0.00 0.00 0.00
Gordon Irlam 55,000.00 0.00 0.00 0.00 0.00 20,000.00 10,000.00 10,000.00 10,000.00 5,000.00 0.00 0.00 0.00 0.00 0.00
Henrik Jonsson 54,525.00 0.00 0.00 0.00 0.00 0.00 0.00 1.00 36,975.00 0.00 17,549.00 0.00 0.00 0.00 0.00
Mikko Rauhala 53,745.00 0.00 0.00 0.00 7,200.00 13,200.00 6,000.00 0.00 170.00 2,575.00 24,600.00 0.00 0.00 0.00 0.00
Michael Blume 51,755.00 0.00 0.00 0.00 1,400.00 5,425.00 15,000.00 4,000.00 5,140.00 9,990.00 10,800.00 0.00 0.00 0.00 0.00
Mihaly Barasz 51,623.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 24,073.00 12,550.00 15,000.00 0.00 0.00 0.00 0.00
Alan Chang 51,050.00 0.00 0.00 0.00 33,050.00 18,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Luke Stebbing 50,500.00 0.00 0.00 0.00 3,000.00 18,050.00 10,500.00 9,650.00 9,300.00 0.00 0.00 0.00 0.00 0.00 0.00
Misha Gurevich 50,370.00 0.00 0.00 0.00 1,500.00 9,000.00 6,000.00 4,500.00 5,520.00 7,550.00 16,300.00 0.00 0.00 0.00 0.00
Brandon Reinhart 50,050.00 0.00 0.00 0.00 0.00 25,050.00 15,000.00 0.00 0.00 0.00 10,000.00 0.00 0.00 0.00 0.00
Scott Worley 50,005.00 0.00 0.00 0.00 9,486.00 21,687.00 5,488.00 13,344.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Marcello Herreshoff 49,110.00 0.00 0.00 0.00 0.00 12,000.00 12,000.00 12,560.00 12,550.00 0.00 0.00 0.00 0.00 0.00 0.00
Jason Joachim 44,100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 44,000.00 0.00 0.00 0.00 0.00
Michael Cohen 39,359.00 0.00 0.00 0.00 0.00 9,977.00 29,382.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Scott Siskind 38,500.00 0.00 0.00 0.00 0.00 29,000.00 2,037.00 30.00 7,433.00 0.00 0.00 0.00 0.00 0.00 0.00
Robin Powell 37,560.00 0.00 0.00 0.00 1,000.00 11,200.00 200.00 0.00 1,810.00 2,350.00 21,000.00 0.00 0.00 0.00 0.00
Austin Peña 37,517.00 0.00 0.00 0.00 26,554.00 10,963.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Max Kesin 37,000.00 0.00 0.00 0.00 10,000.00 20,420.00 6,580.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Sadowsky 34,000.00 0.00 0.00 0.00 0.00 0.00 9,000.00 25,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Gustav Simonsson 33,285.00 0.00 0.00 0.00 33,285.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nathaniel Soares 33,230.00 0.00 0.00 0.00 0.00 10.00 0.00 0.00 33,220.00 0.00 0.00 0.00 0.00 0.00 0.00
Kelsey Piper 30,730.00 0.00 0.00 0.00 30,730.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Quinn Maurmann 30,575.00 0.00 0.00 0.00 30,575.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jesse Liptrap 29,590.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 10,490.00 0.00 19,100.00 0.00 0.00 0.00 0.00
Jeremy Schlatter 28,711.00 0.00 0.00 0.00 150.00 150.00 4,000.00 1.00 310.00 15,000.00 9,100.00 0.00 0.00 0.00 0.00
Tomer Kagan 26,500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16,500.00 10,000.00 0.00 0.00 0.00 0.00
Brian Tomasik 26,010.00 0.00 0.00 0.00 0.00 0.00 0.00 2,000.00 10.00 0.00 0.00 0.00 12,000.00 12,000.00 0.00
Patrick LaVictoire 25,885.00 0.00 0.00 0.00 5,000.00 20,885.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Paul Crowley 25,850.00 0.00 0.00 0.00 13,400.00 12,450.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Zvi Mowshowitz 25,010.00 0.00 0.00 0.00 10,000.00 10,000.00 0.00 5,010.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Martine Rothblatt 25,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 25,000.00 0.00 0.00 0.00 0.00
Gary Basin 25,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 25,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Mick Porter 24,810.00 0.00 0.00 0.00 1,200.00 9,200.00 2,400.00 4,000.00 8,010.00 0.00 0.00 0.00 0.00 0.00 0.00
Liron Shapira 24,750.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 15,100.00 0.00 9,650.00 0.00 0.00 0.00 0.00
Johan Edström 23,680.00 0.00 0.00 0.00 0.00 5,700.00 300.00 0.00 2,250.00 7,730.00 7,700.00 0.00 0.00 0.00 0.00
Ethan Sterling 23,418.00 0.00 0.00 0.00 23,418.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Mike Anderson 23,000.00 0.00 0.00 0.00 23,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Elliot Glaysher 23,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23,000.00 0.00 0.00 0.00 0.00
Janos Kramar 22,811.00 0.00 0.00 0.00 0.00 200.00 541.00 800.00 4,870.00 5,600.00 10,800.00 0.00 0.00 0.00 0.00
Rolf Nelson 21,810.00 0.00 0.00 0.00 100.00 0.00 0.00 0.00 1,710.00 0.00 20,000.00 0.00 0.00 0.00 0.00
Bruno Parga 21,743.00 0.00 0.00 0.00 10,382.00 11,361.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Kevin Fischer 21,230.00 0.00 0.00 0.00 1,000.00 2,000.00 0.00 3,780.00 4,270.00 4,280.00 5,900.00 0.00 0.00 0.00 0.00
Pasha Kamyshev 20,200.00 0.00 0.00 0.00 20,200.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Survival and Flourishing Fund 20,000.00 0.00 20,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Victoria Krakovna 19,867.00 0.00 0.00 0.00 19,867.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nicolas Tarleton 19,559.00 0.00 0.00 0.00 0.00 2,000.00 0.00 0.00 200.00 9,659.00 7,700.00 0.00 0.00 0.00 0.00
Simon Sáfár 19,162.00 0.00 0.00 0.00 3,131.00 16,031.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jai Dhyani 19,156.00 0.00 0.00 0.00 7,751.00 11,405.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Sergejs Silko 18,200.00 0.00 0.00 0.00 18,200.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
John Salvatier 17,718.00 0.00 0.00 0.00 0.00 0.00 9,110.00 2,000.00 10.00 0.00 6,598.00 0.00 0.00 0.00 0.00
Stanley Pecavar 17,450.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 17,450.00 0.00 0.00 0.00 0.00
Benjamin Goldhaber 17,250.00 0.00 0.00 0.00 0.00 17,250.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Leopold Bauernfeind 16,900.00 0.00 0.00 0.00 0.00 16,900.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Wolf Tivy 16,758.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16,758.00 0.00 0.00 0.00 0.00 0.00 0.00
The Maurice Amado Foundation 16,000.00 0.00 0.00 0.00 0.00 16,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Sergio Tarrero 15,220.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 620.00 0.00 14,600.00 0.00 0.00 0.00 0.00
Stephan T. Lavavej 15,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 15,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Quixey 15,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 15,000.00 0.00 0.00 0.00 0.00
Donald King 15,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 9,000.00 0.00 6,000.00 0.00 0.00 0.00 0.00
Tran Bao Trung 14,379.00 0.00 0.00 0.00 0.00 14,379.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Aleksei Riikonen 14,372.00 0.00 0.00 0.00 0.00 0.00 0.00 130.00 0.00 242.00 14,000.00 0.00 0.00 0.00 0.00
William Morgan 13,571.00 0.00 0.00 0.00 0.00 1,000.00 400.00 0.00 0.00 5,171.00 7,000.00 0.00 0.00 0.00 0.00
James Mazur 13,127.00 0.00 0.00 0.00 675.00 12,452.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michal Pokorný 13,000.00 0.00 0.00 0.00 1,000.00 12,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Eric Lin 12,870.00 0.00 0.00 0.00 12,870.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Roy Ames 12,500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12,500.00 0.00 0.00 0.00 0.00
Michael Ames 12,500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 12,500.00 0.00 0.00 0.00 0.00 0.00
Sam Eisenstat 12,356.00 0.00 0.00 0.00 0.00 12,356.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Benjamin Hoffman 12,332.00 0.00 0.00 0.00 0.00 100.00 0.00 3.00 12,229.00 0.00 0.00 0.00 0.00 0.00 0.00
Emma Borhanian 12,000.00 0.00 0.00 0.00 0.00 12,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Plotz 11,960.00 0.00 0.00 0.00 11,960.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Joshua Fox 11,934.00 0.00 0.00 0.00 360.00 2,040.00 1,310.00 105.00 490.00 1,529.00 6,100.00 0.00 0.00 0.00 0.00
Louie Helm 11,930.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 270.00 1,260.00 10,400.00 0.00 0.00 0.00 0.00
Kenn Hamm 11,472.00 0.00 0.00 0.00 11,472.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Stephanie Zolayvar 11,247.00 0.00 0.00 0.00 0.00 11,247.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jean-Philippe Sugarbroad 11,200.00 0.00 0.00 0.00 0.00 11,200.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Robert and Judith Babcock 11,100.00 0.00 0.00 0.00 11,100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Ryan Carey 10,172.00 0.00 0.00 0.00 5,086.00 5,086.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Riley Goodside 10,049.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 3,949.00 6,100.00 0.00 0.00 0.00 0.00
Gil Elbaz 10,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 5,000.00 0.00 0.00 0.00 0.00
Adam Weissman 10,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 10,000.00 0.00 0.00 0.00 0.00
Alex Schell 9,575.00 0.00 0.00 0.00 9,575.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Luke Titmus 8,837.00 0.00 0.00 0.00 0.00 8,837.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Xerxes Dotiwalla 8,700.00 0.00 0.00 0.00 1,350.00 7,350.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Daniel Nelson 8,147.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 70.00 2,077.00 6,000.00 0.00 0.00 0.00 0.00
Paul Rhodes 8,025.00 0.00 0.00 0.00 1,024.00 869.00 61.00 122.00 430.00 5,519.00 0.00 0.00 0.00 0.00 0.00
Laura and Chris Soares 7,510.00 0.00 0.00 0.00 0.00 7,510.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Paul Christiano 7,000.00 0.00 0.00 0.00 0.00 0.00 0.00 7,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nhat Anh Phan 7,000.00 0.00 0.00 0.00 0.00 7,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Alex Edelman 6,932.00 0.00 0.00 0.00 1,800.00 5,132.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nader Chehab 6,786.00 0.00 0.00 0.00 6,786.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
James Douma 6,430.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 550.00 0.00 5,880.00 0.00 0.00 0.00 0.00
Cliff & Stephanie Hyra 6,208.00 0.00 0.00 0.00 6,208.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Andrew Hay 6,201.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 6,201.00 0.00 0.00 0.00 0.00
Raymond Arnold 5,920.00 0.00 0.00 0.00 500.00 3,420.00 2,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Tobias Dänzer 5,734.00 0.00 0.00 0.00 0.00 5,734.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Bryan Dana 5,599.00 0.00 0.00 0.00 70.00 5,529.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Phil Hazelden 5,559.00 0.00 0.00 0.00 0.00 5,559.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Robert and Gery Ruddick 5,500.00 0.00 0.00 0.00 5,500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Jacob Falkovich 5,415.00 0.00 0.00 0.00 0.00 5,065.00 300.00 50.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Frank Adamek 5,250.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,250.00 0.00 0.00 0.00 0.00
Giles Edkins 5,145.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 145.00 5,000.00 0.00 0.00 0.00 0.00
Jeff Bone 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00
Joshua Looks 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00
Tuxedage John Adams 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Robert Yaman 5,000.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Daniel Ziegler 5,000.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Daniel Weinand 5,000.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Kevin R. Fischer 5,000.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Thomas Jackson 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 5,000.00 0.00 0.00 0.00 0.00 0.00 0.00
Patrick Brinich-Langlois 3,000.00 0.00 0.00 0.00 0.00 3,000.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Vipul Naik 500.00 0.00 0.00 0.00 500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
JP Addison 500.00 0.00 0.00 0.00 0.00 0.00 500.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Tim Bakker 474.72 0.00 0.00 0.00 0.00 0.00 474.72 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Kyle Bogosian 385.00 0.00 0.00 0.00 0.00 0.00 135.00 250.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Nick Brown 199.28 0.00 0.00 0.00 0.00 0.00 119.57 79.71 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Mathieu Roy 198.85 0.00 0.00 0.00 0.00 0.00 198.85 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Johannes Gätjen 118.68 0.00 0.00 0.00 0.00 0.00 0.00 118.68 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Aaron Gertler 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 0.00 0.00 0.00 0.00 0.00 0.00
Peter Hurford 90.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 90.00 0.00 0.00 0.00 0.00 0.00 0.00
William Grunow 74.98 0.00 0.00 0.00 0.00 0.00 37.49 37.49 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Vegard Blindheim 63.39 0.00 0.00 0.00 0.00 0.00 63.39 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Alexandre Zani 50.00 0.00 0.00 0.00 0.00 0.00 50.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Henry Cooksley 38.12 0.00 0.00 0.00 0.00 0.00 38.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Akhil Jalan 31.00 0.00 0.00 0.00 0.00 0.00 31.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Pablo Stafforini 25.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 25.00 0.00 0.00 0.00 0.00 0.00
Michael Dello-Iacovo 20.00 0.00 0.00 0.00 0.00 0.00 0.00 20.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Michael Dickens 20.00 0.00 0.00 0.00 0.00 0.00 20.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Gwern Branwen 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Total 48,071,776.52 19,594,683.50 9,006,750.00 3,302,500.00 2,145,164.00 6,754,495.00 1,316,133.14 689,164.88 1,607,877.00 669,931.00 2,421,078.00 10,000.00 267,000.00 162,000.00 125,000.00
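The donor-by-year matrix above is a pivot of the underlying per-donation records, each of which carries a donor, a year, and an amount. A minimal sketch of that aggregation, using a few rows taken from the table as sample data:

```python
from collections import defaultdict

# Sample (donor, year, amount) records; the full dataset lives in the
# portal's GitHub repository.
donations = [
    ("Open Philanthropy", 2020, 7_703_750.00),
    ("Open Philanthropy", 2019, 2_652_500.00),
    ("Jaan Tallinn", 2020, 843_000.00),
    ("Jaan Tallinn", 2017, 60_500.00),
]

def donor_year_totals(rows):
    """Sum donation amounts into (donor, year) cells of the pivot table.

    Cells with no donations are simply absent (the rendered table shows
    them as 0.00).
    """
    totals = defaultdict(float)
    for donor, year, amount in rows:
        totals[(donor, year)] += amount
    return dict(totals)

table = donor_year_totals(donations)
```

A row total (the "Total" column) is then just the sum of a donor's cells across all years, and the final "Total" row sums each year's column over all donors.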

Full list of documents in reverse chronological order (75 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Survival and Flourishing Fund FTX Foundation | Future of Humanity Institute Centre for the Governance of AI Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Google Deepmind Anthropic Alignment Research Center Redwood Research Ought AI Impacts Global Priorities Institute Center on Long-Term Risk Centre for Long-Term Resilience Rethink Priorities Convergence Analysis Stanford Existential Risk Initiative Effective Altruism Funds: Long-Term Future Fund Berkeley Existential Risk Initiative 80,000 Hours Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year for all the organizations he talks to: where each organization would like to see funds go, other than itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability.
His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers to be likely to be doing excellent work are either already well-funded or do not provide sufficient disclosure.
Our all-time largest donation, and major crypto support from Vitalik Buterin | 2021-05-13 | Colm Ó Riain | Machine Intelligence Research Institute | Anonymous Vitalik Buterin | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI announces two major donations: one (MIRI's largest donation to date) from an anonymous donor giving $15.6 million ($2.5 million per year from 2021 to 2024 and an additional $5.6 million in 2025), and 1050 ETH ($4,378,159) from Vitalik Buterin.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund | Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk OpenAI Berkeley Existential Risk Initiative Ought Global Priorities Institute Center on Long-Term Risk Center for Security and Emerging Technology AI Impacts Leverhulme Centre for the Future of Intelligence AI Safety Camp Future of Life Institute Convergence Analysis Median Group AI Pulse 80,000 Hours Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2020 Updates and Strategy | 2020-12-21 | Malo Bourgon | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI provides a general update and includes thoughts on strategy. On the strategy front, MIRI says that it is moving away from the strategy it announced in its 2017 post https://intelligence.org/2017/04/30/2017-updates-and-strategy/ that involved “seeking entirely new low-level foundations for optimization,” “endeavoring to figure out parts of cognition that can be very transparent as cognition,” and “experimenting with some specific alignment problems.” MIRI also provides updated thoughts on remote work during the COVID-19 pandemic and the possibility of relocating its office from Berkeley.
Our 2019 Fundraiser Review | 2020-02-13 | Colm Ó Riain | Machine Intelligence Research Institute | Machine Intelligence Research Institute | Donee periodic update | AI safety | MIRI gives an update on its 2019 fundraiser, going into several reasons the total amount of money raised in the fundraiser ($601,120) was less than the amounts raised in 2017 and 2018. Reasons listed include: (1) lower value of cryptocurrency than in 2017, (2) the nondisclosed-by-default policy making it harder for potential donors to evaluate research, (3) changes to US tax law in 2018 that may encourage donation bunching, (4) fewer counterfactual matching opportunities for donations, (5) possible donor perception of diminishing returns on marginal donations, (6) variation driven by fluctuation in amounts from larger donors, (7) former earning-to-give donors moving to direct work, and (8) urgent needs for funds expressed by MIRI to donors in previous years, causing a front-loading of donations in those years. The post ends by saying that although the fundraiser raised less than expected, MIRI appreciates the donor support and will be able to pursue the majority of its growth plans.
MIRI paid in to Epstein's network of social legitimacy | 2019-12-24 | Jeffrey Epstein | Machine Intelligence Research Institute | Miscellaneous commentary | AI safety | The blog post discusses donations made by convicted sex offender Jeffrey Epstein to MIRI, and the ethics of MIRI accepting the money. It links to https://t.umblr.com/redirect?z=https%3A%2F%2Fprojects.propublica.org%2Fnonprofits%2Fdisplay_990%2F582565917%2F2011_01_EO%252F58-2565917_990_200912&t=NzAxNDk3OWUzMzkwMTQyN2YxYzY1NGY4M2EzYjk2NDY3Y2FhNDQ2OCxjMmU4MDVmNTk2OTRkMmE1MjNmMWI5OTUzMTBjMjI5OGNmMmMxMThm for proof of donation, and includes a screenshot of a tweet by Eliezer Yudkowsky that no longer appears to be available. The post suggests a criterion for accepting "bad" money: that, after accepting it, MIRI can make sure it confers no additional social legitimacy on the donor.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Ben Hoskin | Effective Altruism Forum | Ben Hoskin Effective Altruism Funds: Long-Term Future Fund Open Philanthropy Survival and Flourishing Fund | Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk Ought OpenAI AI Safety Camp Future of Life Institute AI Impacts Global Priorities Institute Foundational Research Institute Median Group Center for Security and Emerging Technology Leverhulme Centre for the Future of Intelligence Berkeley Existential Risk Initiative AI Pulse Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of which accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
Suggestions for Individual Donors from Open Philanthropy Staff - 20192019-12-18Holden Karnofsky Open PhilanthropyChloe Cockburn Jesse Rothman Michelle Crentsil Amanda Hungerford Lewis Bollard Persis Eskander Alexander Berger Chris Somerville Heather Youngs Claire Zabel National Council for Incarcerated and Formerly Incarcerated Women and Girls Life Comes From It Worth Rises Wild Animal Initiative Sinergia Animal Center for Global Development International Refugee Assistance Project California YIMBY Engineers Without Borders 80,000 Hours Centre for Effective Altruism Future of Humanity Institute Global Priorities Institute Machine Intelligence Research Institute Ought Donation suggestion listCriminal justice reform|Animal welfare|Global health and development|Migration policy|Effective altruism|AI safetyContinuing an annual tradition started in 2015, Open Philanthropy Project staff share suggestions for places that people interested in specific cause areas may consider donating. The sections are roughly based on the focus areas used by Open Phil internally, with the contributors to each section being the Open Phil staff who work in that focus area. Each recommendation includes a "Why we recommend it" or "Why we suggest it" section, and with the exception of the criminal justice reform recommendations, each recommendation includes a "Why we haven't fully funded it" section. Section 5, Assorted recommendations by Claire Zabel, includes a list of organizations within the purview of the Committee for Effective Altruism Support; the section is approved by the committee and represents its views.
MIRI’s 2019 Fundraiser2019-12-02Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyMIRI announces its 2019 fundraiser, with a fundraising target of $1 million. The blog post describes MIRI's projected budget, and provides more details on MIRI's activities in 2019, including (1) workshops and scaling up, and (2) research and write-ups. Regarding research, the blog post reaffirms continuation of the nondisclosure-by-default policy announced in 2018 at https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ The post is link-posted to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/LCApwxbdX4njYzgdr/miri-s-2019-fundraiser (GW, IR)
Thanks for putting up with my follow-up questions. Out of the areas you mention, I'd be very interested in ... (GW, IR)2019-09-10Ryan Carey Effective Altruism ForumFounders Pledge Open Philanthropy OpenAI Machine Intelligence Research Institute Broad donor strategyAI safety|Global catastrophic risks|Scientific research|PoliticsRyan Carey replies to John Halstead's question on what Founders Pledge should research. He first gives the areas within Halstead's list that he is most excited about. He also discusses three areas not explicitly listed by Halstead: (a) promotion of effective altruism, (b) scholarships for people working on high-impact research, (c) more on AI safety -- specifically, funding low-mid prestige figures with strong AI safety interest (what he calls "highly-aligned figures"), a segment that he claims the Open Philanthropy Project is neglecting, with the exception of MIRI and a couple of individuals.
New grants from the Open Philanthropy Project and BERI2019-04-01Rob Bensinger Machine Intelligence Research InstituteOpen Philanthropy Berkeley Existential Risk Initiative Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces new grants to it, including a two-year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 totaling $2,112,500 from the Open Philanthropy Project, with half of it disbursed in 2019 and the other half disbursed in 2020. The amount disbursed in 2019 (a little over $1.06 million) is on top of the $1.25 million already committed by the Open Philanthropy Project as part of the three-year $3.75 million grant https://intelligence.org/2017/11/08/major-grant-open-phil/ The $1.06 million in 2020 may be supplemented by further grants from the Open Philanthropy Project. The grant size from the Open Philanthropy Project was determined by the Committee for Effective Altruism Support. The post also notes that the Open Philanthropy Project plans to determine future grant sizes using the Committee. MIRI expects the grant money to play an important role in decision-making as it executes on growing its research team as described in its 2018 strategy update post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/
Committee for Effective Altruism Support2019-02-27Open PhilanthropyOpen Philanthropy Centre for Effective Altruism Berkeley Existential Risk Initiative Center for Applied Rationality Machine Intelligence Research Institute Future of Humanity Institute Broad donor strategyEffective altruism|AI safetyThe document announces a new approach to setting grant sizes for the largest grantees who are "in the effective altruism community", including both organizations explicitly focused on effective altruism and other organizations that are favorites of, and deeply embedded in, the community, including organizations working in AI safety. The committee comprises Open Philanthropy staff and trusted outside advisors who are knowledgeable about the relevant organizations. Committee members review materials submitted by the organizations; gather to discuss considerations, including room for more funding; and submit “votes” on how they would allocate a set budget between a number of grantees (they can also vote to save part of the budget for later giving). Votes of committee members are averaged to arrive at the final grant amounts. Example grants whose sizes were determined by the committee are the two-year support to the Machine Intelligence Research Institute (MIRI) https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 and the one-year support to the Centre for Effective Altruism (CEA) https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2019
Our 2018 Fundraiser Review2019-02-11Colm Ó Riain Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI gives an update on its 2018 fundraiser. Key topics discussed include four types of donation matching programs that MIRI benefited from: (1) WeTrust Spring's ETH-matching event, (2) Facebook's Giving Tuesday event (with a link to https://donations.fb.com/giving-tuesday/), (3) the Double Up Drive challenge, and (4) corporate matching.
EA orgs are trying to fundraise ~$10m - $16m (GW, IR)2019-01-06Hauke Hillebrandt Effective Altruism Forum Centre for Effective Altruism Effective Altruism Foundation Machine Intelligence Research Institute Forethought Foundation for Global Priorities Research Sentience Institute Alliance to Feed the Earth in Disasters Global Catastrophic Risk Institute Rethink Priorities EA Hotel 80,000 Hours Rethink Charity Miscellaneous commentaryThe blog post links to and discusses the spreadsheet https://docs.google.com/spreadsheets/d/10zU6gp_H_zuvlZ2Vri-epSK0_urbcmdS-5th3mXQGXM/edit which tabulates various organizations and their fundraising targets, along with quotes and links to fundraising posts. The blog post itself makes three points, the last of which is that the EA community is relatively more funding-constrained again.
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR)2019-01-06Avi Norowitz Effective Altruism ForumAvi Norowitz William Kiely Against Malaria Foundation Malaria Consortium GiveWell Effective Altruism Funds Alliance to Feed the Earth in Disasters Effective Animal Advocacy Fund The Humane League The Good Food Institute Animal Charity Evaluators Machine Intelligence Research Institute Faunalytics Wild-Animal Suffering Research GiveDirectly Center for Applied Rationality Effective Altruism Foundation Cool Earth Schistosomiasis Control Initiative New Harvest Evidence Action Centre for Effective Altruism Animal Equality Compassion in World Farming USA Innovations for Poverty Action Global Catastrophic Risk Institute Future of Life Institute Animal Charity Evaluators Recommended Charity Fund Sightsavers The Life You Can Save One Step for Animals Helen Keller International 80,000 Hours Berkeley Existential Risk Initiative Vegan Outreach Encompass Iodine Global Network Otwarte Klatki Charity Science Mercy For Animals Coalition for Rainforest Nations Fistula Foundation Sentience Institute Better Eating International Forethought Foundation for Global Priorities Research Raising for Effective Giving Clean Air Task Force The END Fund Miscellaneous commentaryThe blog post describes an effort by a number of donors coordinated at https://2018.eagivingtuesday.org/donations to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched till 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR)2018-12-17Ben Hoskin Effective Altruism ForumBen Hoskin Machine Intelligence Research Institute Future of Humanity Institute Center for Human-Compatible AI Centre for the Study of Existential Risk Global Catastrophic Risk Institute Global Priorities Institute Australian National University Berkeley Existential Risk Initiative Ought AI Impacts OpenAI Effective Altruism Foundation Foundational Research Institute Median Group Convergence Analysis Review of current state of cause areaAI safetyCross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR) This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR) The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
MIRI’s 2018 Fundraiser2018-11-26Malo Bourgon Machine Intelligence Research InstituteDan Smith Aaron Merchak Matt Ashton Stephen Chidwick Machine Intelligence Research Institute Donee donation caseAI safetyMIRI announces its 2018 end-of-year fundraising, with Target 1 of $500,000 and Target 2 of $1,200,000. It provides an overview of its 2019 budget and plans to explain the values it has worked out for Target 1 and Target 2. The post also mentions a matching opportunity sponsored by professional poker players Dan Smith, Aaron Merchak, Matt Ashton, and Stephen Chidwick, in partnership with Raising for Effective Giving (REG), which provides matching for donations to MIRI and REG up to $20,000. The post is referenced by Effective Altruism Funds in their grant write-up for a $40,000 grant to MIRI, at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi
My 2018 donations (GW, IR)2018-11-23Vipul Naik Effective Altruism ForumVipul Naik GiveWell top charities Machine Intelligence Research Institute Donor lottery Periodic donation list documentationGlobal health and development|AI safetyThe blog post describes an allocation of $2,000 to GiveWell for regranting to top charities, and $500 each to MIRI and the $500,000 donor lottery. The latter two donations are influenced by Issa Rice, who describes his reasoning at https://issarice.com/donation-history#section-3 Vipul Naik's post explains the reason for donating now rather than earlier or later, the reason for donating this amount, and the selection of recipients. The post is also cross-posted at https://vipulnaik.com/blog/my-2018-donations/ and https://github.com/vipulnaik/working-drafts/blob/master/eaf/my-2018-donations.md
2018 Update: Our New Research Directions2018-11-22Nate Soares Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI executive director Nate Soares explains the new research directions being followed by MIRI, and how they differ from the original Agent Foundations agenda. The post also talks about how MIRI is being cautious in terms of sharing technical details of its research, until there is greater internal clarity on what findings need to be developed further, and what findings should be shared with what group. The post ends with guidance for people interested in joining the MIRI team to further the technical agenda. The post is referenced by Effective Altruism Funds in their grant write-up for a $40,000 grant to MIRI, at https://app.effectivealtruism.org/funds/far-future/payouts/3JnNTzhJQsu4yQAYcKceSi The nondisclosure-by-default section of the post is also referenced by Ben Hoskin in https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison#MIRI__The_Machine_Intelligence_Research_Institute (GW, IR) and also cited by him as one of the reasons he is not donating to MIRI this year (general considerations related to this are described at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison#Openness (GW, IR) in the same post). Issa Rice also references these concerns in his donation decision write-up for 2018 at https://issarice.com/donation-history#section-3 but nonetheless decides to allocate $500 to MIRI.
Opportunities for individual donors in AI safety (GW, IR)2018-03-12Alex Flint Effective Altruism Forum Machine Intelligence Research Institute Future of Humanity Institute Review of current state of cause areaAI safetyAlex Flint discusses the history of AI safety funding, and suggests some heuristics for individual donors based on what he has seen to be successful in the past.
Fundraising success!2018-01-10Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces the success of its fundraiser, providing information on its top donors, and thanking everybody who contributed.
Where the ACE Staff Members Are Giving in 2017 and Why2017-12-26Allison Smith Animal Charity EvaluatorsJon Bockman Allison Smith Toni Adleberg Sofia Davis-Fogel Kieran Greig Jamie Spurgeon Erika Alonso Eric Herboso Gina Stuessy Animal Charity Evaluators The Good Food Institute Vegan Outreach A Well-Fed World Better Eating International Encompass Direct Action Everywhere Animal Charity Evaluators Recommended Charity Fund Against Malaria Foundation Animal Equality The Nonhuman Rights Project AnimaNaturalis Internacional The Humane League GiveDirectly Food Empowerment Project Mercy For Animals New Harvest StrongMinds Centre for Effective Altruism Effective Altruism Funds Machine Intelligence Research Institute Donor lottery Sentience Institute Wild-Animal Suffering Research Periodic donation list documentationAnimal welfare|AI safety|Global health and development|Effective altruismContinuing an annual tradition started in 2016, Animal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2017. Donation amounts are not disclosed, likely by policy.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20172017-12-21Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Nick Beckstead Daniel Dewey Center for International Security and Cooperation Johns Hopkins Center for Health Security Good Call Court Watch NOLA Compassion in World Farming USA Wild-Animal Suffering Research Effective Altruism Funds Donor lottery Future of Humanity Institute Center for Human-Compatible AI Machine Intelligence Research Institute Berkeley Existential Risk Initiative Centre for Effective Altruism 80,000 Hours Alliance to Feed the Earth in Disasters Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Criminal justice reformOpen Philanthropy Project staff give suggestions on places that might be good for individuals to donate to. Each suggestion includes a section "Why I suggest it", a section explaining why the Open Philanthropy Project has not funded (or not fully funded) the opportunity, and links to relevant writeups.
2017 AI Safety Literature Review and Charity Comparison (GW, IR)2017-12-20Ben Hoskin Effective Altruism ForumBen Hoskin Machine Intelligence Research Institute Future of Humanity Institute Global Catastrophic Risk Institute Centre for the Study of Existential Risk AI Impacts Center for Human-Compatible AI Center for Applied Rationality Future of Life Institute 80,000 Hours Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) -- a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
I Vouch For MIRI2017-12-17Zvi Mowshowitz Zvi Mowshowitz Machine Intelligence Research Institute Single donation documentationAI safetyMowshowitz explains why he made his $10,000 donation to MIRI, and makes the case for others to support MIRI. He believes that MIRI understands the hardness of the AI safety problem, is focused on building solutions for the long term, and has done humanity a great service through its work on functional decision theory.
MIRI 2017 Fundraiser and Strategy Update (GW, IR)2017-12-15Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyMIRI provides an update on its fundraiser and its strategy in a general-interest forum for people interested in effective altruism. They say the fundraiser is already going quite well, but believe they can still use marginal funds well to expand more.
End-of-the-year matching challenge!2017-12-14Rob Bensinger Machine Intelligence Research InstituteChristian Calderon Marius van Voorden Machine Intelligence Research Institute Donee donation caseAI safetyMIRI gives an update on how its fundraising efforts are going, noting that it has met its first fundraising target, listing two major donations (Christian Calderon: $367,574 and Marius van Voorden: $59K), and highlighting the 2017 charity drive where donations up to $1 million to a list of charities including MIRI will be matched.
AI: a Reason to Worry, and to Donate2017-12-10Jacob Falkovich Jacob Falkovich Machine Intelligence Research Institute Future of Life Institute Center for Human-Compatible AI Berkeley Existential Risk Initiative Future of Humanity Institute Effective Altruism Funds Single donation documentationAI safetyFalkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund).
MIRI’s 2017 Fundraiser2017-12-01Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyDocument provides cumulative target amounts for 2017 fundraiser ($625,000 Target 1, $850,000 Target 2, $1,250,000 Target 3) along with what MIRI expects to accomplish at each target level. Funds raised from the Open Philanthropy Project and an anonymous cryptocurrency donor (see https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/ for more) are identified as reasons for the greater financial security and more long-term and ambitious planning.
Claim: if you work in an AI alignment org funded by donations, you should not own much cryptocurrency, since much of your salary comes from people who do2017-11-18Daniel Filan Machine Intelligence Research Institute Miscellaneous commentaryAI safetyThe post by Daniel Filan claims that organizations working in AI risk get a large share of their donations from cryptocurrency investors, so their fundraising success is tied to the success of cryptocurrency. For better diversification, therefore, people working at such organizations should not own cryptocurrency. The post has a number of comments from Malo Bourgon of the Machine Intelligence Research Institute, which is receiving a lot of money from cryptocurrency investors in the months surrounding the post date
Superintelligence Risk Project: Conclusion2017-09-15Jeff Kaufman Machine Intelligence Research Institute Review of current state of cause areaAI safetyThis is the concluding post (with links to all earlier posts) of a month-long investigation by Jeff Kaufman into AI risk. Kaufman investigates by reading the work of, and talking with, both people who work in AI risk reduction and people who work on machine learning and AI in industry and academia, but are not directly involved with safety. His conclusion is that there likely should continue to be some work on AI risk reduction, and this should be respected by people working on AI. He is not confident about how the current level and type of work on AI risk compares with the optimal level and type of such work
A major grant from the Open Philanthropy Project2017-09-08Malo Bourgon Machine Intelligence Research InstituteOpen Philanthropy Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces that it has received a three-year grant at $1.25 million per year from the Open Philanthropy Project, and links to the announcement from Open Phil at https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and notes "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."
I’ve noticed that this misconception is still floating around2017-08-30Rob Bensinger Facebook Machine Intelligence Research Institute Reasoning supplementAI safetyPost notes an alleged popular misconception that the reason to focus on AI risk is that it is low-probability but high-impact, but MIRI researchers assign a medium-to-high probability of AI risk in the medium-term future.
My current thoughts on MIRI’s highly reliable agent design work (GW, IR)2017-07-07Daniel Dewey Effective Altruism ForumOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discusses thoughts on the MIRI work on highly reliable agent design. Dewey is looking into the subject to inform Open Philanthropy Project grantmaking to MIRI specifically and for AI risk in general; the post reflects his own opinions that could affect Open Phil decisions. See https://groups.google.com/forum/#!topic/long-term-world-improvement/FeZ_h2HXJr0 for critical discussion, in particular the comments by Sarah Constantin.
Updates to the research team, and a major donation2017-07-04Malo Bourgon Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI announces a surprise $1.01 million donation from an Ethereum cryptocurrency investor (2017-05-30) as well as updates related to team and fundraising.
Four quantiative models, aggregation, and final decision2017-05-20Tom Sittler Oxford Prioritisation ProjectOxford Prioritisation Project 80,000 Hours Animal Charity Evaluators Machine Intelligence Research Institute StrongMinds Single donation documentationEffective altruism/career adviceThe post describes how the Oxford Prioritisation Project compared its four finalists (80000 Hours, Animal Charity Evaluators, Machine Intelligence Research Institute, and StrongMinds) by building quantitative models for each, including modeling of uncertainties. Based on these quantitative models, 80000 Hours was chosen as the winner. Also posted to http://effective-altruism.com/ea/1ah/four_quantiative_models_aggregation_and_final/ for comments
A model of the Machine Intelligence Research Institute2017-05-20Sindy Li Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyThe post describes a quantitative model of the Machine Intelligence Research Institute, available at https://www.getguesstimate.com/models/8789 on Guesstimate. Also posted to http://effective-altruism.com/ea/1ae/a_model_of_the_machine_intelligence_research/ for comments
2017 Updates and Strategy2017-04-30Rob Bensinger Machine Intelligence Research Institute Machine Intelligence Research Institute Donee periodic updateAI safetyMIRI provides updates on its progress as an organization and outlines its strategy and budget for the coming year. A key update is that recent developments in AI have led them to slightly increase their estimate of the probability of AGI arriving before 2035. MIRI has also been in touch with researchers at FAIR, DeepMind, and OpenAI.
AI Safety: Is it worthwhile for us to look further into donating into AI research?2017-03-11Qays Langan-Dathi Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Review of current state of cause areaAI safetyThe post concludes: "In conclusion my answer to my main point is, yes. There is a good chance that AI risk prevention is the most cost effective focus area for saving the most amount of lives with or without regarding future human lives."
Final decision: Version 02017-03-01Tom Sittler Oxford Prioritisation ProjectOxford Prioritisation Project Against Malaria Foundation Machine Intelligence Research Institute The Good Food Institute StrongMinds Reasoning supplementVersion 0 of a decision process for deciding which charity to grant 10,000 UK pounds to. The result was a tie between the Machine Intelligence Research Institute and StrongMinds. See http://effective-altruism.com/ea/187/oxford_prioritisation_project_version_0/ for a cross-post with comments
Konstantin Sietzy: current view, StrongMinds2017-02-21Konstantin Sietzy Oxford Prioritisation ProjectOxford Prioritisation Project StrongMinds Machine Intelligence Research Institute Evaluator review of doneeMental healthKonstantin Sietzy explains why StrongMinds is the best charity in his view. Also lists Machine Intelligence Research Institute as the runner-up
Daniel May: current view, Machine Intelligence Research Institute2017-02-15Daniel May Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Evaluator review of doneeAI safetyDaniel May evaluates the Machine Intelligence Research Institute and describes his reasons for considering it the best donation opportunity
Tom Sittler: current view, Machine Intelligence Research Institute2017-02-08Tom Sittler Oxford Prioritisation ProjectOxford Prioritisation Project Machine Intelligence Research Institute Future of Humanity Institute Evaluator review of doneeAI safetyTom Sittler explains why he considers the Machine Intelligence Research Institute the best donation opportunity. Cites http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/ http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ and mentions Michael Dickens model as a potential reason to update
Changes in funding in the AI safety field2017-02-01Sebastian Farquhar Centre for Effective Altruism Machine Intelligence Research Institute Center for Human-Compatible AI Leverhulme Centre for the Future of Intelligence Future of Life Institute Future of Humanity Institute OpenAI MIT Media Lab Review of current state of cause areaAI safetyThe post reviews AI safety funding from 2014 to 2017 (projections for 2017). Cross-posted on EA Forum at http://effective-altruism.com/ea/16s/changes_in_funding_in_the_ai_safety_field/
Belief status: off-the-cuff thoughts!2017-01-19Vipul Naik Facebook Machine Intelligence Research Institute Reasoning supplementAI safetyThe post argues that (lack of) academic endorsement of the work done by MIRI should not be an important factor in evaluating MIRI, offering three reasons. Commenters include Rob Bensinger, Research Communications Manager at MIRI.
The effective altruism guide to donating this giving season2016-12-28Robert Wiblin 80,000 Hours Blue Ribbon Study Panel on Biodefense Cool Earth Alliance for Safety and Justice Cosecha Centre for Effective Altruism 80,000 Hours Animal Charity Evaluators Compassion in World Farming USA Against Malaria Foundation Schistosomiasis Control Initiative StrongMinds Ploughshares Fund Machine Intelligence Research Institute Future of Humanity Institute Evaluator consolidated recommendation listBiosecurity and pandemic preparedness|Global health and development|Animal welfare|AI risk|Global catastrophic risks|Effective altruism/movement growthRobert Wiblin draws on a number of annual charity evaluations and reviews, as well as staff donation writeups, from sources such as GiveWell and Animal Charity Evaluators, to provide an "effective altruism guide" for 2016 Giving Season donations.
Where the ACE Staff Members are Giving in 2016 and Why2016-12-23Leah Edgerton Animal Charity EvaluatorsAllison Smith Jacy Reese Toni Adleberg Gina Stuessy Kieran Greig Eric Herboso Erika Alonso Animal Charity Evaluators Animal Equality Vegan Outreach Act Asia Faunalytics Farm Animal Rights Movement Sentience Politics Direct Action Everywhere The Humane League The Good Food Institute Collectively Free Planned Parenthood Future of Life Institute Future of Humanity Institute GiveDirectly Machine Intelligence Research Institute The Humane Society of the United States Farm Sanctuary StrongMinds Periodic donation list documentationAnimal welfare|AI safety|Global catastrophic risksAnimal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2016. Donation amounts are not disclosed, likely by policy.
Suggestions for Individual Donors from Open Philanthropy Project Staff - 20162016-12-14Holden Karnofsky Open PhilanthropyJaime Yassif Chloe Cockburn Lewis Bollard Daniel Dewey Nick Beckstead Blue Ribbon Study Panel on Biodefense Alliance for Safety and Justice Cosecha Animal Charity Evaluators Compassion in World Farming USA Machine Intelligence Research Institute Future of Humanity Institute 80,000 Hours Ploughshares Fund Donation suggestion listAnimal welfare|AI safety|Biosecurity and pandemic preparedness|Effective altruism|Migration policyOpen Philanthropy Project staff describe suggestions for best donation opportunities for individual donors in their specific areas.
2016 AI Risk Literature Review and Charity Comparison (GW, IR)2016-12-13Ben Hoskin Effective Altruism ForumBen Hoskin Machine Intelligence Research Institute Future of Humanity Institute OpenAI Center for Human-Compatible AI Future of Life Institute Centre for the Study of Existential Risk Leverhulme Centre for the Future of Intelligence Global Catastrophic Risk Institute Global Priorities Project AI Impacts Xrisks Institute X-Risks Net Center for Applied Rationality 80,000 Hours Raising for Effective Giving Review of current state of cause areaAI safetyThe lengthy blog post covers all the published work of prominent organizations focused on AI risk. References https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part of it but notes the absence of information on the many other orgs. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."
EAs write about where they give2016-12-09Julia Wise Effective Altruism ForumBlake Borgeson Eva Vivalt Ben Kuhn Alexander Gordon-Brown and Denise Melchin Elizabeth Van Nostrand Machine Intelligence Research Institute Center for Applied Rationality AidGrade Charity Science: Health 80,000 Hours Centre for Effective Altruism Tostan Periodic donation list documentationGlobal health and development, AI riskJulia Wise got submissions from multiple donors about their donation plans and put them together in a single post. The goal was to cover people outside of organizations that publish such posts for their employees
CEA Staff Donation Decisions 20162016-12-06Sam Deere Centre for Effective AltruismWilliam MacAskill Michelle Hutchinson Tara MacAulay Alison Woodman Seb Farquhar Hauke Hillebrandt Marinella Capriati Sam Deere Max Dalton Larissa Hesketh-Rowe Michael Page Stefan Schubert Pablo Stafforini Amy Labenz Centre for Effective Altruism 80,000 Hours Against Malaria Foundation Schistosomiasis Control Initiative Animal Charity Evaluators Charity Science Health New Incentives Project Healthy Children Deworm the World Initiative Machine Intelligence Research Institute StrongMinds Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Effective Altruism Foundation Sci-Hub Vote.org The Humane League Foundational Research Institute Periodic donation list documentationCentre for Effective Altruism (CEA) staff describe their donation plans. The donation amounts are not disclosed.
Why I'm donating to MIRI this year (GW, IR)2016-11-30Owen Cotton-Barratt Owen Cotton-Barratt Machine Intelligence Research Institute Single donation documentationAI safetyPrimary interest is in existential risk. Cited CoI and other reasons for not donating to own employer, Centre for Effective Altruism. Notes disagreements with MIRI, citing http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#research but highlights need for epistemic humility.
Crunch time!! The 2016 fundraiser for the AI safety group I work at, MIRI, is going a lot slower than expected2016-10-25Rob Bensinger Facebook Machine Intelligence Research Institute Donee donation caseAI safetyRob Bensinger, Research Communications Director at MIRI, takes to his personal Facebook to ask people to chip in for the MIRI fundraiser, which is going slower than he and MIRI expected, and may not meet its target. The final comment by Bensinger notes that $582,316 out of the target of $750,000 was raised, and that about $260k of that was raised after his post, so he credits the final push for helping MIRI move closer to its fundraising goals.
Ask MIRI Anything (AMA) (GW, IR)2016-10-11Rob Bensinger Machine Intelligence Research Institute Machine Intelligence Research Institute Donee AMAAI safetyRob Bensinger, the Research Communications Manager at MIRI, hosts an Ask Me Anything (AMA) on the Effective Altruism Forum during the October 2016 Fundraiser.
MIRI’s 2016 Fundraiser2016-09-16Nate Soares Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyMIRI announces its single 2016 fundraiser; unlike previous years, when it conducted two fundraisers, it is conducting just one this time, in the fall.
Some Key Ways in Which I've Changed My Mind Over the Last Several Years2016-09-06Holden Karnofsky Open Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Reasoning supplementAI safetyIn this 16-page Google Doc, Holden Karnofsky, Executive Director of the Open Philanthropy Project, lists three issues he has changed his mind about: (1) AI safety (he considers it more important now), (2) effective altruism community (he takes it more seriously now), and (3) general properties of promising ideas and interventions (he considers feedback loops less necessary than he used to, and finding promising ideas through abstract reasoning more promising). The document is linked to and summarized in the blog post https://www.openphilanthropy.org/blog/three-key-issues-ive-changed-my-mind-about
Machine Intelligence Research Institute — General Support2016-09-06Open Philanthropy Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyOpen Phil writes about the grant at considerable length, more than it usually does. This is because it says that it has found the investigation difficult and believes that others may benefit from its process. The writeup also links to reviews of MIRI research by AI researchers, commissioned by Open Phil: http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf (the reviews are anonymized). The date is based on the announcement date of the grant, see https://groups.google.com/a/openphilanthropy.org/forum/#!topic/newly.published/XkSl27jBDZ8 for the email.
Anonymized Reviews of Three Recent Papers from MIRI’s Agent Foundations Research Agenda (PDF)2016-09-06Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyReviews of the technical work done by MIRI, solicited and compiled by the Open Philanthropy Project as part of its decision process behind a grant for general support to MIRI documented at http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support (grant made 2016-08, announced 2016-09-06).
Here are the biggest things I got wrong in my attempts at effective altruism over the last ~3 years.2016-05-24Buck Shlegeris Buck Shlegeris Open Philanthropy Vegan Outreach Machine Intelligence Research Institute Broad donor strategyGlobal health|Animal welfare|AI safetyBuck Shlegeris, reflecting on his past three years as an effective altruist, identifies three mistakes he made: (1) "I thought leafleting about factory farming was more effective than GiveWell top charities. [...] I probably made this mistake because of emotional bias. I was frustrated by people who advocated for global poverty charities for dumb reasons. [...] I thought that if they really had that belief, they should either save their money just in case we found a great intervention for animals in the future, or donate it to the people who were trying to find effective animal right interventions. I think that this latter argument was correct, but I didn't make it exclusively." (2) "In 2014 and early 2015, I didn't pay as much attention to OpenPhil as I should have. [...] Being wrong about OpenPhil's values is forgivable, but what was really dumb is that I didn't realize how incredibly important it was to my life plan that I understand OpenPhil's values." (3) "I wish I'd thought seriously about donating to MIRI sooner. [...] Like my error #2, this is an example of failing to realize that when there's an unknown which is extremely important to my plans but I'm very unsure about it and haven't really seriously thought about it, I should probably try to learn more about it."
Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity2016-05-06Holden Karnofsky Open PhilanthropyOpen Philanthropy Machine Intelligence Research Institute Future of Humanity Institute Review of current state of cause areaAI safetyIn this blog post that the author says took him over 70 hours to write (see https://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing for the statistic), Holden Karnofsky explains the position of the Open Philanthropy Project on the potential risks and opportunities from AI, and why they are making funding in the area a priority.
Concerning MIRI’s Place in the EA Movement2016-02-17Ozy Brennan Thing of Things Machine Intelligence Research Institute Miscellaneous commentaryAI safetyThe post does not directly evaluate MIRI, but highlights the importance of object-level evaluation of the quality and value of the work done by MIRI. Also thanks MIRI, LessWrong, and Yudkowsky for contributions to the growth of the effective altruist movement.
Where should you donate to have the most impact during giving season 2015?2015-12-24Robert Wiblin 80,000 Hours Against Malaria Foundation Giving What We Can GiveWell AidGrade Effective Altruism Outreach Animal Charity Evaluators Machine Intelligence Research Institute Raising for Effective Giving Center for Applied Rationality Johns Hopkins Center for Health Security Ploughshares Fund Future of Humanity Institute Future of Life Institute Centre for the Study of Existential Risk Charity Science Deworm the World Initiative Schistosomiasis Control Initiative GiveDirectly Evaluator consolidated recommendation listGlobal health and development,Effective altruism/movement growth,Rationality improvement,Biosecurity and pandemic preparedness,AI risk,Global catastrophic risksRobert Wiblin draws on GiveWell recommendations, Animal Charity Evaluators recommendations, Open Philanthropy Project writeups, staff donation writeups and suggestions, as well as other sources (including personal knowledge and intuitions) to come up with a list of places to donate
My Cause Selection: Michael Dickens2015-09-15Michael Dickens Effective Altruism ForumMichael Dickens Machine Intelligence Research Institute Future of Humanity Institute Centre for the Study of Existential Risk Future of Life Institute Open Philanthropy Animal Charity Evaluators Animal Ethics Foundational Research Institute Giving What We Can Charity Science Raising for Effective Giving Single donation documentationAnimal welfare,AI risk,Effective altruismExplanation by Dickens of giving choice for 2015. After some consideration, narrows choice to three orgs: MIRI, ACE, and REG. Finally chooses REG due to weighted donation multiplier
MIRI Fundraiser: Why now matters (GW, IR)2015-07-24Nate Soares Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyCross-posted at LessWrong and on the MIRI blog at https://intelligence.org/2015/07/20/why-now-matters/ -- this post occurs just two months after Soares takes over as MIRI Executive Director. It is a followup to https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/
MIRI’s 2015 Summer Fundraiser!2015-07-17Nate Soares Machine Intelligence Research Institute Machine Intelligence Research Institute Donee donation caseAI safetyMIRI announces its summer fundraiser and links to a number of documents to help donors evaluate it. This is the first fundraiser under new Executive Director Nate Soares, just a couple months after he assumed office.
Tumblr on MIRI2014-10-07Scott Alexander Slate Star Codex Machine Intelligence Research Institute Evaluator review of doneeAI safetyThe blog post is structured as a response to recent criticism of MIRI on Tumblr, but is mainly a guardedly positive assessment of MIRI. In particular, it highlights the important role played by MIRI in elevating the profile of AI risk, citing attention from Stephen Hawking, Elon Musk, Gary Drescher, Max Tegmark, Stuart Russell, and Peter Thiel.
How does MIRI Know it Has a Medium Probability of Success? (GW, IR)2013-08-01Peter Hurford LessWrong Machine Intelligence Research Institute Miscellaneous commentaryAI safetyIn this bleg, Peter Hurford asks why MIRI thinks it has a medium probability of success at achieving the goal of friendly AI (and avoiding unfriendly AI). The post attracts multiple comments from Eliezer Yudkowsky, Carl Shulman, Wei Dai, and others.
Earning to Give vs. Altruistic Career Choice Revisited (GW, IR)2013-06-01Jonah Sinick Jonah Sinick William MacAskill Eliezer Yudkowsky Against Malaria Foundation Machine Intelligence Research Institute GiveWell Maximum Impact Fund Miscellaneous commentaryGlobal health|AI safetyJonah Sinick gives a number of arguments against the view that earning to give is likely to be the most socially valuable path. For contrast, he considers direct work in nonprofits and in other high-impact careers. He talks about the value of direct feedback and the significant difference between what a skilled person and a less skilled person can accomplish with direct work. Sinick draws extensively on his experience working at GiveWell where he evaluated the cost-effectiveness of charities.
Evaluating the feasibility of SI's plan (GW, IR)2013-01-10Joshua Fox LessWrong Machine Intelligence Research Institute Evaluator review of doneeAI safetyThis blog post, co-authored with Kaj Sotala, gives a simplified description of the plan being followed by the Singularity Institute (SI), the former name of the Machine Intelligence Research Institute (MIRI). It is critical of SI for focusing on its "perfect" friendly AI, and suggests that more focus be given to improving the safety of existing systems in development, such as OpenCog. In a reply comment, Eliezer notes that the "heuristic safety" that the blog post suggests focusing on is difficult, that people overestimate the feasibility of heuristic safety ideas, and that trying for a safety approach that seems highly likely to succeed is the best way to guard against safety approaches that are doomed to fail. There is further discussion in the comments from Wei Dai, Gwern, and a cryptography researcher.
Thoughts on the Singularity Institute (SI) (GW, IR)2012-05-11Holden Karnofsky LessWrongOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discussing reasons Holden Karnofsky, co-executive director of GiveWell, does not recommend the Singularity Institute (SI), the historical name for the Machine Intelligence Research Institute. This evaluation would be the starting point for the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI, but Karnofsky and the Open Philanthropy Project would later update in favor of AI safety in general and MIRI in particular; this evolution is described in https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit
SIAI - An Examination (GW, IR)2011-05-02Brandon Reinhart LessWrongBrandon Reinhart Machine Intelligence Research Institute Evaluator review of doneeAI safetyPost discussing initial investigation into the Singularity Institute for Artificial Intelligence (SIAI), the former name of Machine Intelligence Research Institute (MIRI), with the intent of deciding whether to donate. Final takeaway is that it was a worthy donation target, though no specific donation is announced in the post. See http://lesswrong.com/r/discussion/lw/5fo/siai_fundraising/ for an earlier draft of the post (along with a number of comments that were incorporated into the official version).
Singularity Institute for Artificial Intelligence2011-04-30Holden Karnofsky GiveWellOpen Philanthropy Machine Intelligence Research Institute Evaluator review of doneeAI safetyIn this email thread on the GiveWell mailing list, Holden Karnofsky gives his views on the Singularity Institute for Artificial Intelligence (SIAI), the former name for the Machine Intelligence Research Institute (MIRI). The reply emails include a discussion of how much weight to give to, and what to learn from, the support for MIRI by Peter Thiel, a wealthy early MIRI backer. In the final email in the thread, Holden Karnofsky includes an audio recording with Jaan Tallinn, another wealthy early MIRI backer. This analysis likely influences the review https://www.lesswrong.com/posts/6SGqkCgHuNr7d4yJm/thoughts-on-the-singularity-institute-si (GW, IR) published by Karnofsky next year, as well as the initial position of the Open Philanthropy Project (a GiveWell spin-off grantmaker) toward MIRI.
The Singularity Institute's Scary Idea (and Why I Don't Buy It)2010-10-29Ben Goertzel Machine Intelligence Research Institute Evaluator review of doneeAI safetyBen Goertzel, who previously worked as Director of Research at MIRI (then called the Singularity Institute for Artificial Intelligence (SIAI)) articulates its "Scary Idea" and explains why he does not believe in it. His articulation of the Scary Idea: "If I or anybody else actively trying to build advanced AGI succeeds, we're highly likely to cause an involuntary end to the human race."
Funding safe AGI2009-08-03Shane Legg Machine Intelligence Research Institute Evaluator review of doneeAI safetyShane Legg, who had previously received a $10,000 grant from the Singularity Institute for Artificial Intelligence (SIAI) and would go on to co-found DeepMind, talks about SIAI and AI safety. He says that, probably, nobody knows how to deal with the problem of constructing a safe AGI, but SIAI is, in relative terms, the best. However, he provides some suggestions on how it could encourage and monitor AI development more closely rather than trying to build everything on its own. SIAI would later change its name to the Machine Intelligence Research Institute (MIRI).
'Technology Is at the Center' Entrepreneur and philanthropist Peter Thiel on liberty and scientific progress2008-05-01Ronald Bailey Reason MagazinePeter Thiel Machine Intelligence Research Institute Methuselah Foundation Broad donor strategyAI safety|Scientific research/longevity researchIn an interview with Ronald Bailey, the science correspondent of Reason Magazine, Peter Thiel talks about his political ideology of libertarianism as well as his philanthropic activities. He talks about two areas that he is donating heavily in: accelerating a safe technological singularity (through donations to the Singularity Institute) and anti-aging research (through donations to the Methuselah Foundation).

Full list of donations in reverse chronological order (480 donations)

Graph of top 10 donors by amount, showing the timeframe of donations

Graph of donations and their timeframes
DonorAmount (current USD)Amount rank (out of 480)Donation dateCause areaURLInfluencerNotes
Anonymous MIRI cryptocurrency donor15,592,829.0012021-05-13AI safetyhttps://intelligence.org/2021/05/13/two-major-donations/-- Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: The blog post announcing the donation says that the donor "previously donated $1.01M in ETH to MIRI in 2017."

Other notes: The blog post announcing the donation says: "Their amazingly generous new donation comes in the form of 3001 MKR, governance tokens used in MakerDAO, a stablecoin project on the Ethereum blockchain. MIRI liquidated the donated MKR for $15,592,829 after receiving it. With this donation, the anonymous donor becomes our largest all-time supporter." Currency info: donation given as 3,001.00 MKR (conversion done via donee calculation); announced: 2021-05-13.
Vitalik Buterin4,001,854.5032021-05-12AI safetyhttps://etherscan.io/tx/0x949bc96c02090165ccb2c180dc105ce6592514576dced024356402248844836d-- Intended use of funds (category): Organizational general support

Other notes: A blog post https://intelligence.org/2021/05/13/two-major-donations/ by the Machine Intelligence Research Institute the next day describes the donation and lists the USD amount as $4,378,159. Currency info: donation given as 1,050.00 ETH (conversion done via Etherscan.io); announced: 2021-05-12.
Jaan Tallinn563,000.00112020-12-17AI safetyhttps://jaan.online/philanthropy/donations.htmlOliver Habryka Eric Rogstad Donation process: Part of the Survival and Flourishing Fund's 2020 H2 grants https://survivalandflourishing.fund/sff-2020-h2-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." The recommended grant amount was $543,000 but the actual grant made was for $563,000.

Intended use of funds (category): Organizational general support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount recommended by the S-process is $543,000, but the actual grant amount is $563,000 ($20,000 higher).

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fourth grant round. Grants to MIRI had also been made in the third round (2020 H1).

Other notes: Although the Survival and Flourishing Fund and Jed McCaleb also participate as donors in this round, neither of them makes a grant to MIRI.
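The S-process described in the entries above allocates funds by comparing marginal utility functions across applications. As a rough illustration only (my own toy sketch, not SFF's actual implementation; the grantee names and per-chunk utility numbers below are hypothetical), a budget split into fixed-size chunks can be allocated greedily against declining marginal utilities:

```python
# Toy sketch of marginal-utility-based grant allocation (hypothetical inputs;
# the real S-process also weights recommenders and allows funder adjustments).
import heapq

def allocate(budget_chunks, marginal_utilities):
    """Greedily assign budget chunks to whichever grantee currently has the
    highest marginal utility. marginal_utilities maps each grantee to a
    decreasing list of per-chunk utilities."""
    # Max-heap via negated utilities: (-utility, grantee, chunk index).
    heap = [(-u[0], g, 0) for g, u in marginal_utilities.items() if u]
    heapq.heapify(heap)
    grants = {g: 0 for g in marginal_utilities}
    for _ in range(budget_chunks):
        if not heap:
            break  # every grantee's utility schedule is exhausted
        neg_u, g, i = heapq.heappop(heap)
        grants[g] += 1
        if i + 1 < len(marginal_utilities[g]):
            heapq.heappush(heap, (-marginal_utilities[g][i + 1], g, i + 1))
    return grants  # chunks allocated per grantee

# Example: 5 chunks split between two hypothetical grantees "A" and "B".
grants = allocate(5, {"A": [10, 8, 2], "B": [9, 3]})  # -> {"A": 3, "B": 2}
```

Because each grantee's marginal utility declines with funding, the greedy rule naturally spreads a budget across several grantees rather than funding one to exhaustion, which matches the multi-grantee outcomes seen in the SFF rounds above.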
Jaan Tallinn280,000.00172020-06-11AI safetyhttps://jaan.online/philanthropy/donations.htmlSurvival and Flourishing Fund Alex Zhu Andrew Critch Jed McCaleb Oliver Habryka Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round and the first with grants to MIRI.

Donor retrospective of the donation: A further grant from Jaan Tallinn to MIRI (see https://survivalandflourishing.fund/sff-2020-h2-recommendations in 2020 H2) suggests continued satisfaction with the grantee.

Other notes: The grant round also includes grants from the Survival and Flourishing Fund ($20,000) and Jed McCaleb ($40,000) to the same grantee (MIRI). Percentage of total donor spend in the corresponding batch of donations: 30.43%.
Survival and Flourishing Fund20,000.001012020-06-09AI safetyhttps://jaan.online/philanthropy/donations.htmlAlex Zhu Andrew Critch Jed McCaleb Oliver Habryka Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round and the first with grants to MIRI.

Other notes: The grant round also includes grants from Jaan Tallinn ($280,000) and Jed McCaleb ($40,000) to the same grantee (MIRI). Percentage of total donor spend in the corresponding batch of donations: 3.07%.
Effective Altruism Funds: Long-Term Future Fund100,000.00352020-04-14AI safetyhttps://funds.effectivealtruism.org/funds/payouts/april-2020-long-term-future-fund-grants-and-recommendationsMatt Wage Helen Toner Oliver Habryka Adam Gleave Intended use of funds (category): Organizational general support

Other notes: In the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ MIRI mentions the grant along with a $7.7 million grant from the Open Philanthropy Project and a $300,000 grant from Berkeley Existential Risk Initiative. Percentage of total donor spend in the corresponding batch of donations: 20.48%.
Jed McCaleb40,000.00632020-04AI safetyhttps://survivalandflourishing.fund/sff-2020-h1-recommendationsSurvival and Flourishing Fund Alex Zhu Andrew Critch Jed McCaleb Oliver Habryka Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round and the first with grants to MIRI.

Other notes: The grant round also includes grants from the Survival and Flourishing Fund ($20,000) and Jaan Tallinn ($280,000) to the same grantee (MIRI). Percentage of total donor spend in the corresponding batch of donations: 16.00%.
Berkeley Existential Risk Initiative300,000.00152020-03-02AI safetyhttp://existence.org/grants/-- Intended use of funds (category): Organizational general support

Other notes: The grant is mentioned by MIRI in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ along with a large grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 from the Open Philanthropy Project. The post says: "at the time of our 2019 fundraiser, we expected to receive a grant from BERI in early 2020, and incorporated this into our reserves estimates. However, we predicted the grant size would be $600k; now that we know the final grant amount, that estimate should be $300k lower."
Open Philanthropy7,703,750.0022020-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020Claire Zabel Committee for Effective Altruism Support Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety

Donor reason for selecting the donee: The grant page says "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter" with the most similar previous grant being https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 (February 2019). Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is decided by the Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Three other grants decided by CEAS at around the same time are: Centre for Effective Altruism ($4,146,795), 80,000 Hours ($3,457,284), and Ought ($1,593,333).

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but this is likely the time when the Committee for Effective Altruism Support does its 2020 allocation.
Intended funding timeframe in months: 24

Other notes: The donee describes the grant in the blog post https://intelligence.org/2020/04/27/miris-largest-grant-to-date/ (2020-04-27) along with other funding it has received ($300,000 from the Berkeley Existential Risk Initiative and $100,000 from the Long-Term Future Fund). The fact that the grant is a two-year grant is mentioned here, but not in the grant page on Open Phil's website. The page also mentions that of the total grant amount of $7.7 million, $6.24 million is coming from Open Phil's normal funders (Good Ventures) and the remaining $1.46 million is coming from Ben Delo, co-founder of the cryptocurrency trading platform BitMEX, as part of a funding partnership https://www.openphilanthropy.org/blog/co-funding-partnership-ben-delo announced November 11, 2019. Announced: 2020-04-10.
Donor: Effective Altruism Funds: Long-Term Future Fund | Amount: $50,000.00 (rank 53) | Date: 2019-03-20 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/april-2019-long-term-future-fund-grants-and-recommendations | Influencers: Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw

Donation process: Donee submitted grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted)

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: Grant investigator and influencer Oliver Habryka believes that MIRI is making real progress in its approach of "creating a fundamental piece of theory that helps humanity to understand a wide range of powerful phenomena." He notes that MIRI started work on the alignment problem long before it became cool, which gives him more confidence that they will do the right thing and that even their seemingly weird actions may be justified in ways that are not yet obvious. He also thinks that both the research team and ops staff are quite competent.

Donor reason for donating that amount (rather than a bigger or smaller amount): Habryka offers the following reasons for giving a grant of just $50,000, which is small relative to the grantee's budget: (1) MIRI is in a solid position funding-wise, and marginal use of money may be lower-impact. (2) There is a case for investing in helping grow a larger and more diverse set of organizations, as opposed to putting money into a few stable and well-funded organizations.
Percentage of total donor spend in the corresponding batch of donations: 5.42%

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor thoughts on making further donations to the donee: Oliver Habryka writes: "I can see arguments that we should expect additional funding for the best teams to be spent well, even accounting for diminishing margins, but on the other hand I can see many meta-level concerns that weigh against extra funding in such cases. Overall, I find myself confused about the marginal value of giving MIRI more money, and will think more about that between now and the next grant round."

Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions (GW, IR). Despite this positive assessment, Habryka recommends a relatively small grant to MIRI, because it is already relatively well-funded and not heavily bottlenecked on funding; he ultimately decides to grant some amount, with some explanation, and says he will think more about this before the next funding round.
Donor: Berkeley Existential Risk Initiative | Amount: $600,000.00 (rank 9) | Date: 2019-02-26 | Cause area: AI safety | URL: http://existence.org/grants/ | Influencer: --

Intended use of funds (category): Organizational general support

Donor retrospective of the donation: BERI would make a further grant to MIRI, indicating continued confidence in the grantee. The followup grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund.

Other notes: This grant is also discussed by the Machine Intelligence Research Institute (the grant recipient) at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 from the Open Philanthropy Project. Announced: 2019-05-09.
Donor: Open Philanthropy | Amount: $2,652,500.00 (rank 5) | Date: 2019-02 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019 | Influencers: Claire Zabel, Committee for Effective Altruism Support

Donation process: The decision of whether to donate seems to have followed the Open Philanthropy Project's usual process, but the exact amount to donate was determined by the Committee for Effective Altruism Support using the process described at https://www.openphilanthropy.org/committee-effective-altruism-support

Intended use of funds (category): Organizational general support

Intended use of funds: MIRI plans to use these funds for ongoing research and activities related to AI safety. Planned activities include alignment research, a summer fellows program, computer scientist workshops, and internship programs.

Donor reason for selecting the donee: The grant page says: "we see the basic pros and cons of this support similarly to what we’ve presented in past writeups on the matter." Past writeups include the grant pages for the October 2017 three-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 and the August 2016 one-year support https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount decided by the Committee for Effective Altruism Support (CEAS) https://www.openphilanthropy.org/committee-effective-altruism-support but individual votes and reasoning are not public. Two other grants with amounts decided by CEAS, made at the same time and therefore likely drawing from the same money pot, are to the Centre for Effective Altruism ($2,756,250) and 80,000 Hours ($4,795,803). The original amount of $2,112,500 is split across two years, and therefore ~$1.06 million per year. https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ clarifies that the amount for 2019 is on top of the third year of three-year $1.25 million/year support announced in October 2017, and the total $2.31 million represents Open Phil's full intended funding for MIRI for 2019, but the amount for 2020 of ~$1.06 million is a lower bound, and Open Phil may grant more for 2020 later. In November 2019, additional funding would bring the total award amount to $2,652,500.
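The per-year figures above can be reproduced with a quick calculation (a sketch using only amounts stated in this entry; variable names are illustrative, not from the source):

```python
# Amounts from this grant entry and the linked MIRI post.
original_award = 2_112_500       # original grant amount, split across two years
per_year = original_award // 2   # the "~$1.06 million per year"
prior_2019_support = 1_250_000   # third year of the October 2017 $1.25 million/year support
total_2019 = per_year + prior_2019_support  # the "$2.31 million" intended for 2019
final_award = 2_652_500          # total award after the November 2019 addition
november_top_up = final_award - original_award

assert per_year == 1_056_250         # ~$1.06 million
assert total_2019 == 2_306_250       # ~$2.31 million
assert november_top_up == 540_000    # additional funding added in November 2019
```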

Donor reason for donating at this time (rather than earlier or later): Reasons for timing are not discussed, but likely reasons include: (1) The original three-year funding period https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 is coming to an end, (2) Even though there is time before the funding period ends, MIRI has grown in budget and achievements, so a suitable funding amount could be larger, (3) The Committee for Effective Altruism Support https://www.openphilanthropy.org/committee-effective-altruism-support did its first round of money allocation, so the timing is determined by the timing of that allocation round.
Intended funding timeframe in months: 24

Donor thoughts on making further donations to the donee: According to https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ Open Phil may increase its level of support for 2020 beyond the ~$1.06 million that is part of this grant.

Donor retrospective of the donation: The much larger followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2020 with a very similar writeup suggests that Open Phil and the Committee for Effective Altruism Support would continue to stand by the reasoning for the grant.

Other notes: The grantee, MIRI, discusses the grant on its website at https://intelligence.org/2019/04/01/new-grants-open-phil-beri/ along with a $600,000 grant from the Berkeley Existential Risk Initiative. Announced: 2019-04-01.
Donor: Vipul Naik | Amount: $500.00 (rank 393) | Date: 2018-12-22 | Cause area: AI safety | URL: https://forum.effectivealtruism.org/posts/dznyZNkAQMNq6HtXf/my-2018-donations (GW, IR) | Influencers: Issa Rice, Double Up Drive

Donation process: One of two year-end donations made by Vipul Naik in 2018 but directed by Issa Rice

Intended use of funds (category): Organizational general support

Intended use of funds: Unrestricted funding to support MIRI's operations

Donor reason for selecting the donee: Donation decided on by Issa Rice. The donation to MIRI accounts for 50% of the $1,000 that Vipul Naik gave Issa Rice to direct. Rice explains his reasoning in https://issarice.com/donation-history#section-3 as follows: "$500 to Machine Intelligence Research Institute (with an additional $500 from REG’s Double Up Drive). This was mostly intended as a retrospective donation, as thanks for producing useful content and ideas, and for doing work that I consider useful in the AI safety space."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is capped by the amount of $1,000 available from Vipul Naik to Issa Rice to direct; he directed $500 out of that $1,000 because of the decision to direct the other $500 to the donor lottery
Percentage of total donor spend in the corresponding batch of donations: 50.00%

Donor reason for donating at this time (rather than earlier or later): The timing is determined by the timing of availability of the $1,000 from Vipul Naik for Issa Rice to direct. The initial $500 to direct had been made available at the end of 2017, but Rice had deferred allocating it; with the additional $500 available at the end of 2018, Rice decided to allocate the entire $1,000

Donor thoughts on making further donations to the donee: The section https://issarice.com/donation-history#section-3 says: "As of late 2018, for prospective funding, I do share concerns about MIRI’s new nondisclosure-by-default policy and also think that its “room for more funding” may not be as great as in the past (the latter is sort of hard to assess due to the former)."

Other notes: https://forum.effectivealtruism.org/posts/dznyZNkAQMNq6HtXf/my-2018-donations#The_selection_of_recipients (GW, IR) by Vipul Naik explains the financing context: "For each of the years 2017 and 2018, I had given Issa the option of assigning $500 of my money to charitable causes of his choosing (with no strict requirement that these be recognized as charities). In 2017, Issa deferred the use of the money, so he had $1,000 to allocate. Issa ultimately decided to allocate 50% of the $1,000 (i.e., $500) to the $500,000 EA Donor Lottery, and another 50% to the Machine Intelligence Research Institute (MIRI)." As the document also says: "The donation qualified for donation double from Raising for Effective Giving's Double Up Drive".
Donor: Effective Altruism Funds: Long-Term Future Fund | Amount: $40,000.00 (rank 63) | Date: 2018-11-29 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/november-2018-long-term-future-fund-grants | Influencers: Alex Zhu, Helen Toner, Matt Fallshaw, Matt Wage, Oliver Habryka

Donation process: Donee submitted grant application through the application form for the November 2018 round of grants from the Long-Term Future Fund, and was selected as a grant recipient

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and to MIRI's 2018 fundraiser post https://intelligence.org/2018/11/26/miris-2018-fundraiser/ saying "According to their fundraiser post, MIRI believes it will be able to find productive uses for additional funding, and gives examples of ways additional funding was used to support their work this year."

Donor reason for selecting the donee: The grant page links to MIRI's research directions post https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ and says "We believe that this research represents one promising approach to AI alignment research."

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round

Donor retrospective of the donation: The Long-Term Future Fund would make a similarly sized grant ($50,000) in its next grant round in April 2019, suggesting that it was satisfied with the outcome of the grant

Other notes: Percentage of total donor spend in the corresponding batch of donations: 41.88%.
Donor: Effective Altruism Funds: Long-Term Future Fund | Amount: $488,994.00 (rank 13) | Date: 2018-08-14 | Cause area: AI safety | URL: https://funds.effectivealtruism.org/funds/payouts/july-2018-long-term-future-fund-grants | Influencer: Nick Beckstead

Donation process: The grant from the EA Long-Term Future Fund is part of a final set of grant decisions made by Nick Beckstead (granting $526,000 from the EA Meta Fund and $917,000 from the EA Long-Term Future Fund) as he transitions out of managing both funds. Due to time constraints, Beckstead primarily relies on the investigation of the organization done by the Open Philanthropy Project when making its 2017 grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017

Intended use of funds (category): Organizational general support

Intended use of funds: Beckstead writes "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electronics upgrades or childcare), due to a sense that (i) their work is otherwise much less funding constrained than it used to be, and (ii) spending like this would better reflect the value of staff time and increase staff satisfaction. However, I also told them that I was open to them using these funds to accomplish this objective indirectly (e.g. through salary increases) or using the funds for another purpose if that seemed better to them."

Donor reason for selecting the donee: The grant page references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 for Beckstead's opinion of the donee.

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page says "The amounts I’m granting out to different organizations are roughly proportional to the number of staff they have, with some skew towards MIRI that reflects greater EA Funds donor interest in the Long-Term Future Fund." Also: "I think a number of these organizations could qualify for the criteria of either the Long-Term Future Fund or the EA Community Fund because of their dual focus on EA and longtermism, which is part of the reason that 80,000 Hours is receiving a grant from each fund."
Percentage of total donor spend in the corresponding batch of donations: 53.32%

Donor reason for donating at this time (rather than earlier or later): Timing determined by the timing of this round of grants, which is in turn determined by the need for Beckstead to grant out the money before handing over management of the fund

Donor retrospective of the donation: Even after management of the fund was moved to a new team, the EA Long-Term Future Fund would continue making grants to MIRI.
Donor: Open Philanthropy | Amount: $150,000.00 (rank 28) | Date: 2018-06 | Cause area: AI safety | URL: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-ai-safety-retraining-program | Influencer: Claire Zabel

Donation process: The grant is a discretionary grant, so the approval process is short-circuited; see https://www.openphilanthropy.org/giving/grants/discretionary-grants for more

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to support the artificial intelligence safety retraining project. MIRI intends to use these funds to provide stipends, structure, and guidance to promising computer programmers and other technically proficient individuals who are considering transitioning their careers to focus on potential risks from advanced artificial intelligence. MIRI believes the stipends will make it easier for aligned individuals to leave their jobs and focus full-time on safety. MIRI expects the transition periods to range from three to six months per individual. The MIRI blog post https://intelligence.org/2018/09/01/summer-miri-updates/ says: "Buck [Shlegeris] is currently selecting candidates for the program; to date, we’ve made two grants to individuals."

Other notes: The grant is mentioned by MIRI in https://intelligence.org/2018/09/01/summer-miri-updates/. Announced: 2018-06-27.
The following donations, all for the year 2018 in the cause area of AI safety, are sourced from the 2018-04-07 snapshot of MIRI's top contributors page, https://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/ (except where noted):

Donor | Amount (USD) | Rank
Misha Gurevich | 1,500.00 | 349
Nader Chehab | 6,786.00 | 229
Pasha Kamyshev | 20,200.00 | 98
Paul Crowley | 7,400.00 | 219
Bruno Parga | 10,382.00 | 177
Bryan Dana | 70.00 | 447
Buck Shlegeris | 5.00 | 475
Paul Rhodes | 1,017.00 | 368
Peter Scott | 30,000.00 | 75
Cliff & Stephanie Hyra | 1,000.00 | 369
Quinn Maurmann | 3,000.00 | 305
Daniel Weinand | 5,000.00 | 272
Raymond Arnold | 250.00 | 416
Richard Schwall | 59,639.00 | 46
Edwin Evans | 35,550.00 | 67
Robert and Gery Ruddick | 5,500.00 | 258
Eric Lin | 1,850.00 | 339
Eric Rogstad | 19,000.00 | 110
Blake Borgeson | 10.00 | 469
Robin Powell | 600.00 | 387
Ethan Sterling | 23,418.00 | 91
Ryan Carey | 5,086.00 | 267
Gustav Simonsson | 33,285.00 | 69
Scott Dickey | 2,000.00 | 324
Scott Worley | 9,486.00 | 201
Jai Dhyani | 6,100.00 | 235
James Mazur | 475.00 | 397
Sebastian Hagen | 6,000.00 | 238
Sergejs Silko | 18,200.00 | 113
Simon Sáfár | 3,131.00 | 303
Jeremy Schlatter | 150.00 | 429
Victoria Krakovna | 2,801.00 | 316
Jonathan Weissman | 20,000.00 | 101
Joshua Fox | 240.00 | 421
Kelsey Piper | 30,730.00 | 73
Kenn Hamm | 11,472.00 | 158
Luke Stebbing | 3,000.00 | 305
Max Kesin | 10,000.00 | 179
Zvi Mowshowitz | 10,000.00 | 179 (sourced from the live page https://intelligence.org/topcontributors/)
Michael Blume | 1,100.00 | 366
Michael Plotz | 4,000.00 | 295
Michal Pokorný | 1,000.00 | 369
Mick Porter | 800.00 | 380
Mike Anderson | 23,000.00 | 93
Mikko Rauhala | 7,200.00 | 222
Alan Chang | 17,000.00 | 120
Alex Edelman | 1,800.00 | 341
The following donations, all for the year 2018 in the cause area of AI safety, are sourced from the 2018-01-17 snapshot of MIRI's top contributors page, https://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/:

Donor | Amount (USD) | Rank
Alex Schell | 9,575.00 | 200
Austin Peña | 26,554.00 | 83
Patrick LaVictoire | 5,000.00 | 272
Paul Crowley | 6,000.00 | 238
Paul Rhodes | 7.00 | 474
Cliff & Stephanie Hyra | 5,208.00 | 263
Quinn Maurmann | 27,575.00 | 81
Ben Hoskin | 31,481.00 | 72 (see the post http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/ assessing research; its conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts.")
Raymond Arnold | 250.00 | 416
Richard Schwall | 5,550.00 | 255
Eric Lin | 11,020.00 | 163
Robert and Judith Babcock | 11,100.00 | 162
Robin Powell | 400.00 | 401
Rolf Nelson | 100.00 | 437
Scott Dickey | 1,000.00 | 369
Jai Dhyani | 1,651.00 | 344
James Mazur | 200.00 | 422
Sebastian Hagen | 16,384.00 | 125
Victoria Krakovna | 17,066.00 | 119
Xerxes Dotiwalla | 1,350.00 | 356
Joshua Fox | 120.00 | 434
Kevin Fischer | 1,000.00 | 369
Vitalik Buterin | 802,136.00 | 7
Michael Blume | 300.00 | 412
Michael Plotz | 7,960.00 | 213
Mick Porter | 400.00 | 401
Alan Chang | 16,050.00 | 126
Donor: Berkeley Existential Risk Initiative | Amount: $100,000.00 (rank 35) | Date: 2017-12-28 | Cause area: AI safety | URL: http://existence.org/grants/ | Influencer: --

Intended use of funds (category): Organizational general support

Donor retrospective of the donation: BERI would make two further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund.

Other notes: Announced: 2018-01-11.
The following donations, all for the year 2017 in the cause area of AI safety, are sourced from the 2017-12-23 snapshot of MIRI's top contributors page, https://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/ (except where noted):

Donor | Amount (USD) | Rank
Misha Gurevich | 1,500.00 | 349
Austin Peña | 1,034.00 | 367
Paul Crowley | 12,450.00 | 147
Bruno Parga | 11,361.00 | 160
Bryan Dana | 66.00 | 449
Buck Shlegeris | 72,674.00 | 42
Paul Rhodes | 22.00 | 464
Christian Calderon | 367,574.00 | 14
Raising for Effective Giving | 29,140.00 | 80
Raymond Arnold | 3,420.00 | 301 ($2,000 subtracted because it is recorded in EA Survey data)
Richard Schwall | 46,698.00 | 57
Edwin Evans | 30,000.00 | 75
Eric Rogstad | 55,103.00 | 49
Robin Powell | 400.00 | 401
Ethan Dickinson | 17,400.00 | 117
Ryan Carey | 5,086.00 | 267
Scott Dickey | 8,000.00 | 212
Scott Siskind | 10,000.00 | 179
Jacob Falkovich | 5,065.00 | 269 ($350 from EA Survey subtracted)
Jai Dhyani | 11,405.00 | 159
Scott Worley | 5,500.00 | 258
James Mazur | 825.00 | 379
Simon Sáfár | 3,131.00 | 303
Stephanie Zolayvar | 11,247.00 | 161
Tobias Dänzer | 5,734.00 | 251
Jeremy Schlatter | 150.00 | 429
Tran Bao Trung | 1,240.00 | 359
Xerxes Dotiwalla | 7,350.00 | 220
Jonathan Weissman | 20,000.00 | 101
Joshua Fox | 360.00 | 407
Kevin R. Fischer | 5,000.00 | 272
Laura and Chris Soares | 7,510.00 | 216
Leopold Bauernfeind | 16,900.00 | 122
Luke Stebbing | 7,500.00 | 217
Luke Titmus | 8,837.00 | 208
Marius van Voorden | 59,251.00 | 47
Max Kesin | 10,000.00 | 179
Zvi Mowshowitz | 10,000.00 | 179 (the post https://thezvi.wordpress.com/2017/12/17/i-vouch-for-miri/ explains that he believes MIRI understands the hardness of the AI safety problem, is focused on building solutions for the long term, and has done humanity a great service through its work on functional decision theory)
Mick Porter | 1,200.00 | 360
Mikko Rauhala | 7,200.00 | 222
Alex Edelman | 5,132.00 | 266
Patrick Brinich-Langlois | 3,000.00 | 305 (dated 2017-12-10; sourced from https://www.patbl.com/misc/other/donations/)
Tran Bao Trung | 4,239.00 | 294 (sourced from the 2017-10-03 snapshot https://web.archive.org/web/20171003083300/https://intelligence.org/topcontributors/)
Loren Merritt25,000.00842017-10AI safetyhttp://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/-- Total amount donated by Loren Merritt to MIRI as of 2017-12-23 is $525,000. The amount listed as of October 2017 at http://web.archive.org/web/20171003083300/https://intelligence.org/topcontributors/ was $500,000, so the extra $25,000 was donated between those dates.
Open Philanthropy3,750,000.0042017-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017Nick Beckstead Donation process: The donor, Open Philanthropy Project, appears to have reviewed the progress made by MIRI as the one-year timeframe for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support came to an end. The full process is not described, but the July 2017 post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) suggests that work on the review had been going on well before the grant renewal date

Intended use of funds (category): Organizational general support

Intended use of funds: According to the grant page: "MIRI expects to use these funds mostly toward salaries of MIRI researchers, research engineers, and support staff."

Donor reason for selecting the donee: The reasons for donating to MIRI remain the same as the reasons for the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016, but with two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." The skeptical post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) by Daniel Dewey of Open Phil, from July 2017, is not discussed on the grant page

Donor reason for donating that amount (rather than a bigger or smaller amount): The grant page explains "We are now aiming to support about half of MIRI’s annual budget." In the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support of $500,000 made in August 2016, Open Phil had expected to grant about the same amount ($500,000) after one year. The increase to $3.75 million over three years (or $1.25 million/year) is due to the two new developments: (1) a very positive review of MIRI’s work on “logical induction” by a machine learning researcher who (i) is interested in AI safety, (ii) is rated as an outstanding researcher by at least one of Open Phil's close advisors, and (iii) is generally regarded as outstanding by the ML community. (2) An increase in AI safety spending by Open Phil, so that Open Phil is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach."

Donor reason for donating at this time (rather than earlier or later): The timing is mostly determined by the end of the one-year funding timeframe of the previous grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support made in August 2016 (a little over a year before this grant)
Intended funding timeframe in months: 36

Donor thoughts on making further donations to the donee: The MIRI blog post https://intelligence.org/2017/11/08/major-grant-open-phil/ says: "The Open Philanthropy Project has expressed openness to potentially increasing their support if MIRI is in a position to usefully spend more than our conservative estimate, if they believe that this increase in spending is sufficiently high-value, and if we are able to secure additional outside support to ensure that the Open Philanthropy Project isn’t providing more than half of our total funding."

Other notes: MIRI, the grantee, blogs about the grant at https://intelligence.org/2017/11/08/major-grant-open-phil/ Open Phil's statement that due to its other large grants in the AI safety space, it is "therefore less concerned that a larger grant will signal an outsized endorsement of MIRI’s approach." is discussed in the comments on the Facebook post https://www.facebook.com/vipulnaik.r/posts/10213581410585529 by Vipul Naik. Announced: 2017-11-08.
Nhat Anh Phan7,000.002252017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Austin Peña9,929.001952017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Patrick LaVictoire20,885.00962017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Bryan Dana5,463.002612017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Paul Rhodes789.003832017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Phil Hazelden5,559.002542017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Ethan Dickinson3,000.003052017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Sam Eisenstat12,356.001482017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Scott Dickey3,000.003052017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
James Mazur1,617.003452017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Simon Sáfár1,000.003692017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Tran Bao Trung8,900.002072017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Joshua Fox360.004072017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Leif K-Brooks67,216.00432017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Max Kesin5,000.002722017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Michal Pokorný12,000.001512017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Mick Porter1,200.003602017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Alan Chang18,000.001142017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Misha Gurevich1,500.003492017AI safetyhttps://web.archive.org/web/20170929195133/https://intelligence.org/topcontributors/--
Berkeley Existential Risk Initiative100,000.00352017-09-13AI safetyhttp://existence.org/grants-- Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: The announcement page says: "Broadly, we believe these groups [Machine Intelligence Research Institute and Future of Life Institute] to have done good work in the past for reducing existential risk and wish to support their continued efforts."

Donor reason for donating at this time (rather than earlier or later): This is one of two opening grants made by BERI to begin its grants program.

Donor thoughts on making further donations to the donee: The grant page says: "Over the next few months, we may write more about our reasoning behind these and other grants." It further outlines the kinds of organizations that BERI will be granting to in the short run.

Donor retrospective of the donation: BERI would make three further grants to MIRI, indicating continued confidence in the grantee. The last grant would be in March 2020 for $300,000. By that point, BERI would have transitioned these grantmaking responsibilities to the Survival and Flourishing Fund.

Other notes: Announced: 2017-09-25.
Nathaniel Soares10.004692017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Benjamin Goldhaber3,625.003002017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Brandon Reinhart5,000.002722017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Paul Rhodes58.004512017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Ben Hoskin5,000.002722017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/-- See post http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ assessing research. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute.".
Daniel Ziegler5,000.002722017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Eric Rogstad46,133.00582017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Ethan Dickinson2,000.003242017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Scott Dickey2,000.003242017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Scott Worley5,677.002522017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
James Mazur10,010.001782017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
The Maurice Amado Foundation16,000.001272017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Joshua Fox360.004072017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Max Kesin2,000.003242017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Michael Blume375.004062017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Mick Porter2,000.003242017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Misha Gurevich1,500.003492017AI safetyhttps://web.archive.org/web/20170627074344/https://intelligence.org/topcontributors/--
Anonymous MIRI cryptocurrency donor1,006,549.0062017-05-30AI safetyhttps://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/-- Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: The blog post announcing the donation says that the donor "had donated roughly $50k to our research programs over many years."

Donor retrospective of the donation: A further much larger donation of $15.6 million announced at https://intelligence.org/2021/05/13/two-major-donations/ (2021-05-13) by the same donor suggests continued satisfaction with MIRI.

Other notes: Announced: 2017-07-04.
Benjamin Goldhaber1,375.003552017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Peter Scott50,000.00532017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Ben Hoskin5,000.002722017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/-- See post http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ assessing research. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute.".
Raising for Effective Giving175,027.00252017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Robin Powell10,400.001752017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Ethan Dickinson3,000.003052017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Scott Dickey3,000.003052017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Sebastian Hagen409.004002017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Simon Sáfár1,000.003692017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Janos Kramar200.004222017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Jean-Philippe Sugarbroad200.004222017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Johan Edström2,000.003242017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Joshua Fox360.004072017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Misha Gurevich1,500.003492017AI safetyhttps://web.archive.org/web/20170412043722/https://intelligence.org/topcontributors/--
Benjamin Goldhaber12,250.001492017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Nicolas Tarleton2,000.003242017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Benjamin Hoffman100.004372017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Brandon Reinhart20,050.00992017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Ben Hoskin20,000.001012017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/-- See post http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ assessing research. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute.".
Edwin Evans30,000.00752017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Emma Borhanian12,000.001512017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Robin Powell400.004012017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Scott Dickey17,000.001202017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Scott Siskind19,000.001102017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Jaan Tallinn60,500.00452017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Scott Worley10,510.001712017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Sebastian Hagen10,442.001742017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Simon Sáfár10,900.001682017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Jean-Philippe Sugarbroad11,000.001652017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
William Morgan1,000.003692017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Johan Edström3,700.002992017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Jonathan Weissman20,000.001012017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Joshua Fox600.003872017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Kevin Fischer2,000.003242017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Luke Stebbing10,550.001702017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Marcello Herreshoff12,000.001512017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Max Kesin3,420.003012017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Michael Blume5,050.002702017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Michael Cohen9,977.001942017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Mick Porter4,800.002912017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Mikko Rauhala6,000.002382017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Misha Gurevich3,000.003052017AI safetyhttps://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/--
Gordon Irlam20,000.001012017AI safetyhttps://www.gricf.org/2017-report.html--
Open Philanthropy500,000.00122016-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-- Donation process: The grant page describes the process in Section 1. Background and Process. "Open Philanthropy Project staff have been engaging in informal conversations with MIRI for a number of years. These conversations contributed to our decision to investigate potential risks from advanced AI and eventually make it one of our focus areas. [...] We attempted to assess MIRI’s research primarily through detailed reviews of individual technical papers. MIRI sent us five papers/results which it considered particularly noteworthy from the last 18 months: [...] This selection was somewhat biased in favor of newer staff, at our request; we felt this would allow us to better assess whether a marginal new staff member would make valuable contributions. [...] All of the papers/results fell under a category MIRI calls “highly reliable agent design”.[...] Papers 1-4 were each reviewed in detail by two of four technical advisors (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei). We also commissioned seven computer science professors and one graduate student with relevant expertise as external reviewers. Papers 2, 3, and 4 were reviewed by two external reviewers, while Paper 1 was reviewed by one external reviewer, as it was particularly difficult to find someone with the right background to evaluate it. [...] A consolidated document containing all public reviews can be found here." The link is to https://www.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf "In addition to these technical reviews, Daniel Dewey independently spent approximately 100 hours attempting to understand MIRI’s research agenda, in particular its relevance to the goals of creating safer and more reliable advanced AI. 
He had many conversations with MIRI staff members as a part of this process. Once all the reviews were conducted, Nick, Daniel, Holden, and our technical advisors held a day-long meeting to discuss their impressions of the quality and relevance of MIRI’s research." In addition to this review of MIRI’s research, Nick Beckstead spoke with MIRI staff about MIRI’s management practices, staffing, and budget needs.

Intended use of funds (category): Organizational general support

Intended use of funds: The grant page, Section 3.1 Budget and room for more funding, says: "MIRI operates on a budget of approximately $2 million per year. At the time of our investigation, it had between $2.4 and $2.6 million in reserve. In 2015, MIRI’s expenses were $1.65 million, while its income was slightly lower, at $1.6 million. Its projected expenses for 2016 were $1.8-2 million. MIRI expected to receive $1.6-2 million in revenue for 2016, excluding our support. Nate Soares, the Executive Director of MIRI, said that if MIRI were able to operate on a budget of $3-4 million per year and had two years of reserves, he would not spend additional time on fundraising. A budget of that size would pay for 9 core researchers, 4-8 supporting researchers, and staff for operations, fundraising, and security. Any additional money MIRI receives beyond that level of funding would be put into prizes for open technical questions in AI safety. MIRI has told us it would like to put $5 million into such prizes."

Donor reason for selecting the donee: The grant page, Section 3.2 Case for the grant, gives five reasons: (1) Uncertainty about technical assessment (i.e., despite negative technical assessment, there is a chance that MIRI's work is high-potential), (2) Increasing research supply and diversity in the important-but-neglected AI safety space, (3) Potential for improvement of MIRI's research program, (4) Recognition of MIRI's early articulation of the value alignment problem, (5) Other considerations: (a) role in starting CFAR and running SPARC, (b) alignment with effective altruist values, (c) shovel-readiness, (d) "participation grant" for time spent in evaluation process, (e) grant in advance of potential need for significant help from MIRI for consulting on AI safety

Donor reason for donating that amount (rather than a bigger or smaller amount): The maximal funding that Open Phil would give MIRI would be $1.5 million per year. However, Open Phil recommended a partial amount, due to some reservations, described on the grant page, Section 2 Our impression of MIRI’s Agent Foundations research: (1) Assessment that it is not likely relevant to reducing risks from advanced AI, especially to the risks from transformative AI in the next 20 years, (2) MIRI has not made much progress toward its agenda, with internal and external reviewers describing their work as technically nontrivial, but unimpressive, and compared with what an unsupervised graduate student could do in 1 to 3 years. Section 3.4 says: "We ultimately settled on a figure that we feel will most accurately signal our attitude toward MIRI. We feel $500,000 per year is consistent with seeing substantial value in MIRI while not endorsing it to the point of meeting its full funding needs."

Donor reason for donating at this time (rather than earlier or later): No specific timing-related considerations are discussed
Intended funding timeframe in months: 12

Donor thoughts on making further donations to the donee: Section 4 Plans for follow-up says: "As of now, there is a strong chance that we will renew this grant next year. We believe that most of our important open questions and concerns are best assessed on a longer time frame, and we believe that recurring support will help MIRI plan for the future. Two years from now, we are likely to do a more in-depth reassessment. In order to renew the grant at that point, we will likely need to see a stronger and easier-to-evaluate case for the relevance of the research we discuss above, and/or impressive results from the newer, machine learning-focused agenda, and/or new positive impact along some other dimension."

Donor retrospective of the donation: Although there is no explicit retrospective of this grant, the two most relevant followups are Daniel Dewey's blog post https://forum.effectivealtruism.org/posts/SEL9PW8jozrvLnkb4/my-current-thoughts-on-miri-s-highly-reliable-agent-design (GW, IR) (not an official MIRI statement, but Dewey works on AI safety grants for Open Phil) and the three-year $1.25 million/year grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 made in October 2017 (about a year after this grant). The more-than-doubling of the grant amount and the three-year commitment are both more positive for MIRI than the expectations at the time of the original grant

Other notes: The grant page links to commissioned reviews at http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf The grant is also announced on the MIRI website at https://intelligence.org/2016/08/05/miri-strategy-update-2016/. Announced: 2016-09-06.
Alexei Andreev525.003922016AI safetyhttps://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php--
Brandon Reinhart15,000.001322016AI safetyhttps://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php--
Paul Rhodes26.004622016AI safetyhttps://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php--
Sebastian Hagen6,085.002372016AI safetyhttps://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php--
Joshua Fox560.003892016AI safetyhttps://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php--
Misha Gurevich2,000.003242016AI safetyhttps://web.archive.org/web/20160717181643/https://intelligence.org/donortools/topdonors.php--
Paul Rhodes5.004752016AI safetyhttps://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php--
Joshua Fox100.004372016AI safetyhttps://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php--
Max Kesin6,580.002312016AI safetyhttps://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php--
Michael Cohen29,382.00792016AI safetyhttps://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php--
Misha Gurevich1,000.003692016AI safetyhttps://web.archive.org/web/20160226145912/https://intelligence.org/donortools/topdonors.php--
Paul Rhodes30.004602016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Ben Hoskin32,728.00712016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Richard Schwall30,000.00752016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Edwin Evans40,000.00632016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Robin Powell200.004222016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Ethan Dickinson20,518.00972016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Scott Dickey11,000.001652016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Scott Siskind2,037.003232016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Jaan Tallinn80,000.00412016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Scott Worley5,488.002602016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Sebastian Hagen6,000.002382016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Janos Kramar541.003912016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Jeremy Schlatter4,000.002952016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
William Morgan400.004012016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Johan Edström300.004122016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
John Salvatier9,110.002032016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Jonathan Weissman20,000.001012016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Joshua Fox650.003852016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Luke Stebbing10,500.001722016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Marcello Herreshoff12,000.001512016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Michael Blume15,000.001322016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Mick Porter2,400.003182016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Mikko Rauhala6,000.002382016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
Misha Gurevich3,000.003052016AI safetyhttps://web.archive.org/web/20160115172820/https://intelligence.org/donortools/topdonors.php--
William Grunow37.494572016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 500.00 ZAR (conversion done on 2017-08-05 via Fixer.io).
Raymond Arnold2,000.003242016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 2,000.00 USD (conversion done on 2017-08-05 via Fixer.io).
Jacob Falkovich300.004122016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 300.00 USD (conversion done on 2017-08-05 via Fixer.io).
Blake Borgeson300,000.00152016AI safetyhttps://intelligence.org/2016/08/05/miri-strategy-update-2016/-- Second biggest donation in the history of MIRI, see also http://effective-altruism.com/ea/14u/eas_write_about_where_they_give/ for more context on overall donations for the year. Percentage of total donor spend in the corresponding batch of donations: 50.00%.
Akhil Jalan31.004592016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 31.00 USD (conversion done on 2017-08-05 via Fixer.io).
Kyle Bogosian135.004322016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 135.00 USD (conversion done on 2017-08-05 via Fixer.io).
Vegard Blindheim63.394502016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 500.00 NOK (conversion done on 2017-08-05 via Fixer.io).
Nick Brown119.574352016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 150.00 AUD (conversion done on 2017-08-05 via Fixer.io).
Alexandre Zani50.004522016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 50.00 USD (conversion done on 2017-08-05 via Fixer.io).
Loren Merritt115,000.00312016AI safetyhttp://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/-- Total amount donated up to this point is listed as $500,000. The amount listed as of November 2016 at http://web.archive.org/web/20161118163935/https://intelligence.org/topdonors/ is $385,000. The additional $115,000 was likely raised at the end of 2016.
JP Addison500.003932016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 500.00 USD (conversion done on 2017-08-05 via Fixer.io).
Michael Sadowsky9,000.002052016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 9,000.00 USD (conversion done on 2017-08-05 via Fixer.io).
Mathieu Roy198.854272016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 250.00 CAD (conversion done on 2017-08-05 via Fixer.io).
Robert Yaman5,000.002722016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 5,000.00 USD (conversion done on 2017-08-05 via Peter Hurford).
Michael Dickens20.004652016-01--http://mdickens.me/donations/small.html--
Gordon Irlam10,000.001792016AI safetyhttps://www.gricf.org/2016-report.html--
Tim Bakker474.723982016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 400.00 EUR (conversion done on 2017-08-05 via Fixer.io).
Henry Cooksley38.124562016AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 29.00 GBP (conversion done on 2017-08-05 via Fixer.io).
Michael Dello-Iacovo20.004652015-12-16FIXMEhttp://www.michaeldello.com/donations-log/-- Affected regions: FIXME; affected countries: FIXME.
Future of Life Institute250,000.00192015-09-01AI safetyhttps://futureoflife.org/AI/2015awardees#Fallenstein-- A project grant. Project title: Benja Fallenstein. It appears like this was added on the MIRI top donors website with an amount of $250,252 on 2017-02-04: see https://web.archive.org/web/20170204024838/https://intelligence.org/topdonors/ for more.
Paul Rhodes8.004732015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Scott Dickey3,000.003052015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Sebastian Hagen6,113.002342015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Joshua Fox20.004652015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Kevin Fischer840.003782015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Michael Blume2,000.003242015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Mick Porter1,200.003602015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Misha Gurevich1,000.003692015AI safetyhttps://web.archive.org/web/20150717072918/https://intelligence.org/donortools/topdonors.php--
Benjamin Hoffman3.004772015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Paul Rhodes14.004682015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Scott Dickey3,000.003052015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Henrik Jonsson1.004782015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Thiel Foundation250,000.00192015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Scott Siskind30.004602015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Scott Worley13,344.001422015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Sebastian Hagen6,000.002382015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Jeremy Schlatter1.004782015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Joshua Fox40.004552015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Kevin Fischer1,680.003432015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Luke Stebbing9,650.001972015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Marcello Herreshoff6,560.002322015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Michael Blume2,000.003242015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Mick Porter1,200.003602015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Misha Gurevich2,000.003242015AI safetyhttps://web.archive.org/web/20150507195856/https://intelligence.org/donortools/topdonors.php--
Paul Christiano7,000.002252015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Paul Rhodes100.004372015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Ethan Dickinson12,000.001512015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Scott Dickey4,000.002952015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Janos Kramar800.003802015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
John Salvatier2,000.003242015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Joshua Fox45.004542015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Kevin Fischer1,260.003572015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Marcello Herreshoff6,000.002382015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Zvi Mowshowitz5,010.002712015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Mick Porter1,600.003462015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Misha Gurevich1,500.003492015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
Aleksei Riikonen130.004332015AI safetyhttps://web.archive.org/web/20150117213932/https://intelligence.org/donortools/topdonors.php--
William Grunow37.494572015AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 500.00 ZAR (conversion done on 2017-08-05 via Fixer.io).
Jacob Falkovich50.004522015AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 50.00 USD (conversion done on 2017-08-05 via Fixer.io).
Blake Borgeson50,460.00512015AI safetyhttps://intelligence.org/topdonors/-- Took total donation amount and subtracted known donation of 300000 for 2016.
Brian Tomasik2,000.003242015AI safety--Luke Muehlhauser Thank you for a helpful conversation with outgoing director Luke Muehlhauser; information conveyed via private communication and published with permission.
Kyle Bogosian250.004162015AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 250.00 USD (conversion done on 2017-08-05 via Fixer.io).
Johannes Gätjen118.684362015AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 100.00 EUR (conversion done on 2017-08-05 via Fixer.io).
Nick Brown79.714462015AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 100.00 AUD (conversion done on 2017-08-05 via Fixer.io).
Michael Sadowsky25,000.00842015AI safetyhttps://github.com/peterhurford/ea-data/-- Currency info: donation given as 25,000.00 USD (conversion done on 2017-08-05 via Fixer.io).
Gordon Irlam10,000.001792015AI safetyhttps://www.gricf.org/2015-report.html--
Alexei Andreev23,280.00922014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Nathaniel Soares33,220.00702014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Nicolas Tarleton200.004222014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Benjamin Hoffman12,229.001502014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Paul Rhodes430.003992014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Daniel Nelson70.004472014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Donald King9,000.002052014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Richard Schwall106,608.00342014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Edwin Evans50,030.00522014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Robin Powell1,810.003402014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Ethan Dickinson35,490.00682014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Rolf Nelson1,710.003422014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Gary Basin25,000.00842014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Scott Dickey11,020.001632014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Guy Srinivasan6,910.002272014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Henrik Jonsson36,975.00662014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Thiel Foundation250,000.00192014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Scott Siskind7,433.002182014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Investling Group65,000.00442014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Jaan Tallinn100,000.00352014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Sebastian Hagen17,154.001182014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
James Douma550.003902014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Janos Kramar4,870.002902014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Sergio Tarrero620.003862014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Stephan T. Lavavej15,000.001322014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Jed McCaleb631,137.0082014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Thomas Jackson5,000.002722014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Jeremy Schlatter310.004112014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Tuxedage John Adams5,000.002722014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Jesse Liptrap10,490.001732014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Johan Edström2,250.003212014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Wolf Tivy16,758.001232014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
John Salvatier10.004692014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Jonathan Weissman20,010.001002014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Joshua Fox490.003962014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Kevin Fischer4,270.002932014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Liron Shapira15,100.001312014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Louie Helm270.004152014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Luke Stebbing9,300.002022014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Marcello Herreshoff12,550.001432014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Marius van Voorden7,210.002212014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Michael Blume5,140.002652014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Mick Porter8,010.002112014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Mihaly Barasz24,073.00892014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Mikko Rauhala170.004282014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Misha Gurevich5,520.002562014AI safetyhttp://archive.today/2014.10.10-021359/http://intelligence.org/topdonors/--
Aaron Gertler100.004372014-08-01AI safetyhttps://aarongertler.net/donations-all-years/-- Monthly donation.
Peter Hurford90.004452014-05-06AI safetyhttp://peterhurford.com/other/donations.html--
Brian Tomasik10.004692014AI safety---- Part of a donation drive; information conveyed via private communication and published with permission.
Gordon Irlam10,000.001792014AI safetyhttps://www.gricf.org/2014-report.html--
Alexei Andreev16,000.001272013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Nicolas Tarleton9,659.001962013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Paul Rhodes5,519.002572013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Brian Cartmell700.003842013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Chris Haley250.004162013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Daniel Nelson2,077.003222013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Edwin Evans52,500.00502013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Riley Goodside3,949.002982013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Robin Powell2,350.003192013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Gil Elbaz5,000.002722013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Scott Dickey14,500.001392013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Giles Edkins145.004312013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Guy Srinivasan8,400.002102013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Investling Group24,000.00902013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Jaan Tallinn100,000.00352013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Janos Kramar5,600.002532013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Jason Joachim100.004372013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Jeremy Schlatter15,000.001322013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Tomer Kagan16,500.001242013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Johan Edström7,730.002142013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
William Morgan5,171.002642013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Jonathan Weissman30,280.00742013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Joshua Fox1,529.003482013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Kevin Fischer4,280.002922013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Louie Helm1,260.003572013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Marius van Voorden5,000.002722013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Michael Ames12,500.001452013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Michael Blume9,990.001932013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Mihaly Barasz12,550.001432013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Mikko Rauhala2,575.003172013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Misha Gurevich7,550.002152013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Aleksei Riikonen242.004202013AI safetyhttp://archive.today/2013.10.21-235551/http://intelligence.org/topdonors/--
Pablo Stafforini25.004632013-08-29--http://www.stafforini.com/blog/donations/--
Richard Schwall10,000.001792013AI safetyhttps://web.archive.org/web/20130115144542/http://singularity.org/topdonors/--
Thiel Foundation27,000.00822013AI safetyhttps://web.archive.org/web/20130115144542/http://singularity.org/topdonors/--
Loren Merritt245,000.00222013AI safetyhttp://web.archive.org/web/20140403110808/http://intelligence.org/topdonors/-- Total amount donated up to this point is listed as $385,000. Of this, $140,000 is accounted for by explicitly disclosed donations; the remainder is approximately attributed to 2013.
Gordon Irlam5,000.002722013AI safetyhttps://www.gricf.org/2013-report.html--
Loren Merritt20,000.001012012-12-07AI safetyhttp://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/7zt4-- Donation is announced in response to the post http://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/ for the MIRI 2012 winter fundraiser.
Alexei Andreev24,800.00882012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
Investling Group18,000.001142012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
William Morgan1,200.003602012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
Aleksei Riikonen14,000.001402012AI safetyhttps://web.archive.org/web/20121118064729/http://singularity.org:80/topdonors/--
Nicolas Tarleton500.003932012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Brandon Reinhart10,000.001792012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/-- See 2011 LessWrong post announcing reasoning for first MIRI donation: http://lesswrong.com/lw/5il/siai_an_examination/.
Brian Cartmell46,000.00592012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Chris Haley50,000.00532012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Daniel Nelson6,000.002382012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Quixey15,000.001322012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Edwin Evans57,000.00482012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Gil Elbaz5,000.002722012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Giles Edkins5,000.002722012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Guy Srinivasan43,000.00612012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Henrik Jonsson1,549.003472012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Investling Group11,000.001652012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jaan Tallinn109,000.00332012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Janos Kramar1,200.003602012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jason Joachim44,000.00602012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Stanley Pecavar17,450.001162012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jeff Bone5,000.002722012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jeremy Schlatter9,100.002042012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Tomer Kagan10,000.001792012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Jesse Liptrap100.004372012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
William Morgan5,800.002502012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Johan Edström800.003802012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
John Salvatier6,598.002302012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Joshua Fox100.004372012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Kevin Fischer5,900.002482012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Liron Shapira9,650.001972012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Martine Rothblatt25,000.00842012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Michael Blume10,800.001692012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Michael Roy Ames12,500.001452012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Mihaly Barasz15,000.001322012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Mikko Rauhala8,600.002092012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Misha Gurevich2,300.003202012AI safetyhttps://web.archive.org/web/20120918094656/http://singularity.org:80/topdonors/--
Andrew Hay6,201.002332012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Nicolas Tarleton7,200.002222012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Brian Cartmell100,000.00352012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Chris Haley10,000.001792012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Donald King6,000.002382012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Edwin Evans180,000.00242012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Richard Schwall161,000.00262012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Riley Goodside6,100.002352012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Elliot Glaysher23,000.00932012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Robin Powell21,000.00952012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Rolf Nelson20,000.001012012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Frank Adamek5,250.002622012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Scott Dickey48,000.00562012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Henrik Jonsson16,000.001272012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Thiel Foundation570,000.00102012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/-- The cumulative donation by the Thiel Foundation at this point is $1.1 million, and we have records of $530,000 donated up to 2009, so this entry is for the remaining $570,000 that was likely spread between 2010 and 2012.
Investling Group191,000.00232012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Jaan Tallinn155,000.00272012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
James Douma5,880.002492012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Janos Kramar9,600.001992012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Sergio Tarrero14,600.001382012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Jesse Liptrap19,000.001102012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Johan Edström6,900.002282012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Jonathan Weissman41,000.00622012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Joshua Fox6,000.002382012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Joshua Looks5,000.002722012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Louie Helm10,400.001752012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Mikko Rauhala16,000.001272012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Misha Gurevich14,000.001402012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Adam Weissman10,000.001792012AI safetyhttps://web.archive.org/web/20120719220051/http://singularity.org:80/topdonors/--
Loren Merritt110,000.00322012AI safetyhttp://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/7zt4-- Donation is announced in response to the post http://lesswrong.com/lw/ftg/2012_winter_fundraiser_for_the_singularity/ for the MIRI 2012 winter fundraiser.
Loren Merritt10,000.001792011-08-25AI safetyhttp://lesswrong.com/lw/78s/help_fund_lukeprog_at_siai/4p1x-- Donation is announced in response to the post http://lesswrong.com/lw/78s/help_fund_lukeprog_at_siai/ by Eliezer Yudkowsky asking for help to fund Luke Muehlhauser at MIRI (then called SIAI, the Singularity Institute for Artificial Intelligence.
Gwern Branwen----2009-09-22AI safetyhttps://www.lesswrong.com/posts/XSqYe5Rsqq4TR7ryL/the-finale-of-the-ultimate-meta-mega-crossover#6T24xqkJotPQTJL5R (GW, IR)-- Gwern mentions that he is a past donor to MIRI in this discussion thread, but gives neither a date nor an amount.
Brian Tomasik12,000.001512009AI safety/suffering reductionhttp://reducing-suffering.org/my-donations-past-and-present/-- Donation earmarked for suffering reduction work that would not have happened counterfactually; at the time, the organization was called the Singularity Institute for Artificial Intelligence (SIAI). Employer match: Microsoft matched 12,000.00; Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Thiel Foundation255,000.00182009AI safetyhttps://projects.propublica.org/nonprofits/display_990/582565917/2011_01_EO%2F58-2565917_990_200912--
Brian Tomasik12,000.001512008AI safetyhttp://reducing-suffering.org/my-donations-past-and-present/-- Percentage of total donor spend in the corresponding batch of donations: 100.00%.
Thiel Foundation150,000.00282008AI safetyhttps://projects.propublica.org/nonprofits/display_990/582565917/2011_01_EO%2F58-2565917_990_200912--
Thiel Foundation125,000.00302007AI safetyhttps://projects.propublica.org/nonprofits/display_990/582565917/2011_01_EO%2F58-2565917_990_200912--
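Many of the notes above record a currency normalization step: donations made in a foreign currency are converted to USD using an exchange rate for a fixed date (2017-08-05) obtained via Fixer.io. A minimal sketch of that conversion, with rates hard-coded as illustrative assumptions back-derived from the recorded conversions (not actual Fixer.io responses):

```python
# Sketch of the currency normalization described in the notes:
# each foreign-currency donation is divided by a fixed-date exchange
# rate and rounded to cents. The rates below are assumptions
# back-derived from the recorded conversions, not live Fixer.io data.

# Units of foreign currency per 1 USD on 2017-08-05 (illustrative).
ASSUMED_RATES_2017_08_05 = {
    "USD": 1.0,
    "NOK": 7.8877,   # e.g. 500.00 NOK -> 63.39 USD
    "AUD": 1.2545,   # e.g. 150.00 AUD -> 119.57 USD
    "EUR": 0.8426,   # e.g. 400.00 EUR -> 474.72 USD
}

def to_usd(amount: float, currency: str,
           rates: dict = ASSUMED_RATES_2017_08_05) -> float:
    """Convert a donation amount to USD, rounded to cents."""
    return round(amount / rates[currency], 2)
```

For example, `to_usd(500.00, "NOK")` reproduces the $63.39 recorded for the Vegard Blindheim row under these assumed rates.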