This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik.

Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of December 2019.

See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback posted to the EA Forum.
We do not have any donor information for Jacob Falkovich in our system.
Full donor page for Jacob Falkovich
| Item | Value |
|---|---|
| Donors list page | https://intelligence.org/topdonors/ |
| Transparency and financials page | https://intelligence.org/transparency/ |
| Donation case page | http://effective-altruism.com/ea/12n/miri_update_and_fundraising_case/ |
| Open Philanthropy Project grant review | http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support |
| Charity Navigator page | https://www.charitynavigator.org/index.cfm?bay=search.profile&ein=582565917 |
| Timelines wiki page | https://timelines.issarice.com/wiki/Timeline_of_Machine_Intelligence_Research_Institute |
| Org Watch page | https://orgwatch.issarice.com/?organization=Machine+Intelligence+Research+Institute |
| Key people | Eliezer Yudkowsky, Nate Soares, Luke Muehlhauser |
This entity is also a donor.
Full donee page for Machine Intelligence Research Institute
| Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Note: The cause area classification used here may not match the one used by the donor in all cases.
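As a rough illustration of how the columns in the summary-statistics table above are computed, here is a minimal Python sketch. The record layout and the `summarize` helper are hypothetical (the portal's actual code may differ); the three amounts are this donor's AI safety donations as listed in the table below.

```python
from statistics import mean, quantiles

# Hypothetical record layout (cause area, year, amount in current USD);
# the amounts are this donor's three AI safety donations on this page.
donations = [
    ("AI safety", 2017, 5065.00),
    ("AI safety", 2016, 300.00),
    ("AI safety", 2015, 50.00),
]

def summarize(amounts):
    """Compute one row of the summary-statistics table: count, median,
    mean, minimum, 10th-90th percentiles, and maximum."""
    amounts = sorted(amounts)
    # method="inclusive" treats the amounts as the full population, so every
    # percentile stays within [minimum, maximum]; n=10 yields the nine cut
    # points at the 10th through 90th percentiles.
    deciles = (quantiles(amounts, n=10, method="inclusive")
               if len(amounts) > 1 else amounts * 9)
    row = {
        "Count": len(amounts),
        "Median": deciles[4],  # identical to the 50th percentile column
        "Mean": round(mean(amounts), 2),
        "Minimum": amounts[0],
    }
    for i, p in enumerate(deciles, start=1):
        row[f"{10 * i}th percentile"] = round(p, 2)
    row["Maximum"] = amounts[-1]
    return row

print(summarize([amt for _, _, amt in donations]))
```

With only three donations the percentile columns are heavily interpolated; they become more informative for donors with many donations in a cause area.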
| Cause area | Number of donations | Total | 2017 | 2016 | 2015 |
|---|---|---|---|---|---|
| AI safety | 3 | 5,415.00 | 5,065.00 | 300.00 | 50.00 |
Graph of spending by cause area and year (incremental, not cumulative)
Graph of spending by cause area and year (cumulative)
| Amount (current USD) | Amount rank (out of 3) | Donation date | Cause area | URL | Influencer | Notes |
|---|---|---|---|---|---|---|
| 5,065.00 | 1 | | AI safety | https://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/ | -- | 350 from EA Survey subtracted. |
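The note on this row reflects a double-counting adjustment: the 2015 and 2016 donations (50.00 and 300.00) were already recorded from EA Survey data, so their sum was subtracted from the donor's cumulative amount on MIRI's top contributors page to isolate the 2017 donation. A quick sketch of that arithmetic (the 5,415.00 cumulative figure is inferred here from the Total column above, not stated directly on the archived page):

```python
# Double-counting adjustment behind the "350 from EA Survey subtracted" note.
# The 5,415.00 cumulative figure is an inference from the Total column above;
# treat it as an assumption about what the archived top contributors page showed.
cumulative_top_contributors = 5415.00
ea_survey_recorded = 300.00 + 50.00   # 2016 and 2015 donations already in the system

donation_2017 = cumulative_top_contributors - ea_survey_recorded
assert donation_2017 == 5065.00       # the amount recorded in the row above
```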
| Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes |
|---|---|---|---|---|---|---|---|---|
| AI: a Reason to Worry, and to Donate | 2017-12-10 | Jacob Falkovich | Jacob Falkovich | Jacob Falkovich | Machine Intelligence Research Institute, Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, Effective Altruism Funds | Single donation documentation | AI safety | Falkovich explains why he thinks AI safety is a much more important and relatively neglected existential risk than climate change, and why he is donating to it. He says he is donating to MIRI because he is reasonably certain of the importance of their work on AI alignment. However, he lists a few other organizations for which he is willing to match donations up to 0.3 bitcoins, and encourages other donors to use their own judgment to decide among them: Future of Life Institute, Center for Human-Compatible AI, Berkeley Existential Risk Initiative, Future of Humanity Institute, and Effective Altruism Funds (the Long-Term Future Fund). |