This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with his continued contributions (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
We do not have any donor information for the donor Effective Altruism Funds: Long-Term Future Fund in our system.
This entity is also a donee.
Full donor page for donor Effective Altruism Funds: Long-Term Future Fund
We do not have any donee information for the donee Orpheus Lummis in our system.
This entity is also a donor.
Full donee page for donee Orpheus Lummis
| Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: The cause area classification used here may not match that used by the donor in all cases.
| Cause area | Number of donations | Total | 2019 |
|---|---|---|---|
| AI safety (filter this donor) | 1 | 10,000.00 | 10,000.00 |
Skipping spending graph as there is less than one year's worth of donations.
| Amount (current USD) | Amount rank (out of 1) | Donation date | Cause area | URL | Influencer |
|---|---|---|---|---|---|
| 10,000.00 | 1 | | AI safety/upskilling | https://app.effectivealtruism.org/funds/far-future/payouts/6vDsjtUyDdvBa3sNeoNVvl | Oliver Habryka, Alex Zhu, Matt Wage, Helen Toner, Matt Fallshaw |

Notes:

Donation process: The donee submitted a grant application through the application form for the April 2019 round of grants from the Long-Term Future Fund, and was selected as a grant recipient (23 out of almost 100 applications were accepted).
Intended use of funds (category): Living expenses during research project
Intended use of funds: Grant for upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a machine learning PhD. Notable planned subprojects: (1) engaging with David Krueger’s AI safety reading group at the Montreal Institute for Learning Algorithms; (2) starting and maintaining a public index of AI safety papers, to help future literature reviews and to complement https://vkrakovna.wordpress.com/ai-safety-resources/ as a standalone wiki page (e.g., at http://aisafetyindex.net ); (3) from-scratch implementation of seminal deep RL algorithms; (4) going through textbooks: Goodfellow, Bengio & Courville 2016 and Sutton & Barto 2018; (5) possibly doing the next AI Safety Camp; (6) building a prioritization tool for English Wikipedia using NLP, building on the literature on quality assessment (https://paperpile.com/shared/BZ2jzQ); (7) studying the AI alignment literature.
Donor reason for selecting the donee: Grant investigator and main influencer Oliver Habryka is impressed with the results of the AI Safety Unconference organized by Lummis after NeurIPS with Long-Term Future Fund money. However, he is not confident in the grant, writing: "I don’t know Orpheus very well, and while I have received generally positive reviews of their work, I haven’t yet had the time to look into any of those reviews in detail, and haven’t seen clear evidence about the quality of their judgment." Habryka also favors more time for self-study and reflection, and is excited about growing the Montreal AI alignment community. Finally, Habryka thinks the grant amount is small and is unlikely to have negative consequences.
Donor reason for donating that amount (rather than a bigger or smaller amount): Likely the amount requested by the donee in the application (this is not stated explicitly by either the donor or the donee). The small amount is also one reason grant investigator Oliver Habryka is comfortable making the grant despite not investigating thoroughly.
Percentage of total donor spend in the corresponding batch of donations: 1.08%
Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round
Other notes: The grant reasoning is written up by Oliver Habryka and is available at https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions. The comments on the post do not discuss this specific grant.