This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2025. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
We do not have any donor information for the donor Tom Sittler in our system.
No donations recorded so far, so not printing the statistics table!
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: The cause area classification used here may not match the one used by the donor in all cases.
Cause area | Number of donations | Number of donees | Total (current USD) |
---|---|---|---|
Total | 0 | 0 | 0.00 |
Skipping spending graph as there is at most one year’s worth of donations.
Sorry, we couldn't find any subcause area information.
Donee | Cause area | Metadata | Total (current USD) |
---|---|---|---|
Total | -- | -- | 0.00 |
Skipping spending graph as there is at most one year’s worth of donations.
Sorry, we couldn't find any influencer information.
Sorry, we couldn't find any disclosures information.
Sorry, we couldn't find any country information.
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
Four quantiative models, aggregation, and final decision | 2017-05-20 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | 80,000 Hours; Animal Charity Evaluators; Machine Intelligence Research Institute; StrongMinds | | Single donation documentation | Effective altruism/career advice | The post describes how the Oxford Prioritisation Project compared its four finalists (80,000 Hours, Animal Charity Evaluators, Machine Intelligence Research Institute, and StrongMinds) by building quantitative models for each, including modeling of uncertainties. Based on these quantitative models, 80,000 Hours was chosen as the winner. Also posted to http://effective-altruism.com/ea/1ah/four_quantiative_models_aggregation_and_final/ for comments |
How much does work in AI safety help the world? Probability distribution version | 2017-04-26 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | | | Review of current state of cause area | AI safety | Tom Sittler discusses a model created by the Global Priorities Project (GPP) to assess the value of work in AI safety. He has converted the model to a Guesstimate model available at https://www.getguesstimate.com/models/8697 and is seeking comments. Also cross-posted to http://effective-altruism.com/ea/19r/how_much_does_work_in_ai_safety_help_the_world/ for comments |
Final decision: Version 0 | 2017-03-01 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Against Malaria Foundation; Machine Intelligence Research Institute; The Good Food Institute; StrongMinds | | Reasoning supplement | | Version 0 of a decision process for choosing which charity to grant 10,000 UK pounds to. The result was a tie between the Machine Intelligence Research Institute and StrongMinds. See http://effective-altruism.com/ea/187/oxford_prioritisation_project_version_0/ for a cross-post with comments |
Tom Sittler: current view, Machine Intelligence Research Institute | 2017-02-08 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute; Future of Humanity Institute | | Evaluator review of donee | AI safety | Tom Sittler explains why he considers the Machine Intelligence Research Institute the best donation opportunity. Cites http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/ and http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ and mentions Michael Dickens's model as a potential reason to update |
Tom Sittler: Assumptions of arguments for existential risk reduction | 2017-01-27 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | | | Review of current state of cause area | AI safety | The abstract reads: "I review an informal argument for existential risk reduction as the top priority. I argue the informal argument, or at least some renditions of it, are vulnerable to two objections: (i) The far future may not be good, and we are making predictions based on very weak evidence when we estimate whether it will be good (ii) reductions in existential risk over the next century are much less valuable than equivalent increases in the probability that humanity will have a very long future." |
Sorry, we couldn't find any donations!
Sorry, we couldn't find any similar donors.