This is an online portal with information on donations that were announced publicly (or that have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or to any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
We do not have any donor information for Oxford Prioritisation Project in our system.
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Overall | 1 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 |
Effective altruism | 1 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 | 12,934 |
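For readers who want to reproduce the summary row above, here is a minimal sketch (not the site's actual code) of how these statistics could be computed from a flat list of donation amounts in USD. The percentile interpolation rule is an assumption, since this page does not document the site's exact convention.

```python
# Minimal sketch, not the site's actual implementation.
# Assumption: donations are a flat list of USD amounts; numpy's default
# (linear-interpolation) percentile convention is assumed.
import numpy as np

donations = [12_934.00]  # the single donation recorded for this donor

summary = {
    "Count": len(donations),
    "Median": float(np.median(donations)),
    "Mean": float(np.mean(donations)),
    "Minimum": float(np.min(donations)),
    **{f"{p}th percentile": float(np.percentile(donations, p))
       for p in range(10, 100, 10)},
    "Maximum": float(np.max(donations)),
}
print(summary)  # with one donation, every statistic equals 12,934.00
```

With a single donation, the median, mean, minimum, maximum, and every percentile coincide, which is why the table above repeats 12,934 across all columns.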
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: The cause area classification used here may not match the one used by the donor in all cases.
Cause area | Number of donations | Number of donees | Total | 2017 |
---|---|---|---|---|
Effective altruism | 1 | 1 | 12,934.00 | 12,934.00 |
Total | 1 | 1 | 12,934.00 | 12,934.00 |
Skipping spending graph as there is less than one year's worth of donations.
If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.
For the meaning of “classified” and “unclassified”, see the page clarifying this.
Subcause area | Number of donations | Number of donees | Total | 2017 |
---|---|---|---|---|
Effective altruism/movement growth | 1 | 1 | 12,934.00 | 12,934.00 |
Classified total | 1 | 1 | 12,934.00 | 12,934.00 |
Unclassified total | 0 | 0 | 0.00 | 0.00 |
Total | 1 | 1 | 12,934.00 | 12,934.00 |
Skipping spending graph as there is less than one year's worth of donations.
Donee | Cause area | Metadata | Total | 2017 |
---|---|---|---|---|
80,000 Hours | Career coaching/life guidance | FB Tw WP Site | 12,934.00 | 12,934.00 |
Total | -- | -- | 12,934.00 | 12,934.00 |
Skipping spending graph as there is less than one year's worth of donations.
Sorry, we couldn't find any influencer information.
Sorry, we couldn't find any disclosures information.
Sorry, we couldn't find any country information.
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|
Oxford Prioritisation Project Review | 2017-10-12 | Tom Sittler Jacob Lagerros | Effective Altruism Forum | Oxford Prioritisation Project | -- | Evaluator retrospective | -- | The two main people who coordinated the Oxford Prioritisation Project look back on the experience and highlight the major lessons for themselves and others |
Four quantiative models, aggregation, and final decision | 2017-05-20 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | 80,000 Hours Animal Charity Evaluators Machine Intelligence Research Institute StrongMinds | Single donation documentation | Effective altruism/career advice | The post describes how the Oxford Prioritisation Project compared its four finalists (80000 Hours, Animal Charity Evaluators, Machine Intelligence Research Institute, and StrongMinds) by building quantitative models for each, including modeling of uncertainties. Based on these quantitative models, 80000 Hours was chosen as the winner. Also posted to http://effective-altruism.com/ea/1ah/four_quantiative_models_aggregation_and_final/ for comments |
A model of the Machine Intelligence Research Institute | 2017-05-20 | Sindy Li | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | The post describes a quantitative model of the Machine Intelligence Research Institute, available at https://www.getguesstimate.com/models/8789 on Guesstimate. Also posted to http://effective-altruism.com/ea/1ae/a_model_of_the_machine_intelligence_research/ for comments |
A model of Animal Charity Evaluators | 2017-05-20 | Dominik Peters Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Animal Charity Evaluators | Evaluator review of donee | Animal welfare/charity evaluator | This post describes a quantitative model of Animal Charity Evaluators, available at https://repl.it/HWJZ/23 on repl.it. It continues a previous post http://effective-altruism.com/ea/19g/charity_evaluators_first_model_and_open_questions/ that includes some open questions. Also posted to http://effective-altruism.com/ea/1af/a_model_of_animal_charity_evaluators_oxford/ for comments |
A model of 80,000 Hours | 2017-05-14 | Sindy Li | Oxford Prioritisation Project | Oxford Prioritisation Project | 80,000 Hours | Evaluator review of donee | Effective altruism/movement growth | The post supplements a model built in Guesstimate to estimate the impact of 80000 Hours. The model is a Monte Carlo model. Also posted to http://effective-altruism.com/ea/1a1/a_model_of_80000_hours_oxford_prioritisation/ for comments |
A model of StrongMinds | 2017-05-13 | Lovisa Tenberg Konstantin Sietzy | Oxford Prioritisation Project | Oxford Prioritisation Project | StrongMinds | Evaluator review of donee | Mental health | The post describes a detailed, annotated translation in Guesstimate at https://www.getguesstimate.com/models/8753 of a model of StrongMinds built in 2016 by James Snowden of the Centre for Effective Altruism. The blog post serves as an appendix to the Guesstimate. Also posted at http://effective-altruism.com/ea/1a0/a_model_of_strongminds_oxford_prioritisation/ for comments |
How much does work in AI safety help the world? Probability distribution version | 2017-04-26 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | -- | Review of current state of cause area | AI safety | Tom Sittler discusses a model created by the Global Priorities Project (GPP) to assess the value of work in AI safety. He has converted the model to a Guesstimate model available at https://www.getguesstimate.com/models/8697 and is seeking comments. Also cross-posted to http://effective-altruism.com/ea/19r/how_much_does_work_in_ai_safety_help_the_world/ for comments |
Charity evaluators: a first model and open questions | 2017-04-25 | Dominik Peters Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | GiveWell Animal Charity Evaluators | Review of current state of cause area | Charity evaluator | The abstract says: "We describe a simple simulation model for the recommendations of a charity evaluator like GiveWell or ACE. The model captures some real-world phenomena, such as initial overconfidence in impact estimates. We are unsure how to choose the parameters of the underlying distributions, and are happy to receive feedback on this." See http://effective-altruism.com/ea/19g/charity_evaluators_first_model_and_open_questions/ for a cross-post with comments |
Modelling the Good Food Institute | 2017-04-18 | Dominik Peters | Oxford Prioritisation Project | Oxford Prioritisation Project | The Good Food Institute | Evaluator review of donee | Animal welfare/meat alternatives | The summary says: "We have attempted to build a quantitative model to estimate the impact of the Good Food Institute (GFI). We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals. In this post, I explain some of the modelling approaches we tried, and why we are not satisfied with them." |
AI Safety: Is it worthwhile for us to look further into donating into AI research? | 2017-03-11 | Qays Langan-Dathi | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Review of current state of cause area | AI safety | The post concludes: "In conclusion my answer to my main point is, yes. There is a good chance that AI risk prevention is the most cost effective focus area for saving the most amount of lives with or without regarding future human lives." |
Should we make a grant to a meta-charity? | 2017-03-11 | Daniel May | Oxford Prioritisation Project | Oxford Prioritisation Project | Giving What We Can 80,000 Hours Raising for Effective Giving | Review of current state of cause area | Effective altruism/movement growth/fundraising | The summary says: "I introduce the concept of meta-charity, discuss some considerations for OxPrio, and look into how meta-charities evaluate their impact, and the reliability of these figures for our purposes (finding the most cost-effective organisation to donate £10,000 today). I then look into the room for more funding for a few meta-charities, and finally conclude that these are worth seriously pursuing further." See http://effective-altruism.com/ea/189/daniel_may_should_we_make_a_grant_to_a/ for a cross-post that has comments |
Final decision: Version 0 | 2017-03-01 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Against Malaria Foundation Machine Intelligence Research Institute The Good Food Institute StrongMinds | Reasoning supplement | -- | Version 0 of the decision process for which charity should receive the 10,000 UK pounds grant. The result was a tie between the Machine Intelligence Research Institute and StrongMinds. See http://effective-altruism.com/ea/187/oxford_prioritisation_project_version_0/ for a cross-post with comments |
Konstantin Sietzy: current view, StrongMinds | 2017-02-21 | Konstantin Sietzy | Oxford Prioritisation Project | Oxford Prioritisation Project | StrongMinds Machine Intelligence Research Institute | Evaluator review of donee | Mental health | Konstantin Sietzy explains why StrongMinds is the best charity in his view. Also lists Machine Intelligence Research Institute as the runner-up |
Sindy Li: current view, Against Malaria Foundation | 2017-02-19 | Sindy Li | Oxford Prioritisation Project | Oxford Prioritisation Project | Against Malaria Foundation Schistosomiasis Control Initiative Drugs for Neglected Diseases Initiative | Evaluator review of donee | Global health | Sindy Li provides her best guess as to the best opportunity for the Oxford Prioritisation Project, saying it is the Against Malaria Foundation. Her analysis relies on the GiveWell cost-effectiveness estimates. She identifies mental health as another area (citing https://oxpr.io/blog/2017/2/28/lovisa-tengberg-current-view-strongminds by Lovisa Tengberg and https://oxpr.io/blog/2017/2/28/konstantin-sietzy-current-view-strongminds by Konstantin Sietzy) that she might look into more |
Daniel May: current view, Machine Intelligence Research Institute | 2017-02-15 | Daniel May | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Evaluator review of donee | AI safety | Daniel May evaluates the Machine Intelligence Research Institute and describes his reasons for considering it the best donation opportunity |
Daniel May: "Open Science: little room for more funding." | 2017-02-15 | Daniel May | Oxford Prioritisation Project | Oxford Prioritisation Project Laura and John Arnold Foundation Open Philanthropy | Review of current state of cause area | Scientific research | The summary states: "I consider open science as a cause area, by reviewing Open Phil’s published work, as well as some popular articles and research, and assessing the field for scale, neglectedness, and tractability. I conclude that the best giving opportunities will likely be filled by foundations such as LJAF and Open Phil, and recommend that the Oxford Prioritisation Project focusses elsewhere." Also available as a Google Doc at https://docs.google.com/document/d/13wsMAugRacu52EPZo6-7NJh4QuYayKyIbjChwU0KsVU/edit?usp=sharing and at the Effective Altruism Forum at http://effective-altruism.com/ea/17g/daniel_may_open_science_little_room_for_more/ (10 comments) | |
Lovisa Tengberg: current view, StrongMinds | 2017-02-14 | Lovisa Tengberg | Oxford Prioritisation Project | Oxford Prioritisation Project | StrongMinds Against Malaria Foundation | Evaluator review of donee | Mental health | Lovisa Tengberg evaluates StrongMinds and argues that it could be the best donation opportunity. Other candidates mentioned, all in the area of mental health, are Alderman Foundation, AEGIS Foundation, and Network for Empowerment and Progressive Initiative |
Laurie Pycroft: "CRISPR biorisk as an Oxford Prioritisation Project topic" | 2017-02-13 | Laurie Pycroft | Oxford Prioritisation Project | Oxford Prioritisation Project | Review of current state of cause area | Biosecurity and pandemic preparedness | The article explores CRISPR biorisk as a potential funding opportunity for the Oxford Prioritisation Project | |
Another brick in the wall? | 2017-02-13 | Tom Sittler Konstantin Sietzy Jacob Lagerros | Oxford Prioritisation Project | Oxford Prioritisation Project | -- | Broad donor strategy | -- | The summary begins: "Should the Oxford Prioritisation Project focus on donation opportunities that are ‘the right size’? Is it important to find a £10,000 funding gap for a specific purchase, (or by way of analogy, £10,000-shaped lego bricks)?" The conclusion is that Lego bricks are unlikely to be relevant |
Tom Sittler: current view, Machine Intelligence Research Institute | 2017-02-08 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute Future of Humanity Institute | Evaluator review of donee | AI safety | Tom Sittler explains why he considers the Machine Intelligence Research Institute the best donation opportunity. Cites http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/ http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ and mentions Michael Dickens' model as a potential reason to update |
Tom Sittler: Assumptions of arguments for existential risk reduction | 2017-01-27 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | -- | Review of current state of cause area | AI safety | The abstract reads: "I review an informal argument for existential risk reduction as the top priority. I argue the informal argument, or at least some renditions of it, are vulnerable to two objections: (i) The far future may not be good, and we are making predictions based on very weak evidence when we estimate whether it will be good (ii) reductions in existential risk over the next century are much less valuable than equivalent increases in the probability that humanity will have a very long future." |
Graph of top 10 donees by amount, showing the timeframe of donations
Donee | Amount (current USD) | Amount rank (out of 1) | Donation date | Cause area | URL | Influencer | Notes |
---|---|---|---|---|---|---|---|
80,000 Hours | 12,934.00 | 1 | -- | Effective altruism/movement growth | https://oxpr.io/blog/2017/5/20/four-quantiative-models-aggregation-and-final-decision | -- | Donation process: The donation is the outcome of the Oxford Prioritisation Project, a months-long group project that looked at a number of donation targets to find the best one. The donation amount of 10,000 GBP was pre-determined; 80,000 Hours was the ultimate winner and received the entire amount. Intended use of funds (category): Organizational general support. Donor reason for selecting the donee: The selection of 80,000 Hours as the target for the donation is the result of a lengthy process of deliberation and comparison. The final stage of comparison included four charities: 80,000 Hours, Machine Intelligence Research Institute, StrongMinds, and Animal Charity Evaluators. The final comparison was carried out through a quantitative analysis summarized at https://oxpr.io/blog/2017/5/20/expected-value-estimates-we-cautiously-took-literally and the post describing the model for 80,000 Hours is at https://oxpr.io/blog/2017/5/13/a-model-of-80000-hours. Donor reason for donating that amount (rather than a bigger or smaller amount): The amount (10,000 GBP) was determined at the outset of the Oxford Prioritisation Project as the donation amount that the project sought to allocate. Donor reason for donating at this time (rather than earlier or later): Timing was determined by the end of the time period for the Oxford Prioritisation Project. Donor retrospective of the donation: See https://forum.effectivealtruism.org/posts/JfDW9LfcMFGXhLxTC/a-model-of-80-000-hours-oxford-prioritisation-project (GW, IR) for a retrospective on the Oxford Prioritisation Project. 80,000 Hours is mentioned only once in the retrospective: "We would guess that the real costs of the £10,000 grant were low. At the outset, the probability was quite high that the money would eventually be granted to a high-impact organisation, with a cost-effectiveness not several times smaller than CEA’s counterfactual use of the money. In fact, the grant was given to 80,000 Hours." Announced: 2017-05-09. |
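A note on the amount: the table reports 12,934.00 USD while the grant was fixed at 10,000 GBP, so the implied GBP-to-USD conversion rate is 1.2934. This is consistent with market rates around the May 2017 announcement, though the exact rate and conversion date the site used are not stated on this page. A minimal sketch of the arithmetic:

```python
# Sketch of the implied currency conversion; the rate is inferred from
# the two published figures, not taken from a stated source.
amount_gbp = 10_000.00
implied_rate = 12_934.00 / amount_gbp  # 1.2934 USD per GBP
amount_usd = amount_gbp * implied_rate
print(f"{amount_usd:,.2f} USD (implied rate {implied_rate:.4f} USD/GBP)")
# -> 12,934.00 USD (implied rate 1.2934 USD/GBP)
```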
Sorry, we couldn't find any similar donors.