Oxford Prioritisation Project donations made

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice (see his commits), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; please do not share this data without consulting Vipul Naik. We expect to have completed the first round of development by the end of December 2018. See the about page for more details.

Basic donor information

We do not have any basic donor information for Oxford Prioritisation Project in our system.

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the one used by the donor in all cases.

Cause area | Number of donations | Number of donees | Total | 2017
Effective altruism | 1 | 1 | 12,934.00 | 12,934.00
Total | 1 | 1 | 12,934.00 | 12,934.00

Skipping spending graph as there is less than one year’s worth of donations.

Donation amounts by donee and year

Donee | Cause area | Metadata | Total | 2017
80000 Hours | Career coaching/life guidance | FB Tw WP Site | 12,934.00 | 12,934.00
Total | -- | -- | 12,934.00 | 12,934.00

Skipping spending graph as there is less than one year’s worth of donations.

Donation amounts by influencer and year

Sorry, we couldn't find any influencer information.

Donation amounts by disclosures and year

Sorry, we couldn't find any disclosures information.

Donation amounts by country and year

Sorry, we couldn't find any country information.

Full list of documents in reverse chronological order (21 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
Oxford Prioritisation Project Review | 2017-10-12 | Tom Sittler, Jacob Lagerros | Effective Altruism Forum | Oxford Prioritisation Project | -- | Evaluator retrospective | -- | The two main people who coordinated the Oxford Prioritisation Project look back on the experience and highlight the major lessons for themselves and others
Four quantiative models, aggregation, and final decision | 2017-05-20 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | 80000 Hours, Animal Charity Evaluators, Machine Intelligence Research Institute, StrongMinds | Single donation documentation | Effective altruism/career advice | The post describes how the Oxford Prioritisation Project compared its four finalists (80000 Hours, Animal Charity Evaluators, Machine Intelligence Research Institute, and StrongMinds) by building quantitative models for each, including modeling of uncertainties; an illustrative Monte Carlo sketch of this kind of comparison appears after this table. Based on these quantitative models, 80000 Hours was chosen as the winner. Also posted to http://effective-altruism.com/ea/1ah/four_quantiative_models_aggregation_and_final/ for comments
A model of the Machine Intelligence Research Institute | 2017-05-20 | Sindy Li | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Evaluator review of donee | AI risk | The post describes a quantitative model of the Machine Intelligence Research Institute, available at https://www.getguesstimate.com/models/8789 on Guesstimate. Also posted to http://effective-altruism.com/ea/1ae/a_model_of_the_machine_intelligence_research/ for comments
A model of Animal Charity Evaluators | 2017-05-20 | Dominik Peters, Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Animal Charity Evaluators | Evaluator review of donee | Animal welfare/charity evaluator | This post describes a quantitative model of Animal Charity Evaluators, available at https://repl.it/HWJZ/23 on repl.it. It continues a previous post, http://effective-altruism.com/ea/19g/charity_evaluators_first_model_and_open_questions/, that includes some open questions. Also posted to http://effective-altruism.com/ea/1af/a_model_of_animal_charity_evaluators_oxford/ for comments
A model of 80,000 Hours | 2017-05-14 | Sindy Li | Oxford Prioritisation Project | Oxford Prioritisation Project | 80000 Hours | Evaluator review of donee | Effective altruism/movement growth | The post supplements a Monte Carlo model built in Guesstimate to estimate the impact of 80000 Hours. Also posted to http://effective-altruism.com/ea/1a1/a_model_of_80000_hours_oxford_prioritisation/ for comments
A model of StrongMinds | 2017-05-13 | Lovisa Tengberg, Konstantin Sietzy | Oxford Prioritisation Project | Oxford Prioritisation Project | StrongMinds | Evaluator review of donee | Mental health | The post describes a detailed, annotated Guesstimate translation, at https://www.getguesstimate.com/models/8753, of a model of StrongMinds built in 2016 by James Snowden of the Centre for Effective Altruism. The blog post serves as an appendix to the Guesstimate model. Also posted at http://effective-altruism.com/ea/1a0/a_model_of_strongminds_oxford_prioritisation/ for comments
How much does work in AI safety help the world? Probability distribution version | 2017-04-26 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | -- | Review of current state of cause area | AI risk | Tom Sittler discusses a model created by the Global Priorities Project (GPP) to assess the value of work in AI safety. He has converted the model to a Guesstimate model available at https://www.getguesstimate.com/models/8697 and is looking for comments. Also cross-posted to http://effective-altruism.com/ea/19r/how_much_does_work_in_ai_safety_help_the_world/
Charity evaluators: a first model and open questions | 2017-04-25 | Dominik Peters, Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | GiveWell, Animal Charity Evaluators | Review of current state of cause area | Charity evaluator | The abstract says: "We describe a simple simulation model for the recommendations of a charity evaluator like GiveWell or ACE. The model captures some real-world phenomena, such as initial overconfidence in impact estimates. We are unsure how to choose the parameters of the underlying distributions, and are happy to receive feedback on this." See http://effective-altruism.com/ea/19g/charity_evaluators_first_model_and_open_questions/ for a cross-post with comments
Modelling the Good Food Institute | 2017-04-18 | Dominik Peters | Oxford Prioritisation Project | Oxford Prioritisation Project | The Good Food Institute | Evaluator review of donee | Animal welfare/meat alternatives | The summary says: "We have attempted to build a quantitative model to estimate the impact of the Good Food Institute (GFI). We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals. In this post, I explain some of the modelling approaches we tried, and why we are not satisfied with them."
AI Safety: Is it worthwhile for us to look further into donating into AI research? | 2017-03-11 | Qays Langan-Dathi | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Review of current state of cause area | AI risk | The post concludes: "In conclusion my answer to my main point is, yes. There is a good chance that AI risk prevention is the most cost effective focus area for saving the most amount of lives with or without regarding future human lives."
Should we make a grant to a meta-charity? | 2017-03-11 | Daniel May | Oxford Prioritisation Project | Oxford Prioritisation Project | Giving What We Can, 80000 Hours, Raising for Effective Giving | Review of current state of cause area | Effective altruism/movement growth/fundraising | The summary says: "I introduce the concept of meta-charity, discuss some considerations for OxPrio, and look into how meta-charities evaluate their impact, and the reliability of these figures for our purposes (finding the most cost-effective organisation to donate £10,000 today). I then look into the room for more funding for a few meta-charities, and finally conclude that these are worth seriously pursuing further." See http://effective-altruism.com/ea/189/daniel_may_should_we_make_a_grant_to_a/ for a cross-post with comments
Final decision: Version 0 | 2017-03-01 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Against Malaria Foundation, Machine Intelligence Research Institute, The Good Food Institute, StrongMinds | Reasoning supplement | -- | Version 0 of a decision process for which charity to grant 10,000 UK pounds to. The result was a tie between the Machine Intelligence Research Institute and StrongMinds. See http://effective-altruism.com/ea/187/oxford_prioritisation_project_version_0/ for a cross-post with comments
Konstantin Sietzy: current view, StrongMinds | 2017-02-21 | Konstantin Sietzy | Oxford Prioritisation Project | Oxford Prioritisation Project | StrongMinds, Machine Intelligence Research Institute | Evaluator review of donee | Mental health | Konstantin Sietzy explains why StrongMinds is the best charity in his view, and lists the Machine Intelligence Research Institute as the runner-up
Sindy Li: current view, Against Malaria Foundation | 2017-02-19 | Sindy Li | Oxford Prioritisation Project | Oxford Prioritisation Project | Against Malaria Foundation, Schistosomiasis Control Initiative, Drugs for Neglected Diseases Initiative | Evaluator review of donee | Global health | Sindy Li provides her best guess as to the best opportunity for the Oxford Prioritisation Project, saying it is the Against Malaria Foundation. Her analysis relies on the GiveWell cost-effectiveness estimates. She identifies mental health as another area (citing https://oxpr.io/blog/2017/2/28/lovisa-tengberg-current-view-strongminds by Lovisa Tengberg and https://oxpr.io/blog/2017/2/28/konstantin-sietzy-current-view-strongminds by Konstantin Sietzy) that she might look into more
Daniel May: current view, Machine Intelligence Research Institute | 2017-02-15 | Daniel May | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute | Evaluator review of donee | AI risk | Daniel May evaluates the Machine Intelligence Research Institute and describes his reasons for considering it the best donation opportunity
Daniel May: "Open Science: little room for more funding." | 2017-02-15 | Daniel May | Oxford Prioritisation Project | Oxford Prioritisation Project | Laura and John Arnold Foundation, Open Philanthropy Project | Review of current state of cause area | Scientific research | The summary states: "I consider open science as a cause area, by reviewing Open Phil’s published work, as well as some popular articles and research, and assessing the field for scale, neglectedness, and tractability. I conclude that the best giving opportunities will likely be filled by foundations such as LJAF and Open Phil, and recommend that the Oxford Prioritisation Project focusses elsewhere." Also available as a Google Doc at https://docs.google.com/document/d/13wsMAugRacu52EPZo6-7NJh4QuYayKyIbjChwU0KsVU/edit?usp=sharing and at the Effective Altruism Forum at http://effective-altruism.com/ea/17g/daniel_may_open_science_little_room_for_more/ (10 comments)
Lovisa Tengberg: current view, StrongMinds | 2017-02-14 | Lovisa Tengberg | Oxford Prioritisation Project | Oxford Prioritisation Project | StrongMinds, Against Malaria Foundation | Evaluator review of donee | Mental health | Lovisa Tengberg evaluates StrongMinds and argues that it could be the best donation opportunity. Other candidates mentioned, all in the area of mental health, are Alderman Foundation, AEGIS Foundation, and Network for Empowerment and Progressive Initiative
Laurie Pycroft: "CRISPR biorisk as an Oxford Prioritisation Project topic" | 2017-02-13 | Laurie Pycroft | Oxford Prioritisation Project | Oxford Prioritisation Project | -- | Review of current state of cause area | Biosecurity and pandemic preparedness | The article explores CRISPR biorisk as a potential funding opportunity for the Oxford Prioritisation Project
Another brick in the wall? | 2017-02-13 | Tom Sittler, Konstantin Sietzy, Jacob Lagerros | Oxford Prioritisation Project | Oxford Prioritisation Project | -- | Broad donor strategy | -- | The summary begins: "Should the Oxford Prioritisation Project focus on donation opportunities that are ‘the right size’? Is it important to find a £10,000 funding gap for a specific purchase, (or by way of analogy, £10,000-shaped lego bricks)?" The conclusion is that Lego bricks are unlikely to be relevant
Tom Sittler: current view, Machine Intelligence Research Institute | 2017-02-08 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | Machine Intelligence Research Institute, Future of Humanity Institute | Evaluator review of donee | AI risk | Tom Sittler explains why he considers the Machine Intelligence Research Institute the best donation opportunity. Cites http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity http://effective-altruism.com/ea/14c/why_im_donating_to_miri_this_year/ http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ and mentions Michael Dickens' model as a potential reason to update
Tom Sittler: Assumptions of arguments for existential risk reduction | 2017-01-27 | Tom Sittler | Oxford Prioritisation Project | Oxford Prioritisation Project | -- | Review of current state of cause area | AI risk | The abstract reads: "I review an informal argument for existential risk reduction as the top priority. I argue the informal argument, or at least some renditions of it, are vulnerable to two objections: (i) The far future may not be good, and we are making predictions based on very weak evidence when we estimate whether it will be good (ii) reductions in existential risk over the next century are much less valuable than equivalent increases in the probability that humanity will have a very long future."
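
Several of the documents above describe Guesstimate-style quantitative models with explicit uncertainty, which the project aggregated to compare its four finalists. The snippet below is a minimal, hypothetical sketch of that kind of Monte Carlo comparison; the distribution family, parameters, and resulting numbers are invented for illustration and are not taken from the project's actual models.

import random

# Hypothetical sketch: each finalist gets a cost-effectiveness distribution
# (here lognormal, with made-up parameters); we draw many samples and compare
# summary statistics. Purely illustrative, not the project's actual figures.
FINALISTS = {
    "80000 Hours": (1.0, 1.2),                              # (mu, sigma) of log cost-effectiveness
    "Animal Charity Evaluators": (0.8, 1.0),
    "Machine Intelligence Research Institute": (0.5, 2.0),
    "StrongMinds": (0.9, 0.6),
}

def sample_cost_effectiveness(mu, sigma, n=100_000):
    """Draw n lognormal samples of cost-effectiveness (arbitrary units)."""
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

def summarize(samples):
    """Return the mean and median of a list of samples."""
    ordered = sorted(samples)
    return sum(ordered) / len(ordered), ordered[len(ordered) // 2]

if __name__ == "__main__":
    random.seed(0)
    for name, (mu, sigma) in FINALISTS.items():
        mean, median = summarize(sample_cost_effectiveness(mu, sigma))
        print(f"{name}: mean={mean:.2f}, median={median:.2f}")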

Full list of donations in reverse chronological order (1 donation)

Donee | Amount (current USD) | Donation date | Cause area | URL | Influencer | Notes
80000 Hours | 12,934.00 | 2017-05-09 | Effective altruism/movement growth | https://oxpr.io/blog/2017/5/20/four-quantiative-models-aggregation-and-final-decision | -- | Announced: 2017-05-09.

Similarity to other donors

The following table uses the Jaccard index and cosine similarity to compare the similarity of donors. We show the top 8 donors by Jaccard index; the table includes at most 30 donors, and only donors with at least one donee in common.
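
For donee sets A and B, the Jaccard index is |A ∩ B| / |A ∪ B|, and the cosine similarity column matches treating each donor as a binary donee-indicator vector, giving |A ∩ B| / √(|A| · |B|). Below is a minimal sketch of the computation; the other donor's donee names are placeholders, and the weighted cosine similarity (which presumably also incorporates donation amounts) is not reproduced here.

from math import sqrt

def jaccard(a, b):
    """Jaccard index of two donee sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def cosine(a, b):
    """Cosine similarity of binary donee-indicator vectors: |A ∩ B| / sqrt(|A|·|B|)."""
    return len(a & b) / sqrt(len(a) * len(b))

# Illustrative check against the first row of the table: a donor with 3
# distinct donees shares exactly 1 donee with this donor's single donee.
this_donor = {"80000 Hours"}
other_donor = {"80000 Hours", "Donee B", "Donee C"}  # placeholder donee names

print(round(jaccard(this_donor, other_donor), 4))  # 0.3333
print(round(cosine(this_donor, other_donor), 4))   # 0.5774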

Donor | Number of distinct donees | Number of donees in common (intersection) | Union size | Jaccard similarity | Cosine similarity | Weighted cosine similarity
Mark Barnes | 3 | 1 | 3 | 0.3333 | 0.5774 | 0.3107
Sophia Cyna | 3 | 1 | 3 | 0.3333 | 0.5774 | 0.0782
Haseeb Qureshi | 4 | 1 | 4 | 0.25 | 0.5 | 0.6769
Raymond Arnold | 5 | 1 | 5 | 0.2 | 0.4472 | 0.0988
Saulius Šimčikas | 8 | 1 | 8 | 0.125 | 0.3536 | 0.1359
Patrick Brinich-Langlois | 9 | 1 | 9 | 0.1111 | 0.3333 | 0.7326
Jeff Kaufman and Julia Wise | 15 | 1 | 15 | 0.0667 | 0.2582 | 0.0638
Open Philanthropy Project | 192 | 1 | 192 | 0.0052 | 0.0722 | 0.0255