Luke Muehlhauser money moved

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of March 2023. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Full list of documents in reverse chronological order (4 documents)

Title (URL linked)Publication dateAuthorPublisherAffected donorsAffected doneesDocument scopeNotes
Hi, I’m Luke Muehlhauser. AMA about Open Philanthropy’s new report on consciousness and moral patienthood2017-06-28Luke Muehlhauser Effective Altruism ForumOpen Philanthropy Dyrevernalliansen Albert Schweitzer Foundation for Our Contemporaries Eurogroup for Animals Reasoning supplementLuke Muehlhauser hosts an Ask Me Anything (AMA) on the Effective Altruism Forum about his recently published report https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood (2017-06-06). The post gets 61 comments.
2017 Report on Consciousness and Moral Patienthood2017-06-06Luke Muehlhauser Open PhilanthropyOpen Philanthropy Dyrevernalliansen Albert Schweitzer Foundation for Our Contemporaries Eurogroup for Animals Reasoning supplementThe writeup announced at https://www.openphilanthropy.org/blog/new-report-consciousness-and-moral-patienthood provides an overview of the findings of Luke Muehlhauser on moral patienthood -- a broad subject covering which creatures warrant moral concern. As described at https://www.openphilanthropy.org/blog/radical-empathy Open Phil identifies with radical empathy, extending moral concern to beings even if they are not traditionally subjects of empathy and concern. See https://www.facebook.com/groups/effective.altruists/permalink/1426329927423360/ for a discussion of the post on the Effective Altruism Facebook group, and see http://effective-altruism.com/ea/1c3/hi_im_luke_muehlhauser_ama_about_open/ for a related AMA. The writeup influenced the Open Philanthropy Project Farm Animal Welfare Officer Lewis Bollard to investigate and donate in the domain of fish welfare; see http://effective-altruism.com/ea/1c3/hi_im_luke_muehlhauser_ama_about_open/b8o for a comment clarifying this effect.
A conversation with Lewis Bollard, February 23, 20172017-02-23Lewis Bollard Luke Muehlhauser Open PhilanthropyOpen Philanthropy Review of current state of cause areaFarm animal welfare program officer Lewis Bollard speaks with Luke Muehlhauser, investigator into moral patienthood, on the history of the animal rights and welfare movements as well as recent developments.
Brian Tomasik, Research Lead, Foundational Research Institute on October 6, 20162016-10-06Brian Tomasik Luke Muehlhauser Open PhilanthropyOpen Philanthropy Reasoning supplementConversation as part of research by Muehlhauser into moral patienthood that would culminate in the writeup https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood published in 2017.

Full list of donations in reverse chronological order (21 donations)

DonorDoneeAmount (current USD)Donation dateCause areaURLNotes
Open PhilanthropyCenter for Security and Emerging Technology38,920,000.002021-08AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support-august-2021 Intended use of funds (category): Organizational general support

Intended use of funds: The grant page says: "CSET is a think tank, incubated by our January 2019 support, dedicated to policy analysis at the intersection of national and international security and emerging technologies. This funding is intended to augment our original support for CSET, particularly for its work on security and artificial intelligence."

Other notes: Intended funding timeframe in months: 36.
Open PhilanthropyRethink Priorities315,500.002021-03Animal welfare/moral patienthood/researchhttps://www.openphilanthropy.org/focus/us-policy/farm-animal-welfare/rethink-priorities-moral-patienthood-moral-weight-research Donation process: The donation process is not explicitly described, but hints are provided. One of the grant investigators is Luke Muehlhauser, who is not usually involved with animal welfare grants, but had previously produced a report https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood on consciousness and moral patienthood that the grant page links to.

Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support research related to moral patienthood and moral weight."

Donor reason for selecting the donee: The grant page says: "We believe the research outputs may help us compare future opportunities within farm animal welfare, prioritize across causes, and update our assumptions informing our worldview diversification work." It links to the blog post https://www.openphilanthropy.org/blog/worldview-diversification from 2016.

Other notes: This is a total across two grants. Intended funding timeframe in months: 24.
Open PhilanthropyCenter for Security and Emerging Technology8,000,000.002021-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says "This funding is intended to augment our original support for CSET, particularly for its work on the intersection of security and artificial intelligence."

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-security-and-emerging-technology-general-support-august-2021 for a much larger amount suggests continued satisfaction with the grantee.
Open PhilanthropyMassachusetts Institute of Technology275,344.002020-11AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/massachusetts-institute-of-technology-ai-trends-and-impacts-research Intended use of funds (category): Direct project expenses

Intended use of funds: The grant page says "The research will consist of projects to learn how algorithmic improvement affects economic growth, gather data on the performance and compute usage of machine learning methods, and estimate cost models for deep learning projects."
Open PhilanthropyCenter for a New American Security24,350.002020-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work exploring possible projects related to AI governance."

Donor reason for selecting the donee: No explicit reason is provided for the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre), suggesting a broader endorsement.

Donor reason for donating at this time (rather than earlier or later): No explicit reason is provided for the timing of the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre).
Open PhilanthropyCenter for a New American Security116,744.002020-10AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-and-security-projects Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support work by Paul Scharre on projects related to AI and security."

Donor reason for selecting the donee: No explicit reason is provided for the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre), suggesting a broader endorsement.

Donor reason for donating at this time (rather than earlier or later): No explicit reason is provided for the timing of the donation, but another donation https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-a-new-american-security-ai-governance-projects is made at around the same time, to the same donee and with the same earmark (Paul Scharre).
Open PhilanthropyCenter for Strategic and International Studies118,307.002020-09AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor retrospective of the donation: The increase in grant amount in May 2021, from $75,245 to $118,307, suggests that Open Phil was satisfied with initial progress on the grant.

Other notes: The grant amount was updated in May 2021. The original amount was $75,245.
Open PhilanthropyCenter for International Security and Cooperation67,000.002020-09AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.
Open PhilanthropyRice, Hadley, Gates & Manuel LLC25,000.002020-09AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rice-hadley-gates-manuel-ai-risk Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to explore possible projects related to AI accident risk in the context of technology competition."

Donor reason for selecting the donee: No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.

Donor reason for donating at this time (rather than earlier or later): No specific reasons are provided, but two other grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-strategic-and-international-studies-ai-accident-risk-and-technology-competition and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/center-for-international-security-and-cooperation-ai-accident-risk-and-technology-competition made at about the same time for the same intended use suggest interest from the donor in this particular use case at this time.
Open PhilanthropyAndrew Lohn15,000.002020-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/andrew-lohn-paper-machine-learning-model-robustness Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to write a paper on machine learning model robustness for safety-critical AI systems."

Donor reason for selecting the donee: No reason is specified, but the grantee's work had previously been funded by Open Phil via the RAND Corporation for research on AI assurance methods.
Open PhilanthropyThe Wilson Center496,540.002020-06AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020 Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to organize additional in-depth AI policy seminars as part of its seminar series."

Donor reason for selecting the donee: The grant page says "We continue to believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor reason for donating that amount (rather than a bigger or smaller amount): No reason is given for the amount. The grant is a little more than the original $368,440 two-year grant, so the additional amount is likely expected to double the frequency of AI policy seminars.

Donor reason for donating at this time (rather than earlier or later): The grant is a top-up rather than a renewal; the previous two-year grant was made in February 2020. No specific reasons for timing are given.
Open PhilanthropyJohns Hopkins University55,000.002020-03AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/johns-hopkins-kaplan-menard Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support the initial research of Professors Jared Kaplan and Brice Ménard on principles underlying neural network training and performance."
Open PhilanthropyGood Judgment Inc.40,000.002020-03Biosecurity and pandemic preparedness/COVID-19https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/good-judgment-inc-covid-19-forecasting Intended use of funds (category): Direct project expenses

Intended use of funds: Grant to expand "efforts to aggregate, publish, and track forecasts about the COVID-19 outbreak with the hope that the forecasts can help improve planning by health security professionals and the broader public, limit the spread of the virus, and save lives. The forecasts are aggregated each day from the most accurate 1-2% of forecasters from a large-scale, government-funded series of forecasting tournaments, plus an annual uptake of a handful of top performers from the nearly 40,000 forecasters on Good Judgement Open." The predictions are at https://goodjudgment.io/covid/dashboard/ and the reasoning is explained more in https://www.openphilanthropy.org/blog/forecasting-covid-19-pandemic

Donor reason for selecting the donee: The grant is made at around the time the COVID-19 pandemic is being acknowledged worldwide, and just as Open Phil is ramping up grantmaking in the area. The grant investigator, Luke Muehlhauser, has generally been interested in forecasting. Most other COVID-19 grants are investigated by Jacob Trefethen.

Donor reason for donating that amount (rather than a bigger or smaller amount): Amount likely determined by project cost

Donor reason for donating at this time (rather than earlier or later): Timing determined by the outbreak of the COVID-19 pandemic
Intended funding timeframe in months: 1

Donor thoughts on making further donations to the donee: The blog post https://www.openphilanthropy.org/blog/forecasting-covid-19-pandemic says: "We may commission additional forecasts related to COVID-19 in the coming months, and we welcome suggestions of well-formed questions for which regularly updated forecasts would be especially helpful to public health professionals and the broader public."

Other notes: Announced: 2020-03-17.
Open PhilanthropyWestExec540,000.002020-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/westexec-report-on-assurance-in-machine-learning-systems Intended use of funds (category): Direct project expenses

Intended use of funds: Contractor agreement "to support the production and distribution of a report on advancing policy, process, and funding for the Department of Defense’s work on test, evaluation, verification, and validation for deep learning systems."

Donor retrospective of the donation: The increases in grant amounts suggest that the donor was satisfied with initial progress.

Other notes: The grant amount was updated in October and November 2020 and again in May 2021. The original grant amount had been $310,000. Announced: 2020-03-20.
Open PhilanthropyThe Wilson Center368,440.002020-02AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-february-2020 Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to continue support for a series of in-depth AI policy seminars."

Donor reason for selecting the donee: The grant page says: "We continue to believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor reason for donating that amount (rather than a bigger or smaller amount): The amount is similar to the previous grant of $400,000 over a similar time period (two years).

Donor reason for donating at this time (rather than earlier or later): The grant is made almost two years after the original two-year grant, so its timing is likely determined by the original grant running out.
Intended funding timeframe in months: 24

Donor retrospective of the donation: The followup grant https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020 suggests that the donor was satisfied with the outcome of the grant.
Open PhilanthropyRAND Corporation30,751.002020-01AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/rand-corporation-research-on-the-state-of-ai-assurance-methods Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support exploratory research by Andrew Lohn on the state of AI assurance methods."

Donor retrospective of the donation: A few months later, Open Phil would make a grant directly to Andrew Lohn for machine learning robustness research, suggesting that they were satisfied with the outcomes from this grant.

Other notes: Announced: 2020-03-19.
Open PhilanthropyCenter for Security and Emerging Technology55,000,000.002019-01Security/Biosecurity and pandemic preparedness/Global catastrophic risks/AI safetyhttps://www.openphilanthropy.org/giving/grants/georgetown-university-center-security-and-emerging-technology Intended use of funds (category): Organizational general support

Intended use of funds: Grant via Georgetown University for the Center for Security and Emerging Technology (CSET), a new think tank led by Jason Matheny, formerly of IARPA, dedicated to policy analysis at the intersection of national and international security and emerging technologies. CSET plans to provide nonpartisan technical analysis and advice related to emerging technologies and their security implications to the government, key media outlets, and other stakeholders.

Donor reason for selecting the donee: Open Phil thinks that one of the key factors in whether AI is broadly beneficial for society is whether policymakers are well-informed and well-advised about the nature of AI’s potential benefits, potential risks, and how these relate to potential policy actions. As AI grows more powerful, calls for government to play a more active role are likely to increase, and government funding and regulation could affect the benefits and risks of AI. Thus: "Overall, we feel that ensuring high-quality and well-informed advice to policymakers over the long run is one of the most promising ways to increase the benefits and reduce the risks from advanced AI, and that the team put together by CSET is uniquely well-positioned to provide such advice." Despite risks and uncertainty, the grant is described as worthwhile under Open Phil's hits-based giving framework.

Donor reason for donating that amount (rather than a bigger or smaller amount): The large amount over an extended period (5 years) is explained at https://www.openphilanthropy.org/blog/questions-we-ask-ourselves-making-grant "In the case of the new Center for Security and Emerging Technology, we think it will take some time to develop expertise on key questions relevant to policymakers and want to give CSET the commitment necessary to recruit key people, so we provided a five-year grant."

Donor reason for donating at this time (rather than earlier or later): Likely determined by the grantee's planned launch date; further timing details are not discussed
Intended funding timeframe in months: 60

Other notes: Donee is entered as Center for Security and Emerging Technology rather than as Georgetown University for consistency with future grants directly to the organization once it is set up. Founding members of CSET include Dewey Murdick from the Chan Zuckerberg Initiative, William Hannas from the CIA, and Helen Toner from the Open Philanthropy Project. The grant is discussed in the broader context of giving by the Open Philanthropy Project into global catastrophic risks and AI safety in the Inside Philanthropy article https://www.insidephilanthropy.com/home/2019/3/22/why-this-effective-altruist-funder-is-giving-millions-to-ai-security. Announced: 2019-02-28.
Open PhilanthropyThe Wilson Center400,000.002018-07AI safetyhttps://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series Intended use of funds (category): Direct project expenses

Intended use of funds: Grant "to support a series of in-depth AI policy seminars."

Donor reason for selecting the donee: The grant page says: "We believe the seminar series can help inform AI policy discussions and decision-making in Washington, D.C., and could help identify and empower influential experts in those discussions, a key component of our AI policy grantmaking strategy."

Donor retrospective of the donation: The followup grants https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-february-2020 and https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/wilson-center-ai-policy-seminar-series-june-2020 suggest that the donor was satisfied with the outcome of the grant.

Other notes: Intended funding timeframe in months: 24; announced: 2018-08-01.
Open PhilanthropyUrban Institute165,833.002017-12History of philanthropyhttps://www.openphilanthropy.org/research/history-of-philanthropy/urban-institute-history-of-philanthropy-project Grant to support a series of literature reviews and case studies on the history of philanthropy. The work will be led primarily by Benjamin Soskis, a research associate at the Urban Institute, who has previously produced case studies for our history of philanthropy project. Announced: 2018-02-22.
Open PhilanthropyUniversity of Pennsylvania1,550,000.002017-07Forecastinghttps://www.openphilanthropy.org/giving/grants/university-pennsylvania-philip-tetlock-making-conversations-smarter-faster Grant to support development and pre-testing of the “Making Conversations Smarter, Faster” project. The work is led by Professors Philip Tetlock and Barbara Mellers of University of Pennsylvania, and Professor Emeritus Daniel Kahneman of Princeton University. The original grant amount was $1.3 million, and $250,000 was added in February 2019. Announced: 2017-08-25.
Brian TomasikMachine Intelligence Research Institute2,000.002015AI safety-- Donation made as a thank-you for a helpful conversation with outgoing director Luke Muehlhauser; information conveyed via private communication and published with permission.

Donation amounts by donee and year

Donee Donors influenced Cause area Metadata Total 2021 2020 2019 2018 2017 2015
Center for Security and Emerging Technology Open Philanthropy (filter this donor) 101,920,000.00 46,920,000.00 0.00 55,000,000.00 0.00 0.00 0.00
University of Pennsylvania Open Philanthropy (filter this donor) FB Tw WP Site 1,550,000.00 0.00 0.00 0.00 0.00 1,550,000.00 0.00
The Wilson Center Open Philanthropy (filter this donor) FB Tw WP Site 1,264,980.00 0.00 864,980.00 0.00 400,000.00 0.00 0.00
WestExec Open Philanthropy (filter this donor) 540,000.00 0.00 540,000.00 0.00 0.00 0.00 0.00
Rethink Priorities Open Philanthropy (filter this donor) Cause prioritization Site 315,500.00 315,500.00 0.00 0.00 0.00 0.00 0.00
Massachusetts Institute of Technology Open Philanthropy (filter this donor) FB Tw WP Site 275,344.00 0.00 275,344.00 0.00 0.00 0.00 0.00
Urban Institute Open Philanthropy (filter this donor) FB Tw WP Site 165,833.00 0.00 0.00 0.00 0.00 165,833.00 0.00
Center for a New American Security Open Philanthropy (filter this donor) 141,094.00 0.00 141,094.00 0.00 0.00 0.00 0.00
Center for Strategic and International Studies Open Philanthropy (filter this donor) 118,307.00 0.00 118,307.00 0.00 0.00 0.00 0.00
Center for International Security and Cooperation Open Philanthropy (filter this donor) WP 67,000.00 0.00 67,000.00 0.00 0.00 0.00 0.00
Johns Hopkins University Open Philanthropy (filter this donor) FB Tw WP Site 55,000.00 0.00 55,000.00 0.00 0.00 0.00 0.00
Good Judgment Inc. Open Philanthropy (filter this donor) 40,000.00 0.00 40,000.00 0.00 0.00 0.00 0.00
RAND Corporation Open Philanthropy (filter this donor) FB Tw WP Site 30,751.00 0.00 30,751.00 0.00 0.00 0.00 0.00
Rice, Hadley, Gates & Manuel LLC Open Philanthropy (filter this donor) 25,000.00 0.00 25,000.00 0.00 0.00 0.00 0.00
Andrew Lohn Open Philanthropy (filter this donor) 15,000.00 0.00 15,000.00 0.00 0.00 0.00 0.00
Machine Intelligence Research Institute Brian Tomasik (filter this donor) AI safety FB Tw WP Site CN GS TW 2,000.00 0.00 0.00 0.00 0.00 0.00 2,000.00
Total ---- -- 106,525,809.00 47,235,500.00 2,172,476.00 55,000,000.00 400,000.00 1,715,833.00 2,000.00

Graph of spending by donee and year (incremental, not cumulative)

Graph of spending should have loaded here

Graph of spending by donee and year (cumulative)

Graph of spending should have loaded here

Donation amounts by donor and year for influencer Luke Muehlhauser

Donor Donees Total 2021 2020 2019 2018 2017 2015
Open Philanthropy (filter this donee) Andrew Lohn (filter this donee), Center for a New American Security (filter this donee), Center for International Security and Cooperation (filter this donee), Center for Security and Emerging Technology (filter this donee), Center for Strategic and International Studies (filter this donee), Good Judgment Inc. (filter this donee), Johns Hopkins University (filter this donee), Massachusetts Institute of Technology (filter this donee), RAND Corporation (filter this donee), Rethink Priorities (filter this donee), Rice, Hadley, Gates & Manuel LLC (filter this donee), The Wilson Center (filter this donee), University of Pennsylvania (filter this donee), Urban Institute (filter this donee), WestExec (filter this donee) 106,523,809.00 47,235,500.00 2,172,476.00 55,000,000.00 400,000.00 1,715,833.00 0.00
Brian Tomasik (filter this donee) Machine Intelligence Research Institute (filter this donee) 2,000.00 0.00 0.00 0.00 0.00 0.00 2,000.00
Total -- 106,525,809.00 47,235,500.00 2,172,476.00 55,000,000.00 400,000.00 1,715,833.00 2,000.00

Graph of spending by donor and year (incremental, not cumulative)

Graph of spending should have loaded here

Graph of spending by donor and year (cumulative)

Graph of spending should have loaded here