Global Catastrophic Risk Institute donations received

This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that were of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, as well as continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on this site, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donee information
Donee donation statistics
Donation amounts by donor and year
Full list of documents in reverse chronological order
Full list of donations in reverse chronological order

Basic donee information

Item | Value
Country | United States
Facebook page | gcrinstitute
Website | https://gcrinstitute.org/
Donate page | http://gcrinstitute.org/donate/
Twitter username | GCRInstitute
Org Watch page | https://orgwatch.issarice.com/?organization=Global+Catastrophic+Risk+Institute
Key people | Seth Baum
Launch date | 2011

Donee donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 12 | 30,000 | 52,641 | 5 | 16,043 | 20,000 | 23,647 | 25,000 | 30,000 | 50,000 | 60,000 | 60,000 | 90,000 | 209,000
Global catastrophic risks | 10 | 48,000 | 58,670 | 5 | 5 | 16,043 | 23,647 | 30,000 | 48,000 | 50,000 | 60,000 | 60,000 | 90,000 | 209,000
-- | 1 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000 | 20,000
Existential risks | 1 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000 | 25,000
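
The percentile columns above appear to follow an empirical inverse-CDF definition: the p-th percentile is the value at rank ceil(p * n), 1-indexed, in the sorted list of donation amounts, with no interpolation. A minimal Python sketch, using the 12 donation amounts from the full list of donations below, that reproduces the "Overall" row under that assumption:

import math

# The 12 donation amounts (current USD) from the full list of donations below.
amounts = [209000, 90000, 60000, 60000, 50000, 48000,
           30000, 25000, 23647.47, 20000, 16043, 5]

def percentile(values, p):
    # Empirical inverse-CDF percentile: the value at rank ceil(p * n),
    # 1-indexed, in the sorted list (no interpolation).
    s = sorted(values)
    rank = max(1, math.ceil(p * len(s)))
    return s[rank - 1]

print(len(amounts))                        # 12     (count)
print(round(sum(amounts) / len(amounts)))  # 52641  (mean)
print(percentile(amounts, 0.50))           # 30000  (median)
print(percentile(amounts, 0.60))           # 50000  (60th percentile)
print(percentile(amounts, 0.90))           # 90000  (90th percentile)

Note that a linearly interpolated median over these 12 amounts would be 39,000 rather than the 30,000 shown, which is why the inverse-CDF definition is assumed here.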

Donation amounts by donor and year for donee Global Catastrophic Risk Institute

Donor | Total | 2021 | 2020 | 2019 | 2018 | 2017 | 2016
Jaan Tallinn | 347,000.00 | 257,000.00 | 90,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Survival and Flourishing Fund | 90,000.00 | 0.00 | 0.00 | 90,000.00 | 0.00 | 0.00 | 0.00
Gordon Irlam | 76,043.00 | 0.00 | 0.00 | 0.00 | 60,000.00 | 0.00 | 16,043.00
Jed McCaleb | 50,000.00 | 0.00 | 50,000.00 | 0.00 | 0.00 | 0.00 | 0.00
Berkeley Existential Risk Initiative | 25,000.00 | 0.00 | 0.00 | 0.00 | 25,000.00 | 0.00 | 0.00
Effective Altruism Grants | 23,647.47 | 0.00 | 0.00 | 0.00 | 0.00 | 23,647.47 | 0.00
Donor lottery | 20,000.00 | 0.00 | 0.00 | 0.00 | 20,000.00 | 0.00 | 0.00
Patrick Brinich-Langlois | 5.00 | 0.00 | 0.00 | 0.00 | 5.00 | 0.00 | 0.00
Total | 631,695.47 | 257,000.00 | 140,000.00 | 90,000.00 | 105,005.00 | 23,647.47 | 16,043.00
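
This table is a straightforward pivot (donor-by-year sums) of the donation records in the full list below. A minimal Python sketch, with the (donor, year, amount) triples transcribed from this page:

from collections import defaultdict

# (donor, year, amount) triples from the full list of donations below;
# donations dated only with a year (e.g. Gordon Irlam's) use that year.
donations = [
    ("Jaan Tallinn", 2021, 48000), ("Jaan Tallinn", 2021, 209000),
    ("Jaan Tallinn", 2020, 90000), ("Jed McCaleb", 2020, 50000),
    ("Survival and Flourishing Fund", 2019, 30000),
    ("Survival and Flourishing Fund", 2019, 60000),
    ("Patrick Brinich-Langlois", 2018, 5),
    ("Donor lottery", 2018, 20000),
    ("Berkeley Existential Risk Initiative", 2018, 25000),
    ("Gordon Irlam", 2018, 60000),
    ("Effective Altruism Grants", 2017, 23647.47),
    ("Gordon Irlam", 2016, 16043),
]

# Pivot: donor -> year -> total, as in the table above.
pivot = defaultdict(lambda: defaultdict(float))
for donor, year, amount in donations:
    pivot[donor][year] += amount

# Print rows sorted by descending donor total, matching the table's order.
for donor, by_year in sorted(pivot.items(), key=lambda kv: -sum(kv[1].values())):
    total = sum(by_year.values())
    cells = ", ".join(f"{y}: {a:,.2f}" for y, a in sorted(by_year.items(), reverse=True))
    print(f"{donor}: total {total:,.2f} ({cells})")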

Full list of documents in reverse chronological order (11 documents)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes
2021 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2021-12-23 | Larks | Effective Altruism Forum | Larks; Effective Altruism Funds: Long-Term Future Fund; Survival and Flourishing Fund; FTX Future Fund | Future of Humanity Institute; Centre for the Governance of AI; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Google Deepmind; Anthropic; Alignment Research Center; Redwood Research; Ought; AI Impacts; Global Priorities Institute; Center on Long-Term Risk; Centre for Long-Term Resilience; Rethink Priorities; Convergence Analysis; Stanford Existential Risk Initiative; Effective Altruism Funds: Long-Term Future Fund; Berkeley Existential Risk Initiative; 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the sixth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the post is structured similarly to the previous year's post https://forum.effectivealtruism.org/posts/K7Z87me338BQT3Mcv/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR) but has a few new features. The author mentions that he has several conflicts of interest that he cannot individually disclose. He also starts collecting "second preferences" data this year from all the organizations he talks to: where each organization would like to see funds go, other than to itself. The Long-Term Future Fund is the clear winner here. He also announces that he is looking for a research assistant to help with next year's post, given the increasing time demands and his reduced time availability. His final rot13'ed donation decision is to donate to the Long-Term Future Fund so that sufficiently skilled AI safety researchers can make a career with LTFF funding; his second preference for donations is BERI. Many other organizations that he considers likely to be doing excellent work are either already well funded or do not provide sufficient disclosure.
2020 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2020-12-21 | Larks | Effective Altruism Forum | Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund | Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; OpenAI; Berkeley Existential Risk Initiative; Ought; Global Priorities Institute; Center on Long-Term Risk; Center for Security and Emerging Technology; AI Impacts; Leverhulme Centre for the Future of Intelligence; AI Safety Camp; Future of Life Institute; Convergence Analysis; Median Group; AI Pulse; 80,000 Hours | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/pTYDdcag9pTzFQ7vw/2020-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fifth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post is structured very similarly to the previous year's post. It has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document. See https://www.lesswrong.com/posts/uEo4Xhp7ziTKhR6jq/reflections-on-larks-2020-ai-alignment-literature-review (GW, IR) for discussion of some aspects of the post by Alex Flint.
2019 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2019-12-19 | Larks | Effective Altruism Forum | Larks; Effective Altruism Funds: Long-Term Future Fund; Open Philanthropy; Survival and Flourishing Fund | Future of Humanity Institute; Center for Human-Compatible AI; Machine Intelligence Research Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; Ought; OpenAI; AI Safety Camp; Future of Life Institute; AI Impacts; Global Priorities Institute; Foundational Research Institute; Median Group; Center for Security and Emerging Technology; Leverhulme Centre for the Future of Intelligence; Berkeley Existential Risk Initiative; AI Pulse | Survival and Flourishing Fund | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the fourth post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous year's post is at https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). The post has sections on "Research" and "Finance" for a number of organizations working in the AI safety space, many of whom accept donations. A "Capital Allocators" section discusses major players who allocate funds in the space. A lengthy "Methodological Thoughts" section explains how the author approaches some underlying questions that influence his thoughts on all the organizations. To make selective reading of the document easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.
EA orgs are trying to fundraise ~$10m - $16m (GW, IR) | 2019-01-06 | Hauke Hillebrandt | Effective Altruism Forum | -- | Centre for Effective Altruism; Effective Altruism Foundation; Machine Intelligence Research Institute; Forethought Foundation for Global Priorities Research; Sentience Institute; Alliance to Feed the Earth in Disasters; Global Catastrophic Risk Institute; Rethink Priorities; EA Hotel; 80,000 Hours; Rethink Charity | -- | Miscellaneous commentary | -- | The blog post links to and discusses the spreadsheet https://docs.google.com/spreadsheets/d/10zU6gp_H_zuvlZ2Vri-epSK0_urbcmdS-5th3mXQGXM/edit which tabulates various organizations and their fundraising targets, along with quotes and links to fundraising posts. The blog post itself makes three points, the last of which is that the EA community is relatively more funding-constrained again.
EA Giving Tuesday Donation Matching Initiative 2018 Retrospective (GW, IR) | 2019-01-06 | Avi Norowitz | Effective Altruism Forum | Avi Norowitz; William Kiely | Against Malaria Foundation; Malaria Consortium; GiveWell; Effective Altruism Funds; Alliance to Feed the Earth in Disasters; Effective Animal Advocacy Fund; The Humane League; The Good Food Institute; Animal Charity Evaluators; Machine Intelligence Research Institute; Faunalytics; Wild-Animal Suffering Research; GiveDirectly; Center for Applied Rationality; Effective Altruism Foundation; Cool Earth; Schistosomiasis Control Initiative; New Harvest; Evidence Action; Centre for Effective Altruism; Animal Equality; Compassion in World Farming USA; Innovations for Poverty Action; Global Catastrophic Risk Institute; Future of Life Institute; Animal Charity Evaluators Recommended Charity Fund; Sightsavers; The Life You Can Save; One Step for Animals; Helen Keller International; 80,000 Hours; Berkeley Existential Risk Initiative; Vegan Outreach; Encompass; Iodine Global Network; Otwarte Klatki; Charity Science; Mercy For Animals; Coalition for Rainforest Nations; Fistula Foundation; Sentience Institute; Better Eating International; Forethought Foundation for Global Priorities Research; Raising for Effective Giving; Clean Air Task Force; The END Fund | -- | Miscellaneous commentary | -- | The blog post describes an effort by a number of donors, coordinated at https://2018.eagivingtuesday.org/donations, to donate through Facebook right after the start of donation matching on Giving Tuesday. Based on timestamps of donations and matches, donations were matched until 14 seconds after the start of matching. Despite the very short time window of matching, the post estimates that $469,000 (65%) of the donations made were matched.
Where ACE Staff Are Giving In 2018 and Why | 2018-12-21 | Erika Alonso | Animal Charity Evaluators | Sofia Davis-Fogel; Toni Adleberg; Erika Alonso; Gina Stuessy; Kathryn Asher; Jamie Spurgeon; Trent Grassian; Melissa Guzikowski | Albert Schweitzer Foundation for Our Contemporaries; Otwarte Klatki; Animal Equality; Encompass; Sinergia Animal; Mercy For Animals; Compassion in World Farming USA; The Humane League; L214; International Rescue Committee; New York Public Library; Give Power; Animal Charity Evaluators Recommended Charity Fund; The Good Food Institute; Effective Animal Advocacy Fund; StrongMinds; Global Catastrophic Risk Institute; New Harvest; We Animals; Against Malaria Foundation; GiveWell top charities; Multidisciplinary Association for Psychedelic Studies; Beckley Foundation; Christopher Sebastian; Animal Aid; GiveDirectly | -- | Periodic donation list documentation | Animal welfare|Global catastrophic risks|Global health and development | Continuing an annual tradition started in 2016, Animal Charity Evaluators (ACE) staff describe where they donated or plan to donate in 2018. Unlike in 2017, there is no mention of the Effective Altruism Funds, with most funds-style donations going to the ACE-run Recommended Charity Fund and Effective Animal Advocacy Fund. Donation amounts are not disclosed, likely by policy.
2018 AI Alignment Literature Review and Charity Comparison (GW, IR) | 2018-12-17 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute; Future of Humanity Institute; Center for Human-Compatible AI; Centre for the Study of Existential Risk; Global Catastrophic Risk Institute; Global Priorities Institute; Australian National University; Berkeley Existential Risk Initiative; Ought; AI Impacts; OpenAI; Effective Altruism Foundation; Foundational Research Institute; Median Group; Convergence Analysis | -- | Review of current state of cause area | AI safety | Cross-posted to LessWrong at https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison (GW, IR). This is the third post in a tradition of annual blog posts on the state of AI safety and the work of various organizations in the space over the course of the year; the previous two blog posts are at https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR) and https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison (GW, IR). The post has a "methodological considerations" section that discusses how the author views track records, politics, openness, the research flywheel, near vs. far safety research, other existential risks, financial reserves, donation matching, poor quality research, and the Bay Area. The number of organizations reviewed is also larger than in previous years. Excerpts from the conclusion: "Despite having donated to MIRI consistently for many years as a result of their highly non-replaceable and groundbreaking work in the field, I cannot in good faith do so this year given their lack of disclosure. [...] This is the first year I have attempted to review CHAI in detail and I have been impressed with the quality and volume of their work. I also think they have more room for funding than FHI. As such I will be donating some money to CHAI this year. [...] As such I will be donating some money to GCRI again this year. [...] As such I do not plan to donate to AI Impacts this year, but if they are able to scale effectively I might well do so in 2019. [...] I also plan to start making donations to individual researchers, on a retrospective basis, for doing useful work. [...] This would be somewhat similar to Impact Certificates, while hopefully avoiding some of their issues."
Summary of GCRI's 2018-2019 Accomplishments, Plans, and Fundraising | 2018-12-12 | Seth Baum | Global Catastrophic Risk Institute | -- | Global Catastrophic Risk Institute | -- | Donee periodic update | Global catastrophic risks | This is the summary post for a series of blog posts detailing the 2018 accomplishments, the 2019 plans in terms of topics covered and organization development, and the 2018 and 2019 finances. Individual posts dealing with these topics are linked. A teaser post linking to this summary post was posted to the Effective Altruism Forum at https://forum.effectivealtruism.org/posts/J3T6mPN9JmY7htF5f/global-catastrophic-risk-institute-2018-2019-updates (GW, IR) on 2018-12-17.
2017 Donor Lottery Report (GW, IR) | 2018-11-12 | Adam Gleave | Effective Altruism Forum | Donor lottery | Alliance to Feed the Earth in Disasters; Global Catastrophic Risk Institute; AI Impacts; Wild-Animal Suffering Research | -- | Single donation documentation | Global catastrophic risks|AI safety|Animal welfare | The write-up documents Adam Gleave's decision process for where he donated the money for the 2017 donor lottery. Adam won one of the two blocks of $100,000 for 2017.
2017 AI Safety Literature Review and Charity Comparison (GW, IR) | 2017-12-20 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute; Future of Humanity Institute; Global Catastrophic Risk Institute; Centre for the Study of Existential Risk; AI Impacts; Center for Human-Compatible AI; Center for Applied Rationality; Future of Life Institute; 80,000 Hours | -- | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It is an annual refresh of https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison (GW, IR), a similar post published a year before it. The conclusion: "Significant donations to the Machine Intelligence Research Institute and the Global Catastrophic Risks Institute. A much smaller one to AI Impacts."
2016 AI Risk Literature Review and Charity Comparison (GW, IR) | 2016-12-13 | Larks | Effective Altruism Forum | Larks | Machine Intelligence Research Institute; Future of Humanity Institute; OpenAI; Center for Human-Compatible AI; Future of Life Institute; Centre for the Study of Existential Risk; Leverhulme Centre for the Future of Intelligence; Global Catastrophic Risk Institute; Global Priorities Project; AI Impacts; Xrisks Institute; X-Risks Net; Center for Applied Rationality; 80,000 Hours; Raising for Effective Giving | -- | Review of current state of cause area | AI safety | The lengthy blog post covers all the published work of prominent organizations focused on AI risk. It references https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support#sources1007 for the MIRI part but notes the absence of such information on the many other organizations. The conclusion: "Donate to both the Machine Intelligence Research Institute and the Future of Humanity Institute, but somewhat biased towards the former. I will also make a smaller donation to the Global Catastrophic Risks Institute."

Full list of donations in reverse chronological order (12 donations)

[Graph: top 10 donors (for donations with known year of donation) by amount, showing the timeframe of donations.]
Donor | Amount (current USD) | Amount rank (out of 12) | Donation date | Cause area | URL | Influencer | Notes
Jaan Tallinn | 48,000.00 | 6 | 2021-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2021-h1-recommendations | Survival and Flourishing Fund; Ben Hoskin; Katja Grace; Oliver Habryka; Adam Marblestone | Donation process: Part of the Survival and Flourishing Fund's 2021 H1 grants based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts." (A toy sketch of this marginal-utility allocation idea appears at the end of this page.)

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fifth grant round; all previous grant rounds included grants to this grantee.

Other notes: Although Jed McCaleb also participates in this grant round as a funder, he does not make any grants to this grantee. Percentage of total donor spend in the corresponding batch of donations: 0.50%.
Jaan Tallinn | 209,000.00 | 1 | 2021-01-12 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund; Oliver Habryka; Eric Rogstad | Donation process: Part of the Survival and Flourishing Fund's 2020 H2 grants https://survivalandflourishing.fund/sff-2020-h2-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Recommenders specified marginal utility functions for funding each application, and adjusted those functions through discussions with each other as the round progressed. Similarly, funders specified and adjusted different utility functions for deferring to each Recommender. In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this is SFF's fourth grant round; each of the grant rounds has included grants to the grantee.

Other notes: Although the Survival and Flourishing Fund and Jed McCaleb also participate as funders in this grant round, neither of them makes any grants to this grantee. The grant is made via Social and Environmental Entrepreneurs. Percentage of total donor spend in the corresponding batch of donations: 12.83%.
Jaan Tallinn | 90,000.00 | 2 | 2020-06-09 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Survival and Flourishing Fund; Alex Zhu; Andrew Critch; Jed McCaleb; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants https://survivalandflourishing.fund/sff-2020-h1-recommendations based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; the grantee had also received grants in the first two rounds but from SFF (not from Tallinn).

Other notes: The grant round also includes a grant from Jed McCaleb of $50,000. The Survival and Flourishing Fund also participates as a funder in this round but does not make a grant to the grantee. The grant is made via Social and Environmental Entrepreneurs. Percentage of total donor spend in the corresponding batch of donations: 9.78%.
Jed McCaleb | 50,000.00 | 5 | 2020-04 | Global catastrophic risks | https://survivalandflourishing.fund/sff-2020-h1-recommendations | Survival and Flourishing Fund; Alex Zhu; Andrew Critch; Jed McCaleb; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2020 H1 grants based on the S-process (simulation process). A request for grants was made at https://forum.effectivealtruism.org/posts/wQk3nrGTJZHfsPHb6/survival-and-flourishing-grant-applications-open-until-march (GW, IR) and open till 2020-03-07. The S-process "involves allowing the recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn, Jed McCaleb, and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this 2020 H1 round of grants is SFF's third round; the grantee had also received grants in the first two rounds but from SFF (not from McCaleb, who did not participate in either round).

Other notes: The grant round also includes a grant from Jaan Tallinn of $90,000. The Survival and Flourishing Fund also participates as a funder in this round but does not make a grant to the grantee. The grant is made via Social and Environmental Entrepreneurs. Percentage of total donor spend in the corresponding batch of donations: 20.00%.
Survival and Flourishing Fund | 30,000.00 | 7 | 2019-12-05 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Alex Flint; Alex Zhu; Andrew Critch; Eric Rogstad; Oliver Habryka | Donation process: Part of the Survival and Flourishing Fund's 2019 Q4 grants https://survivalandflourishing.fund/sff-2019-q4-recommendations based on the S-process (simulation process) that "involves allowing the Recommenders and funders to simulate a large number of counterfactual delegation scenarios using a spreadsheet of marginal utility functions. Funders were free to assign different weights to different Recommenders in the process; the weights were determined by marginal utility functions specified by the funders (Jaan Tallinn and SFF). In this round, the process also allowed the funders to make some final adjustments to decide on their final intended grant amounts."

Intended use of funds (category): Organizational general support

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; this November 2019 round of grants is SFF's second round. A grant had also been made in the first grant round.

Other notes: Jaan Tallinn also participates as a funder in this grant round, but makes no grants to the grantee in this grant round. The grant is made via Social and Environmental Entrepreneurs. Percentage of total donor spend in the corresponding batch of donations: 2.76%; announced: 2019-12-15.
Survival and Flourishing Fund | 60,000.00 | 3 | 2019-09-05 | Global catastrophic risks | https://jaan.online/philanthropy/donations.html | Alex Flint; Andrew Critch; Eric Rogstad | Donation process: Part of the founding batch of grants for the Survival and Flourishing Fund made in August 2019. The fund is partly a successor to the part of the Berkeley Existential Risk Initiative (BERI) grants program that handled grantmaking by Jaan Tallinn; see http://existence.org/tallinn-grants-future/. As such, this grant to GCRI may represent a follow-up to past grants by BERI in support of GCRI.

Intended use of funds (category): Organizational general support

Donor reason for selecting the donee: This grant may represent a follow-up to past grants by BERI in support of GCRI.

Donor reason for donating at this time (rather than earlier or later): Timing determined by timing of grant round; the Survival and Flourishing Fund is making its first round of grants in August 2019.

Other notes: The Survival and Flourishing Fund website gives the name of the grantee as 'Global Catastrophic Risks Institute'. The grant is made via Social and Environmental Entrepreneurs. Percentage of total donor spend in the corresponding batch of donations: 6.82%; announced: 2019-08-29.
Patrick Brinich-Langlois | 5.00 | 12 | 2018-11-27 | Global catastrophic risks | https://www.patbl.com/misc/other/donations/ | -- | --
Donor lottery | 20,000.00 | 10 | 2018-11-12 | -- | https://forum.effectivealtruism.org/posts/SYeJnv9vYzq9oQMbQ/2017-donor-lottery-report (GW, IR) | Adam Gleave | The blog post explaining the donation contains an extensive discussion of the Global Catastrophic Risk Institute (GCRI). Highlight: "Overall I am moderately excited about supporting the work of GCRI and in particular Seth Baum. I am pessimistic about room for growth, with recruitment being a major challenge, similar to that faced by AI Impacts. [...] At their current budget level, additional funding is a factor for whether Seth continues to work at GCRI full-time. Accordingly I would recommend donations sufficient to ensure Seth can continue his work. I would encourage donors to consider funding GCRI to scale beyond this, but to first obtain more information regarding their long-term plans and recruitment strategy." Earlier in the post: "If I had an additional $100k to donate, I would first check AI Impacts current recruitment situation; if there are promising hires that are bottlenecked on funding, I would likely allocate it there. Otherwise, I would split it equally between ALLFED and GCRI." Percentage of total donor spend in the corresponding batch of donations: 20.00%.
Berkeley Existential Risk Initiative | 25,000.00 | 8 | 2018-01-24 | Existential risks | https://web.archive.org/web/20180731180958/http://existence.org:80/grants https://web.archive.org/web/20180921215949/http://existence.org/organization-grants/ | -- | For general support; grant via Social and Environmental Entrepreneurs.
Gordon Irlam | 60,000.00 | 3 | 2018 | Global catastrophic risks | https://www.gricf.org/2018-report.html | -- | --
Effective Altruism Grants | 23,647.47 | 9 | 2017-09-29 | Global catastrophic risks | https://docs.google.com/spreadsheets/d/1iBy--zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk | -- | Research modeling risks and writing policy/strategy posts. His work will be based at the Global Catastrophic Risk Institute. He will also do self-study based on recommendations from the likes of the Center for Strategic and International Studies and 80,000 Hours' syllabi. See http://effective-altruism.com/ea/1fc/effective_altruism_grants_project_update/ for more context about the grant program. Currency info: donation given as 17,650.00 GBP (conversion done on 2017-09-29 via Bloomberg).
Gordon Irlam | 16,043.00 | 11 | 2016 | Global catastrophic risks | https://www.gricf.org/2016-report.html | -- | c/o Social & Environmental Entrepreneurs.
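
The S-process descriptions quoted in the donation notes above center on marginal utility functions for funding each application. As a rough illustration of how marginal utility functions can drive an allocation, here is a toy Python sketch; all organization names, numbers, and the simple greedy rule are invented for illustration, and the real S-process is more involved (funders also weight recommenders via their own utility functions and make final adjustments):

# Toy illustration of allocating a budget using marginal utility functions,
# loosely inspired by the S-process description quoted above. All numbers,
# names, and the greedy rule here are invented for illustration only.

# Marginal utility per dollar of each successive $10k tranche to each org,
# as a recommender might specify; decreasing lists encode diminishing returns.
marginal_utility = {
    "Org A": [9, 7, 4, 2],   # steeply diminishing returns
    "Org B": [8, 6, 5, 3],
    "Org C": [5, 5, 4, 4],   # flatter: scales well with more funding
}
TRANCHE = 10_000
budget = 60_000

grants = {org: 0 for org in marginal_utility}
tranche_index = {org: 0 for org in marginal_utility}

while budget >= TRANCHE:
    # Fund the org whose next tranche has the highest marginal utility.
    candidates = {org: mus[tranche_index[org]]
                  for org, mus in marginal_utility.items()
                  if tranche_index[org] < len(mus)}
    if not candidates:
        break
    best = max(candidates, key=candidates.get)
    grants[best] += TRANCHE
    tranche_index[best] += 1
    budget -= TRANCHE

print(grants)  # {'Org A': 20000, 'Org B': 30000, 'Org C': 10000}

Under these invented numbers, the flat-returns Org C still receives funding once the higher-ranked organizations' marginal utility drops below its own, which is the intuition behind simulating allocations with marginal utility functions rather than simple rankings.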