This is an online portal with information on donations of interest to Vipul Naik that were announced publicly (or have been shared with permission). The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of March 2022. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
|Best overview URL||https://www.patbl.com/plan/|
|Effective Altruism Forum username||Patrick|
|Effective Altruism Hub username||patrick-brinich-langlois|
|Regularity with which donor updates donations data||continuous updates|
|Regularity with which Donations List Website updates donations data (after donor update)||continuous updates|
|Lag with which donor updates donations data||days|
|Lag with which Donations List Website updates donations data (after donor update)||months|
|Data entry method on Donations List Website||Manual (no scripts used)|
Brief history: Patrick Brinich-Langlois is a web programmer based in the San Francisco Bay Area who has adopted the "earning to give" philosophy. He has been donating regularly since 2012 and a fairly large amount per year since 2014. His plan page says: "I will attempt to maximize the expected balance of happiness over suffering. I'll do this by becoming a software developer and donating much of my earnings to effective charities. [...] I'll focus my efforts on improving skills related to software development, especially web development. This will come at the expense of activities related to effective altruism, such as contributing to online discussions and working on volunteer projects, unless those would also improve my software-development skills."
Brief notes on broad donor philosophy and major focus areas: Patrick Brinich-Langlois has focused on a mix of animal welfare, long-term future charities, and growing the effective altruism movement. His interests in the early years were a mix of animal welfare and effective altruism meta; starting in the latter half of 2017, his interests shifted from animal welfare to AI safety, other global catastrophic risks, and the long-term future. His interest in effective altruism (meta) has been sustained throughout this period.
Miscellaneous notes: Patrick Brinich-Langlois has articulated his giving philosophy as mostly utilitarian/consequentialist. In a Facebook comment (https://www.facebook.com/pbrinichlanglois/posts/10152496560422961?comment_id=10152668336987961&comment_tracking=%7B%22tn%22%3A%22R%22%7D) he says: "I don't have especially well-thought-out philosophical positions. Good is happiness less suffering, but I don't want to have to define those as well. Seems like a rabbit hole. Best to leave that to the experts." He is also the creator of Skillshare.im (which earned him a LinkedIn recommendation from Ozzie Gooen) and has been vegetarian since 2001 (https://www.facebook.com/632817960/posts/10152108368287961/).
Full donor page for donor Patrick Brinich-Langlois
|Timelines wiki page||https://timelines.issarice.com/wiki/Timeline_of_Berkeley_Existential_Risk_Initiative|
|Org Watch page||https://orgwatch.issarice.com/?organization=Berkeley+Existential+Risk+Initiative|
|Key people||Andrew Critch|Gina Stuessy|Michael Keenan|
|Notes||Launched to provide fast-moving support to existing existential risk organizations. Works closely with the Machine Intelligence Research Institute, Center for Human-Compatible AI, Centre for the Study of Existential Risk, and Future of Humanity Institute. People working at it are closely involved with MIRI and the Center for Applied Rationality.|
This entity is also a donor.
Full donee page for donee Berkeley Existential Risk Initiative
|Cause area||Count||Median||Mean||Minimum||10th percentile||20th percentile||30th percentile||40th percentile||50th percentile||60th percentile||70th percentile||80th percentile||90th percentile||Maximum|
If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.
Note: Cause area classification used here may not match that used by donor for all cases.
|Cause area||Number of donations||Total||2019|
|AI safety||1||7,497.00||7,497.00|
Skipping spending graph as there is less than one year's worth of donations.
Graph of all donations, showing the timeframe of donations
|Amount (current USD)||Amount rank (out of 1)||Donation date||Cause area||URL||Influencer||Notes|