This is an online portal with information on donations that were announced publicly (or have been shared with permission) and that are of interest to Vipul Naik. The git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD). The repository of donations was seeded with an initial collation by Issa Rice, who continues to contribute (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (if you want to share without including caveats, please check with Vipul Naik). We expect to have completed the first round of development by the end of July 2024. See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.
We do not have any donee information for the donee Ben Pace in our system.
Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Donor | Total (current USD) |
---|---|
Total | 0.00 |
Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Affected influencers | Document scope | Cause area | Notes |
---|---|---|---|---|---|---|---|---|---|
A model I use when making plans to reduce AI x-risk (GW, IR) | 2018-01-18 | Ben Pace | LessWrong | Berkeley Existential Risk Initiative | | | Review of current state of cause area | AI safety | The author describes his implicit model of AI risk, with four parts: (1) alignment is hard; (2) getting alignment right accounts for most of the variance in whether an AGI system will be positive for humanity; (3) our current epistemic state regarding AGI timelines will continue until we are close (less than 2 years) to having AGI; and (4) given timeline uncertainty, it is best to spend marginal effort on plans that assume or work in shorter timelines. There is a lot of discussion in the comments. |
Sorry, we couldn't find any donations!