Raymond Arnold donations made (filtered to cause areas matching AI safety)

This is an online portal with information on donations of interest to Vipul Naik that were announced publicly or shared with permission. The Git repository with the code for this portal, as well as all the underlying data, is available on GitHub. All payment amounts are in current United States dollars (USD).

The repository of donations is being seeded with an initial collation by Issa Rice, along with continued contributions from him (see his commits and the contract work page listing all financially compensated contributions to the site), but all responsibility for errors and inaccuracies belongs to Vipul Naik. Current data is preliminary and has not been completely vetted and normalized; if sharing a link to this site or any page on it, please include the caveat that the data is preliminary (to share without caveats, please check with Vipul Naik). We expect to complete the first round of development by the end of December 2019.

See the about page for more details. Also of interest: pageview data on analytics.vipulnaik.com, the tutorial in the README, and the request for feedback on the EA Forum.

Table of contents

Basic donor information

We do not have any donor information for Raymond Arnold in our system.

Donor donation statistics

Cause area | Count | Median | Mean | Minimum | 10th percentile | 20th percentile | 30th percentile | 40th percentile | 50th percentile | 60th percentile | 70th percentile | 80th percentile | 90th percentile | Maximum
Overall | 4 | 250 | 1,480 | 250 | 250 | 250 | 250 | 250 | 250 | 2,000 | 2,000 | 3,420 | 3,420 | 3,420
AI safety | 4 | 250 | 1,480 | 250 | 250 | 250 | 250 | 250 | 250 | 2,000 | 2,000 | 3,420 | 3,420 | 3,420
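The percentile figures in the table are consistent with the nearest-rank method (take the sorted value at 1-indexed position ⌈p/100 · n⌉). The portal does not state its computation method on this page, so the following is a sketch under that assumption, using the four donation amounts from the full donation list below:

```python
import math

# The four donation amounts for this donor, from the full donation list.
donations = [250.00, 250.00, 3420.00, 2000.00]

def nearest_rank_percentile(values, p):
    """p-th percentile by the nearest-rank method: the value at sorted
    position ceil(p/100 * n), 1-indexed."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

mean = sum(donations) / len(donations)
print(mean)  # 1480.0, matching the Mean column

for p in (10, 50, 60, 80):
    print(p, nearest_rank_percentile(donations, p))
# 10 -> 250.0, 50 -> 250.0 (the Median), 60 -> 2000.0, 80 -> 3420.0
```

Note that a linear-interpolation convention would give a different median (1,125 rather than 250), so the nearest-rank reading is the one that reproduces the table.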

Donation amounts by cause area and year

If you hover over a cell for a given cause area and year, you will get a tooltip with the number of donees and the number of donations.

Note: The cause area classification used here may not match the donor's own classification in all cases.

Cause area | Number of donations | Number of donees | Total | 2018 | 2017 | 2016
AI safety | 4 | 1 | 5,920.00 | 500.00 | 3,420.00 | 2,000.00
Total | 4 | 1 | 5,920.00 | 500.00 | 3,420.00 | 2,000.00
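The incremental and cumulative graphs differ only in whether each year shows that year's spending or the running total. A small sketch using the yearly totals from the table above:

```python
# Yearly totals for this donor, oldest year first (from the table above).
years = [2016, 2017, 2018]
incremental = [2000.00, 3420.00, 500.00]

# Cumulative series: running total through each year.
cumulative = []
running = 0.0
for amount in incremental:
    running += amount
    cumulative.append(running)

print(cumulative)  # [2000.0, 5420.0, 5920.0]; the final value matches the Total column
```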

Graph of spending by cause area and year (incremental, not cumulative)


Graph of spending by cause area and year (cumulative)


Donation amounts by subcause area and year

If you hover over a cell for a given subcause area and year, you will get a tooltip with the number of donees and the number of donations.

For the meaning of “classified” and “unclassified”, see the page clarifying this.

Subcause area | Number of donations | Number of donees | Total | 2018 | 2017 | 2016
AI safety | 4 | 1 | 5,920.00 | 500.00 | 3,420.00 | 2,000.00
Classified total | 4 | 1 | 5,920.00 | 500.00 | 3,420.00 | 2,000.00
Unclassified total | 0 | 0 | 0.00 | 0.00 | 0.00 | 0.00
Total | 4 | 1 | 5,920.00 | 500.00 | 3,420.00 | 2,000.00

Graph of spending by subcause area and year (incremental, not cumulative)


Graph of spending by subcause area and year (cumulative)


Donation amounts by donee and year

Donee | Cause area | Metadata | Total | 2018 | 2017 | 2016
Machine Intelligence Research Institute | AI safety | FB Tw WP Site CN GS TW | 5,920.00 | 500.00 | 3,420.00 | 2,000.00
Total | -- | -- | 5,920.00 | 500.00 | 3,420.00 | 2,000.00

Graph of spending by donee and year (incremental, not cumulative)


Graph of spending by donee and year (cumulative)


Donation amounts by influencer and year

Sorry, we couldn't find any influencer information.

Donation amounts by disclosures and year

Sorry, we couldn't find any disclosures information.

Donation amounts by country and year

Sorry, we couldn't find any country information.

Full list of documents in reverse chronological order (1 document)

Title (URL linked) | Publication date | Author | Publisher | Affected donors | Affected donees | Document scope | Cause area | Notes
Announcing AlignmentForum.org Beta | 2018-07-10 | Raymond Arnold | LessWrong | -- | -- | LessWrong 2.0 Launch | AI safety | Post describes the beta launch of alignmentforum.org, a forum devoted to questions of AI alignment, built on top of the same codebase and infrastructure as the new LessWrong. It is a natural successor to agentfoundations.org.

Full list of donations in reverse chronological order (4 donations)

Donee | Amount (current USD) | Amount rank (out of 4) | Donation date | Cause area | URL | Influencer | Notes
Machine Intelligence Research Institute | 250.00 | 3 | 2018 | AI safety | https://web.archive.org/web/20180407192941/https://intelligence.org/topcontributors/ | -- | --
Machine Intelligence Research Institute | 250.00 | 3 | 2018 | AI safety | https://web.archive.org/web/20180117010054/https://intelligence.org/topcontributors/ | -- | --
Machine Intelligence Research Institute | 3,420.00 | 1 | 2017 | AI safety | https://web.archive.org/web/20171223071315/https://intelligence.org/topcontributors/ | -- | 2000 subtracted because it is recorded in EA Survey data.
Machine Intelligence Research Institute | 2,000.00 | 2 | 2016 | AI safety | https://github.com/peterhurford/ea-data/ | -- | --

Similarity to other donors

Sorry, we couldn't find any similar donors.