Reddit

That Time My Chart Was on the Reddit Front Page and Everyone Hated It (The Process #56)

Something I made was on the front page of Reddit. Cool. The problem: thousands of people downvoted it. Here's what I learned.

Celebrity name spelling test

Colin Morris culled common misspellings on Reddit and made the data available on GitHub. For The Pudding, Russell Goldenberg and Matt Daniels took it a step further so that you too can see how bad you are at spelling celebrity names. Tags: Pudding, Reddit, spelling

Looking for common misspellings

Some words are harder to spell than others, and on the internet, people sometimes flag their uncertainty by following a word with “(sp?)”. Colin Morris collected all the words in Reddit threads marked this way. Download the data on GitHub. Tags: Reddit, spelling
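A minimal sketch of the kind of pattern match this implies, assuming comments are available as plain strings. The sample texts and function name are made up; this is not Morris's actual pipeline:

```python
import re
from collections import Counter

# Words immediately preceding an "(sp?)" marker, e.g. "minestrone (sp?)"
SP_PATTERN = re.compile(r"\b([A-Za-z']+)\s*\(sp\?\)", re.IGNORECASE)

def count_uncertain_spellings(comments):
    """Tally words that commenters flagged with '(sp?)'."""
    counts = Counter()
    for text in comments:
        for word in SP_PATTERN.findall(text):
            counts[word.lower()] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "I had some minestrone (sp?) soup for lunch.",
        "Is it Galifianakis (sp?) or Galifanakis?",
        "We saw Zach Galifianakis (sp?) live last year.",
    ]
    for word, n in count_uncertain_spellings(sample).most_common(5):
        print(word, n)
```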

Growth of Subreddits

As of September 2018, Reddit had logged 892 million comments for the year so far, spread across 355,939 subreddits. Here's how it got to this point.

A story of humanity in the pixels of a Reddit April Fool’s experiment

On April Fool’s Day, Reddit launched a blank canvas to which users could add a colored pixel every few minutes. It ran for 72 hours, and the evolution of the space as a whole was awesome. But what if you look more closely at the individual images, edits, and battles for territory? Even more interesting. sudoscript looks closer, breaking participants into three groups — the creators, protectors, and destroyers — who fight...

Time-lapse of community-edited pixels

For April Fool’s Day, Reddit ran a subreddit, r/place, that let users edit pixels in a 1,000-by-1,000 blank space for 72 hours. Users could edit only one pixel every ten minutes, which forced patience and community effort. This is the time-lapse of the effort. Kind of great. It’s fun to watch the edits of thousands converge. It’s a complete hodgepodge, but it all fit together in the relatively...
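As a rough sketch of how a time-lapse like this gets assembled, here is one way to replay a placement log into periodic canvas snapshots. The log format, frame interval, and starting color are assumptions; Reddit's actual data and the video's rendering pipeline may differ:

```python
import numpy as np

# Replay a hypothetical log of (timestamp_seconds, x, y, rgb) pixel placements.
CANVAS_SIZE = 1000  # r/place was a 1,000 x 1,000 grid

def replay(placements, frame_every=3600):
    """Yield (timestamp, canvas) snapshots every `frame_every` seconds."""
    canvas = np.full((CANVAS_SIZE, CANVAS_SIZE, 3), 255, dtype=np.uint8)  # start white
    next_frame = frame_every
    for t, x, y, rgb in sorted(placements):
        while t >= next_frame:
            yield next_frame, canvas.copy()
            next_frame += frame_every
        canvas[y, x] = rgb
    yield next_frame, canvas.copy()

if __name__ == "__main__":
    fake_log = [(10, 0, 0, (255, 0, 0)), (4000, 1, 0, (0, 0, 255))]
    for t, frame in replay(fake_log):
        print(t, frame[0, 0], frame[0, 1])
```

Stitching the snapshots into video frames is then just an image-writing step with whatever library you prefer.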

Subreddit math with r/The_Donald helps show topic breakdowns

Trevor Martin for FiveThirtyEight used latent semantic analysis to do math with subreddits, specifically r/The_Donald: “We’ve adapted a technique that’s used in machine learning research — called latent semantic analysis — to characterize 50,323 active subreddits based on 1.4 billion comments posted from Jan. 1, 2015, to Dec. 31, 2016, in a way that allows us to quantify how similar in essence one subreddit is to another. At its heart,...”
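For a sense of the mechanics, here is a toy version of the idea: build a subreddit-by-commenter count matrix, reduce it with truncated SVD (the linear-algebra core of latent semantic analysis), and then compare and combine the resulting vectors. The matrix values below are invented, and FiveThirtyEight's preprocessing and co-occurrence counting were more involved than this sketch:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy subreddit-by-commenter count matrix (rows: subreddits, columns: users).
# In the real analysis this would be built from ~1.4 billion comments.
subreddits = ["r/politics", "r/The_Donald", "r/nfl", "r/nba"]
counts = np.array([
    [5, 0, 3, 2, 0, 1],
    [0, 4, 2, 0, 3, 0],
    [1, 1, 0, 5, 4, 2],
    [2, 0, 1, 4, 5, 3],
], dtype=float)

# Reduce each subreddit to a low-dimensional vector, as in latent semantic analysis.
svd = TruncatedSVD(n_components=2, random_state=0)
vectors = svd.fit_transform(counts)

def similarity(a, b):
    """Cosine similarity between two subreddits in the reduced space."""
    ia, ib = subreddits.index(a), subreddits.index(b)
    return cosine_similarity(vectors[[ia]], vectors[[ib]])[0, 0]

# "Subreddit math": subtract one subreddit's vector from another and see
# which existing subreddits the result lands closest to.
combo = vectors[subreddits.index("r/The_Donald")] - vectors[subreddits.index("r/politics")]
scores = cosine_similarity(combo.reshape(1, -1), vectors)[0]
print({s: round(float(x), 2) for s, x in zip(subreddits, scores)})
print("nfl vs nba:", round(similarity("r/nfl", "r/nba"), 2))
```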

I’m doing a Reddit AMA

I'm doing a Reddit AMA tomorrow hosted by the DataIsBeautiful subreddit. It'll be at 1:30pm EST on August 27, 2015. In case you're unfamiliar with the AMA (ask me anything), it's just a fun Q&A thing, where you ask me questions on Reddit, and I pause to think of something good to say. I might type some answers. Ask me about visualization, data, blogging, graduate school, my hate of commuting,...

Download data for 1.7 billion Reddit comments

There's been all sorts of weird stuff going on at Reddit lately, but who's got time for that when you can download 1.7 billion comments left on Reddit, from 2007 through May 2015? “This is an archive of Reddit comments from October of 2007 until May of 2015 (complete month). This reflects 14 months of work and a lot of API calls. This dataset includes nearly every publicly available Reddit...”
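If you grab the archive, each monthly file is a bz2-compressed stream with one JSON object per comment, so it can be processed line by line without decompressing everything to disk. A minimal sketch, assuming a local copy of a single month (the file name and the tally are just for illustration):

```python
import bz2
import json
from collections import Counter

def iter_comments(path):
    """Stream comments from one monthly dump file (one JSON object per line,
    bz2-compressed), without loading the whole archive into memory."""
    with bz2.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    # Hypothetical local copy of one month of the archive.
    path = "RC_2015-05.bz2"
    by_subreddit = Counter()
    for comment in iter_comments(path):
        by_subreddit[comment["subreddit"]] += 1
    print(by_subreddit.most_common(10))
```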
