January 10, 2019: In 2017, we covered a study that suggested information overload may be responsible for the viral spread of faulty information. The study combined modeling of artificial “agents” that forwarded information to their peers with real-world data obtained from Twitter. In attempting to follow up on their own work, the researchers who produced it discovered two problems: a software bug in their analysis pipeline, and a graph that was produced using invalid data.
Combined, these suggest the model they favored—that high- and low-quality information were equally likely to spread—wasn’t valid. While this doesn’t alter the empirical data they obtained, it does influence their analysis of it, so they have chosen to retract the paper.
The retraction highlights a frequently overlooked aspect of scientific reproducibility. Problems with published work are often identified not by repeating the exact same experiments, but by attempts to build or expand upon them.
The original story follows. Credit to Retraction Watch for identifying the retraction.
Once upon a time, it wasn’t crazy to think that social media would allow great ideas and high-quality information to float to the top while the dross would be drowned in the noise. After all, when you share something, you presumably do so because you think it’s good. Everybody else probably thinks what they’re sharing is good, too, even if their idea of “good” is different. But it’s obvious that poor-quality information ends up being extremely popular. Why?
That popularity might be a product of people’s natural limitations: in the face of a flood of information and finite attention, poor discrimination between high- and low-quality information ends up being a virtual certainty. That’s what a simulation of social media suggests, at least.
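The flavor of that simulation can be sketched in a few lines. The following is a toy illustration, not the researchers' actual model or code: agents on a network hold a finite "feed" of memes (the attention limit), occasionally post a new meme of random quality, and otherwise reshare something from their feed with probability weighted by quality. All parameter names and values here (`attention`, `mu`, the fan-out of 5) are invented for illustration.

```python
import random

def simulate(n_agents=200, steps=5000, attention=10, mu=0.25, seed=1):
    """Toy agent-based meme-spread sketch (not the authors' model).

    Each agent keeps at most `attention` memes in its feed. At each
    step a random agent either posts a new meme with a random quality
    in [0, 1] (probability `mu`, or always if its feed is empty), or
    reshares a meme from its feed, chosen with probability proportional
    to quality. The meme lands in 5 random agents' feeds; when a feed
    overflows, the oldest meme drops out -- the finite-attention step.
    Returns a dict: meme id -> [quality, share count].
    """
    random.seed(seed)
    feeds = [[] for _ in range(n_agents)]
    memes = {}  # meme id -> [quality, share count]
    next_id = 0
    for _ in range(steps):
        a = random.randrange(n_agents)
        if not feeds[a] or random.random() < mu:
            meme = next_id
            memes[meme] = [random.random(), 0]
            next_id += 1
        else:
            # Agents *try* to prefer quality when resharing...
            weights = [memes[m][0] + 1e-9 for m in feeds[a]]
            meme = random.choices(feeds[a], weights=weights)[0]
        memes[meme][1] += 1
        for b in random.sample(range(n_agents), k=5):
            feeds[b].append(meme)
            if len(feeds[b]) > attention:
                feeds[b].pop(0)  # finite attention: oldest meme is forgotten
    return memes
```

Even though resharing in this sketch is explicitly biased toward quality, a small `attention` value means most memes scroll out of feeds before their quality can matter, so popularity ends up only weakly tied to quality, which is the intuition the study's simulations formalized.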