Information overload makes social media a swamp of fake news

Once upon a time, it wasn’t crazy to think that social media would allow great ideas and high-quality information to float to the top while the dross sank out of sight. After all, when you share something, you presumably do so because you think it’s good. Everybody else probably thinks what they’re sharing is good, too, even if their idea of “good” is different. Yet poor-quality information routinely ends up being extremely popular. Why?

That popularity might be a product of people’s natural limitations: in the face of a flood of information and finite attention, poor quality discrimination ends up being a virtual certainty. That’s what a simulation of social media suggests, at least.

A group of researchers from the Shanghai Institute of Technology, Indiana University, and Yahoo wanted to investigate the tradeoffs that happen on social media. Their simulated social network allowed them to tweak different parameters to see what would happen.

You want a level of information you can deal with…

In the simulation, agents sit in social networks, connected to other agents that are close to them. Agents can pass messages to each other through these networks. These messages have a rating representing their quality. Because quality is a slippery, subjective thing that’s difficult to get a computer to understand, the simulation is pretty loose about what “quality” means; the ratings might represent truth, originality, or some other value, and all the agents in the model agree on the rating. That’s obviously a drastic simplification of the real world, but having a simple value makes it easy to observe how quality affects sharing.

Agents can make up their own messages or share messages sent to them by their neighbors. If a message is high quality, agents are more likely to pass it along. The model let the researchers tweak the rate of message creation, all the way up to levels that simulate information overload: if agents create a high volume of new messages, the other agents get overwhelmed with information. If not much new is being created, existing information gets bounced around much more.

The amount of information an agent can manage can also be tweaked. Each agent has a memory that can hold only a certain number of the most recent messages produced by neighbors. If that attention span is large, an agent can look through a large number of messages and only share the highest-quality ones. If the memory is small, the menu of messages to share is much smaller.
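To make the mechanics concrete, here’s a minimal sketch of this kind of agent-based model in Python. It is not the authors’ code: the network (a simple ring), the parameter names (MU for the creation rate, ALPHA for the attention span), and the quality-weighted sharing rule are all assumptions made for illustration.

```python
import random
from collections import deque

NUM_AGENTS = 100      # agents sit on a ring, each linked to nearby neighbors
NUM_NEIGHBORS = 4     # links per agent (two on each side)
MU = 0.1              # probability an agent creates a new message per step
ALPHA = 10            # attention span: how many messages a memory can hold

# Each agent's memory is a bounded queue: old messages fall out as new ones
# arrive, which is what makes attention finite in this model.
memories = [deque(maxlen=ALPHA) for _ in range(NUM_AGENTS)]
qualities = {}        # message id -> quality rating in [0, 1]
share_counts = {}     # message id -> number of times it was re-shared
next_id = 0

def neighbors(i):
    """Adjacent agents on a ring; a stand-in for a real social graph."""
    half = NUM_NEIGHBORS // 2
    return [(i + d) % NUM_AGENTS for d in range(-half, half + 1) if d != 0]

def step():
    global next_id
    agent = random.randrange(NUM_AGENTS)
    if random.random() < MU or not memories[agent]:
        # Create a brand-new message with a random quality rating.
        msg = next_id
        next_id += 1
        qualities[msg] = random.random()
        share_counts[msg] = 0
    else:
        # Re-share from memory, biased toward higher-quality messages.
        pool = list(memories[agent])
        weights = [qualities[m] + 1e-9 for m in pool]  # floor avoids zero sum
        msg = random.choices(pool, weights=weights, k=1)[0]
        share_counts[msg] += 1
    # The chosen message lands in the neighbors' finite memories.
    for n in neighbors(agent):
        memories[n].append(msg)

for _ in range(100_000):
    step()
```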

Playing around with these numbers allowed the researchers to observe how many times a message was shared between its introduction and eventual fade. They found that, when the system had low information overload, higher-quality messages had a much greater chance of popularity. But when information overload was high, quality didn’t make that much of a difference anymore. So, in the face of information overload, the network as a whole was worse at discriminating quality.
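Continuing the sketch above, one natural way to score the network’s discriminative power is the rank correlation between a message’s quality and how often it was shared. The paper uses a similar rank-based measure, but this exact formulation is an assumption. Rerunning with a larger MU (more new messages competing for the same finite memories) should push the correlation toward zero.

```python
from scipy.stats import kendalltau

# Kendall's tau between quality and share count: near 1 means the popular
# messages are the good ones; near 0 means popularity is blind to quality.
ids = list(share_counts)
tau, _ = kendalltau([qualities[m] for m in ids],
                    [share_counts[m] for m in ids])
print(f"discriminative power (Kendall tau): {tau:.3f}")
```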

Attention also played a role: with higher attention, messages rarely went suddenly viral; popularity grew more slowly over time, and only the highest-quality messages enjoyed bouts of sudden popularity. With lower attention, poor-quality messages had a greater chance of attaining viral fame.

…but also a healthy diversity of ideas

In an ideal world, you wouldn’t want just high-quality information, but also a diversity of thinking. Having a lot of messages competing for attention and popularity is probably a healthy thing for a thriving marketplace of ideas. The model found that a low information load leads to better quality discrimination, but it also leads to low idea diversity.
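Diversity can be quantified too. A common choice, and a plausible reading of the paper’s entropy-based measure (though treating share counts this way is an illustrative assumption), is the Shannon entropy of the popularity distribution: high entropy means attention is spread across many messages, low entropy means a few dominate. This continues the sketch above.

```python
import math

# Shannon entropy of the share-count distribution over messages.
total = sum(share_counts.values())
probs = [c / total for c in share_counts.values() if c > 0]
diversity = -sum(p * math.log(p) for p in probs)
print(f"diversity (entropy): {diversity:.3f}")
```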

Is there a place where the tradeoff reaches a good balance? According to the model, yes: you can have high diversity and good quality discrimination, but only if attention is also high. “When [attention] is large, there is a region where the network can sustain very high diversity with relatively little loss of discriminative power,” the researchers write.

Real-world data makes it look even worse

Models always rely on simplifying assumptions about the real world. One of the most important assumptions in this research is that everyone produces new ideas at the same rate and has the same attention span. Obviously, that isn’t true, so the researchers went looking for ways to make these parameters more realistic.

The researchers used data from Twitter to estimate information overload, looking at a million users’ rate of tweeting vs. retweeting. Different people had different ratios, which the researchers plugged into their model. For attention, they looked at 10 million scrolling sessions from Tumblr, counting the number of times a user stopped while scrolling through their feed. This was a proxy for how many items the user paid close attention to. These numbers, also plugged into the model, gave the agents varying attention spans that closely mimicked the real world.
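In the sketch above, every agent shared a single MU and a single ALPHA. The empirical Twitter and Tumblr distributions aren’t reproduced here, but the spirit of the change is to give each agent its own values drawn from a heavy-tailed distribution; the lognormal parameters below are purely stand-in assumptions.

```python
import random
from collections import deque

# Per-agent creation rates and attention spans drawn from heavy-tailed
# stand-in distributions: a few agents post constantly or read deeply,
# most do neither.
mus = [min(1.0, random.lognormvariate(-3.0, 1.0)) for _ in range(NUM_AGENTS)]
alphas = [max(1, int(random.lognormvariate(2.0, 1.0)))
          for _ in range(NUM_AGENTS)]
memories = [deque(maxlen=a) for a in alphas]

# Then, in step(), the global MU gives way to the acting agent's own rate:
#     if random.random() < mus[agent] or not memories[agent]:
```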

The results of adding greater realism to the simulation were appalling: the network got really, really bad at picking out the highest-quality messages to go viral. “This finding suggests that the heterogeneous information load and attention of the real world lead to a market that is incapable of discriminating information on the basis of quality,” the researchers write.

The difficult thing in models like this is checking that they definitely apply to the real world. It’s one thing to plug in numbers taken from the real world; it’s another to assume that Twitter or Facebook really do behave the same way as these simulated networks.

A real-world check is difficult in a model like this because a real-world measure of quality is hard to find. Still, the researchers did a rough-and-ready check using data from an article-quality rating scheme. They compared how often highly rated articles were shared on social media with how often poor-quality ones were, and found no difference: both were just as likely to go viral. That suggests the real world is just as bad at discriminating quality as the simulated network.

There’s still more work to be done on models like these. For instance, this simulation doesn’t capture the echo chambers that exist on social media, so the role those chambers might play is not clear.

So, what do we do to improve the situation? The researchers suggest that limiting the amount of content in social media feeds might be a start; they recommend controlling bot abuse, although it’s not obvious that this would drastically reduce the information firehose we all face on a daily basis. Trying to maintain a high level of skepticism about the information that drifts into your path might be the only defense for now. “Our main finding,” write the researchers, “is that survival of the fittest is far from a foregone conclusion where information is concerned.”

Nature Human Behaviour, 2017. DOI: 10.1038/s41562-017-0132
