After IP's excellent original thread had a chance to rattle around in my brain for a bit, I recently encountered a very interesting blog post detailing a scientific study whose relevance will become clear.
From the post:
Researchers at Indiana University Bloomington have built simulations of social networks, in which each node is an imaginary person with a random set of connections to other nodes. To model the real world, each simulated person has a limited attention span — they can only process a certain number of incoming messages. The messages are assumed to vary randomly in quality, and like in the real world, each person can choose to rebroadcast a message, or not. In the simulation, the decision whether to rebroadcast is random, rather than being driven by "virality" or cognitive bias, so the simulation is an optimistic one.
It turns out that message propagation follows a power law: the probability of a meme being shared a given number of times is roughly proportional to an inverse power of that number. This turns out to mean that as the number of messages in the network rises, the quality of those which propagate falls.
There's been a lot of concern as of late about "algorithms," and rightfully so. But while it's clear that all the large tech companies are engaged in tampering with the flow of information, this study demonstrates that that isn't the central problem. Read that last paragraph again, and then this one from the article itself:
Bots...greatly reduce the quality of information in a social network. In one computer simulation, OSoMe researchers included bots (modeled as agents that tweet only memes of zero quality and retweet only one another) in the social network. They found that...when the percentage of bot infiltration exceeds 1 percent, poor-quality information propagates throughout the network. In real social networks, just a few early upvotes by bots can make a fake news item become viral.
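To make the quoted setup concrete, here's a toy version you can run yourself. This is my own minimal sketch, not the OSoMe code, and every parameter (number of agents, attention span, rebroadcast probability) is made up for illustration: agents have bounded attention queues, memes get random quality, and the rebroadcast decision is random rather than quality-driven, matching the "optimistic" assumption described above.

```python
import random

def simulate(n_agents=100, n_steps=20000, attention=10,
             fanout=5, rebroadcast_p=0.75, seed=0):
    """Toy attention-limited meme model (illustrative parameters only)."""
    rng = random.Random(seed)
    # each agent's posts are seen by a fixed random set of other agents
    followers = [rng.sample(range(n_agents), fanout) for _ in range(n_agents)]
    screens = [[] for _ in range(n_agents)]  # bounded attention, newest first
    quality = []   # quality[m] = intrinsic quality of meme m (random, as quoted)
    shares = []    # shares[m] = how many times meme m was rebroadcast

    def deliver(agent, meme):
        screens[agent].insert(0, meme)
        del screens[agent][attention:]       # overflow falls off the screen

    for _ in range(n_steps):
        agent = rng.randrange(n_agents)
        if screens[agent] and rng.random() < rebroadcast_p:
            # rebroadcast choice is random, not quality-driven:
            # the "optimistic" simulation from the quoted description
            meme = rng.choice(screens[agent])
            shares[meme] += 1
        else:
            meme = len(quality)              # post a brand-new meme
            quality.append(rng.random())
            shares.append(0)
        for f in followers[agent]:
            deliver(f, meme)
    return quality, shares

quality, shares = simulate()
```

In runs of this sketch the share counts typically come out heavy-tailed, in the spirit of the power law the article describes: most memes are never rebroadcast at all, while a handful collect a large share of the rebroadcasts — and since rebroadcasting ignores quality, what spreads has nothing to do with how good it is.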
Constant hand-wringing is done over "misinformation" (including in the article, with its obnoxiously leftist slant). But this study shows pretty clearly that the root problem isn't algorithms or propaganda per se.
It's information overload.
As people get exposed to more and more info, their ability to engage with it goes down, and so they prioritize low-quality info (much of it not even political) because that's all they have the mental bandwidth for. This process can be accelerated by bots, but they're only catalyzing a reaction that's already taking place, like gasoline on a fire. People who weren't there will have a hard time believing it, but >reddit
was originally a place where you could have deep, thoughtful, text-only discussions with people. Then it devolved into a place to share cat pictures. Why? IT GOT TOO BIG. And low quality shit rose to the top.
Consider what this means. Maybe the Dead Internet isn't a conscious plot or sinister AI. Maybe it's an emergent property, something that happens naturally in an information-sharing network when people are forced to deal with more info than they can handle. Maybe the Internet feels dead and hollow because the sheer SIZE is sucking the oxygen out of the room, creating an environment that's toxic to high-quality, thoughtful content. The traditional framing of Eternal September is that communities that get too big fail because large communities acquire members faster than they can internalize culture. But maybe this is overthinking it. Maybe the problem is just the size itself. Force people to process too much info, and you get garbage.
And with this in mind, maybe the solution is less information OVERALL. Taking the study's methods as inspiration, this could be done with just a random number generator: if you have a feed of 1,000 items, make a system that picks 10 or 20 at random. Trim it down. Cut back the firehose of content. And interestingly, this is a solution that both sides might agree to, as long as it was truly random. Wouldn't even have to trust Silicon Valley: you could implement this across various sites with a Greasemonkey script.
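The core of that random trim is just a uniform sample (in a real Greasemonkey userscript it would be JavaScript operating on the page's feed elements; this Python sketch with made-up names only shows the idea):

```python
import random

def trim_feed(feed, keep=20, seed=None):
    """Keep `keep` items from a feed, chosen uniformly at random.
    No ranking, no engagement signals -- 'truly random', so there is
    no editorial thumb on the scale for either side to object to."""
    rng = random.Random(seed)
    if len(feed) <= keep:
        return list(feed)
    return rng.sample(list(feed), keep)

feed = [f"item-{i}" for i in range(1000)]
trimmed = trim_feed(feed, keep=20, seed=42)
print(len(trimmed))  # 20
```

The `seed` argument is only there so a run is reproducible; in practice you'd leave it unset so every page load gets a fresh random slice of the firehose.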
Of course most people will never do this on their own: it has to happen on the site level to make a big impact, and that's unlikely because SV likes ad money above all else. But it does shift the discussion from debates over what information should be allowed to simply reducing the amount of information people have to process OVERALL.
In short: What happens when social media crams more info into people's heads than they can handle?
This:
