What Happened to "Social" Media?
The once-disruptive technology is mainstream, and we should worry about what that means
About 15 years ago, Wendy Moe and I spent a fair amount of time investigating what was being called “user-generated content.” At the time, we were talking about blogs, discussion forums, and product reviews. Pretty much anything that you would find online that was created by an Internet user rather than a company. The UGC label didn’t stick, and today we talk about it more broadly as “social media.” But, that’s not really what we have. And recent choices by companies and regulators have highlighted that for us.
Before we dive into this, let’s start with acknowledging a reality. We are not the true customers of the social media platforms. The real customers of the social media platforms are advertisers. Someone has to pay for the servers that are used to allow millions of people to connect with each other. So, rather than ask consumers to pay for it, it turns out that marketers are more than willing to pay for the ability to reach well-defined groups of customers. And it’s not just that we aren’t their customers. The time that each of us chooses to spend endlessly scrolling on X, Instagram, or Facebook is attention that the platforms monetize. Access to us is the very product that they are selling.
That probably sounds a bit more ominous than I intended, but that’s the bottom line. More broadly, the entire digital ecosystem has been built on consumers’ data and attention, but that’s a topic for another day.
Are we alright with this trade? We get to connect with people that we know and people with common interests, and in exchange, advertisers try to persuade us to buy things. It’s the same idea that underpinned television in the early days — free content in exchange for an opportunity to persuade people. I think when social media was getting off the ground, this was a reasonable price to pay.
But, what’s happened on these platforms in recent years?
From Personalization to Polarization
One of the promises that social media platforms made to businesses was that they could enable more efficient marketing spending. How? By allowing consumer segments to be identified with greater precision, based on data that consumers had allowed to be collected about them. Profiles could be built based on demographics, geographic locations, and even the set of pages that you liked. If you were looking for religiously inclined, well-educated women with children, you could target them with ads. Want to reach people in long-distance relationships with an upcoming anniversary? Enough of these people could be identified so that you could focus your campaign on them.
It makes perfect sense from a business standpoint to want to focus your advertising dollars on the people who you know are more inclined to actually purchase your products. But, the personalization that became commonplace for digital marketing didn’t stop with the ad dollars. It was built into the algorithms that dictate our entire online experience.
It turns out that certain types of content tend to resonate better with us. And, the content that does a better job at engaging us? It tends to evoke strong emotions from us. It’s not just on social media. The same traits that tend to make content go viral are associated with increased engagement. The more the platforms show people content that engages them, the longer those people stay on the platform. And, you guessed it, the more ad revenue the platforms can generate.
But, what about the content coming from other people? It turns out that social media is just as polarized as other aspects of society (in the US, at least). People tend to gravitate toward those who are like them, and not interact much with people different from them. The result is a social network that looks like the graph below.
“Red” actors (nodes) connect to other red actors, and “blue” actors connect to other blue actors. There isn’t much cross-pollination of ideas. And while it’s pretty obvious in a political context, this hyper-partisanship is probably creeping into other aspects of society. Instead of being exposed to new ideas and a broader set of information, we get pulled into echo chambers and grow increasingly divided.
There’s more content out there, being created by more people. But, none of us are seeing all of it. Before the digital age, we had a limited number of sources from which we’d acquire information. And, while our choices have proliferated, one of the consequences of this is that we don’t have a common set of information any more. Aaron Sorkin’s The Newsroom summed it up.
Sidenote: If you’re a fan of Aaron Sorkin, there will surely be more references to his work in my writing. And, don’t worry, if that’s too far to one extreme for you, there will also likely be references to Ayn Rand.
Social media hasn’t given us the much-hoped-for platform where all of us can get up on the virtual soapbox and reach a mass audience. Rather, the algorithms used have ensured that we *might* reach people who tend to agree with us.
A Social Experiment is Coming Home to Roost
So besides contributing to the age of “alternative facts” and a post-truth world, what else can we thank social media platforms for?
Increased body image issues among young women. Some countries have stepped up to combat this by requiring disclosures on manipulated images, but the US is not one of them.
A proliferation of hate speech aimed at minorities, which has sadly often turned violent. Try explaining to a curious child why someone shot and killed several people just because of their race or religion.
The surgeon general has warned about its impact on youth mental health.
Meta recently announced that it will no longer fact check content online. First off, let’s recognize that fact-checking the enormous volume of content that is posted daily is a massive undertaking. Even under the best circumstances (content that is clearly true vs. false, or appropriate vs. inappropriate), this is a tall order. Yes, we now have AI that could in theory do this (and it would certainly be more humane than subjecting human fact-checkers to content that could be psychologically unsettling), but that means the AI has to get it right most of the time.
But, is punting on truth the right decision? I get that there is a free speech argument to be made. But, online forums already tend to devolve into cesspools that show us the worst of humanity. And plenty of research has highlighted the challenges of combatting misinformation, including how quickly it spreads on social media. Do we really think that removing a safeguard is suddenly going to improve content? I’m not just concerned about the veracity of online discourse, but without common sense guardrails, we are going to see these platforms bring out the worst in people.
While I’m not overly hopeful about what this means from the perspective of civil discourse, I think the market will sort itself out. If you’re troubled by it, you don’t have to spend time on those platforms. Pick a different platform, or put down your device and do something offline with actual other people. If brands care, which they should only to the extent that their customers and/or shareholders care, they will move their money elsewhere.
A New Set of Gatekeepers
I read Tim Wu’s The Master Switch (Amazon link here for convenience, no commission for me) years ago. I don’t remember what prompted me to pick this book up, but I think it’s even more relevant today. I may be misremembering the details, but the part that stuck with me was the way in which a small handful of organizations had a stranglehold on the propagation of information.
Digital media did change the game. It became easier for non-behemoths to create and disseminate content. You’d still have to contend with finding an audience, but we’ve seen new outlets emerge. As proof of that, look no further than The New York Times, which recognized Buzzfeed as a competitor in its 2014 Innovation Report.
But, while legacy media has been forced to adapt, we haven’t seen the full-fledged free flow of information. Instead, it looks like we’ve traded one set of gatekeepers for another. Algorithms drive everything today; it’s what makes TikTok so successful, and simultaneously so dangerous. And while these algorithms are focused on driving user engagement, what are the consequences for other outcomes? For example, could a focus on driving engagement result in users being exposed to angrier content? Research says yes. But, what if it’s not just emotions? What if an organization wanted to push a particular agenda? Last year, Pew Research Center reported that more than 50% of Americans sometimes get their news from social media. What is to stop the platforms from choosing to stifle information that doesn’t benefit them? While we often worry about government overreach, these organizations have a huge impact on informing society. It’s not clear what checks exist on them, and that’s assuming a government could move swiftly enough to do something about it.
Without Social, It’s Just Media
Changes to social media have been fast and furious over the last week, let alone the last few years. But, Meta’s deployment (and subsequent deletion) of AI bots to engage with users has to take the cake. Part of Facebook’s mission statement was to “bring the world closer together.” How does adding fake conversational partners bring people closer together? I can understand how it might boost engagement on the platforms. But, there’s no way that adding AI-generated slop to a feed connects me in a meaningful way with other people.
The decision to deploy AI bots tells us everything that we need to know about “social” media platforms. They are hyper-focused on engagement, not on building connections between people. To the extent that more human connection results in more businesses and consumers using the platform, they’re not opposed to it. But, I suppose when you have a good chunk of the world’s population on your platforms, there’s only so much growth through customer acquisition that you’re going to get. If the platforms want users to spend more time there, they can’t count on other users to create the content that incentivizes it.
I’m one of those dinosaurs that spends time doomscrolling on Facebook. Despite the changes to the newsfeed algorithm over the years, I can’t say that I’m getting shown content that is more meaningful and resonates with me. Why? Two possibilities. First, Meta would rather show me other content because there is a better chance that it will engage me compared to what my connections post. But, given how many recommended pages and ads I see, the more likely scenario is that my connections aren’t producing enough content to populate my feed. Aside from occasional updates, I’m not seeing content from social connections. I’m seeing algorithmically curated content, with a sprinkling of what Facebook started out as.
Social media has gone through a metamorphosis. Let’s recognize it for what it has become. While it might have gotten its start based on social ties and connecting people, social media is just “the media.” It is mainstream. It has the power to entertain us, persuade us, and infuriate us. Social media is crucial to the success of businesses today. But, it’s much bigger than that, as it’s become a primary means of remaining informed. And, we’ve given the platforms a free pass for what is shared on them (thanks a lot, Section 230). Social media is embedded in our society, and it’s not going away any time soon. Parents allowing their children to create social media accounts is probably a more significant rite of passage than getting a driver’s license.
I wonder, how might online discourse be different if the platforms were financially responsible for any harms to which their algorithms and the content they host contributed? Or, if engagement were not the only metric that was prioritized? The academic in me sees a lot of opportunity to explore these hypotheticals. The cynic in me doesn’t see meaningful change coming any time soon, at least in the US. Maybe we will take a page from Australia? Or mandate digital literacy to minimize the harmful effects that it can have? We tried allowing technology to evolve on its own, and to say it was a mixed bag is being very generous.
Maybe it’s time to try something different?