It’s hard to pinpoint just when it all went wrong. One day I was coming home to a roommate setting up a Makeoutclub profile on the household desktop, and then I blinked and found myself 20 years in the future, a middle-aged mom who has wasted three hours of a precious solitary Tuesday morning scrolling through nonsense on Instagram.
I suspect I’m not alone here. Somewhere along the line we started offering up more and more of our time to social media. It became so cozy and familiar, so comfortably ubiquitous, that we hardly noticed our agency slipping away. My phone became a leaden tyrant in my pocket, something I was obligated to “check” innumerable times a day. And the information and sentiments I ingested with each passing glance became increasingly hostile, urgent, and preachy. It became, in a word, exhausting. But it wasn’t always this way; I know, because I was there. So how did we get here?
In the beginning
In the beginning, social media was a pleasant little interlude. Some older Gen Xers were there at the birth of ‘virtual communities’, the ancestors of modern social media: DDial (a mid-’80s chat server), listservs (the first mailing lists), Usenet (an early forum established in 1980), IRC (real-time chat, created in 1988). They met people and exchanged ideas. Trolls emerged almost immediately, but the small-scale nature of these communities, coupled with the time and technical knowledge needed to join them, acted as a natural barrier to bad actors.
By the time Friendster showed up in 2003, I was already on some message boards and had started getting my blogging feet wet on Xanga, a journaling site similar to LiveJournal. I remember a college friend saying, “Hey, are you guys on Friendster yet? Let me send you an invite…” and it was off to the races. I relished filling out my profile on these sites: which pithy quotes to include? Which profile picture taken from just the right angle? At a time of life when I owned nothing, it felt auspicious to create my own little postage stamp lawn on the ‘net. Yet when Myspace began to take hold in the hearts of my social circle, it had already become A Lot. I remember grousing a bit about having to set up another thing when I was still quite cozy over on Friendster.
Portrait of the artist as a young Myspacer (by far not the most ridiculous angle I used over the years)
Even while I collected “friends” on these platforms and fussed around with the order of my Top 8, the primary function of social media was to keep in touch with people I already knew. Most of the posting and messages revolved around rehashing something that had happened in person, or making plans for getting together. The entire point of being in this virtual world was to process and enrich our real lives.
After Myspace had been around for a couple of years, I began to feel a tightening in the pace and obligation of online life. I used to spend hours keeping track of goings-on around town so I could publish a weekly bulletin about where to be and at what time. As a Type A person, I didn't conceive of this as anything more than a way of ensuring that my close friends could manage to run into me at some point. But, one night, a relative stranger thanked me for “doing all that work” so they could show up to all the right places every weekend. It was my first faint glimmer that all my effort could be funneled into a source of revenue for Big Tech. My participation, my posting; pouring myself out into these bits and bytes; this was, almost without my realizing it, becoming work. But I was too busy rushing into a metal show at a vintage shop to notice.
The burden had its benefits. I loved that my high school friends and I were able to skip the small talk at our ten-year reunion because we had all been keeping in pretty decent touch on social media. On a night when we only had three or four hours together, it made a real difference that we were able to dive right into the good stuff. And these earliest forays often had an air of serendipity. I remember a group of us piling into a friend’s room for a mid-party session of Chatroulette, a video-chat site that generates random pairings and quickly became infamous for pornographic content. A sense of possibility crackled in the air as we waited for it to sort us into tête-à-têtes, and I still remember the stranger’s “Uh, wow…” when they saw a room full of revelers, faces jostling to get into the webcam frame. Joy, chaos and unpredictability: it felt like we were living in the future.
What changed? 1: Increased dosage
Most of my early social media experiences were bite-sized, simply because I needed to be near a computer to have them. I checked Friendster briefly once or twice a day; it didn’t take long, given the constraints of my friends list (mostly people I knew in real life) and the chronological feed of posts. Once I had read it all, that was it.
But social networks must either grow or perish. The more savvy emerging companies, including Facebook and Twitter, became determined to lure more of the population online, in more places, more of the time. Seeing my friends’ parents and corporate brands infiltrating Facebook was like seeing my local pizza joint overrun with tourists. What had once been a fairly digestible, personalized feed of party photos and life milestones became a maelstrom of content that had little, if anything, to do with me. Ads screaming about things I didn’t want were stacked up alongside brands I had halfheartedly “liked” and clickbait articles from news outlets.
What I was seeing was the manifestation of a new commercial challenge for tech companies: ‘engagement’. This used to mean simply clicks or signups. But, over time, it became apparent that advertisers were prepared to pay for different kinds of engagement; and what advertisers are prepared to pay for, tech companies will measure and incentivize and optimize. This straightforward rubric, once operationalized at scale, drove a host of downstream effects.
To get people to spend more time on their site or app -- to make their features “stickier” -- digital product teams started to perfect what’s known as “persuasive technology”, which “is honed to tap into our psychology and push us towards certain behaviors.” Examples listed by the Center for Humane Technology include notifications (which “mimic naturally occurring signs of danger”), the possibility of new comments or ‘likes’, and design features like infinite scroll, where more content loads automatically as you reach the bottom of the page. The inventors of some of these features, including Aza Raskin (creator of infinite scroll) and Chris Wetherell (creator of the retweet), have since spoken out about their regret and have even become activists for more mindful tech usage.
Social media couldn’t achieve total ubiquity until the advent of the smartphone. You can’t carry a PC to the dinner table; you can’t take a gaming console into the bathtub. Even the Game Boy, an early hero of ‘portable’ gaming, didn’t fit into any of my pockets. But, after 2012, getting back into the feed became as easy as raising your hand to chest level. It became necessary to invent terms like “doomscrolling”, which describes a state of abyss-gazing: scrolling through whatever content presents itself even while acknowledging its utter uselessness.
Were we meant to be “social” at this dosage level? Are we even able to make our own decisions about it anymore? Even when people recognize that they’re spending more time on TikTok than they consciously want to, they still struggle to delete apps or put the phone down. A prison of one’s own making is still a prison.
What changed? 2: Wilder content
When we look at what the internet was originally invented for, we can see how far we’ve travelled. It was initially used for sharing information for academic and military purposes, and then developed mass appeal as a vehicle for connecting based on interests (Geocities! Livejournal! Blogger!). Now, it appears to be about communities -- everyone from video gamers to Steven Universe fanartists -- bonding over common hatreds, and over pastimes like ‘brigading’ (coordinated mass harassment) and doxxing (publishing someone’s personal information). Meanwhile, spaces for casual interaction have withered. We invite our neighbors to clambakes and take out their trash without ever thinking to ask about their political positions, but online, posturing and politicizing are hard to avoid.
Why did this happen? Because shock and fear are powerful emotions; the amygdala fires faster than the prefrontal cortex. Positive and useful content -- which absolutely does exist -- doesn’t drive as much engagement. Which, as we’ve seen, means it also doesn’t drive advertising, and therefore profit. The more shocking / scary / anger-inducing the content is, the more attention it gets, and the harder it’s pushed through our feeds. It’s simple supply and demand. As the New York Times put it in their 2017 article about Twitter cofounder Evan Williams:
“The trouble with the internet, Mr. Williams says, is that it rewards extremes. Say you’re driving down the road and see a car crash. Of course you look. Everyone looks. The internet interprets behavior like this to mean everyone is asking for car crashes, so it tries to supply them.”
This curdling of content is a big part of why I spend less time on social media nowadays. When the simple act of picking up my phone became a stomach-turning experience, I turned off notifications and stopped actively checking apps. But as much as this helped me personally, it only served to push content another millimeter toward the extremes. When reasonable people disengage, platforms can more easily select and optimize for highly politicized voices. In the absence of steadying counterpoints, it’s easy to assume you have everyone’s broad agreement, no matter how extreme the idea.
What changed? 3: Our algorithms, ourselves
At this point you may be thinking, “So what? If social media is addictive it’s because people choose to be on it, and if the content is bad it’s bad because that’s what the people want.” Well, not exactly. We stopped being entirely in charge of the content we see a long time ago.
A recommendation that’s actually labeled (I didn’t choose the teen vampire drama life, it chose me)
If you’re not sure how algorithms affect your experience, take a look at Instagram, where you will probably be defaulted to an algorithmic feed. If you tap the logo at the top of the screen and select “Following”, you will see something completely different -- the accounts you have actually chosen to follow. On Facebook, tapping “Menu” at the bottom right of the screen, then “Feeds,” takes you to a row of options for viewing just your Favorites, just your Friends, etc. Hey look! Here are the actual human beings you followed once upon a time. All their posts are there, quietly waiting behind the screen of ads and suggested posts and hot takes and just generally “higher-value” content (from the tech companies’ perspective) in your default feed. If there’s somebody there you miss, are glad to see, or wish you’d kept in better touch with, you’re experiencing the gap between what you actually want to see and what the algorithms think you want.
An algorithm was introduced to Facebook’s feed in 2009, and all the other major social media sites eventually followed suit. Tumblr (I worked in customer support at Tumblr from 2011 to 2019) was a notable holdout; our post order remained chronological for several years -- a point of pride amongst traditional internet enthusiasts like myself. But eventually the pressure to grow engagement forced us to cave. We began injecting categories such as “What you missed” and “In your orbit” into the dashboard in the mid-to-late 2010s. Initially these recommended posts were labeled as such, but further experiments encouraged ‘injection without labels’, and users were opted into a “Best stuff first” view by default. What does “best” mean? Who can say? Users who were used to a chronological dashboard complained about it in droves. This became a real bear for us support staffers, because it was hard to even tell which experimental version of the dashboard a given user was seeing.
People think there are brainwashers and mustache-twirlers behind all this, but it’s really not that dramatic. Maybe the big execs really do spend time screaming into a conference phone about algorithms while rubbing their hands together expectantly. But in truth this is about ordinary tech workers, people in a conference room looking at a graph together as their boss says “Oh, this thing make line go up, let’s do more of that.” So they do more of that thing: a little tweak here, a little tweak there, and then watch the numbers for a week or two. Go back again, tweak it some more, watch the numbers again. For years. Don’t get me wrong: those regular people are aware that their decisions affect other people, and care deeply about it, and discuss it thoughtfully at length. But the discussion often goes something like this:
Me: “This user doesn’t want to see X.”
Them: “But they’ve historically liked a lot of similar posts to X.”
Me: “But they don’t have anything in their following list or followed tags to indicate being interested in X.”
Them: “Well, a lot of people they interact with are interested in X. And they’re clicking on posts about X more often than other posts, so they really do want X. What they really want isn’t what they say they want; it’s what they do.”
Me: “So we’re gonna decide for them what they want?”
Them: [shrugs and carries on with their day]
The problem here is the assumption that what you do the most of, you should keep on doing. Imagine a teen who’s battling an eating disorder trying to avoid certain posts and having them served to her anyway because, you know, she’s “historically liked a lot of similar posts.”
“The idea was that social media would give us a fine-tuned sort of control over what we looked at. What resulted was a situation where we—first as individuals, and then inevitably as a collective—are essentially unable to exercise control at all. Facebook’s goal of showing people only what they were interested in seeing resulted, within a decade, in the effective end of shared civic reality.”
— Jia Tolentino, “The I in the Internet,” from her book Trick Mirror
Some people think they’re immune to algorithms, or that a simple awareness of what drives social media feeds is an antidote to its influence. Awareness certainly helps, and can influence you to scrounge up some healthy skepticism once in a while. But there is a deeper connection with our brains here that we’re not fully the masters of:
“Researchers in France exposed young women either to media photographs of very thin women, or to media photographs of average-sized women. They found that the young women exposed to images of very thin women became more anxious about their own body and appearance. But here’s the surprising thing: the images were flashed on the screen for just 20 milliseconds—too fast for the women to become consciously aware of what they had seen. The authors conclude that social comparison takes place outside awareness and affects explicit self-evaluations. This means that the frequent reminders girls give each other that social media is not reality are likely to have only a limited effect, because the part of the brain that is doing the comparisons is not governed by the part of the brain that knows, consciously, that they are seeing only edited highlight reels.”
— Jonathan Haidt, The Anxious Generation
The garbage pump abides
Sometimes I wonder: if I had realized just how huge social media was going to be, and just how large it would loom in the day-to-day life and spending of the average American, would I have reacted differently to those early invitations to join Friendster and Myspace? Could I have parlayed my skills at gathering and summarizing and photographing and being around fun people into some kind of an influencer career, instead of just plodding along with all the rest of us working schmucks? But in the end, I’m glad it didn’t turn out that way for me. From what I can tell, people who make a living online off of their personalities have jobs that never stop, and cameras that never turn off. As an introvert, I wouldn’t have lasted ten minutes (or ten product promos).
For those of us who are left consuming the firehose of content, though, why does it even matter?
Here’s Jonathan Haidt again, in The Anxious Generation:
“Soon before his death in 1662, the French philosopher Blaise Pascal wrote a paragraph that can be paraphrased as ‘There is a god-shaped hole in every human heart.’ I believe he was right. … Although we disagree about its origins, we agree about its implications. There is a hole, an emptiness in us all, that we strive to fill. If it doesn’t get filled with something noble and elevated, modern society will quickly pump it full of garbage.”
The important thing isn’t whether I’ll spend another hour avoiding chores while scrolling, or whether I’ll buy that cute witch t-shirt. The thing I try to keep foremost in my mind, as I navigate social media, is what I’m not doing; what I’m missing out on. Am I choosing not to put my feet up on the dashboard as my husband drives us to yet another beautiful hiking spot? Am I missing a chance to feel my hands in the soil as I free my zinnias from the weedy vines that choke them? Am I choosing something better than hearing the sound of my kids screaming as I chase them down the hall with threats of tickling?
Terrifyingly, how I spend my time is up to me. We can keep arguing (and the media does) about whether social media is evil or harmless, but in the end even harmless luxuries have a price. The question I’ll keep asking myself is whether the price is too high.
Let us indulge in some self-promoting cross-posting by linking this essay to our ongoing series about the screen works of Aaron Sorkin, starting with his script for the Mark Zuckerberg biopic ‘The Social Network’.