My Social Dilemma (Part Three): The Unintended Consequences of Algorithms
I remember being in the grocery store checkout lines as a kid and seeing the magazine rack. You would have news magazines and then the tabloid ones on the same rack. News ones…BORING. The tabloid ones really caught your eye.
Sure, they were mostly out in left field, untrue and totally garbage for your brain. But they captivated you to the point your brain wouldn’t want to engage with the more substantive content in the TIME magazine sitting next to them. Over time, I wouldn’t even notice the more “boring” content, and eventually I even got to the point where I was banner blind to the tabloids. With time, I was desensitized to the sensationalism…until the editors came out with a much more outrageous story from time to time.
I feel like that’s what social algorithms do now to some extent. Compounded with the current news cycle, what happened to my brain walking down the grocery checkout aisle as a kid is manifesting itself on our social media news feeds, especially Facebook. To me, it feels like algorithms are slowly normalizing the fringe.
What’s an Algorithm?
Algorithms were designed to simultaneously deliver relevant content to users and keep their attention on the screen longer. The platforms could then sell that attention to advertisers.
Who doesn’t want a personalized experience? Isn’t the whole point of all this technology to make information easier to find? Why not let an automated system do the work of sorting through the irrelevant stuff for me?
At a surface level, that’s correct. Sure, algorithms are a way to help maximize profits for the platforms. But if they’re providing a better user experience for me, isn’t it worth the trade-off?
It would be, except nobody really understands how these algorithms have continued to evolve. It’s like the line in Jurassic Park: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
If you take that scene from Jurassic Park and swap the mentions of dinosaurs with algorithms or AI, it has a lot of relevance today.
While humans created algorithms to improve user experience (aka make their platforms more addictive), those same humans unleashed something they don’t truly understand or, honestly, have much control over.
So What’s Happening?
A tool that was designed to deliver us content that would (in theory) most pique our interest has actually incentivized uncivil online behavior. With time, we’re seeing that spill over into real world interactions.
The original intent was that status updates about big life events, like wedding announcements, births, or job changes, would see the light of day more. Updates that would:
1. get more comments, so
2. you would be more likely to see them, and
3. eliminate FOMO for the user, because they saw information that was important to them and their network.
Those engagement actions like comments, likes and clicks deliver signals that say “hey give me more of this!”
In theory, that makes sense. But it’s an overly optimistic and overly simplified way of analyzing human behavior online.
The downstream effect is that posts that incite rage or other negative emotions send those same signals. To an algorithm, there is no nuance. It rewards a long comment thread on a post telling someone how proud you are of them just as much as a long comment thread telling someone how disgusted you are.
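To make that concrete, here’s a toy sketch in Python of what “no nuance” looks like to a ranker. This is not any platform’s actual code; the posts, weights and scores are completely made up. The point is only that engagement volume is the input, and why people engaged never is.

```python
# A toy feed ranker: scores posts purely on engagement volume.
# Sentiment never enters the calculation, so an outraged comment
# counts exactly as much as a congratulatory one.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> int:
    # Every signal says "give me more of this" -- the ranker has no
    # notion of why people engaged, only that they did.
    # (Weights are invented for illustration.)
    return post.likes + 2 * post.comments + 3 * post.shares

feed = [
    Post("So proud of my niece's graduation!", likes=40, comments=12, shares=1),
    Post("You won't BELIEVE what they did this time...", likes=15, comments=60, shares=20),
]

# The rage-bait post wins the ranking, despite the nuance a human would see.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(engagement_score(post), post.text)
```

Run it and the “you won’t believe” post floats to the top every time, because a pile of angry comments looks identical to a pile of congratulations.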
At the end of the day, those negative emotions drive more engagement. We’re drawn to it more. The more extreme, the better.
Normalizing the Extreme
Just like the tabloids I saw in the grocery store checkout line, we’re more drawn to the extreme and sensationalized version of events.
Moderation and the mundane make for poor clickbait.
Because of our tendencies to reward fringe views with engagement (positive or negative), we’re served up more of that content. Eventually, we’re mostly served up caricatures of “the other side.” We only see the most fringe versions of them. After repeated exposure to the fringe elements of something, we become desensitized to it.
If we’re desensitized to something, we no longer consider it fringe. It’s just normal. The fringe elements of a side end up becoming, in some eyes, the mainstream representatives of it.
If you’ve ever wondered why some people on the left view all Republicans as white nationalists, or why some people on the right believe all Democrats want to cause anarchy and burn down America, it’s because of this phenomenon. We apply broad-stroke judgments to large groups based on the fringe actions of a minority because…that’s all we see. With time, it’s the only content social news feed algorithms deliver us.
Creating Our Own Versions of Reality
Compound our online engagement activity with the other demographic information we’ve shared with these platforms (age, gender, geography, etc.), and we slowly construct our own realities, whether we realize it or not. Which is scary. They dive into this a bit in The Social Dilemma.
We were already starting to polarize, but algorithms have accelerated it.
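If you want to see how quickly that drift compounds, here’s a back-of-the-napkin simulation. Again, the numbers are invented and this is nothing like how any real feed actually works; it just shows how a feed that nudges itself toward whatever gets clicked slowly narrows what you see.

```python
import random

# Toy simulation of a personalization feedback loop (made-up numbers).
# The feed starts as a 50/50 mix of "moderate" and "fringe" posts;
# fringe posts get engaged with slightly more often, and every
# engagement tilts the mix served the following week.

random.seed(1)
mix = {"moderate": 0.5, "fringe": 0.5}            # share of the feed by type
engage_rate = {"moderate": 0.05, "fringe": 0.09}  # fringe is a bit more clickable

for week in range(1, 11):
    shown = random.choices(list(mix), weights=mix.values(), k=200)
    engaged = [p for p in shown if random.random() < engage_rate[p]]
    for p in engaged:                              # every click tilts the next feed
        mix[p] += 0.01
    total = sum(mix.values())
    mix = {k: v / total for k, v in mix.items()}   # renormalize to a probability
    print(f"week {week}: fringe share of feed = {mix['fringe']:.0%}")
```

Nobody sets out to build a fringe-heavy feed here. A tiny difference in clickability, fed back into the ranking week after week, does it on its own.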
If we are slowly building our own realities without actually realizing it, how can we have nuanced conversations with others we don’t see eye to eye with? How can we solve complex problems? How is it possible to engage with someone in good faith if all they’ve heard about you or “your side” is framed in a negative context?
It’s no wonder that we have a hard time nailing down a common understanding of truth when we’re able to unintentionally construct our own truth bubbles.
When we only hear from people we agree with, we have a harder time trusting people who belong to that other group.
Trust is a cornerstone for having a civil society. Without it we have nothing.
Okay, Drew. I get it. Algorithms are bad. Sort of. Now what?
There are a lot of ways to intentionally get out of your thought bubbles. It takes a lot of effort, but it is doable. We’ll get to that in another post.
Where I want to go from here is marketing’s role in all of this.
I have an occupation that requires me to get the right messages in front of the right people in the most effective way possible. Being good at my job means having at least a surface-level understanding of how algorithms work and how to drive consumer actions on posts to increase engagement and eyeballs.
Being effective at my job means, in a way, knowing how to game the system.
So how much responsibility should marketers shoulder for all of this? Can I look myself in the mirror knowing I get paid to participate and indirectly help fund all of this?
The answers to those are “not as much as you think” and “I can but only when I’m shaving.”
More on that in the next post.