They didn’t make it about the content
It’s hard to find anyone who doesn’t believe that too much social media is a bad thing. Social media companies now urge you to take breaks, and even Apple’s Tim Cook, whose ultra-popular iPhone hosts all these platforms’ apps, wants people to look less at their phones and more at other human beings. Now, though, the courts have weighed in: Meta and YouTube just lost a landmark case that could turn that casual concern into something tangible, in the form of fundamental changes to how we view and use social media.
On Wednesday, a Los Angeles Superior Court jury ruled in favor of a 20-year-old plaintiff who claimed that Meta and YouTube were negligent and that their platforms caused her mental health issues. It’s one of many such cases popping up around the US; less than 24 hours earlier, Meta lost a case in New Mexico over claims that its apps failed to protect users from online predators. Even though these cases are not connected in the courts, together they may point to rapidly changing attitudes toward social media and its use.
The LA case, which originally also included TikTok and Snap (both settled out of court), is the more notable one. By focusing not on the content that might have led to harm but on how the systems are built (algorithms that keep you engaged, endless scrolls, notifications pulling you back), the case skirted the US’s long-standing Section 230 (part of the 1996 Communications Decency Act), which essentially shields these platforms from liability for what third-party individuals post on them. So, unlike a publisher that might be liable for a story in its newspaper, YouTube is not directly liable for false and inflammatory remarks made in a YouTube video.
In this case, the content that might have influenced the plaintiff’s body dysmorphia is immaterial. What matters is that Instagram and YouTube felt inescapable because of how they work.
Let’s not be naive
Whether or not that’s true (the jury clearly believes it is), what we should be able to agree on is that the algorithms in Instagram, YouTube, and TikTok are powerful and personalized to even your most fleeting interests. They’re not just measuring whether you actively tap a like button. They can track time spent, where you paused, what you commented on, and other metrics that tell them how you feel about that content. More positive signals lead to you seeing more of that content, even if it’s not good for you. Just last year, Instagram added a tool to let you curate your own algorithm.
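To make that loop concrete, here’s a deliberately simplified sketch in Python. Every name, weight, and signal in it is invented for illustration; the real rankers are proprietary and vastly more complex. The point it demonstrates is the one above: implicit signals like dwell time count even when you never tap “like,” and they quietly tilt your feed toward more of the same.

```python
# Toy illustration of engagement-weighted feed ranking.
# All names, weights, and signals are invented for this sketch;
# real platform rankers are far more complex and proprietary.

from dataclasses import dataclass

@dataclass
class Signals:
    dwell_seconds: float   # time spent looking at the post
    paused: bool           # did the user stop scrolling here?
    commented: bool
    liked: bool

def engagement_score(s: Signals) -> float:
    # Implicit signals (dwell, pauses) count even when the user
    # never taps "like" -- that's the key point of the paragraph.
    score = min(s.dwell_seconds, 30) / 30          # cap dwell at 30s
    score += 0.5 if s.paused else 0.0
    score += 1.0 if s.commented else 0.0
    score += 0.75 if s.liked else 0.0
    return score

def update_topic_affinity(affinity: dict[str, float],
                          topic: str, s: Signals,
                          rate: float = 0.1) -> None:
    # Positive signals nudge affinity for this topic upward, so
    # similar content ranks higher next time -- regardless of
    # whether that content is good for the user.
    affinity[topic] = affinity.get(topic, 0.0) + rate * engagement_score(s)

# Example: lingering on "fitness" posts quietly boosts more of the same.
affinity: dict[str, float] = {}
update_topic_affinity(affinity, "fitness", Signals(25.0, True, False, False))
update_topic_affinity(affinity, "news", Signals(2.0, False, False, False))
print(affinity)  # "fitness" now outranks "news" without a single like
```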
If you have notifications on, these platforms will reach out and try to pull you back in. There’s also the social construct around them. Our cultural language is now intertwined with social media; to be off of it is to be wildly out of touch. That’s not something a teen feels they can afford to do.
One of the questions here, and I don’t know that this case makes it clear, is whether Meta and YouTube are being held negligent for not knowing what their systems were doing, or whether, as some reports have alleged, they willfully designed systems and algorithms to keep you coming back and on the platform for as long as possible, serving whatever content engaged you most without discerning between healthy and unhealthy content.
Cases like this make me feel for the affected teens, who must’ve felt trapped by the content and their response to it. I know that Meta and YouTube argued that the harm this young woman felt was tied to her home life and not their platforms. I would guess that played a part.
The role parents play
Which leads me to think about parents and guardians. Before any of us understood the impact of these platforms, most of us threw up our hands and “let teens be teens.” Social media wasn’t for us, although now many adults are just as addicted to it as teens.
I’ve often counseled parents on how they can’t leave their kids alone with phones, tablets, and social media. These tiny screens are doorways to a vast and unknowable world, often featuring content, ideas, and people they are not ready to handle.
Post-millennial teens (let’s just call them ‘digital natives’) are often smarter than their parents about technology, running rings around their rules and creating fake Instagram accounts (Finstagrams) to hide what they’re really doing on the platform: parents see the main Instagram account, while friends see their real lives on the Finstagram.
It took more than a decade for Meta, YouTube, and others to admit that these platforms needed to offer parents some modicum of control.
They moved slowly at first, but in the last few years, Meta has become particularly aggressive, even applying AI to identify likely teens on Instagram and then automatically shifting them into more limited accounts (yes, I’ve heard of adults who’ve been swept up in this automation, but then I have to ask: why does an AI think you’re a teen? What are you doing on there?).
What’s next
Meta is appealing in New Mexico and will surely appeal this case as well. But a loss like this could be the beginning of a landslide in which Meta, YouTube, TikTok, and others suffer more losses and have no choice but to rewrite their algorithms (what if they have to insert content they know you’ll dislike after every fifth post?), pause or limit auto-scrolling for everyone, and remove everyone under 18 from their platforms.
All of that is unlikely but not impossible. Still, it’s hard to imagine these platforms emerging unscathed. The sentiment has changed, and action will be warranted.
At the same time, they cannot afford to lose their teen user bases. Meta and YouTube need these younger users because they will eventually become their adult customers with buying power. Most of Meta’s revenue still comes from advertising, which is somewhat effective on kids and teens, but far more impactful for adults with money in their pockets.
Change is coming, but I can’t conceive yet how it will manifest; I just know it’s coming.
