Anyone who was around the early days of social media remembers the horrible content that circulated. Our feeds are better now, in no small part due to the millions of dollars spent by large corporate social media networks like Facebook and Twitter on content moderation.
But who are the people, the actual individuals, moderating the content and protecting billions of people from the trauma of graphic atrocities and abuse? And are they being fairly compensated and taken care of if they develop PTSD, depression and a litany of other mental health conditions related to content moderation?
It goes without saying that it's important to know that the person you're talking to is actually who you think they are. That is why Twitter created the infamous verified blue check mark: People wanted assurance that the Twitter account of a particular government agency or their favorite celebrity was legit and not some dude in a basement impersonating them.
As we move on from this particular walled garden, knowing who you're actually talking to remains important, maybe even more so. On the fediverse (where Mastodon lives), a plethora of users flaunt blue check marks, but without a central verification authority like Twitter's, those marks lack the value of the original incarnation. Even the third parties that will "verify" you depend on your Twitter verification or presence.
I’ve been online long enough to remember how the internet was before the Eternal September. I also remember being surprised when I first saw an ad online—a banner ad on Yahoo, around 1995 or 1996. Since that time, advertising dollars have become the engine of corporate social networks.
Twitter alone made about $4.5 billion from ad services in 2021, about 89% of its total revenue. The so-called “public town square” is actually a walled garden that persuaded 217 million people to come inside and now sells their attention to the highest bidder.