From Social Media to Sour Media in 2016

December 05, 2016
By: Doug Mohney

Lost all patience with rewritten news and misleading headlines? How many people have you un-friended on Facebook over the last twelve months? Been annoyed or harassed on Twitter for something you said that was taken out of context? 2016 is going down as the year social media went sour in a big way.

A loss of patience and courtesy in electronic communications is nothing new, with "flame wars" and "trolls" enabled by the fact that many people will say things in email or a blog post they would never say in person. Mailing lists and newsgroups (look them up, post-Millennial kids!) had the occasional bad actor who stood out among the crowd, but nothing on the scale of bitterness that we've seen recently.

The big problem: Individuals type whatever they immediately think in a stream of consciousness, without the social limits most would observe sitting across the table or speaking in a group setting. Filtering and self-conscious behavior – or just being polite – go out the window, be it an exchange on Facebook or John Podesta calling Bernie Sanders a "doofus" in a private email.

Newer factors souring social media include rewritten news, fake news, a self-confirming "echo chamber" effect, and the willingness of individuals to confront people they don't know. Rewritten news has been around for a while: a single story – well-researched or not – is deemed worthy of distribution by other news outlets, reworked, and presented as "news," sometimes republished without a link to the original story. One original story can generate thousands of tweets and social media reposts, but with little added value or commentary.

Fake news knows no political boundaries, with both left and right building a "story" around a fact that may or may not be true. Historically, fake news is nothing new. One can turn the clock back to the late 1800s and the era of Yellow Journalism to see parallels between the New York City newspapers of the day and today's websites. Advertisers are only now starting to look more carefully at how some of their marketing dollars flow into sites they might not want associated with their brands.

Image via Bigstock

Combine rewritten and fake news with near-immediate amplification through social media channels and you have an issue that is starting to be recognized by advertisers and by social media firms such as Facebook and Google. A single story – true or false – can rapidly reach thousands to millions of people through social media connections. Popular and shocking stories gain prestige and distribution over longer, more thoughtful pieces that may be more nuanced and accurate. We've trained ourselves to be participants in this unvirtuous cycle of information distribution without hitting the pause button to consider the implications of what is said and how valid the story may actually be.

We can thank the ability to easily leave "comments" for the final step toward a less civil society. Offended parties can immediately blast away at a viewpoint without research or consideration; after all, none of the usual face-to-face social limitations are there to slow down confrontation. Meanwhile, the flood of rewritten and fake news means that, at any given moment, "real" news can get lost in the waves of rewritten and fake storylines traveling across the Internet.

Yet there is no simple fix. Using editorial staff to filter stories has led to charges of real and imagined bias, but at some point hard limits on what is and isn't published and circulated will be imposed by advertisers unwilling to support publications they aren't comfortable with, as in the recent move away from Breitbart News.

Another duct-tape solution has been to simply turn off comments, cutting off trolling and extreme remarks at the price of reduced reader engagement. It is simpler than moderation, which requires a human being to read through comments for approval and "judge" them for appropriateness and tone.

Longer term, there are two possible solutions, one technical and one sociological. Artificial intelligence (AI) "bots" will likely evolve into filters and active moderators for news stories and comments, paring back rewritten news and checking sources and validity for the rest. Advertisers will pay for AI-based news filtering services so they don't end up supporting a story or news outlet that doesn't fit their brand.

More optimistically, today's wave of electronic yellow journalism and anger may be part of a cycle, with publishers and the public both swinging back toward more moderate and temperate behavior. The yellow journalism of the late 1800s faded into more substantial newspaper reporting that lasted until the onset of the Internet era. We may see a similar evolution in the electronic age over the next decade or so, with fake news pushed to the outskirts as more ad dollars flow into "real" news sites.

Edited by Maurice Nagle