Facebook’s Related Content Algorithm Might Have More Potential Than It Thinks

In this entry, Peter Adams, NLP’s senior VP for educational programs, reflects on the potential of algorithms and social media to enhance the accuracy of the information users encounter, consume and share.

Twice this week, Facebook did something remarkable for me: Its related articles suggestions effectively fact-checked a piece of misinformation in my news feed. I was genuinely surprised, and perhaps even a little ecstatic, to see that the algorithm that drives this function on the platform was capable of such an act of news literacy.

The first piece of misleading information was what marketers refer to as “branded content” or, more specifically, a branded viral video created by Gillette. The second was a news report from the “PBS NewsHour” that contained a false statement. In both cases, I found suggestions for related articles that gave me the whole story.

The feature hasn’t always worked so well. In fact, it had a somewhat rough start when it was first announced at the end of last year.

The move was part of the company’s larger goal of increasing the quality of information on the platform — to make Facebook a central place where users discover and share not just personal updates, memes and funny videos, but hard news too. Since people tend to be inherently interested in the things their friends are interested in (which explains a large part of Facebook’s success), why couldn’t the platform also provide people with their daily news by first highlighting the links their friends are sharing, then suggesting additional articles on the same subject? The strategy made sense … but it didn’t always work.

In late April, the related articles feature offered three stories that were all but patently false (and indisputably gauche) to users who clicked on an Associated Press report about a girl who handed the first lady her unemployed father’s resume at a White House event.

The Boston Globe (which carried the AP report and shared it on Facebook) followed up with a story about the backlash this incident caused and the broader concerns it raised about the related articles function. The report also questioned Facebook’s ability to handle other news-related projects, especially FB Newswire (which, ironically, was announced the same day the AP piece about the first lady was published).

At this point, I was comfortable in my conclusion that the related articles feature was maladroit — a noble idea whose algorithmic engine could never be discerning enough to work. I decided that it was unlikely to help Facebook become a hub for quality news and might even have the reverse effect by amplifying gossip, rumors and native advertisements instead.

Then, about five weeks later, this post showed up in my feed:

When I clicked the link (and of course I did), I watched what appeared to be Tampa Bay Rays third baseman Evan Longoria making an amazing catch to save a reporter from being hit by a ball during batting practice. It was an incredible moment, perfectly captured by the camera — in fact, it seemed a little too incredible to be true. But before I could fact-check the video myself, I returned to my feed to find this:

[Screenshot: fact-checking entries listed as related articles beneath the video post]

Of the three related links that now appeared beneath the post, two were entries from well-known fact-checking sites, both of which revealed the video to be a stunt staged by Gillette (whose brand appears in the background) and intended to go viral.

It was, indeed, a remarkable moment — a remarkable catch, even, just not one made on the field of play: Facebook put accurate information that corrected the shared link unmistakably in my way. I couldn’t miss it.

The next day, I encountered a short print piece that the “PBS NewsHour” had shared about locks on the Pont des Arts bridge in Paris.

The piece states that “an 8 foot section of the bridge collapsed under the weight of the countless number of ‘love locks,’” and I was immediately intrigued, imagining a whole section of the popular bridge giving way and people spilling into the Seine. But when I returned to my feed, Facebook had again offered me a related article that corrected the record:

The Gizmodo report (among others) clarified that, despite reports that a section of the bridge had collapsed, what had actually given way was merely a section of the chain-link fence that lines the sides of the bridge. Gizmodo included two shots of the collapsed section of fence that were tweeted from the scene:

[Tweeted photos of the collapsed section of fence]

This was the second time in two days that Facebook’s related articles had put information in my way that was more accurate than the link that appeared in my feed. This doesn’t mean that Facebook has necessarily fixed the related articles problems that occurred in April. But it does mean that there’s promise — both in this specific feature and in algorithmic curation generally.

To help realize this promise, I have three suggestions for Facebook's developers:

  1. Create high standards for sites to be included in the feature (much the same way Google News does).
  2. Make the algorithm smart enough to detect controversy — for example, a string of comments containing key argumentative words — and suggest standards-based information to help that discussion remain rooted in the facts (a rough sketch of this idea follows the list).
  3. Use the feature to help pop online filter bubbles by enabling it to detect opinion pieces, then suggest other opinions on the same subject from a variety of perspectives.
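To make the second suggestion a bit more concrete, here is a minimal sketch of what keyword-based controversy detection might look like. Everything in it is an assumption of mine: the marker list, the 30 percent threshold and the function names are illustrative, not anything Facebook has disclosed about how its feature works.

```python
# Illustrative sketch of suggestion #2: flag a comment thread as contentious
# when enough comments contain argumentative keywords. The marker list and
# threshold below are hypothetical, not from any real Facebook API.

ARGUMENT_MARKERS = {
    "wrong", "false", "fake", "hoax", "actually",
    "misleading", "debunked", "source?",
}

def controversy_score(comments: list[str]) -> float:
    """Return the fraction of comments containing an argumentative keyword."""
    if not comments:
        return 0.0
    flagged = sum(
        any(marker in comment.lower() for marker in ARGUMENT_MARKERS)
        for comment in comments
    )
    return flagged / len(comments)

def should_suggest_fact_checks(comments: list[str], threshold: float = 0.3) -> bool:
    """Trigger standards-based article suggestions once a thread turns contentious."""
    return controversy_score(comments) >= threshold

# Example: a thread arguing over the Longoria video crosses the threshold.
thread = [
    "Amazing catch!!",
    "This is fake, it was debunked ages ago.",
    "Actually it's a staged Gillette ad.",
    "Source? It looks real to me.",
]
print(should_suggest_fact_checks(thread))  # True: 3 of 4 comments are flagged
```

A production system would of course need far more than naive substring matching (word boundaries, multiple languages, sarcasm), but even a crude signal like this could tell the platform when a conversation has become a dispute over facts rather than a difference of taste.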

Yet even with these improvements, there is a limit to what algorithms can do for us: a point at which every citizen and consumer of information must take responsibility for building news-literate habits and for assessing for themselves the credibility of the information they encounter. But we shouldn’t underestimate the role that digital tools can play in increasing our collective news literacy and civic engagement, particularly when they bring social networks together with information that matters. With the right values undergirding their development, the digital platforms through which we access, consume and share information can play a significant role in making the challenge of knowing what to believe easier for everyone.

Peter Adams tweets about news literacy, education and technology at @PeterD_Adams