How double-edged can a technology turn out to be?
On a web spanning the corners of the globe, search engines are the modern pathway to information and to people. But as time has gone by, they have also turned into carriers of false information.
The complex ranking algorithms in place on all the leading social media sites and search engines have been leveraged by those intent on malice, turning them into prolific tools for misinformation mongers. The underlying reason is simple: search engines learn to serve what you, and the multitude of others before you, have clicked on for the same topic.
Bringing this societal concern to the fore, a research paper from the University of Washington seeks to shed light on how the interplay between these algorithms and fickle human nature has been fostering misinformation, with corporations pulling in money along the way.
Before we go into the highlights, let us take a moment to understand what we mean by fake news.
“Fake news” is a term that has come to mean different things to different people, and it has evolved into more types than we care to count. At its heart, “fake news” refers to news stories that are false. The story itself may be fabricated, with no verifiable facts, sources, or quotes. Sometimes the stories are propaganda, intentionally designed to mislead the reader. Or, more identifiably, they may be “clickbait” – written purely for economic gain.
More sophisticated variants play to readers’ existing beliefs and biases, compelling them to connect dots where there may be none.
Let us look at how the research has aimed to dissect this malignant process.
Since users are fundamental to this whole cycle, we should first understand how search results go wrong in the first place.
The process gets underway with what is referred to as relevance feedback. When a user clicks on a particular search result, the search algorithm kicks into gear, learning that the clicked link is relevant for that search query.
This feedback forms the backbone of the search engine, giving the link higher precedence for that query in the future. Consequently, if enough people click on the same link enough times, the result accumulates strong relevance feedback, enabling the website to rank higher in search results for that query and related ones thereafter.
Once a link establishes this positive feedback loop, the website starts showing up more prominently on the search engine. This is where search engine optimization (SEO) techniques come in, exploiting people’s tendency to click on links shown higher up the results list.
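The feedback loop described above can be sketched in a few lines of code. This is a deliberately minimal illustration – the class and method names are invented for this example, and real search engines combine click signals with many other ranking features – but it shows how raw click counts alone can entrench whichever link gets clicked most, regardless of its accuracy:

```python
# Minimal sketch of click-based relevance feedback (illustrative only;
# real ranking systems use far richer signals than raw click counts).
from collections import defaultdict


class TinySearchIndex:
    def __init__(self):
        # clicks[query][url] counts how often users clicked url for query
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, query, url):
        """Relevance feedback: a click tells the engine this link is relevant."""
        self.clicks[query][url] += 1

    def rank(self, query, candidate_urls):
        """Rank candidates by accumulated click feedback, highest first."""
        return sorted(candidate_urls,
                      key=lambda url: self.clicks[query][url],
                      reverse=True)


index = TinySearchIndex()
# A sensational story draws five clicks; the accurate one draws a single click.
for _ in range(5):
    index.record_click("dog hero", "clickbait.example/shocking-dog-story")
index.record_click("dog hero", "news.example/pet-saves-owner")

print(index.rank("dog hero",
                 ["news.example/pet-saves-owner",
                  "clickbait.example/shocking-dog-story"]))
# the heavily clicked sensational link now ranks first
```

Nothing in this loop checks whether a clicked page is true; popularity simply compounds, which is exactly the opening that SEO-savvy misinformation sites exploit.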
Essentially, this misinformation problem is framed by an intricate duet between how a search algorithm is evaluated and how humans react to the headlines, titles, and snippets it generates. Search engines, judged primarily on the user engagement they drive, are fed by people’s appetite for the sensational and exciting. This is what lets misinformation seep in right from the outset.
After all, it is in the search engine companies’ best interests to readily give users the results they desire.
Any semblance of relevance goes sideways when users start clicking on funny dog videos while the genuinely relevant link was, for example, about a pet’s heroic deed to save its owner’s life.
How deep are we caught in this net of misinformation?
To test how well people discriminate between accurate information and misinformation, the researchers built an online game called “Google Or Not.” The game, which showed two sets of results for the same query, was simple in design: pick the set you find more reliable or more relevant.
Bear in mind that one of the two sets always contained one or two results that had been verified and labelled as misinformation. The game was made available publicly and advertised through various social media channels. Analysing 2,100 responses from over 30 countries, the researchers found, tellingly, that about half the time people picked as trustworthy the set containing the misinformation results.
In simple words, the conclusion of this experiment: close to half the time, people pick results infused with conspiracy theories and fake news.
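An analysis like the one above amounts to estimating a proportion from paired-choice responses. The sketch below is hypothetical – the data layout and field names are invented for illustration, not taken from the study – but it shows the basic tally and a standard confidence interval one might attach to a rate like “about half the time”:

```python
# Hypothetical sketch of tallying paired-choice responses; the record
# format here is invented, not the study's actual data schema.
import math


def misinformation_pick_rate(responses):
    """Return (rate, margin): the fraction of responses where the
    participant picked the set containing misinformation, plus a 95%
    normal-approximation margin of error for that proportion."""
    picked_misinfo = sum(1 for r in responses
                         if r["picked_set"] == r["misinfo_set"])
    n = len(responses)
    rate = picked_misinfo / n
    margin = 1.96 * math.sqrt(rate * (1 - rate) / n)
    return rate, margin


# toy data: 3 of 6 participants picked the set containing misinformation
toy = [{"picked_set": "A", "misinfo_set": "A"},
       {"picked_set": "B", "misinfo_set": "A"},
       {"picked_set": "A", "misinfo_set": "A"},
       {"picked_set": "B", "misinfo_set": "B"},
       {"picked_set": "A", "misinfo_set": "B"},
       {"picked_set": "B", "misinfo_set": "A"}]
rate, margin = misinformation_pick_rate(toy)
print(f"pick rate: {rate:.0%} ± {margin:.0%}")  # 50% on this toy sample
```

With ~2,100 real responses rather than six, the margin of error shrinks considerably, which is what makes a roughly 50% pick rate a meaningful finding rather than noise.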
Therein lies the monster. As more people pick these inaccurate and misleading results, the search engines adapt, learning that this is what people want.
On our side, we can vet content that plays on our emotions, seek out the context behind it, and, where possible, dig deeper into crowdsourced credibility signals. But since that is not always practical, the problem has not dissipated despite increased awareness and concern from social media companies and search engines alike.
Numerous examples of this proliferation of fake news can be seen in false content about COVID-19, which has likely had an impact on vaccination intentions. Take the storming of Capitol Hill on January 6, 2021, where misinformation about the 2020 presidential election almost certainly played its part in inciting the mob.
The Global Disinformation Index, an agency that rates the trustworthiness of 70,000 news websites, has claimed that European fake news sites earn around $75m a year in advertising revenue, much of it placed by Google.
Indeed, fake news and misinformation have been a persistent concern ever since major global events, like the infamous 2016 U.S. presidential election, brought them into the public eye.
It should also be considered that search engines often present results biased toward one subtopic, view, or perspective because of the way they compute relevance and measure user satisfaction. This is all the more probable, and dangerous, because embedding features that favour certain values over others on the basis of unevaluated data can cause real harm.
All in all, misinformation is an invisible enemy. Against this backdrop, search engines are minting money not only by selling ads, but also by tracking users and selling their data through real-time bidding. There is no doubt that awareness and pressure from the public have spurred organisations into action, but the fight looks never-ending.
And it can only be combated through a synergy between AI and the humans in control of it – weeding out as much misinformation as they can.
Stay tuned for more updates and tread carefully in the wilderness of fake news.