
For the past few years, false or “fake” news has been a global issue, and with the rise of widely available artificial intelligence technology, it poses a greater danger than ever.

False information can lead to actual harmful consequences, so news, social media, and government organizations have adopted new strategies for dealing with the phenomenon, including putting greater emphasis on fact-checking and flagging misleading posts in order to provide the important context audiences need.

But given the sheer flood of misinformation, how can mitigation efforts be focused on the areas where it is likely to do the most public harm? Research from Binghamton University's School of Management (SOM) proposes a machine learning framework, with expanded use of blockchain technology, to combat this phenomenon.

Thi Tran, who led the research, explains the thinking behind it: “We’re most likely to care about fake news if it causes harm that impacts readers or audiences. If people perceive there’s no harm, they’re more likely to share the misinformation… If we have a systematic way of identifying where misinformation will do the most harm, that will help us know where to focus on mitigation.”

According to Techxplore, Tran’s research proposes machine learning systems that would estimate how much harm a piece of content is likely to cause its audience, so that efforts can focus on the worst offenders. The framework would use data and algorithms to spot indicators of misinformation and use those examples to inform and improve the detection process.

The system would also draw on the characteristics of users with prior experience of or knowledge about fake news to help build a “harm index,” which would reflect the severity of the possible harm to a person in a given context if they were exposed to and victimized by the fake news.
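The article does not specify how the harm index is computed, but the idea of scoring content by likely harm and triaging the worst offenders can be sketched in a few lines. Everything below is an illustrative assumption: the feature names (`spread_potential`, `audience_vulnerability`, `content_severity`), the weights, and the linear scoring formula are hypothetical, not Tran's actual model.

```python
# Hypothetical sketch of a "harm index" triage step.
# All feature names, weights, and the scoring formula are illustrative
# assumptions -- the research's actual model is not described in the article.

def harm_index(spread_potential, audience_vulnerability, content_severity,
               weights=(0.3, 0.4, 0.3)):
    """Combine three scores in [0, 1] into a single harm estimate in [0, 1].

    spread_potential       -- how widely the item is likely to circulate
    audience_vulnerability -- how susceptible the exposed audience is
    content_severity       -- how damaging the claim is if believed
    """
    w1, w2, w3 = weights
    return (w1 * spread_potential
            + w2 * audience_vulnerability
            + w3 * content_severity)

def triage(items, threshold=0.6):
    """Return (score, name) pairs for items whose estimated harm exceeds
    the threshold, worst first, so mitigators see the likely worst
    offenders at the top of the queue."""
    scored = [(harm_index(*features), name) for name, features in items]
    return sorted((s, n) for s, n in scored if s > threshold)[::-1]

# Example: two hypothetical items with (spread, vulnerability, severity) scores.
queue = [
    ("viral-health-claim", (0.9, 0.8, 0.9)),
    ("obscure-satire",     (0.1, 0.2, 0.1)),
]
flagged = triage(queue)  # only the high-harm item clears the threshold
```

The design choice here, consistent with the article's framing, is that detection alone is not the goal: the score exists to rank content so that limited fact-checking effort goes where the harm would be greatest.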

Tran further explains that, based on the information gathered, the machine learning system could help fake news mitigators identify which messages are likely to be most damaging if allowed to spread unchallenged.

“The research model I’ve built out allows us to test different theories and then prove which is the best way for us to convince people to use something from blockchain to combat misinformation,” Tran said. He also suggested surveying 1,000 people, both fake news mitigators and content consumers: presenting three existing blockchain systems and gauging the participants’ willingness to use them in different scenarios.

“I hope this research helps us educate more people about being aware of the patterns, so they know when to verify something before sharing it and are more alert to mismatches between the headline and the content itself, which would keep the misinformation from spreading unintentionally,” Tran concluded.