Could letting social media users rate accuracy help stop misinformation?

When fighting the spread of misinformation, social media platforms typically put most users in the passenger seat. Platforms often use machine learning algorithms or human fact-checkers to flag false or misleading content for users.

“Just because this is the status quo doesn’t mean it’s the right way or the only way to do it,” says Farnaz Jahanbakhsh, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

She and her colleagues conducted a study in which they put that power in the hands of social media users instead.

They first surveyed people to learn how they avoid or filter misinformation on social media. Using their findings, the researchers developed a prototype platform that allows users to rate the accuracy of content, indicate which users they trust to rate accuracy, and filter posts that appear in their feed based on those ratings.

Through a field study, they found that users were able to effectively evaluate misleading posts without receiving any prior training. Additionally, users appreciated the ability to rate posts and view ratings in a structured way. The researchers also saw that participants used the content filters differently; for example, some blocked all misinformation while others used the filters to seek out such articles.

This work shows that a decentralized approach to moderation can lead to higher content trustworthiness on social media, says Jahanbakhsh. This approach is also more efficient and scalable than centralized moderation schemes, and can attract users who distrust platforms, she adds.

“A lot of disinformation research assumes that users can’t decide what’s true and what’s not, and so we have to help them. We didn’t see that at all. We saw that people actually do scrutinize content and also try to help each other. But these efforts are not currently supported by the platforms,” she says.

Jahanbakhsh wrote the paper with Amy Zhang, assistant professor at the University of Washington Allen School of Computer Science and Engineering, and senior author David Karger, professor of computer science at CSAIL. The research will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.

Fighting misinformation

The spread of online misinformation is a widespread problem. However, current methods that social media platforms use to flag or remove misleading content have drawbacks. For example, when platforms use algorithms or fact-checkers to rate posts, this can create tension with users who interpret those efforts as, among other things, infringing on freedom of expression.

“Sometimes users want misinformation to appear in their feed because they want to know what their friends or family are exposed to, so they know when and how to talk to them about it,” Jahanbakhsh adds.

Users often try to evaluate and flag misinformation on their own, and they try to help each other by asking friends and experts to help them understand what they are reading. But these efforts can backfire because they are not supported by platforms. A user may leave a comment on a misleading post or react with an angry emoji, but most platforms consider those actions signs of engagement. On Facebook, for example, this could mean that the misleading content would be shown to more people, including the user’s friends and followers – the exact opposite of what this user wanted.

To overcome these challenges, the researchers sought to create a platform that gives users the ability to provide and view structured accuracy ratings on posts, indicate others they trust to rate posts, and use filters to control the content displayed in their feed. Ultimately, the researchers’ goal is to make it easier for users to help each other assess misinformation on social media, which reduces the workload for everyone.

The researchers began by surveying 192 people, recruited through Facebook and a mailing list, to see whether users would value these features. The survey revealed that users are hyper-aware of misinformation and try to track and report it, but fear that their ratings could be misinterpreted. They are skeptical of platforms’ efforts to rate content for them. And while they would like filters that block untrustworthy content, they wouldn’t trust filters operated by the platforms.

Using these insights, the researchers built a Facebook-like prototype platform, called Trustnet. On Trustnet, users post and share real, full news articles and can follow each other to see content others post. But before a user can post any content on Trustnet, they must rate that content as accurate or inaccurate, or question its veracity, which will be visible to others.

“The reason people share misinformation is usually not because they don’t know what’s true and what’s false. Rather, at the time of sharing, their attention is misdirected to other things. If you ask them to evaluate the content before sharing it, it helps them be more discerning,” she says.

Users can also select trusted individuals whose content ratings they will see. This selection is private, in case they follow someone they are connected to socially (perhaps a friend or family member) but whom they wouldn’t trust to rate content. The platform also offers filters that let users customize their feed based on how posts have been rated and by whom.
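For illustration, here is a minimal sketch of how this kind of trust-scoped filtering could be modeled. It is a simplified assumption about the data flow, not Trustnet’s actual code; the names (Post, Assessment, filter_feed) and the data model are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class Assessment(Enum):
    ACCURATE = "accurate"
    INACCURATE = "inaccurate"
    QUESTIONED = "questioned"


@dataclass
class Post:
    author: str
    url: str
    # Structured accuracy ratings: rater username -> their assessment.
    ratings: dict[str, Assessment] = field(default_factory=dict)


def filter_feed(posts, trusted_raters, mode="hide_inaccurate"):
    """Decide which posts a user sees, using only ratings from raters they trust."""
    visible = []
    for post in posts:
        trusted = [a for rater, a in post.ratings.items() if rater in trusted_raters]
        flagged_false = any(a is Assessment.INACCURATE for a in trusted)
        if mode == "hide_inaccurate" and flagged_false:
            continue  # some users block content their trusted raters marked inaccurate
        if mode == "only_inaccurate" and not flagged_false:
            continue  # others invert the filter to seek out such content
        visible.append(post)
    return visible


# Example: the reader trusts Bob's ratings but not Carol's.
posts = [
    Post("carol", "https://example.com/story-1", {"bob": Assessment.INACCURATE}),
    Post("dave", "https://example.com/story-2", {"carol": Assessment.INACCURATE}),
]
print([p.url for p in filter_feed(posts, trusted_raters={"bob"})])
# -> ['https://example.com/story-2']
```

The point the sketch mirrors is that only ratings from people a user has explicitly chosen to trust affect their feed, and the same filter can be inverted for users who would rather seek out flagged content than hide it.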

Testing Trustnet

After the prototype was complete, the researchers conducted a study in which 14 individuals used the platform for one week. They found that users could effectively evaluate content despite receiving no training, often drawing on their expertise, the source of the content, or the logic of an article. They could also use filters to manage their feeds, although they used the filters differently.

“Even in such a small sample, it was interesting to see that not everyone wanted to read their news the same way. Sometimes people wanted to have misinformation posts in their feeds because they saw benefits to it. This shows that this agency is now missing from social media, and it should be returned to users,” she says.

Users sometimes struggled to rate content when it contained multiple claims, some true and some false, or when the headline and article were disconnected. This points to the need to give users more rating options — perhaps stating that an article is true but misleading, or that it contains a political slant, she says.

Because Trustnet users sometimes struggled to rate articles in which the content did not match the headline, Jahanbakhsh launched another research project to create a browser extension that lets users modify news headlines to be more aligned with the content of the article.

While these results show that users can play a more active role in the fight against misinformation, Jahanbakhsh cautions that giving users this power is not a panacea. For one, this approach could create situations where users only see information from like-minded sources. However, filters and structured assessments could be reconfigured to help alleviate that problem, she says.

In addition to researching Trustnet improvements, Jahanbakhsh wants to study methods that could encourage people to read content ratings from those with different viewpoints, perhaps through gamification. And since social media platforms may be reluctant to make changes, she’s also developing techniques that allow users to post and view content ratings through normal web browsing, rather than on a platform.

This work was supported, in part, by the National Science Foundation.

“Understanding how to combat disinformation is one of the most important issues for our democracy today. We have largely failed to find technical solutions at scale. This project offers a new and innovative approach to this critical problem that shows considerable promise,” says Mark Ackerman, George Herbert Mead Collegiate Professor of Human-Computer Interaction at the University of Michigan School of Information, who was not involved with this research. “The starting point for their study is that people naturally understand information through people they trust in their social network, and so the project uses trust in others to assess the accuracy of information. This is what people naturally do in social environments, but technical systems currently do not support it well. Their system also supports reliable news and other information sources. Unlike platforms with their opaque algorithms, the team’s system supports the kind of information evaluation that we all do.”

Republished with permission from MIT News. Read the original article.
