Indian Researcher Uses Deep Learning To Shut Down Trolls And Fake Reviews

The world is more connected now than ever before. An opinion can fly across continents at light speed, and a revolution can be sparked by remote players in a matter of hours. Platform owners and policymakers have to walk a tightrope, as the fabric of society is at stake. And as technology keeps improving, fraudulent players keep discovering new methods to up the ante.

Most platform owners tussle with the after-effects of an aggravation. Identifying the perpetrators and discouraging them from carrying out illicit, immoral social engineering requires automated detection techniques.

Machine learning is already used to predict stock market calamities and other anomalies. Recommendation engines on e-commerce platforms predict users' likes and dislikes through methods such as collaborative filtering, which identifies similarities between the choices a user has made in the past and those of their peers.
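Collaborative filtering can be sketched in a few lines. The user names, products, and ratings below are invented for illustration; this is a minimal user-based variant that predicts a missing rating as a similarity-weighted average of peers' ratings, not the implementation any particular platform uses.

```python
from math import sqrt

# Toy user-product rating matrix (hypothetical data); a missing key
# means the user has not rated that product.
ratings = {
    "alice": {"phone": 5, "laptop": 3, "camera": 4},
    "bob":   {"phone": 4, "laptop": 3, "camera": 5},
    "carol": {"phone": 1, "laptop": 5},
}

def cosine_similarity(a, b):
    """Cosine similarity computed over the products both users rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(a[i] ** 2 for i in common))
    norm_b = sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def predict(user, item):
    """Predict a rating as a similarity-weighted average of peer ratings."""
    num = den = 0.0
    for peer, peer_ratings in ratings.items():
        if peer == user or item not in peer_ratings:
            continue
        sim = cosine_similarity(ratings[user], peer_ratings)
        num += sim * peer_ratings[item]
        den += sim
    return num / den if den else None

print(round(predict("carol", "camera"), 2))
```

Real systems work at far larger scale and typically factorise the rating matrix instead, but the core idea of borrowing opinions from similar users is the same.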

ML models have now risen to a state where they can devour tonnes of data and produce solutions and insights that enhance decision making.

Since predicting an anomaly and then making a prudent decision is crucial to discouraging unwanted social engineering, machine learning is a natural place to start in the pursuit of an uninterrupted exchange of ideas.

Srijan Kumar of Stanford University and his peers from Flipkart and Carnegie Mellon University introduced REV2 early last year to weed out noisy spammers from product reviews on retail platforms.

Since users' purchasing decisions are swayed by a product's rating, it is extremely important for platform owners to stop fake reviews from being over-publicised, undermining the perceived authenticity of the product and, eventually, its sales.

The researchers proposed that users and ratings have intrinsic, interdependent scores that indicate how trustworthy they are and how a product should be evaluated in light of them.

The REV2 algorithm is designed to calculate the intrinsic scores of users and their ratings based on their behaviour.
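The flavour of this mutual recursion can be sketched as follows: a user's fairness depends on the reliability of their ratings, a rating's reliability depends on its author's fairness and its agreement with the product's quality, and a product's quality is a reliability-weighted average of its ratings. The data, variable names, and update formulas below are a simplified illustration, not the actual REV2 formulation, which uses more careful update rules and priors.

```python
# Hypothetical toy data: (user, product, rating) triples, ratings in [-1, 1].
reviews = [
    ("u1", "p1", 1.0), ("u1", "p2", 1.0),
    ("u2", "p1", 1.0), ("u2", "p2", -1.0),
    ("u3", "p1", -1.0),  # lone dissenter on an otherwise well-liked product
]

users = {u for u, _, _ in reviews}
products = {p for _, p, _ in reviews}

fairness = {u: 1.0 for u in users}                   # user trustworthiness, [0, 1]
goodness = {p: 0.0 for p in products}                # product quality, [-1, 1]
reliability = {(u, p): 1.0 for u, p, _ in reviews}   # per-rating trust, [0, 1]

# Iterate the three interdependent definitions towards a fixed point.
for _ in range(50):
    for p in products:
        rs = [(reliability[(u, q)], r) for u, q, r in reviews if q == p]
        goodness[p] = sum(rel * r for rel, r in rs) / sum(rel for rel, _ in rs)
    for u, p, r in reviews:
        # A rating is reliable if its author is fair and it agrees with
        # the product's current goodness (|r - g| <= 2, so this stays in [0, 1]).
        reliability[(u, p)] = (fairness[u] + (1 - abs(r - goodness[p]) / 2)) / 2
    for u in users:
        rels = [reliability[(uu, p)] for uu, p, _ in reviews if uu == u]
        fairness[u] = sum(rels) / len(rels)

print({u: round(f, 2) for u, f in fairness.items()})
```

On this toy data, the user who consistently disagrees with the consensus ends up with a lower fairness score than the users who agree, which is the behaviour the paper relies on to surface spammers.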

Twitter has received a lot of flak recently for deplatforming a few critics based on the mood swings of a certain section of the platform.

Since people are now more troubled by others holding misaligned views, things take an aggressive turn with every tweet. The platform's administrators have to make a trade-off between keeping interactions healthy and the danger of shutting down voices simply because they didn't meet the threshold of some algorithm or some section of users didn't like them.

A lot of misinformation circulates, and it gets worse with the popularity of the entities involved in the discussion. Dialogues can start with casual nitpicking and transform into trolling and threats. This virtual wildfire decouples users from the truth, and they usually end up in their own echo chambers.

With the success of REV2, Kumar is now working on making the internet a better place for healthy interactions. His methods include extracting attributes from the social graph structure to characterise and mitigate the damage of disinformation.

REV2 handled fake reviews with a graph-mining collective classification algorithm. Now, to tackle multiple-account abuse in online discussions, Kumar proposes the first web-scale characterisation model, which statistically analyses user interaction graphs to predict online conflicts.

This technique is already in use by administrators at Reddit and Wikipedia. Its efficacy will be further verified when it is put to use on platforms like Twitter, mankind's newfound turf for warfare.
