
Artificial Intelligence Tool Introduced to Detect and Remove Caste-Based Abuse from Social Media Platforms

The model was launched by Social Media Matters in association with Spectrum Labs

Today, social media platforms host a diverse range of social interactions, carrying information that runs from the personal and mundane to political opinion and community building. From cyberbullying to fake news, these platforms have also seen a rampant rise in harmful behaviour, and online communities have become fertile ground for groups organised around ethnicity or caste. To keep such behaviour at bay, Social Media Matters has partnered with Spectrum Labs to launch a Behaviour Identification Model that can detect caste discrimination within online communities.

The model, available through Spectrum’s behaviour identification solution, is designed to be flexible and fit into any workflow. Spectrum’s deployment options include a SaaS offering or an on-premise binary. The model is currently trained to detect caste discrimination within all forms of text data, including status updates, messages, tweets, and comments. Both deployments support a streaming API or a batch mode for data processing.
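To illustrate how text data might flow into such a deployment, here is a minimal Python sketch of a batch call to a behaviour-identification endpoint over HTTP. The endpoint URL, credential, field names, and response shape are assumptions for illustration only, not Spectrum’s actual API.

```python
# Hypothetical sketch of sending a batch of text items to a behaviour-identification
# endpoint. URL, credential, and payload shape are illustrative assumptions.
import requests

API_URL = "https://api.example.com/v1/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                         # placeholder credential


def classify_batch(texts):
    """Send a batch of status updates, comments, or messages for analysis."""
    payload = {"items": [{"id": i, "text": t} for i, t in enumerate(texts)]}
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume each result carries a label and a confidence score.
    return resp.json()["results"]


if __name__ == "__main__":
    sample = ["an innocuous status update", "a comment flagged by the community"]
    for result in classify_batch(sample):
        print(result)
```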

Spectrum’s behaviour models are all designed to be updated continually over time. Spectrum works with customers to iterate on the baseline model on a regular cadence (typically monthly) to ensure it flags the content each customer needs. This process ensures the results are customized for each customer, so they can trust them.

Detecting Caste Discrimination:

Real-Time: Recognize and respond to toxicity immediately before it evolves into an even bigger problem.

Multi-Language: Spectrum Labs has a patent-pending approach to international language support that allows the model to scale across regions.

Secure Deployment: The power to understand the community while meeting data privacy requirements.

The model results are surfaced in a way that customers can plug into existing moderation efforts. This includes webhooks into internal systems for customers to manage users (warn, suspend, ban, etc.), manage content (remove a post, limit who can see it, etc.), send alerts, send content for moderator review, and even pipe results into analytics platforms to see trends over time.
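As a rough illustration of how such webhooks could feed an existing moderation workflow, the sketch below shows a small receiver that routes flagged content to different actions by confidence score. The payload fields, thresholds, and helper functions are hypothetical, not Spectrum’s actual webhook format.

```python
# Minimal sketch of a webhook receiver that routes model results into an existing
# moderation workflow. Field names and thresholds are assumptions for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route("/moderation/webhook", methods=["POST"])
def handle_flag():
    event = request.get_json(force=True)
    user_id = event.get("user_id")
    content_id = event.get("content_id")
    confidence = event.get("confidence", 0.0)

    # Route by confidence: auto-remove clear violations, queue borderline cases.
    if confidence >= 0.9:
        remove_content(content_id)
        warn_user(user_id)
    elif confidence >= 0.5:
        send_to_moderator_queue(content_id)

    return jsonify({"status": "received"}), 200


def remove_content(content_id):
    print(f"Removing content {content_id}")        # stand-in for a platform-specific call


def warn_user(user_id):
    print(f"Warning user {user_id}")               # stand-in for a platform-specific call


def send_to_moderator_queue(content_id):
    print(f"Queued {content_id} for human review")  # stand-in for a review-queue call


if __name__ == "__main__":
    app.run(port=8080)
```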

Speaking on the launch, Amitabh Kumar, Founder of Social Media Matters, said, “Caste discrimination is one of the oldest evils still existing in Indian society. Sadly, it is also reflected in cyber spaces. Together with Spectrum, we have created an Artificial Intelligence tool that will help social media platforms like Facebook, TikTok, Twitter, and Instagram detect and remove caste-based abuse from their platforms. It will decrease the time taken for detection and reduce the stress human moderators face from constantly dealing with abuse. Initially, the model is trained to work with English, Hindi, and a Hindi-English mix, and we will continue to upgrade it further.”

Spectrum also offers a set of moderator tools through a UI called Guardian. This UI includes 4 main features: Moderation Queue, Automation Builder, Retraining, and Analytics. Results can either be piped into this Spectrum offering or plugged into existing moderation efforts.

Speaking on the collaboration, Justin Davis, CEO of Spectrum Labs, said, “The hardest part of building an AI model that can effectively detect caste discrimination online is really defining and understanding not only what caste discrimination is, but also what it is not. Ami and his brilliant team at Social Media Matters have dedicated themselves to raising awareness about injustice and discrimination in many forms, so we could not have asked for better partners: their insights and expertise helped us navigate the nuances, history, and politics of caste discrimination, to build a tool that can combat it effectively and inclusively. We were humbled and honored to work with them.”
