Homegrown microblogging platform Koo on Thursday announced the launch of new proactive content moderation features designed to give users a safer, more secure social media experience. The company said the in-house developed features can proactively detect and block any form of nudity or child sexual abuse material in under five seconds, label misinformation, and hide toxic comments and hate speech on the platform.


"Koo is committed to providing a safe and positive experience for its users. In order to provide users with a wholesome community and meaningful engagement, Koo has identified a few areas which have a high impact on user safety, i.e. Child Sexual Abuse Materials & Nudity, Toxic Comments and Hate Speech, Misinformation, Disinformation, and Impersonation, and is working to actively remove their occurrence on the platform. The new content moderation features are an important step towards achieving this goal," the company said in a statement.


Here Are the New Safety Features


No Nudity Algorithm:


Koo says its in-house ‘No Nudity Algorithm’ proactively and instantaneously detects and blocks any attempt by a user to upload a picture or video containing child sexual abuse material, nudity, or sexual content. Detection and blocking take less than five seconds. Users posting sexually explicit content are immediately blocked from posting content, being discovered by other users, being featured in trending posts, or engaging with other users in any manner.
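Koo has not published how the filter works internally, but a pre-publication filter of this kind typically scores each upload with an image classifier and blocks both the post and the account when the score crosses a threshold. A minimal Python sketch under that assumption; the `nudity_score` stub and the threshold value are hypothetical stand-ins, not Koo's actual model or settings:

```python
# Sketch of a proactive upload filter. The classifier is a stub; a real
# system would call a trained image-classification model here.

BLOCK_THRESHOLD = 0.9  # assumed confidence cutoff, not Koo's actual value


def nudity_score(image_bytes: bytes) -> float:
    """Hypothetical stand-in for an image classifier (0.0 = safe, 1.0 = explicit)."""
    return 0.0


def review_upload(image_bytes: bytes, account: dict, scorer=nudity_score) -> bool:
    """Score an upload before it is published. Returns True if allowed.

    On a block, the account also loses the abilities the article
    describes: posting, discovery, trending, and engagement.
    """
    if scorer(image_bytes) >= BLOCK_THRESHOLD:
        account.update(can_post=False, discoverable=False,
                       trending_eligible=False, can_engage=False)
        return False
    return True
```

Running the classifier synchronously in the upload path is what would keep the block inside the stated five-second window, at the cost of adding the model's latency to every upload.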


Toxic Comments and Hate Speech:


Koo actively detects and hides or removes toxic comments and hate speech in less than 10 seconds, so that they are not available for public viewing.
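Again, Koo has not described the mechanism, but hide-from-public-view moderation is commonly a score-and-act step run on each new comment. A small sketch under that assumption, with a toy keyword-based `toxicity_score` standing in for a real toxicity model:

```python
# Sketch of comment moderation: score each new comment and mark it hidden
# so it is never shown publicly. The scorer is a toy keyword check only.

HIDE_THRESHOLD = 0.5  # assumed cutoff, purely illustrative

TOXIC_TERMS = {"idiot", "moron"}  # illustrative word list, not a real one


def toxicity_score(text: str) -> float:
    """Toy scorer: fraction of words that appear on a block list."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in TOXIC_TERMS for w in words) / len(words)


def moderate_comment(comment: dict) -> dict:
    """Hide the comment from public view when it scores as toxic."""
    comment["hidden"] = toxicity_score(comment["text"]) >= HIDE_THRESHOLD
    return comment
```

Hiding rather than deleting, as the article notes Koo does, leaves the comment available for appeal or later review while keeping it out of public view.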


Violence:


Content containing excessive blood/gore or acts of violence is overlaid with a warning for users. 


MisRep Algorithm For Impersonation:


Koo’s in-house ‘MisRep Algorithm’ constantly scans the platform for profiles that use the content, photos, videos, or descriptions of well-known personalities, in order to detect impersonating profiles and block them, the company said. On detection, the pictures and videos of the well-known personality are immediately removed from the profile, and such accounts are flagged for future monitoring of bad behavior, it added.


Misinfo & Disinfo Algorithm:


Koo’s in-house ‘Misinfo & Disinfo Algorithm’ actively scans, in real time, all viral and reported fake news against public and private sources of fake news, to detect and label misinformation and disinformation on a post, thereby minimizing the spread of viral misinformation on the platform.


Koo's Co-founder Mayank Bidawatka said, "At Koo, our mission is to unite the world and create a friendly social media space for healthy discussions. We are committed to providing the safest public social platform for our users. While moderation is an ongoing journey, we will always be ahead of the curve in this area with our focus on it. Our endeavor is to keep developing new systems and processes to proactively detect and remove harmful content from the platform and restrict the spread of viral misinformation. Our proactive content moderation processes are probably the best in the world!"


Launched in 2020, Koo is one of the world's largest microblogging platforms, available in more than 20 languages. The platform encourages the free exchange of thoughts and ideas among users while adhering to the law of the land to remove content that violates the Guidelines.