On Thursday, the microblogging platform Koo introduced a set of new proactive content moderation features aimed at enhancing user safety and improving the overall social media experience.
The new features can detect and remove nudity and child sexual abuse material in under five seconds, label misinformation, and hide toxic comments and hate speech, according to a statement released by Koo.
Positioning itself as a serious competitor to Twitter, Koo emphasizes its dedication to inclusivity and a language-first approach, with the aim of providing a safe and welcoming environment for its users.
In the announcement, Koo reaffirmed its commitment to a secure social media environment, pointing to the impact of child sexual abuse material, toxic comments, misinformation, and impersonation on user safety. The company described the new moderation features as a significant step toward these goals.
Mayank Bidawatka, co-founder of Koo, stated that the platform’s objective is to foster a friendly social media atmosphere conducive to constructive discourse.
“Although moderation is an ongoing effort, we strive to remain at the forefront of this domain. Our focus is on continuously developing new methodologies to proactively identify and eliminate harmful content and curb the dissemination of misleading information. I believe our proactive moderation system is among the best in the world,” Bidawatka remarked.
Koo’s in-house ‘No Nudity Algorithm’ detects and blocks the upload of images or videos containing child sexual abuse material or nudity in under five seconds.
Users attempting to share sexually explicit content will be immediately banned from posting, discovering other users’ content, appearing in trending posts, or engaging with others on the platform.
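Koo has not published technical details of how this gate works. As a rough illustration only, a pre-upload check of this kind might look like the Python sketch below, where classify_media() and restrict_account() are hypothetical placeholders:

```python
# Illustrative sketch only: Koo has not disclosed its implementation.
# classify_media() and restrict_account() are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class UploadDecision:
    allowed: bool
    reason: str | None = None

def classify_media(media: bytes) -> str | None:
    """Stand-in for a trained image/video classifier; returns a
    violation label such as 'nudity' or 'csam', or None if clean.
    The article cites a budget of under five seconds per check."""
    return None

def restrict_account(user_id: str) -> None:
    """Stand-in for blocking posting, discovery, trending placement,
    and engagement for the offending account."""
    ...

def check_upload(media: bytes, user_id: str) -> UploadDecision:
    label = classify_media(media)
    if label in ("nudity", "csam"):
        # Reject the upload and restrict the account immediately.
        restrict_account(user_id)
        return UploadDecision(allowed=False, reason=label)
    return UploadDecision(allowed=True)
```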
The platform’s safety tools also detect and hide or remove toxic comments and hate speech within ten seconds, keeping them out of public view.
Content that features extreme violence or graphic material is accompanied by warnings for user discretion.
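Koo has likewise not described how these tools are built. A simplified comment-moderation pass might resemble the following sketch, in which the toxicity_score() classifier and the numeric thresholds are assumptions:

```python
# Illustrative sketch; toxicity_score() and the thresholds are assumptions.
def toxicity_score(text: str) -> float:
    """Stand-in for a trained text classifier returning a 0.0-1.0 score."""
    return 0.0

def moderate_comment(text: str) -> str:
    """Decide comment visibility: remove clear hate speech,
    hide borderline-toxic comments, show everything else."""
    score = toxicity_score(text)
    if score >= 0.9:
        return "remove"
    if score >= 0.7:
        return "hide"
    return "show"

def discretion_label(tags: set[str]) -> str | None:
    """Attach a warning to posts tagged as violent or graphic."""
    if tags & {"extreme_violence", "graphic"}:
        return "viewer-discretion warning"
    return None
```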
Koo’s ‘MisRep Algorithm’ examines the platform for impersonated accounts that misuse images, videos, or descriptions of famous personalities, promptly removing such content and flagging the accounts for future review.
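As a hypothetical illustration of this kind of scan, not a description of Koo’s actual system, an impersonation sweep might be structured as follows, with matches_known_figure() standing in for whatever matching logic the platform uses:

```python
# Illustrative sketch; the registry and matching logic are hypothetical.
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    display_name: str
    bio: str
    avatar: bytes

def matches_known_figure(profile: Profile) -> bool:
    """Stand-in for matching an avatar, name, or bio against a
    registry of notable personalities (e.g. via image hashing)."""
    return False

def scan_for_impersonation(profiles: list[Profile]) -> list[str]:
    """Return user IDs flagged for review; in a real system the
    misused images, videos, or descriptions would also be removed."""
    return [p.user_id for p in profiles if matches_known_figure(p)]
```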
Additionally, the ‘Misinfo and Disinfo Algorithm’ actively scans for viral and reported false information using both public and private sources, labeling misinformation and disinformation as it surfaces. This measure aims to minimize the spread of misleading content on the platform.
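A label-on-match flow of this kind could be sketched as follows; fact_check_sources() is a hypothetical stand-in for the public and private feeds the statement mentions:

```python
# Illustrative sketch; fact_check_sources() is a hypothetical lookup.
def fact_check_sources(claim: str) -> bool:
    """Stand-in for querying public and private fact-checking feeds;
    returns True if the claim matches known false information."""
    return False

def label_post(text: str, is_viral: bool, is_reported: bool) -> str | None:
    """Label viral or user-reported posts that match known misinformation."""
    if (is_viral or is_reported) and fact_check_sources(text):
        return "misinformation"
    return None
```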