Thursday, January 28

YouTube unveils new anti-hate weapons to better protect videographers

YouTube is rolling out new features to fight disrespectful, toxic, and hateful comments more effectively, a way for the company to shield videographers from the uglier fringes of their communities.

YouTube on a smartphone

YouTube is pulling out all the stops to take down hateful comments as quickly as possible, and also to dissuade users from posting them in the first place.

An endemic problem on the web, openly hostile, disrespectful, or downright toxic comments are legion. Among the platforms most affected by this scourge, YouTube has announced the gradual rollout of several features with a dual objective: to deter haters from posting hateful comments, and to protect videographers with a moderation filter designed to spare them from reading this type of content.

YouTube also plans to conduct a survey of creators in 2021 to gather data on why some videographers are more affected by online hatred and harassment than others. The main novelty, however, is the appearance of a warning message when a user is about to post an offensive comment. The user will then be able to review the community guidelines, edit the comment (via a prominent button), or post it anyway.

Online hate: YouTube tackles the problem head on (or at least tries to)

With this alert, YouTube's aim is above all deterrence, adding friction before the publication of a potentially hateful comment. The idea is loosely borrowed from Instagram, which implemented a similar system last year. Instagram has since indicated that the approach helped reduce harassment in its comments, though without providing precise figures. As TechCrunch reports, this type of measure is arriving late on social networks: while such prompts can effectively discourage inappropriate comments and help curb harassment, they can also lower engagement rates, an outcome dreaded by YouTube, Instagram, and many others.

In YouTube's case, the alert system will rely on algorithms that learn which types of comments are disrespectful by analyzing those most often reported by users. As with any AI-driven system, it should improve over time.

Here is the alert that will appear when a hateful comment is about to be posted. // Credit: YouTube via TechCrunch

At the same time, YouTube intends to offer videographers a filter that spares them from reading hateful content. It will be available through the YouTube Studio channel-management tool and, once activated, will hide hurtful comments that have already been reported. Until now, comment moderation has in some cases required the intervention of the videographers themselves, exposing them to potentially offensive messages, something YouTube wants to avoid.

These measures will be supplemented, the service promises, by an overhaul of its moderation process to make the handling of reported comments faster and more efficient. The platform says the number of hateful comments deleted has increased 46-fold since 2019. We also learn that more than 1.8 million channels were terminated in the last quarter, 54,000 of them for hate speech.
