The world's most used social network, Facebook, reported that it deactivated 3.2 billion false accounts between April and September, most of them managed by bots, and removed 11.4 million pieces of content that incited hatred.
The company headed by Mark Zuckerberg published on Wednesday the fourth edition of its report on compliance with community standards, detailing its progress in fighting false accounts and misinformation, illegal activity, and content it considers inappropriate.
The 3.2 billion false accounts deactivated in the second and third quarters of the year are more than double the 1.55 billion accounts deactivated in the same period of 2018.
In addition to the so-called "hate speech" content, 18.5 million posts were removed for containing child nudity and child sexual exploitation; 4.5 million related to suicide or self-harm; and 5.7 million that sought to harass other users.
This is the first time the Menlo Park, California-based company has included content related to suicide and self-harm in these metrics, and Facebook's vice president of Integrity, Guy Rosen, highlighted that 96.1% of it in the second quarter of the year and 97.1% in the third was proactively detected by the company before anyone reported it.
There is also Instagram data
Also for the first time, Facebook provided data on Instagram, which it owns, in this case related to child nudity and child sexual exploitation, regulated goods (specifically, illegal sales of firearms and drugs), suicide and self-harm, and terrorist propaganda.
On Instagram, a platform whose users are on average younger than Facebook's, the amount of content removed for child nudity and child sexual exploitation was significantly lower: 1.26 million.
Content related to suicide and self-harm totaled 1.68 million removals; content inciting terrorism, 240,000; and, finally, content advertising drugs and firearms, 3 million.
"The investment we have made in artificial intelligence over the past five years remains a key factor in addressing these problems. In fact, recent advances in this technology have helped us detect and remove content that violates our policies," Rosen said when presenting the report.