Facebook to Open-Source 2 Algorithms to Identify Harmful Content

Published August 4th, 2019 - 09:30 GMT
The duo of technologies stores files as digital hashes and compares them with known examples of harmful content
Highlights
The algorithms have been released on GitHub, and Facebook hopes that developers and other companies will make use of them

Facebook has announced that it will open-source two algorithms it uses to identify abusive or harmful content on its platform.


PDQ (for photos) and TMK+PDQF (for videos) are the two technologies Facebook uses to identify child sexual exploitation, terrorist propaganda, and graphic violence.

Both technologies store files as digital hashes and compare them with known examples of harmful content, The Verge reported.

The algorithms have been released on GitHub, and Facebook hopes that developers and other companies will use them to identify harmful content and add it to a shared database to curb online violence.
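The matching idea described above can be sketched in a few lines. This is an illustrative example, not Facebook's actual PDQ implementation: perceptual hashes like PDQ are fixed-length bit strings, and two files are treated as a match when the Hamming distance between their hashes falls below some threshold. The hash values and threshold here are hypothetical, chosen only for demonstration.

```python
# Illustrative sketch of perceptual-hash matching (not Facebook's PDQ code).
# A candidate file's hash is compared bit-by-bit against a database of
# hashes of known harmful content; a small Hamming distance means the
# files are visually near-identical.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length hex-encoded hashes."""
    a = int(hash_a, 16)
    b = int(hash_b, 16)
    return bin(a ^ b).count("1")

def find_matches(candidate: str, known_hashes: list, threshold: int = 4) -> list:
    """Return every known hash within `threshold` bits of the candidate."""
    return [h for h in known_hashes
            if hamming_distance(candidate, h) <= threshold]

# Hypothetical 64-bit hashes for illustration; real PDQ hashes are longer.
known = ["f0f0f0f0f0f0f0f0", "0123456789abcdef"]
print(find_matches("f0f0f0f0f0f0f0f1", known, threshold=4))
# The first hash differs by only one bit, so it is reported as a match.
```

Sharing the hash database rather than the images themselves is what makes cross-company cooperation practical: companies can flag matching content without ever exchanging the harmful files.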
