In a blog post yesterday, social media giant Facebook said that it will use artificial intelligence (AI) to find and remove terrorist content before other users see it, after the platform was criticised for not doing enough to tackle extremism. Homer Strong, Director of Data Science at Cylance, commented below.
Homer Strong, Director of Data Science at Cylance:
“Overall this direction is promising. A major issue with using humans to provide ground truth for AI is that humans are not perfect either. There need to be processes for evaluating human judgement in parallel with machine judgement. Otherwise the AI can end up learning the subjectivities of individual reviewers, distracting it from learning properly.
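One common way to evaluate human judgement in parallel, as Strong suggests, is to measure agreement between independent reviewers labelling the same content. The sketch below is illustrative only (the labels and reviewers are hypothetical, not Facebook's actual process); it computes Cohen's kappa, which corrects raw agreement for the agreement expected by chance:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Agreement between two reviewers, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Fraction of items where the two reviewers agree outright
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance, from each reviewer's label frequencies
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical reviewers labelling the same ten posts
a = ["extremist", "ok", "ok", "extremist", "ok", "ok", "ok", "extremist", "ok", "ok"]
b = ["extremist", "ok", "extremist", "extremist", "ok", "ok", "ok", "ok", "ok", "ok"]
print(round(cohen_kappa(a, b), 2))  # → 0.52
```

A kappa near 1 indicates reviewers apply the policy consistently; a low kappa is a warning that the "ground truth" fed to the AI reflects individual subjectivity rather than a shared standard.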
Both the confidence scores and the decisions of even a sufficiently sophisticated AI can be manipulated using adversarial learning techniques. A terrorist who is blocked by Facebook is more likely to switch to another platform than to bypass the AI, but Facebook can never completely remove terrorist content.”
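To make the evasion point concrete, here is a minimal sketch of the idea, assuming a naive keyword filter rather than Facebook's real classifier. The blocklist and homoglyph substitutions are illustrative assumptions; real adversarial attacks on learned models are more sophisticated, but the principle is the same: small input changes that preserve meaning for humans can flip the model's decision.

```python
# Toy keyword filter; BLOCKLIST is an illustrative assumption,
# not any real platform's policy.
BLOCKLIST = {"attack", "bomb"}

def naive_filter(text):
    """Return True if the text should be blocked."""
    return any(word in BLOCKLIST for word in text.lower().split())

def evade(text):
    # Swap Latin letters for visually similar Cyrillic homoglyphs,
    # so the text looks identical to a human but not to the filter.
    homoglyphs = str.maketrans({"a": "\u0430", "o": "\u043e"})
    return text.translate(homoglyphs)

msg = "plan the attack"
print(naive_filter(msg))         # True: blocked
print(naive_filter(evade(msg)))  # False: obfuscated text slips through
```

Defending against this kind of evasion (e.g. by normalising homoglyphs before classification) is an arms race, which is why detection can reduce but never completely eliminate such content.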