After yet more revelations that Facebook’s ad platform can be used for nefarious purposes, the social media company is assuring the public that it has rules in place that will prevent this from happening again. Until, you know, it happens again.
In today’s blog post, Facebook explained that its ads should be used for good and not evil, almost certainly a response to the most recent spate of news coverage of Facebook’s failures to stop the Russian government from trying to influence the outcome of the 2016 presidential election and to comply with fair housing laws.
“Advertising should be transparent,” said the company that hosted tens of thousands of pro-Trump and anti-Clinton ads bought by the Russian government-linked “Internet Research Agency” during the run-up to the 2016 presidential election. The ads were seen by an estimated 126 million Americans.
“Advertising should be safe and civil; it should not divide or discriminate,” explained a spokesperson for Facebook’s ad platform, which allowed investigative reporters from ProPublica to use the social media platform’s automated audience targeting system to purchase housing ads that excluded black, Hispanic, and Asian people. The reporters were also able to buy ads targeting an audience that included “Jew haters” and people whose “field of study” included “how to burn Jews.”
“We may not always get it right,” the company admitted, adding: “We’re always making improvements.” Perhaps it should make some improvements to its improvement-making system; only a week ago, ProPublica was once again able to buy discriminatory housing ads, more than a year after its first report and several months after Facebook claimed it was working very hard to prevent this from happening.
In what is surely a coincidence, Facebook also announced today that it is expanding its very good and beneficial automated suicidal thoughts detection system, which scans users’ posts and messages for warning signs and isn’t invasive or creepy at all.