
Monday, December 09, 2019

Reprinted with permission from ProPublica.

In the wake of ProPublica’s report Thursday that Facebook advertisers could have directed pitches to almost 2,300 people interested in “Jew hater” and other anti-Semitic topics, the world’s largest social network said it would no longer allow advertisers to target groups identified by self-reported information.

“As people fill in their education or employer on their profile, we have found a small percentage of people who have entered offensive responses,” the company said in a statement. “…We are removing these self-reported targeting fields until we have the right processes in place to prevent this issue.”

Facebook had already removed the anti-Semitic categories — which also included “How to burn jews” and “History of ‘why jews ruin the world’” — after we asked the company about them earlier this week. Then, after our article was published, Slate reported that Facebook advertisers could target people interested in other topics such as “Kill Muslim Radicals” and “Ku-Klux-Klan.” Facebook’s algorithm automatically transforms people’s self-reported interests, employers and fields of study into advertising categories.

Because audiences in the hateful categories were “incredibly low,” the ad campaigns targeting them reached “an extremely small number of people,” Facebook said. Its statement didn’t identify the advertisers. Conceivably, those who might find it helpful to target anti-Semites could range from recruiters for far-right groups to marketers of Nazi memorabilia.

ProPublica documented that the anti-Semitic ad categories were real by paying $30 to target those groups with three “promoted posts” — in which a ProPublica article or post was displayed in their news feeds. Facebook approved all three ads within 15 minutes.

Facebook’s advertising has become a focus of national attention since it disclosed last week that it had discovered $100,000 worth of ads placed during the 2016 presidential election season by “inauthentic” accounts that appeared to be affiliated with Russia.

Like many tech companies, Facebook has long taken a hands-off approach to its advertising business. Unlike traditional media companies that select the audiences they offer advertisers, Facebook generates its ad categories automatically based both on what users explicitly share with Facebook and what they implicitly convey through their online activity.

Traditionally, tech companies have contended that it’s not their role to censor the internet or to discourage legitimate political expression. In the wake of the violent protests in Charlottesville by right-wing groups that included self-described Nazis, Facebook and other tech companies vowed to strengthen their monitoring of hate speech.

Facebook CEO Mark Zuckerberg wrote at the time that “there is no place for hate in our community,” and pledged to keep a closer eye on hateful posts and threats of violence on Facebook. “It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious,” he wrote.

 
