Google offers antisemitic and racist ad targeting too


Image: Smith Collection/Gado/Getty Images

Facebook isn’t the only advertising giant enabling ad targeting around hateful terms and phrases.

Google, the world’s biggest advertising company, lets ad buyers target content around antisemitic and racist search terms like “Jewish parasite” and “Black people ruin everything,” BuzzFeed discovered this week.

The findings come as Facebook and Google have had to reckon with the role their huge platforms play in spreading misinformation and hate in the wake of a presidential election in which both ran rampant online.

Google and Facebook together control around 70 percent of all online ads, and much of the process that governs those ads is automated. That’s led to repeated headaches for both companies as they try to strike a balance between reining in their platforms’ uglier tendencies and stifling the free flow of information.

Worse yet, Google’s ad platform will offer up an algorithmically generated list of other offensive suggestions when such terms are entered.

The report comes on the heels of a ProPublica investigation in which the news site purchased and received approval for Facebook ads aimed at users who had self-reported interest in topics like “Jew hater,” “How to burn jews,” or “History of ‘why jews ruin the world.’”

Facebook cracked down on the problem hours later by removing targeting capabilities around some self-reported fields, including employer, education, and job title. The company blamed its automated systems for having created the categories based on what a relatively small number of users had put in their profiles.

Unlike Facebook’s, Google’s options arise from the billions of searches conducted on its site every day, and the ads appear when someone searches for the specific term, as opposed to automatically targeting individuals.

It’s obviously close to impossible for Google to police every single search term that’s funneled into its self-serve ad platform, but there are ways to better filter them with machine learning, as Google itself has demonstrated in the past.
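For a sense of what that kind of filtering can look like, here’s a minimal sketch (not Google’s actual system) of a keyword classifier trained on a tiny hand-labeled sample. The data, threshold, and `should_block` helper are all hypothetical; a production system would rely on far larger labeled corpora, bigger models, and human review.

```python
# Minimal sketch of an ML keyword filter; purely illustrative, not
# Google's actual system. Character n-grams help catch misspellings
# and spacing tricks that a plain blocklist would miss.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy hand-labeled sample (1 = block, 0 = allow); hypothetical data.
keywords = [
    "running shoes", "history of judaism", "holiday recipes",
    "jewish parasite", "why do jews ruin everything", "the evil jew",
]
labels = [0, 0, 0, 1, 1, 1]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(keywords, labels)

def should_block(keyword: str, threshold: float = 0.5) -> bool:
    """Flag a proposed ad keyword for rejection or human review."""
    return model.predict_proba([keyword])[0][1] >= threshold
```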

Google’s SVP of ads, Sridhar Ramaswamy, said in a statement that these particular offensive terms slipped through the cracks of its enforcement system.

“Our goal is to prevent our keyword suggestions tool from making offensive suggestions, and to stop any offensive ads appearing,” Ramaswamy said. “We have language that informs advertisers when their ads are offensive and therefore rejected. In this instance, ads didn’t run against the vast majority of these keywords, but we didn’t catch all these offensive suggestions.”

“That’s not good enough and we’re not making excuses. We’ve already turned off these suggestions, and any ads that made it through, and will work harder to stop this from happening again,” the statement continued.

BuzzFeed reported that every term it had used in its test campaign had subsequently been scrubbed, save for “blacks destroy everything.”

The ads, which also targeted searches for “the evil jew,” “jewish control of banks,” and “why do Jews ruin everything,” reportedly garnered 17 impressions before they were shut down.

Google faced a massive advertiser boycott earlier this year when it was discovered that ads were appearing on YouTube videos from Nazis, terrorists, and other hate groups. The search giant responded forcefully to that controversy with more vetting staff and automated filtering, but some YouTube creators complained that enforcement grew uneven and unaccountable.

This story has been updated with a statement issued by Google.

Read more: http://mashable.com/

Facebook’s generation of ‘Jew Hater’ and other advertising categories prompts system scrutiny


Facebook automatically generates categories advertisers can target, such as “jogger” and “activist,” based on what it observes in users’ profiles. Usually that’s not a problem, but ProPublica found that Facebook had generated anti-Semitic categories such as “Jew Hater” and “Hitler did nothing wrong,” which could be targeted for advertising purposes.

The categories were small — a few thousand people total — but the fact that they existed for official targeting (and in turn, revenue for Facebook) rather than being flagged raises questions about the effectiveness — or even existence — of hate speech controls on the platform. Although surely countless posts are flagged and removed successfully, the failures are often conspicuous.

ProPublica, acting on a tip, found that a handful of categories autocompleted themselves when their researchers entered “jews h” into the advertising category search box. To verify these were real, they bundled a few together and bought an ad targeting them, which indeed went live.

Upon being alerted, Facebook removed the categories and issued a familiar-sounding strongly worded statement about how tough on hate speech the company is:

We don’t allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes. However, there are times where content is surfaced on our platform that violates our standards. In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.

The problem occurred because people were listing “jew hater” and the like in their “field of study” category, which is of course a good one for guessing what a person might be interested in: meteorology, social sciences, etc. Although the numbers were extremely small, that shouldn’t be a barrier to an advertiser looking to reach a very limited group, like owners of a rare dog breed.
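To make that failure mode concrete, here’s a hypothetical reconstruction (not Facebook’s actual pipeline) of how free-text profile fields can become targetable categories: normalize the strings, count them, and expose anything above a minimum audience size. The function name and threshold are invented for illustration; the point is that nothing in this loop looks at what the text actually says.

```python
# Hypothetical sketch of auto-generating ad categories from a free-text
# profile field; not Facebook's real pipeline. A pure frequency threshold
# treats "jew hater" no differently from "meteorology".
from collections import Counter

def build_targeting_categories(field_of_study_entries, min_audience=500):
    """Expose a self-reported string as a category once enough profiles share it."""
    counts = Counter(entry.strip().lower() for entry in field_of_study_entries)
    return {text: n for text, n in counts.items() if n >= min_audience}

profiles = ["Meteorology"] * 800 + ["Social Sciences"] * 700 + ["jew hater"] * 600
print(build_targeting_categories(profiles))
# {'meteorology': 800, 'social sciences': 700, 'jew hater': 600}
```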

But as difficult as it might be for an algorithm to determine the difference between “History of Judaism” and “History of ‘why Jews ruin the world,’” it really does seem incumbent on Facebook to make sure an algorithm does make that determination. At the very least, when categories are potentially sensitive, dealing with personal data like religion, politics, and sexuality, one would think they would be verified by humans before being offered up to would-be advertisers.
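A crude version of that human-review gate could be as simple as the sketch below; the term list, substring matching, and routing labels are all assumptions made for illustration, not anything Facebook has described.

```python
# Illustrative sketch of a review gate for auto-generated ad categories;
# the term list and routing are invented, not a real Facebook mechanism.
# Crude substring matching deliberately over-flags: "History of Judaism"
# and "Jew Hater" are both held, and a human makes the final call.
SENSITIVE_TERMS = (
    "jew", "juda", "muslim", "islam", "christ",   # religion
    "republican", "democrat", "communist",        # politics
    "gay", "lesbian", "bisexual",                 # sexuality
)

def route_category(name: str) -> str:
    """Hold any category touching a sensitive topic for human review."""
    lowered = name.lower()
    if any(term in lowered for term in SENSITIVE_TERMS):
        return "hold_for_human_review"
    return "publish"

print(route_category("Jogger"))              # publish
print(route_category("History of Judaism"))  # hold_for_human_review
print(route_category("Jew Hater"))           # hold_for_human_review
```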

Facebook told TechCrunch that it is now working to prevent such offensive entries in demographic traits from appearing as addressable categories. Of course, hindsight is 20/20, but really — only now it’s doing this?

It’s good that measures are being taken, but it’s kind of hard to believe that there was not some kind of flag list that watched for categories or groups that clearly violate the no-hate-speech provision. I asked Facebook for more details on this, and will update the post if I hear back.

Update: As Harvard’s Joshua Benton points out on Twitter, one can also target the same groups via Google AdWords:

I feel like this is different somehow, although still troubling. You could put nonsense words into those keyword boxes and they would be accepted. On the other hand, Google does suggest related anti-Semitic phrases in case you felt like “Jew haters” wasn’t broad enough:

To me the Facebook mechanism seems more like a selection by Facebook of existing, quasi-approved (i.e. hasn’t been flagged) profile data it thinks fits what you’re looking for, while Google’s is a more senseless association of queries it’s had — and it has less leeway to remove things, since it can’t very well not allow people to search for ethnic slurs or the like. But obviously it’s not that simple. I honestly am not quite sure what to think.

Read more: https://techcrunch.com
