“We have zero tolerance for bad actors,” says Google’s Head of Trust & Safety

At the recently held ‘Grow with Google’ event in Singapore, Arjun Narayan, the company’s Head of Trust & Safety, based at its Asia-Pacific headquarters in the city, was peppered with questions about data security, particularly in light of the developments at Facebook. Narayan said that Google is trying to reconcile its own values while making sure that all three parts of its ecosystem – the consumer, the creator or publisher, and the advertiser – see commercial value and feel safe and secure, because “we don’t take that lightly as there are enough choices and competition”. The executive spoke to a group of journalists on the sidelines of the event.

What exactly is Google doing to ensure safety of the ecosystem?

From an advertiser perspective, we have launched several different advertiser policies in response to the trends we have seen. For example, we saw a trend in the ticket re-seller vertical where there was unfair pricing, and we thought transparency was good for our consumers, so we tightened rules on ticket re-seller advertising. On the publisher side, we launched a new technology called ‘page level enforcement’. This was in response to news media saying they are a news site with thousands of web pages; some of those pages might not be in compliance even though the broader website is. We have guidelines on what we allow, and we hold publishers to those policy standards. So we introduced a technology that allows us to take clinical action against those specific pages, such as blocking ads on them. Before page level enforcement, we had to take nuclear action because we had zero tolerance. Now the publisher gets time to remedy the specific pages that are non-compliant.

Who flags the content?

There are many channels. One feedback channel is users: they can flag content and ads. If it is a user experience issue that is consistently being raised, we re-evaluate it and see if we need to do something from a policy perspective. If it is a policy non-compliance issue, we take action. We have zero tolerance for bad actors. We have zero tolerance for badness.

How long does it take to remove content?

There is no easy answer. It depends on the kind of abuse we see and when the abuse was detected. We tackle the issue at the transactional level, and we also look for the root cause and try to mitigate the motivation behind the abuse. We saw ads with sensationalist headlines and the monetisation of fake news. We saw an increasing trend of news sites becoming popular, so scammers started taking advantage of that. They were in this because there was a commercial incentive. So we introduced a policy that allowed us to take ads off misrepresentative content. Sometimes it is technology, sometimes it is our human review teams that take action.

How do you define ‘badness’?

This is nuanced. There are different policies for different kinds of badness. Misinformation is when an advertiser or publisher deliberately mis-states or conceals information, content, or intent with the idea of commercial profit. For example, you are not The New York Times but claim to be it. Another example of badness is distributing malware, or trying to harvest information from users without consent. We do make this information publicly available, and it is transparent.

In light of the Facebook development, what are some of the policies you are in the process of reviewing?

At Google, we have maintained very high standards. We have adequate safeguards, and we constantly re-evaluate them. From a philosophical standpoint, we value user trust and we do not take that lightly, because we know it is extremely important for our ecosystem to thrive. We have the policies, safeguards, and enforcement in place. Then, from a user choice perspective, we give users choices: they have access to how their data is being used, and they also have opt-outs available.

Do you think all users know about the choices you are talking about?

There is never enough when it comes to educating our users. There is always more we can do in this space. We want to make sure our community is aware of these options.

In a broader sense, you are saying that Google has always been doing the right things… What are some of the specific safeguards or policies that were reviewed in light of what happened at Facebook, or that need to be reviewed?

One, we obviously have very high standards. We maintain very high standards because our whole business model depends on it; we know that we can’t take this lightly. Second, Google has a set of users that are different from Facebook’s. Google Search, for instance, is a research resource. Google Maps is a different kind of product. We are execution-focussed: we make sure content is available to the user at the time they need the information. We are not in the viral business. Third, we do learn lessons. In all humility, we all have lessons to learn. So we are watching the industry, we are watching the space. But at the same time, we do what is right for the user, irrespective of whether things are in the news or not. We have been doing this for 17 years. It has just become an issue now.

The grey area really is third party developers accessing data from Google…

(Google corporate communication representative intervenes: We have seen no evidence of that kind of abuse in our system. This is a question for Facebook. From our side, we have seen no evidence of that.)

So when an app asks if I can log in with my Google ID, there is some sharing of data …

(Google corporate communication representative intervenes: I don’t think Arjun can answer that. Yes, the only situation where that may be the case is Android. We have been evaluating the app store, and so far nothing has surfaced.)
