Blogs

Dutch DPA shares new data about the Right to be Forgotten

Three years ago, the European Court of Justice gave judgment in the Google Spain case, which established the so-called ‘right to be forgotten.’ This right enables individuals to require search engines to remove irrelevant search results that appear for searches on their name.

I have co-authored several articles and a book chapter on the right to be forgotten. One of these articles deals with how the right to be forgotten has been received in the Netherlands. Frederik Zuiderveen Borgesius and I may have to write an update, since some uncertainty has arisen in Dutch case law about how the right to be forgotten applies in cases concerning sensitive personal data. The Dutch Supreme Court has also recently ruled on the right to be forgotten. In its decision, the Supreme Court essentially held that the right to privacy, as a rule, overrides the interests of the search engine and the interests of internet users searching for information. This ruling is in fact a restatement of what the European Court of Justice had already held in its Google Spain decision.

Many Dutch individuals have exercised their right to be forgotten. In the Netherlands, Google has received more than 32,500 requests regarding almost 114,000 URLs. About 46% of those URLs have been removed by Google. The reason for writing this blog is that the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) has released new data about the cases it considered. This newly released data concerns cases in which the search engine refused to delist a search result and the DPA acted as a mediator.

If a search engine operator refuses to delist a search result, individuals can ask the Dutch DPA to act as a mediator. Since the Google Spain decision, the Dutch DPA has received requests from 155 individuals. Although this may seem like a high number, these 155 individuals represent only a fraction of the people whose removal requests have been denied by a search engine.

In 70 cases, the Dutch DPA decided not to mediate either because (1) the search engine did not clearly violate Dutch data protection law, (2) the facts of the case were unclear, or (3) there were ongoing legal proceedings.

In 52 cases, the Dutch DPA did act as a mediator between Google and an individual, and in two cases it mediated between Microsoft Bing and an individual. In 37 of those cases, search results were eventually removed. In 14 cases, Google stood by its decision not to remove the search results. One case involving Bing is still pending. If you crunch the numbers, you will find that in about 24% of the 155 cases, the search engine (Google) went back on its decision not to remove a particular result.
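For what it is worth, here is the back-of-the-envelope calculation behind that figure, using the case counts reported above (a quick sketch in Python, nothing more):

# Case counts taken from the Dutch DPA's figures summarized above
total_requests = 155        # individuals who asked the DPA to mediate
declined       = 70         # cases in which the DPA decided not to mediate
mediated       = 52 + 2     # mediations involving Google and Microsoft Bing
removed        = 37         # mediated cases in which results were eventually removed

print(f"Share of all requests ending in removal: {removed / total_requests:.1%}")
# prints roughly 23.9%, i.e. about 24% of the 155 cases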

The report also contains a list of categories of cases in which the DPA decided not to mediate. In these cases the Dutch DPA thus seems to agree with the search engine’s refusal to remove the results. This is an interesting list, because it shows where the Dutch DPA draws the line, but also how some may try to use (or misuse) the right to be forgotten in an attempt to conceal inconvenient information. The DPA did not mediate in cases concerning:

  • Controversial statements by politicians, including former politicians, if the information was no more than four years old, or older if the search results related to still relevant political history.
  • The behavior of top executives or people with high managerial responsibilities in companies or organizations, as well as accounts of the wealth of very rich people (provided that the information was not obviously false or factually incorrect).
  • (Medical) disciplinary sanctions and convictions for serious criminal offenses.
  • Very complex cases in which proceedings are still ongoing and which deal with allegations, convictions, and punishment of criminal conduct or fraud by individuals playing a role in public life, and in which the Dutch DPA was unable to assess whether the information in question was accurate.
  • Inconvenient, but not incorrect or outdated, information about people playing a role in public life, especially information that has been made public by these people themselves.
  • Allegedly defamatory information, provided that the information was not obviously false or untrue.
  • Search results that appear when someone searches for a word, an address, or a telephone number.

What can we tell from this list? Well, a couple of things.

Firstly, opponents of the right to be forgotten have often pointed out that the right can be misused to remove relevant information from the public eye. Based on this list, we can assume that there have been Dutch politicians or former politicians who have tried to use the right to be forgotten to remove inconvenient information about themselves. This is of course not what the right to be forgotten was created for. As search engines are primarily responsible for removal decisions, we can only hope that they make those decisions with care. The list released by the Dutch DPA underscores this point.

Secondly, the Dutch DPA gives an overview in its report of the body of case law concerning sensitive data, however inconsistent that case law may be. The Dutch DPA does not take a clear stance on whether removal requests concerning sensitive data should be treated differently. However, from the above list it is clear that the Dutch DPA is of the opinion that information about serious criminal convictions, which could be considered sensitive data, should not be delisted if people search for a criminal’s name. This seems to suggest that the Dutch DPA does not treat information about crimes any differently from non-sensitive personal data.

And finally, the Dutch DPA does not want to be involved in assessing whether a particular piece of information is defamatory. Determining whether information is defamatory is essentially left to the courts. Such information should not be removed on the basis of the right to be forgotten, unless it is very clear that the information is incorrect.

The Dutch DPA’s report is available here (in Dutch).

Notice and takedown systems as restrictions on e-commerce?

Yesterday, I talked about the recently released report of the European Commission on its e-commerce sector inquiry. The Commission conducts such inquiries to see whether, in a particular sector of the economy, there are indications that competition may be negatively affected.

The Commission Staff Working Document that accompanies the report deals, inter alia, with the role that online marketplaces play as sales channels for retailers and manufacturers. More specifically, the report goes into the restrictions that limit the ability of retailers to sell their products on online marketplaces. Somewhat surprisingly, notice and takedown systems are also discussed under the heading “Restrictions to sell on online marketplaces” (pp. 133 and 149).

Notice and takedown systems are put in place by operators of online marketplaces primarily to enable holders of intellectual property rights to notify these operators of infringing material on the platform. Notice and takedown systems are incentivized by the liability regime of the E-Commerce Directive, under which only intermediaries that have knowledge of infringing material can be held liable for that material.
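To make the mechanism a bit more concrete, here is a minimal sketch, in Python, of how a takedown notice could be modelled and handled. The field names and the handling logic are my own illustration and do not describe any particular marketplace’s implementation:

from dataclasses import dataclass

@dataclass
class TakedownNotice:
    """An IP rights holder's notification about an allegedly infringing listing (illustrative fields)."""
    listing_id: str
    notifier: str        # e.g. the trademark or copyright owner
    claimed_right: str   # e.g. "trademark" or "copyright"
    explanation: str     # why the listing allegedly infringes

def handle_notice(notice: TakedownNotice, listings: dict) -> str:
    """Once notified, the operator has knowledge of the alleged infringement and must act
    expeditiously to keep the benefit of the E-Commerce Directive's liability exemption."""
    if notice.listing_id not in listings:
        return "listing not found"
    # In practice the operator first reviews the notice; this sketch simply removes the listing.
    del listings[notice.listing_id]
    return "listing removed"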

Now, in the Commission Staff Working Document there are some interesting numbers about the use of notice and takedown systems. About 60% of the online marketplaces have a notice and takedown system in place. Interestingly, 40% do not offer a specific notice and takedown system. According to the Working Document:

“Some of those that indicate not to have specific notification mechanisms in place are smaller marketplaces that verify the listing of each product manually thereby limiting risks of illegal or counterfeit products being sold on the marketplace. Others contractually oblige retailers not to sell such products and to respect existing rights of third parties.”

Even though the Working Document speaks of “specific notification mechanisms”, I assume that these marketplaces can still be notified about infringements through their general contact forms. Some marketplaces also cooperate with rights owners and offer more proactive mechanisms. The Working Document does not mention any such programs, but a well-known initiative is eBay’s VeRO program.

Parties who notify online marketplaces about illegal activities or materials include intellectual property rights owners, public authorities, competitors of sellers, and customers. Quite a few online marketplaces also remove listings on their own initiative:

“Four out of five marketplaces report to remove items or sellers from the marketplace also on their own initiative, i.e. without a prior complaint from a third party. Items removed are typically prohibited items or items that may infringe third party intellectual property rights.”

The Working Document does not explain how notice and takedown systems may restrict the selling of products online. Perhaps the fact that a product is removed after a notification must be seen as a restriction? But if a product is removed through a notice and takedown process, the removed product must – at least in theory – be infringing or unlawful. Why would preventing the sale of infringing or unlawful products be seen as a restriction?

Could it be that e-commerce is restricted by the way notice and takedown works in practice? As notice and takedown systems are a form of private ordering by online service providers, these systems are notoriously opaque and may not always strike a fair balance between the interests of the different parties involved. The Working Document notes in particular the complaints of retailers, who “stress the importance of the transparency of the process and consider the possibilities of retailers to defend their interest and request review of the decision taken by the marketplace as not sufficient.” As such, do notice and takedown systems hinder e-commerce and form a restriction on the sale of products online?

Two thirds of online retailers use algorithms to adjust prices to competitors

Yesterday, the European Commission released its Final report on the European e-commerce sector inquiry. In this report the Commission summarizes the input it received from stakeholders active in the sector and presents its own views. Here are some of the key facts and figures regarding the use of modern technologies in the European e-commerce sector.

1. Two thirds of online retailers use algorithms to adjust prices

The Commission highlights that online trade has increased price transparency. Consumers can obtain and compare prices more easily than in the offline world. Retailers can compare prices as well. Among the respondents to the Commission were 1,051 retailers. The report presents an interesting finding with regard to the use of price-setting algorithms by these retailers:

“A majority of retailers track the online prices of competitors. Two thirds of them use automatic software programmes that adjust their own prices based on the observed prices of competitors.”

According to the Commission Staff Working Document (p. 175), 43% of these retailers adjust prices manually, 8% use software to adjust prices, and 27% combine manual and automatic price adjustments. There is nothing wrong with adjusting your prices to those of your competitors. But, as the Commission notes, “the availability of real-time pricing information may also trigger automatised price coordination.” The use of price monitoring and price-setting algorithms then raises competition concerns.
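To give a sense of what these “automatic software programmes” do, here is a deliberately simplified repricing rule in Python. The undercutting step and the price floor are invented for the example and are not taken from the report:

def reprice(own_price: float, competitor_prices: list[float],
            undercut: float = 0.01, floor: float = 0.0) -> float:
    """Match the cheapest observed competitor, undercut it by a small step,
    but never drop below a configured price floor (illustrative rule only)."""
    if not competitor_prices:
        return own_price
    target = min(competitor_prices) - undercut
    return round(max(target, floor), 2)

# Example: competitors are observed at 19.99, 21.50 and 20.25
print(reprice(own_price=22.00, competitor_prices=[19.99, 21.50, 20.25], floor=15.00))
# prints 19.98

The competition concern becomes easier to see once you imagine every retailer in a market running a rule like this on real-time price feeds: prices may start to move in lockstep without anyone ever having agreed on anything.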

Later on in the report, the Commission also notes that price monitoring algorithms may be used by manufacturers to check whether their resellers deviate from any pricing recommendations given by the manufacturers. “This could allow manufacturers to retaliate against retailers that deviate from the desired price level. It may even limit the incentives for retailers to deviate from such pricing recommendations in the first place.”

2. Digital content: concerns

I’m of course also interested in what the report says regarding digital content, especially since it is so closely related to the use of intellectual property rights. However, there are no real surprises or interesting facts in the report. The report essentially finds that online distribution models offer a lot of advantages. With regard to the possibilities of online transmission, it states the obvious: lower costs, flexibility, scalability.

On the provision of digital content, such as music and audiovisual products, the report mentions that the availability of relevant intellectual property rights is “the key determinant for competition.” Licenses acquired by content providers often deal with the technology used to disseminate the online content, the duration or time at which the content may be provided (“release windows”), and geographic areas in which the content may be offered. The Commission expresses concerns about contractual restrictions in licensing agreements and the effect they may have on competition and innovation.

3. Geo-blocking

Geo-blocking and geo-filtering technologies can be used by retailers to refuse to sell to customers abroad, or to alter the price, terms and conditions, and so on, depending on the customer’s location. Such technologies may enable retailers to partition the market and essentially form a cartel. The report notes:

“The majority of geo-blocking measures in relation to consumer goods result from unilateral business decisions of retailers not to sell cross-border. However, more than 11% of retailers indicated that they have contractual cross-border sales restrictions in at least one product category in which they are active.”

Hence, roughly one in nine of the retailers asked is bound by contractual restrictions on cross-border sales. In some circumstances, the use of geo-blocking technologies to implement such restrictions may be problematic from a competition law perspective.
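For readers who have not encountered the technology, geo-blocking at checkout can be as simple as the following Python sketch. The blocked-country list, the demo IP lookup and the addresses (taken from the reserved documentation ranges) are all made up for illustration:

BLOCKED_COUNTRIES = {"DE", "FR"}   # invented list of countries this hypothetical retailer refuses to serve

def country_from_ip(ip_address: str) -> str:
    """Stand-in for a real IP-geolocation lookup (normally a GeoIP database or service)."""
    demo_lookup = {"192.0.2.10": "DE", "198.51.100.7": "NL"}
    return demo_lookup.get(ip_address, "UNKNOWN")

def may_checkout(ip_address: str) -> bool:
    """Geo-blocking in its simplest form: refuse the sale if the visitor's country is on the blocked list."""
    return country_from_ip(ip_address) not in BLOCKED_COUNTRIES

print(may_checkout("198.51.100.7"))   # True: the Dutch visitor may buy
print(may_checkout("192.0.2.10"))     # False: the German visitor is blocked

Geo-filtering works along the same lines, except that instead of refusing the sale outright, the shop shows different prices or terms depending on the detected country.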

The Final report of the European Commission is available here.