Social Welfare at the Cost of Privacy. Not Anymore, Said Dutch Court

De Stadsdriehoek – one of the most central and most visited neighborhoods in Rotterdam, the Netherlands – that SyRI knows nothing about. Photo credit: Olga Kyryliuk.
Written by 2019-2020 Open Internet Leader Olga Kyryliuk

The beginning of February was marked by a pivotal case in which the District Court of The Hague declared the Dutch legislation governing an algorithmic risk assessment model – the System Risk Indication (SyRI) – to be in violation of the right to privacy. SyRI, enacted through legislation in 2014, enabled the analysis of 17 different categories of an individual’s personal data, ranging from employment history and property records to health insurance information and amount of debt. The February ruling comes after policy and human rights advocates questioned whether individuals subject to the algorithm’s analysis knew they were under such surveillance, and whether SyRI’s “findings” in fact led to further discrimination against those already facing marginalization in society.

What is SyRI, and Why is it Scary?

SyRI was designed to facilitate the detection of fraud in social security, tax payments, and social insurance contributions, as well as non-compliance with labor laws. Selected state authorities (e.g. municipalities, tax offices, the police, the Immigration and Naturalization Service), with the permission of the Ministry of Social Affairs and Employment, could cooperate and use SyRI to compare the data sets each of them held separately in order to identify discrepancies that would point to persons deemed at risk of committing fraud. The usual suspects tended to be residents of poor neighborhoods, as they were largely dependent on social security benefits, which, according to the government, made them more inclined to commit fraud. The results of SyRI’s algorithmic analysis were recorded in risk notifications that were then kept for two years in a special register.

In support of SyRI, the government argued that its deployment improved the efficiency of tracing fraud, and that it had no direct legal consequences for the persons of concern. Once a risk report was produced, it was left to the discretion of the respective administrative body whether to conduct a further investigation of the person concerned. A risk report as such was not evidence of a violation and could not serve as grounds for law enforcement action. Only a follow-up investigation could lead to the application of sanctions or any other restrictive measures against an individual.

However, it’s worth noting that SyRI was used solely for investigations in designated “problem neighborhoods” (par. 6.92 and 6.93), demonstrating a prejudice the authorities held toward low-income communities. In cases of data processing such as SyRI’s, the right to respect for private life also implicates the right to equal treatment in equal cases and the right to protection against discrimination, stereotyping, and stigmatization. In its decision, the court warned that, given the huge data sets processed by SyRI, there is a real risk of bias against people of lower socioeconomic status or with an immigration background.

The selective application of SyRI and the presumption of guilt built into its design distorted the notion of a welfare system, which, instead of ensuring a dignified standard of living for the most vulnerable, was used to conduct a witch-hunt for those the algorithm determined most likely to commit fraud – regardless of actual evidence. SyRI’s actual value relative to its intended purpose has also been called into question. Since its 2014 launch, SyRI has been used by only four Dutch municipalities – Capelle aan den IJssel, Eindhoven, Haarlem, and Rotterdam. The Capelle aan den IJssel SyRI project produced 137 risk reports and identified 41 potential risk addresses, none of which resulted in follow-up investigations due to incorrectly entered data. Another source states that not a single fraud case has been opened in the above cities based on SyRI findings. Such results cast doubt on the claimed efficiency of an algorithmic risk surveillance model, and on the intrusion into individuals’ privacy that it requires.

What is the Court’s Opinion?

In its assessment of the SyRI legislation, the District Court of The Hague found a direct contradiction with Article 8(2) of the European Convention on Human Rights (ECHR), for failure to strike a “fair balance” between the social interest the legislation serves – sustaining a well-functioning welfare system – and the interference with private life that it entails. At the same time, the court didn’t question the cooperation of state authorities for the purpose of data exchange and the use of SyRI. On the contrary, it supported the idea of exploiting the new technological possibilities SyRI provides for preventing and combating fraud, while specifically pointing out a state’s special responsibility to weigh the benefits of new technologies against possible interference with the right to respect for private life. Unfortunately, in deploying SyRI, the Dutch government failed to strike that balance.

According to the court, the SyRI legislation suffers from a number of important omissions and inaccuracies. First and foremost, it contains no information about the type of algorithms and risk indicators embedded in the SyRI risk model, or about its decision tree. Interestingly, throughout the court proceedings the government revealed little about SyRI’s inner workings, noting only that it “operates on the basis of pre-defined indicators of risk and the algorithm is not of the ‘learning’ type.”

Additionally, the legislation provides insufficient safeguards for protecting the privacy of data subjects, who are notified neither of the processing of their data nor of the existence of a risk report. Even if no subsequent profiling took place, the mere collection by SyRI of huge amounts of data has a significant effect on a person’s private life. For example, individuals flagged as “high risk” become the object of government scrutiny, with potential negative consequences. And even those not flagged by the system are subjected to a significantly higher level of analysis than is applied to residents of wealthier neighborhoods. It is noteworthy that in its assessment of the SyRI legislation under Article 8 of the ECHR, the court also referred to the fundamental principles of personal data processing stipulated in the General Data Protection Regulation. Thus, with a view to the principles of transparency, purpose limitation, and data minimization, the court concluded that the legislation is insufficiently clear and verifiable to assess SyRI’s proportionality and necessity in a democratic society.

Even the UN Special Rapporteur Took a Strong Stance

The UN Special Rapporteur on extreme poverty and human rights, Philip Alston, presented to the court his own reading of the situation as amicus curiae (friend of the court), expressing a number of well-grounded concerns about digital welfare states and the threats their fraud detection policies pose to human rights, namely the right to social security and the right to privacy. He also stressed the biased nature of using digital tools to pursue welfare fraud, given their potential to disproportionately target the poorest as those supposedly intrinsically inclined to fraud. With the application of SyRI, whole neighborhoods are deemed suspect and made subject to special scrutiny. The Rapporteur repeatedly pointed to the low success rates of the pre-SyRI risk analysis projects, and to the absence of clear data on whether that indicator improved with the introduction of an algorithmic model. Similarly, it is not clear whether the model pays for itself when the money spent on its operation is compared with the amount saved through detected fraud. Rapporteur Alston welcomed the court’s decision, saying it “sets a strong legal precedent for other courts to follow,” and is a wake-up call not only for the Netherlands, but for many other countries experimenting with digitizing government.

Although the decision can still be appealed, it’s a reminder to governments that no matter how far they might go digitally, internationally recognized human rights should be a litmus test for any implemented policies and tools. Unfortunately, SyRI is not a unique phenomenon, with other states (Australia, Canada, India, Kenya, the United Kingdom, the United States - to name a few) developing similar welfare surveillance models. However, the Dutch court was the first to say that privacy is priceless, even in welfare states.