It is important to note that, in European legislation, the articles in the main text are binding
on member states but are accompanied by “recitals,” which are designed to help states
interpret the articles and understand their purpose. Recitals are usually regarded as interpretive
aids rather than binding, but this status is contested and differs among member states. Unfortunately, in relation
to Article 22, Recital 71 mentions some key matters not included in the main text. Article
22(3) mandates that safeguards include “at least the right to obtain human intervention on
the part of the controller, to express his or her point of view and to contest the decision,” but
the safeguards listed in Recital 71 “should include specific information to the data subject
and the right to obtain human intervention, to express his or her point of view, to obtain an
explanation of the decision reached after such assessment and to challenge the decision” (italics
added).
This strange mishmash of texts thus cannot firmly be said to mandate a right to explanation
in all or indeed any circumstances and may not be interpreted the same way from state to
state.
This is a serious, but not the only, problem with Article 22.
• Article 22 applies only to systems where decisions are made in a “solely” automated
way—that is, there is no human in the loop—and there are very few of these and fewer
that are “significant” (see below). How meaningful any human input must be for a decision
to fall outside this category is addressed by recent regulatory guidance,3 but remains unclear
and untested.
• What is a “decision”? The GDPR gives us no help with this at all other than that it
includes a “measure” (Recital 71). Is sending a user a targeted ad via an algorithmic
system a decision? It produces no binding effect; the advert may be ignored; and in
many cases, it is hard to see what action causally flows from it. Yet as in the well-publicized
Latanya Sweeney example,4 sending adverts promoting help with criminal arrests solely to
“black-sounding” names was worrying and offensive—and potentially dangerous if these
characterizations were inherited by systems selecting individuals for stop and search or
airport screening. Although a single advert delivery decision might not have a significant
effect on an individual’s life, the cumulative effect on an entire group or class may be
troubling. Such group privacy impacts are not dealt with well by DP law—an area based
on individualistic human rights—and are exacerbated by a continuing lack of provision for
class actions in EU states.
• Article 22 applies only to a decision that produces legal or other “significant” effects.
This is vague in the extreme. Some would argue this could only apply to systems that
make important, binding decisions on things like criminal justice, risk assessment, credit
scoring, education applications, or employment. Yet such systems are rarely if ever
entirely automated, even if the human’s involvement is often nominal. Furthermore,
some commercial decisions may seem trivial as a one-off, but are significant in aggregate.
Mendoza and Bygrave argue that advertising decisions can never be significant,5 while
European regulators recently produced guidance indicating the opposite.3 Might systems
recommending buying choices or targeting adverts not limit a user’s worldview or choices,
or disseminate “fake news” via algorithmic filter bubbles? Arguably, such phenomena are
becoming deeply and significantly destructive to our democracy. We