The future of passports: Biometric authentication and potential risks

The world of passports has changed radically over time. Austrian novelist Stefan Zweig (1881-1942) famously noted that there was a time when “there were no permits, no visas, and it always gives me pleasure to astonish the young by telling them that before 1914 I travelled from Europe to India and to America without a passport and without ever having seen one”.

Leaping to the 21st century, Matthew Longo explains how our bodies (or, better, our biographies) will be our passports sooner than we think. The growing use of biometrics marks a radical departure from earlier eras. According to Longo, “ruling authorities have issued travel documents since at least medieval times, and some of these early versions included a brief description of the bearer [including age, height etc.]. But it was only with the introduction of the passport photograph in Europe during World War I that they could be reliably used to verify their owner’s identity – although even this was not certain”. The availability of fingerprint and facial recognition has only reinforced this process of tying identity verification to the body itself.

The widespread use of passports is therefore a fairly recent phenomenon and technological innovations have led to the increasing intertwinement of passports and surveillance (in the form of border control technology and biometric identification systems). These recent developments could be troubling, according to Longo. If we were merely witnessing a transition from paper to digital, this would not raise any concerns. However, what we are in fact seeing is a move from verification to identification.

Traditionally, a passport was checked at the border to see whether document and bearer matched. If a problem arose, it concerned a mismatch between person and passport, not the person as such. The shift from verification to identification (i.e. matching an individual’s biometrics to a digital profile that can include behavioral, reputational and social data) can carry enormous personal risks. As Longo explains, “today, when a border agent pulls up your record, he’s not just verifying that you are who you claim to be, he’s also pulling up your identifying material – and that could include a judgment by the authorities about how he should be treating you”.

On a different occasion, Longo drew a useful distinction between biometric and biographic data, the latter covering the behavioral data mentioned above. FCI previously wrote about China’s pioneering role in this kind of “algorithmic data collection” through its Social Credit System. The line between biometric and biographic data is thin. The decision by the city of San Francisco to ban facial recognition earlier this year – applauded by some because of the potential inaccuracy of the underlying technology, but also because of America’s “long national history of politicized and racially-biased state surveillance” – can be said to involve both forms of data.

Although facial recognition at first sight merely seems to check someone’s physical characteristics, the flaws in the current technology and the inherent biases in algorithms open the door to greater government control. It has by now become common practice for governments to assign “risk ratings” to individuals – frequently based on race or religion – and to turn to data mining. The automated algorithms used for data mining are far from neutral and frequently appear ideologically biased. There is a real risk that groups and individuals may be differentially targeted and affected by big data surveillance in what might be considered a system of “social sorting” based on race, income or religion. Longo notes that with the technology currently available, authorities tend to cast their net wide and have an incentive to overreact. “For citizens”, he notes, “this tradeoff – between civil liberties and security – is far from clearly beneficial”. The dystopian yet not unrealistic scenario is that biometrics will develop in a way that helps “predict threat-actors before they act”. Governments can then act on data that is “action-independent”, and we end up in a society where ‘innocent until proved guilty’ gives way to ‘risky until proved safe’.

It is urgent that present and future citizens reflect on how to contain the creep of security technology, which could ultimately turn citizenship into something one must earn rather than a right one can count on.

Edited by: Dr. Olivier Vonk
