From Greece, to Mexico, to Kenya, to Palestine, borders across the globe have become hotbeds of unregulated technological experimentation, where entire ecosystems of automated “migration management” technologies are being deployed to intensely surveil people on the move with little accountability or oversight. And the dragnet they’ve created is having a devastating human cost.
Instead of being able to exercise their internationally recognised human right to migrate, the vast array of surveillance technologies now deployed against people on the move – including drones, sound cannons, robo-dogs, surveillance towers, predictive analytics, biometric data harvesting, lie detectors, heat sensors, high-tech refugee camps, and more, many of which have now been infused with artificial intelligence (AI) – means they are being forced into increasingly desperate and life-threatening situations.
As a result, entire border-crossing regions have been transformed into literal graveyards, while people are resorting to burning off their fingertips to avoid invasive biometric surveillance; hiding in dangerous terrain to evade pushbacks or being placed in refugee camps with dire living conditions; and living homeless because algorithms shielded from public scrutiny are refusing them immigration status in the countries they’ve sought safety in.
In her book, The walls have eyes: Surviving migration in the age of artificial intelligence, refugee lawyer Petra Molnar documents and centres countless first-hand stories of people facing fear, violence, torture and death at the hands of state border authorities.
“Borders are both real and artificial. They are what historian Sheila McManus calls an ‘accumulation of terrible ideas’, created through colonialism, imperial fantasies, apartheid, and the regular practice of exclusion,” writes Molnar. “The walls have eyes offers a global narrative of the sharpening of borders through technological experiments, while also introducing strategies of togetherness across physical and ideological borders.
“It is an invitation to simultaneously bear witness to violent realities and imagine a different world – because a different world is possible and ‘hope is a discipline’. We can change the way we think about borderlands and the people who are caught at their sharpest edges.”
Speaking with Computer Weekly, Molnar describes how lethal border situations are enabled by a mixture of increasingly hostile anti-immigrant politics and sophisticated surveillance technologies, which combine to create a deadly feedback loop for those simply seeking a better life.
She also discusses the “inherently racist and discriminatory” nature of borders, and how the technologies deployed in border spaces are highly difficult, if not impossible, to divorce from the underlying logic of exclusion that defines them.
‘We are Black and the border guards hate us. Their computers hate us too’
For Molnar, technology provides a window into how power operates in society, and whose priorities take precedence.
“So much of the tech could absolutely be used for other purposes,” says Molnar, noting how drones could be used for maritime rescue, while AI could be used to audit immigration decisions or help identify racist border guards.
“Instead, it’s always weaponised and positioned as a tool to oppress an already marginalised group of people. So much of it is about these broader logics of what the migration control system is supposed to be doing – which is preventing people who are unwanted or an ‘other’, or seen as a threat or a fraud, and keeping them away as much as possible, and also to deter people from coming.”
Given the socio-technical nature of technology – whereby the technical components of a given system are informed by social processes and vice versa – Molnar says the conceptualisation of borders as a bulwark against “the other” affects how technology is developed for use in border spaces.
This dynamic is summed up by a quote Molnar uses in the book from Addisu – a person on the move, originally from Ethiopia, who has been trying to reach the UK since arriving in Europe two years ago: “We are Black and the border guards hate us. Their computers hate us too.”
In essence, the exclusionary impulse of borders means migration is ultimately framed as a problem: “It’s seen as something to solve, and now we have tech to solve the problem.”
Molnar adds that the framing of migration as a problem means states are then able to derogate from people’s rights, because migrants are seen as threats or frauds, and are not considered refugees until proven otherwise.
“If you look at criminal law, in most jurisdictions at least, you’re innocent until proven otherwise. But you’re not a refugee unless proven otherwise. We call it the reverse onus principle, where it’s on the person to prove that they are telling the truth, that they have a legitimate right to protection.”
She says if this is the political starting point, “where you presume that everybody is unwelcome unless proven otherwise, and you don’t have a lot of law, and you’re obsessed with technology, it creates this perfect environment for really high-risk tech with pretty much zero accountability”.
Reduced to a data point
To Molnar, the use of surveillance technologies to manage people’s movement across borders is inherently dehumanising. “That’s something I saw as a trend in so many of the conversations I had with people on the move, who were reflecting on being reduced to a data point, or an eye scan or a fingerprint, and already feeling more dehumanised in a system that’s primed to see them as subhuman,” she says.
She adds that automating or even just augmenting migration-related decision-making with algorithms or AI also works to divorce people on the move from their humanity in the eyes of those ultimately making the decisions.
“Instead of looking someone in the eye, they’re looking at an image of a person or a data point that is, again, divorced from a person’s humanity and the complexity of people’s stories and legal cases.”
Because border spaces are already so opaque, discretionary and characterised by immense power differentials between border officials and people crossing, Molnar says the use of various technologies only makes it more difficult to introduce accountability and responsibility – not just in terms of governance, but on the human level of how it divorces the people carrying out the violence from their own humanity as well.
“When the violence happens way over there as a consequence of tech, as a consequence of surveillance, it’s not so immediate, and then perhaps not so viscerally felt, even by the decision-makers who are there. That is also a violent practice, this disavowal of responsibility.”
This effect is also exacerbated by AI, which has particularly insidious effects in migration-border management contexts due to the way it sees people on the move through the gaze of past prejudices, and essentially projects existing inequalities, biases and power imbalances into the future while treating those discrepancies as an objective truth.
“Ultimately, it’s about putting people in boxes and concretising their experience based on really rigid categories,” she says. “Technology forces us to not sit in the beautiful complexity of what it means to be a human being, but instead be categorised as a data point in these rigid categories that don’t map onto the messiness of human reality.”
By reducing people to rigid categories and classifications, Molnar says it becomes easier to treat them with cold, computerised contempt.
However, she adds, while a logic of deterrence is clearly baked into the global border system, in practice, the use of increasingly sophisticated surveillance systems only works to push people towards increasingly dangerous routes, rather than deter them completely.
“You see that a lot with the border surveillance infrastructure that’s grown up around the Mediterranean and Aegean, but also the US-Mexico corridor. They’ll say, ‘If we introduce more surveillance, then people are going to stop coming’, but that doesn’t work,” she says, adding that people exercising their internationally protected right to asylum are being forced into taking more dangerous routes to reach safe destinations.
“That’s why you see so many people drowning in the Mediterranean or Aegean Sea, or why you have deaths nearly tripling at the US-Mexico border.”
‘Humane’ dehumanisation
Molnar says that while border technologies have a clearly dehumanising effect on people on the move, official justifications for using the tech largely revolve around making the migration process more humane.
“The Democrats [in the US] have become very good at this because they say, ‘Smart borders are more humane, they’re better than the Trump wall and putting babies in cages’. But then, once you start picking it apart, you realise that these policies are also hurting people. Again, the near tripling of deaths at the US-Mexico border since the introduction of the smart wall system, that’s quite telling,” she says, adding that technology often works to obfuscate the extent and seriousness of border violence, hiding under the guise of somehow being more humane: “That’s why it’s crucial to interrogate the power of the technology.”
In many cases, the deployment of surveillance technologies in migration contexts is not only posited as more humane, but is explicitly justified under the pretext of providing humanitarian support to underdeveloped countries.
“Europe and the US are so implicated in supporting regimes that are very problematic under the guise of humanitarian support. But oftentimes, it’s for border externalisation – it’s to get other actors to do the dirty work for you,” she says, adding that the European Union (EU), for example, regularly provides funding and tech to various paramilitaries on the African continent that are involved in border enforcement, as well as coastguards and border force teams in countries like Libya and Niger.
“If the frontier is moving further and further away, it makes it easier for ‘Fortress Europe’ to remain unassailable.”
Molnar adds that these kinds of humanitarian justifications for expanding border tech deployments are also being pushed by the third sector and non-governmental organisations.
Although people working in these organisations are often well-intentioned, Molnar says international organisations such as the United Nations (UN), Unicef or the World Food Programme have “huge normative power” over the idea that “more data is better”, and are therefore a massive driving force behind normalising a lot of the border tech currently in use.
“The refugee camps in Kenya, like Dadaab and Kakuma, were some of the first places that had biometric registration. If you look at the Global Compact on Migration, which is this big international document that was put together a few years ago, the first point is ‘more data’. That’s quite telling,” she says.
“When you see, for example, what the United Nations High Commissioner for Refugees did with the Rohingya refugees – they collected so much data, and then inadvertently shared it with the Myanmar government, the very government that the refugees are trying to flee from.
“But how did that happen? I think we need to question what happens in this ‘third space’ of international actors too, not just states or the private sector.”
Surveillance pantomime
In her book, Molnar notes that while the various technologies of surveillance and control deployed at borders work well due to their diffuse, omnipresent nature, they often don’t even have to be that effective in achieving the goals of state authorities, as “their spectre and spectacle changes our behaviour, modifies our thinking, and adds to a general sense of unease, of always being watched”.
Highlighting her visits to the Evros region between Greece and Turkey, Molnar says it’s not always clear what the tech is doing, especially when it comes to some of the more obscure AI-driven tools being used.
“It’s almost like it’s the ‘performance’ of the tech that’s more important,” she says, adding that while it is certainly true that the tech does directly affect negative outcomes for people on the move, it’s unclear if this is because the tech is doing its job, or due to the powerful effect “security theatre” has on people’s behaviour.
“The performance of surveillance and securitisation is what is quite powerful. But ultimately, I think so much of it is about politics … you feel the power of the surveillance, even if it’s not really there. You feel the paranoia.”
Molnar adds that this dynamic is further enabled by the legislative and governance frameworks around border technologies, which essentially work to shield the states and corporations involved from any meaningful accountability, as invoking the spectre of “national security” allows them to shut down any scrutiny of the tech they’re deploying.
While the governance of border technologies globally is already characterised by extreme opacity, this gets even worse “as soon as the national security paradigm is invoked, because then there are even fewer responsibilities that a state has, for example, to citizens or concerned researchers to tell them what’s actually happening”.
Molnar adds that while governance and regulation can help improve transparency within a system that is designed to be opaque, there have been “disappointing trends” in recent years.
“I think a lot of us – perhaps naively – are hoping that the European Union’s AI Act might be a strong force for good when it comes to putting up some guardrails around border tech in particular,” she says, adding that while it contains positive measures on the face of it – including a risk matrix and allowing for certain technologies to be coded as high risk – the national security carve-outs mean the responsibilities placed on various actors by the legislation simply do not apply in border spaces.
“As soon as you can say something’s national security, then the law doesn’t apply in the same way, what good is a piece of legislation like that?”
Artificial and colonial borders
Highly critical of the way artificial borders are seen as natural phenomena when they are, in fact, historically recent social constructs, Molnar says the surveillance apparatus being deployed worldwide helps to maintain and reinforce imperialist power dynamics.
“So many people think that borders and bordering as a practice is something that’s always been with us when, in fact, it’s a social construct, and borders have been shifting and changing since time immemorial,” she says. “There were times where borders really weren’t a thing, where people could just travel freely from place to place, and the current reality of hard border control is actually a very recent phenomenon.”
Relaying a recent trip to Naco, a small border community in Arizona, Molnar says she was struck by how porous the US-Mexico border was even just a few decades ago, with locals sharing stories of how they are no longer able to move across town since it was bifurcated by a huge wall.
“People would be playing volleyball across the border, and now there’s this hulking piece of infrastructure. It seems so intractable, like it was always there, but that’s not the case at all.”
For Molnar, the horrific human impacts of border technologies therefore ultimately run along and reinforce “colonial delineations”, with who is seen as a “worthy” immigrant and who becomes the ultimate “persona non grata” being “mapped along the lines of Western imperialism, white supremacy and apartheid”.
She warns that these modes of thinking mean that when it comes to the development of border or migration management technologies, people on the move and their needs will always be thought of last.
“Why can’t they just use AI to help us with all the forms?” she says, only half joking. “Instead, it’s visa triaging, and robo-dogs, and AI-powered lie detectors. Why don’t you just talk to people on the move? What do they need?
“But again, it’s all about power, and it’s power brokers that are the ones making decisions. It’s the states, it’s the private sector, it’s the UN and international organisations. It’s definitely not affected communities.”
Further highlighting the example of AI-powered lie detectors in airports – which were developed under the EU’s Horizon 2020 project – Molnar says the academics involved (who she spoke with directly) did not take into account the way people on the move may act differently due to trauma, which affects their memory and how they tell stories or relay information, or cultural differences in the ways they communicate.
“I remember talking to this group of academics, and they were so distressed. They were like, ‘We didn’t think about any of this’, and I said, ‘How could you not? Did you not talk to a single refugee lawyer or a refugee before designing this?’ That’s disturbing to me,” she says.
“Affected communities are the last possible kind of stakeholder in this conversation, and I think we need to flip that completely.”
Optimism over despair
It is an open question for Molnar whether it is even possible to introduce new technologies into border spaces that don’t uphold their inherently “violent ideals of exclusion”, noting that while it is entirely possible to imagine genuinely helpful uses of tech, a confluence of powerful interests is preventing this from happening.
“So much of the money in the ‘border industrial complex’ that’s grown up around border tech is there to support states and the private sector in their goal to keep people out, rather than using even a fraction of this money to either make the system better as it is right now, or even support tech development for communities, by communities.”
However, Molnar says there are actions that people can take, and are taking, to help support people on the move facing violence at borders.
She adds that while there are already calls for stronger regulation of border technologies to hold governments accountable, and civil society and journalists have a role in asking hard questions about their tech deployments, one option could be to take a smaller, more localised approach.
Highlighting various municipalities in the US that have banned facial recognition technology, Molnar says a similar “community approach” could be taken with regards to the various technologies being deployed in border spaces.
Despite this, she notes “that also isn’t enough”, as any approach would also have to look at the problem holistically, given border tech specifically operates at both a national and global scale, creating a tension and disconnect between people on the ground and those holding the political levers of power.
Given the rapid environmental degradation taking place worldwide, Molnar adds that the use of technology to push people away and strengthen borders simply will not stop people from migrating.
An alternative approach is therefore urgent, which Molnar says must include challenging the existing laws (while recognising the clear limits of our current legal systems); co-opting and co-designing the tech in the interests of people on the move, rather than corporations and state authorities; and creating participatory institutions led by people on the move (based on the principle ‘nothing about us without us’).
“I think so much of it comes down to seeing one another as fully human, and leading from a place of curiosity rather than fear.”
Molnar concludes that while the situation at borders around the globe may be bleak, “there are always people who make choices to show up”.
Whether that be people launching their own search and rescue boats in the Mediterranean, going into the Sonoran Desert to do “water drops” for those crossing the dangerous terrain, or farmers sheltering people on the move in the corridor between Poland and Belarus – all of which pose a real threat of arrest and loss of liberty – Molnar says it’s ultimately about human-to-human interaction and finding ways of moving past differences: “There is always a choice.”