Many tech critics, including This Machine Kills’ Edward Ongweso Jr and Jathan Sadowski, have argued for a long time now that not every piece of technology needs to exist, and that the leading question for any piece of tech should be: do we need this?
This is in direct opposition to how tech normally emerges, whereby an “innovation” is revealed and then thrust on the public, usually in the form of a harmful beta version that we’re promised will improve — generative AI systems built on large language models are only the most recent example of this. Worse, though, is how we are then seemingly stuck with it. There’s no serious discussion about whether this particular genie should go back in the bottle, but instead only debate about how to fix it and how we can adapt to it moving forward. This is our broken system of technological progress.
So this is the first in a casual series I’m starting on Tech We Don’t Need, ideas or products that have no place in our world but have been forced on us for various reasons. I’d like to start with facial recognition.
There was an awkward moment in 2020, deep into the pandemic and after the eruptive activism following the murder of George Floyd, when facial recognition was in the news. Most notably, Amazon decided in June to pause the use of Rekognition, its facial recognition tool, by police departments for one year, which “might give Congress enough time to put in place appropriate rules” for the ethical use of facial recognition (Microsoft said something similar). Around the same time, IBM said it would stop selling facial recognition software, and various cities in the United States began passing bans on the technology. Amazon extended its ban “indefinitely” in May 2021.
Fine and dandy — Amazon’s portfolio is diversified to the extent that forgoing Rekognition sales to additional police forces was deemed an acceptable cost. Meanwhile, those calls for rules on the tech’s ethical use have gone unheeded, as it remains largely unregulated terrain. Here in Canada, the House of Commons ethics and privacy committee recommended last year that there ought to be a “national pause” on the use of facial recognition technology until a proper legal framework is created. It also recommended that companies be forbidden from collecting biometric information from people unless they opt in. Of course, no further action has been taken — likewise in the US, where a number of jurisdictions that previously banned the tech are now bringing it back.
Much of the mainstream pushback to facial recognition is due to its obvious and well-documented racial and gendered biases, as these systems perform particularly poorly on women of colour. This phenomenon understandably received much critique, but I must admit that I found myself frustrated with the general response. Unfortunately, we live in a world, an unwanted utopia (sorry), where AI systems that incorporate facial recognition assist in hiring, loan access, and countless other things. To be ignored or discriminated against for these basic functions is dehumanizing.
From this woeful place, though, the response is largely focused on adapting these systems so that they don’t discriminate, meaning (in most cases) training the technology with a wider range of faces. For instance, Dr. Ellis Monk, associate professor of sociology at Harvard University’s T.H. Chan School of Public Health, has long studied colorism, and sought to change the dynamics of these recognition systems that perpetuated it. So, as Fortune put it, “it’s with that in mind that Dr. Monk launched a partnership with Google earlier this year.”
This is where I take pause myself. Google, for one, famously pushed out Timnit Gebru, a Google researcher focused on AI ethics, and other members of its Ethical AI research group, for daring to internally question how potentially harmful its tech could be. It goes deeper than that. Dr. Monk said to Fortune, “A lot of the time, there’s such a rush to be the first to do something that it can supersede the kind of caution that we need to take whenever we introduce any form of this technology into society. I would say is that there probably needs to be a lot more caution about launching these technologies in the first place, so, it’s not just about mitigating the things that are already out there and trying to fix them.”
True enough, but this reveals one thing that is seemingly off the table: not doing it to begin with. Dr. Monk suggests that we shouldn’t have to wait until after tech is launched to clean it up; we should wait to launch new tech until it’s truly ready and deemed ethical. That may marginally improve things, but who does the deeming? Google? And what about companies like Clearview AI, which clearly has no scruples about working with the police and anyone else willing to pay?
We seem unable to talk about whether certain tech, like facial recognition, should be used at all, or whether it’s possible to put the lid back on Pandora’s box. There is an overarching assumption that once something is here, we have to get used to it, and we have to wait for years while governments figure out what to do about it, if anything. There should be space to ask whether a product like Clearview AI’s, which gathers prints of people’s faces online without their consent to compare against law enforcement material, should exist at all.
I’ve heard of some possibly beneficial uses of facial recognition, like in healthcare, where it can help with diagnosis or monitoring patients for changes in their condition. But generally speaking, I would argue that facial recognition technology is an inherently authoritarian tool. But that’s not even my point — I’m open to hearing more about how it can be repurposed. The point is, within the normative discourse around technological progress, that conversation is a non-starter. And that leaves us at a severe disadvantage when it comes to planning for the future we actually want.
Ephemera
John Herrman wrote an interesting piece for New York’s Intelligencer about how the dominance of algorithmic video, as led by TikTok, is ruining social media. Much of this feels intuitively true: many apps have desperately struggled to be more like TikTok because it’s the hottest thing around, and most of them have done so quite poorly. Herrman makes the key observation that from the beginning, people went to TikTok to see content from people they don’t know, alongside the promise that this algorithmic sorting makes it easier for anyone, including you, to go viral. “In contrast, on social networks centered around people and feeds, content recommendations made by machine-learning algorithms sourced from users you don’t know feel out of place, incongruous, interruptive, and sleazy. They feel like what they are: desperate attempts to juice engagement based on a machine’s idea of what you want or need. They feel like ads.” Exactly, and as he further notes, if TikTok is banned, it would be a gigantic boon to its Silicon Valley-based copycats as the worse-but-only games left in town.
Yet another great episode of Paris Marx’s Tech Won’t Save Us podcast, with guest Moira Weigel, who discusses her recent Data & Society report on how Amazon’s supposed featuring of small businesses as third-party sellers actually just turns them into mini-Amazons.
I used Letterboxd user Mark Cutliffe’s list “You Don’t Get Me, I’m Part of the Union!” to help prepare for my Film & Labour course — a great resource.
Movie Recommendation: Today I sing the praises of the great filmmaker Chris Marker’s 3-minute short film Cat Listening to Music, which is available on YouTube and various other places online. It does what it says on the tin: a cat peacefully listens to music. But there’s something remarkable about this simplicity, and how Marker (seemingly) edits according to Guillaume’s movements, and even jump cuts to match the kitty’s energy.
Song Recommendation: “For Granted” — Yaeji