16 August 2019
Ever feel like you are being watched? No wonder! According to research, you’re caught on CCTV at least 70 times a day! But what if those cameras automatically recognised your face?
For many of us, facial recognition is a handy tool – unlocking our phones in the blink of an eye, or automatically tagging that selfie on social media. However, the tech also comes with more than a few concerns. Mass surveillance and the spectre of a Big Brother society are at the top of that list.
But that’s old news, right? We’ve been talking about facial recognition tech – the risks and the opportunities – for a long time. Today, however, Amazon has hit the headlines again, revealing that its controversial Rekognition facial recognition technology now detects eight emotions, including fear.
Why on earth would they want to detect fear, I hear you ask! Personally, I’m not sure. Amazon has suggested the capability could be used to help identify victims of human trafficking, for example. Whatever the reason, the announcement has certainly caused a stir online, with commenters calling the move ‘terrifying’.
The particular red flag came alongside reports that Amazon has been working with the US government and law enforcement on this facial recognition technology, and that the tech was pitched to America’s Immigration and Customs Enforcement (ICE) last year.
There are also concerns about the accuracy of the technology, which, when tested, falsely flagged more than 20 members of Congress as people who had been accused of crimes. The tests also raised questions about whether it performs equally well across all ethnicities. When you consider that this technology has been trialled by law enforcement in both America and the UK, questions of accuracy could have huge consequences for the general public.
Sci-fi to mainstream
So what’s the answer? There is no denying that being able to unlock your smartphone with your face has brought facial recognition tech from sci-fi to the mainstream. However, there are few rules or regulations governing how it can be used.
Three cities in America have already banned the use of the technology over fears of its impact on basic human rights. Tech giant Microsoft is also arguing for a government initiative to regulate use of the technology.
Closer to home, in 2018 Greater Manchester Police came under fire for running a trial at the Trafford Centre in which they monitored visitors for six months using Automatic Facial Recognition (AFR) tech, scanning shoppers for known criminals and missing people.
All that said, the tech comes with huge opportunities – both in business and in making our everyday lives more convenient. For example, a London bar has employed facial recognition tech to show who is next in line to be served. The company behind the tech, DataSparQ, also plans to launch a ‘FaceTab’, which would remove the need for a credit card when running a tab in the bar by recognising your face instead. Social media networks, meanwhile, can automatically tag and group images based on the faces featured in them.
Would you accept facial recognition tech, despite the risks, if it made your life easier?
Either way, perhaps the key lies in agreed regulation: any company or organisation using the technology would adhere to it, and would be obliged to inform people when they are being monitored, for example.
As the saying goes, with great power comes great responsibility.