Michelle Dang

Moratorium on AI facial recognition

I had the opportunity to research and speak on AI capabilities, especially facial recognition in surveillance, in my Law and Technology class, and I thought it was an interesting piece to share:


TL;DR: I argue that a moratorium on facial recognition technology would stifle technological innovation to the detriment of societal safety. Certainly, there is rich literature and there are compelling arguments for a moratorium, given the harmful implications for minorities and the heavy erosion of privacy and civil rights. But weighing these against the public interest, AI should continue to be explored, alongside a more tempered, less severe method of controlling AI innovation.

Submitted to University of Sydney Law Society Dissent Journal, 2020.

The proposed moratorium on facial recognition, considered in Proposal 11 of the Human Rights Commission's Discussion Paper on Human Rights and Technology, foregrounds the tension between individual privacy rights and collective security objectives. Calls for a moratorium hinge on the protection of private liberties threatened by the mass storage and interconnected use of existing user data on social media. Function creep becomes the byproduct of inadequate privacy protections in legislation that has fallen too far behind the pace of technological advancement in biometrics. On the other side, arguments against a moratorium are concerned with stifling innovation and undermining confidence in the government's interoperability-hub model for using data to protect national security. With existing privacy laws taking a 'balancing' approach between individual rights to privacy and other interests (Mann & Smith 2017: 122, 131), and against the backdrop of the tragedy of 9/11, there are important surveillance activities in law enforcement, as well as preventative measures, that are already enhanced by facial recognition. Automated Facial Recognition Technology (AFRT) poses great advances and great dangers when access to databases hosting private citizens' information is left unbound. The fruits of innovation in the enforcement of law can be nurtured without a strict moratorium if safeguards and methods of redress are introduced.


The misuse and security vulnerability of big data, from its collection to its deployment, pose serious ethical issues for modern lawmakers. Whilst there are many publicised failed applications of facial recognition systems, the economies of scale and the ease of embedding biometric technology into existing surveillance systems make it a tempting opportunity to pass up. The For Your Information inquiry into the Privacy Act 1988 (Cth), conducted by the Australian Law Reform Commission, recommended that exceptions allowing bodies to access government-collected data be permitted only with compelling justification, such as enforcement-related activities. The proposed balancing approach has scholars deeply concerned, as individual rights are invariably 'traded off' against the community's interests in preventing, detecting and prosecuting crime (Bronitt & Stellios, 2005, cited in Mann & Smith, 2017). Beyond law enforcement, the Explanatory Memorandum to the Identity-Matching Services Bill 2019 states that 'using facial biometrics can make government and private sector services more accessible and convenient to citizens'. From a data security stance, the Australian Human Rights Commission (AHRC) cautioned in response that, because biometrics are based on what are considered unique characteristics, 'there is a risk that biometric identification may be perceived to be more accurate than may be the case', further marginalising minority and vulnerable communities in its wake.


The lack of a privacy tort further tips the balance away from private liberties and exposes potential for function creep: the danger that open-source images can be collected from social media platforms such as Facebook. Even with hub models such as the National Facial Biometric Matching Capability (NFBMC), consent checkboxes become false assurances to users if governments and subsidiary law enforcement agencies can take data without a warrant or the consent of the individuals concerned. Instead, the Government may follow the template of the National Criminal Investigation DNA Database (NCIDD), created through legislation (Part 1D of the Crimes Act 1914 (Cth)), whereby private data is stored only where, say, a candidate has a criminal background that warrants the storage of their DNA profile (a 'reasonably necessary' case). This caveat requires agencies to have a lawful basis for the collection and use of facial images, such as the Australian Federal Police pooling data to fulfil policing functions or law enforcement purposes. It is an imperfect system, fraught with weaknesses (including the marginalisation of minority groups), but with biometric information quickly aggregating into a centralised repository, it would be a step towards protecting both private liberties and community safety through preemptive measures. Amazon Web Services, as the vendor of the Rekognition software, returned the onus to those purchasing the capability: 'Each organization choosing to employ technology must act responsibly or risk legal penalties and public condemnation.' The controversy over US law enforcement's use of Rekognition is a compelling cautionary tale of misuse and of the damaging effects of underdeveloped technology. One suggested consequence of improperly regulated, inaccurate technology is that innocent defendants may be put in a position where the onus is on them to disprove their involvement. Similarly, the calls for Microsoft to cancel its contract with Immigration and Customs Enforcement show that regulation is required not only of the technology but also of its creators.


The question of proportionality where human rights are impinged upon could become a useful tool to circumvent the significant invasions of privacy contemplated by the IMS Bill. Broadly, the Bill sought to establish a range of services to identify, recognise or verify a facial image. In addition, it proposed systems for the collation, access, use, sharing and disclosure of this type of data. The Parliamentary Joint Committee on Intelligence and Security (PJCIS), in its report on the IMS Bill, stated that particular care needs to be taken to ensure that the use of biometric technologies, including facial recognition technologies, is strictly controlled. A moratorium, in their view, would be as blunt an instrument as the 'blanket exemption to privacy legislation for organisations participating in the identity-matching services' that they feared. Perhaps greater oversight and extended Commissioner powers could instead ensure compliance with international covenants, the protections of Article 17 of the International Covenant on Civil and Political Rights (ICCPR), and Australia's domestic privacy protection regime set out in the Australian Privacy Principles (APPs) (Schedule 1 of the Privacy Act).


I'd argue that modern life and state surveillance necessitate the collection of biometric information. It is already embedded in international travel into Australia, where passports hold significant biometric data on their owners. Losing a passport poses a risk in much the way a breach of a government database does (albeit different in severity). Data collection is inevitable, and big data begets its use for the betterment of society. It is when information is used against an individual that privacy law calls for an overhaul, revisiting the question of a Bill of Rights as the protector of civil liberties, not a stranglehold on technology.



