Beyond the interesting constitutional debate on the interpretation of the Fourth Amendment in the digital age that took place in Riley v. California,1 what matters about the case for the purposes of this column is how powerful the shock wave of Artificial Intelligence (AI) is, already present in the judicial discourse of such an important court. Chief Justice Roberts’ phrase, “Modern cell phones … are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” only confirms this trend by alluding to the phenomenon of cyborgization, one of the most popular faces of AI and the one that most captures public attention.
The navigation chart that the United States has in the field of AI dates from October 12, 2016: the report “Preparing for the Future of Artificial Intelligence,” prepared by the Subcommittee on Machine Learning and Artificial Intelligence of the National Science and Technology Council (NSTC) Committee on Technology, within the Executive Office of the President. The report’s tone is optimistic: AI has the potential to improve people’s lives by helping to solve some of the world’s greatest challenges and inefficiencies. It also notes that AI, being a multipurpose technology, has applications in many products, such as cars and airplanes, which are already subject to regulations designed to protect the public from harm and ensure fairness in economic competition.
In light of these considerations, the report asks how the incorporation of AI will affect goods and services under the relevant regulatory approaches. The report’s thesis is that, in general, the regulation of AI-enabled products to protect public safety should be informed by an assessment of the risks that the addition of AI may reduce, along with the risks it may increase. Furthermore, if a risk falls within the bounds of an existing regulatory regime, the policy discussion should begin by considering whether existing regulations already adequately address the risk or whether they need to be adapted to the addition of AI.
Promoting innovation and respecting civil rights are the two terms of the regulatory equation that is beginning to take shape in the United States. The aim is to promote AI with justice, fairness, accountability and safety, in line with an orientation first formalized in the report Big Data: Seizing Opportunities, Preserving Values (2014),2 which frames regulation in terms of privacy, consumer rights, transparency, security and non-discrimination.
However, the first federal laws on AI have not addressed issues related to civil rights; they have instead regulated defense and security matters. Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019, Pub. L. No. 115-232, 132 Stat. 1636, 1695 (Aug. 13, 2018) (codified at 10 U.S.C. § 2358, note) took the important step of proposing definitions of AI, prescribing that the term “artificial intelligence” includes the following: (1) any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets; (2) an artificial system developed in computer software, physical hardware, or another context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action; (3) an artificial system designed to think or act like a human, including cognitive architectures and neural networks; (4) a set of techniques, including machine learning, designed to approximate a cognitive task; and (5) an artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.
On the other hand, two bills are currently under consideration. The Algorithmic Accountability Act of 2019 (April 10, 2019), introduced by Senators Ron Wyden (D-OR) and Cory Booker (D-NJ) and Representative Yvette Clarke (D-NY), seeks to prevent discrimination against women and ethnic minorities through the use of algorithms in employment, consumer, credit and other decisions. In addition, Representative Mark Takano (D-CA) introduced the Justice in Forensic Algorithms Act (September 17, 2019) to ensure that defendants have access to source code and other information necessary to exercise their confrontation and due process rights when algorithms are used to analyze evidence (opening the black box of forensic algorithms).
Indeed, the most complex challenge to address at present, given its profound impact on the daily lives of citizens, is that of algorithmic transparency, a challenge heightened by the currently dominant paradigm in AI, namely algorithmic systems based on artificial neural networks,3 one of whose characteristics is that it is difficult, and sometimes impossible, to trace or explain how they produce their predictions and decisions.
The problem, then, is not only that algorithms of the most diverse functionalities discriminate through biases, prejudices or selection mechanisms (bias) that the programmers themselves introduce, or that the data implicitly carry, but also that, as a result of the machine learning performed by the neural algorithm and the impossibility of drawing its map or decision tree, it may never be possible to know how it arrives at its predictions. This, in turn, has prompted the emergence of a new field of research known as Explainable Artificial Intelligence, whose purpose is to develop algorithmic tools that allow tracing or explaining how AI systems, and deep learning systems in particular, produce their results.
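To give a concrete sense of what such tools look like, the following is a minimal sketch, in Python with the PyTorch library, of one simple explainability technique (input-gradient saliency). The tiny untrained network and the random input are purely hypothetical placeholders, not any system discussed in this column.

import torch
import torch.nn as nn

# A toy "black box": a small, untrained feed-forward network (hypothetical).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

x = torch.randn(1, 4, requires_grad=True)   # one input with 4 features
score = model(x)[0, 1]                      # the model's score for class 1
score.backward()                            # backpropagate the score to the input

# The gradient's magnitude ranks which input features most influenced this
# particular prediction -- a partial, local explanation, not the decision logic.
saliency = x.grad.abs().squeeze()
print(saliency)

Even this sketch shows the limits of the approach: the gradient points to which inputs mattered for one prediction, not to a human-readable rule, which is precisely why algorithmic transparency remains an open problem.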
Meanwhile, people’s rights will continue to be besieged by neural network algorithms and other deep learning systems.
1 Riley v. California, 573 US 373 (2014).
2 Executive Office of the President (2014). Big Data: Seizing Opportunities, Preserving Values, Washington, DC: The White House.
3 An artificial neural network is a system of programs and data structures that approximates the functioning of the human brain (or, rather, how the human brain is believed to work). Neural networks are the model underlying deep learning, which in turn is a subset of the broader field of AI known as machine learning. One way to measure the dominance of these AI techniques empirically is the explosive growth in the number of patent applications filed for them worldwide. According to the latest report of the World Intellectual Property Organization (2019), “the machine learning techniques revolutionizing artificial intelligence are deep learning and neural networks, and these are the fastest-growing AI techniques in terms of patent applications: deep learning showed an impressive average annual growth rate of 175 percent from 2013 to 2016, reaching 2,399 patent applications in 2016, and neural networks grew at a rate of 46 percent over the same period, with 6,506 patent applications in 2016. Among functional AI applications, computer vision, which includes image recognition, is the most popular. Computer vision is mentioned in 49 percent of all AI-related patents (167,038 patent documents) and is growing annually by an average of 24 percent (21,011 patent applications filed in 2016)” (pp. 13-14).