Guest post by Martin Elizalde, an Argentine lawyer practicing in the field of information security
Artificial Intelligence (A.I.) is here to stay. It can be defined as a set of computational techniques and associated processes used to improve the ability of machines to perform intellectual tasks, such as pattern recognition, computer vision, and language processing (National Science and Technology Council, 2016). Its function also extends, almost astonishingly, to activities such as predicting future events and solving complex tasks. Indeed, A.I. influences our perception of the truth: it works through algorithms that, in effect, determine a person’s preferences.
Staggering as its impact is, its use has not yet been fully framed by the law. A.I. regulation is in fact rare and, despite a somewhat comprehensive European body of law, there is no general agreement on how to address it. While some countries are moving toward comprehensive regulation, it is also true that, even there, the law usually lags behind the technology. Thus, it has been observed that, with regulation still pending, “the increasing use of artificial intelligence (AI) generates new challenges for human rights, with expressed concern about the unprecedented level of surveillance across the globe by state and private actors, which is incompatible with human rights”. Why does the quoted author mention surveillance? Because it is a keyword: one of the most formidable A.I. tools is facial recognition, and it is widely used for monitoring.
What should be done, then? Some weeks ago, I watched a TED Talk. The speaker, Genevieve Bell, is a professor who directs the 3A Institute and holds the Florence Violet McKenzie Chair at the Australian National University (ANU). Instead of taking a “problem-solution” approach, she proposed asking questions about this issue and, through their answers, shedding some light on it.
For my part, I think there are two fundamental issues to be addressed: first, what is the intent behind using A.I.? Second, is the technological procedure followed to collect the information that feeds A.I. algorithms transparent enough?
Let us start with the former. In my opinion, the intent must abide by our fundamental human rights: Dignity, Freedoms, Equality, Solidarity, Citizens’ Rights, and Justice. Since human rights law outlines the minimum norms of behavior that everyone is entitled to, governments are responsible for ensuring that these minimum standards are upheld and that individuals who violate them are held accountable, typically through administrative, civil, or criminal law.
If it runs counter to these fundamental rights, the intent is unethical. For example, some countries are using facial recognition to crack down on political opposition. In such cases, the intent is clearly biased, and that use of the tool should be considered illegal.
The latter issue concerns transparency. It is a fact that this new technology rests heavily on the use of algorithms. Like the human brain, an algorithm needs pieces of information (data) to gain knowledge and understanding. Regardless of how the data is collected, it is essential to maintain the neutrality, credibility, quality, and authenticity of the data collection process.
To verify this, the data collection that makes the algorithms possible must be transparent. Algorithmic transparency is openness about the purpose, structure, and underlying actions of the algorithms used to search for, process, and deliver information. Moreover, third-party auditing is advisable to guarantee it.
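To make the idea concrete, the elements of algorithmic transparency named above could be captured in a simple, publishable record. The following is only an illustrative sketch, not any standard or legal requirement; all field names, values, and the auditor are hypothetical.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical "transparency record" covering the three elements named
# in the text (purpose, structure, underlying actions), plus the
# provenance of the data that fed the algorithm and who audited it.
@dataclass
class TransparencyRecord:
    purpose: str                                       # why the algorithm exists
    structure: str                                     # what kind of model or logic it uses
    actions: list = field(default_factory=list)        # what it does with its inputs
    data_sources: list = field(default_factory=list)   # where the training data came from
    third_party_auditor: str = ""                      # independent reviewer, if any

    def to_json(self) -> str:
        """Serialize the record so it can be published alongside the system."""
        return json.dumps(asdict(self), indent=2)

# Example of filling one in (all values invented for illustration):
record = TransparencyRecord(
    purpose="Rank search results for relevance",
    structure="Gradient-boosted decision trees",
    actions=["collect query text", "score candidate pages", "order results"],
    data_sources=["anonymized click logs", "human relevance ratings"],
    third_party_auditor="Independent Audit Co. (hypothetical)",
)
print(record.to_json())
```

Publishing such a record at design time, rather than assembling it after the fact, matches the point below that transparency is a chain running through the whole life of a model.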
Overall, transparency is not something that happens at the end of deploying a model, when someone asks about it. It is a chain that runs from the designers, to the developers, to the executives who approve deployment, to the people the model affects, and to everyone in between.
Clearly, following Mrs. Bell’s approach, there are many more questions about A.I. whose answers could provide a better understanding of the risks it poses. However, asking these two, about intent and transparency, may go a long way toward finding the right answers.
Sources & quoted authors:
Charter of Fundamental Rights of the European Union.
“Building Transparency into AI Projects” by Reid Blackman and Beena Ammanath. Harvard Business Review.
Human Rights and Law in the Age of Artificial Intelligence, Minh Tuan Dang, Vietnam National University, Hanoi. https://www.abacademies.org/articles/human-rights-and-law-in-the-age-of-artificial-intelligence-12420.html#:~:text=The%20basic%20rights%20directly%20affected,expression%2C%20and%20right%20to%20work.
“Human Rights and Artificial Intelligence: an EU external policies perspective”.
“Round Table: Will there be a global consensus over AI regulation?”, by Kerem Gülen October 24, 2022 in Artificial Intelligence. https://dataconomy.com/2022/10/artificial-intelligence-laws-and-regulations/#Why_are_law_and_regulation_important_in_AI_development
“6 big ethical questions about the future of AI”, by Genevieve Bell, https://www.ted.com/talks/genevieve_bell_6_big_ethical_questions_about_the_future_of_ai
“The Shape of Things to Come”, by H.G. Wells.