New boy in town. The use of artificial intelligence still mostly falls beyond regulatory reach -- although not in Europe. To navigate this uncharted terrain, the European Commission has proposed its first-ever legal framework for AI: the “Artificial Intelligence Act”. The regulation addresses the risks of AI and focuses primarily on strengthening rules on data quality, transparency, human oversight, and accountability. The proposal is part of a wider package that also includes the updated “Coordinated Plan on AI”.
Some kind of class system. The cornerstone of the act is a classification system that determines the level of risk an AI technology could pose to the health, safety, or fundamental rights of an individual. Governments and companies using these tools will face different obligations depending on that risk level. The rules also set out requirements for providers of so-called “foundation models” such as ChatGPT, which have become a key concern for regulators given how capable they are becoming.
The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term “foundation model” to refer to models trained on a broad set of unlabeled data that can be adapted to many different tasks with minimal fine-tuning.
First Act, first approach. That said, let's turn to the act itself. The AI Act sorts AI applications into four levels of risk: unacceptable, high, limited, and minimal or no risk.
Unacceptable risk: you're poison running through my veins. AI uses that the lawmaker perceives as extremely dangerous are banned by default. The act gives some examples: systems using subliminal, manipulative, or deceptive techniques to distort behavior; systems used for risk assessments predicting criminal or administrative offenses; and systems exploiting the vulnerabilities of individuals or specific groups.
High risk: I told you I was trouble. AI systems identified as high-risk include technology used in critical infrastructure (e.g. transport), which could put the life and health of citizens at risk; systems that may determine access to education and the professional course of someone’s life (e.g. scoring of exams); and systems that govern access to essential private and public services (e.g. credit scoring that denies citizens the opportunity to obtain a loan). According to the proposed act, these high-risk systems will be subject to strict obligations before they can be put on the market, including adequate risk assessment and mitigation systems; high-quality datasets feeding the system, to minimize risks and discriminatory outcomes; and logging of activity to ensure the traceability of results.
Furthermore, all remote biometric identification systems are considered high risk under the act and are thus subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited. Finally, appropriate human oversight measures are mandatory to minimize risk. The act also contemplates some exceptions to these rules, where the judiciary or so-called “independent bodies” give their authorization.
Limited risk: Champagne problems. Limited risks are those posed by AI systems that carry specific transparency obligations. When using systems such as chatbots, users should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.
Minimal risk: Pure and simple. Last but not least, the proposed act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
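For readers who think in code, the tiered structure can be summed up as a simple lookup. The sketch below is a minimal, purely illustrative Python model of the four risk levels and of how the example applications mentioned above might map onto them; the mapping and the obligation summaries are our own shorthand reading of the proposal, not anything the act itself defines.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers in the proposed AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned by default
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # free use, no new obligations

# Hypothetical mapping of the example applications discussed above
# to the tier each would plausibly fall into under the proposal.
EXAMPLES = {
    "subliminal manipulation system": RiskLevel.UNACCEPTABLE,
    "exam scoring system": RiskLevel.HIGH,
    "credit scoring system": RiskLevel.HIGH,
    "remote biometric identification": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
    "AI-enabled video game": RiskLevel.MINIMAL,
}

# One-line summaries of the obligations attached to each tier.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited",
    RiskLevel.HIGH: "risk assessment, data-quality controls, activity logging, human oversight",
    RiskLevel.LIMITED: "must disclose to users that they are interacting with a machine",
    RiskLevel.MINIMAL: "no additional obligations",
}

for application, level in EXAMPLES.items():
    print(f"{application}: {level.value} risk -> {OBLIGATIONS[level]}")
```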
Paying for all your future sins. Moreover, the act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or six percent of total worldwide annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can also result in fines.
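A quick worked example shows how the two caps interact, assuming the “whichever is higher” rule in the proposal; the function name and the turnover figures here are illustrative only.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on an administrative fine under the proposal:
    EUR 30 million or 6% of total worldwide annual turnover,
    whichever is higher."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# For a company with EUR 2 billion in worldwide annual turnover,
# 6% comes to EUR 120 million, well above the EUR 30 million floor.
print(f"{max_fine_eur(2e9):,.0f}")  # 120,000,000

# For a firm with EUR 100 million in turnover, 6% is only EUR 6 million,
# so the flat EUR 30 million figure is the binding upper bound.
print(f"{max_fine_eur(1e8):,.0f}")  # 30,000,000
```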
In the land of regulation. At this stage, the regulation has only been proposed; thus it is not yet an actual (and enforceable) body of law. Moreover, whatever its final shape after debate and some foreseeable changes, it will probably be the first regulatory attempt to handle AI’s harmful side effects. Its mere existence is hardly surprising, for, as one author puts it, “Every technology in history with comparably transformational capabilities has been subject to rules of some sort.” AI is absolutely “transformational” -- on this point, everybody agrees. By whom and to what degree it will be regulated depends on human (as opposed to artificial) intelligence, common sense, and even compassion.
Works Cited
- Brussels, 21.4.2021, COM(2021) 206 final, 2021/0106 (COD), “REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS”, {SEC(2021) 167 final} - {SWD(2021) 84 final} - {SWD(2021) 85 final}
- The Artificial Intelligence Act
- Regulatory framework proposal on artificial intelligence
- The European Union’s Artificial Intelligence Act explained
- Explainer: What is the European Union AI Act?
- Europe takes aim at ChatGPT with what might soon be the West’s first A.I. law. Here’s what it means
- Regulating AI Will Be Essential. And Complicated
- The case of the EU AI Act: Why we need to return to a risk-based approach
- Foundation Models
- Developing and understanding responsible foundation models
- “Pure and Simple”, performed by Dolly Parton
- “You Know I’m No Good”, performed by Amy Winehouse