• The EU "Artificial Intelligence Act": Ground Zero for the Regulation of the Information Revolution

    June 13, 2023
    [Image: The ChatGPT logo. ChatGPT, an AI chatbot made by OpenAI, is one of the most popular AI systems currently available to the public.]


    New boy in town. The use of Artificial Intelligence mostly falls beyond regulatory reach -- although not in Europe. To navigate this uncharted terrain, the European Commission is proposing its first-ever legal framework: the “Artificial Intelligence Act”. The regulation addresses the risks of AI and focuses primarily on strengthening rules on data quality, transparency, human oversight, and accountability. The proposal is in fact part of a wider package, which also includes the updated “Coordinated Plan on AI”.

    Some kind of class system. The cornerstone of the act is a classification system that determines the level of risk a given AI technology could pose to the health and safety or fundamental rights of an individual. Governments and companies using these tools will face different obligations, depending on the risk level. The rules also set requirements for providers of the so-called “foundation models” such as ChatGPT, which have become a key concern for regulators, given how advanced they are becoming.

    The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term “foundation model” to refer to models trained on a broad set of unlabeled data that can be adapted to many different tasks with minimal fine-tuning.

    First Act, first approach. That said, let's turn to the act itself. The AI Act sorts AI applications into four levels of risk: unacceptable, high, limited, and minimal or no risk.

    Unacceptable risk: you're poison running through my veins. AI uses perceived by the lawmaker as extremely dangerous are banned by default. The act provides some examples: systems using subliminal, manipulative, or deceptive techniques to distort behavior; systems used for risk assessments predicting criminal or administrative offenses; and systems exploiting the vulnerabilities of individuals or specific groups.

    High risk: I told you I was trouble. AI systems identified as high-risk include technology used in critical infrastructure, such as transport, that could put the life and health of citizens at risk; systems that may determine access to education and the professional course of someone's life (e.g., the scoring of exams); and systems that determine access to essential private and public services (e.g., credit scoring that denies a citizen the opportunity to obtain a loan). According to the proposed act, these high-risk systems will be subject to strict obligations before they can be put on the market, including an adequate risk assessment and mitigation system; high-quality data sets feeding the system, to minimize risks and discriminatory outcomes; and the logging of activity to ensure the traceability of results.

    Furthermore, all remote biometric identification systems are considered high risk by the act and are thus subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited. Finally, appropriate human oversight measures are mandatory to minimize risk. The act also contemplates some exceptions to these rules, where the judiciary or the so-called “independent bodies” give their authorization.

    Limited risk: Champagne problems. This tier covers AI systems subject to specific transparency obligations. When using systems such as chatbots, users should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.

    Minimal risk: Pure and simple. Last but not least, the proposed act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
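
    For readers who think in code, the act's tiered logic can be pictured as a simple lookup: each risk level carries its own bundle of obligations. What follows is only an illustrative Python sketch; the tier names come from the proposal, but the obligations listed are this article's shorthand summary, not the act's legal text.

        from enum import Enum

        # Illustrative shorthand for the proposal's four risk tiers;
        # the obligation strings summarize the article above, not the law.
        class RiskLevel(Enum):
            UNACCEPTABLE = "unacceptable"
            HIGH = "high"
            LIMITED = "limited"
            MINIMAL = "minimal"

        OBLIGATIONS = {
            RiskLevel.UNACCEPTABLE: ["banned by default"],
            RiskLevel.HIGH: [
                "risk assessment and mitigation",
                "high-quality data sets",
                "activity logging for traceability",
                "human oversight",
            ],
            RiskLevel.LIMITED: ["transparency: tell users they face a machine"],
            RiskLevel.MINIMAL: [],  # free use, e.g. spam filters, AI-enabled games
        }

        # Print each tier alongside its (summarized) obligations.
        for level in RiskLevel:
            print(level.value, "->", OBLIGATIONS[level] or ["no specific obligations"])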

    Paying for all your future sins. Moreover, the act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or six percent of global annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can also result in fines.
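
    As a back-of-the-envelope illustration of that cap -- a minimal sketch, assuming the proposal's “whichever is higher” rule and a hypothetical company turnover:

        def max_fine_eur(global_turnover_eur: float) -> float:
            """Upper bound on a fine under the proposal: EUR 30 million
            or 6% of worldwide annual turnover, whichever is higher."""
            return max(30_000_000, 0.06 * global_turnover_eur)

        # A hypothetical company with EUR 1 billion in annual turnover:
        # 6% of 1 billion is EUR 60 million, exceeding the 30 million floor.
        print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 60,000,000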

    In the land of regulation. At this stage, the regulation has only been proposed; it is not yet an actual (and enforceable) body of law. Moreover, no matter what its final shape looks like after debate and some foreseeable changes, it will probably be the first regulatory attempt to handle AI's harmful side effects. Its mere existence is hardly surprising, however, for, as one author puts it, “Every technology in history with comparably transformational capabilities has been subject to rules of some sort.” AI is absolutely “transformational” -- on this point, everybody agrees. By whom, and to what degree, it is going to be regulated depends on human -- as opposed to artificial -- intelligence, common sense, and even compassion.


    Author

    Martín Francisco Elizalde

    Martin Elizalde is an Argentine lawyer based in Buenos Aires. His areas of practice include forensic analysis, cybersecurity, and artificial intelligence.
