
What the EU AI Act Means for the Enterprise, Wherever It’s Located

In the same way entities that access, process, and hold data belonging to individuals or entities are held accountable under GDPR rules, these new EU AI rules will also apply to entities far beyond the boundaries of the EU itself.

Martha Buyer

October 7, 2024

Image: putilov_denis - stock.adobe.com

Why should you pay attention to the European Union’s groundbreaking and world-leading take on AI regulation? Because enterprise communications are global, and what happens in one country may affect an enterprise somewhere else. The information here, I hope, will provide some important direction and general information about the EU’s Artificial Intelligence Act. We examined this act in December 2023, but I want to use this post to expand on and explain some key provisions that affect enterprise communications technology professionals.

With the adoption of this legislation, the EU has forcefully taken the lead in drafting language that is far more than guidance about the deployment, use and storage of AI systems, and the data they manipulate, to protect its governments, enterprises and citizens. (NOTE: I didn’t say “residents.” Here’s why: residents are people who live in the EU and make up a more limited group. Citizens could be anywhere in the world, and their data, regardless of where each citizen is physically situated, is subject to these rules, even if that citizen is located in Nebraska. This is also true for GDPR, but it’s an important distinction.) Parts of the act have already become enforceable, with the balance becoming enforceable within the next 18 months.

The EU has taken the lead in drafting not just policy, but binding and enforceable laws addressing the use of AI and the underlying data that drives it. Over the summer, the Biden Administration announced new AI actions and secured what its own press release called a “major voluntary commitment” to enhance a previous executive order on AI regulation and management. But any way you look at it, a voluntary commitment is a much lower regulatory burden than a binding piece of legislation. Think of the Pirate Code, and remember that Captain Jack Sparrow referred to it as “guidelines.” Guidelines are nothing more than suggestions. Laws are laws.

Like the EU’s document, the U.S.’s executive actions direct that risks be identified and managed, from as early as initial product development through implementation and ongoing system management. While actions such as these are certainly steps in the right direction, “voluntary commitment” is a long, long way from enforceable law. Compound this with the fact that technology evolves much more quickly than legislation ever will, and it’s not hard to see that there’s a big gap between what’s going on on this side of the pond vs. what’s happening in the EU.

In the same way entities that access, process and hold data belonging to individuals or entities are held accountable under GDPR rules, these new EU AI rules will also apply to entities far beyond the boundaries of the EU itself. While this note only provides a few key highlights, certain definitions are critical to understanding what the act could mean to non-EU-based businesses. Again, while this list is far from exhaustive, these four words and phrases will be critically important in managing data at all levels.

What follows are four key definitions from the Artificial Intelligence Act (a brief illustrative sketch, in code, appears after the list):

Deployer - a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity. A deployer is an entity that is using an AI product or system in its operations. This could be a bank, a governmental agency or a school system, for example.

Data Holder - a natural or legal person who has the right or obligation, in accordance with EU or national law, to use and make available data — including, where contractually agreed — product data or related service data which it retrieved or generated during the provision of a related service. Here, an example would be an entity that doesn’t just use AI and underlying data, but that retains that data.

Importer - a natural or legal person located or established in the [European] Union that places on the EU market an AI system that bears the name or trademark of a natural or legal person established in a third country. An example would be an AI system running in the UK that utilizes a product developed and owned by an entity in the U.S. There’s nothing wrong with this model, but it does identify that some part of the process has crossed an international border.

Provider - a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
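
For readers who think in code, here is a minimal, purely illustrative sketch of how an enterprise might tag the AI systems in its inventory with these four roles. Nothing below comes from the Act itself; the class, field and value names are assumptions invented for illustration.

    from dataclasses import dataclass
    from enum import Enum, auto

    class ActRole(Enum):
        """The four roles defined above (the enum structure is illustrative)."""
        DEPLOYER = auto()     # uses an AI system under its own authority
        DATA_HOLDER = auto()  # retains and makes available the underlying data
        IMPORTER = auto()     # places a third-country system on the EU market
        PROVIDER = auto()     # develops/markets the system under its own name

    @dataclass
    class AISystemRecord:
        """One entry in a hypothetical enterprise AI inventory."""
        name: str
        roles: set[ActRole]  # one entity can hold several roles at once

    # Example: a bank deploying a chatbot and retaining the data it generates
    chatbot = AISystemRecord(
        name="customer-service-chatbot",
        roles={ActRole.DEPLOYER, ActRole.DATA_HOLDER},
    )

Note that the roles are not mutually exclusive: the same enterprise can simultaneously be, say, a deployer and a data holder, and its obligations stack accordingly.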

GDPR and the EU Artificial Intelligence Act are really only about identifying and managing risk. The EU Artificial Intelligence Act document defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.” That’s such a clear definition that it bears repeating: “the combination of the probability of an occurrence of harm and the severity of that harm.” 
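
The Act states that definition qualitatively rather than as a formula, but it echoes familiar risk-matrix arithmetic, which can be sketched in a few lines. The numeric scales below are my own assumption; the Act prescribes no scoring scheme.

    def risk_score(probability_of_harm: float, severity_of_harm: int) -> float:
        """Toy risk-matrix arithmetic: probability (0.0-1.0) times severity (1-5).

        Illustrative only: the EU AI Act defines risk qualitatively and does
        not prescribe any numeric scoring scheme.
        """
        assert 0.0 <= probability_of_harm <= 1.0
        assert 1 <= severity_of_harm <= 5
        return probability_of_harm * severity_of_harm

    # A likely-but-minor harm can score lower than an unlikely-but-severe one.
    print(risk_score(0.8, 1))  # 0.8
    print(risk_score(0.1, 5))  # 0.5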

Identifying and managing risk is why lawyers have jobs. In the case of the EU Artificial Intelligence Act, a four-level pyramid of risk has been created, with activities in the “high risk” category warranting the most attention. We addressed this in a December 2023 post on the EU Artificial Intelligence Act, but what follows is a somewhat deeper dive.

The lowest level, “Minimal Risk,” includes voluntary codes of conduct, spam filters and video games. Here, deployers must make individuals aware that such systems are in place and in use. Further, deployers that generate or manipulate image, audio or video content that constitutes a “deep fake” must disclose that the content in question has been either manipulated or generated artificially. The risks associated with these activities, while not non-existent, do not warrant the scrutiny or disclosures that riskier data creation, manipulation and storage demand.

The second level of risk is categorized as “Limited Risk.” Included in this level are emotion recognition, biometric categorization, and AI-generated and manipulated content.

“High Risk” activities require a conformity assessment and, under Articles 26 and 27 of the EU AI Act, obligate the deployer to rely upon the instructions for actual use, monitor and report issues, retain logs, provide human oversight, and assess whether any fundamental rights are, or could be, impacted by the AI system in question. Data including biometrics, and information related to critical infrastructure, education, employment and work management, all fall into this category. Deployers of high-risk AI systems must commit, interestingly, to use such systems “in accordance with the provider’s instructions for use,” and further agree to cooperate with law enforcement and other governmental authorities as requested. Such commitments to extensive and recorded oversight may go a long way in preventing unauthorized access to AI systems and data.
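
Rendered as a simple checklist, those deployer obligations might look like the sketch below. The structure and wording are mine, summarizing the paragraph above, not language from the Act.

    # Hypothetical compliance checklist for a deployer of a high-risk AI
    # system, tracking the obligations summarized above.
    HIGH_RISK_DEPLOYER_OBLIGATIONS = [
        "Use the system per the provider's instructions for use",
        "Monitor operation and report issues",
        "Retain automatically generated logs",
        "Provide human oversight",
        "Assess impact, actual or potential, on fundamental rights",
        "Cooperate with law enforcement and other authorities as requested",
    ]

    def unmet_obligations(completed: set[str]) -> list[str]:
        """Return the checklist items not yet marked complete."""
        return [o for o in HIGH_RISK_DEPLOYER_OBLIGATIONS if o not in completed]

    print(unmet_obligations({"Provide human oversight"}))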

The highest level of risk is categorized as “Not Acceptable.” Under the EU AI Act, such access and actions are strictly prohibited. The AI practices at this level include “subliminal, manipulative and deceptive systems, systems that exploit vulnerabilities; systems that utilize and/or access facial recognition databases, those that infer emotions, include biometric categorization and social scoring.” The EU AI Act explicitly prohibits them.
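
Pulling the four tiers together, a rough lookup table of the example activities named in this article might look like the following. The mapping is only as granular as the discussion above; classifying a real system requires legal analysis, not a dictionary lookup.

    # Rough mapping of the example activities named above to the Act's four
    # risk tiers; illustrative only, not a substitute for legal analysis.
    RISK_TIERS = {
        "Minimal Risk": ["spam filters", "video games",
                         "voluntary codes of conduct"],
        "Limited Risk": ["emotion recognition", "biometric categorization",
                         "AI-generated and manipulated content"],
        "High Risk": ["biometrics", "critical infrastructure", "education",
                      "employment and work management"],
        "Not Acceptable": ["subliminal/manipulative systems",
                           "facial recognition databases", "social scoring"],
    }

    def tier_for(activity: str) -> str | None:
        """Look up which tier an example activity falls into, if listed."""
        for tier, examples in RISK_TIERS.items():
            if activity in examples:
                return tier
        return None

    print(tier_for("social scoring"))  # Not Acceptable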

The challenge on the enterprise side is both to identify activities that can be considered high risk and then to manage those activities to minimize potential exposure. The AI-infused world is fraught with uncertainty, particularly as new rules and obligations are rolled out and become not only the law of the land, but the enforceable law of the land.

Many new developments, and certainly litigation, will continue to shape the governance of AI practices throughout the world. For now, the EU is leading the pack of regulations and regulators, but it’s far from alone.

About the Author

Martha Buyer

Martha Buyer is an attorney whose practice is largely limited to the practice of communications technology law. In this capacity, she has negotiated a broad array of agreements between providers and both corporate and government end users. She also provides a wide range of communications technology consulting and legal services, primarily geared to support corporate end-users' work with carriers and equipment and service providers. In addition, she works extensively with end users to enable them to navigate international, federal, state and local regulatory issues, with particular attention to emergency calling, along with issues related to corporate telecommunications transactions among and between carriers, vendors and end-users. She has also supported state and federal law enforcement in matters related to communications technology. Ms. Buyer's expertise lies in combining an understanding of the technologies being offered along with contractual issues affecting all sides of the transaction. Prior to becoming an attorney, Ms. Buyer worked as a telecommunications network engineer for two major New York-based financial institutions and a large government contractor. She is an adjunct faculty member at Regis University, the Jesuit college in Denver, where she teaches a graduate-level course in Ethics in IT.