In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.
This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.
Defining an “AI System” Under the AI Act
The AI Act (Article 3(1)) defines an “AI system” as (1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs; (6) such as predictions, content, recommendations, or decisions; (7) that can influence physical or virtual environments. The Guidelines provide explanatory guidance on each of these seven elements.
Key takeaways from the Guidelines include:
- Machine-based. The term “machine-based” “refers to the fact that AI systems are developed with and run on machines” (para. 11) and covers a wide variety of computational systems, including emerging quantum computing systems (para. 13). Interestingly, the Guidelines note that “biological or organic systems” can also be “machine-based” if they “provide computational capacity” (para. 13).
- Autonomy. The concept of “varying levels of autonomy” in the definition refers to the system’s ability to operate with some degree of independence from human involvement (para. 14, AI Act Recital 12). Systems that are “designed to operate solely with full manual human involvement and intervention,” whether through manual controls or automated controls that enable humans to supervise operations, are thus out of scope of the AI system definition (para. 17). In contrast, a “system that requires manually [i.e., human] provided inputs to generate an output by itself” would satisfy the autonomy element, because the output is generated without being “controlled, or explicitly and exactly specified by a human” (para. 18).
- Adaptiveness. The Guidelines explain that “adaptiveness after deployment” refers to a system’s “self-learning capabilities, allowing the behaviour of the system to change while in use” (para. 22). The Guidelines state that “adaptiveness after deployment” is not a necessary condition for a system to qualify as an AI system, because the AI Act uses the term “may” in relation to this element of the definition (para. 23).
- Objectives. Objectives are the explicit or implicit goals of the task to be performed by that AI system (para. 24). The Guidelines draw a (not wholly clear) distinction between an AI system’s “objectives”—which are internal to the system—and its “intended purpose,” which is external to the system, relates to the context of deployment and turns on the “use for which an AI system is intended by the provider” (para. 25; citing Art. 3(12)). The Guidelines give the example of a corporate AI assistant whose intended purpose is to assist a company department to carry out certain tasks; this purpose is fulfilled through the system’s internal operation to achieve its objectives, but also relies on other factors, such as the system being integrated into the customer service workflow, the data that is used by the system and the system’s instructions for use.
- Inferencing and AI techniques. The Guidelines state that the capability to infer, from the input received, how to generate outputs is a “key, indispensable condition” of AI systems (para. 26). The Guidelines explain that the term “infer how to” is broad. It is not limited to the “ability of a system to derive outputs from given inputs, and thus infer the result”; instead, it also refers to the “building phase” of an AI system, “whereby a system derives outputs through AI techniques enabling inferencing” (para. 29). The Guidelines state that supervised learning, unsupervised learning, self-supervised learning, reinforcement learning, deep learning, and knowledge- and logic-based techniques are all examples of AI techniques that enable inferencing in the building phase.
- Outputs. Outputs include four broad categories: (1) predictions, meaning estimations about an unknown value from a known value (para. 54), (2) content, meaning newly generated material such as text or images (para. 56), (3) recommendations, meaning suggestions for specific actions, products, or services (para. 57), and (4) decisions, meaning conclusions or choices made by the AI system (para. 58).
- Interaction with the environment. Interacting with the environment means the AI system is “not passive, but actively impact[s] the environment in which [it is] deployed” (para. 60). Impacted environments can be physical or virtual.
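To make the “inference” element above concrete, the following is a minimal illustrative sketch (our own example, not drawn from the Guidelines): a toy supervised-learning model whose input–output mapping is derived from training examples during the building phase, rather than being explicitly specified by a human.

```python
# Illustrative sketch only: a minimal supervised-learning model that
# "infers, from the input it receives, how to generate outputs".
# The developer supplies example data; the system derives the mapping
# (its parameters) itself during the "building phase".

def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b from training examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# The human provides examples, not the rule; the rule is inferred.
a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])

def predict(x):
    # The output is a "prediction" in the sense of para. 54:
    # an estimation about an unknown value from known values.
    return a * x + b
```

Here no person wrote the rule “multiply by two”; the system derived it from data, which is the kind of capability the Guidelines treat as distinguishing AI systems from software whose behavior is fully specified in advance.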
The Guidelines also point to Recital 12, which excludes from the AI system definition “simpler traditional software systems or programming approaches” and systems “that are based on the rules defined solely by natural persons to automatically execute operations”. The Guidelines provide examples of systems that may fall into this category—including those for improving mathematical optimization, basic data processing, systems based on classical heuristics, and simple prediction systems. According to the Guidelines, although some of these systems have the capacity to infer, they nonetheless fall outside the scope of the definition “because of their limited capacity to analyse patterns and adjust autonomously their output” (para. 41).
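By way of contrast, the following is an illustrative sketch (again our own example, not from the Guidelines) of the kind of “simpler traditional software” Recital 12 excludes: a system whose output is determined entirely by rules a natural person defined in advance, with nothing inferred from data.

```python
# Illustrative sketch only: a rule-based system in the sense of Recital 12,
# "based on the rules defined solely by natural persons to automatically
# execute operations". Every threshold and value below is explicitly and
# exactly specified by a human, so nothing is inferred.

def shipping_cost(weight_kg):
    """Return a shipping price from fixed, human-authored rules."""
    if weight_kg <= 1:
        return 5.0
    elif weight_kg <= 5:
        return 8.0
    else:
        return 12.0
```

Such a system automatically executes operations, but its limited capacity to analyse patterns or adjust its output autonomously is exactly why, on the Guidelines’ reasoning, it falls outside the AI system definition.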
The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist with any queries.