The European Commission (EC) has recently issued guidelines (“Guidelines”) on the definition of an AI system, as mandated by Article 96(1)(f) of the AI Act. The Guidelines aim to assist understanding of the scope and application of the AI Act, particularly for businesses and legal professionals navigating the regulatory landscape of AI technologies. While helpful, the Guidelines lack, like many recent EU-level guidelines and papers, the practical approach and “grip” that would allow organizations to evaluate their situation precisely and with legal certainty.

Non-binding nature of the Guidelines

It is important to note that the Guidelines are not legally binding. Only the Court of Justice of the European Union (CJEU) can provide an authoritative interpretation of the AI Act. Nevertheless, the Guidelines can serve as an initial reference point for applying the AI system definition.

Case-by-case assessment

The Guidelines emphasize the necessity of a case-by-case assessment rather than a fixed, exhaustive list of AI systems. This approach keeps the definition adaptable and relevant to the evolving landscape of AI technologies. At the same time, it makes the legal assessment more difficult for organizations and provides less legal certainty.

Key elements of an AI system

Along Article 3(1) of the AI Act, the Guidelines outline seven key elements that collectively define an AI system. The definition is lifecycle-based, covering both the pre-deployment (building) phase and the post-deployment (use) phase. Not all elements need to be present in both phases, reflecting the complexity and diversity of AI systems and keeping the definition aligned with the AI Act’s goals by remaining adaptable to various types of AI systems. However, all mandatory elements must appear at least once for a system to meet the definition of an AI system. The seven elements are:

1. Machine-based system: AI systems are fundamentally machine-based, incorporating both hardware and software components. This includes advanced technologies such as quantum computing systems.

2. Autonomy: Autonomy is a defining characteristic, referring to systems designed to operate with varying levels of independence from human intervention. Systems requiring full manual control are excluded from the AI system definition, while those with some degree of independent action qualify as autonomous.

3. Adaptiveness: Adaptiveness after deployment refers to an AI system’s ability to exhibit self-learning capabilities and change its behaviour based on new data or interactions. It is not mandatory, as the wording of Article 3(1) of the AI Act uses “may”.

4. Objectives: AI systems are designed to achieve specific objectives, which can be explicitly encoded by developers or implicitly derived from the system’s behaviour and interactions. These objectives may differ from the intended purpose of the AI system.

5. Inferencing: The ability to infer how to generate outputs using AI techniques is a key distinguishing feature. Inferencing must be interpreted broadly: it applies primarily to the use phase (when the AI system generates outputs) but also to the building phase (when the AI system derives outputs through AI techniques, enabling inferencing). The Guidelines explain that AI systems use different techniques that enable inferencing:

   a. Machine learning approaches: AI systems learn from data to achieve objectives. Examples include:
      i. Supervised learning: AI systems learn from labelled data (e.g., email spam detection, medical diagnostics, image classification).
      ii. Unsupervised learning: AI systems learn patterns from unlabelled data (e.g., drug discovery, anomaly detection).
      iii. Self-supervised learning: AI systems generate their own labels from data (e.g., language models predicting the next word in a sentence).
      iv. Reinforcement learning: AI systems learn through trial and error based on a reward function (e.g., robotic arms, autonomous vehicles).
      v. Deep learning: AI systems use layered architectures (neural networks) for representation learning, allowing them to learn from raw data.

   b. Logic- and knowledge-based approaches: AI systems infer from encoded knowledge or a symbolic representation of the task to be solved. These systems rely on predefined rules, facts, and logical reasoning rather than learning from data.

   Per the Guidelines, the following systems are not AI systems within the meaning of the AI Act and, consequently, do not fall within its scope:

   a. Systems for improving mathematical optimization: Systems designed to improve mathematical optimization or to accelerate traditional optimization methods (e.g., linear or logistic regression) have the capacity to infer, but they do not exceed basic data processing.
   b. Basic data processing: These systems execute predefined operations without learning, reasoning, or modelling; they simply present data in an informative way.
   c. Systems based on classical heuristics: These systems use rule-based approaches, pattern recognition, or trial-and-error strategies rather than data-driven learning.
   d. Simple prediction systems: Even if these systems technically use machine learning approaches, their performance does not meet the threshold required to be considered an AI system, as they only use basic statistical estimation.

6. Outputs: The capability to generate outputs such as predictions, content, recommendations, or decisions sets AI systems apart from other software. AI systems outperform traditional software by handling complex relationships in data and generating more nuanced, dynamic, and sophisticated outputs. The Guidelines also provide more detail on the different categories of outputs:

   a. Predictions: AI systems estimate unknown values based on given inputs. Unlike non-AI software, machine learning models can identify complex patterns and make highly accurate predictions in dynamic environments (e.g., self-driving cars, energy consumption forecasting).
   b. Content: AI systems can create new material, including text, images, and music.
   c. Recommendations: AI systems personalize suggestions for actions, products, or services based on user behaviour and large-scale data analysis. Unlike static, rule-based non-AI systems, AI can adapt in real time and provide more sophisticated recommendations (e.g., hiring suggestions in recruitment software).
   d. Decisions: AI systems autonomously make conclusions or choices, replacing human judgment in certain processes.

7. Interaction with the environment: AI systems actively interact with and impact their deployment environments, including both tangible physical objects (e.g., robot arms) and virtual spaces (e.g., digital spaces, data flows, and software ecosystems).

Implications

While the Guidelines provide a useful starting point, including examples and explanations, they ultimately emphasize that each system must be assessed individually to determine whether it qualifies as an AI system.
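For readers less familiar with the technical side, the boundary the Guidelines draw between a system based on classical heuristics (predefined rules, outside the AI system definition) and one that learns from data (supervised learning, inside it) can be sketched in a few lines of Python. This is an illustrative toy, not a legal test: the spam example is taken from the Guidelines' supervised-learning illustration, but the keyword rules, training data, and scoring scheme are invented for this sketch.

```python
def rule_based_spam_filter(subject: str) -> bool:
    """Classical heuristic: predefined rules only, with no learning,
    reasoning, or modelling (per the Guidelines, not an AI system)."""
    return any(word in subject.lower() for word in ("free", "winner", "prize"))


def train_keyword_weights(examples):
    """Minimal supervised learner: derives keyword weights from labelled
    examples instead of rules a developer wrote by hand."""
    weights = {}
    for subject, is_spam in examples:
        for word in subject.lower().split():
            weights[word] = weights.get(word, 0) + (1 if is_spam else -1)
    return weights


def learned_spam_filter(subject: str, weights) -> bool:
    """Infers an output (a prediction) from what was learned, including
    for subjects never seen during training."""
    score = sum(weights.get(word, 0) for word in subject.lower().split())
    return score > 0


# Invented training data, labelled spam (True) or not spam (False).
training_data = [
    ("claim your prize now", True),
    ("you are a winner", True),
    ("meeting agenda attached", False),
    ("quarterly report draft", False),
]
weights = train_keyword_weights(training_data)

print(rule_based_spam_filter("claim your prize now"))       # True (rule fires on "prize")
print(learned_spam_filter("claim your prize now", weights))  # True (learned weights)
print(learned_spam_filter("meeting agenda attached", weights))  # False
```

The legally relevant contrast is where the behaviour comes from: the first filter only executes rules its developer encoded, while the second derives its decision boundary from the labelled data it was given.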
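The Guidelines' example of self-supervised learning, a language model predicting the next word in a sentence, can likewise be made concrete with a toy bigram model. The point of the sketch is that the system generates its own labels (each word's successor) from raw text, with no human labelling; the tiny corpus is invented for illustration and real language models are vastly more complex.

```python
from collections import Counter, defaultdict

# Invented toy corpus; the "labels" below are generated from the text itself.
corpus = "the commission issued guidelines the commission issued guidelines today".split()

# Self-supervised step: for each word, the next word in the text serves
# as its automatically generated label.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1


def predict_next(word: str) -> str:
    """Infer the most likely next word from the learned bigram counts."""
    return bigrams[word].most_common(1)[0][0]


print(predict_next("commission"))  # "issued"
print(predict_next("the"))         # "commission"
```

Even at this scale, the system exhibits the inferencing the Guidelines describe: its outputs are derived from patterns learned from data rather than from rules encoded in advance.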