Co-authored by Adriaan Lourens, Candidate Attorney
Artificial intelligence is rapidly reshaping market dynamics and challenging regulators to reconsider how competition law should evolve in response. The use of AI and algorithmic tools raises potential risks under competition law, including the possibility of facilitating collusion between competitors. This poses a difficult balancing act for competition authorities, who must foster innovation while curbing anti-competitive conduct in increasingly complex digital markets.
Understanding the risk
The South African Competition Act, 1998 prohibits agreements between, or concerted practices by, firms, as well as decisions by associations of firms, that lead to collusive behaviour such as price fixing, market allocation or collusive tendering (bid rigging). Importantly, the Act does not require a formal or express agreement to establish collusion. A “concerted practice” is also prohibited, and is defined as “co-operative or co-ordinated conduct between firms, achieved through direct or indirect contact, that replaces their independent action, but which does not amount to an agreement”.
At first glance, the use of AI tools by individual firms, in the absence of any express coordination with competitors, may not appear to contravene the Act. However, such conduct should not be regarded as entirely unproblematic. A growing concern is that, despite the absence of any explicit agreement, AI tools may produce aligned or coordinated pricing or market strategies, particularly when they rely on similar training data or are built on the same models.
The risk is increased where multiple competitors use the same AI tools, as this may create a scenario where pricing decisions are indirectly coordinated through a shared system. Conduct that initially appears independent and lawful may, over time, develop into a “concerted practice”.
Information exchange through AI
Another concern is that AI tools may unintentionally enable the sharing of commercially sensitive information between competitors. As AI tools become more integrated into business operations, the risk of unintentional data sharing increases, especially where such tools are trained on data from multiple competitors in a particular market.
AI models rely on large datasets, and it is often unclear what data has been used or its origin. If this includes sensitive information from competing firms, such as pricing, cost or customer details, the risk of unintended collusion becomes significantly higher. The capacity of AI to rapidly process and analyse massive datasets in real time amplifies the impact and risk of potential anti-competitive outcomes.
Regulatory uncertainty
South Africa, like most jurisdictions, lacks a dedicated legislative or regulatory framework governing the use of AI, and the Competition Commission has yet to issue formal guidance on the competition law risks associated with the use of AI tools.
Some jurisdictions are taking proactive steps. The EU, for example, has introduced the AI Act, a broad regulatory framework aimed at addressing a range of AI-related issues. While the AI Act expands the investigative and oversight powers of competition authorities in relation to AI, it does not directly address concerns around collusion facilitated by AI.
However, existing EU case law has better-developed doctrines such as conscious parallelism, or autonomous tacit collusion, which describe scenarios where firms independently adopt similar market conduct without any agreement or communication. Based on these principles, firms may face liability where the coordination resulting from the use of AI tools could reasonably have been foreseen or prevented.
In the United States, the Department of Justice’s Antitrust Division and the Federal Trade Commission are investigating various industries that use algorithmic pricing, treating the joint use of a common pricing algorithm to set baseline or maximum prices, or to share information, as potentially unlawful concerted action under the applicable competition legislation. These matters remain at various stages of investigation and litigation, and to date no final court decision has set a binding legal precedent on algorithmic pricing collusion.
Risk mitigation measures
Given the legal uncertainty and increasing regulatory scrutiny, firms should adopt measures to mitigate the potential competition law risks linked to the use of AI.
This includes carefully assessing AI vendors to ensure they do not inadvertently facilitate anti-competitive behaviour, and ensuring that service level agreements include safeguards to minimise liability and promote compliance with competition law.
Regular audits of AI tools are crucial to confirm ongoing compliance with the Act and to ensure appropriate controls over data access and sharing.
Firms should also avoid disclosing excessive or unnecessary information, whether publicly or through unsecured AI tools. The Competition Commission’s Guidelines on the Exchange of Competitively Sensitive Information emphasise that public availability does not necessarily render information non-sensitive. Firms must carefully assess both the nature and extent of any disclosure, limiting it to what is strictly necessary. This is especially important in highly concentrated sectors with few market participants, where the use of algorithmic and AI tools may inadvertently lead to coordinated outcomes in contravention of the Act, even in the absence of direct communication between competitors.
AI offers significant advantages but also introduces complex legal risks. Firms must carefully manage the AI tools they use and put strong compliance systems in place to reduce competition law risks.