The AI Act, which came into force on 1 August 2024, marks a milestone for the responsible use of artificial intelligence (AI) in the EU. While the majority of its provisions will only apply from 2 August 2026, the provisions on AI literacy pursuant to Art. 4 AI Act will apply from 2 February 2025. Art. 4 is also one of the very few provisions of the AI Act that applies to all AI systems within the Act's scope.
In view of this, companies are now faced with the challenge of ensuring a sufficient level of AI literacy within a short space of time. Promoting AI literacy is also of strategic importance for companies. It not only allows them to operate securely, but also protects against liability risks and strengthens confidence that AI is being used responsibly. Art. 4 AI Act is therefore a key provision for establishing internal AI compliance within an organisation, even though it imposes no penalties for breaches.
The following article outlines the main regulatory content of this important provision and presents a potential training concept for teaching AI literacy.
Nature of Article 4 AI Act as an appeal
Art. 4 AI Act is formulated as an appeal rather than as a concrete obligation, which is reflected in the fact that a breach of Art. 4 AI Act is not subject to penalties (e.g. fines). This does not mean, however, that a company should neglect its training obligation: the lack of explicit penalties does not release companies from their responsibility to ensure that their AI systems are used securely.
If damage is caused by operating an AI system incorrectly or by failing to perform an adequate risk assessment, and that damage could have been prevented by appropriate training, this could be interpreted as a breach of the general due-diligence obligation. Companies should therefore take proactive training measures. This will not only protect the company's reputation, but also strengthen trust among employees and business partners and show that the organisation takes its legal and ethical responsibility for the use of AI seriously.
AI literacy: concept and requirements
According to Art. 4, "providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf," taking prior knowledge and the type of AI system used into account.
The AI Act does not provide an abstract standard for determining when the level of AI literacy is sufficient. This will inevitably lead to implementation problems and considerable effort and expense in business practice. This effort and expense can, however, be significantly reduced through modular training courses that teach the dimensions of AI literacy provided for in the AI Act in varying depth depending on the target audience.
Different dimensions of AI literacy
Art. 3 (56) AI Act defines AI literacy as "skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause".
This definition encompasses several dimensions to ensure that users apply the technology in a technically, legally and ethically sound manner. The literacy requirements include:
- Technical knowledge: Training should be designed to teach the basic functions of AI systems, machine learning and algorithms. This enables users to understand and scrutinise the AI system's decision-making processes.
- Awareness of opportunities and risks: AI offers numerous advantages, such as increased efficiency and improved decision-making processes. At the same time, users must be able to recognise risks such as data protection problems, bias and security gaps so that they can actively prevent them.
- Legal and ethical understanding: Users must be informed about the legal requirements and understand the EU's ethical standards to use AI responsibly. These standards include, in particular, data protection, the avoidance of discrimination and the obligation of transparency and documentation.
The requirements are therefore not just about conveying the legal framework for the use of AI; the aim is to enable employees to use AI responsibly.
As a result, satisfying the requirements of Art. 4 AI Act calls for a holistic concept for teaching AI literacy, one that addresses the technical, legal, ethical and social issues associated with artificial intelligence.
Training courses as a foundation for AI literacy
An important measure for teaching AI literacy within the meaning of Art. 4 AI Act is training all those who work in any way with AI systems or support their operation. These include in particular:
- employees in operational roles: those directly responsible for operating and maintaining AI systems.
- executives and managing directors: those who make strategic decisions about the use and implementation of AI and require a broad understanding of the technologies and their implications.
- temporary agency workers and external service providers: those who work on a temporary basis or as consultants must also have the necessary literacy for dealing with AI.
Although the teaching of AI literacy in accordance with Art. 4 AI Act is not limited to formal training, a comprehensive training concept lays the foundation to ensure that all parties involved are able to operate AI systems correctly and responsibly. They must be able to identify and minimise risks at an early stage. Simply put, they must be AI literate.
Modular training concept for teaching AI literacy to specific target groups
A modular training concept makes it possible to adapt training courses flexibly to the different needs and knowledge levels of employees and thus practically satisfy the requirements of Art. 4 AI Act. Such a concept divides the training courses into different modules that are tailored to specific target groups and requirements. The training courses could be structured as follows:
- Basic training for all employees: This general training course provides a sound understanding of AI, including how AI systems work, legal framework conditions and ethical considerations. The aim is to establish a broad basis of knowledge about the opportunities and risks of AI.
- Specialised training for executives and managers: For decision-makers, the focus is less on technical details and more on understanding the potential and legal framework conditions of AI. The aim is to strengthen strategic decision-making when dealing with AI.
- Technical training for IT and developers: Employees with a technical background need detailed knowledge of AI mechanisms such as machine learning, data processing and algorithms. These training courses address the specific technical challenges that arise in the AI development process.
- Training for external advisors and service providers: External persons involved in AI-related processes need a basic understanding of the AI technology used and the applicable legal and ethical requirements. This ensures that external parties involved in using AI systems are also able to do so in a secure and compliant manner.
Further measures to promote AI literacy
In addition to training, companies can take further measures to promote AI literacy in the long term:
- Internal guidelines and standards: These create transparency and provide clear guidance for all employees when working with AI. The standards should regulate the use, monitoring and maintenance of AI and ensure that the requirements of the AI Act are met.
- Appointment of an AI officer: An AI officer acts as the central contact person and coordinator for all AI-related issues in the organisation. They can organise training courses and are on hand to answer employees' questions or solve their problems.
- Regular training programmes: The rapid development in AI makes continuous training necessary to stay up to date and be able to react to new legal developments.
These measures help to create a sound knowledge base and a high level of expertise in dealing with AI. Appointing an AI officer and offering regular further training also ensure centralised coordination and sustainable support.
Employees' AI literacy as a strategic goal
The requirements of the AI Act are a strategic opportunity for companies to ensure long-term security and quality when dealing with AI. A modular training concept makes it possible to implement the requirements of Art. 4 in a targeted manner and prepare employees for their respective specific roles in dealing with AI. Companies that take this training obligation seriously not only strengthen their own compliance and risk mitigation, but also promote a corporate culture of responsibility and security when dealing with AI. This means that these companies will be perceived as trustworthy players on the market and at the same time will benefit from the advantages brought by the proper use of AI.
The training strategy is more than just a formality. It lays the foundation for the secure and future-oriented use of AI and strengthens confidence that new technologies will be used responsibly.
For more information on the CMS AI literacy training programme and other topics, follow this link: CMS Client Academy | Artificial Intelligence – Basics | E-Learning. For CMS’s outlook on the AI Act, click here: Looking ahead to the EU AI Act (cms.law).
For more direct information on how to meet these training obligations, contact your CMS client partner or these CMS experts.