EU AI Act: 5 things you need to know

5 key requirements of the EU AI Act your company should act on now
Since August 2024, the EU AI Act — the European Union's regulation on Artificial Intelligence — has been in force. It defines the legal requirements for the use of AI within the EU and is the first comprehensive legal framework for AI globally. The objective is to ensure the safe, transparent, and controlled use of AI, while still enabling innovation.
More and more companies are harnessing the potential of AI — in customer service, marketing, or HR. It is therefore crucial for businesses to familiarise themselves with the regulation early on and implement the necessary measures. Importantly, the AI Act does not only apply to companies that develop AI systems or use them at scale. Even occasional use of AI tools to support day-to-day tasks can fall under the regulation’s scope.
We’ve summarised the key points of the EU AI Act and outlined the most important requirements your organisation should be aware of.
1. Assess the risk level of AI use: which rules apply?
Under the EU AI Act, AI applications are classified into four risk categories: minimal, limited, high, and unacceptable. The applicable rules vary according to the level of risk, with AI systems deemed to pose an unacceptable risk being outright prohibited.
Organisations should therefore review their AI systems to determine which risk category applies and what legal obligations they must meet. Many everyday workplace tools will fall into the minimal or limited risk category. For example, a chatbot used in customer service is generally subject only to transparency obligations. However, AI tools that support candidate screening in HR or assist with medical diagnoses are classified as high-risk and must meet stricter requirements.
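Purely as an illustration of such an internal review, the sketch below maps the example use cases mentioned above to provisional risk tiers. The mapping and the one-line obligation summaries are simplified assumptions for demonstration only, not a legal assessment of any real system.

```python
# Illustrative only: a provisional triage of AI use cases by EU AI Act risk tier.
# The classification below is an assumption for demonstration; a real assessment
# must be made against the Act's criteria, ideally with legal advice.

RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

# Hypothetical internal inventory: use case -> provisional risk tier
provisional_classification = {
    "customer service chatbot": "limited",     # transparency obligations
    "marketing copy drafting": "minimal",
    "HR candidate screening": "high",          # stricter requirements apply
    "medical diagnosis support": "high",
}

def obligations(tier: str) -> str:
    """Return a simplified reminder of what each tier implies."""
    notes = {
        "minimal": "no specific obligations; responsible use recommended",
        "limited": "transparency obligations (e.g. disclose AI use)",
        "high": "human oversight, documentation, incident reporting",
        "unacceptable": "prohibited; must not be deployed",
    }
    return notes[tier]

for use_case, tier in provisional_classification.items():
    print(f"{use_case}: {tier} -> {obligations(tier)}")
```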
2. Train employees in the use of AI
According to Article 4 of the EU AI Act, companies that develop or use AI systems must ensure their employees have adequate AI-related skills and knowledge, so-called AI literacy. Training must be tailored to the systems and use cases involved, so the depth required will vary from company to company and from tool to tool. For instance, employees using AI to create marketing content should understand that AI systems do not fact-check their output, which therefore needs to be reviewed before publication. For higher-risk systems, training is typically more comprehensive.
An easy way to provide basic AI training is to use off-the-shelf content, although costs can rise when multilingual or accessible formats are required. Content creation tools like imc Express offer pre-built resources, including AI training reviewed by the global law firm Baker McKenzie to help organisations comply with Article 4 of the EU AI Act. The X-Library in imc Express includes 60+ copyright-free, customisable courses in 80 languages, which can be tailored to an organisation's branding and content.

3. Ensure transparency towards end users
When companies use AI, the EU AI Act mandates transparency. This applies to AI-generated images and content, as well as to software with embedded AI components or AI-based recommendation systems in online shops. Such use must be clearly indicated — for example, with labels like “AI-generated” or “Created with Artificial Intelligence”.
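As a minimal sketch of what such labelling can look like in practice, the hypothetical helper below prepends a disclosure notice to AI-generated content before publication. The function name and label wording are illustrative assumptions, not wording prescribed by the Act.

```python
# Minimal sketch: attach an "AI-generated" disclosure to content before publishing.
# Label text and structure are illustrative assumptions.

def label_ai_content(content: str, generator: str = "Artificial Intelligence") -> str:
    """Prepend a clearly visible disclosure to AI-generated content."""
    return f"[AI-generated - created with {generator}]\n{content}"

print(label_ai_content("Our summer collection is here!"))
```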
4. Ensure human oversight for high-risk systems
AI systems classified as high-risk must be subject to human oversight. The individuals responsible should be able to understand the capabilities and limitations of the system and must make the final decisions themselves, without blindly relying on the AI.
One example is the shortlisting of candidates for job interviews. If AI supports the pre-selection process, HR staff must be able to understand and review the outcome — and make the final decision themselves. Companies should establish clear processes and responsibilities for such cases.
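To make this "AI proposes, human decides" pattern concrete, here is a minimal sketch of a decision gate in which the AI's suggestion is only ever a recommendation and a named reviewer records the final call. All names and the workflow are hypothetical.

```python
# Illustrative human-in-the-loop gate: the AI only proposes a shortlist entry;
# a named human reviewer must explicitly confirm or override it.

from dataclasses import dataclass

@dataclass
class Decision:
    candidate: str
    ai_recommended: bool
    reviewer: str | None = None   # must be set before the decision is final
    approved: bool | None = None  # the human's decision, not the AI's

def finalise(decision: Decision, reviewer: str, approved: bool) -> Decision:
    """Record the human's final call; refuses to proceed without a named reviewer."""
    if not reviewer:
        raise ValueError("High-risk decisions require a named human reviewer.")
    decision.reviewer = reviewer
    decision.approved = approved  # may deliberately differ from the AI's suggestion
    return decision

d = finalise(Decision("Jane Doe", ai_recommended=True), reviewer="HR lead", approved=True)
print(d)
```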
5. Understand and comply with AI documentation requirements
High-risk systems are subject to specific documentation requirements. Providers must create extensive technical documentation. However, users of such systems (called "deployers" in the Act) also bear responsibility, including the obligation to report serious incidents to the relevant authorities.
Even though AI tools in the limited or minimal risk category are not subject to these documentation requirements, it is still advisable to keep track of key information, such as:
- Which AI tools are in use
- Who is using them
- What they are being used for
- What data is being input
By keeping a simple register of this information, organisations maintain an overview of their AI usage and remain aware of potential risks. A minimal sketch of what such a record could look like follows below.
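Purely as an illustration, the example uses the four points above as fields; the field names and the sample entry are assumptions, not a prescribed format.

```python
# Minimal sketch of an internal AI usage register covering the four points above.
# Field names and the example entry are illustrative assumptions.

from dataclasses import dataclass, asdict

@dataclass
class AIToolRecord:
    tool: str        # which AI tool is in use
    users: str       # who is using it (team or role)
    purpose: str     # what it is being used for
    data_input: str  # what data is being entered

register = [
    AIToolRecord(
        tool="Generic text-generation assistant",
        users="Marketing team",
        purpose="Drafting social media copy",
        data_input="Product descriptions only; no personal data",
    ),
]

for record in register:
    print(asdict(record))
```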
Training, transparency, and responsible use
To ensure compliance with the EU AI Act, organisations should develop a clear understanding of which AI systems are in use and how they are classified in terms of risk. There are transparency obligations towards third parties — any AI-generated outputs must be labelled accordingly.
High-risk AI systems should always be subject to human supervision. More generally, responsible use of AI is essential. Employees who work with AI must receive appropriate training to ensure they have the necessary skills. imc Express offers a suitable AI training programme to support this.