Since 2025 the Artificial Intelligence Act (AI Act) has been gradually taking effect in EU Member States, and in Poland a dedicated act implementing its provisions is being drafted. It is the most extensive regulatory initiative on AI in the world to date and will directly affect businesses across a wide range of sectors – from new technologies to industry, finance and services.
Why is the EU introducing AI regulation?
The AI Act aims to ensure uniform rules for the development, placing on the market and use of artificial intelligence systems throughout the EU. The legislator’s main objectives are to:
- ensure a high level of protection of health, safety and fundamental rights,
- reduce the risk of unfair practices and manipulation,
- create a framework that supports innovation and the competitiveness of European companies,
- avoid fragmentation of national rules and provide legal certainty for businesses.
The goal is for AI to develop responsibly and in line with EU values, while at the same time guaranteeing companies free access to the single market.
Timeline for the entry into force of the rules
The provisions will not apply in full immediately. Transitional periods have been introduced:
- from 2 February 2025 – the first chapters began to apply (definitions, AI literacy), together with the bans on prohibited practices,
- from 2 August 2025 – provisions on general-purpose AI models, governance, supervision and penalties entered into force,
- from 2 August 2026 – the Act applies in full, including most obligations for high-risk systems,
- from 2 August 2027 – the end of an extended transition period for high-risk systems embedded in products covered by EU harmonisation legislation.
For companies, this means they need to start preparations now, especially in the areas of classifying AI systems and adapting compliance processes.
What obligations will companies face?
Businesses will need to determine whether their systems fall into one of the following categories (a simplified inventory sketch follows the list):
- prohibited practices – e.g. manipulative subliminal techniques or social scoring,
- high-risk systems – e.g. systems used in recruitment, finance, healthcare or critical infrastructure,
- limited-risk systems – e.g. chatbots and content generators, which must meet transparency requirements,
- minimal-risk systems – where obligations are limited.
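For organisations cataloguing their AI systems internally, these four tiers can be captured in a simple data model. The sketch below is illustrative only: the tier names mirror the Act's categories, but the example systems and their assigned tiers are simplified assumptions for demonstration, not legal classifications.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk tiers defined by the AI Act (simplified labels)."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


@dataclass
class AISystem:
    name: str
    purpose: str    # free-text description of the intended use
    tier: RiskTier  # tier assigned after legal review
    notes: str = "" # e.g. which high-risk use case applies


# Hypothetical inventory entries; the tier assignments are
# illustrative assumptions, not legal determinations.
inventory = [
    AISystem("cv-screener", "ranking job applicants", RiskTier.HIGH,
             "recruitment is listed as a high-risk use case"),
    AISystem("support-chatbot", "customer Q&A", RiskTier.LIMITED,
             "must disclose to users that they are interacting with AI"),
    AISystem("spam-filter", "internal email filtering", RiskTier.MINIMAL),
]

for system in inventory:
    print(f"{system.name}: {system.tier.value} – "
          f"{system.notes or 'no extra obligations noted'}")
```

An inventory like this is typically the first compliance step, since every subsequent obligation depends on the tier assigned to each system.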
Providers of high-risk systems will be required to maintain detailed technical documentation ensuring the auditability and explainability of how algorithms function.
Companies will have to report serious incidents related to the operation of AI to the supervisory authority.
High-risk systems may not be placed on the market without a prior conformity assessment and the CE marking confirming compliance with the regulation’s requirements.
Breaches of the rules may result in severe financial penalties – up to EUR 35 million or 7% of global annual turnover for the most serious violations – a scale comparable to, and in some cases exceeding, the fines known from the GDPR.
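A provider might gate the market release of a high-risk system on these pre-market obligations. The checklist below is a hedged sketch: the field names and the `ready_for_market` helper are hypothetical constructs introduced for illustration, while the actual conformity-assessment procedure is defined in the Act and may involve a notified body.

```python
from dataclasses import dataclass


@dataclass
class HighRiskComplianceStatus:
    """Hypothetical pre-market checklist for a high-risk AI system."""
    technical_documentation_complete: bool  # auditability/explainability docs
    conformity_assessment_passed: bool      # internal or notified-body assessment
    ce_marking_affixed: bool                # CE marking confirming compliance
    incident_reporting_channel_ready: bool  # process for notifying the authority


def ready_for_market(status: HighRiskComplianceStatus) -> bool:
    """Return True only if every pre-market obligation is satisfied."""
    return all([
        status.technical_documentation_complete,
        status.conformity_assessment_passed,
        status.ce_marking_affixed,
        status.incident_reporting_channel_ready,
    ])


status = HighRiskComplianceStatus(
    technical_documentation_complete=True,
    conformity_assessment_passed=True,
    ce_marking_affixed=False,  # still pending: system must not ship yet
    incident_reporting_channel_ready=True,
)
assert not ready_for_market(status)  # blocked until the CE marking is affixed
```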
Opportunities for businesses
Despite the considerable number of obligations, the new rules may ultimately prove beneficial:
- greater trust from customers and business partners thanks to certified, safe AI systems,
- support from regulators through regulatory sandboxes,
- easier access to the EU market – harmonised rules reduce the risk of legal barriers between countries,
- an opportunity to gain a competitive edge for companies that are among the first to implement compliant and transparent solutions.
The new AI rules mean that companies must prepare thoroughly for a new regulatory reality. Organisations will need to implement procedures for risk assessment, documentation, reporting and certification of AI systems. At the same time, support tools are emerging – individual regulatory guidance, regulatory sandboxes and mechanisms for mitigating sanctions.
For businesses that start adapting early, these regulations can become not only an obligation but also an opportunity to grow in a safe and transparent legal environment.
Author:
Dr Krzysztof Staniek
Tax Advisor | Attorney-at-Law | Managing Partner

