Always informed: the AI Act at a glance

- Be well prepared for the new AI-ACT regulations. This page provides you with all the relevant information you need to understand and comply with the requirements. 

Our AI Act compliance assistant takes you by the hand and guides you safely through the legal jungle of the EU AI Act, and our AI governance platform ensures that you remain legally compliant at all times.


The EU AI Act is a law of the European Union that defines how artificial intelligence (AI) may be used safely and responsibly. It classifies AI systems according to their risk. The aim of the AI Act is to ensure the protection of people and transparency in the use of AI. Companies must ensure that their AI applications comply with the rules in order to avoid penalties.

Try our EU AI Act Risk Quick Check and have the risk of your AI application assessed with ease.

The solution

Designing AI to be safe and fair!

The EU faces the challenge of making artificial intelligence safe and fair. AI is increasingly used in important areas, e.g. in medicine or in decisions made by authorities. The AI Act is intended to prevent AI from harming people, reinforcing discrimination or operating in a non-transparent manner. With the law, the EU wants to ensure that AI systems are safe, protect people and are used fairly. At the same time, it is intended to promote innovation without neglecting safety and ethical standards.

Discrimination and prejudice

AI can reinforce prejudices if it is trained with unfair or incorrect data. For example, it can put applicants or people at a disadvantage when it comes to loans if the data is biased. This leads to discrimination.

Lack of transparency

Many AI systems are difficult to understand because they make complex decisions without humans knowing exactly how. This leads to a lack of transparency and makes it difficult to understand AI decisions.

Security risks

AI systems can make mistakes in safety-critical areas, such as in self-driving cars or in medicine. Such errors can harm people if they lead to incorrect decisions or accidents.

The EU AI Act also applies to organizations outside the EU when they supply AI systems to EU consumers. This means that international companies must also comply with the regulations.

DEADLINES

The EU AI Act takes effect from 2025: act now.

The EU AI Act will be introduced gradually. Here are the most important dates up to full implementation in 2027.

EU AI Act timetable (as of July 2024)

  • Apr 21, 2021: EU Commission proposes the AI Act
  • Aug 02, 2024: Entry into force of the law
  • May 02, 2025: Codes of conduct are applied
  • Aug 02, 2025: Governance rules and obligations for General Purpose AI (GPAI) become applicable
  • Aug 02, 2026: Start of application of the EU AI Act for AI systems (including Annex III)
  • Aug 02, 2027: Application of the entire EU AI Act for all risk categories (including Annex II)
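The phased dates above can be sketched as a small lookup. This is an illustrative example only; the milestone labels are shortened from the timeline, and the helper function is an assumption for this sketch, not an official tool.

```python
from datetime import date

# Dates taken from the EU AI Act timeline above (as of July 2024).
MILESTONES = [
    (date(2021, 4, 21), "EU Commission proposes the AI Act"),
    (date(2024, 8, 2), "Entry into force of the law"),
    (date(2025, 5, 2), "Codes of conduct are applied"),
    (date(2025, 8, 2), "Governance rules and GPAI obligations apply"),
    (date(2026, 8, 2), "Application for AI systems (incl. Annex III)"),
    (date(2027, 8, 2), "Full application for all risk categories (incl. Annex II)"),
]

def milestones_in_effect(today: date) -> list[str]:
    """Return all milestones that have already been reached on a given date."""
    return [label for when, label in MILESTONES if when <= today]
```

For example, on September 1, 2025 the first four milestones are already in effect, while the 2026 and 2027 application dates are still ahead.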
Our solutions are comprehensive: we don't just offer individual tools, but a whole package so that our customers become real experts. This allows us to grow and develop together.

Gabriele Bolek-Fügl
CO-FOUNDER & CEO

Roadmap

What do you need to pay attention to as an entrepreneur?

Here you will find the most important deadlines for implementing the new regulations. These deadlines are crucial to ensure that your AI systems are adapted in time. If you miss a deadline, you could face legal and financial consequences that affect the operation of your AI applications.

DEADLINE

The AI Act came into force on August 1, 2024 and will be fully applicable two years later, with some exceptions:

  • prohibitions take effect after six months,
  • the governance rules and the obligations for general-purpose AI models take effect after twelve months, and
  • the rules for AI systems embedded in regulated products apply after 36 months.

 

You can find an overview and further details in our Timeline.

RISK ASSESSMENT

Companies must carefully analyze the potential risks of their AI applications. This includes assessing possible negative effects on users and society. This risk assessment helps to take timely measures to minimize dangers and to ensure that AI systems are used safely and responsibly.


Our AI Governance Platform includes the AI Compliance Assistant, which lets you easily assess the risks of your AI applications. It helps you identify and monitor potential hazards so that you can act safely and responsibly at all times.

Technical documentation

Companies must create technical documentation for their AI applications. This documentation describes exactly how the AI works, what data it uses and what security measures are taken. This helps to create transparency and ensure that the AI meets the requirements.
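The documentation items described above (how the AI works, what data it uses, what security measures are taken) can be sketched as a simple record. This is an illustrative sketch only; the field names are assumptions for this example and do not reproduce the official Annex structure of the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    system_name: str
    how_it_works: str                                      # how the AI works
    data_used: list = field(default_factory=list)          # what data it uses
    security_measures: list = field(default_factory=list)  # what safeguards are taken

    def is_complete(self) -> bool:
        """Simple completeness check before evidence is submitted."""
        return bool(self.system_name and self.how_it_works
                    and self.data_used and self.security_measures)
```

A record missing any of these elements would fail the completeness check, which mirrors the idea that the documentation must describe the system fully to create transparency.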


pAIper.one supports companies in the creation of technical documentation for AI applications.

With our AI Governance Platform, all important information is automatically recorded and clearly presented so that companies can easily provide the necessary evidence and reports for the AI Act.

Monitoring and maintenance

Companies need to constantly monitor and regularly maintain their AI systems. This means that they need to ensure that the AI is working properly, uses up-to-date data and remains up to date in order to avoid risks and meet safety requirements.

pAIper.one helps companies to monitor and maintain AI systems with our AI Compliance Tracking.

The platform provides continuous updates and alerts when issues arise due to legislative changes, so companies can keep their AI applications secure and up to date at all times.

Staff qualification

With regard to the workforce, Article 26 requires companies to ensure that their employees are sufficiently trained to understand and operate the AI systems correctly. This includes training on the functioning, risks and legal requirements of the AI used. They must also introduce appropriate safety measures to ensure that AI applications are used responsibly and safely. The workforce should also be able to recognize potential risks and respond appropriately.

pAIper.one supports companies in training their employees in the safe use of AI systems.

The platform offers targeted training and information to help staff understand the functioning, risks and legal requirements of AI and recognize potential dangers at an early stage.

From general staff training to AI Compliance Officer, we offer workshops and training courses to optimally prepare your company for dealing with AI and the AI-ACT.

CONSEQUENCES

Failure to comply with the EU AI-ACT can have serious financial and legal consequences.

  • High fines: up to 35 million euros or 7% of global annual turnover.
  • Reputational damage: loss of trust among customers and partners.
  • Sales bans: prohibition on offering or using certain AI systems.
  • Legal disputes: potential lawsuits by affected parties.
  • Competitive disadvantages: loss of market opportunities due to lack of compliance.
  • Recalls/adjustments: high costs for subsequent corrections.

The European Commission stated that the AI Act was designed to "ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights", with particular emphasis on protecting health, safety and rights.

European Commission

RISK CATEGORIES

What are the Risk categories?

Different obligations apply depending on the AI risk class and the role of the actor, and the individual standards must be looked at carefully. The AI Act takes a risk-based approach, i.e. the higher the risk, the greater the legal obligations.

A total of four risk classes are defined, ranging from unacceptable through high and limited down to minimal risk.
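The risk-based approach can be sketched as a small mapping from risk class to obligations. This is a simplified illustration, not legal advice: the obligation lists are assumptions drawn from the duties discussed on this page (risk assessment, technical documentation, monitoring, staff qualification), not an exhaustive legal catalogue.

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable risk (prohibited AI practices)"
    HIGH = "high risk (high-risk AI systems)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal and/or no risk"

# Illustrative mapping: the higher the risk, the greater the obligations.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskClass.HIGH: ["risk assessment", "technical documentation",
                     "monitoring and maintenance", "staff qualification"],
    RiskClass.LIMITED: ["transparency obligations"],
    RiskClass.MINIMAL: ["no mandatory requirements (voluntary codes of conduct)"],
}

def obligations_for(risk: RiskClass) -> list[str]:
    """Look up the (simplified) obligations for a given risk class."""
    return OBLIGATIONS[risk]
```

The mapping makes the core idea of the Act tangible: a high-risk system carries a whole bundle of duties, while a minimal-risk system carries essentially none.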


Always up to date with pAIper.one

- Discover all the important insights on the EU AI Act and shape your AI strategy in a secure and future-oriented way.

FAQ

The AI Act defines an AI system as a machine-based system

  • which is designed for varying degrees of autonomous operation,
  • which can be adaptable after its introduction, and
  • which uses the input received to produce outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

The four risk classes are:

  • unacceptable risk (prohibited AI practices),
  • high risk (high-risk AI systems),
  • limited risk (AI systems intended for interaction with natural persons), and
  • minimal and/or no risk (all other AI systems that do not fall within the scope of the AI Regulation).

AI systems that make critical decisions, such as in medicine, human resources or public safety, are considered high-risk.

There are hardly any regulations for AI systems with minimal risk. Companies can voluntarily introduce additional security measures.

An AI model is the basis on which an artificial intelligence (AI) works. It consists of the algorithms and data structures that enable an AI to learn and perform tasks. GPT-4, for example, is such a model.

An AI system uses an AI model to enable specific applications. One example is ChatGPT, which is based on the GPT-4 model and functions as a usable application.

The AI Act mostly regulates the AI system, as it represents the practical implementation of the technology.

Yes, the EU-AI-ACT allows exceptions in certain cases, especially for high-risk applications, if special safety precautions are taken.

The EU AI Act divides AI systems into four risk categories: minimal, limited, high and unacceptable. Each category has different requirements.

Yes, all companies that use or offer AI in the EU must comply with the EU-AI-ACT, regardless of their size or industry.

Create technical documentation, carry out risk assessments and ensure that the application meets all EU AI-ACT requirements.

This depends on the complexity, but can take several months, depending on how many adjustments and documentations are required.

Data on the functioning, safety measures and risk assessments of the AI application must be clearly and fully documented.

The EU-AI-ACT ensures greater security and transparency, which makes the use of AI safer, but also lays down strict rules for developers.

Sectors such as healthcare, transport, human resources and public safety are the most affected by the EU AI ACT, as they often work with high-risk AI systems.

The risk assessment should be carried out regularly, especially in the event of changes to the AI application or new legislation.

You must ensure that these AI systems comply with the requirements of the EU AI ACT and are used safely.

National authorities in each EU country monitor compliance with the EU AI-ACT and carry out checks.

That's why pAIper.one: comprehensive software and support for the EU AI Act

With pAIper.one, we offer specialized digital software products for compliance with legal requirements (AI-ACT) in the field of artificial intelligence. Our software solutions, the AI-ACT Governance Platform, AI-ACT Compliance Assistant and AI-ACT Compliance Community, help companies to operate safely within the AI-ACT legal framework. In addition to our digital products, we offer topic-specific workshops and training courses together with our partners and lawyers. These are offered exclusively via the AI-ACT Compliance Community.

Made in Austria
