Always informed: the AI Act at a glance
- Be well prepared for the new EU AI Act. This page provides you with all the relevant information you need to understand and comply with its requirements.
Our AI-ACT Compliance Assistant takes you by the hand and guides you safely through the legal jungle of the EU AI Act, and our AI Governance Platform ensures that you remain legally compliant at all times.
Our customers
The EU AI Act is a law of the European Union which defines how artificial intelligence (AI) may be used safely and responsibly. It classifies AI systems according to their risk. The aim of the AI Act is to ensure the protection of people and transparency in the use of AI. Companies must ensure that their AI applications comply with the rules in order to avoid penalties.
Try our EU AI Act Risk-Quick-Check and simply have the risk of your AI application assessed.
The solution
Designing AI to be safe and fair!
The EU faces the challenge of making artificial intelligence safe and fair. AI is being used more and more in important areas, e.g. in medicine or in decisions made by authorities. The AI Act is intended to prevent AI from harming people, reinforcing discrimination or operating in a non-transparent manner. With this law, the EU wants to ensure that AI systems are safe, protect people and are used fairly. At the same time, it is intended to promote innovation without neglecting safety and ethical standards.
Discrimination and prejudice
AI can reinforce prejudices if it is trained with unfair or incorrect data. For example, it can put job applicants or loan applicants at a disadvantage if the underlying data is biased. This leads to discrimination.
Lack of transparency
Many AI systems are difficult to understand because they make complex decisions without humans knowing exactly how. This leads to a lack of transparency and makes it difficult to understand AI decisions.
Security risks
AI systems can make mistakes in safety-critical areas, such as in self-driving cars or in medicine. Such errors can harm people if they lead to incorrect decisions or accidents.
The EU AI Act also applies to organizations outside the EU when they supply AI systems to EU consumers. This means that international companies must also comply with the regulations.
DEADLINES
The EU AI Act arrives in 2025 - act now.
The EU AI Act will be introduced gradually - here you can see the most important dates up to full implementation in 2027.
Gabriele Bolek-Fügl
CO-FOUNDER & CEO
Roadmap
What do you as an entrepreneur need to pay attention to?
DEADLINE
The AI Act came into force on August 1, 2024 and will be fully applicable two years later, with some exceptions:
- the prohibitions apply after six months,
- the governance rules and the obligations for general-purpose AI models apply after twelve months, and
- the rules for AI systems embedded in regulated products apply after 36 months.
You can find an overview and further details in our Timeline.
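If you want to track these stages programmatically, here is a minimal sketch that computes the days remaining until each milestone (the calendar dates follow from the staggered application rules above; the function name is our own illustrative choice):

```python
from datetime import date

# Entry into force: August 1, 2024; the stages apply 6, 12, 24 and 36 months later.
MILESTONES = {
    "Prohibitions (after 6 months)": date(2025, 2, 2),
    "Governance rules & GPAI obligations (after 12 months)": date(2025, 8, 2),
    "Full applicability (after 24 months)": date(2026, 8, 2),
    "AI systems in regulated products (after 36 months)": date(2027, 8, 2),
}

def days_remaining(today: date) -> dict[str, int]:
    """Days left until each milestone; negative values mean it already applies."""
    return {name: (deadline - today).days for name, deadline in MILESTONES.items()}

for name, days in days_remaining(date.today()).items():
    print(f"{name}: {days:+d} days")
```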
RISK ASSESSMENT
Companies must carefully analyze the potential risks of their AI applications. This includes assessing possible negative effects on users and society. This risk assessment helps to take timely measures to minimize dangers and to ensure that AI systems are used safely and responsibly.
Use our AI Governance Platform with the AI Compliance Assistant to easily assess the risks of your AI applications. It supports you in identifying and monitoring potential hazards so that you can act safely and responsibly at all times.
Technical documentation
Companies must create technical documentation for their AI applications. This documentation describes exactly how the AI works, what data it uses and what security measures are taken. This helps to create transparency and ensure that the AI meets the requirements.
pAIper.one supports companies in the creation of technical documentation for AI applications.
With our AI Governance Platform, all important information is automatically recorded and clearly presented so that companies can easily provide the necessary evidence and reports for the AI Act.
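As an illustration of what such documentation typically captures, here is a minimal sketch of a machine-readable record (the field names and the example system are our own illustrative choices, loosely oriented on the documentation items described above, not an official schema):

```python
from dataclasses import dataclass

@dataclass
class TechnicalDocumentation:
    """Illustrative subset of technical-documentation items for an AI system."""
    system_name: str
    version: str
    intended_purpose: str
    data_sources: list[str]         # what data the AI uses
    security_measures: list[str]    # safeguards taken
    risk_assessment_summary: str    # link to or summary of the risk analysis

doc = TechnicalDocumentation(
    system_name="Example credit-scoring assistant",  # hypothetical system
    version="1.2.0",
    intended_purpose="Support loan officers in assessing creditworthiness",
    data_sources=["anonymised historical loan data"],
    security_measures=["bias testing before each release", "manual review fallback"],
    risk_assessment_summary="See risk assessment report 2025-Q1",
)
```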
Monitoring and maintenance
Companies need to continuously monitor and regularly maintain their AI systems. This means ensuring that the AI works properly, uses up-to-date data and remains current, in order to avoid risks and meet safety requirements.
pAIper.one helps companies to monitor and maintain AI systems with our AI Compliance Tracking.
The platform provides continuous updates and alerts when issues arise due to legislative changes, so companies can keep their AI applications secure and up to date at all times.
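A minimal sketch of what such continuous checks can look like in practice (the thresholds and checks are purely illustrative; real criteria depend on the system and its risk class):

```python
from datetime import date

MAX_DATA_AGE_DAYS = 30  # illustrative threshold for data freshness
MIN_ACCURACY = 0.90     # illustrative minimum model quality

def compliance_findings(last_data_refresh: date, accuracy: float) -> list[str]:
    """Return findings that should trigger maintenance or a review."""
    findings = []
    data_age = (date.today() - last_data_refresh).days
    if data_age > MAX_DATA_AGE_DAYS:
        findings.append(f"Data is {data_age} days old (limit: {MAX_DATA_AGE_DAYS}).")
    if accuracy < MIN_ACCURACY:
        findings.append(f"Accuracy {accuracy:.1%} is below the {MIN_ACCURACY:.0%} threshold.")
    return findings

print(compliance_findings(date(2025, 1, 1), 0.87))
```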
Staff qualification
With regard to the workforce, Article 26 requires companies to ensure that their employees are sufficiently trained to understand and operate the AI systems correctly. This includes training on the functioning, risks and legal requirements of the AI used. They must also introduce appropriate safety measures to ensure that AI applications are used responsibly and safely. The workforce should also be able to recognize potential risks and respond appropriately.
pAIper.one supports companies in training their employees in the safe use of AI systems.
The platform offers targeted training and information to help staff understand the functioning, risks and legal requirements of AI and recognize potential dangers at an early stage.
From general staff training to the AI Compliance Officer, we offer workshops and training courses to optimally prepare your company for dealing with AI and the AI Act.
CONSEQUENCES
Failure to comply with the EU AI Act can have serious financial and legal consequences.
- High fines: Up to 35 million euros or 7 % of global annual turnover.
- Reputational damage: Loss of trust among customers and partners.
- Sales bans: Prohibition on offering or using certain AI systems.
- Legal disputes: Potential lawsuits by affected parties.
- Competitive disadvantages: Loss of market opportunities due to lack of compliance.
- Recalls/adjustments: High costs for subsequent corrections.
Companies should deal with the AI Act at an early stage, especially if they use or offer high-risk AI systems. These systems may need to be adapted, for example by integrating emergency mechanisms.
A conformity assessment is then necessary to ensure compliance with the regulations. Companies using third-party AI should review these systems and understand their own obligations.
The requirements of the AI Act are comprehensive and timely preparation is crucial to meet the deadlines.
Essentially, providers of high-risk AI systems are held liable here, but in some cases also their operators. Some obligations also apply in relation to GPAI. Transparency obligations essentially apply to the use of AI systems with limited risk. For example, Art. 50 of the AI Act stipulates that information must be provided on the use of AI chatbots or deepfakes.
There are various consequences for violations of the AI Act. Sanctions must be "effective, proportionate and dissuasive" (Art. 99 para. 1 sentence 2 AI Act).
- High fines: Up to 35 million euros or 7 % of global turnover for serious violations.
- Operating bans: Certain AI systems may no longer be operated.
- Stricter controls: Additional checks and sanctions may follow in the event of missing documentation or non-compliance with the rules.
It is important to meet the requirements at an early stage in order to avoid these consequences.
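For the most serious violations, the upper fine limit is the higher of the two values. A minimal illustration of the arithmetic (the actual fine is set case by case by the supervisory authority):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper limit for the most serious violations: EUR 35 million
    or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Example: EUR 2 billion turnover -> 7% = EUR 140 million, exceeding EUR 35 million.
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```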
European Commission
RISK CATEGORIES
What are the risk categories?
Different obligations apply depending on the AI risk class and the role of the actor, and the individual provisions must be examined carefully. The AI Act takes a risk-based approach: the higher the risk, the greater the legal obligations.
A total of four risk classes are provided for, ranging from unacceptable through high and limited down to minimal risk.
Applications or AI systems that evaluate social behavior, influence people or exploit their weaknesses and thereby disadvantage or endanger them. The use of these technologies is prohibited (with exceptions for law enforcement purposes).
The following in particular are classified as AI systems with unacceptable risk:
- Subliminal influence
- Exploiting the weakness or vulnerability of persons
- Biometric categorization
- Evaluation of social behavior
- Real-time remote biometric identification systems
- Risk assessment of natural persons
- Face recognition databases
- Inferring the emotions of natural persons
- Analysis of recorded image material
In the event of an unacceptable risk, the AI Act provides for a prohibition of the corresponding AI system.
Applications that are not explicitly prohibited and pose a high risk to the fundamental rights, safety and health of natural persons. The use of these technologies is regulated by extensive requirements in the AI Act.
High-risk AI systems include:
- Biometric identification
- Management and operation of critical infrastructures (KRITIS)
- Education and vocational training
- Employment, personnel management and access to self-employment
- Accessibility and use of basic private and public services and benefits (e.g. housing, electricity, heating, internet, doctors)
- Law enforcement
- Migration, asylum and border control
- Administration of justice and democratic processes
AI systems that interact with natural persons must inform the data subjects about this (so-called "transparency and information obligations").
Examples of AI systems with limited risk include chatbots, while search algorithms, computer games and spam filters fall into the "minimal risk" category.
Companies that use AI systems that are not explicitly subject to the requirements of the AI Act are encouraged to adopt voluntary codes of conduct that adequately regulate the use of AI. In addition, companies are generally required to train employees with regard to AI ("AI literacy").
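To summarize the risk-based approach, here is a simplified, illustrative mapping of each risk class to its core legal consequence (a sketch for orientation only; classifying a real system requires examining the individual provisions):

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Core legal consequence per risk class (simplified, non-exhaustive)
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: "Prohibited: the system may not be offered or used.",
    RiskClass.HIGH: "Extensive requirements: risk management, technical "
                    "documentation, conformity assessment, monitoring.",
    RiskClass.LIMITED: "Transparency and information obligations "
                       "(e.g. labelling chatbots and deepfakes).",
    RiskClass.MINIMAL: "No specific obligations; voluntary codes of conduct.",
}

print(OBLIGATIONS[RiskClass.HIGH])
```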
Always up to date with pAIper.one
FAQ
An AI system within the meaning of the AI Act is a machine-based system,
- which is designed to operate with varying degrees of autonomy,
- which can be adaptable after its deployment, and
- which uses the input it receives to produce outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The AI Act distinguishes four risk classes:
- unacceptable risk (prohibited AI practices),
- high risk (high-risk AI systems),
- limited risk (AI systems intended for interaction with natural persons), and
- minimal and/or no risk (all other AI systems that do not fall within the scope of the AI Regulation).
AI systems that make critical decisions, such as in medicine, human resources or public safety, are considered high-risk.
There are hardly any regulations for AI systems with minimal risk. Companies can voluntarily introduce additional security measures.
An AI model is the basis on which an artificial intelligence (AI) works. It consists of the algorithms and data structures that enable an AI to learn and perform tasks. GPT-4, for example, is such a model.
An AI system uses an AI model to enable specific applications. One example is ChatGPT, which is based on the GPT-4 model and functions as a usable application.
The AI Act mostly regulates the AI system, as it represents the practical implementation of the technology.
Yes, the EU AI Act allows exceptions in certain cases, especially for high-risk applications, if special safety precautions are taken.
The EU AI Act divides AI systems into four risk categories: minimal, limited, high and unacceptable. Each category has different requirements.
Yes, all companies that use or offer AI in the EU must comply with the EU AI Act, regardless of their size or industry.
Create technical documentation, carry out risk assessments and ensure that the application meets all EU AI Act requirements.
This depends on the complexity, but it can take several months, depending on how many adjustments and how much documentation are required.
Data on the functioning, safety measures and risk assessments of the AI application must be clearly and fully documented.
The EU AI Act ensures greater security and transparency, which makes the use of AI safer, but it also lays down strict rules for developers.
Sectors such as healthcare, transport, human resources and public safety are the most affected by the EU AI Act, as they often work with high-risk AI systems.
The risk assessment should be carried out regularly, especially in the event of changes to the AI application or new legislation.
You must ensure that these AI systems comply with the requirements of the EU AI Act and are used safely.
National authorities in each EU country monitor compliance with the EU AI Act and carry out checks.
That is why pAIper.one - the comprehensive software and support for the EU AI Act
With pAIper.one, we offer specialized digital software products for compliance with legal requirements (EU AI Act) in the field of artificial intelligence. Our software solutions, the AI-ACT Governance Platform, AI-ACT Compliance Assistant and AI-ACT Compliance Community, help companies to operate safely within the AI Act's legal framework. In addition to our digital products, we offer topic-specific workshops and training courses together with our partners and lawyers. These are offered exclusively via the AI-ACT Compliance Community.
Made in Austria