
REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)



AD ID

0003972

AD STATUS

Free

ORIGINATOR

European Union

TYPE

Regulations

AVAILABILITY

Free

SYNONYMS

Artificial Intelligence Act

REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

EFFECTIVE

2024-06-13

ADDED

The document as a whole was last reviewed and released on 2024-10-07T00:00:00-0700.


Important Notice

This Authority Document In Depth Report is copyright © 2024 Network Frontiers LLC. All rights reserved. Copyright in the Authority Document analyzed herein is held by its authors. Network Frontiers makes no claims of copyright in this Authority Document.

This Authority Document In Depth Report is provided for informational purposes only and does not constitute, and should not be construed as, legal advice. The reader is encouraged to consult with an attorney experienced in these areas for further explanation and advice.

This Authority Document In Depth Report provides analysis and guidance for use and implementation of the Authority Document, but it is not a substitute for the original authority document itself. Readers should refer to the original authority document as the definitive resource on obligations and compliance requirements.

The process we used to tag and map this document

This document has been mapped into the Unified Compliance Framework using a patented methodology and patented tools (you can research our patents HERE). The mapping team has made every effort to ensure the quality of mapping is of the highest degree. To learn more about the process we use to map Authority Documents, or to become involved in that process, click HERE.

Controls and associated Citations breakdown

When the UCF Mapping Teams tag Citations and their associated mandates within an Authority Document, those Citations and Mandates are tied to Common Controls. Because those Citations and Mandates are tied to Common Controls, three sets of metadata are associated with each Citation: Controls by Impact Zone, Controls by Type, and Controls by Classification.

The online version of the mapping analysis you see here is just a fraction of the work the UCF Mapping Team has done. The downloadable version of this document, available within the Common Controls Hub (HERE), contains the following:

Document implementation analysis – statistics about the document’s alignment with Common Controls as compared to other Authority Documents and statistics on usage of key terms and non-standard terms.

Citation and Mandate Tagging and Mapping – A complete listing of each Citation we found within REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), tagged with their primary and secondary nouns and primary and secondary verbs, in four-column format. The first column shows the Citation (the marker within the Authority Document that points to where we found the guidance). The second column shows the Citation guidance per se, along with the tagging for the mandate we found within the Citation. The third column shows the Common Control ID that the mandate is linked to, and the final column gives the Common Control itself.
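The four-column mapping record described above can be sketched as a simple data structure. This is a hypothetical illustration only: the `CitationMapping` class and its field names are assumptions for clarity, not part of the UCF schema; the sample values are drawn from Citations shown elsewhere in this report.

```python
from dataclasses import dataclass

@dataclass
class CitationMapping:
    """One row of the Citation-to-Common-Control mapping (hypothetical sketch)."""
    citation: str           # marker within the Authority Document, e.g. "Article 49 1."
    guidance: str           # the Citation text, with tagged primary/secondary nouns and verbs
    common_control_id: str  # the Common Control ID the mandate is linked to, e.g. "CC ID 13986"
    common_control: str     # the Common Control itself

# Example row, using a Citation and Common Control that appear in this report
row = CitationMapping(
    citation="Article 49 1.",
    guidance=("Before placing on the market or putting into service a high-risk AI system "
              "listed in Annex III ... the provider ... shall register themselves and their "
              "system in the EU database referred to in Article 71."),
    common_control_id="CC ID 13986",
    common_control="Register new systems with the program office or other applicable stakeholder.",
)
print(row.common_control_id)  # prints "CC ID 13986"
```

Each downloadable report row would map onto one such record, so the same Common Control can be reached from many Citations.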

Dictionary Terms – The dictionary terms listed for REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) are based upon terms found within the Authority Document’s defined terms section (which most legal documents have), within its glossary, and, for the most part, as tagged within each mandate. The terms with links are the standardized versions of the terms.



Common Controls and mandates by Impact Zone

240 Mandated Controls - bold    64 Implied Controls - italic    779 Implementation Controls - regular

An Impact Zone is a hierarchical way of organizing our suite of Common Controls — it is a taxonomy. The top levels of the UCF hierarchy are called Impact Zones. Common Controls are mapped within the UCF’s Impact Zones and are maintained in a legal hierarchy within that Impact Zone. Each Impact Zone deals with a separate area of policies, standards, and procedures: technology acquisition, physical security, continuity, records management, etc.


The UCF created its taxonomy by looking at the corpus of standards and regulations through the lens of unification and with a view toward how the controls impact the organization. Thus, we created a hierarchical structure for each impact zone that takes into account regulatory and standards bodies, doctrines, and language.

Number of Controls
1083 Total
  • Acquisition or sale of facilities, technology, and services
    16
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Establish, implement, and maintain system acquisition contracts. CC ID 14758 Establish/Maintain Documentation Preventive
    Disseminate and communicate the system documentation to interested personnel and affected parties. CC ID 14285
    [Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use; Article 23 1.(c)
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities. Article 23 6.]
    Communicate Preventive
    Register new systems with the program office or other applicable stakeholder. CC ID 13986
    [In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 71. Article 49 1.
    Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71. Article 49 2.
    Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies, offices or agencies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 71. Article 49 3.]
    Business Processes Preventive
    Establish, implement, and maintain a consumer complaint management program. CC ID 04570
    [In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities. Article 85 ¶ 2
    Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. A complaint shall be duly reasoned and indicate at least: Article 89 2.]
    Business Processes Preventive
    Document consumer complaints. CC ID 13903
    [{natural persons} Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority. Article 85 ¶ 1
    A complaint shall be duly reasoned and indicate at least: the point of contact of the provider of the general-purpose AI model concerned; Article 89 2.(a)
    A complaint shall be duly reasoned and indicate at least: a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this Regulation; Article 89 2.(b)
    {is relevant} A complaint shall be duly reasoned and indicate at least: any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative. Article 89 2.(c)]
    Business Processes Preventive
    Assess consumer complaints and litigation. CC ID 16521 Investigate Preventive
    Notify the complainant about their rights after receiving a complaint. CC ID 16794 Communicate Preventive
    Include how to access information from the dispute resolution body in the consumer complaint management program. CC ID 13816 Establish/Maintain Documentation Preventive
    Include any requirements for using information from the dispute resolution body in the consumer complaint management program. CC ID 13815 Establish/Maintain Documentation Preventive
    Post contact information in an easily seen location at facilities. CC ID 13812 Communicate Preventive
    Provide users a list of the available dispute resolution bodies. CC ID 13814 Communicate Preventive
    Post the dispute resolution body's contact information on the organization's website. CC ID 13811 Communicate Preventive
    Disseminate and communicate the consumer complaint management program to interested personnel and affected parties. CC ID 16795 Communicate Preventive
    Establish, implement, and maintain notice and take-down procedures. CC ID 09963 Establish/Maintain Documentation Preventive
    Analyze the digital content hosted by the organization for any electronic material associated with the take-down request. CC ID 09974 Business Processes Detective
    Process product return requests. CC ID 11598 Acquisition/Sale of Assets or Services Corrective
  • Audits and risk management
    144
    Audits and risk management CC ID 00677 IT Impact Zone IT Impact Zone
    Include a commitment to cooperate with applicable statutory bodies in the Statement of Compliance. CC ID 12370
    [Importers shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the risks posed by it. Article 23 7.
    Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation. Article 25 2.
    Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation. Article 26 12.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to the high-risk AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system; Article 22 3.(d)
    Distributors shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system made available on the market by the distributors, in particular to reduce or mitigate the risk posed by it. Article 24 6.
    Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation. Article 53 3.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to the general-purpose AI model, including when the model is integrated into AI systems placed on the market or put into service in the Union. Article 54 3.(d)
    The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action. Article 73 6. ¶ 2]
    Establish/Maintain Documentation Preventive
    Accept the attestation engagement when all preconditions are met. CC ID 13933 Business Processes Preventive
    Determine the effectiveness of in scope controls. CC ID 06984
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.]
    Testing Detective
    Review audit reports to determine the effectiveness of in scope controls. CC ID 12156 Audits and Risk Management Detective
    Interview stakeholders to determine the effectiveness of in scope controls. CC ID 12154 Audits and Risk Management Detective
    Review documentation to determine the effectiveness of in scope controls. CC ID 16522 Process or Activity Preventive
    Review policies and procedures to determine the effectiveness of in scope controls. CC ID 12144 Audits and Risk Management Detective
    Evaluate personnel status changes to determine the effectiveness of in scope controls. CC ID 16556 Audits and Risk Management Detective
    Determine whether individuals performing a control have the appropriate staff qualifications, staff clearances, and staff competencies. CC ID 16555 Audits and Risk Management Detective
    Establish, implement, and maintain a risk management program. CC ID 12051
    [A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. Article 9 1.]
    Establish/Maintain Documentation Preventive
    Include the scope of risk management activities in the risk management program. CC ID 13658 Establish/Maintain Documentation Preventive
    Document and justify any exclusions from the scope of the risk management activities in the risk management program. CC ID 15336 Business Processes Detective
    Integrate the risk management program with the organization's business activities. CC ID 13661 Business Processes Preventive
    Integrate the risk management program into daily business decision-making. CC ID 13659 Business Processes Preventive
    Include managing mobile risks in the risk management program. CC ID 13535 Establish/Maintain Documentation Preventive
    Take into account if the system will be accessed by or have an impact on children in the risk management program. CC ID 14992
    [When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups. Article 9 9.]
    Audits and Risk Management Preventive
    Include regular updating in the risk management system. CC ID 14990
    [{continuous life cycle} The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: Article 9 2.]
    Business Processes Preventive
    Establish, implement, and maintain a risk management policy. CC ID 17192 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain risk management strategies. CC ID 13209
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a). Article 9 2.(d)]
    Establish/Maintain Documentation Preventive
    Include off-site storage of supplies in the risk management strategies. CC ID 13221 Establish/Maintain Documentation Preventive
    Include data quality in the risk management strategies. CC ID 15308 Data and Information Management Preventive
    Include minimizing service interruptions in the risk management strategies. CC ID 13215 Establish/Maintain Documentation Preventive
    Include off-site storage in the risk mitigation strategies. CC ID 13213 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain the risk assessment framework. CC ID 00685 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a risk assessment program. CC ID 00687 Establish/Maintain Documentation Preventive
    Employ third parties when implementing a risk assessment, as necessary. CC ID 16306 Human Resources Management Detective
    Establish, implement, and maintain insurance requirements. CC ID 16562 Establish/Maintain Documentation Preventive
    Disseminate and communicate insurance options to interested personnel and affected parties. CC ID 16572 Communicate Preventive
    Disseminate and communicate insurance requirements to interested personnel and affected parties. CC ID 16567 Communicate Preventive
    Purchase insurance on behalf of interested personnel and affected parties. CC ID 16571 Acquisition/Sale of Assets or Services Corrective
    Address cybersecurity risks in the risk assessment program. CC ID 13193 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain fundamental rights impact assessments. CC ID 17217
    [Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of: Article 27 1.
    The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information. Article 27 2.]
    Audits and Risk Management Preventive
    Include the categories of data used by the system in the fundamental rights impact assessment. CC ID 17248 Establish/Maintain Documentation Preventive
    Include metrics in the fundamental rights impact assessment. CC ID 17249 Establish/Maintain Documentation Preventive
    Include the benefits of the system in the fundamental rights impact assessment. CC ID 17244 Establish/Maintain Documentation Preventive
    Include user safeguards in the fundamental rights impact assessment. CC ID 17255 Establish/Maintain Documentation Preventive
    Include the outputs produced by the system in the fundamental rights impact assessment. CC ID 17247 Establish/Maintain Documentation Preventive
    Include the purpose in the fundamental rights impact assessment. CC ID 17243 Establish/Maintain Documentation Preventive
    Include monitoring procedures in the fundamental rights impact assessment. CC ID 17254 Establish/Maintain Documentation Preventive
    Include risk management measures in the fundamental rights impact assessment. CC ID 17224
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms. Article 27 1.(f)]
    Establish/Maintain Documentation Preventive
    Include human oversight measures in the fundamental rights impact assessment. CC ID 17223
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the implementation of human oversight measures, according to the instructions for use; Article 27 1.(e)]
    Establish/Maintain Documentation Preventive
    Include risks in the fundamental rights impact assessment. CC ID 17222
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13; Article 27 1.(d)]
    Establish/Maintain Documentation Preventive
    Include affected parties in the fundamental rights impact assessment. CC ID 17221
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the categories of natural persons and groups likely to be affected by its use in the specific context; Article 27 1.(c)]
    Establish/Maintain Documentation Preventive
    Include the frequency in the fundamental rights impact assessment. CC ID 17220
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; Article 27 1.(b)]
    Establish/Maintain Documentation Preventive
    Include the usage duration in the fundamental rights impact assessment. CC ID 17219
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; Article 27 1.(b)]
    Establish/Maintain Documentation Preventive
    Include system use in the fundamental rights impact assessment. CC ID 17218
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose; Article 27 1.(a)]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain Data Protection Impact Assessments. CC ID 14830
    [Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680. Article 26 9.]
    Process or Activity Preventive
    Include an assessment of the relationship between the data subject and the parties processing the data in the Data Protection Impact Assessment. CC ID 16371 Establish/Maintain Documentation Preventive
    Disseminate and communicate the Data Protection Impact Assessment to interested personnel and affected parties. CC ID 15313 Communicate Preventive
    Include consideration of the data subject's expectations in the Data Protection Impact Assessment. CC ID 16370 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a risk assessment policy. CC ID 14026 Establish/Maintain Documentation Preventive
    Include compliance requirements in the risk assessment policy. CC ID 14121 Establish/Maintain Documentation Preventive
    Include coordination amongst entities in the risk assessment policy. CC ID 14120 Establish/Maintain Documentation Preventive
    Include management commitment in the risk assessment policy. CC ID 14119 Establish/Maintain Documentation Preventive
    Include roles and responsibilities in the risk assessment policy. CC ID 14118 Establish/Maintain Documentation Preventive
    Include the scope in the risk assessment policy. CC ID 14117 Establish/Maintain Documentation Preventive
    Include the purpose in the risk assessment policy. CC ID 14116 Establish/Maintain Documentation Preventive
    Disseminate and communicate the risk assessment policy to interested personnel and affected parties. CC ID 14115 Communicate Preventive
    Analyze the organization's information security environment. CC ID 13122 Technical Security Preventive
    Engage appropriate parties to assist with risk assessments, as necessary. CC ID 12153 Human Resources Management Preventive
    Employ risk assessment procedures that take into account risk factors. CC ID 16560 Audits and Risk Management Preventive
    Review the risk profiles, as necessary. CC ID 16561 Audits and Risk Management Detective
    Include risks to critical personnel and assets in the threat and risk classification scheme. CC ID 00698
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; Article 9 2.(a)]
    Audits and Risk Management Preventive
    Approve the threat and risk classification scheme. CC ID 15693 Business Processes Preventive
    Disseminate and communicate the risk assessment procedures to interested personnel and affected parties. CC ID 14136 Communicate Preventive
    Perform risk assessments for all target environments, as necessary. CC ID 06452
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; Article 55 1.(b)]
    Testing Preventive
    Include the probability and potential impact of pandemics in the scope of the risk assessment. CC ID 13241 Establish/Maintain Documentation Preventive
    Document any reasons for modifying or refraining from modifying the organization's risk assessment when the risk assessment has been reviewed. CC ID 13312 Establish/Maintain Documentation Preventive
    Create a risk assessment report based on the risk assessment results. CC ID 15695 Establish/Maintain Documentation Preventive
    Conduct external audits of risk assessments, as necessary. CC ID 13308 Audits and Risk Management Detective
    Notify the organization upon completion of the external audits of the organization's risk assessment. CC ID 13313 Communicate Preventive
    Establish, implement, and maintain a risk assessment awareness and training program. CC ID 06453
    [In identifying the most appropriate risk management measures, the following shall be ensured: provision of information required pursuant to Article 13 and, where appropriate, training to deployers. Article 9 5. ¶ 2 (c)]
    Business Processes Preventive
    Disseminate and communicate information about risks to all interested personnel and affected parties. CC ID 06718
    [In identifying the most appropriate risk management measures, the following shall be ensured: provision of information required pursuant to Article 13 and, where appropriate, training to deployers. Article 9 5. ¶ 2 (c)
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Behavior Preventive
    Evaluate the effectiveness of threat and vulnerability management procedures. CC ID 13491 Investigate Detective
    Include recovery of the critical path in the Business Impact Analysis. CC ID 13224 Establish/Maintain Documentation Preventive
    Include acceptable levels of data loss in the Business Impact Analysis. CC ID 13264 Establish/Maintain Documentation Preventive
    Include Recovery Point Objectives in the Business Impact Analysis. CC ID 13223 Establish/Maintain Documentation Preventive
    Include the Recovery Time Objectives in the Business Impact Analysis. CC ID 13222 Establish/Maintain Documentation Preventive
    Include pandemic risks in the Business Impact Analysis. CC ID 13219 Establish/Maintain Documentation Preventive
    Disseminate and communicate the Business Impact Analysis to interested personnel and affected parties. CC ID 15300 Communicate Preventive
    Establish, implement, and maintain a risk register. CC ID 14828 Establish/Maintain Documentation Preventive
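The risk register control above can be illustrated with a minimal sketch. The field names, 1-5 scales, and likelihood-times-impact scoring below are illustrative assumptions for the sketch, not prescribed by the control catalog or the Regulation.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative risk register entry; fields and scales are assumptions.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    owner: str
    identified_on: date
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common convention.
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "Training data bias affecting protected groups",
              likelihood=3, impact=5, owner="AI Compliance Lead",
              identified_on=date(2024, 10, 1)),
]
# Highest-scoring risks first for review.
register.sort(key=lambda r: r.score, reverse=True)
```

A real register would also track review dates and treatment status per entry; the point here is only that each risk is a structured, owned record rather than free text.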
    Analyze and quantify the risks to in scope systems and information. CC ID 00701
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; Article 9 2.(a)
    The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72; Article 9 2.(c)
    The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse; Article 9 2.(b)
    Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.]
    Audits and Risk Management Preventive
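The Article 9 2.(a)-(c) citations above describe a continuous, iterative cycle over the system lifecycle. A hedged sketch of that cycle as a repeatable pipeline follows; the function names and the dictionary shape of `system` are placeholders invented for illustration, not an official implementation of the AI Act.

```python
# Hypothetical sketch of the Article 9 2.(a)-(c) risk identification cycle.
def identify_known_risks(system: dict) -> list[str]:
    # Article 9 2.(a): known and reasonably foreseeable risks when the
    # system is used in accordance with its intended purpose.
    return system.get("known_risks", [])

def estimate_misuse_risks(system: dict) -> list[str]:
    # Article 9 2.(b): risks under reasonably foreseeable misuse.
    return system.get("misuse_risks", [])

def evaluate_post_market_risks(monitoring_data: list[str]) -> list[str]:
    # Article 9 2.(c): other risks arising from post-market monitoring
    # data gathered under Article 72.
    return [f"post-market: {r}" for r in monitoring_data]

def risk_management_cycle(system: dict, monitoring_data: list[str]) -> list[str]:
    # The cycle is re-run regularly throughout the lifecycle.
    return (identify_known_risks(system)
            + estimate_misuse_risks(system)
            + evaluate_post_market_risks(monitoring_data))

risks = risk_management_cycle(
    {"known_risks": ["bias"], "misuse_risks": ["off-label use"]},
    ["drift reported by deployer"],
)
```

Each run of the cycle feeds the identified risks into the risk management measures of Article 9 2.(d), then repeats as new post-market data arrives.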
    Assess the potential level of business impact risk associated with the loss of personnel. CC ID 17172 Process or Activity Detective
    Assess the potential level of business impact risk associated with individuals. CC ID 17170 Process or Activity Detective
    Identify changes to in scope systems that could threaten communication between business units. CC ID 13173 Investigate Detective
    Assess the potential level of business impact risk associated with non-compliance. CC ID 17169 Process or Activity Detective
    Assess the potential level of business impact risk associated with reputational damage. CC ID 15335 Audits and Risk Management Detective
    Assess the potential level of business impact risk associated with the natural environment. CC ID 17171 Process or Activity Detective
    Approve the risk acceptance level, as necessary. CC ID 17168 Process or Activity Preventive
    Perform a gap analysis to review in scope controls for identified risks and implement new controls, as necessary. CC ID 00704
    [In identifying the most appropriate risk management measures, the following shall be ensured: elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system; Article 9 5. ¶ 2 (a)]
    Establish/Maintain Documentation Detective
    Document the results of the gap analysis. CC ID 16271 Establish/Maintain Documentation Preventive
    Determine the effectiveness of risk control measures. CC ID 06601
    [The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. Article 9 4.]
    Testing Detective
    Establish, implement, and maintain a risk treatment plan. CC ID 11983
    [In identifying the most appropriate risk management measures, the following shall be ensured: where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated; Article 9 5. ¶ 2 (b)
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; Article 55 1.(b)]
    Establish/Maintain Documentation Preventive
    Include roles and responsibilities in the risk treatment plan. CC ID 16991
    [With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used. Article 9 5. ¶ 3]
    Establish/Maintain Documentation Preventive
    Include time information in the risk treatment plan. CC ID 16993 Establish/Maintain Documentation Preventive
    Include allocation of resources in the risk treatment plan. CC ID 16989 Establish/Maintain Documentation Preventive
    Include the date of the risk assessment in the risk treatment plan. CC ID 16321 Establish/Maintain Documentation Preventive
    Include the release status of the risk assessment in the risk treatment plan. CC ID 16320 Audits and Risk Management Preventive
    Include requirements for monitoring and reporting in the risk treatment plan, as necessary. CC ID 13620 Establish/Maintain Documentation Preventive
    Include a description of usage in the risk treatment plan. CC ID 11977
    [With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used. Article 9 5. ¶ 3]
    Establish/Maintain Documentation Preventive
    Document all constraints applied to the risk treatment plan, as necessary. CC ID 13619 Establish/Maintain Documentation Preventive
    Disseminate and communicate the risk treatment plan to interested personnel and affected parties. CC ID 15694 Communicate Preventive
    Document residual risk in a residual risk report. CC ID 13664 Establish/Maintain Documentation Corrective
    Review and approve material risks documented in the residual risk report, as necessary. CC ID 13672
    [The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. Article 9 5. ¶ 1]
    Business Processes Preventive
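The residual-risk judgment cited above (Article 9 5. ¶ 1) requires both each hazard's residual risk and the overall residual risk to be acceptable. A minimal sketch of that per-hazard plus overall check follows; the numeric scales, the subtraction model of mitigation, and the acceptance threshold are assumptions for illustration only.

```python
# Illustrative residual-risk check; thresholds and scales are assumptions.
ACCEPTANCE_THRESHOLD = 6  # hypothetical approved risk acceptance level

def residual_score(inherent: int, mitigation_effect: int) -> int:
    # Residual risk modelled as inherent risk reduced by mitigation,
    # floored at zero.
    return max(inherent - mitigation_effect, 0)

def is_acceptable(residual: int, threshold: int = ACCEPTANCE_THRESHOLD) -> bool:
    # Article 9 5. ¶ 1: each hazard's residual risk, and the overall
    # residual risk, must be judged acceptable.
    return residual <= threshold

hazards = {"bias": (12, 8), "cyber": (9, 2)}  # hazard: (inherent, mitigation)
residuals = {h: residual_score(i, m) for h, (i, m) in hazards.items()}
overall_ok = all(is_acceptable(r) for r in residuals.values())
```

Here the "cyber" hazard's residual score exceeds the threshold, so the overall judgment fails and further treatment would be documented in the residual risk report.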
    Establish, implement, and maintain an artificial intelligence risk management program. CC ID 16220 Establish/Maintain Documentation Preventive
    Include diversity and equal opportunity in the artificial intelligence risk management program. CC ID 16255 Establish/Maintain Documentation Preventive
    Analyze the impact of artificial intelligence systems on business operations. CC ID 16356 Business Processes Preventive
    Analyze the impact of artificial intelligence systems on society. CC ID 16317 Audits and Risk Management Detective
    Analyze the impact of artificial intelligence systems on individuals. CC ID 16316 Audits and Risk Management Detective
    Establish, implement, and maintain a cybersecurity risk management program. CC ID 16827 Audits and Risk Management Preventive
    Include a commitment to continuous improvement in the cybersecurity risk management program. CC ID 16839 Establish/Maintain Documentation Preventive
    Monitor the effectiveness of the cybersecurity risk management program. CC ID 16831 Monitor and Evaluate Occurrences Preventive
    Establish, implement, and maintain a cybersecurity risk management policy. CC ID 16834 Establish/Maintain Documentation Preventive
    Disseminate and communicate the cybersecurity risk management policy to interested personnel and affected parties. CC ID 16832 Communicate Preventive
    Disseminate and communicate the cybersecurity risk management program to interested personnel and affected parties. CC ID 16829 Communicate Preventive
    Establish, implement, and maintain a cybersecurity risk management strategy. CC ID 11991
    [The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. Article 15 5. ¶ 2]
    Establish/Maintain Documentation Preventive
    Include a risk prioritization approach in the Cybersecurity Risk Management Strategy. CC ID 12276 Establish/Maintain Documentation Preventive
    Include defense in depth strategies in the cybersecurity risk management strategy. CC ID 15582 Establish/Maintain Documentation Preventive
    Disseminate and communicate the cybersecurity risk management strategy to interested personnel and affected parties. CC ID 16825 Communicate Preventive
    Acquire cyber insurance, as necessary. CC ID 12693 Business Processes Preventive
    Establish, implement, and maintain a cybersecurity supply chain risk management program. CC ID 16826 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain cybersecurity supply chain risk management procedures. CC ID 16830 Establish/Maintain Documentation Preventive
    Monitor the effectiveness of the cybersecurity supply chain risk management program. CC ID 16828 Monitor and Evaluate Occurrences Preventive
    Establish, implement, and maintain a supply chain risk management policy. CC ID 14663 Establish/Maintain Documentation Preventive
    Include compliance requirements in the supply chain risk management policy. CC ID 14711 Establish/Maintain Documentation Preventive
    Include coordination amongst entities in the supply chain risk management policy. CC ID 14710 Establish/Maintain Documentation Preventive
    Include management commitment in the supply chain risk management policy. CC ID 14709 Establish/Maintain Documentation Preventive
    Include roles and responsibilities in the supply chain risk management policy. CC ID 14708 Establish/Maintain Documentation Preventive
    Include the scope in the supply chain risk management policy. CC ID 14707 Establish/Maintain Documentation Preventive
    Include the purpose in the supply chain risk management policy. CC ID 14706 Establish/Maintain Documentation Preventive
    Disseminate and communicate the supply chain risk management policy to all interested personnel and affected parties. CC ID 14662 Communicate Preventive
    Establish, implement, and maintain a supply chain risk management plan. CC ID 14713 Establish/Maintain Documentation Preventive
    Include processes for monitoring and reporting in the supply chain risk management plan. CC ID 15619 Establish/Maintain Documentation Preventive
    Include dates in the supply chain risk management plan. CC ID 15617 Establish/Maintain Documentation Preventive
    Include implementation milestones in the supply chain risk management plan. CC ID 15615 Establish/Maintain Documentation Preventive
    Include roles and responsibilities in the supply chain risk management plan. CC ID 15613 Establish/Maintain Documentation Preventive
    Include supply chain risk management procedures in the risk management program. CC ID 13190 Establish/Maintain Documentation Preventive
    Disseminate and communicate the supply chain risk management procedures to all interested personnel and affected parties. CC ID 14712 Communicate Preventive
    Assign key stakeholders to review and approve supply chain risk management procedures. CC ID 13199 Human Resources Management Preventive
    Analyze supply chain risk management procedures, as necessary. CC ID 13198 Process or Activity Detective
    Disseminate and communicate the risk management policy to interested personnel and affected parties. CC ID 13792 Communicate Preventive
    Establish, implement, and maintain a disclosure report. CC ID 15521 Establish/Maintain Documentation Preventive
    Disseminate and communicate the disclosure report to interested personnel and affected parties. CC ID 15667
    [{market surveillance authority} Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment. Article 26 10. ¶ 6]
    Communicate Preventive
  • Human Resources management
    66
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Human Resources management CC ID 00763 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain high level operational roles and responsibilities. CC ID 00806 Establish Roles Preventive
    Define and assign the authorized representative's roles and responsibilities. CC ID 15033
    [The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. Article 22 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 22 3.
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the provider has appointed an authorised representative in accordance with Article 22(1). Article 23 1.(d)
    Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. Article 54 1.
    The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. Article 54 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 54 3.]
    Human Resources Management Preventive
    Train all personnel and third parties, as necessary. CC ID 00785
    [Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used. Article 4 ¶ 1]
    Behavior Preventive
    Provide new hires limited network access to complete computer-based training. CC ID 17008 Training Preventive
    Include evidence of experience in applications for professional certification. CC ID 16193 Establish/Maintain Documentation Preventive
    Include supporting documentation in applications for professional certification. CC ID 16195 Establish/Maintain Documentation Preventive
    Submit applications for professional certification. CC ID 16192 Training Preventive
    Hire third parties to conduct training, as necessary. CC ID 13167 Human Resources Management Preventive
    Approve training plans, as necessary. CC ID 17193 Training Preventive
    Train personnel to recognize conditions of diseases or sicknesses, as necessary. CC ID 14383 Training Detective
    Include prevention techniques in training regarding diseases or sicknesses. CC ID 14387 Training Preventive
    Include the concept of disease clusters when training to recognize conditions of diseases or sicknesses. CC ID 14386 Training Preventive
    Train personnel to identify and communicate symptoms of exposure to disease or sickness. CC ID 14385 Training Detective
    Establish, implement, and maintain a list of tasks to include in the training plan. CC ID 17101 Training Preventive
    Designate training facilities in the training plan. CC ID 16200 Training Preventive
    Include portions of the visitor control program in the training plan. CC ID 13287 Establish/Maintain Documentation Preventive
    Include insider threats in the security awareness program. CC ID 16963 Training Preventive
    Conduct personal data processing training. CC ID 13757 Training Preventive
    Include in personal data processing training how to provide the contact information for the categories of personal data the organization may disclose. CC ID 13758 Training Preventive
    Complete security awareness training prior to being granted access to information systems or data. CC ID 17009 Training Preventive
    Establish, implement, and maintain a security awareness and training policy. CC ID 14022 Establish/Maintain Documentation Preventive
    Include compliance requirements in the security awareness and training policy. CC ID 14092 Establish/Maintain Documentation Preventive
    Include coordination amongst entities in the security awareness and training policy. CC ID 14091 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain security awareness and training procedures. CC ID 14054 Establish/Maintain Documentation Preventive
    Disseminate and communicate the security awareness and training procedures to interested personnel and affected parties. CC ID 14138 Communicate Preventive
    Include management commitment in the security awareness and training policy. CC ID 14049 Establish/Maintain Documentation Preventive
    Include roles and responsibilities in the security awareness and training policy. CC ID 14048 Establish/Maintain Documentation Preventive
    Include the scope in the security awareness and training policy. CC ID 14047 Establish/Maintain Documentation Preventive
    Include the purpose in the security awareness and training policy. CC ID 14045 Establish/Maintain Documentation Preventive
    Include configuration management procedures in the security awareness program. CC ID 13967 Establish/Maintain Documentation Preventive
    Include media protection in the security awareness program. CC ID 16368 Training Preventive
    Document security awareness requirements. CC ID 12146 Establish/Maintain Documentation Preventive
    Include identity and access management in the security awareness program. CC ID 17013 Training Preventive
    Include the encryption process in the security awareness program. CC ID 17014 Training Preventive
    Include physical security in the security awareness program. CC ID 16369 Training Preventive
    Include data management in the security awareness program. CC ID 17010 Training Preventive
    Include e-mail and electronic messaging in the security awareness program. CC ID 17012 Training Preventive
    Include updates on emerging issues in the security awareness program. CC ID 13184 Training Preventive
    Include cybersecurity in the security awareness program. CC ID 13183 Training Preventive
    Include implications of non-compliance in the security awareness program. CC ID 16425 Training Preventive
    Include social networking in the security awareness program. CC ID 17011 Training Preventive
    Include the acceptable use policy in the security awareness program. CC ID 15487 Training Preventive
    Include a requirement to train all new hires and interested personnel in the security awareness program. CC ID 11800 Establish/Maintain Documentation Preventive
    Include remote access in the security awareness program. CC ID 13892 Establish/Maintain Documentation Preventive
    Document the goals of the security awareness program. CC ID 12145 Establish/Maintain Documentation Preventive
    Compare current security awareness assessment reports to the security awareness baseline. CC ID 12150 Establish/Maintain Documentation Preventive
    Establish and maintain a steering committee to guide the security awareness program. CC ID 12149 Human Resources Management Preventive
    Document the scope of the security awareness program. CC ID 12148 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a security awareness baseline. CC ID 12147 Establish/Maintain Documentation Preventive
    Encourage interested personnel to obtain security certification. CC ID 11804 Human Resources Management Preventive
    Train all personnel and third parties on how to recognize and report system failures. CC ID 13475 Training Preventive
    Establish, implement, and maintain an environmental management system awareness program. CC ID 15200 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a Code of Conduct. CC ID 04897
    [Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of any interested stakeholders and their representative organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems. Article 95 3.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a code of conduct for financial recommendations. CC ID 16649 Establish/Maintain Documentation Preventive
    Include anti-coercion requirements and anti-tying requirements in the Code of Conduct. CC ID 16720 Establish/Maintain Documentation Preventive
    Include limitations on referrals for products and services in the Code of Conduct. CC ID 16719 Behavior Preventive
    Include classifications of ethics violations in the Code of Conduct. CC ID 14769 Establish/Maintain Documentation Preventive
    Include definitions of ethics violations in the Code of Conduct. CC ID 14768 Establish/Maintain Documentation Preventive
    Include exercising due professional care in the Code of Conduct. CC ID 14210 Establish/Maintain Documentation Preventive
    Include health and safety provisions in the Code of Conduct. CC ID 16206 Establish/Maintain Documentation Preventive
    Include responsibilities to the public trust in the Code of Conduct. CC ID 14209 Establish/Maintain Documentation Preventive
    Include environmental responsibility criteria in the Code of Conduct. CC ID 16209 Establish/Maintain Documentation Preventive
    Include social responsibility criteria in the Code of Conduct. CC ID 16210 Establish/Maintain Documentation Preventive
    Include labor rights criteria in the Code of Conduct. CC ID 16208 Establish/Maintain Documentation Preventive
    Include the employee's legal responsibilities and rights in the Terms and Conditions of employment. CC ID 15701 Establish/Maintain Documentation Preventive
  • Leadership and high level objectives
    50
    Leadership and high level objectives CC ID 00597 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain a reporting methodology program. CC ID 02072 Business Processes Preventive
    Establish, implement, and maintain an internal reporting program. CC ID 12409
    [Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. Article 26 7.]
    Business Processes Preventive
    Define the thresholds for escalation in the internal reporting program. CC ID 14332 Establish/Maintain Documentation Preventive
    Define the thresholds for reporting in the internal reporting program. CC ID 14331 Establish/Maintain Documentation Preventive
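The two threshold controls above can be made concrete with a small sketch. The severity scale and the escalation recipients are invented for illustration; an actual internal reporting program would define these in policy.

```python
# Illustrative escalation thresholds for an internal reporting program.
# Checked highest first; levels and recipients are assumptions.
ESCALATION = [
    (9, "executive management"),
    (6, "risk committee"),
    (1, "line manager"),
]

def escalation_target(severity: int) -> str:
    # Return the first recipient whose threshold the severity meets.
    for threshold, target in ESCALATION:
        if severity >= threshold:
            return target
    return "no report required"
```

Keeping the thresholds in one ordered table makes the reporting and escalation rules easy to disseminate and review, as the surrounding controls require.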
    Establish, implement, and maintain an external reporting program. CC ID 12876 Communicate Preventive
    Include reporting to governing bodies in the external reporting plan. CC ID 12923
    [Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 22 3.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor. Article 22 4.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 54 3.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor. Article 54 5.]
    Communicate Preventive
    Submit confidential treatment applications to interested personnel and affected parties. CC ID 16592 Communicate Preventive
    Include the reasons for objections to public disclosure in confidential treatment applications. CC ID 16594 Establish/Maintain Documentation Preventive
    Include contact information for the interested personnel and affected parties the report was filed with in the confidential treatment application. CC ID 16595 Establish/Maintain Documentation Preventive
    Include the information that was omitted in the confidential treatment application. CC ID 16593 Establish/Maintain Documentation Preventive
    Request extensions for submissions to governing bodies, as necessary. CC ID 16955 Process or Activity Preventive
    Analyze organizational objectives, functions, and activities. CC ID 00598 Monitor and Evaluate Occurrences Preventive
    Establish, implement, and maintain data governance and management practices. CC ID 14998
    [{training data} {validation data} {testing data} Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: Article 10 2.]
    Establish/Maintain Documentation Preventive
    Address shortcomings of the data sets in the data governance and management practices. CC ID 15087
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. Article 10 2.(h)]
    Establish/Maintain Documentation Preventive
    Include any shortcomings of the data sets in the data governance and management practices. CC ID 15086
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. Article 10 2.(h)]
    Establish/Maintain Documentation Preventive
    Include bias for data sets in the data governance and management practices. CC ID 15085
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; Article 10 2.(f)
    Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: appropriate measures to detect, prevent and mitigate possible biases identified according to point (f); Article 10 2.(g)]
    Establish/Maintain Documentation Preventive
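The bias examination and mitigation measures named in Article 10 2.(f) and (g) can be operationalised in code. A minimal sketch, assuming an illustrative data set with a `group` attribute and a binary `approved` outcome (neither field nor the 0.5 flag threshold is prescribed by the Regulation):

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Compute the favourable-outcome rate for each group in a data set."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for rec in records:
        g = rec[group_key]
        totals[g] += 1
        if rec[outcome_key]:
            favourable[g] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 flag a possible bias for further examination."""
    return min(rates.values()) / max(rates.values())

# Illustrative records only; a real examination would use the actual
# training, validation and testing data sets.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
rates = selection_rates(records, "group", "approved")
print(round(disparate_impact_ratio(rates), 2))  # → 0.5
```

A low ratio does not itself establish prohibited discrimination; it is one detection measure that would trigger the mitigation step in point (g).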
    Include the data source in the data governance and management practices. CC ID 17211
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; Article 10 2.(b)]
    Data and Information Management Preventive
    Include a data strategy in the data governance and management practices. CC ID 15304 Establish/Maintain Documentation Preventive
    Include data monitoring in the data governance and management practices. CC ID 15303 Establish/Maintain Documentation Preventive
    Include an assessment of the data sets in the data governance and management practices. CC ID 15084
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: an assessment of the availability, quantity and suitability of the data sets that are needed; Article 10 2.(e)]
    Establish/Maintain Documentation Preventive
    Include assumptions for the formulation of data sets in the data governance and management practices. CC ID 15083
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent; Article 10 2.(d)]
    Establish/Maintain Documentation Preventive
    Include data collection for data sets in the data governance and management practices. CC ID 15082
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; Article 10 2.(b)]
    Establish/Maintain Documentation Preventive
    Include data preparations for data sets in the data governance and management practices. CC ID 15081
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation; Article 10 2.(c)]
    Establish/Maintain Documentation Preventive
    Include design choices for data sets in the data governance and management practices. CC ID 15080
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the relevant design choices; Article 10 2.(a)]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a data classification scheme. CC ID 11628 Establish/Maintain Documentation Preventive
    Take into account the characteristics of the geographical, behavioral and functional setting for all datasets. CC ID 15046
    [Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used. Article 10 4.]
    Data and Information Management Preventive
    Establish, implement, and maintain a Quality Management framework. CC ID 07196 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a Quality Management policy. CC ID 13694
    [{put in place} Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: Article 17 1.]
    Establish/Maintain Documentation Preventive
    Include a commitment to satisfy applicable requirements in the Quality Management policy. CC ID 13700
    [Quality management system shall include at least the following aspects: a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system; Article 17 1.(a)]
    Establish/Maintain Documentation Preventive
    Tailor the Quality Management policy to support the organization's strategic direction. CC ID 13699
    [{quality management system} The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation. Article 17 2.]
    Establish/Maintain Documentation Preventive
    Include a commitment to continual improvement of the Quality Management system in the Quality Management policy. CC ID 13698 Establish/Maintain Documentation Preventive
    Document the measurements used by Quality Assurance and Quality Control testing. CC ID 07200
    [Quality management system shall include at least the following aspects: techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system; Article 17 1.(c)]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a Quality Management program. CC ID 07201
    [{put in place} Providers of high-risk AI systems shall: have a quality management system in place which complies with Article 17; Article 16 ¶ 1 (c)
    {put in place} Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: Article 17 1.]
    Establish/Maintain Documentation Preventive
    Notify affected parties and interested personnel of quality management system approvals that have been refused, suspended, or withdrawn. CC ID 15045
    [Each notified body shall inform the other notified bodies of: quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued; Article 45 2.(a)]
    Communicate Preventive
    Notify affected parties and interested personnel of quality management system approvals that have been issued. CC ID 15036
    [Each notified body shall inform the other notified bodies of: quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued; Article 45 2.(a)
    Notified bodies shall inform the notifying authority of the following: any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII; Article 45 1.(a)]
    Communicate Preventive
    Include quality objectives in the Quality Management program. CC ID 13693 Establish/Maintain Documentation Preventive
    Include monitoring and analysis capabilities in the quality management program. CC ID 17153 Monitor and Evaluate Occurrences Preventive
    Include records management in the quality management system. CC ID 15055
    [Quality management system shall include at least the following aspects: systems and procedures for record-keeping of all relevant documentation and information; Article 17 1.(k)]
    Establish/Maintain Documentation Preventive
    Include risk management in the quality management system. CC ID 15054
    [Quality management system shall include at least the following aspects: the risk management system referred to in Article 9; Article 17 1.(g)]
    Establish/Maintain Documentation Preventive
    Include data management procedures in the quality management system. CC ID 15052
    [Quality management system shall include at least the following aspects: systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems; Article 17 1.(f)]
    Establish/Maintain Documentation Preventive
    Include a post-market monitoring system in the quality management system. CC ID 15027
    [Quality management system shall include at least the following aspects: the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72; Article 17 1.(h)]
    Establish/Maintain Documentation Preventive
    Include operational roles and responsibilities in the quality management system. CC ID 15028
    [Quality management system shall include at least the following aspects: an accountability framework setting out the responsibilities of the management and other staff with regard to all the aspects listed in this paragraph. Article 17 1.(m)]
    Establish/Maintain Documentation Preventive
    Include resource management in the quality management system. CC ID 15026
    [Quality management system shall include at least the following aspects: resource management, including security-of-supply related measures; Article 17 1.(l)]
    Establish/Maintain Documentation Preventive
    Include communication protocols in the quality management system. CC ID 15025
    [Quality management system shall include at least the following aspects: the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties; Article 17 1.(j)]
    Establish/Maintain Documentation Preventive
    Include incident reporting procedures in the quality management system. CC ID 15023
    [Quality management system shall include at least the following aspects: procedures related to the reporting of a serious incident in accordance with Article 73; Article 17 1.(i)]
    Establish/Maintain Documentation Preventive
    Include technical specifications in the quality management system. CC ID 15021
    [Quality management system shall include at least the following aspects: technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be used to ensure that the high-risk AI system complies with those requirements; Article 17 1.(e)]
    Establish/Maintain Documentation Preventive
    Include system testing standards in the Quality Management program. CC ID 01018
    [Quality management system shall include at least the following aspects: techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system; Article 17 1.(b)
    {test procedure} Quality management system shall include at least the following aspects: examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out; Article 17 1.(d)]
    Establish/Maintain Documentation Preventive
    Include explanations, compensating controls, or risk acceptance in the Compliance Exceptions document. CC ID 01631
    [Where providers of high-risk AI systems or general-purpose AI models do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto. Article 41 5.]
    Establish/Maintain Documentation Preventive
    Document and evaluate the decision outcomes from the decision-making process. CC ID 06918
    [Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. Article 44 3. ¶ 1
    Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision. Article 52 5.]
    Establish/Maintain Documentation Preventive
  • Monitoring and measurement
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Monitoring and measurement CC ID 00636 IT Impact Zone IT Impact Zone
    Monitor the usage and capacity of critical assets. CC ID 14825 Monitor and Evaluate Occurrences Detective
    Monitor the usage and capacity of Information Technology assets. CC ID 00668
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Monitor and Evaluate Occurrences Detective
    Monitor systems for errors and faults. CC ID 04544 Monitor and Evaluate Occurrences Detective
    Report errors and faults to the appropriate personnel, as necessary. CC ID 14296
    [Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Communicate Corrective
    Establish, implement, and maintain monitoring and logging operations. CC ID 00637
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: monitoring the operation of high-risk AI systems referred to in Article 26(5). Article 12 2.(c)
    For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)]
    Log Management Detective
    Establish, implement, and maintain an audit and accountability policy. CC ID 14035 Establish/Maintain Documentation Preventive
    Include compliance requirements in the audit and accountability policy. CC ID 14103 Establish/Maintain Documentation Preventive
    Include coordination amongst entities in the audit and accountability policy. CC ID 14102 Establish/Maintain Documentation Preventive
    Include the purpose in the audit and accountability policy. CC ID 14100 Establish/Maintain Documentation Preventive
    Include roles and responsibilities in the audit and accountability policy. CC ID 14098 Establish/Maintain Documentation Preventive
    Include management commitment in the audit and accountability policy. CC ID 14097 Establish/Maintain Documentation Preventive
    Include the scope in the audit and accountability policy. CC ID 14096 Establish/Maintain Documentation Preventive
    Disseminate and communicate the audit and accountability policy to interested personnel and affected parties. CC ID 14095 Communicate Preventive
    Establish, implement, and maintain audit and accountability procedures. CC ID 14057 Establish/Maintain Documentation Preventive
    Disseminate and communicate the audit and accountability procedures to interested personnel and affected parties. CC ID 14137 Communicate Preventive
    Enable monitoring and logging operations on all assets that meet the organizational criteria to maintain event logs. CC ID 06312
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification; Article 12 2.(a)]
    Log Management Preventive
    Review and approve the use of continuous security management systems. CC ID 13181 Process or Activity Preventive
    Monitor and evaluate system telemetry data. CC ID 14929 Actionable Reports or Measurements Detective
    Establish, implement, and maintain an intrusion detection and prevention program. CC ID 15211 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain an intrusion detection and prevention policy. CC ID 15169 Establish/Maintain Documentation Preventive
    Monitor systems for inappropriate usage and other security violations. CC ID 00585
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)]
    Monitor and Evaluate Occurrences Detective
    Prohibit the sale or transfer of a debt caused by identity theft. CC ID 16983 Acquisition/Sale of Assets or Services Preventive
    Operationalize key monitoring and logging concepts to ensure the audit trails capture sufficient information. CC ID 00638
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: facilitating the post-market monitoring referred to in Article 72; and Article 12 2.(b)]
    Log Management Detective
    Establish, implement, and maintain an event logging policy. CC ID 15217 Establish/Maintain Documentation Preventive
    Include the system components that generate audit records in the event logging procedures. CC ID 16426 Data and Information Management Preventive
    Overwrite the oldest records when audit logging fails. CC ID 14308 Data and Information Management Preventive
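The overwrite-oldest behaviour described above is the classic ring-buffer pattern. A minimal sketch using only the Python standard library (the capacity of 3 is an illustrative assumption; real audit buffers are sized by retention policy):

```python
from collections import deque

# A bounded audit buffer: once capacity is reached, appending a new
# record silently overwrites the oldest one.
audit_buffer = deque(maxlen=3)
for i in range(5):
    audit_buffer.append(f"record-{i}")

# Only the three newest records survive.
print(list(audit_buffer))  # → ['record-2', 'record-3', 'record-4']
```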
    Include identity information of suspects in the suspicious activity report. CC ID 16648 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain log analysis tools. CC ID 17056 Technical Security Preventive
    Correlate log entries to security controls to verify the security control's effectiveness. CC ID 13207 Log Management Detective
    Identify cybersecurity events in event logs and audit logs. CC ID 13206 Technical Security Detective
    Enable logging for all systems that meet a traceability criteria. CC ID 00640
    [High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system. Article 12 1.]
    Log Management Detective
    Enable the logging capability to capture enough information to ensure the system is functioning according to its intended purpose throughout its life cycle. CC ID 15001 Configuration Preventive
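Article 12's requirement that high-risk AI systems technically allow automatic recording of events over their lifetime is commonly met with an append-only structured log. A minimal sketch, assuming illustrative event types and field names (not specified by the Regulation):

```python
import json
import time

class EventLog:
    """Append-only structured event log for an AI system.

    Sketch of Article 12-style traceability, not a reference
    implementation; production systems would persist entries durably
    and protect them against modification."""

    def __init__(self):
        self._entries = []

    def record(self, event_type, **details):
        entry = {
            "timestamp": time.time(),
            "event_type": event_type,
            "details": details,
        }
        # Serialise at write time so later mutation cannot alter history.
        self._entries.append(json.dumps(entry))
        return entry

    def entries(self):
        return [json.loads(e) for e in self._entries]

log = EventLog()
log.record("inference", input_id="req-001", risk_flag=False)
log.record("anomaly", input_id="req-002", reason="out-of-distribution input")
print(len(log.entries()))  # → 2
```

Recording risk-relevant events such as the `anomaly` entry above is what enables the situation-identification and post-market monitoring uses listed in Article 12 2.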
    Monitor for and react to when suspicious activities are detected. CC ID 00586
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Monitor and Evaluate Occurrences Detective
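One common building block for the Article 15 5. measures is integrity verification of frozen training artefacts, since undetected tampering is a precondition for many data- and model-poisoning attacks. A minimal sketch (the shard names and manifest layout are illustrative assumptions; it detects tampering, not poisoning introduced before the data set was frozen):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a training artefact."""
    return hashlib.sha256(data).hexdigest()

# Build a manifest of expected digests when the data set is frozen...
training_shards = {
    "shard-0": b"example training records",
    "shard-1": b"more training records",
}
manifest = {name: fingerprint(blob) for name, blob in training_shards.items()}

# ...then re-verify before each training run to detect tampering.
def verify(shards, manifest):
    """Return the names of shards whose contents no longer match."""
    return [name for name, blob in shards.items()
            if fingerprint(blob) != manifest.get(name)]

tampered = dict(training_shards, **{"shard-1": b"poisoned records"})
print(verify(training_shards, manifest), verify(tampered, manifest))
# → [] ['shard-1']
```

Defences against adversarial examples and model evasion require separate, model-level measures (for example input sanitisation and adversarial training) beyond this integrity check.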
    Erase payment applications when suspicious activity is confirmed. CC ID 12193 Technical Security Corrective
    Establish, implement, and maintain network monitoring operations. CC ID 16444 Monitor and Evaluate Occurrences Preventive
    Monitor and evaluate the effectiveness of detection tools. CC ID 13505 Investigate Detective
    Monitor and review retail payment activities, as necessary. CC ID 13541 Monitor and Evaluate Occurrences Detective
    Determine if high rates of retail payment activities are from Originating Depository Financial Institutions. CC ID 13546 Investigate Detective
    Review retail payment service reports, as necessary. CC ID 13545 Investigate Detective
    Include the correlation and analysis of information obtained during testing in the continuous monitoring program. CC ID 14250 Process or Activity Detective
    Log account usage times. CC ID 07099 Log Management Detective
    Log account usage durations. CC ID 12117 Monitor and Evaluate Occurrences Detective
    Report inappropriate usage of user accounts to the appropriate personnel. CC ID 14243 Communicate Detective
    Test compliance controls for proper functionality. CC ID 00660
    [High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section. Article 9 6.]
    Testing Detective
    Include error details, identifying the root causes, and mitigation actions in the testing procedures. CC ID 11827
    [Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a testing program. CC ID 00654
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; Article 55 1.(a)]
    Behavior Preventive
    Conduct Red Team exercises, as necessary. CC ID 12131 Technical Security Detective
    Establish, implement, and maintain a security assessment and authorization policy. CC ID 14031 Establish/Maintain Documentation Preventive
    Include coordination amongst entities in the security assessment and authorization policy. CC ID 14222 Establish/Maintain Documentation Preventive
    Include the scope in the security assessment and authorization policy. CC ID 14220 Establish/Maintain Documentation Preventive
    Include the purpose in the security assessment and authorization policy. CC ID 14219 Establish/Maintain Documentation Preventive
    Disseminate and communicate the security assessment and authorization policy to interested personnel and affected parties. CC ID 14218 Communicate Preventive
    Include management commitment in the security assessment and authorization policy. CC ID 14189 Establish/Maintain Documentation Preventive
    Include compliance requirements in the security assessment and authorization policy. CC ID 14183 Establish/Maintain Documentation Preventive
    Include roles and responsibilities in the security assessment and authorization policy. CC ID 14179 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain security assessment and authorization procedures. CC ID 14056 Establish/Maintain Documentation Preventive
    Disseminate and communicate security assessment and authorization procedures to interested personnel and affected parties. CC ID 14224 Communicate Preventive
    Test security systems and associated security procedures, as necessary. CC ID 11901
    [{testing in real-world conditions} Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5. Article 60 1. ¶ 1]
    Technical Security Detective
    Employ third parties to carry out testing programs, as necessary. CC ID 13178 Human Resources Management Preventive
    Enable security controls which were disabled to conduct testing. CC ID 17031 Testing Preventive
    Document improvement actions based on test results and exercises. CC ID 16840 Establish/Maintain Documentation Preventive
    Disable dedicated accounts after testing is complete. CC ID 17033 Testing Preventive
    Protect systems and data during testing in the production environment. CC ID 17198 Testing Preventive
    Delete personal data upon data subject's withdrawal from testing. CC ID 17238
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Data and Information Management Preventive
    Define the criteria to conduct testing in the production environment. CC ID 17197
    [{high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider has drawn up a real-world testing plan and submitted it to the market surveillance authority in the Member State where the testing in real world conditions is to be conducted; Article 60 4.(a)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the market surveillance authority in the Member State where the testing in real world conditions is to be conducted has approved the testing in real world conditions and the real-world testing plan; where the market surveillance authority has not provided an answer within 30 days, the testing in real world conditions and the real-world testing plan shall be understood to have been approved; where national law does not provide for a tacit approval, the testing in real world conditions shall remain subject to an authorisation; Article 60 4.(b)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider, with the exception of providers or prospective providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, and high-risk AI systems referred to in point 2 of Annex III has registered the testing in real world conditions in accordance with Article 71(4) with a Union-wide unique single identification number and with the information specified in Annex IX; the provider or prospective provider of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, has registered the testing in real-world conditions in the secure non-public section of the EU database according to Article 49(4), point (d), with a Union-wide unique single identification number and with the information specified therein; the provider or prospective provider of high-risk AI systems referred to in point 2 of Annex III has registered the testing in real-world conditions in accordance with Article 49(5); Article 60 4.(c)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider conducting the testing in real world conditions is established in the Union or has appointed a legal representative who is established in the Union; Article 60 4.(d)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: data collected and processed for the purpose of the testing in real world conditions shall be transferred to third countries only provided that appropriate and applicable safeguards under Union law are implemented; Article 60 4.(e)
    {high-risk AI systems} {outside AI regulatory sandbox} {no longer than necessary} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not longer than six months, which may be extended for an additional period of six months, subject to prior notification by the provider or prospective provider to the market surveillance authority, accompanied by an explanation of the need for such an extension; Article 60 4.(f)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the subjects of the testing in real world conditions who are persons belonging to vulnerable groups due to their age or disability, are appropriately protected; Article 60 4.(g)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the testing in real world conditions is effectively overseen by the provider or prospective provider, as well as by deployers or prospective deployers through persons who are suitably qualified in the relevant field and have the necessary capacity, training and authority to perform their tasks; Article 60 4.(j)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded. Article 60 4.(k)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more deployers or prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their decision to participate, and given the relevant instructions for use of the AI system referred to in Article 13; the provider or prospective provider and the deployer or prospective deployer shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions under this Regulation and under other applicable Union and national law; Article 60 4.(h)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the subjects of the testing in real world conditions have given informed consent in accordance with Article 61, or in the case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subjects, and their personal data shall be deleted after the test is performed; Article 60 4.(i)]
    Testing Preventive
    Obtain the data subject's consent to participate in testing in real-world conditions. CC ID 17232
    [For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation; Article 61 1.(a)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects’ participation; Article 61 1.(b)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification; Article 61 1.(c)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or decisions of the AI system; Article 61 1.(d)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4) point (c), and the contact details of the provider or its legal representative from whom further information can be obtained. Article 61 1.(e)]
    Behavior Preventive
    Suspend testing in a production environment, as necessary. CC ID 17231
    [Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Testing Preventive
    Define the test requirements for each testing program. CC ID 13177
    [The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. Article 9 8.]
    Establish/Maintain Documentation Preventive
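    Article 9(8) requires testing against prior defined metrics and probabilistic thresholds. A minimal sketch of what such a threshold gate might look like in practice follows; the metric names and threshold values are illustrative assumptions, not drawn from the Regulation:

    ```python
    # Illustrative sketch: gate a high-risk AI system release on
    # pre-defined metrics and probabilistic thresholds (Article 9(8)).
    # Metric names and threshold values below are assumptions for
    # demonstration only; real thresholds must be defined per system.

    THRESHOLDS = {
        "accuracy": 0.95,             # minimum acceptable accuracy
        "false_positive_rate": 0.02,  # maximum acceptable FPR
    }

    def evaluate(metrics: dict) -> dict:
        """Compare measured metrics against pre-defined thresholds.

        Returns a per-metric pass/fail map; any metric ending in
        '_rate' is treated as an upper bound, the rest as lower bounds.
        """
        results = {}
        for name, threshold in THRESHOLDS.items():
            measured = metrics[name]
            if name.endswith("_rate"):
                results[name] = measured <= threshold
            else:
                results[name] = measured >= threshold
        return results

    measured = {"accuracy": 0.97, "false_positive_rate": 0.01}
    report = evaluate(measured)
    print(report)                # per-metric pass/fail
    print(all(report.values())) # overall release gate
    ```

    The point of defining the thresholds before the test run, as the Article requires, is that the pass/fail decision is mechanical and auditable rather than negotiated after results are known.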
    Test in scope systems for segregation of duties, as necessary. CC ID 13906 Testing Detective
    Include test requirements for the use of production data in the testing program. CC ID 17201 Testing Preventive
    Include test requirements for the use of human subjects in the testing program. CC ID 16222
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Testing Preventive
    Test the in scope system in accordance with its intended purpose. CC ID 14961
    [The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. Article 9 8.
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; Article 55 1.(a)]
    Testing Preventive
    Perform network testing in accordance with organizational standards. CC ID 16448 Testing Preventive
    Notify interested personnel and affected parties prior to performing testing. CC ID 17034 Communicate Preventive
    Test user accounts in accordance with organizational standards. CC ID 16421 Testing Preventive
    Identify risk management measures when testing in scope systems. CC ID 14960
    [High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section. Article 9 6.]
    Process or Activity Detective
    Include mechanisms for emergency stops in the testing program. CC ID 14398 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain conformity assessment procedures. CC ID 15032
    [For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on: the internal control referred to in Annex VI; or Article 43 1.(a)
    For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on: the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII. Article 43 1.(b)
    In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where: Article 43 1. ¶ 1
    For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. Article 43 2.]
    Establish/Maintain Documentation Preventive
    Share conformity assessment results with affected parties and interested personnel. CC ID 15113
    [{high-risk artificial intelligence system} A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment. Article 6 4.
    Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive conformity assessment results. Article 45 3.]
    Communicate Preventive
    Notify affected parties and interested personnel of technical documentation assessment certificates that have been issued. CC ID 15112
    [Each notified body shall inform the other notified bodies of: Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued. Article 45 2.(b)
    Notified bodies shall inform the notifying authority of the following: any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII; Article 45 1.(a)]
    Communicate Preventive
    Notify affected parties and interested personnel of technical documentation assessment certificates that have been refused, withdrawn, suspended or restricted. CC ID 15111
    [Each notified body shall inform the other notified bodies of: Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued. Article 45 2.(b)]
    Communicate Preventive
    Create technical documentation assessment certificates in an official language. CC ID 15110
    [Certificates issued by notified bodies in accordance with Annex VII shall be drawn-up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established. Article 44 1.]
    Establish/Maintain Documentation Preventive
    Request an extension of the validity period of technical documentation assessment certificates, as necessary. CC ID 17228
    [Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid. Article 44 2.]
    Process or Activity Preventive
    Define the validity period for technical documentation assessment certificates. CC ID 17227
    [Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid. Article 44 2.]
    Process or Activity Preventive
    Opt out of third party conformity assessments when the system meets harmonized standards. CC ID 15096
    [Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter. Article 43 3. ¶ 3]
    Testing Preventive
    Perform conformity assessments, as necessary. CC ID 15095
    [Providers of high-risk AI systems shall: ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service; Article 16 ¶ 1 (f)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of the high-risk AI system; Article 23 1.(a)
    For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. Article 43 3. ¶ 1
    High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. Article 43 4. ¶ 1
    By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Testing Detective
    Define the test frequency for each testing program. CC ID 13176
    [{testing in real-world conditions} Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world conditions at any time before the placing on the market or the putting into service of the AI system on their own or in partnership with one or more deployers or prospective deployers. Article 60 2.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a port scan baseline for all in scope systems. CC ID 12134 Technical Security Detective
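    A port scan baseline is only useful if deviations from it are detected. A minimal sketch of a baseline comparison follows; the host name, port sets, and baseline store are hypothetical, and in practice the observed ports would come from a scanner such as nmap and a documented, approved baseline:

    ```python
    # Illustrative sketch: compare a current port scan against an
    # approved baseline and flag deviations (CC ID 12134). The
    # baseline and observed port sets here are hypothetical.

    BASELINE = {"web01": {22, 80, 443}}  # approved open ports per host

    def diff_against_baseline(host: str, observed: set) -> dict:
        """Return ports open beyond the baseline and baseline ports not seen."""
        expected = BASELINE.get(host, set())
        return {
            "unexpected": sorted(observed - expected),  # investigate these
            "missing": sorted(expected - observed),     # expected service down?
        }

    print(diff_against_baseline("web01", {22, 80, 443, 8080}))
    # an unexpected port such as 8080 should trigger investigation
    ```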
    Establish, implement, and maintain a stress test program for identification cards or badges. CC ID 15424 Establish/Maintain Documentation Preventive
    Use dedicated user accounts when conducting penetration testing. CC ID 13728 Testing Detective
    Remove dedicated user accounts after penetration testing is concluded. CC ID 13729 Testing Corrective
    Ensure protocols are free from injection flaws. CC ID 16401 Process or Activity Preventive
    Prevent adversaries from disabling or compromising security controls. CC ID 17057 Technical Security Preventive
    Establish, implement, and maintain a business line testing strategy. CC ID 13245 Establish/Maintain Documentation Preventive
    Include facilities in the business line testing strategy. CC ID 13253 Establish/Maintain Documentation Preventive
    Include electrical systems in the business line testing strategy. CC ID 13251 Establish/Maintain Documentation Preventive
    Include mechanical systems in the business line testing strategy. CC ID 13250 Establish/Maintain Documentation Preventive
    Include Heating Ventilation and Air Conditioning systems in the business line testing strategy. CC ID 13248 Establish/Maintain Documentation Preventive
    Include emergency power supplies in the business line testing strategy. CC ID 13247 Establish/Maintain Documentation Preventive
    Include environmental controls in the business line testing strategy. CC ID 13246 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a vulnerability management program. CC ID 15721 Establish/Maintain Documentation Preventive
    Conduct scanning activities in a test environment. CC ID 17036 Testing Preventive
    Disseminate and communicate the vulnerability scan results to interested personnel and affected parties. CC ID 16418 Communicate Preventive
    Maintain vulnerability scan reports as organizational records. CC ID 12092 Records Management Preventive
    Perform vulnerability scans prior to installing payment applications. CC ID 12192 Technical Security Detective
    Implement scanning tools, as necessary. CC ID 14282 Technical Security Detective
    Test the system for unvalidated input. CC ID 01318
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Testing Detective
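    Testing for unvalidated input, in the sense of CC ID 01318 and the adversarial-input measures of Article 15(5), amounts to probing an input-handling routine with malformed values and confirming each is rejected. A minimal sketch, in which both the validator and the malformed corpus are hypothetical examples:

    ```python
    # Illustrative sketch: probe an input-handling routine with
    # malformed values to confirm it rejects unvalidated input
    # (CC ID 01318). Validator and corpus are hypothetical.

    def validate_age(value) -> int:
        """Accept only integers in a plausible human age range."""
        if not isinstance(value, int) or isinstance(value, bool):
            raise ValueError("age must be an integer")
        if not 0 <= value <= 150:
            raise ValueError("age out of range")
        return value

    MALFORMED = ["42; DROP TABLE users", -1, 10**9, None, 3.14, True]

    def probe(validator, corpus) -> list:
        """Return inputs the validator wrongly accepted (should be empty)."""
        accepted = []
        for value in corpus:
            try:
                validator(value)
                accepted.append(value)
            except ValueError:
                pass
        return accepted

    print(probe(validate_age, MALFORMED))  # expect: [] (all rejected)
    ```

    Any non-empty result from `probe` documents a concrete validation gap to remediate before the system is placed on the market or put into service.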
    Approve the vulnerability management program. CC ID 15722 Process or Activity Preventive
    Assign ownership of the vulnerability management program to the appropriate role. CC ID 15723 Establish Roles Preventive
    Document and maintain test results. CC ID 17028
    [{high-risk artificial intelligence system} A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment. Article 6 4.]
    Testing Preventive
    Include the pass or fail test status in the test results. CC ID 17106 Establish/Maintain Documentation Preventive
    Include time information in the test results. CC ID 17105 Establish/Maintain Documentation Preventive
    Include a description of the system tested in the test results. CC ID 17104 Establish/Maintain Documentation Preventive
    Disseminate and communicate the test results to interested personnel and affected parties. CC ID 17103
    [{fundamental rights impact assessment} Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify. Article 27 3.]
    Communicate Preventive
    Disallow the use of payment applications when a vulnerability scan report indicates vulnerabilities are present. CC ID 12188 Configuration Corrective
    Establish, implement, and maintain an exception management process for vulnerabilities that cannot be remediated. CC ID 13859 Technical Security Corrective
    Establish, implement, and maintain a compliance monitoring policy. CC ID 00671
    [The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities. Article 72 2.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a metrics policy. CC ID 01654 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain an approach for compliance monitoring. CC ID 01653 Establish/Maintain Documentation Preventive
Monitor personnel and third parties for compliance with the organizational compliance framework. CC ID 04726 Monitor and Evaluate Occurrences Detective
    Identify and document instances of non-compliance with the compliance framework. CC ID 06499
    [Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.]
    Establish/Maintain Documentation Preventive
    Correct compliance violations. CC ID 13515
    [Providers of high-risk AI systems shall: take the necessary corrective actions and provide information as required in Article 20; Article 16 ¶ 1 (j)
    Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly. Article 20 1.
    {not be} A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken. Article 24 4.
    Where, in the course of that evaluation, the market surveillance authority or, where applicable the market surveillance authority in cooperation with the national public authority referred to in Article 77(1), finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation. Article 79 2. ¶ 2
    The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. Article 79 4.
    {high-risk AI system} Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period the market surveillance authority may prescribe. Article 80 2.
    The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99. Article 80 4.
    The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. Article 80 5.
    The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1. Article 82 2.
    Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe: Article 83 1.
    Where, having performed an evaluation under Article 79, after consulting the relevant national public authority referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system complies with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe. Article 82 1.]
    Process or Activity Corrective
    Establish, implement, and maintain disciplinary action notices. CC ID 16577 Establish/Maintain Documentation Preventive
    Include a copy of the order in the disciplinary action notice. CC ID 16606 Establish/Maintain Documentation Preventive
    Include the sanctions imposed in the disciplinary action notice. CC ID 16599 Establish/Maintain Documentation Preventive
    Include the effective date of the sanctions in the disciplinary action notice. CC ID 16589 Establish/Maintain Documentation Preventive
    Include the requirements that were violated in the disciplinary action notice. CC ID 16588 Establish/Maintain Documentation Preventive
    Include responses to charges from interested personnel and affected parties in the disciplinary action notice. CC ID 16587 Establish/Maintain Documentation Preventive
    Include the reasons for imposing sanctions in the disciplinary action notice. CC ID 16586 Establish/Maintain Documentation Preventive
    Disseminate and communicate the disciplinary action notice to interested personnel and affected parties. CC ID 16585 Communicate Preventive
    Include required information in the disciplinary action notice. CC ID 16584 Establish/Maintain Documentation Preventive
    Include a justification for actions taken in the disciplinary action notice. CC ID 16583 Establish/Maintain Documentation Preventive
    Include a statement on the conclusions of the investigation in the disciplinary action notice. CC ID 16582 Establish/Maintain Documentation Preventive
    Include the investigation results in the disciplinary action notice. CC ID 16581 Establish/Maintain Documentation Preventive
    Include a description of the causes of the actions taken in the disciplinary action notice. CC ID 16580 Establish/Maintain Documentation Preventive
    Include the name of the person responsible for the charges in the disciplinary action notice. CC ID 16579 Establish/Maintain Documentation Preventive
    Include contact information in the disciplinary action notice. CC ID 16578 Establish/Maintain Documentation Preventive
    Report on the percentage of needed external audits that have been completed and reviewed. CC ID 11632 Actionable Reports or Measurements Detective
    Convert data into standard units before reporting metrics. CC ID 15507 Process or Activity Corrective
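Normalising units before aggregation is a small, mechanical step; mixing units silently corrupts any mean or percentile. A minimal sketch, with an illustrative conversion table and latency readings (all values are assumptions):

```python
# Convert mixed-unit measurements to one standard unit (seconds)
# before computing any metric over them.
TO_SECONDS = {"ms": 0.001, "s": 1.0, "min": 60.0}

def to_standard(value, unit):
    return value * TO_SECONDS[unit]

readings = [(250, "ms"), (1.5, "s"), (0.02, "min")]
seconds = [to_standard(v, u) for v, u in readings]
mean_latency = sum(seconds) / len(seconds)
```

Rejecting unknown units (a `KeyError` here) is deliberate: a reading in an unrecognised unit should fail loudly rather than be averaged in raw.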
Establish, implement, and maintain an occupational health and safety management metrics program. CC ID 15915 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a privacy metrics program. CC ID 15494 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain environmental management system performance metrics. CC ID 15191 Actionable Reports or Measurements Preventive
    Establish, implement, and maintain waste management metrics. CC ID 16152 Actionable Reports or Measurements Preventive
    Establish, implement, and maintain emissions management metrics. CC ID 16145 Actionable Reports or Measurements Preventive
    Establish, implement, and maintain financial management metrics. CC ID 16749 Actionable Reports or Measurements Preventive
    Establish, implement, and maintain a network activity baseline. CC ID 13188 Technical Security Detective
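A network activity baseline reduces to learning the normal range of a traffic measure and flagging departures from it. A minimal sketch, assuming connections-per-minute samples and a three-sigma threshold (the sample values and `k=3` are illustrative assumptions):

```python
import statistics

# Learn mean and standard deviation from a training window of
# connections-per-minute, then flag observations more than
# k standard deviations above the baseline mean.
history = [52, 48, 50, 55, 47, 51, 49, 53, 50, 46]
mu = statistics.mean(history)
sigma = statistics.stdev(history)

def is_anomalous(observation, k=3.0):
    return observation > mu + k * sigma

print(is_anomalous(54), is_anomalous(120))  # → False True
```

A production baseline would be recomputed over a rolling window and broken out per host or service, but the detection logic is the same comparison.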
    Delay the reporting of incident management metrics, as necessary. CC ID 15501 Communicate Preventive
    Establish, implement, and maintain an Electronic Health Records measurement metrics program. CC ID 06221 Establish/Maintain Documentation Preventive
    Include transfer procedures in the log management program. CC ID 17077 Establish/Maintain Documentation Preventive
    Restrict access to logs to authorized individuals. CC ID 01342
    [Upon a reasoned request by a competent authority, providers shall also give the requesting competent authority, as applicable, access to the automatically generated logs of the high-risk AI system referred to in Article 12(1), to the extent such logs are under their control. Article 21 2.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider; Article 22 3.(c)]
    Log Management Preventive
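The access restriction above, combined with the Article 21/22 duty to produce logs to competent authorities on request, amounts to a role gate in front of log retrieval that is itself audited. A minimal sketch (role names and the audit mechanism are illustrative assumptions, not from the Regulation):

```python
# Role-based gate on log retrieval: only listed roles may read the
# automatically generated logs, and every attempt is recorded.
AUTHORIZED_ROLES = {"auditor", "market_surveillance_authority"}
access_audit = []

def read_logs(user, role, logs):
    permitted = role in AUTHORIZED_ROLES
    access_audit.append((user, role, permitted))
    if not permitted:
        raise PermissionError(f"role {role!r} may not access logs")
    return list(logs)

logs = ["2024-08-01T10:00Z event=inference id=42"]
read_logs("alice", "auditor", logs)       # permitted
try:
    read_logs("bob", "developer", logs)   # denied, but still recorded
except PermissionError:
    pass
```

Recording denied attempts alongside granted ones is what lets the provider later demonstrate that access was in fact restricted.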
    Establish, implement, and maintain a cross-organizational audit sharing agreement. CC ID 10595 Establish/Maintain Documentation Preventive
    Report compliance monitoring statistics to the Board of Directors and other critical stakeholders, as necessary. CC ID 00676
    [Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly. Article 20 1.
{not be} A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken. Article 24 4.]
    Actionable Reports or Measurements Corrective
  • Operational and Systems Continuity
    1
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
Mandated - bold    Implied - italic    Implementation - regular
TYPE    CLASS
    Execute fail-safe procedures when an emergency occurs. CC ID 07108
    [{backup plans} The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. Article 15 4. ¶ 2]
    Systems Continuity Preventive
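Article 15(4)'s "backup or fail-safe plans" translate, at the software level, into routing around a failed primary component instead of failing open. A minimal sketch, with an illustrative primary model and a conservative fallback (every name here is an assumption for illustration):

```python
# Fail-safe sketch: if the primary model raises, fall back to a
# conservative decision (defer to a human) rather than guessing.
def primary_model(x):
    raise RuntimeError("primary model unavailable")  # simulated outage

def fail_safe(x):
    """Conservative fallback: refuse/defer instead of predicting."""
    return {"decision": "defer_to_human", "input": x}

def classify(x):
    try:
        return primary_model(x)
    except Exception:
        return fail_safe(x)

result = classify({"id": 7})
```

The design choice worth noting is that the fallback does not attempt to approximate the primary's output; a fail-safe plan degrades to a safe state, which for a high-risk system usually means human review.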
  • Operational management
    196
    Operational management CC ID 00805 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain a Governance, Risk, and Compliance framework. CC ID 01406 Establish/Maintain Documentation Preventive
    Analyze the effect of the Governance, Risk, and Compliance capability to achieve organizational objectives. CC ID 12809
    [{is transparent} High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3. Article 13 1.]
    Audits and Risk Management Preventive
    Establish, implement, and maintain an information security program. CC ID 00812 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain operational control procedures. CC ID 00831 Establish/Maintain Documentation Preventive
    Include alternative actions in the operational control procedures. CC ID 17096
    [Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 53 4.
    Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 55 2.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a Standard Operating Procedures Manual. CC ID 00826
[{is accessible} {is comprehensible} High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers. Article 13 2.]
    Establish/Maintain Documentation Preventive
    Use systems in accordance with the standard operating procedures manual. CC ID 15049
    [Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 53 4.]
    Process or Activity Preventive
    Include system use information in the standard operating procedures manual. CC ID 17240 Establish/Maintain Documentation Preventive
    Include metrics in the standard operating procedures manual. CC ID 14988
    [The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. Article 15 3.]
    Establish/Maintain Documentation Preventive
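Declaring "levels of accuracy and the relevant accuracy metrics" presupposes computing them from a labelled evaluation set. A minimal sketch of the standard confusion-matrix metrics for a binary classifier (the label vectors are illustrative test data):

```python
# Accuracy metrics of the kind declared in the instructions for use.
def confusion(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
tp, tn, fp, fn = confusion(y_true, y_pred)
accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
```

Which metric is "relevant" depends on the intended purpose: for a screening system, recall on the affected group may matter more than overall accuracy, which is why Article 13(3)(b)(v) also asks for performance on specific groups.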
    Include maintenance measures in the standard operating procedures manual. CC ID 14986
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Establish/Maintain Documentation Preventive
    Include logging procedures in the standard operating procedures manual. CC ID 17214
    [The instructions for use shall contain at least the following information: where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12. Article 13 3.(f)]
    Establish/Maintain Documentation Preventive
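For deployers to "collect, store and interpret the logs in accordance with Article 12", each automatically generated record needs a machine-interpretable shape. A minimal sketch using JSON lines (the field names and system identifier are illustrative, not mandated by the Regulation):

```python
import datetime
import json

# One automatically generated log record per event, serialised as a
# JSON line so stored logs can be parsed back for later analysis.
def log_event(system_id, event, outcome):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        "outcome": outcome,
    }
    return json.dumps(record)

line = log_event("hr-screening-v2", "inference", "flagged_for_review")
parsed = json.loads(line)  # a stored line round-trips for interpretation
```

One record per line keeps collection append-only and lets interpretation tooling stream the store without loading it whole.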
    Include the expected lifetime of the system in the standard operating procedures manual. CC ID 14984
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Establish/Maintain Documentation Preventive
    Include resources in the standard operating procedures manual. CC ID 17212
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Establish/Maintain Documentation Preventive
    Include technical measures used to interpret output in the standard operating procedures manual. CC ID 14982
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output; Article 13 3.(b)(iv)
    The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately; Article 13 3.(b)(vii)
    The instructions for use shall contain at least the following information: the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers; Article 13 3.(d)]
    Establish/Maintain Documentation Preventive
    Include human oversight measures in the standard operating procedures manual. CC ID 17213
    [The instructions for use shall contain at least the following information: the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers; Article 13 3.(d)]
    Establish/Maintain Documentation Preventive
    Include predetermined changes in the standard operating procedures manual. CC ID 14977
    [The instructions for use shall contain at least the following information: the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; Article 13 3.(c)]
    Establish/Maintain Documentation Preventive
    Include specifications for input data in the standard operating procedures manual. CC ID 14975
    [{training data} {validation data} {testing data} The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system; Article 13 3.(b)(vi)
    Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system. Article 26 4.]
    Establish/Maintain Documentation Preventive
    Include risks to health and safety or fundamental rights in the standard operating procedures manual. CC ID 14973
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2); Article 13 3.(b)(iii)]
    Establish/Maintain Documentation Preventive
    Include circumstances that may impact the system in the standard operating procedures manual. CC ID 14972
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; Article 13 3.(b)(ii)]
    Establish/Maintain Documentation Preventive
    Include what the system was tested and validated for in the standard operating procedures manual. CC ID 14969
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; Article 13 3.(b)(ii)]
    Establish/Maintain Documentation Preventive
    Adhere to operating procedures as defined in the Standard Operating Procedures Manual. CC ID 06328
    [{technical measures} Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6. Article 26 1.]
    Business Processes Preventive
    Update operating procedures that contribute to user errors. CC ID 06935 Establish/Maintain Documentation Corrective
    Include the intended purpose in the standard operating procedures manual. CC ID 14967
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: its intended purpose; Article 13 3.(b)(i)]
    Establish/Maintain Documentation Preventive
    Include information on system performance in the standard operating procedures manual. CC ID 14965
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: Article 13 3.(b)
    The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used; Article 13 3.(b)(v)]
    Establish/Maintain Documentation Preventive
    Include contact details in the standard operating procedures manual. CC ID 14962
    [The instructions for use shall contain at least the following information: the identity and the contact details of the provider and, where applicable, of its authorised representative; Article 13 3.(a)]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain information sharing agreements. CC ID 15645 Business Processes Preventive
    Provide support for information sharing activities. CC ID 15644 Process or Activity Preventive
    Include that explicit management authorization must be given for the use of all technologies and their documentation in the Acceptable Use Policy. CC ID 01351
    [The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities. Article 46 3.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain an Intellectual Property Right program. CC ID 00821
    [Providers of general-purpose AI models shall: put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; Article 53 1.(c)]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain Intellectual Property Rights protection procedures. CC ID 11512
    [Providers of general-purpose AI models shall: put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; Article 53 1.(c)]
    Establish/Maintain Documentation Preventive
    Implement and comply with the Governance, Risk, and Compliance framework. CC ID 00818 Business Processes Preventive
    Comply with all implemented policies in the organization's compliance framework. CC ID 06384
    [In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. Article 8 1.
    Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I. Article 8 2.
    Providers of high-risk AI systems shall: ensure that their high-risk AI systems are compliant with the requirements set out in Section 2; Article 16 ¶ 1 (a)
    Providers of high-risk AI systems shall: comply with the registration obligations referred to in Article 49(1); Article 16 ¶ 1 (i)
    Providers of high-risk AI systems shall: ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882. Article 16 ¶ 1 (l)
    {quality management system} The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation. Article 17 2.
    For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the obligation to put in place a quality management system, with the exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services law. To that end, any harmonised standards referred to in Article 40 shall be taken into account. Article 17 4.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct. Article 22 3.(e)
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter V, Sections 2 and 3, of this Regulation, to the extent that those standards cover those requirements or obligations. Article 40 1.
    High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations. Article 41 3.
    High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to comply with the relevant requirements laid down in Article 10(4). Article 42 1.
    High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements. Article 42 2.
    {keep up to date} By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate. Article 47 4.
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and Article 53 1.(b)(i)
    Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 55 2.
    For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law. Article 26 5. ¶ 2]
    Establish/Maintain Documentation Preventive
    Classify assets according to the Asset Classification Policy. CC ID 07186
    [A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; Article 51 1.(a)
    A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII. Article 51 1.(b)]
    Establish Roles Preventive
    Classify virtual systems by type and purpose. CC ID 16332 Business Processes Preventive
    Establish, implement, and maintain a customer service program. CC ID 00846 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain an Incident Management program. CC ID 00853 Business Processes Preventive
    Include incident monitoring procedures in the Incident Management program. CC ID 01207 Establish/Maintain Documentation Preventive
    Contain the incident to prevent further loss. CC ID 01751 Process or Activity Corrective
    Record actions taken by investigators during a forensic investigation in the forensic investigation report. CC ID 07095 Establish/Maintain Documentation Preventive
    Include corrective actions in the forensic investigation report. CC ID 17070
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Establish/Maintain Documentation Preventive
    Share incident information with interested personnel and affected parties. CC ID 01212
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; Article 55 1.(c)
    Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.
    Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred. Article 73 1.
    For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulations (EU) 2017/745 and (EU) 2017/746, the notification of serious incidents shall be limited to those referred to in Article 3, point (49)(c) of this Regulation, and shall be made to the national competent authority chosen for that purpose by the Member States where the incident occurred. Article 73 10.]
    Data and Information Management Corrective
    Redact restricted data before sharing incident information. CC ID 16994 Data and Information Management Preventive
    Notify interested personnel and affected parties of an extortion payment in the event of a cybersecurity event. CC ID 16539 Communicate Preventive
    Notify interested personnel and affected parties of the reasons for the extortion payment, along with any alternative solutions. CC ID 16538 Communicate Preventive
    Document the justification for not reporting incidents to interested personnel and affected parties. CC ID 16547 Establish/Maintain Documentation Preventive
    Report to breach notification organizations the reasons for a delay in sending breach notifications. CC ID 16797 Communicate Preventive
    Report to breach notification organizations the distribution list to which the organization will send data loss event notifications. CC ID 16782 Communicate Preventive
    Conduct incident investigations, as necessary. CC ID 13826
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Process or Activity Detective
    Analyze the behaviors of individuals involved in the incident during incident investigations. CC ID 14042 Investigate Detective
    Identify the affected parties during incident investigations. CC ID 16781 Investigate Detective
    Interview suspects during incident investigations, as necessary. CC ID 14041 Investigate Detective
    Interview victims and witnesses during incident investigations, as necessary. CC ID 14038 Investigate Detective
    Document any potential harm in the incident finding when concluding the incident investigation. CC ID 13830 Establish/Maintain Documentation Preventive
    Destroy investigative materials, as necessary. CC ID 17082 Data and Information Management Preventive
    Log incidents in the Incident Management audit log. CC ID 00857
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; Article 55 1.(c)]
    Establish/Maintain Documentation Preventive
    Include who the incident was reported to in the incident management audit log. CC ID 16487 Log Management Preventive
    Include the information that was exchanged in the incident management audit log. CC ID 16995 Log Management Preventive
    Include corrective actions in the incident management audit log. CC ID 16466 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain incident reporting time frame standards. CC ID 12142
    [The report referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident. Article 73 2. ¶ 1
    {be no later than} Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident as defined in Article 3, point (49)(b), the report referred to in paragraph 1 of this Article shall be provided immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of that incident. Article 73 3.
    {be no later than} Notwithstanding paragraph 2, in the event of the death of a person, the report shall be provided immediately after the provider or the deployer has established, or as soon as it suspects, a causal relationship between the high-risk AI system and the serious incident, but not later than 10 days after the date on which the provider or, where applicable, the deployer becomes aware of the serious incident. Article 73 4.]
    Communicate Preventive
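    The Article 73 reporting windows quoted above (15 days by default, two days for a widespread infringement, 10 days in the event of a death) can be expressed as a simple deadline calculation. This is an illustrative sketch only; the category names are our own shorthand, and the Regulation additionally requires reporting "immediately" once a causal link is established, so these dates are outer limits, not targets.

```python
from datetime import date, timedelta

# Outer reporting limits per Article 73(2)-(4); labels are illustrative.
REPORTING_WINDOWS = {
    "serious_incident": 15,        # Article 73(2): not later than 15 days
    "widespread_infringement": 2,  # Article 73(3): not later than two days
    "death": 10,                   # Article 73(4): not later than 10 days
}

def report_deadline(awareness_date: date, category: str) -> date:
    """Latest permissible report date counted from the day the provider
    or deployer becomes aware of the serious incident."""
    return awareness_date + timedelta(days=REPORTING_WINDOWS[category])

print(report_deadline(date(2025, 3, 1), "widespread_infringement"))
# -> 2025-03-03
```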
    Establish, implement, and maintain an Incident Response program. CC ID 00579 Establish/Maintain Documentation Preventive
    Create an incident response report. CC ID 12700 Establish/Maintain Documentation Preventive
    Include a root cause analysis of the incident in the incident response report. CC ID 12701
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Establish/Maintain Documentation Preventive
    Mitigate reported incidents. CC ID 12973
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Actionable Reports or Measurements Preventive
    Shut down systems when an integrity violation is detected, as necessary. CC ID 10679
    [Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Technical Security Corrective
    Refrain from altering the state of compromised systems when collecting digital forensic evidence. CC ID 08671
    [The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action. Article 73 6. ¶ 2]
    Investigate Detective
    Establish, implement, and maintain a disability accessibility program. CC ID 06191
    [The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements. Article 50 5.]
    Establish/Maintain Documentation Preventive
    Separate foreground from background when designing and building content. CC ID 15125 Systems Design, Build, and Implementation Preventive
    Establish, implement, and maintain web content accessibility guidelines. CC ID 14949 Establish/Maintain Documentation Preventive
    Conduct web accessibility testing in accordance with organizational standards. CC ID 16950 Testing Preventive
    Configure focus order in a meaningful way. CC ID 15206 Configuration Preventive
    Configure keyboard interfaces to provide all the functionality that is available for the associated website content. CC ID 15151 Configuration Preventive
    Programmatically set the states, properties, and values of user interface components. CC ID 15150 Configuration Preventive
    Notify users of changes to user interface components. CC ID 15149 Configuration Preventive
    Refrain from designing content in a way that is known to cause seizures or physical reactions. CC ID 15203 Configuration Preventive
    Configure content to be compatible with various user agents and assistive technologies. CC ID 15147 Configuration Preventive
    Configure content to be interpreted by various user agents and assistive technologies. CC ID 15146 Configuration Preventive
    Provide captions for prerecorded audio content. CC ID 15204 Configuration Preventive
    Ensure user interface component names include the same text that is presented visually. CC ID 15145 Configuration Preventive
    Configure user interface components to operate device motion and user motion functionality. CC ID 15144 Configuration Preventive
    Configure single pointer functionality to organizational standards. CC ID 15143 Configuration Preventive
    Configure the keyboard operable user interface so the keyboard focus indicator is visible. CC ID 15142 Configuration Preventive
    Provide users with alternative methods to inputting data in online forms. CC ID 16951 Data and Information Management Preventive
    Provide users the ability to disable user motion and device motion. CC ID 15205 Configuration Preventive
    Refrain from duplicating attributes in website content using markup languages. CC ID 15141 Configuration Preventive
    Use unique identifiers when using markup languages. CC ID 15140 Configuration Preventive
    Programmatically determine the status messages to convey to users. CC ID 15139 Configuration Preventive
    Advise users on how to navigate content. CC ID 15138 Communicate Preventive
    Allow users the ability to move focus with the keyboard. CC ID 15136 Configuration Preventive
    Avoid using images of text to convey information. CC ID 15202 Configuration Preventive
    Allow users to pause, stop, or hide moving, blinking or scrolling information. CC ID 15135 Configuration Preventive
    Display website content without loss of information or functionality and without requiring scrolling in two dimensions. CC ID 15134 Configuration Preventive
    Use images of text to convey information, as necessary. CC ID 15132 Configuration Preventive
    Refrain from using color as the only visual means to distinguish content. CC ID 15130 Configuration Preventive
    Refrain from restricting content to a single display orientation. CC ID 15129 Configuration Preventive
    Use text to convey information on web pages, as necessary. CC ID 15128 Configuration Preventive
    Configure the contrast ratio to organizational standards. CC ID 15127 Configuration Preventive
    Programmatically determine the correct reading sequence. CC ID 15126 Configuration Preventive
    Refrain from creating instructions for content that rely on sensory characteristics of components. CC ID 15124 Establish/Maintain Documentation Preventive
    Programmatically determine the information, structure, and relationships conveyed through the presentation. CC ID 15123 Configuration Preventive
    Provide audio descriptions for all prerecorded video content. CC ID 15122 Configuration Preventive
    Provide alternative forms of CAPTCHA, as necessary. CC ID 15121 Configuration Preventive
    Provide alternatives for time-based media. CC ID 15119 Configuration Preventive
    Configure non-text content to be ignored by assistive technology when it is pure decoration or not presented to users. CC ID 15118 Configuration Preventive
    Configure non-text content with a descriptive identification. CC ID 15117 Configuration Preventive
    Provide text alternatives for non-text content, as necessary. CC ID 15078 Configuration Preventive
    Implement functionality for a single pointer so an up-event reverses the outcome of a down-event. CC ID 15076 Configuration Preventive
    Implement functionality for a single pointer so the completion of a down-event is essential. CC ID 15075 Configuration Preventive
    Implement functionality to abort or undo the function when using a single pointer. CC ID 15074 Configuration Preventive
    Implement functionality for a single pointer so the up-event signals the completion of a function. CC ID 15073 Configuration Preventive
    Implement functionality for a single pointer so the down-event is not used to execute any part of a function. CC ID 15072 Configuration Preventive
    Allow users the ability to use various input devices. CC ID 15071 Configuration Preventive
    Implement mechanisms to allow users the ability to bypass repeated blocks of website content. CC ID 15068 Configuration Preventive
    Implement flashes below the general flash and red flash thresholds on web pages. CC ID 15067 Configuration Preventive
    Configure content to be presentable in a manner that is clear and conspicuous to all users. CC ID 15066
    [The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements. Article 50 5.]
    Configuration Preventive
    Configure non-text content that is a control or accepts user input with a name that describes its purpose. CC ID 15065 Configuration Preventive
    Allow users the ability to modify time limits in website content a defined number of times. CC ID 15064 Configuration Preventive
    Provide users with a simple method to extend the time limits set by content. CC ID 15063 Configuration Preventive
    Allow users the ability to disable time limits set by content. CC ID 15062 Configuration Preventive
    Warn users before time limits set by content are about to expire. CC ID 15061 Configuration Preventive
    Allow users the ability to modify time limits set by website or native applications. CC ID 15060 Configuration Preventive
    Provide users time to read and use website content, as necessary. CC ID 15059 Configuration Preventive
    Activate keyboard shortcuts on user interface components only when the appropriate component has focus. CC ID 15058 Configuration Preventive
    Provide users a mechanism to turn off keyboard shortcuts, as necessary. CC ID 15057 Configuration Preventive
    Configure all functionality to be accessible with a keyboard. CC ID 15056 Configuration Preventive
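    One control in the list above, "Configure the contrast ratio to organizational standards" (CC ID 15127), has a well-defined formula behind the usual organizational standard: the WCAG 2.x contrast ratio. A minimal sketch of that computation, assuming 8-bit sRGB inputs:

```python
# WCAG 2.x contrast ratio: linearize sRGB channels, compute relative
# luminance, then ratio (L_lighter + 0.05) / (L_darker + 0.05).
def _linearize(c8: int) -> float:
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb) -> float:
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Black text on a white background reaches the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # -> 21.0
```

An organizational standard would then assert, for example, a minimum ratio of 4.5:1 for normal text (the WCAG AA threshold).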
    Establish, implement, and maintain a registration database. CC ID 15048
    [The data listed in Sections A and B of Annex VIII shall be entered into the EU database by the provider or, where applicable, by the authorised representative. Article 71 2.
    The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or who acts on behalf of, a public authority, agency or body, in accordance with Article 49(3) and (4). Article 71 3.]
    Data and Information Management Preventive
    Grant registration after competence and integrity is verified. CC ID 16802 Behavior Detective
    Implement access restrictions for information in the registration database. CC ID 17235
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Data and Information Management Preventive
    Include registration numbers in the registration database. CC ID 17272 Data and Information Management Preventive
    Include electronic signatures in the registration database. CC ID 17281 Data and Information Management Preventive
    Include other registrations in the registration database. CC ID 17274 Data and Information Management Preventive
    Include the owners and shareholders in the registration database. CC ID 17273 Data and Information Management Preventive
    Include contact details in the registration database. CC ID 15109
    [The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. Article 71 5.]
    Establish/Maintain Documentation Preventive
    Include personal data in the registration database, as necessary. CC ID 15108
    [The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. Article 71 5.]
    Establish/Maintain Documentation Preventive
    Publish the registration information in the registration database in an official language. CC ID 17280 Data and Information Management Preventive
    Make the registration database available to the public. CC ID 15107
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Communicate Preventive
    Maintain non-public information in a protected area in the registration database. CC ID 17237
    [For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable, referred to in: Article 49 4.]
    Data and Information Management Preventive
    Impose conditions or restrictions on the termination or suspension of a registration. CC ID 16796 Business Processes Preventive
    Publish the IP addresses being used by each external customer in the registration database. CC ID 16403 Data and Information Management Preventive
    Update registration information upon changes. CC ID 17275 Data and Information Management Preventive
    Maintain the accuracy of registry information published in registration databases. CC ID 16402
    [For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct. Article 22 3.(e)]
    Data and Information Management Preventive
    Maintain ease of use for information in the registration database. CC ID 17239
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Data and Information Management Preventive
    Include all required information in the registration database. CC ID 15106 Data and Information Management Preventive
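    The registration-database controls above (registration numbers, contact details, updates upon changes, accuracy of registry information) can be sketched as a minimal record type. This is a hedged illustration shaped only by the controls listed here; the actual Annex VIII data set is broader, and all field names below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RegistrationRecord:
    """Illustrative registry entry; field names are our own."""
    registration_number: str   # cf. CC ID 17272
    contact_name: str          # cf. CC ID 15109 / Article 71(5)
    contact_email: str
    public: bool = True        # Article 49(4) systems sit in a non-public section
    last_updated: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def update(self, **changes) -> None:
        """Update registration information upon changes (cf. CC ID 17275)."""
        for key, value in changes.items():
            setattr(self, key, value)
        self.last_updated = datetime.now(timezone.utc)

rec = RegistrationRecord("EU-0001", "Jane Doe", "jane@example.org")
rec.update(contact_email="j.doe@example.org")
print(rec.contact_email)  # -> j.doe@example.org
```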
    Establish, implement, and maintain an artificial intelligence system. CC ID 14943
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.]
    Systems Design, Build, and Implementation Preventive
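Article 50(2) leaves the marking technique open (watermarks, embedded metadata, fingerprints), requiring only that outputs be machine-readable and detectable as artificially generated. As a non-normative illustration only, the sketch below attaches a machine-readable provenance record to generated content and shows how a verifier might detect it; all names (`mark_synthetic`, the `ai.synthetic` key) are hypothetical and not drawn from the Regulation or any standard.

```python
import json
import hashlib

# Hypothetical marker key; a real deployment would follow a recognised
# provenance standard (e.g. C2PA-style manifests) rather than this ad-hoc scheme.
MARKER_KEY = "ai.synthetic"

def mark_synthetic(content: bytes, model_id: str) -> dict:
    """Wrap generated content in a machine-readable provenance record."""
    return {
        MARKER_KEY: True,                               # detectable flag (cf. Art. 50(2))
        "model": model_id,                              # which system produced it
        "sha256": hashlib.sha256(content).hexdigest(),  # binds the record to the content
        "payload": content.decode("utf-8", errors="replace"),
    }

def is_marked_synthetic(record: dict) -> bool:
    """A verifier only needs to parse the record, not understand the content."""
    return bool(record.get(MARKER_KEY))

# Round-trip through JSON to show the marking survives serialisation.
record = mark_synthetic(b"generated caption", model_id="demo-model")
wire = json.dumps(record)
assert is_marked_synthetic(json.loads(wire))
```

A sidecar record like this is one of the weakest forms of marking (it is trivially stripped); robust watermarking as contemplated by the Article would embed the signal in the content itself.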
    Provide affected parties with the role of artificial intelligence in decision making. CC ID 17236
    [Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken. Article 86 1.]
    Communicate Preventive
    Provide the reasons for adverse decisions made by artificial intelligence systems. CC ID 17253 Process or Activity Preventive
    Authorize artificial intelligence systems for use under defined conditions. CC ID 17210
    [{not be taken} {adverse effect} The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system. Article 5 3. ¶ 2]
    Process or Activity Preventive
    Refrain from notifying users when images, videos, or audio have been artificially generated or manipulated if use of the artificial intelligence system is authorized by law. CC ID 15051
    [Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Article 50 4. ¶ 1
    Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. Article 50 4. ¶ 2]
    Communicate Preventive
    Establish, implement, and maintain a post-market monitoring system. CC ID 15050
    [Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system. Article 72 1.
    The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities. Article 72 2.
    The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). Article 72 3.]
    Monitor and Evaluate Occurrences Preventive
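Article 72(2) requires active, systematic collection and analysis of lifetime performance data so the provider can evaluate continued compliance. A minimal sketch of that collect-and-analyse loop, assuming a single accuracy metric and a declared baseline (all class and field names are hypothetical):

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class MonitoringRecord:
    source: str          # e.g. "deployer-feedback" or "telemetry" (Art. 72(2) sources)
    accuracy: float      # observed performance metric for this sample
    incident: bool = False

@dataclass
class PostMarketMonitor:
    """Collects and analyses lifetime performance data (cf. Article 72(2))."""
    baseline_accuracy: float
    records: list = field(default_factory=list)

    def collect(self, record: MonitoringRecord) -> None:
        self.records.append(record)   # systematic documentation of each data point

    def analyse(self) -> dict:
        """Summarise continued compliance against the declared baseline."""
        if not self.records:
            return {"samples": 0, "incidents": 0, "compliant": True}
        avg = mean(r.accuracy for r in self.records)
        return {
            "samples": len(self.records),
            "mean_accuracy": avg,
            "incidents": sum(r.incident for r in self.records),
            "compliant": avg >= self.baseline_accuracy,
        }

monitor = PostMarketMonitor(baseline_accuracy=0.90)
monitor.collect(MonitoringRecord("deployer-feedback", accuracy=0.93))
monitor.collect(MonitoringRecord("telemetry", accuracy=0.88, incident=True))
summary = monitor.analyse()
assert summary["samples"] == 2 and summary["incidents"] == 1
```

The monitoring plan required by Article 72(3) would additionally fix, in the technical documentation, which metrics, sources, and thresholds a structure like this must track.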
    Include mitigation measures to address biased output during the development of artificial intelligence systems. CC ID 15047
    [High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures. Article 15 4. ¶ 3]
    Systems Design, Build, and Implementation Corrective
    Limit artificial intelligence systems authorizations to the time period until conformity assessment procedures are complete. CC ID 15043
    [By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Business Processes Preventive
    Terminate authorizations for artificial intelligence systems when conformity assessment procedures are complete. CC ID 15042 Business Processes Preventive
    Authorize artificial intelligence systems to be put into service for exceptional reasons while conformity assessment procedures are being conducted. CC ID 15039
    [In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.]
    Business Processes Preventive
    Discard the outputs of the artificial intelligence system when authorizations are denied. CC ID 17225
    [In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.]
    Process or Activity Preventive
    Assess the trustworthiness of artificial intelligence systems. CC ID 16319 Business Processes Detective
    Authorize artificial intelligence systems to be placed on the market for exceptional reasons while conformity assessment procedures are being conducted. CC ID 15037
    [By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Business Processes Preventive
    Withdraw authorizations that are unjustified. CC ID 15035 Business Processes Corrective
    Ensure the transport conditions for artificial intelligence systems refrain from compromising compliance. CC ID 15031
    [{storage conditions} Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2. Article 23 4.
    {storage conditions} Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2. Article 24 3.]
    Business Processes Detective
    Ensure the storage conditions for artificial intelligence systems refrain from compromising compliance. CC ID 15030
    [{storage conditions} Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2. Article 23 4.
    {storage conditions} Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2. Article 24 3.]
    Physical and Environmental Protection Detective
    Prohibit artificial intelligence systems from being placed on the market when they are not in compliance with the requirements. CC ID 15029
    [Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect. Article 24 2.
    Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect. Article 23 2.]
    Acquisition/Sale of Assets or Services Preventive
    Ensure the artificial intelligence system performs at an acceptable level of accuracy, robustness, and cybersecurity. CC ID 15024
    [High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. Article 15 1.
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. Article 55 1.(d)]
    Process or Activity Preventive
    Implement an acceptable level of accuracy, robustness, and cybersecurity in the development of artificial intelligence systems. CC ID 15022
    [High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. Article 15 1.]
    Systems Design, Build, and Implementation Preventive
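Article 15(1) couples accuracy with robustness and requires consistent performance across the lifecycle. One common way to operationalise this during development is to gate release on performance measured both on clean inputs and under input perturbation. The sketch below does this for a toy classifier; the model, data, noise level, and thresholds are all illustrative assumptions, not requirements from the Regulation.

```python
import random

random.seed(0)

def classify(x: float) -> int:
    """Toy stand-in for a model: predicts the positive class iff x >= 0."""
    return 1 if x >= 0 else 0

def accuracy(samples, noise=0.0):
    """Fraction of correct predictions, optionally under input perturbation."""
    correct = 0
    for x, label in samples:
        x_eval = x + random.uniform(-noise, noise)  # robustness probe
        correct += classify(x_eval) == label
    return correct / len(samples)

# Labelled points kept away from the decision boundary, so small
# perturbations should be tolerated by a robust model.
data = [(x / 10, 1 if x >= 0 else 0) for x in range(-50, 50) if abs(x) > 5]

clean = accuracy(data)
robust = accuracy(data, noise=0.2)
assert clean == 1.0        # accuracy gate (illustrative threshold)
assert robust >= 0.9       # robustness gate (illustrative threshold)
```

In practice the declared accuracy levels and metrics belong in the instructions for use (Article 15(3)), and the same gates would be re-run as part of post-market monitoring.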
    Take into account the nature of the situation when determining the possibility of using ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement. CC ID 15020
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used; Article 5 2.(a)]
    Process or Activity Preventive
    Notify users when images, videos, or audio on the artificial intelligence system has been artificially generated or manipulated. CC ID 15019
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.
    Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Article 50 4. ¶ 1
    Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. Article 50 4. ¶ 2]
    Communicate Preventive
    Refrain from notifying users of artificial intelligence systems using biometric categorization for law enforcement. CC ID 15017
    [{applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Communicate Preventive
    Use a remote biometric identification system under defined conditions. CC ID 15016
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    {post-remote biometric identification system} Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. Article 26 10. ¶ 1]
    Process or Activity Preventive
    Notify users when they are using an artificial intelligence system. CC ID 15015
    [Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply. Article 26 11.
    Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. Article 26 7.
    {applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Communicate Preventive
    Receive prior authorization for the use of a remote biometric identification system. CC ID 15014
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    {post-remote biometric identification system} Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. Article 26 10. ¶ 1]
    Business Processes Preventive
    Prohibit artificial intelligence systems that deploy subliminal techniques from being placed on the market. CC ID 15012 Acquisition/Sale of Assets or Services Preventive
    Prohibit artificial intelligence systems that use social scores for unfavorable treatment from being placed on the market. CC ID 15010 Acquisition/Sale of Assets or Services Preventive
    Prohibit artificial intelligence systems that evaluate or classify the trustworthiness of individuals from being placed on the market. CC ID 15008 Acquisition/Sale of Assets or Services Preventive
    Prohibit artificial intelligence systems that exploit vulnerabilities of a specific group of persons from being placed on the market. CC ID 15006 Acquisition/Sale of Assets or Services Preventive
    Refrain from making a decision based on system output unless verified by at least two natural persons. CC ID 15004
    [{not be taken} {adverse effect} The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system. Article 5 3. ¶ 2
    {human oversight} {not taken} For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. Article 14 5. ¶ 1
    The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate. Article 14 5. ¶ 2
    {not be used} {not be taken} {adverse effect} In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems. Article 26 10. ¶ 3]
    Business Processes Preventive
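Article 14(5) requires that, for the biometric systems it covers, no action follow from an identification unless it has been separately verified and confirmed by at least two natural persons. The decisive detail is "separately": two confirmations from the same person do not count. A minimal gate expressing that rule (function and parameter names are hypothetical):

```python
def confirm_identification(match_id: str, verifications: list) -> bool:
    """Allow action only after two distinct reviewers confirm (cf. Art. 14(5)).

    `verifications` is a list of (reviewer_name, confirmed) pairs.
    """
    confirmed_by = {name for name, ok in verifications if ok}
    return len(confirmed_by) >= 2   # two *different* natural persons required

# One reviewer, or the same reviewer twice, is not enough.
assert not confirm_identification("match-1", [("alice", True)])
assert not confirm_identification("match-1", [("alice", True), ("alice", True)])
# Two distinct confirmations unlock the decision.
assert confirm_identification("match-1", [("alice", True), ("bob", True)])
```

A production system would also have to record who verified what (for audit) and enforce that the reviewers hold the competence, training, and authority the Article demands; that organisational side cannot be captured in code alone.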
    Establish, implement, and maintain human oversight over artificial intelligence systems. CC ID 15003
    [High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use. Article 14 1.
    Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section. Article 14 2.
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: Article 14 3.
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; Article 14 3.(a)
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer. Article 14 3.(b)
    Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support. Article 26 2.]
    Behavior Preventive
    Implement measures to enable personnel assigned to human oversight to intervene or interrupt the operation of the artificial intelligence system. CC ID 15093
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state. Article 14 4.(e)]
    Process or Activity Preventive
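Article 14(4)(e) asks for a 'stop' button or similar procedure that lets an overseer halt the system in a safe state. In software, one widely used pattern for this is a cooperative stop flag that the processing loop checks between work items, so the system finishes its current step and shuts down cleanly rather than being killed mid-operation. A sketch of that pattern (all names are illustrative):

```python
import threading
import time

stop_requested = threading.Event()   # the 'stop button' (cf. Art. 14(4)(e))
state = {"halted_safely": False, "processed": 0}

def processing_loop():
    """Checks the stop flag between work items so it can halt in a safe state."""
    while not stop_requested.is_set():
        state["processed"] += 1      # one unit of (simulated) inference work
        time.sleep(0.001)
    state["halted_safely"] = True    # reach a defined safe state before exiting

worker = threading.Thread(target=processing_loop)
worker.start()
time.sleep(0.01)
stop_requested.set()                 # the human overseer presses stop
worker.join(timeout=1.0)
assert state["halted_safely"] and not worker.is_alive()
```

The cooperative design is the point: an uncheckable hard kill can leave the system in an undefined state, which is exactly what "come to a halt in a safe state" rules out.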
    Implement measures to enable personnel assigned to human oversight to be aware of the possibility of automatically relying or over-relying on outputs to make decisions. CC ID 15091
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; Article 14 4.(b)]
    Human Resources Management Preventive
    Implement measures to enable personnel assigned to human oversight to interpret output correctly. CC ID 15089
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available; Article 14 4.(c)]
    Data and Information Management Preventive
    Implement measures to enable personnel assigned to human oversight to decide to refrain from using the artificial intelligence system or to disregard, override, or reverse the output. CC ID 15079
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system; Article 14 4.(d)]
    Behavior Preventive
    Enable users to interpret the artificial intelligence system's output and use it appropriately. CC ID 15002
    [{is transparent} High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3. Article 13 1.]
    Business Processes Preventive
    Develop artificial intelligence systems involving the training of models with data sets that meet the quality criteria. CC ID 14996
    [{training data} {validation data} {testing data} High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used. Article 10 1.]
    Systems Design, Build, and Implementation Preventive
    Withdraw the technical documentation assessment certificate when the artificial intelligence system is not in compliance with requirements. CC ID 15099
    [Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. Article 44 3. ¶ 1]
    Establish/Maintain Documentation Preventive
    Reassess the designation of artificial intelligence systems. CC ID 17230
    [Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision. Article 52 5.]
    Process or Activity Preventive
    Define a high-risk artificial intelligence system. CC ID 14959
    [{high-risk artificial intelligence system} Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; Article 6 1.(a)
    {high-risk artificial intelligence system} Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I. Article 6 1.(b)
    {high-risk artificial intelligence system} By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. Article 6 3. ¶ 1
    {not be considered a high-risk artificial intelligence system} {assigned task} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to perform a narrow procedural task; Article 6 3. ¶ 2 (a)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to improve the result of a previously completed human activity; Article 6 3. ¶ 2 (b)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or Article 6 3. ¶ 2 (c)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III. Article 6 3. ¶ 2 (d)
    {high-risk artificial intelligence system} Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons. Article 6 3. ¶ 3]
    Establish/Maintain Documentation Preventive
    Take into account the consequences for the rights and freedoms of persons when using ‘real-time’ remote biometric identification systems for law enforcement. CC ID 14957
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. Article 5 2.(b)]
    Process or Activity Preventive
    Allow the use of 'real-time' remote biometric identification systems for law enforcement under defined conditions. CC ID 14955
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: Article 5 2.]
    Process or Activity Preventive
    Document the use of remote biometric identification systems. CC ID 17215
    [{post-remote biometric identification system} Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities. Article 26 10. ¶ 5]
    Business Processes Preventive
    Notify interested personnel and affected parties of the use of remote biometric identification systems. CC ID 17216
    [{post-remote biometric identification system} Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities. Article 26 10. ¶ 5]
    Communicate Preventive
    Refrain from using remote biometric identification systems under defined conditions. CC ID 14953
    [The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: Article 5 1.(h)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; Article 5 1.(h)(i)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; Article 5 1.(h)(ii)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. Article 5 1.(h)(iii)
    For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted. Article 26 10. ¶ 2
    {not be used} {not be taken} {adverse effect} In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems. Article 26 10. ¶ 3]
    Process or Activity Preventive
    Prohibit the use of artificial intelligence systems under defined conditions. CC ID 14951
    [The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm; Article 5 1.(a)
    The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm; Article 5 1.(b)
    The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: Article 5 1.(c)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity; Article 5 1.(d)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; Article 5 1.(e)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons; Article 5 1.(f)
    {religious beliefs} The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement; Article 5 1.(g)
    In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Process or Activity Preventive
    Establish, implement, and maintain a declaration of conformity. CC ID 15038
    [Providers of high-risk AI systems shall: draw up an EU declaration of conformity in accordance with Article 47; Article 16 ¶ 1 (g)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates. Article 47 3.
    {keep up to date} By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate. Article 47 4.]
    Establish/Maintain Documentation Preventive
    Include compliance requirements in the declaration of conformity. CC ID 15105
    [Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates. Article 47 3.]
    Establish/Maintain Documentation Preventive
    Translate the declaration of conformity into an official language. CC ID 15103
    [The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.
    Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned. Article 21 1.]
    Establish/Maintain Documentation Preventive
    Disseminate and communicate the declaration of conformity to interested personnel and affected parties. CC ID 15102
    [Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system shall provide that authority with all the information and documentation regarding their actions pursuant to paragraphs 1 to 4 necessary to demonstrate the conformity of that system with the requirements set out in Section 2. Article 24 5.
    Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities. Article 23 6.
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned. Article 21 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider; Article 22 3.(c)]
    Communicate Preventive
    Include all required information in the declaration of conformity. CC ID 15101
    [The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.]
    Establish/Maintain Documentation Preventive
    Include a statement that the artificial intelligence system meets all requirements in the declaration of conformity. CC ID 15100
    [Providers of high-risk AI systems shall: upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2; Article 16 ¶ 1 (k)
    The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.]
    Establish/Maintain Documentation Preventive
    Identify the artificial intelligence system in the declaration of conformity. CC ID 15098
    [The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.]
    Establish/Maintain Documentation Preventive
  • Physical and environmental protection
    Establish, implement, and maintain a facility physical security program. CC ID 00711
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. Article 55 1.(d)]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain opening procedures for businesses. CC ID 16671 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain closing procedures for businesses. CC ID 16670 Establish/Maintain Documentation Preventive
    Establish and maintain a contract for accessing facilities that transmit, process, or store restricted data. CC ID 12050 Establish/Maintain Documentation Preventive
    Refrain from providing access to facilities that transmit, process, or store restricted data until the contract for accessing the facility is signed. CC ID 12311 Behavior Preventive
    Include identification cards or badges in the physical security program. CC ID 14818 Establish/Maintain Documentation Preventive
    Implement audio protection controls on telephone systems in controlled areas. CC ID 16455 Technical Security Preventive
    Establish, implement, and maintain security procedures for virtual meetings. CC ID 15581 Establish/Maintain Documentation Preventive
    Create security zones in facilities, as necessary. CC ID 16295 Physical and Environmental Protection Preventive
    Establish, implement, and maintain floor plans. CC ID 16419 Establish/Maintain Documentation Preventive
    Include network infrastructure and cabling infrastructure on the floor plan. CC ID 16420 Establish/Maintain Documentation Preventive
    Post floor plans of critical facilities in secure locations. CC ID 16138 Communicate Preventive
    Detect anomalies in physical barriers. CC ID 13533 Investigate Detective
    Configure the access control system to grant access only during authorized working hours. CC ID 12325 Physical and Environmental Protection Preventive
    Log the individual's address in the facility access list. CC ID 16921 Log Management Preventive
    Log the contact information for the person authorizing access in the facility access list. CC ID 16920 Log Management Preventive
    Log the organization's name in the facility access list. CC ID 16919 Log Management Preventive
    Log the individual's name in the facility access list. CC ID 16918 Log Management Preventive
    Log the purpose in the facility access list. CC ID 16982 Log Management Preventive
    Log the level of access in the facility access list. CC ID 16975 Log Management Preventive
    Disallow opting out of wearing or displaying identification cards or badges. CC ID 13030 Human Resources Management Preventive
    Implement physical identification processes. CC ID 13715 Process or Activity Preventive
    Refrain from using knowledge-based authentication for in-person identity verification. CC ID 13717 Process or Activity Preventive
    Issue photo identification badges to all employees. CC ID 12326 Physical and Environmental Protection Preventive
    Establish, implement, and maintain lost or damaged identification card procedures, as necessary. CC ID 14819 Establish/Maintain Documentation Preventive
    Document all lost badges in a lost badge list. CC ID 12448 Establish/Maintain Documentation Corrective
    Report lost badges, stolen badges, and broken badges to the Security Manager. CC ID 12334 Physical and Environmental Protection Preventive
    Direct each employee to be responsible for their identification card or badge. CC ID 12332 Human Resources Management Preventive
    Include name, date of entry, and validity period on disposable identification cards or badges. CC ID 12331 Physical and Environmental Protection Preventive
    Include error handling controls in identification issuance procedures. CC ID 13709 Establish/Maintain Documentation Preventive
    Include an appeal process in the identification issuance procedures. CC ID 15428 Business Processes Preventive
    Include information security in the identification issuance procedures. CC ID 15425 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain post-issuance update procedures for identification cards or badges. CC ID 15426 Establish/Maintain Documentation Preventive
    Enforce dual control for badge assignments. CC ID 12328 Physical and Environmental Protection Preventive
    Enforce dual control for accessing unassigned identification cards or badges. CC ID 12327 Physical and Environmental Protection Preventive
    Refrain from imprinting the company name or company logo on identification cards or badges. CC ID 12282 Physical and Environmental Protection Preventive
    Assign employees the responsibility for controlling their identification badges. CC ID 12333 Human Resources Management Preventive
    Charge a fee for replacement of identification cards or badges, as necessary. CC ID 17001 Business Processes Preventive
    Establish, implement, and maintain a door security standard. CC ID 06686 Establish/Maintain Documentation Preventive
    Restrict physical access mechanisms to authorized parties. CC ID 16924 Process or Activity Preventive
    Establish, implement, and maintain a window security standard. CC ID 06689 Establish/Maintain Documentation Preventive
    Use vandal resistant light fixtures for all security lighting. CC ID 16130 Physical and Environmental Protection Preventive
    Establish, implement, and maintain a camera operating policy. CC ID 15456 Establish/Maintain Documentation Preventive
    Disseminate and communicate the camera installation policy to interested personnel and affected parties. CC ID 15461 Communicate Preventive
    Record the purpose of the visit in the visitor log. CC ID 16917 Log Management Preventive
    Record the date and time of entry in the visitor log. CC ID 13255 Establish/Maintain Documentation Preventive
    Record the date and time of departure in the visitor log. CC ID 16897 Log Management Preventive
    Record the type of identification used in the visitor log. CC ID 16916 Log Management Preventive
    Report anomalies in the visitor log to appropriate personnel. CC ID 14755 Investigate Detective
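The visitor log controls above (CC IDs 16917, 13255, 16897, 16916) and the anomaly-reporting control (CC ID 14755) can be sketched as a record plus a simple consistency check. This is an illustrative sketch only; the field names and the particular anomaly rules (missing departure, departure before entry) are assumptions about what "anomalies" might mean here.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class VisitorLogEntry:
    purpose: str                        # CC ID 16917
    entry_time: datetime                # CC ID 13255
    departure_time: Optional[datetime]  # CC ID 16897
    identification_type: str            # CC ID 16916

def find_anomalies(log: List[VisitorLogEntry]) -> List[VisitorLogEntry]:
    """Flag entries with no recorded departure, or a departure before entry (CC ID 14755)."""
    return [e for e in log
            if e.departure_time is None or e.departure_time < e.entry_time]

# Example log (hypothetical data): the second entry never signed out.
log = [
    VisitorLogEntry("Vendor meeting", datetime(2024, 5, 1, 9, 0),
                    datetime(2024, 5, 1, 10, 30), "Passport"),
    VisitorLogEntry("Audit", datetime(2024, 5, 1, 11, 0), None, "Driving licence"),
]
anomalies = find_anomalies(log)
```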
    Log when the cabinet is accessed. CC ID 11674 Log Management Detective
    Log the exit of a staff member to a facility or designated rooms within the facility. CC ID 11675 Monitor and Evaluate Occurrences Preventive
    Include the requestor's name in the physical access log. CC ID 16922 Log Management Preventive
    Physically segregate business areas in accordance with organizational standards. CC ID 16718 Physical and Environmental Protection Preventive
  • Privacy protection for information and data
    Privacy protection for information and data CC ID 00008 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain a privacy framework that protects restricted data. CC ID 11850 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a personal data choice and consent program. CC ID 12569
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Establish/Maintain Documentation Preventive
    Provide a copy of the data subject's consent to the data subject. CC ID 17234
    [The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative. Article 61 2.]
    Communicate Preventive
    Date the data subject's consent. CC ID 17233
    [The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative. Article 61 2.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain data request procedures. CC ID 16546 Establish/Maintain Documentation Preventive
    Refrain from discriminating against data subjects who have exercised privacy rights. CC ID 13435 Human Resources Management Preventive
    Refrain from charging a fee to implement an opt-out request. CC ID 13877 Business Processes Preventive
    Establish and maintain disclosure authorization forms for authorization of consent to use personal data. CC ID 13433 Establish/Maintain Documentation Preventive
    Include procedures for revoking authorization of consent to use personal data in the disclosure authorization form. CC ID 13438 Establish/Maintain Documentation Preventive
    Include the identity of the person seeking consent in the disclosure authorization. CC ID 13999 Establish/Maintain Documentation Preventive
    Include the recipients of the disclosed personal data in the disclosure authorization form. CC ID 13440 Establish/Maintain Documentation Preventive
    Include the signature of the data subject and the signing date in the disclosure authorization form. CC ID 13439 Establish/Maintain Documentation Preventive
    Include the identity of the data subject in the disclosure authorization form. CC ID 13436 Establish/Maintain Documentation Preventive
    Include the types of personal data to be disclosed in the disclosure authorization form. CC ID 13442 Establish/Maintain Documentation Preventive
    Include how personal data will be used in the disclosure authorization form. CC ID 13441 Establish/Maintain Documentation Preventive
    Include agreement termination information in the disclosure authorization form. CC ID 13437 Establish/Maintain Documentation Preventive
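The "include … in the disclosure authorization form" controls above (CC IDs 13436–13442, 13999) enumerate the mandated contents of the form. The sketch below gathers them into one structure with a completeness check; the field names are illustrative assumptions, and the check is one possible way to enforce that every mandated element is populated.

```python
from dataclasses import dataclass, fields
from datetime import date
from typing import List

@dataclass
class DisclosureAuthorizationForm:
    data_subject: str          # CC ID 13436
    consent_seeker: str        # CC ID 13999 (identity of the person seeking consent)
    recipients: List[str]      # CC ID 13440
    data_types: List[str]      # CC ID 13442
    intended_use: str          # CC ID 13441
    revocation_procedure: str  # CC ID 13438
    termination_info: str      # CC ID 13437
    signature: str             # CC ID 13439
    signing_date: date         # CC ID 13439

def is_complete(form: DisclosureAuthorizationForm) -> bool:
    """A form is usable only when every mandated field is non-empty."""
    return all(bool(getattr(form, f.name)) for f in fields(form))

# Hypothetical example
form = DisclosureAuthorizationForm(
    data_subject="A. Jones",
    consent_seeker="Example Clinic",
    recipients=["Example Insurer"],
    data_types=["contact details"],
    intended_use="Claims processing",
    revocation_procedure="Written notice to the privacy office",
    termination_info="Expires one year after signing",
    signature="A. Jones",
    signing_date=date(2024, 10, 1),
)
```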
    Offer incentives for consumers to opt-in to provide their personal data to the organization. CC ID 13781 Business Processes Preventive
    Refrain from using coercive financial incentive programs to entice opt-in consent. CC ID 13795 Business Processes Preventive
    Allow data subjects to opt out and refrain from granting an authorization of consent to use personal data. CC ID 00391 Data and Information Management Preventive
    Treat an opt-out direction by an individual joint consumer as applying to all associated joint consumers. CC ID 13452 Business Processes Preventive
    Treat opt-out directions separately for each customer relationship the data subject establishes with the organization. CC ID 13454 Business Processes Preventive
    Establish, implement, and maintain an opt-out method in accordance with organizational standards. CC ID 16526 Data and Information Management Preventive
    Establish, implement, and maintain a notification system for opt-out requests. CC ID 16880 Technical Security Preventive
    Comply with opt-out directions by the data subject, unless otherwise directed by compliance requirements. CC ID 13451 Business Processes Preventive
    Confirm the individual's identity before granting an opt-out request. CC ID 16813 Process or Activity Preventive
    Highlight the section regarding data subject's consent from other sections in contracts and agreements. CC ID 13988 Establish/Maintain Documentation Preventive
    Allow consent requests to be provided in any official languages. CC ID 16530 Business Processes Preventive
    Notify interested personnel and affected parties of the reasons the opt-out request was refused. CC ID 16537 Communicate Preventive
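Two of the opt-out controls above interact in a way a small sketch can make concrete: an opt-out by one joint consumer applies to all associated joint consumers (CC ID 13452), while opt-out directions are tracked separately per customer relationship (CC ID 13454). One way to satisfy both, assuming a simple relationship/account model, is to key the direction by (relationship, account) rather than by individual:

```python
class OptOutRegistry:
    """Tracks opt-out directions per (customer relationship, account).

    Keying by account rather than by individual means one joint consumer's
    direction covers all consumers on that account (CC ID 13452); keying by
    relationship keeps directions separate per relationship (CC ID 13454).
    """
    def __init__(self):
        self._opted_out = set()  # {(relationship, account_id)}

    def record_opt_out(self, relationship: str, account_id: str) -> None:
        self._opted_out.add((relationship, account_id))

    def is_opted_out(self, relationship: str, account_id: str) -> bool:
        return (relationship, account_id) in self._opted_out

# Hypothetical example: one joint holder of credit-card account "A1" opts out.
registry = OptOutRegistry()
registry.record_opt_out("credit_card", "A1")
```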
    Collect and retain disclosure authorizations for each data subject. CC ID 13434 Records Management Preventive
    Refrain from requiring consent to collect, use, or disclose personal data beyond specified, legitimate reasons in order to receive products and services. CC ID 13605 Data and Information Management Preventive
    Refrain from obtaining consent through deception. CC ID 13556 Data and Information Management Preventive
    Give individuals the ability to change the uses of their personal data. CC ID 00469 Data and Information Management Preventive
    Notify data subjects of the implications of withdrawing consent. CC ID 13551 Data and Information Management Preventive
    Establish, implement, and maintain a personal data accountability program. CC ID 13432 Establish/Maintain Documentation Preventive
    Require data controllers to be accountable for their actions. CC ID 00470 Establish Roles Preventive
    Notify the supervisory authority. CC ID 00472
    [Notified bodies shall inform the notifying authority of the following: any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII; Article 45 1.(b)
    Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data. Article 5 4.
    Providers of high-risk AI systems shall: take the necessary corrective actions and provide information as required in Article 20; Article 16 ¶ 1 (j)
    Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect. Article 24 2.
    Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect. Article 23 2.
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk. Article 52 1.
    Providers or prospective providers shall notify the national market surveillance authority in the Member State where the testing in real world conditions is to be conducted of the suspension or termination of the testing in real world conditions and of the final outcomes. Article 60 8.]
    Behavior Preventive
    Establish, implement, and maintain approval applications. CC ID 16778 Establish/Maintain Documentation Preventive
    Define the requirements for approving or denying approval applications. CC ID 16780 Business Processes Preventive
    Submit approval applications to the supervisory authority. CC ID 16627 Communicate Preventive
    Include required information in the approval application. CC ID 16628 Establish/Maintain Documentation Preventive
    Extend the time limit for approving or denying approval applications. CC ID 16779 Business Processes Preventive
    Approve the approval application unless applicant has been convicted. CC ID 16603 Process or Activity Preventive
    Provide the supervisory authority with any information requested by the supervisory authority. CC ID 12606
    [Notified bodies shall inform the notifying authority of the following: any circumstances affecting the scope of or conditions for notification; Article 45 1.(c)
    Notified bodies shall inform the notifying authority of the following: any request for information which they have received from market surveillance authorities regarding conformity assessment activities; Article 45 1.(d)
    Notified bodies shall inform the notifying authority of the following: on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting. Article 45 1.(e)
    Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data. Article 5 4.
    Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk. Article 52 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide the AI Office, upon a reasoned request, with all the information and documentation, including that referred to in point (b), necessary to demonstrate compliance with the obligations in this Chapter; Article 54 3.(c)
    The provider of the general-purpose AI model concerned, or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading. Article 91 5.
    The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned. Article 92 5.]
    Process or Activity Preventive
    Notify the supervisory authority of the safeguards employed to protect the data subject's rights. CC ID 12605 Communicate Preventive
    Respond to questions about submissions in a timely manner. CC ID 16930 Communicate Preventive
    Include any reasons for delay if notifying the supervisory authority after the time limit. CC ID 12675 Communicate Corrective
    Establish, implement, and maintain a personal data use limitation program. CC ID 13428 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain a personal data use purpose specification. CC ID 00093 Establish/Maintain Documentation Preventive
    Dispose of media and restricted data in a timely manner. CC ID 00125
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first; Article 10 5.(e)
    If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted. Article 26 10. ¶ 2]
    Data and Information Management Preventive
    Refrain from destroying records being inspected or reviewed. CC ID 13015 Records Management Preventive
    Notify the data subject after their personal data is disposed, as necessary. CC ID 13502 Communicate Preventive
    Establish, implement, and maintain restricted data use limitation procedures. CC ID 00128 Establish/Maintain Documentation Preventive
    Establish and maintain a record of processing activities when processing restricted data. CC ID 12636
    [{was necessary} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data. Article 10 5.(f)]
    Establish/Maintain Documentation Preventive
    Refrain from maintaining a record of processing activities if the data processor employs a limited number of persons. CC ID 13378 Establish/Maintain Documentation Preventive
    Refrain from maintaining a record of processing activities if the personal data relates to criminal records. CC ID 13377 Establish/Maintain Documentation Preventive
    Refrain from maintaining a record of processing activities if the data being processed is restricted data. CC ID 13376 Establish/Maintain Documentation Preventive
    Refrain from maintaining a record of processing activities if it could result in a risk to the data subject's rights or data subject's freedom. CC ID 13375 Establish/Maintain Documentation Preventive
    Include the data protection officer's contact information in the record of processing activities. CC ID 12640 Records Management Preventive
    Include the data processor's contact information in the record of processing activities. CC ID 12657 Records Management Preventive
    Include the data processor's representative's contact information in the record of processing activities. CC ID 12658 Records Management Preventive
    Include a general description of the implemented security measures in the record of processing activities. CC ID 12641 Records Management Preventive
    Include a description of the data subject categories in the record of processing activities. CC ID 12659 Records Management Preventive
    Include the purpose of processing restricted data in the record of processing activities. CC ID 12663
    [{was necessary} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data. Article 10 5.(f)]
    Records Management Preventive
    Include the personal data processing categories in the record of processing activities. CC ID 12661 Records Management Preventive
    Include the time limits for erasing each data category in the record of processing activities. CC ID 12690 Records Management Preventive
    Include the data recipient categories to whom restricted data has been or will be disclosed in the record of processing activities. CC ID 12664 Records Management Preventive
    Include a description of the personal data categories in the record of processing activities. CC ID 12660 Records Management Preventive
    Include the joint data controller's contact information in the record of processing activities. CC ID 12639 Records Management Preventive
    Include the data controller's representative's contact information in the record of processing activities. CC ID 12638 Records Management Preventive
    Include documentation of the transferee's safeguards for transferring restricted data in the record of processing activities. CC ID 12643 Records Management Preventive
    Include the identification of transferees for transferring restricted data in the record of processing activities. CC ID 12642 Records Management Preventive
    Include the data controller's contact information in the record of processing activities. CC ID 12637 Records Management Preventive
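Taken together, the "include … in the record of processing activities" controls above (CC IDs 12637–12643, 12657–12664, 12690) amount to a record schema. The sketch below is a minimal illustration under assumed field names, with example values loosely echoing the Article 10 5.(f) bias-correction citation; none of these identifiers come from the regulation or the catalog.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProcessingRecord:
    controller_contact: str              # CC ID 12637
    dpo_contact: str                     # CC ID 12640
    purpose: str                         # CC ID 12663
    data_subject_categories: List[str]   # CC ID 12659
    personal_data_categories: List[str]  # CC ID 12660
    recipient_categories: List[str]      # CC ID 12664
    erasure_time_limits: Dict[str, str]  # CC ID 12690, keyed per data category
    security_measures: str               # CC ID 12641
    transferees: List[str] = field(default_factory=list)            # CC ID 12642
    transferee_safeguards: List[str] = field(default_factory=list)  # CC ID 12643

# Hypothetical example record
record = ProcessingRecord(
    controller_contact="privacy@example.test",
    dpo_contact="dpo@example.test",
    purpose="Bias detection and correction in a high-risk AI system (Article 10 5.)",
    data_subject_categories=["model test subjects"],
    personal_data_categories=["special categories of personal data"],
    recipient_categories=["internal fairness auditors"],
    erasure_time_limits={
        "special categories of personal data":
            "on bias correction or end of retention period, whichever comes first",
    },
    security_measures="pseudonymisation; access restricted to authorised persons",
)
```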
    Process restricted data lawfully and carefully. CC ID 00086
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data; Article 10 5.(a)
    To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation; Article 10 5.(b)]
    Establish Roles Preventive
    Analyze requirements for processing personal data in contracts. CC ID 12550 Investigate Detective
    Implement technical controls that limit processing restricted data for specific purposes. CC ID 12646 Technical Security Preventive
    Process personal data pertaining to a patient's health in order to treat those patients. CC ID 00200 Data and Information Management Preventive
    Notify the subject of care when a lack of availability of health information systems might have adversely affected their care. CC ID 13990 Communicate Corrective
    Refrain from disclosing Individually Identifiable Health Information when in violation of territorial or federal law. CC ID 11966 Records Management Preventive
    Document the conditions for the use or disclosure of Individually Identifiable Health Information by a covered entity to another covered entity. CC ID 00210 Establish/Maintain Documentation Preventive
    Disclose Individually Identifiable Health Information for a covered entity's own use. CC ID 00211 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for a healthcare provider's treatment activities by a covered entity. CC ID 00212 Data and Information Management Preventive
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is permitted with the consent of the data subject. CC ID 11970 Records Management Preventive
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is to support the treatment of the individual. CC ID 11969 Process or Activity Preventive
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is permitted by law. CC ID 11976 Records Management Preventive
    Disclose Individually Identifiable Health Information for payment activities between covered entities or healthcare providers. CC ID 00213 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for Treatment, Payment, and Health Care Operations activities when both covered entities have a relationship with the data subject. CC ID 00214 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for Treatment, Payment, and Health Care Operations activities between a covered entity and a participating healthcare provider when the information is collected from the data subject and a third party. CC ID 00215 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information in accordance with agreed upon restrictions. CC ID 06249 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information in accordance with the privacy notice. CC ID 06250 Data and Information Management Preventive
    Disclose permitted Individually Identifiable Health Information for facility directories. CC ID 06251 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for cadaveric organ donation purposes, eye donation purposes, or tissue donation purposes. CC ID 06252 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for medical suitability determinations. CC ID 06253 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for armed forces personnel appropriately. CC ID 06254 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information in order to provide public benefits by government agencies. CC ID 06255 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for fundraising. CC ID 06256 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information for research use when the appropriate requirements are included in the approval documentation or waiver documentation. CC ID 06257 Establish/Maintain Documentation Preventive
    Document the conditions for the disclosure of Individually Identifiable Health Information by an organization providing healthcare services to organizations other than business associates or other covered entities. CC ID 00201 Establish/Maintain Documentation Preventive
    Disclose Individually Identifiable Health Information when the data subject cannot physically or legally provide consent and the disclosing organization is a healthcare provider. CC ID 00202 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information to provide appropriate treatment to the data subject when the disclosing organization is a healthcare provider. CC ID 00203 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information when it is not contrary to the data subject's wish prior to becoming unable to provide consent and the disclosing organization is a healthcare provider. CC ID 00204 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information that is reasonable or necessary for the disclosure purpose when the disclosing organization is a healthcare provider. CC ID 00205 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information consistent with the law when the disclosing organization is a healthcare provider. CC ID 00206 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information in order to carry out treatment when the disclosing organization is a healthcare provider. CC ID 00207 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information in order to carry out treatment when the data subject has provided consent and the disclosing organization is a healthcare provider. CC ID 00208 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information in order to carry out treatment when the data subject's guardian or representative has provided consent and the disclosing organization is a healthcare provider. CC ID 00209 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information when the disclosing organization is a healthcare provider that supports public health and safety activities. CC ID 06248 Data and Information Management Preventive
    Disclose Individually Identifiable Health Information in order to report abuse or neglect when the disclosing organization is a healthcare provider. CC ID 06819 Data and Information Management Preventive
    Refrain from disclosing Individually Identifiable Health Information related to reproductive health care, as necessary. CC ID 17250 Business Processes Preventive
    Document how Individually Identifiable Health Information is used and disclosed when authorization has been granted. CC ID 00216 Establish/Maintain Documentation Preventive
    Define and implement valid authorization control requirements. CC ID 06258 Establish/Maintain Documentation Preventive
    Obtain explicit consent for authorization to release Individually Identifiable Health Information. CC ID 00217 Data and Information Management Preventive
    Obtain explicit consent for authorization to release psychotherapy notes. CC ID 00218 Data and Information Management Preventive
    Cease the use or disclosure of Individually Identifiable Health Information under predetermined conditions. CC ID 17251 Business Processes Preventive
    Refrain from using Individually Identifiable Health Information related to reproductive health care, as necessary. CC ID 17256 Business Processes Preventive
    Refrain from using Individually Identifiable Health Information to determine eligibility or continued eligibility for credit. CC ID 00219 Data and Information Management Preventive
    Process personal data after the data subject has granted explicit consent. CC ID 00180 Data and Information Management Preventive
    Process personal data in order to perform a legal obligation or exercise a legal right. CC ID 00182 Data and Information Management Preventive
    Process personal data relating to criminal offenses when required by law. CC ID 00237 Data and Information Management Preventive
    Process personal data in order to prevent personal injury or damage to the data subject's health. CC ID 00183 Data and Information Management Preventive
    Process personal data in order to prevent personal injury or damage to a third party's health. CC ID 00184 Data and Information Management Preventive
    Process personal data for statistical purposes or scientific purposes. CC ID 00256 Data and Information Management Preventive
    Process personal data during legitimate activities with safeguards for the data subject's legal rights. CC ID 00185 Data and Information Management Preventive
    Process traffic data in a controlled manner. CC ID 00130 Data and Information Management Preventive
    Process personal data for health insurance, social insurance, state social benefits, social welfare, or child protection. CC ID 00186 Data and Information Management Preventive
    Process personal data when it is publicly accessible. CC ID 00187 Data and Information Management Preventive
    Process personal data for direct marketing and other personalized mail programs. CC ID 00188 Data and Information Management Preventive
    Refrain from processing personal data for marketing or advertising to children. CC ID 14010 Business Processes Preventive
    Refrain from disseminating and communicating with individuals that have opted out of direct marketing communications. CC ID 13708 Communicate Corrective
    Process personal data for the purposes of employment. CC ID 16527 Data and Information Management Preventive
    Process personal data for justice administration, lawsuits, judicial decisions, and investigations. CC ID 00189 Data and Information Management Preventive
    Process personal data for debt collection or benefit payments. CC ID 00190 Data and Information Management Preventive
    Process personal data in order to advance the public interest. CC ID 00191 Data and Information Management Preventive
    Process personal data for surveys, archives, or scientific research. CC ID 00192 Data and Information Management Preventive
    Process personal data absent consent for journalistic purposes, artistic purposes, or literary purposes. CC ID 00193 Data and Information Management Preventive
    Process personal data for academic purposes or religious purposes. CC ID 00194 Data and Information Management Preventive
    Process personal data when it is used by a public authority for National Security policy or criminal policy. CC ID 00195 Data and Information Management Preventive
    Refrain from storing data in newly created files or registers which directly or indirectly reveal the restricted data. CC ID 00196 Data and Information Management Preventive
    Follow legal obligations while processing personal data. CC ID 04794
    [{applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Data and Information Management Preventive
    Start personal data processing only after the needed notifications are submitted. CC ID 04791 Data and Information Management Preventive
    Limit the redisclosure and reuse of restricted data. CC ID 00168
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation; Article 10 5.(b)]
    Data and Information Management Preventive
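Article 10 5.(b) quoted above names pseudonymisation among the required privacy-preserving measures. A minimal illustrative sketch in Python follows; the keyed HMAC approach, the key handling, and the field names are assumptions for illustration, not drawn from the Regulation (in practice the key would live in a key management service, not in source code):

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym (HMAC-SHA-256).

    The mapping is deterministic for a given key, so records can still be
    linked for bias detection and correction, but the original identifier
    cannot be recovered without the key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical example record; "subject_id" is a stand-in field name.
key = b"example-key-held-by-the-provider"
record = {"subject_id": "patient-4711", "ethnicity": "X", "label": 1}
record["subject_id"] = pseudonymise(record["subject_id"], key)
```

The deterministic mapping is the design choice that keeps bias analysis possible across records while removing the direct identifier from the data set itself.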
    Refrain from redisclosing or reusing restricted data. CC ID 00169 Data and Information Management Preventive
    Document the redisclosing restricted data exceptions. CC ID 00170 Establish/Maintain Documentation Preventive
    Redisclose restricted data when the data subject consents. CC ID 00171 Data and Information Management Preventive
    Redisclose restricted data when it is for criminal law enforcement. CC ID 00172 Data and Information Management Preventive
    Redisclose restricted data in order to protect public revenue. CC ID 00173 Data and Information Management Preventive
    Redisclose restricted data in order to assist a Telecommunications Ombudsman. CC ID 00174 Data and Information Management Preventive
    Redisclose restricted data in order to prevent a life-threatening emergency. CC ID 00175 Data and Information Management Preventive
    Redisclose restricted data when it deals with installing, maintaining, operating, or providing access to a Public Telecommunications Network or a telecommunication facility. CC ID 00176 Data and Information Management Preventive
    Redisclose restricted data in order to preserve human life at sea. CC ID 00177 Data and Information Management Preventive
    Establish, implement, and maintain a data handling program. CC ID 13427 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain data handling policies. CC ID 00353 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain data and information confidentiality policies. CC ID 00361 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain record structures to support information confidentiality. CC ID 00360
    [Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78. Article 45 4.]
    Data and Information Management Preventive
    Include passwords, Personal Identification Numbers, and card security codes in the personal data definition. CC ID 04699 Configuration Preventive
    Refrain from storing data elements containing payment card full magnetic stripe data. CC ID 04757 Testing Detective
    Store payment card data in secure chips, if possible. CC ID 13065 Configuration Preventive
    Refrain from storing data elements containing sensitive authentication data after authorization is approved. CC ID 04758 Configuration Preventive
    Render unrecoverable sensitive authentication data after authorization is approved. CC ID 11952 Technical Security Preventive
    Automate the disposition process for records that contain "do not store" data or "delete after transaction process" data. CC ID 06083 Data and Information Management Preventive
    Log the disclosure of personal data. CC ID 06628 Log Management Preventive
    Log the modification of personal data. CC ID 11844 Log Management Preventive
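The two logging controls above (CC ID 06628, CC ID 11844) amount to writing an append-only audit entry whenever personal data is disclosed or modified. A minimal sketch, with field names that are illustrative assumptions rather than anything mandated by the cited texts:

```python
import json
from datetime import datetime, timezone

def audit_event(action: str, record_id: str, actor: str, purpose: str) -> str:
    """Produce one append-only audit line for a disclosure or modification
    of personal data. Field names here are illustrative, not prescribed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,        # e.g. "disclose" or "modify"
        "record_id": record_id,  # pseudonymous reference to the record
        "actor": actor,          # who performed the action
        "purpose": purpose,      # e.g. "treatment", "payment"
    }
    return json.dumps(entry, sort_keys=True)
```

Writing one JSON line per event keeps the log machine-parseable for later review without coupling it to any particular log store.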
    Encrypt, truncate, or tokenize data fields, as necessary. CC ID 06850 Technical Security Preventive
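One concrete reading of the truncation option in CC ID 06850 is PCI-style truncation of a payment card number. This sketch is illustrative only; the first-six/last-four rendering is an assumption about the display policy, not a requirement stated in this document:

```python
def truncate_pan(pan: str) -> str:
    """Truncate a primary account number for display: keep the first six
    and last four digits and mask the rest (a common PCI DSS-style form)."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

# Example: truncate_pan("4111 1111 1111 1111") -> "411111******1111"
```

Truncation differs from encryption or tokenization in that the original value is not recoverable from the stored form at all, which is why it suits display contexts rather than processing contexts.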
    Implement security measures to protect personal data. CC ID 13606
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations; Article 10 5.(c)]
    Technical Security Preventive
    Establish, implement, and maintain a personal data transfer program. CC ID 00307
    [{have not transmitted} {have not transferred} {have not accessed} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties; Article 10 5.(d)]
    Establish/Maintain Documentation Preventive
    Obtain consent from an individual prior to transferring personal data. CC ID 06948 Data and Information Management Preventive
    Include procedures for transferring personal data from one data controller to another data controller in the personal data transfer program. CC ID 00351 Establish/Maintain Documentation Preventive
    Refrain from requiring independent recourse mechanisms when transferring personal data from one data controller to another data controller. CC ID 12528 Business Processes Preventive
    Notify data subjects when their personal data is transferred. CC ID 00352 Behavior Preventive
    Include procedures for transferring personal data to third parties in the personal data transfer program. CC ID 00333 Establish/Maintain Documentation Preventive
    Notify data subjects of the geographic locations of the third parties when transferring personal data to third parties. CC ID 14414 Communicate Preventive
    Provide an adequate data protection level by the transferee prior to transferring personal data to another country. CC ID 00314 Data and Information Management Preventive
    Refrain from restricting personal data transfers to member states of the European Union. CC ID 00312 Data and Information Management Preventive
    Prohibit personal data transfers when security is inadequate. CC ID 00345 Data and Information Management Preventive
    Meet the use of limitation exceptions in order to transfer personal data. CC ID 00346 Data and Information Management Preventive
    Refrain from transferring personal data onward past the first transfer. CC ID 00347 Data and Information Management Preventive
    Document transfer disagreements by the data subject in writing. CC ID 00348 Establish/Maintain Documentation Preventive
    Allow the data subject the right to object to the personal data transfer. CC ID 00349 Data and Information Management Preventive
    Authorize the transfer of restricted data in accordance with organizational standards. CC ID 16428 Records Management Preventive
    Follow the instructions of the data transferrer. CC ID 00334 Behavior Preventive
    Define the personal data transfer exceptions for transferring personal data to another country when adequate protection level standards are not met. CC ID 00315 Establish/Maintain Documentation Preventive
    Include publicly available information as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00316 Data and Information Management Preventive
    Include transfer agreements between data controllers and third parties when it is for the data subject's interest as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00317 Data and Information Management Preventive
    Include personal data for the health field and for treatment as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00318 Data and Information Management Preventive
    Include personal data for journalistic purposes or private purposes as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00319 Data and Information Management Preventive
    Include personal data for important public interest as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00320 Data and Information Management Preventive
    Include consent by the data subject as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00321 Data and Information Management Preventive
    Include personal data used for a contract as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00322 Data and Information Management Preventive
    Include personal data for protecting the data subject or the data subject's interests, such as saving his/her life or providing healthcare as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00323 Data and Information Management Preventive
    Include personal data that is necessary to fulfill international law obligations as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00324 Data and Information Management Preventive
    Include personal data used for legal investigations as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00325 Data and Information Management Preventive
    Include personal data that is authorized by a legislative act as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00326 Data and Information Management Preventive
    Require transferees to implement adequate data protection levels for the personal data. CC ID 00335 Data and Information Management Preventive
    Refrain from requiring a contract between the data controller and trusted third parties when personal information is transferred. CC ID 12527 Business Processes Preventive
    Define the personal data transfer exceptions for transferring personal data to another organization when adequate protection level standards are not met. CC ID 00336 Establish/Maintain Documentation Preventive
    Include personal data that is publicly available information as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00337 Data and Information Management Preventive
    Include personal data that is used for journalistic purposes or private purposes as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00338 Data and Information Management Preventive
    Include personal data that is used for important public interest as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00339 Data and Information Management Preventive
    Include consent by the data subject as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00340 Data and Information Management Preventive
    Include personal data that is used for a contract as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00341 Data and Information Management Preventive
    Include personal data that is used for protecting the data subject or the data subject's interests, such as providing healthcare or saving his/her life as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00342 Data and Information Management Preventive
    Include personal data that is used for a legal investigation as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00343 Data and Information Management Preventive
    Include personal data that is authorized by a legislative act as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00344 Data and Information Management Preventive
    Notify data subjects about organizational liability when transferring personal data to third parties. CC ID 12353 Communicate Preventive
    Notify the data subject of any personal data changes during the personal data transfer. CC ID 00350 Behavior Preventive
    Establish, implement, and maintain Internet interactivity data transfer procedures. CC ID 06949 Establish/Maintain Documentation Preventive
    Obtain consent prior to storing cookies on an individual's browser. CC ID 06950 Data and Information Management Preventive
    Obtain consent prior to downloading software to an individual's computer. CC ID 06951 Data and Information Management Preventive
    Refrain from installing software on an individual's computer unless acting in accordance with a court order. CC ID 14000 Process or Activity Preventive
    Remove or uninstall software from an individual's computer, as necessary. CC ID 13998 Process or Activity Preventive
    Remove or uninstall software from an individual's computer when consent is revoked. CC ID 13997 Process or Activity Preventive
    Obtain consent prior to tracking Internet traffic patterns or browsing history of an individual. CC ID 06961 Data and Information Management Preventive
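The consent controls above (CC ID 06950, CC ID 06961) come down to gating any non-essential cookie or tracker on a recorded opt-in. A minimal sketch of such a gate; the category names, cookie-name prefix, and header format are hypothetical choices for illustration:

```python
def set_cookie_if_consented(consents: dict, cookie_name: str,
                            response_headers: list) -> bool:
    """Emit a Set-Cookie header only when the individual has opted in to
    that cookie's category. Essential cookies pass; everything else needs
    an explicit opt-in recorded in `consents`."""
    # Hypothetical naming convention: "_an" prefix marks analytics cookies.
    category = "analytics" if cookie_name.startswith("_an") else "essential"
    if category != "essential" and not consents.get(category, False):
        return False  # no consent on record: do not store the cookie
    response_headers.append(f"Set-Cookie: {cookie_name}=1; Secure; HttpOnly")
    return True
```

The key design point is that absence of a consent record defaults to refusal, so consent is obtained prior to storage rather than assumed.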
    Develop remedies and sanctions for privacy policy violations. CC ID 00474 Data and Information Management Preventive
    Define the organization's liability based on the applicable law. CC ID 00504
    [{be liable} {refrain from imposing} Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law. Article 57 12.
    The provider or prospective provider shall be liable under applicable Union and national liability law for any damage caused in the course of their testing in real world conditions. Article 60 9.]
    Establish/Maintain Documentation Preventive
    Define the sanctions and fines available for privacy rights violations based on applicable law. CC ID 00505
    [{be liable} {refrain from imposing} Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law. Article 57 12.
    The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99. Article 80 4.
    Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance authority establishes that the AI system was misclassified by the provider as non-high-risk in order to circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Article 99. Article 80 7.]
    Establish/Maintain Documentation Preventive
    Define the appeal process based on the applicable law. CC ID 00506
    [An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available. Article 44 3. ¶ 2]
    Establish/Maintain Documentation Preventive
    Define the fee structure for the appeal process. CC ID 16532 Process or Activity Preventive
    Define the time requirements for the appeal process. CC ID 16531 Process or Activity Preventive
    Disseminate and communicate instructions for the appeal process to interested personnel and affected parties. CC ID 16544 Communicate Preventive
    Disseminate and communicate a written explanation of the reasons for appeal decisions to interested personnel and affected parties. CC ID 16542 Communicate Preventive
  • Records management
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular    TYPE    CLASS
    Records management CC ID 00902 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain an information management program. CC ID 14315 Establish/Maintain Documentation Preventive
    Ensure data sets have the appropriate characteristics. CC ID 15000
    [{training data} {validation data} {testing data} {be representative} {be complete} {be error free} Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof. Article 10 3.]
    Data and Information Management Detective
    Ensure data sets are complete, are accurate, and are relevant. CC ID 14999
    [{training data} {validation data} {testing data} {be representative} {be complete} {be error free} Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof. Article 10 3.]
    Data and Information Management Detective
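Parts of the Article 10 3. expectations quoted above, completeness and freedom from errors, can be screened mechanically. This sketch flags missing values and duplicate records only; relevance and representativeness need domain statistics that code like this cannot supply, so it is a partial, illustrative check:

```python
def dataset_checks(rows: list) -> dict:
    """Minimal completeness/error screening for a training, validation or
    testing set: count rows with missing values and duplicate records.
    Illustrative only; it does not assess statistical representativeness."""
    missing = sum(1 for r in rows if any(v is None for v in r.values()))
    seen, dupes = set(), 0
    for r in rows:
        key = tuple(sorted(r.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"rows": len(rows), "with_missing": missing, "duplicates": dupes}
```

A detective control of this shape fits the "Detective" classification the catalog assigns: it surfaces data-set defects for correction rather than preventing them.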
    Archive appropriate records, logs, and database tables. CC ID 06321
    [Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data. Article 19 1.
    Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation kept under the relevant financial services law. Article 19 2.]
    Records Management Preventive
    Retain records in accordance with applicable requirements. CC ID 00968
    [The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the technical documentation referred to in Article 11; Article 18 1.(a)
    Providers of high-risk AI systems shall: keep the documentation referred to in Article 18; Article 16 ¶ 1 (d)
    Providers of high-risk AI systems shall: when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19; Article 16 ¶ 1 (e)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the documentation concerning the quality management system referred to in Article 17; Article 18 1.(b)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the documentation concerning the changes approved by notified bodies, where applicable; Article 18 1.(c)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the decisions and other documents issued by the notified bodies, where applicable; Article 18 1.(d)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the EU declaration of conformity referred to in Article 47. Article 18 1.(e)
    Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data. Article 19 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: keep at the disposal of the competent authorities and national authorities or bodies referred to in Article 74(10), for a period of 10 years after the high-risk AI system has been placed on the market or put into service, the contact details of the provider that appointed the authorised representative, a copy of the EU declaration of conformity referred to in Article 47, the technical documentation and, if applicable, the certificate issued by the notified body; Article 22 3.(b)
    Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU declaration of conformity referred to in Article 47. Article 23 5.
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and national competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, and the contact details of the provider that appointed the authorised representative; Article 54 3.(b)
    Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data. Article 26 6. ¶ 1]
    Records Management Preventive
    Capture and maintain logs as official records. CC ID 06319
    [Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law. Article 26 6. ¶ 2]
    Log Management Preventive
    Establish and maintain access controls for all records. CC ID 00371
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations; Article 10 5.(c)
    {training data} {validation data} {testing data} Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other relevant technical means and tools enabling remote access. Article 74 12.]
    Records Management Preventive
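Article 10(5)(c), quoted above, pairs "strict controls" with "documentation of the access": every access attempt to special categories of personal data should be both authorised and recorded. A minimal sketch of that pairing follows; the person identifiers, list structure, and function name are illustrative assumptions, not anything prescribed by the Regulation.

```python
from datetime import datetime, timezone

# Illustrative: an authorised-persons list (with confidentiality obligations)
# and an access log that documents every attempt, granted or not.
AUTHORISED = {"dpo-01", "ml-eng-07"}
ACCESS_LOG = []

def access_special_category_data(person: str) -> bool:
    """Check authorisation and document the attempt, per Article 10(5)(c)."""
    allowed = person in AUTHORISED
    ACCESS_LOG.append({
        "person": person,
        "time": datetime.now(timezone.utc).isoformat(),
        "granted": allowed,
    })
    return allowed

print(access_special_category_data("ml-eng-07"))  # True
print(access_special_category_data("intern-99"))  # False
print(len(ACCESS_LOG))                            # 2
```

Note that denied attempts are logged as well: the documentation requirement is about the access path itself, not only successful reads.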
  • System hardening through configuration management
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    System hardening through configuration management CC ID 00860 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain a Configuration Management program. CC ID 00867 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain configuration control and Configuration Status Accounting. CC ID 00863 Business Processes Preventive
    Establish, implement, and maintain appropriate system labeling. CC ID 01900
    [Providers of high-risk AI systems shall: indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, the address at which they can be contacted; Article 16 ¶ 1 (b)
    Providers of high-risk AI systems shall: affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48; Article 16 ¶ 1 (h)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use; Article 23 1.(c)
    Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable. Article 23 3.
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    {digital form} For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means. Article 48 2.
    The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate. Article 48 3.
    Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.
    Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI systems also fulfil the requirements of that other law. Article 48 5.]
    Establish/Maintain Documentation Preventive
    Include the identification number of the third party who performed the conformity assessment procedures on all promotional materials. CC ID 15041
    [Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.]
    Establish/Maintain Documentation Preventive
    Include the identification number of the third party who conducted the conformity assessment procedures after the CE marking of conformity. CC ID 15040
    [Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.]
    Establish/Maintain Documentation Preventive
    Establish, implement, and maintain system hardening procedures. CC ID 12001 Establish/Maintain Documentation Preventive
    Configure the system security parameters to prevent system misuse or information misappropriation. CC ID 00881
    [High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. Article 15 5. ¶ 1]
    Configuration Preventive
    Configure Hypertext Transfer Protocol headers in accordance with organizational standards. CC ID 16851 Configuration Preventive
    Configure Hypertext Transfer Protocol security headers in accordance with organizational standards. CC ID 16488 Configuration Preventive
    Configure "Enable Structured Exception Handling Overwrite Protection (SEHOP)" to organizational standards. CC ID 15385 Configuration Preventive
    Configure Microsoft Attack Surface Reduction rules in accordance with organizational standards. CC ID 16478 Configuration Preventive
    Configure "Remote host allows delegation of non-exportable credentials" to organizational standards. CC ID 15379 Configuration Preventive
    Configure "Configure enhanced anti-spoofing" to organizational standards. CC ID 15376 Configuration Preventive
    Configure "Block user from showing account details on sign-in" to organizational standards. CC ID 15374 Configuration Preventive
    Configure "Configure Attack Surface Reduction rules" to organizational standards. CC ID 15370 Configuration Preventive
    Configure "Turn on e-mail scanning" to organizational standards. CC ID 15361 Configuration Preventive
    Configure "Prevent users and apps from accessing dangerous websites" to organizational standards. CC ID 15359 Configuration Preventive
    Configure "Enumeration policy for external devices incompatible with Kernel DMA Protection" to organizational standards. CC ID 15352 Configuration Preventive
    Configure "Prevent Internet Explorer security prompt for Windows Installer scripts" to organizational standards. CC ID 15351 Configuration Preventive
    Store state information from applications and software separately. CC ID 14767 Configuration Preventive
    Configure the "aufs storage" to organizational standards. CC ID 14461 Configuration Preventive
    Configure the "AppArmor Profile" to organizational standards. CC ID 14496 Configuration Preventive
    Configure the "device" argument to organizational standards. CC ID 14536 Configuration Preventive
    Configure the "Docker" group ownership to organizational standards. CC ID 14495 Configuration Preventive
    Configure the "Docker" user ownership to organizational standards. CC ID 14505 Configuration Preventive
    Configure "Allow upload of User Activities" to organizational standards. CC ID 15338 Configuration Preventive
    Configure the "ulimit" to organizational standards. CC ID 14499 Configuration Preventive
    Configure the computer-wide, rather than per-user, use of Microsoft Spynet Reporting for Windows Defender properly. CC ID 05282 Configuration Preventive
    Configure the "Turn off Help Ratings" setting. CC ID 05285 Configuration Preventive
    Configure the "Decoy Admin Account Not Disabled" policy properly. CC ID 05286 Configuration Preventive
    Configure the "Anonymous access to the registry" policy properly. CC ID 05288 Configuration Preventive
    Configure the File System Checker and Popups setting. CC ID 05289 Configuration Preventive
    Configure the System File Checker setting. CC ID 05290 Configuration Preventive
    Configure the System File Checker Progress Meter setting. CC ID 05291 Configuration Preventive
    Configure the Protect Kernel object attributes properly. CC ID 05292 Configuration Preventive
    Verify crontab files are owned by an appropriate user or group. CC ID 05305 Configuration Preventive
    Restrict the exporting of files and directories, as necessary. CC ID 16315 Technical Security Preventive
    Verify the /etc/syslog.conf file is owned by an appropriate user or group. CC ID 05322 Configuration Preventive
    Verify the traceroute executable is owned by an appropriate user or group. CC ID 05323 Configuration Preventive
    Verify the /etc/passwd file is owned by an appropriate user or group. CC ID 05325 Configuration Preventive
    Configure the "Prohibit Access of the Windows Connect Now Wizards" setting. CC ID 05380 Configuration Preventive
    Configure the "Allow remote access to the PnP interface" setting. CC ID 05381 Configuration Preventive
    Configure the "Do not create system restore point when new device driver installed" setting. CC ID 05382 Configuration Preventive
    Configure the "Turn Off Access to All Windows Update Feature" setting. CC ID 05383 Configuration Preventive
    Configure the "Turn Off Automatic Root Certificates Update" setting. CC ID 05384 Configuration Preventive
    Configure the "Turn Off Event Views 'Events.asp' Links" setting. CC ID 05385 Configuration Preventive
    Configure the "Turn Off Internet File Association Service" setting. CC ID 05389 Configuration Preventive
    Configure the "Turn Off Registration if URL Connection is Referring to Microsoft.com" setting. CC ID 05390 Configuration Preventive
    Configure the "Turn off the 'Order Prints' Picture task" setting. CC ID 05391 Configuration Preventive
    Configure the "Turn Off Windows Movie Maker Online Web Links" setting. CC ID 05392 Configuration Preventive
    Configure the "Turn Off Windows Movie Maker Saving to Online Video Hosting Provider" setting. CC ID 05393 Configuration Preventive
    Configure the "Don't Display the Getting Started Welcome Screen at Logon" setting. CC ID 05394 Configuration Preventive
    Configure the "Turn off Windows Startup Sound" setting. CC ID 05395 Configuration Preventive
    Configure the "Prevent IIS Installation" setting. CC ID 05398 Configuration Preventive
    Configure the "Turn off Active Help" setting. CC ID 05399 Configuration Preventive
    Configure the "Turn off Untrusted Content" setting. CC ID 05400 Configuration Preventive
    Configure the "Turn off downloading of enclosures" setting. CC ID 05401 Configuration Preventive
    Configure "Allow indexing of encrypted files" to organizational standards. CC ID 05402 Configuration Preventive
    Configure the "Prevent indexing uncached Exchange folders" setting. CC ID 05403 Configuration Preventive
    Configure the "Turn off Windows Calendar" setting. CC ID 05404 Configuration Preventive
    Configure the "Turn off Windows Defender" setting. CC ID 05405 Configuration Preventive
    Configure the "Turn off the communication features" setting. CC ID 05410 Configuration Preventive
    Configure the "Turn off Windows Meeting Space" setting. CC ID 05413 Configuration Preventive
    Configure the "Turn on Windows Meeting Space auditing" setting. CC ID 05414 Configuration Preventive
    Configure the "Disable unpacking and installation of gadgets that are not digitally signed" setting. CC ID 05415 Configuration Preventive
    Configure the "Override the More Gadgets Link" setting. CC ID 05416 Configuration Preventive
    Configure the "Turn Off User Installed Windows Sidebar Gadgets" setting. CC ID 05417 Configuration Preventive
    Configure the "Turn off Downloading of Game Information" setting. CC ID 05419 Configuration Preventive
    Set the noexec_user_stack flag on the user stack properly. CC ID 05439 Configuration Preventive
    Configure the "restrict guest access to system log" policy, as appropriate. CC ID 06047 Configuration Preventive
    Configure the Trusted Platform Module (TPM) platform validation profile, as appropriate. CC ID 06056 Configuration Preventive
    Enable or disable the standby states, as appropriate. CC ID 06060 Configuration Preventive
    Configure the Trusted Platform Module startup options properly. CC ID 06061 Configuration Preventive
    Configure the "Obtain Software Package Updates with apt-get" setting to organizational standards. CC ID 11375 Configuration Preventive
    Configure the "display a banner before authentication" setting for "LightDM" to organizational standards. CC ID 11385 Configuration Preventive
    Configure Logging settings in accordance with organizational standards. CC ID 07611 Configuration Preventive
    Provide the reference database used to verify input data in the logging capability. CC ID 15018
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the reference database against which input data has been checked by the system; Article 12 3.(b)]
    Log Management Preventive
    Configure the log to capture the user's identification. CC ID 01334
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5). Article 12 3.(d)]
    Configuration Preventive
    Configure the log to capture a date and time stamp. CC ID 01336
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: recording of the period of each use of the system (start date and time and end date and time of each use); Article 12 3.(a)]
    Configuration Preventive
    Configure all logs to capture auditable events or actionable events. CC ID 06332 Configuration Preventive
    Configure the log to capture user queries and searches. CC ID 16479
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the input data for which the search has led to a match; Article 12 3.(c)]
    Log Management Preventive
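Taken together, the Article 12(3) citations above (points (a) through (d)) define a minimum log record for the Annex III point 1(a) systems: period of each use, reference database checked, matching input data, and the natural persons who verified the results. The sketch below is one possible shape for such a record; the field names are assumptions chosen to mirror the four points, since the Regulation prescribes content, not a schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
from typing import List

# Illustrative schema only; field names are assumptions.
@dataclass
class MinimumUseLog:
    start: datetime                 # (a) start date and time of each use
    end: datetime                   # (a) end date and time of each use
    reference_database: str         # (b) database input data was checked against
    matched_input_data: List[str]   # (c) input data that led to a match
    verifying_persons: List[str]    # (d) persons verifying results, Art. 14(5)

record = MinimumUseLog(
    start=datetime(2026, 1, 5, 9, 0),
    end=datetime(2026, 1, 5, 9, 12),
    reference_database="watchlist-v12",
    matched_input_data=["probe-4411"],
    verifying_persons=["officer-173"],
)
print(asdict(record)["reference_database"])  # watchlist-v12
```

Keeping the four points as mandatory fields of one record, rather than scattering them across separate log streams, makes the "at a minimum" test of Article 12(3) straightforward to audit per use.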
  • Systems design, build, and implementation
    Systems design, build, and implementation CC ID 00989 IT Impact Zone IT Impact Zone
    Initiate the System Development Life Cycle planning phase. CC ID 06266 Systems Design, Build, and Implementation Preventive
    Establish, implement, and maintain system design requirements. CC ID 06618 Establish/Maintain Documentation Preventive
    Design and develop built-in redundancies, as necessary. CC ID 13064
    [{be resilient} {technical measures} High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. Article 15 4. ¶ 1]
    Systems Design, Build, and Implementation Preventive
    Initiate the System Development Life Cycle development phase or System Development Life Cycle build phase. CC ID 06267 Systems Design, Build, and Implementation Preventive
    Establish, implement, and maintain human interface guidelines. CC ID 08662
    [Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence. Article 50 1.]
    Establish/Maintain Documentation Preventive
    Ensure users can navigate content. CC ID 15163 Configuration Preventive
    Create text content using language that is readable and is understandable. CC ID 15167 Configuration Preventive
    Ensure user interface components are operable. CC ID 15162 Configuration Preventive
    Implement mechanisms to review, confirm, and correct user submissions. CC ID 15160 Configuration Preventive
    Allow users to reverse submissions. CC ID 15168 Configuration Preventive
    Provide a mechanism to control audio. CC ID 15158 Configuration Preventive
    Allow modification of style properties without loss of content or functionality. CC ID 15156 Configuration Preventive
    Programmatically determine the name and role of user interface components. CC ID 15148 Configuration Preventive
    Programmatically determine the language of content. CC ID 15137 Configuration Preventive
    Provide a mechanism to dismiss content triggered by mouseover or keyboard focus. CC ID 15164 Configuration Preventive
    Configure repeated navigational mechanisms to occur in the same order unless overridden by the user. CC ID 15166 Configuration Preventive
    Refrain from activating a change of context when changing the setting of user interface components, as necessary. CC ID 15165 Configuration Preventive
    Provide users a mechanism to remap keyboard shortcuts. CC ID 15133 Configuration Preventive
    Identify the components in a set of web pages that consistently have the same functionality. CC ID 15116 Process or Activity Preventive
    Provide captions for live audio content. CC ID 15120 Configuration Preventive
    Programmatically determine the purpose of each data field that collects information from the user. CC ID 15114 Configuration Preventive
    Provide labels or instructions when content requires user input. CC ID 15077 Configuration Preventive
    Allow users to control auto-updating information, as necessary. CC ID 15159 Configuration Preventive
    Use headings on all web pages and labels in all content that describes the topic or purpose. CC ID 15070 Configuration Preventive
    Display website content triggered by mouseover or keyboard focus. CC ID 15152 Configuration Preventive
    Ensure the purpose of links can be determined through the link text. CC ID 15157 Configuration Preventive
    Use a unique title that describes the topic or purpose for each web page. CC ID 15069 Configuration Preventive
    Allow the use of time limits, as necessary. CC ID 15155 Configuration Preventive
    Include mechanisms for changing authenticators in human interface guidelines. CC ID 14944 Establish/Maintain Documentation Preventive
    Refrain from activating a change of context in a user interface component. CC ID 15115 Configuration Preventive
    Include functionality for managing user data in human interface guidelines. CC ID 14928 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain sandboxes. CC ID 14946 Testing Preventive
    Allow personal data collected for other purposes to be used to develop and test artificial intelligence systems in regulatory sandboxes under defined conditions. CC ID 15044
    [In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems; Article 59 1.(a)(i)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures; Article 59 1.(a)(ii)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: energy sustainability; Article 59 1.(a)(iii)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: safety and resilience of transport systems and mobility, critical infrastructure and networks; Article 59 1.(a)(iv)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: efficiency and quality of public administration and public services; Article 59 1.(a)(v)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data; Article 59 1.(b)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing; Article 59 1.(c)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data; Article 59 1.(d)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: providers can further share the originally collected data only in accordance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox; Article 59 1.(e)
    {do not lead} {do not affect} In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data; Article 59 1.(f)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law; Article 59 1.(h)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV; Article 59 1.(i)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities. Article 59 1.(j)
    {technical measures} In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period; Article 59 1.(g)]
    Systems Design, Build, and Implementation Preventive
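Conditions (g) and (h) of Article 59(1) above pair a deletion duty for sandbox personal data with a retention duty for processing logs. The following is a minimal, hypothetical sketch of how a sandbox operator might automate the split; the directory layout, function name, and retention logic are illustrative only and not prescribed by the Regulation:

```python
import shutil
from datetime import date
from pathlib import Path

def close_participation(root: Path, participant: str,
                        today: date, exit_date: date) -> None:
    """Illustrative handling of Article 59(1)(g)/(h): once participation
    in the sandbox has terminated, delete the participant's personal
    data; the logs of the processing are kept for the duration of the
    participation and are not purged here, since Union or national law
    may require retaining them longer."""
    personal = root / participant / "personal_data"
    if today >= exit_date and personal.exists():
        shutil.rmtree(personal)  # (g): delete personal data on termination
    # (h): root / participant / "processing_logs" is deliberately untouched.
```

A real implementation would also need the technical and organisational safeguards of point (g) (encryption, access control) around both directories while participation is ongoing.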
    Initiate the System Development Life Cycle implementation phase. CC ID 06268 Systems Design, Build, and Implementation Preventive
    Submit the information system's security authorization package to the appropriate stakeholders, as necessary. CC ID 13987
    [The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities. Article 46 3.]
    Establish/Maintain Documentation Preventive
    Establish and maintain technical documentation. CC ID 15005
    [The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. Article 11 1. ¶ 1
    Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the technical documentation as part of the documentation kept under the relevant Union financial services law. Article 18 3.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV; Article 23 1.(b)
    Providers of general-purpose AI models shall: draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities; Article 53 1.(a)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the technical documentation specified in Annex XI has been drawn up and all obligations referred to in Article 53 and, where applicable, Article 55 have been fulfilled by the provider; Article 54 3.(a)
    Providers of general-purpose AI models shall: draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. Article 53 1.(d)
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: Article 53 1.(b)]
    Establish/Maintain Documentation Preventive
    Retain technical documentation on the premises where the artificial intelligence system is located. CC ID 15104 Establish/Maintain Documentation Preventive
    Include the risk mitigation measures in the technical documentation. CC ID 17246 Establish/Maintain Documentation Preventive
    Include the intended outputs of the system in the technical documentation. CC ID 17245 Establish/Maintain Documentation Preventive
    Include the limitations of the system in the technical documentation. CC ID 17242 Establish/Maintain Documentation Preventive
    Include the types of data used to train the artificial intelligence system in the technical documentation. CC ID 17241 Establish/Maintain Documentation Preventive
    Include all required information in the technical documentation. CC ID 15094
    [The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment. Article 11 1. ¶ 2
    Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up containing all the information set out in paragraph 1, as well as the information required under those legal acts. Article 11 2.
    Providers of general-purpose AI models shall: draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities; Article 53 1.(a)
    The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). Article 72 3.
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and Article 53 1.(b)(i)
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: contain, at a minimum, the elements set out in Annex XII; Article 53 1.(b)(ii)]
    Establish/Maintain Documentation Preventive
    Include information that demonstrates compliance with requirements in the technical documentation. CC ID 15088
    [The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment. Article 11 1. ¶ 2]
    Establish/Maintain Documentation Preventive
    Disseminate and communicate technical documentation to interested personnel and affected parties. CC ID 17229
    [Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: Article 53 1.(b)
    Providers of general-purpose AI models shall: draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. Article 53 1.(d)]
    Communicate Preventive
  • Technical security
    22
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular TYPE CLASS
    Technical security CC ID 00508 IT Impact Zone IT Impact Zone
    Establish, implement, and maintain a digital identity management program. CC ID 13713 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain an authorized representatives policy. CC ID 13798
    [Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. Article 22 1.]
    Establish/Maintain Documentation Preventive
    Include authorized representative life cycle management requirements in the authorized representatives policy. CC ID 13802 Establish/Maintain Documentation Preventive
    Include termination procedures in the authorized representatives policy. CC ID 17226
    [The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor. Article 22 4.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor. Article 54 5.]
    Establish/Maintain Documentation Preventive
    Include any necessary restrictions for the authorized representative in the authorized representatives policy. CC ID 13801 Establish/Maintain Documentation Preventive
    Include suspension requirements for authorized representatives in the authorized representatives policy. CC ID 13800 Establish/Maintain Documentation Preventive
    Include the authorized representative's life span in the authorized representatives policy. CC ID 13799 Establish/Maintain Documentation Preventive
    Grant access to authorized personnel or systems. CC ID 12186
    [Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled: access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2; and Article 74 13.(a)
    {testing procedures} Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled: testing or auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient. Article 74 13.(b)
    The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned. Article 92 5.]
    Configuration Preventive
    Disseminate and communicate the access control log to interested personnel and affected parties. CC ID 16442 Communicate Preventive
    Include the user identifiers of all personnel who are authorized to access a system in the system record. CC ID 15171 Establish/Maintain Documentation Preventive
    Include identity information of all personnel who are authorized to access a system in the system record. CC ID 16406 Establish/Maintain Documentation Preventive
    Include the user's location in the system record. CC ID 16996 Log Management Preventive
    Include the date and time that access was reviewed in the system record. CC ID 16416 Data and Information Management Preventive
    Include the date and time that access rights were changed in the system record. CC ID 16415 Establish/Maintain Documentation Preventive
    Control all methods of remote access and teleworking. CC ID 00559
    [{training data} {validation data} {testing data} Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other relevant technical means and tools enabling remote access. Article 74 12.]
    Technical Security Preventive
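Article 74(12), cited above, contemplates access for market surveillance authorities through APIs or other technical means for remote access, subject to security safeguards. A minimal sketch of a token-authenticated, read-only lookup layer, assuming invented authority names, tokens, and data (nothing here reflects any real interface):

```python
import hmac

# Hypothetical read-only access layer for Article 74(12): authenticated
# market surveillance authorities may retrieve the training, validation
# and testing data sets, but no write path is exposed.
DATASETS = {
    "training": ["sample-001", "sample-002"],
    "validation": ["sample-101"],
    "testing": ["sample-201"],
}
AUTHORITY_TOKENS = {"msa-example": "not-a-real-token"}  # illustrative only

def fetch_dataset(split: str, authority: str, token: str) -> list:
    """Return a copy of one data set to an authenticated authority."""
    expected = AUTHORITY_TOKENS.get(authority)
    if expected is None or not hmac.compare_digest(expected, token):
        raise PermissionError("unknown authority or invalid credentials")
    if split not in DATASETS:
        raise KeyError(f"no such data set: {split!r}")
    return list(DATASETS[split])  # copy, so callers cannot mutate the source
```

Constant-time token comparison (`hmac.compare_digest`) and returning copies rather than references are examples of the "security safeguards" the provision leaves to the provider's judgment.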
    Assign virtual escorting to authorized personnel. CC ID 16440 Process or Activity Preventive
    Include information security requirements in the remote access and teleworking program. CC ID 15704 Establish/Maintain Documentation Preventive
    Refrain from allowing individuals to self-enroll into multifactor authentication from untrusted devices. CC ID 17173 Technical Security Preventive
    Implement phishing-resistant multifactor authentication techniques. CC ID 16541 Technical Security Preventive
    Document and approve requests to bypass multifactor authentication. CC ID 15464 Establish/Maintain Documentation Preventive
    Limit the source addresses from which remote administration is performed. CC ID 16393 Technical Security Preventive
  • Third Party and supply chain oversight
    27
    Third Party and supply chain oversight CC ID 08807 IT Impact Zone IT Impact Zone
    Formalize client and third party relationships with contracts or nondisclosure agreements. CC ID 00794
    [The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence. Article 25 4. ¶ 1]
    Process or Activity Detective
    Write contractual agreements in clear and conspicuous language. CC ID 16923 Acquisition/Sale of Assets or Services Preventive
    Establish, implement, and maintain software exchange agreements with all third parties. CC ID 11615 Establish/Maintain Documentation Preventive
    Establish, implement, and maintain rules of engagement with third parties. CC ID 13994 Establish/Maintain Documentation Preventive
    Include the purpose in the information flow agreement. CC ID 17016 Establish/Maintain Documentation Preventive
    Include the type of information being transmitted in the information flow agreement. CC ID 14245 Establish/Maintain Documentation Preventive
    Include the costs in the information flow agreement. CC ID 17018 Establish/Maintain Documentation Preventive
    Include the security requirements in the information flow agreement. CC ID 14244 Establish/Maintain Documentation Preventive
    Include the interface characteristics in the information flow agreement. CC ID 14240 Establish/Maintain Documentation Preventive
    Include text about participation in the organization's testing programs in third party contracts. CC ID 14402 Establish/Maintain Documentation Preventive
    Include the contract duration in third party contracts. CC ID 16221 Establish/Maintain Documentation Preventive
    Include cryptographic keys in third party contracts. CC ID 16179 Establish/Maintain Documentation Preventive
    Include bankruptcy provisions in third party contracts. CC ID 16519 Establish/Maintain Documentation Preventive
    Include cybersecurity supply chain risk management requirements in third party contracts. CC ID 15646 Establish/Maintain Documentation Preventive
    Include requirements to cooperate with competent authorities in third party contracts. CC ID 17186 Establish/Maintain Documentation Preventive
    Include compliance with the organization's data usage policies in third party contracts. CC ID 16413 Establish/Maintain Documentation Preventive
    Include financial reporting in third party contracts, as necessary. CC ID 13573 Establish/Maintain Documentation Preventive
    Include on-site visits in third party contracts. CC ID 17306 Establish/Maintain Documentation Preventive
    Include training requirements in third party contracts. CC ID 16367 Acquisition/Sale of Assets or Services Preventive
    Include location requirements in third party contracts. CC ID 16915 Acquisition/Sale of Assets or Services Preventive
    Include the dispute resolution body's contact information in the terms and conditions in third party contracts. CC ID 13813 Establish/Maintain Documentation Preventive
    Include a usage limitation of restricted data clause in third party contracts. CC ID 13026 Establish/Maintain Documentation Preventive
    Include end-of-life information in third party contracts. CC ID 15265 Establish/Maintain Documentation Preventive
    Approve or deny third party recovery plans, as necessary. CC ID 17124 Systems Continuity Preventive
    Review third party recovery plans. CC ID 17123 Systems Continuity Detective
    Disseminate and communicate third party contracts to interested personnel and affected parties. CC ID 17301 Communicate Preventive
Common Controls and mandates by Type
240 Mandated Controls - bold    64 Implied Controls - italic    779 Implementation Controls - regular

Each Common Control is assigned a meta-data type to help you determine the objective of the Control and associated Authority Document mandates aligned with it. These types include behavioral controls, process controls, records management, technical security, configuration management, etc. They are provided as another tool to dissect the Authority Document’s mandates and assign them effectively within your organization.

Number of Controls
1083 Total
  • Acquisition/Sale of Assets or Services
    11
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Prohibit the sale or transfer of a debt caused by identity theft. CC ID 16983 Monitoring and measurement Preventive
    Purchase insurance on behalf of interested personnel and affected parties. CC ID 16571 Audits and risk management Corrective
    Prohibit artificial intelligence systems from being placed on the market when it is not in compliance with the requirements. CC ID 15029
    [Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect. Article 24 2.
    Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect. Article 23 2.]
    Operational management Preventive
    Prohibit artificial intelligence systems that deploy subliminal techniques from being placed on the market. CC ID 15012 Operational management Preventive
    Prohibit artificial intelligence systems that use social scores for unfavorable treatment from being placed on the market. CC ID 15010 Operational management Preventive
    Prohibit artificial intelligence systems that evaluate or classify the trustworthiness of individuals from being placed on the market. CC ID 15008 Operational management Preventive
    Prohibit artificial intelligence systems that exploit vulnerabilities of a specific group of persons from being placed on the market. CC ID 15006 Operational management Preventive
    Process product return requests. CC ID 11598 Acquisition or sale of facilities, technology, and services Corrective
    Write contractual agreements in clear and conspicuous language. CC ID 16923 Third Party and supply chain oversight Preventive
    Include training requirements in third party contracts. CC ID 16367 Third Party and supply chain oversight Preventive
    Include location requirements in third party contracts. CC ID 16915 Third Party and supply chain oversight Preventive
  • Actionable Reports or Measurements
    8
    Monitor and evaluate system telemetry data. CC ID 14929 Monitoring and measurement Detective
    Report on the percentage of needed external audits that have been completed and reviewed. CC ID 11632 Monitoring and measurement Detective
    Establish, implement, and maintain environmental management system performance metrics. CC ID 15191 Monitoring and measurement Preventive
    Establish, implement, and maintain waste management metrics. CC ID 16152 Monitoring and measurement Preventive
    Establish, implement, and maintain emissions management metrics. CC ID 16145 Monitoring and measurement Preventive
    Establish, implement, and maintain financial management metrics. CC ID 16749 Monitoring and measurement Preventive
    Report compliance monitoring statistics to the Board of Directors and other critical stakeholders, as necessary. CC ID 00676
    [Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly. Article 20 1.
    {not be} A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken. Article 24 4.]
    Monitoring and measurement Corrective
    Mitigate reported incidents. CC ID 12973
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Operational management Preventive
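Article 15(5), cited above, names concrete attack classes: data poisoning, model poisoning, and adversarial examples (model evasion). One family of detection measures tests whether a prediction is stable under small random perturbations of the input, since adversarial inputs often sit close to a decision boundary. The sketch below is illustrative only, with an invented linear "model" and arbitrary thresholds, not a prescribed technique:

```python
import random

# Toy linear classifier standing in for the AI model; weights are invented.
WEIGHTS = [0.8, -0.5, 0.3]

def predict(x):
    score = sum(w * v for w, v in zip(WEIGHTS, x))
    return 1 if score >= 0 else 0

def looks_adversarial(x, noise=0.05, trials=50, agreement=0.9, seed=0):
    """Flag an input as a possible adversarial example (model evasion)
    if randomly perturbed copies disagree with its prediction in more
    than (1 - agreement) of the trials. Thresholds are illustrative."""
    rng = random.Random(seed)  # seeded for reproducible screening
    base = predict(x)
    same = sum(
        predict([v + rng.uniform(-noise, noise) for v in x]) == base
        for _ in range(trials)
    )
    return same / trials < agreement
```

An input far from the decision boundary passes the check, while one sitting just above it is flagged; a deployed system would pair such screening with the training-data integrity controls (against data and model poisoning) that the same paragraph requires.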
  • Audits and Risk Management
    18
    Review audit reports to determine the effectiveness of in scope controls. CC ID 12156 Audits and risk management Detective
    Interview stakeholders to determine the effectiveness of in scope controls. CC ID 12154 Audits and risk management Detective
    Review policies and procedures to determine the effectiveness of in scope controls. CC ID 12144 Audits and risk management Detective
    Evaluate personnel status changes to determine the effectiveness of in scope controls. CC ID 16556 Audits and risk management Detective
    Determine whether individuals performing a control have the appropriate staff qualifications, staff clearances, and staff competencies. CC ID 16555 Audits and risk management Detective
    Take into account if the system will be accessed by or have an impact on children in the risk management program. CC ID 14992
    [When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups. Article 9 9.]
    Audits and risk management Preventive
    Establish, implement, and maintain fundamental rights impact assessments. CC ID 17217
    [Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of: Article 27 1.
    The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information. Article 27 2.]
    Audits and risk management Preventive
    Employ risk assessment procedures that take into account risk factors. CC ID 16560 Audits and risk management Preventive
    Review the risk profiles, as necessary. CC ID 16561 Audits and risk management Detective
    Include risks to critical personnel and assets in the threat and risk classification scheme. CC ID 00698
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; Article 9 2.(a)]
    Audits and risk management Preventive
    Conduct external audits of risk assessments, as necessary. CC ID 13308 Audits and risk management Detective
    Analyze and quantify the risks to in scope systems and information. CC ID 00701
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; Article 9 2.(a)
    The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72; Article 9 2.(c)
    The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse; Article 9 2.(b)
    Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.]
    Audits and risk management Preventive
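The iterative steps of Article 9(2) quoted above — (a) identification and analysis of known and reasonably foreseeable risks, (b) estimation and evaluation under intended purpose and foreseeable misuse, and (c) evaluation of other risks arising from post-market monitoring — can be modeled as one review cycle of a living risk register. This is a minimal sketch under assumed names; the `Risk` fields (including the 1-5 severity scale) are illustrative conventions, not prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    source: str        # e.g. "intended_use", "foreseeable_misuse", "post_market"
    severity: int      # assumed 1-5 ordinal scale (not prescribed by the Act)
    acceptable: bool = False

@dataclass
class RiskManagementSystem:
    """Continuous iterative process per Article 9(2): each review cycle
    re-runs identification, estimation, and post-market evaluation."""
    register: list[Risk] = field(default_factory=list)

    def review_cycle(self, identified: list[Risk],
                     post_market: list[Risk]) -> list[Risk]:
        # Steps (a)/(b): risks identified and estimated under intended
        # use and reasonably foreseeable misuse.
        self.register.extend(identified)
        # Step (c): other risks from post-market monitoring (Article 72).
        self.register.extend(post_market)
        # Risks still judged unacceptable call for further management
        # measures (cf. Article 9(5)).
        return [r for r in self.register if not r.acceptable]
```

The point of the sketch is the loop: the register is never final, and each cycle returns the risks that still need treatment.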
    Assess the potential level of business impact risk associated with reputational damage. CC ID 15335 Audits and risk management Detective
    Include the release status of the risk assessment in the risk treatment plan. CC ID 16320 Audits and risk management Preventive
    Analyze the impact of artificial intelligence systems on society. CC ID 16317 Audits and risk management Detective
    Analyze the impact of artificial intelligence systems on individuals. CC ID 16316 Audits and risk management Detective
    Establish, implement, and maintain a cybersecurity risk management program. CC ID 16827 Audits and risk management Preventive
    Analyze the effect of the Governance, Risk, and Compliance capability to achieve organizational objectives. CC ID 12809
    [{is transparent} High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3. Article 13 1.]
    Operational management Preventive
  • Behavior 13
    Establish, implement, and maintain a testing program. CC ID 00654
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; Article 55 1.(a)]
    Monitoring and measurement Preventive
    Obtain the data subject's consent to participate in testing in real-world conditions. CC ID 17232
    [For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation; Article 61 1.(a)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects’ participation; Article 61 1.(b)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification; Article 61 1.(c)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or decisions of the AI system; Article 61 1.(d)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4) point (c), and the contact details of the provider or its legal representative from whom further information can be obtained. Article 61 1.(e)]
    Monitoring and measurement Preventive
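The five information items of Article 61(1)(a)-(e) form a closed checklist: consent to real-world testing is only valid if it is freely given and the subject was informed of every item beforehand. A minimal sketch of that gate, assuming hypothetical item keys chosen here for illustration:

```python
# Required information items per Article 61(1)(a)-(e)
REQUIRED_CONSENT_ITEMS = {
    "nature_and_objectives",    # (a) nature, objectives, possible inconvenience
    "conditions_and_duration",  # (b) testing conditions, expected duration
    "rights_and_withdrawal",    # (c) right to refuse / withdraw without detriment
    "reversal_arrangements",    # (d) how to request reversal of outputs
    "test_id_and_contact",      # (e) unique testing ID and provider contact
}

def consent_is_valid(informed_about: set[str], freely_given: bool) -> bool:
    """Consent is valid only if freely given AND the subject was informed
    of every item listed in Article 61(1) before participating."""
    return freely_given and REQUIRED_CONSENT_ITEMS <= informed_about
```

Note the subset check: informing the subject of four out of five items is not enough, and coerced consent fails regardless of the information provided.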
    Disseminate and communicate information about risks to all interested personnel and affected parties. CC ID 06718
    [In identifying the most appropriate risk management measures, the following shall be ensured: provision of information required pursuant to Article 13 and, where appropriate, training to deployers. Article 9 5. ¶ 2 (c)
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Audits and risk management Preventive
    Refrain from providing access to facilities that transmit, process, or store restricted data until the contract for accessing the facility is signed. CC ID 12311 Physical and environmental protection Preventive
    Train all personnel and third parties, as necessary. CC ID 00785
    [Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used. Article 4 ¶ 1]
    Human Resources management Preventive
    Include limitations on referrals for products and services in the Code of Conduct. CC ID 16719 Human Resources management Preventive
    Grant registration after competence and integrity is verified. CC ID 16802 Operational management Detective
    Establish, implement, and maintain human oversight over artificial intelligence systems. CC ID 15003
    [High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use. Article 14 1.
    Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section. Article 14 2.
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: Article 14 3.
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; Article 14 3.(a)
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer. Article 14 3.(b)
    Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support. Article 26 2.]
    Operational management Preventive
    Implement measures to enable personnel assigned to human oversight to decide to refrain from using the artificial intelligence system, or to disregard, override, or reverse its output. CC ID 15079
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system; Article 14 4.(d)]
    Operational management Preventive
    Notify the supervisory authority. CC ID 00472
    [Notified bodies shall inform the notifying authority of the following: any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII; Article 45 1.(b)
    Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data. Article 5 4.
    Providers of high-risk AI systems shall: take the necessary corrective actions and provide information as required in Article 20; Article 16 ¶ 1 (j)
    Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect. Article 24 2.
    Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect. Article 23 2.
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk. Article 52 1.
    Providers or prospective providers shall notify the national market surveillance authority in the Member State where the testing in real world conditions is to be conducted of the suspension or termination of the testing in real world conditions and of the final outcomes. Article 60 8.]
    Privacy protection for information and data Preventive
    Notify data subjects when their personal data is transferred. CC ID 00352 Privacy protection for information and data Preventive
    Follow the instructions of the data transferrer. CC ID 00334 Privacy protection for information and data Preventive
    Notify the data subject of any personal data changes during the personal data transfer. CC ID 00350 Privacy protection for information and data Preventive
  • Business Processes 51
    Establish, implement, and maintain a reporting methodology program. CC ID 02072 Leadership and high level objectives Preventive
    Establish, implement, and maintain an internal reporting program. CC ID 12409
    [Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. Article 26 7.]
    Leadership and high level objectives Preventive
    Accept the attestation engagement when all preconditions are met. CC ID 13933 Audits and risk management Preventive
    Document and justify any exclusions from the scope of the risk management activities in the risk management program. CC ID 15336 Audits and risk management Detective
    Integrate the risk management program with the organization's business activities. CC ID 13661 Audits and risk management Preventive
    Integrate the risk management program into daily business decision-making. CC ID 13659 Audits and risk management Preventive
    Include regular updating in the risk management system. CC ID 14990
    [{continuous life cycle} The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: Article 9 2.]
    Audits and risk management Preventive
    Approve the threat and risk classification scheme. CC ID 15693 Audits and risk management Preventive
    Establish, implement, and maintain a risk assessment awareness and training program. CC ID 06453
    [In identifying the most appropriate risk management measures, the following shall be ensured: provision of information required pursuant to Article 13 and, where appropriate, training to deployers. Article 9 5. ¶ 2 (c)]
    Audits and risk management Preventive
    Review and approve material risks documented in the residual risk report, as necessary. CC ID 13672
    [The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. Article 9 5. ¶ 1]
    Audits and risk management Preventive
    Analyze the impact of artificial intelligence systems on business operations. CC ID 16356 Audits and risk management Preventive
    Acquire cyber insurance, as necessary. CC ID 12693 Audits and risk management Preventive
    Include an appeal process in the identification issuance procedures. CC ID 15428 Physical and environmental protection Preventive
    Charge a fee for replacement of identification cards or badges, as necessary. CC ID 17001 Physical and environmental protection Preventive
    Adhere to operating procedures as defined in the Standard Operating Procedures Manual. CC ID 06328
    [{technical measures} Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6. Article 26 1.]
    Operational management Preventive
    Establish, implement, and maintain information sharing agreements. CC ID 15645 Operational management Preventive
    Implement and comply with the Governance, Risk, and Compliance framework. CC ID 00818 Operational management Preventive
    Classify virtual systems by type and purpose. CC ID 16332 Operational management Preventive
    Establish, implement, and maintain an Incident Management program. CC ID 00853 Operational management Preventive
    Impose conditions or restrictions on the termination or suspension of a registration. CC ID 16796 Operational management Preventive
    Limit artificial intelligence systems authorizations to the time period until conformity assessment procedures are complete. CC ID 15043
    [By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Operational management Preventive
    Terminate authorizations for artificial intelligence systems when conformity assessment procedures are complete. CC ID 15042 Operational management Preventive
    Authorize artificial intelligence systems to be put into service for exceptional reasons while conformity assessment procedures are being conducted. CC ID 15039
    [In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.]
    Operational management Preventive
    Assess the trustworthiness of artificial intelligence systems. CC ID 16319 Operational management Detective
    Authorize artificial intelligence systems to be placed on the market for exceptional reasons while conformity assessment procedures are being conducted. CC ID 15037
    [By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Operational management Preventive
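The Article 46(1) derogation pairs two controls above: an exceptional authorisation is valid only for a limited period (CC ID 15043) and must be terminated once the conformity assessment procedures are complete (CC ID 15042). A minimal sketch of that gate, with hypothetical parameter names:

```python
from datetime import date

def authorisation_active(granted: date, expires: date,
                         conformity_assessment_complete: bool,
                         today: date) -> bool:
    """Article 46(1): an exceptional authorisation is valid only for a
    limited period and only while conformity assessment is pending."""
    if conformity_assessment_complete:
        # Terminate the exceptional authorisation once the procedures
        # are complete (CC ID 15042).
        return False
    return granted <= today <= expires
```

Either condition alone ends the derogation: the time window closing, or the ordinary conformity route becoming available.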
    Withdraw authorizations that are unjustified. CC ID 15035 Operational management Corrective
    Ensure the transport conditions for artificial intelligence systems refrain from compromising compliance. CC ID 15031
    [{storage conditions} Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2. Article 23 4.
    {storage conditions} Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2. Article 24 3.]
    Operational management Detective
    Receive prior authorization for the use of a remote biometric identification system. CC ID 15014
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    {post-remote biometric identification system} Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. Article 26 10. ¶ 1]
    Operational management Preventive
    Refrain from making a decision based on system output unless verified by at least two natural persons. CC ID 15004
    [{not be taken} {adverse effect} The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system. Article 5 3. ¶ 2
    {human oversight} {not taken} For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. Article 14 5. ¶ 1
    The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate. Article 14 5. ¶ 2
    {not be used} {not be taken} {adverse effect} In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems. Article 26 10. ¶ 3]
    Operational management Preventive
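The two-person rule of Article 14(5) quoted above hinges on the verifiers being distinct natural persons, with a possible exemption for law enforcement, migration, border control or asylum where Union or national law deems the requirement disproportionate. A minimal sketch, assuming verifiers are identified by hypothetical unique IDs:

```python
def decision_permitted(verifier_ids: list[str],
                       law_enforcement_exemption: bool = False) -> bool:
    """Article 14(5): no action or decision on the identification unless
    it has been separately verified and confirmed by at least two
    distinct natural persons with the necessary competence and authority.

    The exemption lifts the two-person requirement, not verification
    itself, so at least one confirmation is still assumed here.
    """
    distinct_verifiers = len(set(verifier_ids))
    if law_enforcement_exemption:
        return distinct_verifiers >= 1
    return distinct_verifiers >= 2
```

The deduplication via `set` is the substance of the control: the same person confirming twice does not satisfy the "separately verified by at least two natural persons" requirement.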
    Enable users to interpret the artificial intelligence system's output and use. CC ID 15002
    [{is transparent} High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3. Article 13 1.]
    Operational management Preventive
    Document the use of remote biometric identification systems. CC ID 17215
    [{post-remote biometric identification system} Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities. Article 26 10. ¶ 5]
    Operational management Preventive
    Establish, implement, and maintain configuration control and Configuration Status Accounting. CC ID 00863 System hardening through configuration management Preventive
    Register new systems with the program office or other applicable stakeholder. CC ID 13986
    [In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 71. Article 49 1.
    Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71. Article 49 2.
    Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies, offices or agencies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 71. Article 49 3.]
    Acquisition or sale of facilities, technology, and services Preventive
    Establish, implement, and maintain a consumer complaint management program. CC ID 04570
    [In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities. Article 85 ¶ 2
    Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. A complaint shall be duly reasoned and indicate at least: Article 89 2.]
    Acquisition or sale of facilities, technology, and services Preventive
    Document consumer complaints. CC ID 13903
    [{natural persons} Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority. Article 85 ¶ 1
    A complaint shall be duly reasoned and indicate at least: the point of contact of the provider of the general-purpose AI model concerned; Article 89 2.(a)
    A complaint shall be duly reasoned and indicate at least: a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this Regulation; Article 89 2.(b)
    {is relevant} A complaint shall be duly reasoned and indicate at least: any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative. Article 89 2.(c)]
    Acquisition or sale of facilities, technology, and services Preventive
    Analyze the digital content hosted by the organization for any electronic material associated with the take-down request. CC ID 09974 Acquisition or sale of facilities, technology, and services Detective
    Refrain from charging a fee to implement an opt-out request. CC ID 13877 Privacy protection for information and data Preventive
    Offer incentives for consumers to opt-in to provide their personal data to the organization. CC ID 13781 Privacy protection for information and data Preventive
    Refrain from using coercive financial incentive programs to entice opt-in consent. CC ID 13795 Privacy protection for information and data Preventive
    Treat an opt-out direction by an individual joint consumer as applying to all associated joint consumers. CC ID 13452 Privacy protection for information and data Preventive
    Treat opt-out directions separately for each customer relationship the data subject establishes with the organization. CC ID 13454 Privacy protection for information and data Preventive
    Comply with opt-out directions by the data subject, unless otherwise directed by compliance requirements. CC ID 13451 Privacy protection for information and data Preventive
    Allow consent requests to be provided in any official languages. CC ID 16530 Privacy protection for information and data Preventive
    Define the requirements for approving or denying approval applications. CC ID 16780 Privacy protection for information and data Preventive
    Extend the time limit for approving or denying approval applications. CC ID 16779 Privacy protection for information and data Preventive
    Refrain from disclosing Individually Identifiable Health Information related to reproductive health care, as necessary. CC ID 17250 Privacy protection for information and data Preventive
    Cease the use or disclosure of Individually Identifiable Health Information under predetermined conditions. CC ID 17251 Privacy protection for information and data Preventive
    Refrain from using Individually Identifiable Health Information related to reproductive health care, as necessary. CC ID 17256 Privacy protection for information and data Preventive
    Refrain from processing personal data for marketing or advertising to children. CC ID 14010 Privacy protection for information and data Preventive
    Refrain from requiring independent recourse mechanisms when transferring personal data from one data controller to another data controller. CC ID 12528 Privacy protection for information and data Preventive
    Refrain from requiring a contract between the data controller and trusted third parties when personal information is transferred. CC ID 12527 Privacy protection for information and data Preventive
  • Communicate
    Establish, implement, and maintain an external reporting program. CC ID 12876 Leadership and high level objectives Preventive
    Include reporting to governing bodies in the external reporting plan. CC ID 12923
    [Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 22 3.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor. Article 22 4.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 54 3.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor. Article 54 5.]
    Leadership and high level objectives Preventive
    Submit confidential treatment applications to interested personnel and affected parties. CC ID 16592 Leadership and high level objectives Preventive
    Notify affected parties and interested personnel of quality management system approvals that have been refused, suspended, or withdrawn. CC ID 15045
    [Each notified body shall inform the other notified bodies of: quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued; Article 45 2.(a)]
    Leadership and high level objectives Preventive
    Notify affected parties and interested personnel of quality management system approvals that have been issued. CC ID 15036
    [Each notified body shall inform the other notified bodies of: quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued; Article 45 2.(a)
    Notified bodies shall inform the notifying authority of the following: any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII; Article 45 1.(a)]
    Leadership and high level objectives Preventive
    Report errors and faults to the appropriate personnel, as necessary. CC ID 14296
    [Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Monitoring and measurement Corrective
    Disseminate and communicate the audit and accountability policy to interested personnel and affected parties. CC ID 14095 Monitoring and measurement Preventive
    Disseminate and communicate the audit and accountability procedures to interested personnel and affected parties. CC ID 14137 Monitoring and measurement Preventive
    Report inappropriate usage of user accounts to the appropriate personnel. CC ID 14243 Monitoring and measurement Detective
    Disseminate and communicate the security assessment and authorization policy to interested personnel and affected parties. CC ID 14218 Monitoring and measurement Preventive
    Disseminate and communicate security assessment and authorization procedures to interested personnel and affected parties. CC ID 14224 Monitoring and measurement Preventive
    Notify interested personnel and affected parties prior to performing testing. CC ID 17034 Monitoring and measurement Preventive
    Share conformity assessment results with affected parties and interested personnel. CC ID 15113
    [{high-risk artificial intelligence system} A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment. Article 6 4.
    Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive conformity assessment results. Article 45 3.]
    Monitoring and measurement Preventive
    Notify affected parties and interested personnel of technical documentation assessment certificates that have been issued. CC ID 15112
    [Each notified body shall inform the other notified bodies of: Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued. Article 45 2.(b)
    Notified bodies shall inform the notifying authority of the following: any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII; Article 45 1.(a)]
    Monitoring and measurement Preventive
    Notify affected parties and interested personnel of technical documentation assessment certificates that have been refused, withdrawn, suspended or restricted. CC ID 15111
    [Each notified body shall inform the other notified bodies of: Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued. Article 45 2.(b)]
    Monitoring and measurement Preventive
    Disseminate and communicate the vulnerability scan results to interested personnel and affected parties. CC ID 16418 Monitoring and measurement Preventive
    Disseminate and communicate the test results to interested personnel and affected parties. CC ID 17103
    [{fundamental rights impact assessment} Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify. Article 27 3.]
    Monitoring and measurement Preventive
    Disseminate and communicate the disciplinary action notice to interested personnel and affected parties. CC ID 16585 Monitoring and measurement Preventive
    Delay the reporting of incident management metrics, as necessary. CC ID 15501 Monitoring and measurement Preventive
    Disseminate and communicate insurance options to interested personnel and affected parties. CC ID 16572 Audits and risk management Preventive
    Disseminate and communicate insurance requirements to interested personnel and affected parties. CC ID 16567 Audits and risk management Preventive
    Disseminate and communicate the Data Protection Impact Assessment to interested personnel and affected parties. CC ID 15313 Audits and risk management Preventive
    Disseminate and communicate the risk assessment policy to interested personnel and affected parties. CC ID 14115 Audits and risk management Preventive
    Disseminate and communicate the risk assessment procedures to interested personnel and affected parties. CC ID 14136 Audits and risk management Preventive
    Notify the organization upon completion of the external audits of the organization's risk assessment. CC ID 13313 Audits and risk management Preventive
    Disseminate and communicate the Business Impact Analysis to interested personnel and affected parties. CC ID 15300 Audits and risk management Preventive
    Disseminate and communicate the risk treatment plan to interested personnel and affected parties. CC ID 15694 Audits and risk management Preventive
    Disseminate and communicate the cybersecurity risk management policy to interested personnel and affected parties. CC ID 16832 Audits and risk management Preventive
    Disseminate and communicate the cybersecurity risk management program to interested personnel and affected parties. CC ID 16829 Audits and risk management Preventive
    Disseminate and communicate the cybersecurity risk management strategy to interested personnel and affected parties. CC ID 16825 Audits and risk management Preventive
    Disseminate and communicate the supply chain risk management policy to all interested personnel and affected parties. CC ID 14662 Audits and risk management Preventive
    Disseminate and communicate the supply chain risk management procedures to all interested personnel and affected parties. CC ID 14712 Audits and risk management Preventive
    Disseminate and communicate the risk management policy to interested personnel and affected parties. CC ID 13792 Audits and risk management Preventive
    Disseminate and communicate the disclosure report to interested personnel and affected parties. CC ID 15667
    [{market surveillance authority} Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment. Article 26 10. ¶ 6]
    Audits and risk management Preventive
    Disseminate and communicate the access control log to interested personnel and affected parties. CC ID 16442 Technical security Preventive
    Post floor plans of critical facilities in secure locations. CC ID 16138 Physical and environmental protection Preventive
    Disseminate and communicate the camera installation policy to interested personnel and affected parties. CC ID 15461 Physical and environmental protection Preventive
    Disseminate and communicate the security awareness and training procedures to interested personnel and affected parties. CC ID 14138 Human Resources management Preventive
    Notify interested personnel and affected parties of an extortion payment in the event of a cybersecurity event. CC ID 16539 Operational management Preventive
    Notify interested personnel and affected parties of the reasons for the extortion payment, along with any alternative solutions. CC ID 16538 Operational management Preventive
    Report to breach notification organizations the reasons for a delay in sending breach notifications. CC ID 16797 Operational management Preventive
    Report to breach notification organizations the distribution list to which the organization will send data loss event notifications. CC ID 16782 Operational management Preventive
    Establish, implement, and maintain incident reporting time frame standards. CC ID 12142
    [The report referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident. Article 73 2. ¶ 1
    {be no later than} Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident as defined in Article 3, point (49)(b), the report referred to in paragraph 1 of this Article shall be provided immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of that incident. Article 73 3.
    {be no later than} Notwithstanding paragraph 2, in the event of the death of a person, the report shall be provided immediately after the provider or the deployer has established, or as soon as it suspects, a causal relationship between the high-risk AI system and the serious incident, but not later than 10 days after the date on which the provider or, where applicable, the deployer becomes aware of the serious incident. Article 73 4.]
    Operational management Preventive
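    The Article 73 citations above define three outer reporting limits keyed to when the provider or deployer becomes aware of the serious incident: 15 days in the general case, 2 days for a widespread infringement or an incident as defined in Article 3, point (49)(b), and 10 days in the event of a death. A minimal sketch of that deadline selection (the function name and parameters are illustrative; in every case the Regulation requires the report "immediately", so these are latest-permissible dates only):

    ```python
    from datetime import date, timedelta

    def reporting_deadline(awareness: date, *, death: bool = False,
                           widespread_infringement: bool = False) -> date:
        """Latest permissible date for a serious-incident report under Article 73.

        Article 73(3): 2 days for a widespread infringement or an incident as
                       defined in Article 3, point (49)(b).
        Article 73(4): 10 days in the event of the death of a person.
        Article 73(2): 15 days otherwise.
        """
        if widespread_infringement:
            days = 2
        elif death:
            days = 10
        else:
            days = 15
        return awareness + timedelta(days=days)
    ```

    For example, awareness on 1 March 2025 yields a general-case deadline of 16 March 2025, but 3 March 2025 for a widespread infringement.
    
    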
    Advise users on how to navigate content. CC ID 15138 Operational management Preventive
    Make the registration database available to the public. CC ID 15107
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Operational management Preventive
    Provide affected parties with the role of artificial intelligence in decision making. CC ID 17236
    [Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken. Article 86 1.]
    Operational management Preventive
    Refrain from notifying users when images, videos, or audio have been artificially generated or manipulated if use of the artificial intelligence system is authorized by law. CC ID 15051
    [Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Article 50 4. ¶ 1
    Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. Article 50 4. ¶ 2]
    Operational management Preventive
    Notify users when images, videos, or audio on the artificial intelligence system has been artificially generated or manipulated. CC ID 15019
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.
    Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Article 50 4. ¶ 1
    Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. Article 50 4. ¶ 2]
    Operational management Preventive
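    Article 50(2), cited above, requires providers to mark synthetic outputs in a machine-readable, detectable format. A minimal sketch of what marking and downstream detection could look like, assuming a simple metadata dictionary; the field names here are hypothetical, and in practice a provider would follow an interoperable technical standard for content provenance (such as C2PA) rather than ad hoc fields:

    ```python
    def mark_as_synthetic(content_metadata: dict, generator: str) -> dict:
        """Attach a machine-readable marker indicating the content was
        artificially generated or manipulated (Article 50(2)).
        Field names are illustrative only."""
        marked = dict(content_metadata)  # leave the caller's metadata untouched
        marked["ai_generated"] = True
        marked["generator"] = generator
        return marked

    def is_marked_synthetic(content_metadata: dict) -> bool:
        """Detect the marker, as a downstream deployer or tool might."""
        return bool(content_metadata.get("ai_generated"))
    ```

    A deployer-side check can then drive the Article 50(4) disclosure obligations for deep fakes and AI-generated text.
    
    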
    Refrain from notifying users of artificial intelligence systems using biometric categorization for law enforcement. CC ID 15017
    [{applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Operational management Preventive
    Notify users when they are using an artificial intelligence system. CC ID 15015
    [Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply. Article 26 11.
    Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. Article 26 7.
    {applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Operational management Preventive
    Notify interested personnel and affected parties of the use of remote biometric identification systems. CC ID 17216
    [{post-remote biometric identification system} Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities. Article 26 10. ¶ 5]
    Operational management Preventive
    Disseminate and communicate the declaration of conformity to interested personnel and affected parties. CC ID 15102
    [Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system shall provide that authority with all the information and documentation regarding their actions pursuant to paragraphs 1 to 4 necessary to demonstrate the conformity of that system with the requirements set out in Section 2. Article 24 5.
    Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities. Article 23 6.
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned. Article 21 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider; Article 22 3.(c)]
    Operational management Preventive
    Disseminate and communicate technical documentation to interested personnel and affected parties. CC ID 17229
    [Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: Article 53 1.(b)
    Providers of general-purpose AI models shall: draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. Article 53 1.(d)]
    Systems design, build, and implementation Preventive
    Disseminate and communicate the system documentation to interested personnel and affected parties. CC ID 14285
    [Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use; Article 23 1.(c)
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities. Article 23 6.]
    Acquisition or sale of facilities, technology, and services Preventive
    Notify the complainant about their rights after receiving a complaint. CC ID 16794 Acquisition or sale of facilities, technology, and services Preventive
    Post contact information in an easily seen location at facilities. CC ID 13812 Acquisition or sale of facilities, technology, and services Preventive
    Provide users a list of the available dispute resolution bodies. CC ID 13814 Acquisition or sale of facilities, technology, and services Preventive
    Post the dispute resolution body's contact information on the organization's website. CC ID 13811 Acquisition or sale of facilities, technology, and services Preventive
    Disseminate and communicate the consumer complaint management program to interested personnel and affected parties. CC ID 16795 Acquisition or sale of facilities, technology, and services Preventive
    Provide a copy of the data subject's consent to the data subject. CC ID 17234
    [The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative. Article 61 2.]
    Privacy protection for information and data Preventive
    Notify interested personnel and affected parties of the reasons the opt-out request was refused. CC ID 16537 Privacy protection for information and data Preventive
    Submit approval applications to the supervisory authority. CC ID 16627 Privacy protection for information and data Preventive
    Notify the supervisory authority of the safeguards employed to protect the data subject's rights. CC ID 12605 Privacy protection for information and data Preventive
    Respond to questions about submissions in a timely manner. CC ID 16930 Privacy protection for information and data Preventive
    Include any reasons for delay if notifying the supervisory authority after the time limit. CC ID 12675 Privacy protection for information and data Corrective
    Notify the data subject after their personal data is disposed, as necessary. CC ID 13502 Privacy protection for information and data Preventive
    Notify the subject of care when a lack of availability of health information systems might have adversely affected their care. CC ID 13990 Privacy protection for information and data Corrective
    Refrain from disseminating and communicating with individuals that have opted out of direct marketing communications. CC ID 13708 Privacy protection for information and data Corrective
    Notify data subjects of the geographic locations of the third parties when transferring personal data to third parties. CC ID 14414 Privacy protection for information and data Preventive
    Notify data subjects about organizational liability when transferring personal data to third parties. CC ID 12353 Privacy protection for information and data Preventive
    Disseminate and communicate instructions for the appeal process to interested personnel and affected parties. CC ID 16544 Privacy protection for information and data Preventive
    Disseminate and communicate a written explanation of the reasons for appeal decisions to interested personnel and affected parties. CC ID 16542 Privacy protection for information and data Preventive
    Disseminate and communicate third party contracts to interested personnel and affected parties. CC ID 17301 Third Party and supply chain oversight Preventive
  • Configuration
    153
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular
    IMPACT ZONE    CLASS
    Enable the logging capability to capture enough information to ensure the system is functioning according to its intended purpose throughout its life cycle. CC ID 15001 Monitoring and measurement Preventive
    Disallow the use of payment applications when a vulnerability scan report indicates vulnerabilities are present. CC ID 12188 Monitoring and measurement Corrective
    Grant access to authorized personnel or systems. CC ID 12186
    [Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled: access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2; and Article 74 13.(a)
    {testing procedures} Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled: testing or auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient. Article 74 13.(b)
    The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned. Article 92 5.]
    Technical security Preventive
    Configure focus order in a meaningful way. CC ID 15206 Operational management Preventive
    Configure keyboard interfaces to provide all the functionality that is available for the associated website content. CC ID 15151 Operational management Preventive
    Programmatically set the states, properties, and values of user interface components. CC ID 15150 Operational management Preventive
    Notify users of changes to user interface components. CC ID 15149 Operational management Preventive
    Refrain from designing content in a way that is known to cause seizures or physical reactions. CC ID 15203 Operational management Preventive
    Configure content to be compatible with various user agents and assistive technologies. CC ID 15147 Operational management Preventive
    Configure content to be interpreted by various user agents and assistive technologies. CC ID 15146 Operational management Preventive
    Provide captions for prerecorded audio content. CC ID 15204 Operational management Preventive
    Ensure user interface component names include the same text that is presented visually. CC ID 15145 Operational management Preventive
    Configure user interface components to operate device motion and user motion functionality. CC ID 15144 Operational management Preventive
    Configure single pointer functionality to organizational standards. CC ID 15143 Operational management Preventive
    Configure the keyboard operable user interface so the keyboard focus indicator is visible. CC ID 15142 Operational management Preventive
    Provide users the ability to disable user motion and device motion. CC ID 15205 Operational management Preventive
    Refrain from duplicating attributes in website content using markup languages. CC ID 15141 Operational management Preventive
    Use unique identifiers when using markup languages. CC ID 15140 Operational management Preventive
    Programmatically determine the status messages to convey to users. CC ID 15139 Operational management Preventive
    Allow users the ability to move focus with the keyboard. CC ID 15136 Operational management Preventive
    Avoid using images of text to convey information. CC ID 15202 Operational management Preventive
    Allow users to pause, stop, or hide moving, blinking or scrolling information. CC ID 15135 Operational management Preventive
    Display website content without loss of information or functionality and without requiring scrolling in two dimensions. CC ID 15134 Operational management Preventive
    Use images of text to convey information, as necessary. CC ID 15132 Operational management Preventive
    Refrain from using color as the only visual means to distinguish content. CC ID 15130 Operational management Preventive
    Refrain from restricting content to a single display orientation. CC ID 15129 Operational management Preventive
    Use text to convey information on web pages, as necessary. CC ID 15128 Operational management Preventive
    Configure the contrast ratio to organizational standards. CC ID 15127 Operational management Preventive
    Programmatically determine the correct reading sequence. CC ID 15126 Operational management Preventive
    Programmatically determine the information, structure, and relationships conveyed through the presentation. CC ID 15123 Operational management Preventive
    Provide audio descriptions for all prerecorded video content. CC ID 15122 Operational management Preventive
    Provide alternative forms of CAPTCHA, as necessary. CC ID 15121 Operational management Preventive
    Provide alternatives for time-based media. CC ID 15119 Operational management Preventive
    Configure non-text content to be ignored by assistive technology when it is pure decoration or not presented to users. CC ID 15118 Operational management Preventive
    Configure non-text content with a descriptive identification. CC ID 15117 Operational management Preventive
    Provide text alternatives for non-text content, as necessary. CC ID 15078 Operational management Preventive
    Implement functionality for a single pointer so an up-event reverses the outcome of a down-event. CC ID 15076 Operational management Preventive
    Implement functionality for a single pointer so the completion of a down-event is essential. CC ID 15075 Operational management Preventive
    Implement functionality to abort or undo the function when using a single pointer. CC ID 15074 Operational management Preventive
    Implement functionality for a single pointer so the up-event signals the completion of a function. CC ID 15073 Operational management Preventive
    Implement functionality for a single pointer so the down-event is not used to execute any part of a function. CC ID 15072 Operational management Preventive
    Allow users the ability to use various input devices. CC ID 15071 Operational management Preventive
    Implement mechanisms to allow users the ability to bypass repeated blocks of website content. CC ID 15068 Operational management Preventive
    Implement flashes below the general flash and red flash thresholds on web pages. CC ID 15067 Operational management Preventive
    Configure content to be presentable in a manner that is clear and conspicuous to all users. CC ID 15066
    [The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements. Article 50 5.]
    Operational management Preventive
    Configure non-text content that is a control or accepts user input with a name that describes its purpose. CC ID 15065 Operational management Preventive
    Allow users the ability to modify time limits in website content a defined number of times. CC ID 15064 Operational management Preventive
    Provide users with a simple method to extend the time limits set by content. CC ID 15063 Operational management Preventive
    Allow users the ability to disable time limits set by content. CC ID 15062 Operational management Preventive
    Warn users before time limits set by content are about to expire. CC ID 15061 Operational management Preventive
    Allow users the ability to modify time limits set by website or native applications. CC ID 15060 Operational management Preventive
    Provide users time to read and use website content, as necessary. CC ID 15059 Operational management Preventive
    Activate keyboard shortcuts on user interface components only when the appropriate component has focus. CC ID 15058 Operational management Preventive
    Provide users a mechanism to turn off keyboard shortcuts, as necessary. CC ID 15057 Operational management Preventive
    Configure all functionality to be accessible with a keyboard. CC ID 15056 Operational management Preventive
    Configure the system security parameters to prevent system misuse or information misappropriation. CC ID 00881
    [High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. Article 15 5. ¶ 1]
    System hardening through configuration management Preventive
    Configure Hypertext Transfer Protocol headers in accordance with organizational standards. CC ID 16851 System hardening through configuration management Preventive
    Configure Hypertext Transfer Protocol security headers in accordance with organizational standards. CC ID 16488 System hardening through configuration management Preventive
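As a minimal sketch of the header-hardening control above: the snippet below fills in a baseline set of common HTTP security response headers. The header values shown are illustrative defaults only, and the names `SECURITY_HEADERS` and `apply_security_headers` are hypothetical; the actual headers and values to enforce come from the organizational standards the control references.

```python
# Common HTTP security response headers. These values are illustrative
# defaults, not the organizational standards referenced by the control.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Content-Security-Policy": "default-src 'self'",
    "Referrer-Policy": "no-referrer",
}

def apply_security_headers(headers: dict) -> dict:
    """Return a copy of a response-header mapping with any missing
    security headers filled in; headers already set are not overridden."""
    merged = dict(SECURITY_HEADERS)
    merged.update(headers)
    return merged

print(apply_security_headers({"Content-Type": "text/html"}))
```

In practice this kind of merge is usually done by web-server or middleware configuration rather than application code; the sketch only shows the fill-in-missing-defaults behavior.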
    Configure "Enable Structured Exception Handling Overwrite Protection (SEHOP)" to organizational standards. CC ID 15385 System hardening through configuration management Preventive
    Configure Microsoft Attack Surface Reduction rules in accordance with organizational standards. CC ID 16478 System hardening through configuration management Preventive
    Configure "Remote host allows delegation of non-exportable credentials" to organizational standards. CC ID 15379 System hardening through configuration management Preventive
    Configure "Configure enhanced anti-spoofing" to organizational standards. CC ID 15376 System hardening through configuration management Preventive
    Configure "Block user from showing account details on sign-in" to organizational standards. CC ID 15374 System hardening through configuration management Preventive
    Configure "Configure Attack Surface Reduction rules" to organizational standards. CC ID 15370 System hardening through configuration management Preventive
    Configure "Turn on e-mail scanning" to organizational standards. CC ID 15361 System hardening through configuration management Preventive
    Configure "Prevent users and apps from accessing dangerous websites" to organizational standards. CC ID 15359 System hardening through configuration management Preventive
    Configure "Enumeration policy for external devices incompatible with Kernel DMA Protection" to organizational standards. CC ID 15352 System hardening through configuration management Preventive
    Configure "Prevent Internet Explorer security prompt for Windows Installer scripts" to organizational standards. CC ID 15351 System hardening through configuration management Preventive
    Store state information from applications and software separately. CC ID 14767 System hardening through configuration management Preventive
    Configure the "aufs storage" to organizational standards. CC ID 14461 System hardening through configuration management Preventive
    Configure the "AppArmor Profile" to organizational standards. CC ID 14496 System hardening through configuration management Preventive
    Configure the "device" argument to organizational standards. CC ID 14536 System hardening through configuration management Preventive
    Configure the "Docker" group ownership to organizational standards. CC ID 14495 System hardening through configuration management Preventive
    Configure the "Docker" user ownership to organizational standards. CC ID 14505 System hardening through configuration management Preventive
    Configure "Allow upload of User Activities" to organizational standards. CC ID 15338 System hardening through configuration management Preventive
    Configure the "ulimit" to organizational standards. CC ID 14499 System hardening through configuration management Preventive
    Configure the computer-wide, rather than per-user, use of Microsoft Spynet Reporting for Windows Defender properly. CC ID 05282 System hardening through configuration management Preventive
    Configure the "Turn off Help Ratings" setting. CC ID 05285 System hardening through configuration management Preventive
    Configure the "Decoy Admin Account Not Disabled" policy properly. CC ID 05286 System hardening through configuration management Preventive
    Configure the "Anonymous access to the registry" policy properly. CC ID 05288 System hardening through configuration management Preventive
    Configure the File System Checker and Popups setting. CC ID 05289 System hardening through configuration management Preventive
    Configure the System File Checker setting. CC ID 05290 System hardening through configuration management Preventive
    Configure the System File Checker Progress Meter setting. CC ID 05291 System hardening through configuration management Preventive
    Configure the Protect Kernel object attributes properly. CC ID 05292 System hardening through configuration management Preventive
    Verify crontab files are owned by an appropriate user or group. CC ID 05305 System hardening through configuration management Preventive
    Verify the /etc/syslog.conf file is owned by an appropriate user or group. CC ID 05322 System hardening through configuration management Preventive
    Verify the traceroute executable is owned by an appropriate user or group. CC ID 05323 System hardening through configuration management Preventive
    Verify the /etc/passwd file is owned by an appropriate user or group. CC ID 05325 System hardening through configuration management Preventive
    Configure the "Prohibit Access of the Windows Connect Now Wizards" setting. CC ID 05380 System hardening through configuration management Preventive
    Configure the "Allow remote access to the PnP interface" setting. CC ID 05381 System hardening through configuration management Preventive
    Configure the "Do not create system restore point when new device driver installed" setting. CC ID 05382 System hardening through configuration management Preventive
    Configure the "Turn Off Access to All Windows Update Feature" setting. CC ID 05383 System hardening through configuration management Preventive
    Configure the "Turn Off Automatic Root Certificates Update" setting. CC ID 05384 System hardening through configuration management Preventive
    Configure the "Turn Off Event Views 'Events.asp' Links" setting. CC ID 05385 System hardening through configuration management Preventive
    Configure the "Turn Off Internet File Association Service" setting. CC ID 05389 System hardening through configuration management Preventive
    Configure the "Turn Off Registration if URL Connection is Referring to Microsoft.com" setting. CC ID 05390 System hardening through configuration management Preventive
    Configure the "Turn off the 'Order Prints' Picture task" setting. CC ID 05391 System hardening through configuration management Preventive
    Configure the "Turn Off Windows Movie Maker Online Web Links" setting. CC ID 05392 System hardening through configuration management Preventive
    Configure the "Turn Off Windows Movie Maker Saving to Online Video Hosting Provider" setting. CC ID 05393 System hardening through configuration management Preventive
    Configure the "Don't Display the Getting Started Welcome Screen at Logon" setting. CC ID 05394 System hardening through configuration management Preventive
    Configure the "Turn off Windows Startup Sound" setting. CC ID 05395 System hardening through configuration management Preventive
    Configure the "Prevent IIS Installation" setting. CC ID 05398 System hardening through configuration management Preventive
    Configure the "Turn off Active Help" setting. CC ID 05399 System hardening through configuration management Preventive
    Configure the "Turn off Untrusted Content" setting. CC ID 05400 System hardening through configuration management Preventive
    Configure the "Turn off downloading of enclosures" setting. CC ID 05401 System hardening through configuration management Preventive
    Configure "Allow indexing of encrypted files" to organizational standards. CC ID 05402 System hardening through configuration management Preventive
    Configure the "Prevent indexing uncached Exchange folders" setting. CC ID 05403 System hardening through configuration management Preventive
    Configure the "Turn off Windows Calendar" setting. CC ID 05404 System hardening through configuration management Preventive
    Configure the "Turn off Windows Defender" setting. CC ID 05405 System hardening through configuration management Preventive
    Configure the "Turn off the communication features" setting. CC ID 05410 System hardening through configuration management Preventive
    Configure the "Turn off Windows Meeting Space" setting. CC ID 05413 System hardening through configuration management Preventive
    Configure the "Turn on Windows Meeting Space auditing" setting. CC ID 05414 System hardening through configuration management Preventive
    Configure the "Disable unpacking and installation of gadgets that are not digitally signed" setting. CC ID 05415 System hardening through configuration management Preventive
    Configure the "Override the More Gadgets Link" setting. CC ID 05416 System hardening through configuration management Preventive
    Configure the "Turn Off User Installed Windows Sidebar Gadgets" setting. CC ID 05417 System hardening through configuration management Preventive
    Configure the "Turn off Downloading of Game Information" setting. CC ID 05419 System hardening through configuration management Preventive
    Set the noexec_user_stack flag on the user stack properly. CC ID 05439 System hardening through configuration management Preventive
    Configure the "restrict guest access to system log" policy, as appropriate. CC ID 06047 System hardening through configuration management Preventive
    Configure the Trusted Platform Module (TPM) platform validation profile, as appropriate. CC ID 06056 System hardening through configuration management Preventive
    Enable or disable the standby states, as appropriate. CC ID 06060 System hardening through configuration management Preventive
    Configure the Trusted Platform Module startup options properly. CC ID 06061 System hardening through configuration management Preventive
    Configure the "Obtain Software Package Updates with apt-get" setting to organizational standards. CC ID 11375 System hardening through configuration management Preventive
    Configure the "display a banner before authentication" setting for "LightDM" to organizational standards. CC ID 11385 System hardening through configuration management Preventive
    Configure Logging settings in accordance with organizational standards. CC ID 07611 System hardening through configuration management Preventive
    Configure the log to capture the user's identification. CC ID 01334
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5). Article 12 3.(d)]
    System hardening through configuration management Preventive
    Configure the log to capture a date and time stamp. CC ID 01336
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: recording of the period of each use of the system (start date and time and end date and time of each use); Article 12 3.(a)]
    System hardening through configuration management Preventive
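The two logging controls above quote Article 12(3), which requires recording, at a minimum, the period of each use (start and end date and time) and the identification of the natural persons who verified the results. A structured log record covering those fields can be sketched as follows; the field names and JSON layout are illustrative assumptions, not mandated by the Regulation or this catalog.

```python
import json
from datetime import datetime, timezone

def make_usage_log_entry(session_id, verifier_ids, start, end):
    """Build one structured log record for a system-use session.

    Illustrative sketch only: the field names (session_id, verifiers,
    start_time, end_time) are assumptions. Article 12(3) requires
    recording the period of each use and the identification of the
    natural persons involved in verifying the results.
    """
    return {
        "session_id": session_id,
        "start_time": start.isoformat(),  # start date and time of use
        "end_time": end.isoformat(),      # end date and time of use
        "verifiers": list(verifier_ids),  # natural persons verifying results
    }

entry = make_usage_log_entry(
    "s-0001",
    ["operator-42"],
    datetime(2025, 1, 6, 9, 0, tzinfo=timezone.utc),
    datetime(2025, 1, 6, 9, 30, tzinfo=timezone.utc),
)
print(json.dumps(entry))
```

Timezone-aware timestamps are used so that the recorded period of use is unambiguous across deployments in different locales.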
    Configure all logs to capture auditable events or actionable events. CC ID 06332 System hardening through configuration management Preventive
    Ensure users can navigate content. CC ID 15163 Systems design, build, and implementation Preventive
    Create text content using language that is readable and is understandable. CC ID 15167 Systems design, build, and implementation Preventive
    Ensure user interface components are operable. CC ID 15162 Systems design, build, and implementation Preventive
    Implement mechanisms to review, confirm, and correct user submissions. CC ID 15160 Systems design, build, and implementation Preventive
    Allow users to reverse submissions. CC ID 15168 Systems design, build, and implementation Preventive
    Provide a mechanism to control audio. CC ID 15158 Systems design, build, and implementation Preventive
    Allow modification of style properties without loss of content or functionality. CC ID 15156 Systems design, build, and implementation Preventive
    Programmatically determine the name and role of user interface components. CC ID 15148 Systems design, build, and implementation Preventive
    Programmatically determine the language of content. CC ID 15137 Systems design, build, and implementation Preventive
    Provide a mechanism to dismiss content triggered by mouseover or keyboard focus. CC ID 15164 Systems design, build, and implementation Preventive
    Configure repeated navigational mechanisms to occur in the same order unless overridden by the user. CC ID 15166 Systems design, build, and implementation Preventive
    Refrain from activating a change of context when changing the setting of user interface components, as necessary. CC ID 15165 Systems design, build, and implementation Preventive
    Provide users a mechanism to remap keyboard shortcuts. CC ID 15133 Systems design, build, and implementation Preventive
    Provide captions for live audio content. CC ID 15120 Systems design, build, and implementation Preventive
    Programmatically determine the purpose of each data field that collects information from the user. CC ID 15114 Systems design, build, and implementation Preventive
    Provide labels or instructions when content requires user input. CC ID 15077 Systems design, build, and implementation Preventive
    Allow users to control auto-updating information, as necessary. CC ID 15159 Systems design, build, and implementation Preventive
    Use headings on all web pages and labels in all content that describes the topic or purpose. CC ID 15070 Systems design, build, and implementation Preventive
    Display website content triggered by mouseover or keyboard focus. CC ID 15152 Systems design, build, and implementation Preventive
    Ensure the purpose of links can be determined through the link text. CC ID 15157 Systems design, build, and implementation Preventive
    Use a unique title that describes the topic or purpose for each web page. CC ID 15069 Systems design, build, and implementation Preventive
    Allow the use of time limits, as necessary. CC ID 15155 Systems design, build, and implementation Preventive
    Refrain from activating a change of context in a user interface component. CC ID 15115 Systems design, build, and implementation Preventive
    Include passwords, Personal Identification Numbers, and card security codes in the personal data definition. CC ID 04699 Privacy protection for information and data Preventive
    Store payment card data in secure chips, if possible. CC ID 13065 Privacy protection for information and data Preventive
    Refrain from storing data elements containing sensitive authentication data after authorization is approved. CC ID 04758 Privacy protection for information and data Preventive
  • Data and Information Management
    125
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular
    IMPACT ZONE    CLASS
    Include the data source in the data governance and management practices. CC ID 17211
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; Article 10 2.(b)]
    Leadership and high level objectives Preventive
    Take into account the characteristics of the geographical, behavioral and functional setting for all datasets. CC ID 15046
    [Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used. Article 10 4.]
    Leadership and high level objectives Preventive
    Include the system components that generate audit records in the event logging procedures. CC ID 16426 Monitoring and measurement Preventive
    Overwrite the oldest records when audit logging fails. CC ID 14308 Monitoring and measurement Preventive
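The overwrite-on-failure control above describes a ring-buffer pattern: when the audit store is full and logging would otherwise fail, the oldest record is discarded first. A minimal sketch of that behaviour, using Python's standard library (the class name and capacity are illustrative, not taken from any standard):

```python
from collections import deque

class AuditLog:
    """Fixed-capacity audit log: when the store is full and logging
    would otherwise fail, the oldest record is overwritten first."""

    def __init__(self, capacity: int):
        # deque with maxlen silently discards the oldest item when full
        self._records = deque(maxlen=capacity)

    def append(self, record: str) -> None:
        self._records.append(record)

    def records(self) -> list[str]:
        return list(self._records)

log = AuditLog(capacity=3)
for event in ["login", "read", "update", "delete"]:
    log.append(event)
# the oldest record ("login") has been overwritten by the fourth event
```

Whether overwriting or halting is the right failure mode depends on the organization's logging policy; this sketch only shows the overwrite option named by the control.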
    Delete personal data upon data subject's withdrawal from testing. CC ID 17238
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Monitoring and measurement Preventive
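The Article 60(5) pattern quoted above has two parts: the subject's personal data is deleted immediately and permanently on withdrawal of consent, while activities already carried out are not undone. A hypothetical sketch of that distinction (all names and data structures here are illustrative assumptions):

```python
def withdraw_subject(subjects: dict, completed_results: list, subject_id: str) -> list:
    """On withdrawal of informed consent: delete the subject's personal
    data immediately and permanently; results of activities already
    carried out remain unaffected (Article 60(5) pattern)."""
    subjects.pop(subject_id, None)   # permanent deletion of personal data
    return completed_results         # prior activities are not undone

subjects = {"s1": {"name": "Alice"}, "s2": {"name": "Bob"}}
results = [{"subject": "s1", "outcome": "pass"}]
results_after = withdraw_subject(subjects, results, "s1")
```

In a real system the deletion would also have to propagate to backups and downstream stores; this sketch only illustrates the split the Article draws between personal data and completed activities.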
    Include data quality in the risk management strategies. CC ID 15308 Audits and risk management Preventive
    Include the date and time that access was reviewed in the system record. CC ID 16416 Technical security Preventive
    Share incident information with interested personnel and affected parties. CC ID 01212
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; Article 55 1.(c)
    Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.
    Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred. Article 73 1.
    For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulations (EU) 2017/745 and (EU) 2017/746, the notification of serious incidents shall be limited to those referred to in Article 3, point (49)(c) of this Regulation, and shall be made to the national competent authority chosen for that purpose by the Member States where the incident occurred. Article 73 10.]
    Operational management Corrective
    Redact restricted data before sharing incident information. CC ID 16994 Operational management Preventive
    Destroy investigative materials, as necessary. CC ID 17082 Operational management Preventive
    Provide users with alternative methods to inputting data in online forms. CC ID 16951 Operational management Preventive
    Establish, implement, and maintain a registration database. CC ID 15048
    [The data listed in Sections A and B of Annex VIII shall be entered into the EU database by the provider or, where applicable, by the authorised representative. Article 71 2.
    The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or who acts on behalf of, a public authority, agency or body, in accordance with Article 49(3) and (4). Article 71 3.]
    Operational management Preventive
    Implement access restrictions for information in the registration database. CC ID 17235
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Operational management Preventive
    Include registration numbers in the registration database. CC ID 17272 Operational management Preventive
    Include electronic signatures in the registration database. CC ID 17281 Operational management Preventive
    Include other registrations in the registration database. CC ID 17274 Operational management Preventive
    Include the owners and shareholders in the registration database. CC ID 17273 Operational management Preventive
    Publish the registration information in the registration database in an official language. CC ID 17280 Operational management Preventive
    Maintain non-public information in a protected area in the registration database. CC ID 17237
    [For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable, referred to in: Article 49 4.]
    Operational management Preventive
    Publish the IP addresses being used by each external customer in the registration database. CC ID 16403 Operational management Preventive
    Update registration information upon changes. CC ID 17275 Operational management Preventive
    Maintain the accuracy of registry information published in registration databases. CC ID 16402
    [For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct. Article 22 3.(e)]
    Operational management Preventive
    Maintain ease of use for information in the registration database. CC ID 17239
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Operational management Preventive
    Include all required information in the registration database. CC ID 15106 Operational management Preventive
    Implement measures to enable personnel assigned to human oversight to interpret output correctly. CC ID 15089
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available; Article 14 4.(c)]
    Operational management Preventive
    Ensure data sets have the appropriate characteristics. CC ID 15000
    [{training data} {validation data} {testing data} {be representative} {be complete} {be error free} Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof. Article 10 3.]
    Records management Detective
    Ensure data sets are complete, are accurate, and are relevant. CC ID 14999
    [{training data} {validation data} {testing data} {be representative} {be complete} {be error free} Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof. Article 10 3.]
    Records management Detective
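The Article 10(3) requirements quoted above (relevant, sufficiently representative, and to the best extent possible free of errors and complete) can be partially screened programmatically. A minimal completeness check, where the field names and the missing-value threshold are illustrative assumptions rather than anything the Regulation prescribes:

```python
def check_data_set(rows: list[dict], required_fields: list[str],
                   max_missing_ratio: float = 0.01) -> dict:
    """Rough completeness screening for a training, validation, or
    testing data set (threshold is illustrative, not normative)."""
    missing = 0
    total = len(rows) * len(required_fields)
    for row in rows:
        for field in required_fields:
            if row.get(field) in (None, ""):
                missing += 1
    ratio = missing / total if total else 1.0
    return {"missing_ratio": ratio, "complete": ratio <= max_missing_ratio}

rows = [{"age": 41, "income": 52000}, {"age": None, "income": 48000}]
report = check_data_set(rows, ["age", "income"])
# one of four values is missing, so the set fails the completeness screen
```

Representativeness and statistical appropriateness need domain-specific analysis (e.g. comparing group distributions against the intended population); a completeness ratio is only the simplest of the checks the Article implies.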
    Allow data subjects to opt out and refrain from granting an authorization of consent to use personal data. CC ID 00391 Privacy protection for information and data Preventive
    Establish, implement, and maintain an opt-out method in accordance with organizational standards. CC ID 16526 Privacy protection for information and data Preventive
    Refrain from requiring consent to collect, use, or disclose personal data beyond specified, legitimate reasons in order to receive products and services. CC ID 13605 Privacy protection for information and data Preventive
    Refrain from obtaining consent through deception. CC ID 13556 Privacy protection for information and data Preventive
    Give individuals the ability to change the uses of their personal data. CC ID 00469 Privacy protection for information and data Preventive
    Notify data subjects of the implications of withdrawing consent. CC ID 13551 Privacy protection for information and data Preventive
    Dispose of media and restricted data in a timely manner. CC ID 00125
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first; Article 10 5.(e)
    If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted. Article 26 10. ¶ 2]
    Privacy protection for information and data Preventive
    Process personal data pertaining to a patient's health in order to treat those patients. CC ID 00200 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for a covered entity's own use. CC ID 00211 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for a healthcare provider's treatment activities by a covered entity. CC ID 00212 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for payment activities between covered entities or healthcare providers. CC ID 00213 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for Treatment, Payment, and Health Care Operations activities when both covered entities have a relationship with the data subject. CC ID 00214 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for Treatment, Payment, and Health Care Operations activities between a covered entity and a participating healthcare provider when the information is collected from the data subject and a third party. CC ID 00215 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information in accordance with agreed upon restrictions. CC ID 06249 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information in accordance with the privacy notice. CC ID 06250 Privacy protection for information and data Preventive
    Disclose permitted Individually Identifiable Health Information for facility directories. CC ID 06251 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for cadaveric organ donation purposes, eye donation purposes, or tissue donation purposes. CC ID 06252 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for medical suitability determinations. CC ID 06253 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for armed forces personnel appropriately. CC ID 06254 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information in order to provide public benefits by government agencies. CC ID 06255 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for fundraising. CC ID 06256 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information when the data subject cannot physically or legally provide consent and the disclosing organization is a healthcare provider. CC ID 00202 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information to provide appropriate treatment to the data subject when the disclosing organization is a healthcare provider. CC ID 00203 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information when it is not contrary to the data subject's wish prior to becoming unable to provide consent and the disclosing organization is a healthcare provider. CC ID 00204 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information that is reasonable or necessary for the disclosure purpose when the disclosing organization is a healthcare provider. CC ID 00205 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information consistent with the law when the disclosing organization is a healthcare provider. CC ID 00206 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information in order to carry out treatment when the disclosing organization is a healthcare provider. CC ID 00207 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information in order to carry out treatment when the data subject has provided consent and the disclosing organization is a healthcare provider. CC ID 00208 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information in order to carry out treatment when the data subject's guardian or representative has provided consent and the disclosing organization is a healthcare provider. CC ID 00209 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information when the disclosing organization is a healthcare provider that supports public health and safety activities. CC ID 06248 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information in order to report abuse or neglect when the disclosing organization is a healthcare provider. CC ID 06819 Privacy protection for information and data Preventive
    Obtain explicit consent for authorization to release Individually Identifiable Health Information. CC ID 00217 Privacy protection for information and data Preventive
    Obtain explicit consent for authorization to release psychotherapy notes. CC ID 00218 Privacy protection for information and data Preventive
    Refrain from using Individually Identifiable Health Information to determine eligibility or continued eligibility for credit. CC ID 00219 Privacy protection for information and data Preventive
    Process personal data after the data subject has granted explicit consent. CC ID 00180 Privacy protection for information and data Preventive
    Process personal data in order to perform a legal obligation or exercise a legal right. CC ID 00182 Privacy protection for information and data Preventive
    Process personal data relating to criminal offenses when required by law. CC ID 00237 Privacy protection for information and data Preventive
    Process personal data in order to prevent personal injury or damage to the data subject's health. CC ID 00183 Privacy protection for information and data Preventive
    Process personal data in order to prevent personal injury or damage to a third party's health. CC ID 00184 Privacy protection for information and data Preventive
    Process personal data for statistical purposes or scientific purposes. CC ID 00256 Privacy protection for information and data Preventive
    Process personal data during legitimate activities with safeguards for the data subject's legal rights. CC ID 00185 Privacy protection for information and data Preventive
    Process traffic data in a controlled manner. CC ID 00130 Privacy protection for information and data Preventive
    Process personal data for health insurance, social insurance, state social benefits, social welfare, or child protection. CC ID 00186 Privacy protection for information and data Preventive
    Process personal data when it is publicly accessible. CC ID 00187 Privacy protection for information and data Preventive
    Process personal data for direct marketing and other personalized mail programs. CC ID 00188 Privacy protection for information and data Preventive
    Process personal data for the purposes of employment. CC ID 16527 Privacy protection for information and data Preventive
    Process personal data for justice administration, lawsuits, judicial decisions, and investigations. CC ID 00189 Privacy protection for information and data Preventive
    Process personal data for debt collection or benefit payments. CC ID 00190 Privacy protection for information and data Preventive
    Process personal data in order to advance the public interest. CC ID 00191 Privacy protection for information and data Preventive
    Process personal data for surveys, archives, or scientific research. CC ID 00192 Privacy protection for information and data Preventive
    Process personal data absent consent for journalistic purposes, artistic purposes, or literary purposes. CC ID 00193 Privacy protection for information and data Preventive
    Process personal data for academic purposes or religious purposes. CC ID 00194 Privacy protection for information and data Preventive
    Process personal data when it is used by a public authority for National Security policy or criminal policy. CC ID 00195 Privacy protection for information and data Preventive
    Refrain from storing data in newly created files or registers which directly or indirectly reveals the restricted data. CC ID 00196 Privacy protection for information and data Preventive
    Follow legal obligations while processing personal data. CC ID 04794
    [{applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Privacy protection for information and data Preventive
    Start personal data processing only after the needed notifications are submitted. CC ID 04791 Privacy protection for information and data Preventive
    Limit the redisclosure and reuse of restricted data. CC ID 00168
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation; Article 10 5.(b)]
    Privacy protection for information and data Preventive
    Refrain from redisclosing or reusing restricted data. CC ID 00169 Privacy protection for information and data Preventive
    Redisclose restricted data when the data subject consents. CC ID 00171 Privacy protection for information and data Preventive
    Redisclose restricted data when it is for criminal law enforcement. CC ID 00172 Privacy protection for information and data Preventive
    Redisclose restricted data in order to protect public revenue. CC ID 00173 Privacy protection for information and data Preventive
    Redisclose restricted data in order to assist a Telecommunications Ombudsman. CC ID 00174 Privacy protection for information and data Preventive
    Redisclose restricted data in order to prevent a life-threatening emergency. CC ID 00175 Privacy protection for information and data Preventive
    Redisclose restricted data when it deals with installing, maintaining, operating, or providing access to a Public Telecommunications Network or a telecommunication facility. CC ID 00176 Privacy protection for information and data Preventive
    Redisclose restricted data in order to preserve human life at sea. CC ID 00177 Privacy protection for information and data Preventive
    Establish, implement, and maintain record structures to support information confidentiality. CC ID 00360
    [Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78. Article 45 4.]
    Privacy protection for information and data Preventive
    Automate the disposition process for records that contain "do not store" data or "delete after transaction process" data. CC ID 06083 Privacy protection for information and data Preventive
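The automated-disposition control above distinguishes two retention tags: "do not store" data, which is never kept, and "delete after transaction process" data, which is removed once the transaction completes. A hypothetical sketch of that filter (the tag names and record shape are illustrative assumptions):

```python
# Illustrative retention tags; real tags would come from the records schema.
DO_NOT_STORE = "do_not_store"
DELETE_AFTER_TRANSACTION = "delete_after_transaction"

def disposition(records: list[dict], transaction_complete: bool) -> list[dict]:
    """Drop records tagged 'do not store' immediately, and records tagged
    'delete after transaction process' once the transaction has completed."""
    kept = []
    for rec in records:
        tag = rec.get("retention")
        if tag == DO_NOT_STORE:
            continue  # never persisted
        if tag == DELETE_AFTER_TRANSACTION and transaction_complete:
            continue  # purged at end of transaction
        kept.append(rec)
    return kept

records = [
    {"id": 1, "retention": "do_not_store"},
    {"id": 2, "retention": "delete_after_transaction"},
    {"id": 3, "retention": "standard"},
]
```

In practice this filter would run inside the transaction pipeline itself, so tagged data never reaches durable storage rather than being deleted afterwards.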
    Obtain consent from an individual prior to transferring personal data. CC ID 06948 Privacy protection for information and data Preventive
    Provide an adequate data protection level by the transferee prior to transferring personal data to another country. CC ID 00314 Privacy protection for information and data Preventive
    Refrain from restricting personal data transfers to member states of the European Union. CC ID 00312 Privacy protection for information and data Preventive
    Prohibit personal data transfers when security is inadequate. CC ID 00345 Privacy protection for information and data Preventive
    Meet the use of limitation exceptions in order to transfer personal data. CC ID 00346 Privacy protection for information and data Preventive
    Refrain from transferring past the first transfer. CC ID 00347 Privacy protection for information and data Preventive
    Allow the data subject the right to object to the personal data transfer. CC ID 00349 Privacy protection for information and data Preventive
    Include publicly available information as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00316 Privacy protection for information and data Preventive
    Include transfer agreements between data controllers and third parties when it is for the data subject's interest as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00317 Privacy protection for information and data Preventive
    Include personal data for the health field and for treatment as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00318 Privacy protection for information and data Preventive
    Include personal data for journalistic purposes or private purposes as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00319 Privacy protection for information and data Preventive
    Include personal data for important public interest as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00320 Privacy protection for information and data Preventive
    Include consent by the data subject as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00321 Privacy protection for information and data Preventive
    Include personal data used for a contract as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00322 Privacy protection for information and data Preventive
    Include personal data for protecting the data subject or the data subject's interests, such as saving his/her life or providing healthcare as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00323 Privacy protection for information and data Preventive
    Include personal data that is necessary to fulfill international law obligations as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00324 Privacy protection for information and data Preventive
    Include personal data used for legal investigations as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00325 Privacy protection for information and data Preventive
    Include personal data that is authorized by a legislative act as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00326 Privacy protection for information and data Preventive
    Require transferees to implement adequate data protection levels for the personal data. CC ID 00335 Privacy protection for information and data Preventive
    Include personal data that is publicly available information as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00337 Privacy protection for information and data Preventive
    Include personal data that is used for journalistic purposes or private purposes as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00338 Privacy protection for information and data Preventive
    Include personal data that is used for important public interest as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00339 Privacy protection for information and data Preventive
    Include consent by the data subject as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00340 Privacy protection for information and data Preventive
    Include personal data that is used for a contract as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00341 Privacy protection for information and data Preventive
    Include personal data that is used for protecting the data subject or the data subject's interests, such as providing healthcare or saving his/her life as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00342 Privacy protection for information and data Preventive
    Include personal data that is used for a legal investigation as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00343 Privacy protection for information and data Preventive
    Include personal data that is authorized by a legislative act as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00344 Privacy protection for information and data Preventive
    Obtain consent prior to storing cookies on an individual's browser. CC ID 06950 Privacy protection for information and data Preventive
    Obtain consent prior to downloading software to an individual's computer. CC ID 06951 Privacy protection for information and data Preventive
    Obtain consent prior to tracking Internet traffic patterns or browsing history of an individual. CC ID 06961 Privacy protection for information and data Preventive
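The three consent controls above share one mechanism: check a recorded consent decision before performing the regulated action. A minimal sketch, assuming an in-memory store; the names `ConsentStore` and `set_cookie_if_consented` are illustrative, not from the regulation or any specific library.

```python
class ConsentStore:
    """In-memory record of per-user consent by purpose (e.g. 'cookies', 'tracking')."""
    def __init__(self):
        self._consents = {}  # user_id -> set of consented purposes

    def grant(self, user_id, purpose):
        self._consents.setdefault(user_id, set()).add(purpose)

    def has_consent(self, user_id, purpose):
        return purpose in self._consents.get(user_id, set())


def set_cookie_if_consented(store, user_id, headers):
    """Append a Set-Cookie header only when the user has consented to cookies."""
    if store.has_consent(user_id, "cookies"):
        headers.append(("Set-Cookie", "session=abc123; HttpOnly"))
        return True
    return False


store = ConsentStore()
headers = []
assert not set_cookie_if_consented(store, "u1", headers)  # no consent recorded yet
store.grant("u1", "cookies")
assert set_cookie_if_consented(store, "u1", headers)      # consent present, cookie set
```

The same gate would sit in front of software downloads (CC ID 06951) and traffic tracking (CC ID 06961), each under its own purpose string.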
    Develop remedies and sanctions for privacy policy violations. CC ID 00474 Privacy protection for information and data Preventive
  • Establish Roles
    5
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE CLASS
    Assign ownership of the vulnerability management program to the appropriate role. CC ID 15723 Monitoring and measurement Preventive
    Establish, implement, and maintain high level operational roles and responsibilities. CC ID 00806 Human Resources management Preventive
    Classify assets according to the Asset Classification Policy. CC ID 07186
    [A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; Article 51 1.(a)
    A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII. Article 51 1.(b)]
    Operational management Preventive
    Require data controllers to be accountable for their actions. CC ID 00470 Privacy protection for information and data Preventive
    Process restricted data lawfully and carefully. CC ID 00086
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data; Article 10 5.(a)
    To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation; Article 10 5.(b)]
    Privacy protection for information and data Preventive
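Article 10 5.(b) names pseudonymisation as one state-of-the-art privacy-preserving measure for processing special categories of personal data. A minimal sketch using keyed hashing (HMAC), one common pseudonymisation technique: identifiers are replaced with stable pseudonyms, and the key is held separately from the data so the mapping cannot be reversed without it. The key value and field names here are illustrative assumptions.

```python
import hmac
import hashlib

# Illustrative key; in practice it must be stored apart from the data set itself.
SECRET_KEY = b"keep-this-key-separate-from-the-data"

def pseudonymise(identifier: str) -> str:
    """Return a stable, non-reversible pseudonym for a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "diagnosis": "..."}
record["name"] = pseudonymise(record["name"])  # same input always yields same pseudonym

assert pseudonymise("Jane Doe") == record["name"]
assert pseudonymise("Jane Doe") != pseudonymise("John Doe")
```

Because the pseudonym is deterministic, records for the same data subject remain linkable for bias detection and correction while the direct identifier is removed from the working data set.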
  • Establish/Maintain Documentation
    375
    Define the thresholds for escalation in the internal reporting program. CC ID 14332 Leadership and high level objectives Preventive
    Define the thresholds for reporting in the internal reporting program. CC ID 14331 Leadership and high level objectives Preventive
    Include the reasons for objections to public disclosure in confidential treatment applications. CC ID 16594 Leadership and high level objectives Preventive
    Include contact information for the interested personnel and affected parties the report was filed with in the confidential treatment application. CC ID 16595 Leadership and high level objectives Preventive
    Include the information that was omitted in the confidential treatment application. CC ID 16593 Leadership and high level objectives Preventive
    Establish, implement, and maintain data governance and management practices. CC ID 14998
    [{training data} {validation data} {testing data} Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: Article 10 2.]
    Leadership and high level objectives Preventive
    Address shortcomings of the data sets in the data governance and management practices. CC ID 15087
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. Article 10 2.(h)]
    Leadership and high level objectives Preventive
    Include any shortcomings of the data sets in the data governance and management practices. CC ID 15086
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. Article 10 2.(h)]
    Leadership and high level objectives Preventive
    Include bias for data sets in the data governance and management practices. CC ID 15085
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; Article 10 2.(f)
    Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: appropriate measures to detect, prevent and mitigate possible biases identified according to point (f); Article 10 2.(g)]
    Leadership and high level objectives Preventive
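One concrete form the bias examination in Article 10 2.(f) can take is comparing outcome rates across groups in a labelled data set. A minimal sketch under assumed choices: the metric (demographic parity gap) and the data layout are illustrative, not prescribed by the Regulation.

```python
def positive_rate(rows, group):
    """Fraction of records in a group with a positive outcome."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["outcome"] for r in members) / len(members)

def parity_gap(rows, group_a, group_b):
    """Absolute difference in positive-outcome rate between two groups."""
    return abs(positive_rate(rows, group_a) - positive_rate(rows, group_b))

data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
gap = parity_gap(data, "A", "B")  # 0.75 vs 0.25
assert gap == 0.5
```

A gap above a pre-agreed threshold would then trigger the mitigation measures required by Article 10 2.(g).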
    Include a data strategy in the data governance and management practices. CC ID 15304 Leadership and high level objectives Preventive
    Include data monitoring in the data governance and management practices. CC ID 15303 Leadership and high level objectives Preventive
    Include an assessment of the data sets in the data governance and management practices. CC ID 15084
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: an assessment of the availability, quantity and suitability of the data sets that are needed; Article 10 2.(e)]
    Leadership and high level objectives Preventive
    Include assumptions for the formulation of data sets in the data governance and management practices. CC ID 15083
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent; Article 10 2.(d)]
    Leadership and high level objectives Preventive
    Include data collection for data sets in the data governance and management practices. CC ID 15082
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; Article 10 2.(b)]
    Leadership and high level objectives Preventive
    Include data preparations for data sets in the data governance and management practices. CC ID 15081
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation; Article 10 2.(c)]
    Leadership and high level objectives Preventive
    Include design choices for data sets in the data governance and management practices. CC ID 15080
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the relevant design choices; Article 10 2.(a)]
    Leadership and high level objectives Preventive
    Establish, implement, and maintain a data classification scheme. CC ID 11628 Leadership and high level objectives Preventive
    Establish, implement, and maintain a Quality Management framework. CC ID 07196 Leadership and high level objectives Preventive
    Establish, implement, and maintain a Quality Management policy. CC ID 13694
    [{put in place} Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: Article 17 1.]
    Leadership and high level objectives Preventive
    Include a commitment to satisfy applicable requirements in the Quality Management policy. CC ID 13700
    [Quality management system shall include at least the following aspects: a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system; Article 17 1.(a)]
    Leadership and high level objectives Preventive
    Tailor the Quality Management policy to support the organization's strategic direction. CC ID 13699
    [{quality management system} The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation. Article 17 2.]
    Leadership and high level objectives Preventive
    Include a commitment to continual improvement of the Quality Management system in the Quality Management policy. CC ID 13698 Leadership and high level objectives Preventive
    Document the measurements used by Quality Assurance and Quality Control testing. CC ID 07200
    [Quality management system shall include at least the following aspects: techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system; Article 17 1.(c)]
    Leadership and high level objectives Preventive
    Establish, implement, and maintain a Quality Management program. CC ID 07201
    [{put in place} Providers of high-risk AI systems shall: have a quality management system in place which complies with Article 17; Article 16 ¶ 1 (c)
    {put in place} Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: Article 17 1.]
    Leadership and high level objectives Preventive
    Include quality objectives in the Quality Management program. CC ID 13693 Leadership and high level objectives Preventive
    Include records management in the quality management system. CC ID 15055
    [Quality management system shall include at least the following aspects: systems and procedures for record-keeping of all relevant documentation and information; Article 17 1.(k)]
    Leadership and high level objectives Preventive
    Include risk management in the quality management system. CC ID 15054
    [Quality management system shall include at least the following aspects: the risk management system referred to in Article 9; Article 17 1.(g)]
    Leadership and high level objectives Preventive
    Include data management procedures in the quality management system. CC ID 15052
    [Quality management system shall include at least the following aspects: systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems; Article 17 1.(f)]
    Leadership and high level objectives Preventive
    Include a post-market monitoring system in the quality management system. CC ID 15027
    [Quality management system shall include at least the following aspects: the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72; Article 17 1.(h)]
    Leadership and high level objectives Preventive
    Include operational roles and responsibilities in the quality management system. CC ID 15028
    [Quality management system shall include at least the following aspects: an accountability framework setting out the responsibilities of the management and other staff with regard to all the aspects listed in this paragraph. Article 17 1.(m)]
    Leadership and high level objectives Preventive
    Include resource management in the quality management system. CC ID 15026
    [Quality management system shall include at least the following aspects: resource management, including security-of-supply related measures; Article 17 1.(l)]
    Leadership and high level objectives Preventive
    Include communication protocols in the quality management system. CC ID 15025
    [Quality management system shall include at least the following aspects: the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties; Article 17 1.(j)]
    Leadership and high level objectives Preventive
    Include incident reporting procedures in the quality management system. CC ID 15023
    [Quality management system shall include at least the following aspects: procedures related to the reporting of a serious incident in accordance with Article 73; Article 17 1.(i)]
    Leadership and high level objectives Preventive
    Include technical specifications in the quality management system. CC ID 15021
    [Quality management system shall include at least the following aspects: technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be used to ensure that the high-risk AI system complies with those requirements; Article 17 1.(e)]
    Leadership and high level objectives Preventive
    Include system testing standards in the Quality Management program. CC ID 01018
    [Quality management system shall include at least the following aspects: techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system; Article 17 1.(b)
    {test procedure} Quality management system shall include at least the following aspects: examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out; Article 17 1.(d)]
    Leadership and high level objectives Preventive
    Include explanations, compensating controls, or risk acceptance in the compliance exceptions document. CC ID 01631
    [Where providers of high-risk AI systems or general-purpose AI models do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto. Article 41 5.]
    Leadership and high level objectives Preventive
    Document and evaluate the decision outcomes from the decision-making process. CC ID 06918
    [Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. Article 44 3. ¶ 1
    Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision. Article 52 5.]
    Leadership and high level objectives Preventive
    Establish, implement, and maintain an audit and accountability policy. CC ID 14035 Monitoring and measurement Preventive
    Include compliance requirements in the audit and accountability policy. CC ID 14103 Monitoring and measurement Preventive
    Include coordination amongst entities in the audit and accountability policy. CC ID 14102 Monitoring and measurement Preventive
    Include the purpose in the audit and accountability policy. CC ID 14100 Monitoring and measurement Preventive
    Include roles and responsibilities in the audit and accountability policy. CC ID 14098 Monitoring and measurement Preventive
    Include management commitment in the audit and accountability policy. CC ID 14097 Monitoring and measurement Preventive
    Include the scope in the audit and accountability policy. CC ID 14096 Monitoring and measurement Preventive
    Establish, implement, and maintain audit and accountability procedures. CC ID 14057 Monitoring and measurement Preventive
    Establish, implement, and maintain an intrusion detection and prevention program. CC ID 15211 Monitoring and measurement Preventive
    Establish, implement, and maintain an intrusion detection and prevention policy. CC ID 15169 Monitoring and measurement Preventive
    Establish, implement, and maintain an event logging policy. CC ID 15217 Monitoring and measurement Preventive
    Include identity information of suspects in the suspicious activity report. CC ID 16648 Monitoring and measurement Preventive
    Include error details, identifying the root causes, and mitigation actions in the testing procedures. CC ID 11827
    [Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Monitoring and measurement Preventive
    Establish, implement, and maintain a security assessment and authorization policy. CC ID 14031 Monitoring and measurement Preventive
    Include coordination amongst entities in the security assessment and authorization policy. CC ID 14222 Monitoring and measurement Preventive
    Include the scope in the security assessment and authorization policy. CC ID 14220 Monitoring and measurement Preventive
    Include the purpose in the security assessment and authorization policy. CC ID 14219 Monitoring and measurement Preventive
    Include management commitment in the security assessment and authorization policy. CC ID 14189 Monitoring and measurement Preventive
    Include compliance requirements in the security assessment and authorization policy. CC ID 14183 Monitoring and measurement Preventive
    Include roles and responsibilities in the security assessment and authorization policy. CC ID 14179 Monitoring and measurement Preventive
    Establish, implement, and maintain security assessment and authorization procedures. CC ID 14056 Monitoring and measurement Preventive
    Document improvement actions based on test results and exercises. CC ID 16840 Monitoring and measurement Preventive
    Define the test requirements for each testing program. CC ID 13177
    [The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. Article 9 8.]
    Monitoring and measurement Preventive
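Article 9 8. requires testing against metrics and probabilistic thresholds defined before the test runs. A minimal sketch of such a gate; the specific metrics, threshold values, and toy predictions are illustrative assumptions.

```python
# Thresholds fixed up front, before any test execution, per Article 9 8.
THRESHOLDS = {"accuracy": 0.95, "false_positive_rate": 0.02}

def evaluate(predictions, labels):
    """Compute the pre-defined metrics for a set of binary predictions."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(predictions, labels))
    negatives = sum(l == 0 for l in labels)
    return {"accuracy": correct / len(labels),
            "false_positive_rate": fp / negatives if negatives else 0.0}

def passes(metrics):
    """True only if every metric meets its pre-defined threshold."""
    return (metrics["accuracy"] >= THRESHOLDS["accuracy"]
            and metrics["false_positive_rate"] <= THRESHOLDS["false_positive_rate"])

labels = [1, 0] * 50          # 100 samples, 50 negatives
predictions = list(labels)
predictions[0] = 0            # one false negative
m = evaluate(predictions, labels)
assert m["accuracy"] == 0.99 and passes(m)
```

Failing the gate would block placing the system on the market until the metrics are met.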
    Include mechanisms for emergency stops in the testing program. CC ID 14398 Monitoring and measurement Preventive
    Establish, implement, and maintain conformity assessment procedures. CC ID 15032
    [For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on: the internal control referred to in Annex VI; or Article 43 1.(a)
    For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on: the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII. Article 43 1.(b)
    In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where: Article 43 1. ¶ 1
    For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. Article 43 2.]
    Monitoring and measurement Preventive
    Create technical documentation assessment certificates in an official language. CC ID 15110
    [Certificates issued by notified bodies in accordance with Annex VII shall be drawn-up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established. Article 44 1.]
    Monitoring and measurement Preventive
    Define the test frequency for each testing program. CC ID 13176
    [{testing in real-world conditions} Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world conditions at any time before the placing on the market or the putting into service of the AI system on their own or in partnership with one or more deployers or prospective deployers. Article 60 2.]
    Monitoring and measurement Preventive
    Establish, implement, and maintain a stress test program for identification cards or badges. CC ID 15424 Monitoring and measurement Preventive
    Establish, implement, and maintain a business line testing strategy. CC ID 13245 Monitoring and measurement Preventive
    Include facilities in the business line testing strategy. CC ID 13253 Monitoring and measurement Preventive
    Include electrical systems in the business line testing strategy. CC ID 13251 Monitoring and measurement Preventive
    Include mechanical systems in the business line testing strategy. CC ID 13250 Monitoring and measurement Preventive
    Include Heating Ventilation and Air Conditioning systems in the business line testing strategy. CC ID 13248 Monitoring and measurement Preventive
    Include emergency power supplies in the business line testing strategy. CC ID 13247 Monitoring and measurement Preventive
    Include environmental controls in the business line testing strategy. CC ID 13246 Monitoring and measurement Preventive
    Establish, implement, and maintain a vulnerability management program. CC ID 15721 Monitoring and measurement Preventive
    Include the pass or fail test status in the test results. CC ID 17106 Monitoring and measurement Preventive
    Include time information in the test results. CC ID 17105 Monitoring and measurement Preventive
    Include a description of the system tested in the test results. CC ID 17104 Monitoring and measurement Preventive
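The three test-result controls above (CC IDs 17106, 17105, 17104) together describe a minimal result record. A sketch of one possible shape; the field names and the example system description are assumptions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TestResult:
    system_description: str  # what was tested (CC ID 17104)
    executed_at: str         # ISO 8601 time information (CC ID 17105)
    passed: bool             # pass or fail status (CC ID 17106)

def record_result(system_description: str, passed: bool) -> TestResult:
    return TestResult(system_description=system_description,
                      executed_at=datetime.now(timezone.utc).isoformat(),
                      passed=passed)

r = record_result("credit-scoring model v2 (high-risk AI system)", passed=True)
assert r.passed and "credit-scoring" in asdict(r)["system_description"]
```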
    Establish, implement, and maintain a compliance monitoring policy. CC ID 00671
    [The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities. Article 72 2.]
    Monitoring and measurement Preventive
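Article 72 2. describes post-market monitoring as active, systematic collection and analysis of performance data supplied by deployers. A minimal sketch of such a collector; the accuracy metric, the baseline value, and the class name are illustrative assumptions.

```python
from statistics import mean

class PostMarketMonitor:
    """Collect deployer performance reports and check continuous compliance."""
    def __init__(self, baseline_accuracy: float):
        self.baseline = baseline_accuracy
        self.reports = []  # systematically collected deployer reports

    def collect(self, deployer: str, accuracy: float):
        self.reports.append({"deployer": deployer, "accuracy": accuracy})

    def analyse(self):
        """Return (mean accuracy, still_compliant) over all collected reports."""
        avg = mean(r["accuracy"] for r in self.reports)
        return avg, avg >= self.baseline

mon = PostMarketMonitor(baseline_accuracy=0.90)
mon.collect("hospital-a", 0.94)
mon.collect("bank-b", 0.92)
avg, ok = mon.analyse()
assert ok and abs(avg - 0.93) < 1e-9
```

A `still_compliant` result of `False` would feed the corrective-action and notification duties referenced elsewhere in this catalog (e.g. Article 20 2.).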
    Establish, implement, and maintain a metrics policy. CC ID 01654 Monitoring and measurement Preventive
    Establish, implement, and maintain an approach for compliance monitoring. CC ID 01653 Monitoring and measurement Preventive
    Identify and document instances of non-compliance with the compliance framework. CC ID 06499
    [Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.]
    Monitoring and measurement Preventive
    Establish, implement, and maintain disciplinary action notices. CC ID 16577 Monitoring and measurement Preventive
    Include a copy of the order in the disciplinary action notice. CC ID 16606 Monitoring and measurement Preventive
    Include the sanctions imposed in the disciplinary action notice. CC ID 16599 Monitoring and measurement Preventive
    Include the effective date of the sanctions in the disciplinary action notice. CC ID 16589 Monitoring and measurement Preventive
    Include the requirements that were violated in the disciplinary action notice. CC ID 16588 Monitoring and measurement Preventive
    Include responses to charges from interested personnel and affected parties in the disciplinary action notice. CC ID 16587 Monitoring and measurement Preventive
    Include the reasons for imposing sanctions in the disciplinary action notice. CC ID 16586 Monitoring and measurement Preventive
    Include required information in the disciplinary action notice. CC ID 16584 Monitoring and measurement Preventive
    Include a justification for actions taken in the disciplinary action notice. CC ID 16583 Monitoring and measurement Preventive
    Include a statement on the conclusions of the investigation in the disciplinary action notice. CC ID 16582 Monitoring and measurement Preventive
    Include the investigation results in the disciplinary action notice. CC ID 16581 Monitoring and measurement Preventive
    Include a description of the causes of the actions taken in the disciplinary action notice. CC ID 16580 Monitoring and measurement Preventive
    Include the name of the person responsible for the charges in the disciplinary action notice. CC ID 16579 Monitoring and measurement Preventive
    Include contact information in the disciplinary action notice. CC ID 16578 Monitoring and measurement Preventive
    Establish, implement, and maintain occupational health and safety management metrics program. CC ID 15915 Monitoring and measurement Preventive
    Establish, implement, and maintain a privacy metrics program. CC ID 15494 Monitoring and measurement Preventive
    Establish, implement, and maintain an Electronic Health Records measurement metrics program. CC ID 06221 Monitoring and measurement Preventive
    Include transfer procedures in the log management program. CC ID 17077 Monitoring and measurement Preventive
    Establish, implement, and maintain a cross-organizational audit sharing agreement. CC ID 10595 Monitoring and measurement Preventive
    Include a commitment to cooperate with applicable statutory bodies in the Statement of Compliance. CC ID 12370
    [Importers shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the risks posed by it. Article 23 7.
    Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation. Article 25 2.
    Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation. Article 26 12.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to the high-risk AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system; Article 22 3.(d)
    Distributors shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system made available on the market by the distributors, in particular to reduce or mitigate the risk posed by it. Article 24 6.
    Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation. Article 53 3.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to the general-purpose AI model, including when the model is integrated into AI systems placed on the market or put into service in the Union. Article 54 3.(d)
    The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action. Article 73 6. ¶ 2]
    Audits and risk management Preventive
    Establish, implement, and maintain a risk management program. CC ID 12051
    [A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. Article 9 1.]
    Audits and risk management Preventive
    Include the scope of risk management activities in the risk management program. CC ID 13658 Audits and risk management Preventive
    Include managing mobile risks in the risk management program. CC ID 13535 Audits and risk management Preventive
    Establish, implement, and maintain a risk management policy. CC ID 17192 Audits and risk management Preventive
    Establish, implement, and maintain risk management strategies. CC ID 13209
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a). Article 9 2.(d)]
    Audits and risk management Preventive
    Include off-site storage of supplies in the risk management strategies. CC ID 13221 Audits and risk management Preventive
    Include minimizing service interruptions in the risk management strategies. CC ID 13215 Audits and risk management Preventive
    Include off-site storage in the risk mitigation strategies. CC ID 13213 Audits and risk management Preventive
    Establish, implement, and maintain the risk assessment framework. CC ID 00685 Audits and risk management Preventive
    Establish, implement, and maintain a risk assessment program. CC ID 00687 Audits and risk management Preventive
    Establish, implement, and maintain insurance requirements. CC ID 16562 Audits and risk management Preventive
    Address cybersecurity risks in the risk assessment program. CC ID 13193 Audits and risk management Preventive
    Include the categories of data used by the system in the fundamental rights impact assessment. CC ID 17248 Audits and risk management Preventive
    Include metrics in the fundamental rights impact assessment. CC ID 17249 Audits and risk management Preventive
    Include the benefits of the system in the fundamental rights impact assessment. CC ID 17244 Audits and risk management Preventive
    Include user safeguards in the fundamental rights impact assessment. CC ID 17255 Audits and risk management Preventive
    Include the outputs produced by the system in the fundamental rights impact assessment. CC ID 17247 Audits and risk management Preventive
    Include the purpose in the fundamental rights impact assessment. CC ID 17243 Audits and risk management Preventive
    Include monitoring procedures in the fundamental rights impact assessment. CC ID 17254 Audits and risk management Preventive
    Include risk management measures in the fundamental rights impact assessment. CC ID 17224
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms. Article 27 1.(f)]
    Audits and risk management Preventive
    Include human oversight measures in the fundamental rights impact assessment. CC ID 17223
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the implementation of human oversight measures, according to the instructions for use; Article 27 1.(e)]
    Audits and risk management Preventive
    Include risks in the fundamental rights impact assessment. CC ID 17222
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13; Article 27 1.(d)]
    Audits and risk management Preventive
    Include affected parties in the fundamental rights impact assessment. CC ID 17221
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the categories of natural persons and groups likely to be affected by its use in the specific context; Article 27 1.(c)]
    Audits and risk management Preventive
    Include the frequency in the fundamental rights impact assessment. CC ID 17220
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; Article 27 1.(b)]
    Audits and risk management Preventive
    Include the usage duration in the fundamental rights impact assessment. CC ID 17219
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; Article 27 1.(b)]
    Audits and risk management Preventive
    Include system use in the fundamental rights impact assessment. CC ID 17218
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose; Article 27 1.(a)]
    Audits and risk management Preventive
    Include an assessment of the relationship between the data subject and the parties processing the data in the Data Protection Impact Assessment. CC ID 16371 Audits and risk management Preventive
    Include consideration of the data subject's expectations in the Data Protection Impact Assessment. CC ID 16370 Audits and risk management Preventive
    Establish, implement, and maintain a risk assessment policy. CC ID 14026 Audits and risk management Preventive
    Include compliance requirements in the risk assessment policy. CC ID 14121 Audits and risk management Preventive
    Include coordination amongst entities in the risk assessment policy. CC ID 14120 Audits and risk management Preventive
    Include management commitment in the risk assessment policy. CC ID 14119 Audits and risk management Preventive
    Include roles and responsibilities in the risk assessment policy. CC ID 14118 Audits and risk management Preventive
    Include the scope in the risk assessment policy. CC ID 14117 Audits and risk management Preventive
    Include the purpose in the risk assessment policy. CC ID 14116 Audits and risk management Preventive
    Include the probability and potential impact of pandemics in the scope of the risk assessment. CC ID 13241 Audits and risk management Preventive
    Document any reasons for modifying or refraining from modifying the organization's risk assessment when the risk assessment has been reviewed. CC ID 13312 Audits and risk management Preventive
    Create a risk assessment report based on the risk assessment results. CC ID 15695 Audits and risk management Preventive
    Include recovery of the critical path in the Business Impact Analysis. CC ID 13224 Audits and risk management Preventive
    Include acceptable levels of data loss in the Business Impact Analysis. CC ID 13264 Audits and risk management Preventive
    Include Recovery Point Objectives in the Business Impact Analysis. CC ID 13223 Audits and risk management Preventive
    Include the Recovery Time Objectives in the Business Impact Analysis. CC ID 13222 Audits and risk management Preventive
    Include pandemic risks in the Business Impact Analysis. CC ID 13219 Audits and risk management Preventive
    Establish, implement, and maintain a risk register. CC ID 14828 Audits and risk management Preventive
    Perform a gap analysis to review in-scope controls for identified risks and implement new controls, as necessary. CC ID 00704
    [In identifying the most appropriate risk management measures, the following shall be ensured: elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system; Article 9 5. ¶ 2 (a)]
    Audits and risk management Detective
    Document the results of the gap analysis. CC ID 16271 Audits and risk management Preventive
    Establish, implement, and maintain a risk treatment plan. CC ID 11983
    [In identifying the most appropriate risk management measures, the following shall be ensured: where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated; Article 9 5. ¶ 2 (b)
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; Article 55 1.(b)]
    Audits and risk management Preventive
    Include roles and responsibilities in the risk treatment plan. CC ID 16991
    [With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used. Article 9 5. ¶ 3]
    Audits and risk management Preventive
    Include time information in the risk treatment plan. CC ID 16993 Audits and risk management Preventive
    Include allocation of resources in the risk treatment plan. CC ID 16989 Audits and risk management Preventive
    Include the date of the risk assessment in the risk treatment plan. CC ID 16321 Audits and risk management Preventive
    Include requirements for monitoring and reporting in the risk treatment plan, as necessary. CC ID 13620 Audits and risk management Preventive
    Include a description of usage in the risk treatment plan. CC ID 11977
    [With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used. Article 9 5. ¶ 3]
    Audits and risk management Preventive
    Document all constraints applied to the risk treatment plan, as necessary. CC ID 13619 Audits and risk management Preventive
    Document residual risk in a residual risk report. CC ID 13664 Audits and risk management Corrective
    Establish, implement, and maintain an artificial intelligence risk management program. CC ID 16220 Audits and risk management Preventive
    Include diversity and equal opportunity in the artificial intelligence risk management program. CC ID 16255 Audits and risk management Preventive
    Include a commitment to continuous improvement in the cybersecurity risk management program. CC ID 16839 Audits and risk management Preventive
    Establish, implement, and maintain a cybersecurity risk management policy. CC ID 16834 Audits and risk management Preventive
    Establish, implement, and maintain a cybersecurity risk management strategy. CC ID 11991
    [The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. Article 15 5. ¶ 2]
    Audits and risk management Preventive
    Include a risk prioritization approach in the Cybersecurity Risk Management Strategy. CC ID 12276 Audits and risk management Preventive
    Include defense in depth strategies in the cybersecurity risk management strategy. CC ID 15582 Audits and risk management Preventive
    Establish, implement, and maintain a cybersecurity supply chain risk management program. CC ID 16826 Audits and risk management Preventive
    Establish, implement, and maintain cybersecurity supply chain risk management procedures. CC ID 16830 Audits and risk management Preventive
    Establish, implement, and maintain a supply chain risk management policy. CC ID 14663 Audits and risk management Preventive
    Include compliance requirements in the supply chain risk management policy. CC ID 14711 Audits and risk management Preventive
    Include coordination amongst entities in the supply chain risk management policy. CC ID 14710 Audits and risk management Preventive
    Include management commitment in the supply chain risk management policy. CC ID 14709 Audits and risk management Preventive
    Include roles and responsibilities in the supply chain risk management policy. CC ID 14708 Audits and risk management Preventive
    Include the scope in the supply chain risk management policy. CC ID 14707 Audits and risk management Preventive
    Include the purpose in the supply chain risk management policy. CC ID 14706 Audits and risk management Preventive
    Establish, implement, and maintain a supply chain risk management plan. CC ID 14713 Audits and risk management Preventive
    Include processes for monitoring and reporting in the supply chain risk management plan. CC ID 15619 Audits and risk management Preventive
    Include dates in the supply chain risk management plan. CC ID 15617 Audits and risk management Preventive
    Include implementation milestones in the supply chain risk management plan. CC ID 15615 Audits and risk management Preventive
    Include roles and responsibilities in the supply chain risk management plan. CC ID 15613 Audits and risk management Preventive
    Include supply chain risk management procedures in the risk management program. CC ID 13190 Audits and risk management Preventive
    Establish, implement, and maintain a disclosure report. CC ID 15521 Audits and risk management Preventive
    Establish, implement, and maintain a digital identity management program. CC ID 13713 Technical security Preventive
    Establish, implement, and maintain an authorized representatives policy. CC ID 13798
    [Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. Article 22 1.]
    Technical security Preventive
    Include authorized representative life cycle management requirements in the authorized representatives policy. CC ID 13802 Technical security Preventive
    Include termination procedures in the authorized representatives policy. CC ID 17226
    [The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor. Article 22 4.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor. Article 54 5.]
    Technical security Preventive
    Include any necessary restrictions for the authorized representative in the authorized representatives policy. CC ID 13801 Technical security Preventive
    Include suspension requirements for authorized representatives in the authorized representatives policy. CC ID 13800 Technical security Preventive
    Include the authorized representative's life span in the authorized representatives policy. CC ID 13799 Technical security Preventive
    Include the user identifiers of all personnel who are authorized to access a system in the system record. CC ID 15171 Technical security Preventive
    Include identity information of all personnel who are authorized to access a system in the system record. CC ID 16406 Technical security Preventive
    Include the date and time that access rights were changed in the system record. CC ID 16415 Technical security Preventive
    Include information security requirements in the remote access and teleworking program. CC ID 15704 Technical security Preventive
    Document and approve requests to bypass multifactor authentication. CC ID 15464 Technical security Preventive
    Establish, implement, and maintain a facility physical security program. CC ID 00711
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. Article 55 1.(d)]
    Physical and environmental protection Preventive
    Establish, implement, and maintain opening procedures for businesses. CC ID 16671 Physical and environmental protection Preventive
    Establish, implement, and maintain closing procedures for businesses. CC ID 16670 Physical and environmental protection Preventive
    Establish and maintain a contract for accessing facilities that transmit, process, or store restricted data. CC ID 12050 Physical and environmental protection Preventive
    Include identification cards or badges in the physical security program. CC ID 14818 Physical and environmental protection Preventive
    Establish, implement, and maintain security procedures for virtual meetings. CC ID 15581 Physical and environmental protection Preventive
    Establish, implement, and maintain floor plans. CC ID 16419 Physical and environmental protection Preventive
    Include network infrastructure and cabling infrastructure on the floor plan. CC ID 16420 Physical and environmental protection Preventive
    Establish, implement, and maintain lost or damaged identification card procedures, as necessary. CC ID 14819 Physical and environmental protection Preventive
    Document all lost badges in a lost badge list. CC ID 12448 Physical and environmental protection Corrective
    Include error handling controls in identification issuance procedures. CC ID 13709 Physical and environmental protection Preventive
    Include information security in the identification issuance procedures. CC ID 15425 Physical and environmental protection Preventive
    Establish, implement, and maintain post-issuance update procedures for identification cards or badges. CC ID 15426 Physical and environmental protection Preventive
    Establish, implement, and maintain a door security standard. CC ID 06686 Physical and environmental protection Preventive
    Establish, implement, and maintain a window security standard. CC ID 06689 Physical and environmental protection Preventive
    Establish, implement, and maintain a camera operating policy. CC ID 15456 Physical and environmental protection Preventive
    Record the date and time of entry in the visitor log. CC ID 13255 Physical and environmental protection Preventive
    Include evidence of experience in applications for professional certification. CC ID 16193 Human Resources management Preventive
    Include supporting documentation in applications for professional certification. CC ID 16195 Human Resources management Preventive
    Include portions of the visitor control program in the training plan. CC ID 13287 Human Resources management Preventive
    Establish, implement, and maintain a security awareness and training policy. CC ID 14022 Human Resources management Preventive
    Include compliance requirements in the security awareness and training policy. CC ID 14092 Human Resources management Preventive
    Include coordination amongst entities in the security awareness and training policy. CC ID 14091 Human Resources management Preventive
    Establish, implement, and maintain security awareness and training procedures. CC ID 14054 Human Resources management Preventive
    Include management commitment in the security awareness and training policy. CC ID 14049 Human Resources management Preventive
    Include roles and responsibilities in the security awareness and training policy. CC ID 14048 Human Resources management Preventive
    Include the scope in the security awareness and training policy. CC ID 14047 Human Resources management Preventive
    Include the purpose in the security awareness and training policy. CC ID 14045 Human Resources management Preventive
    Include configuration management procedures in the security awareness program. CC ID 13967 Human Resources management Preventive
    Document security awareness requirements. CC ID 12146 Human Resources management Preventive
    Include a requirement to train all new hires and interested personnel in the security awareness program. CC ID 11800 Human Resources management Preventive
    Include remote access in the security awareness program. CC ID 13892 Human Resources management Preventive
    Document the goals of the security awareness program. CC ID 12145 Human Resources management Preventive
    Compare current security awareness assessment reports to the security awareness baseline. CC ID 12150 Human Resources management Preventive
    Document the scope of the security awareness program. CC ID 12148 Human Resources management Preventive
    Establish, implement, and maintain a security awareness baseline. CC ID 12147 Human Resources management Preventive
    Establish, implement, and maintain an environmental management system awareness program. CC ID 15200 Human Resources management Preventive
    Establish, implement, and maintain a Code of Conduct. CC ID 04897
    [Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of any interested stakeholders and their representative organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems. Article 95 3.]
    Human Resources management Preventive
    Establish, implement, and maintain a code of conduct for financial recommendations. CC ID 16649 Human Resources management Preventive
    Include anti-coercion requirements and anti-tying requirements in the Code of Conduct. CC ID 16720 Human Resources management Preventive
    Include classifications of ethics violations in the Code of Conduct. CC ID 14769 Human Resources management Preventive
    Include definitions of ethics violations in the Code of Conduct. CC ID 14768 Human Resources management Preventive
    Include exercising due professional care in the Code of Conduct. CC ID 14210 Human Resources management Preventive
    Include health and safety provisions in the Code of Conduct. CC ID 16206 Human Resources management Preventive
    Include responsibilities to the public trust in the Code of Conduct. CC ID 14209 Human Resources management Preventive
    Include environmental responsibility criteria in the Code of Conduct. CC ID 16209 Human Resources management Preventive
    Include social responsibility criteria in the Code of Conduct. CC ID 16210 Human Resources management Preventive
    Include labor rights criteria in the Code of Conduct. CC ID 16208 Human Resources management Preventive
    Include the employee's legal responsibilities and rights in the Terms and Conditions of employment. CC ID 15701 Human Resources management Preventive
    Establish, implement, and maintain a Governance, Risk, and Compliance framework. CC ID 01406 Operational management Preventive
    Establish, implement, and maintain an information security program. CC ID 00812 Operational management Preventive
    Establish, implement, and maintain operational control procedures. CC ID 00831 Operational management Preventive
    Include alternative actions in the operational control procedures. CC ID 17096
    [Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 53 4.
    Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 55 2.]
    Operational management Preventive
    Establish, implement, and maintain a Standard Operating Procedures Manual. CC ID 00826
    [{is accessible} {is comprehensible} High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers. Article 13 2.]
    Operational management Preventive
    Include system use information in the standard operating procedures manual. CC ID 17240 Operational management Preventive
    Include metrics in the standard operating procedures manual. CC ID 14988
    [The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. Article 15 3.]
    Operational management Preventive
    Include maintenance measures in the standard operating procedures manual. CC ID 14986
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Operational management Preventive
    Include logging procedures in the standard operating procedures manual. CC ID 17214
    [The instructions for use shall contain at least the following information: where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12. Article 13 3.(f)]
    Operational management Preventive
    Include the expected lifetime of the system in the standard operating procedures manual. CC ID 14984
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Operational management Preventive
    Include resources in the standard operating procedures manual. CC ID 17212
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Operational management Preventive
    Include technical measures used to interpret output in the standard operating procedures manual. CC ID 14982
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output; Article 13 3.(b)(iv)
    The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately; Article 13 3.(b)(vii)
    The instructions for use shall contain at least the following information: the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers; Article 13 3.(d)]
    Operational management Preventive
    Include human oversight measures in the standard operating procedures manual. CC ID 17213
    [The instructions for use shall contain at least the following information: the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers; Article 13 3.(d)]
    Operational management Preventive
    Include predetermined changes in the standard operating procedures manual. CC ID 14977
    [The instructions for use shall contain at least the following information: the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; Article 13 3.(c)]
    Operational management Preventive
    Include specifications for input data in the standard operating procedures manual. CC ID 14975
    [{training data} {validation data} {testing data} The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system; Article 13 3.(b)(vi)
    Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system. Article 26 4.]
    Operational management Preventive
    Include risks to health and safety or fundamental rights in the standard operating procedures manual. CC ID 14973
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2); Article 13 3.(b)(iii)]
    Operational management Preventive
    Include circumstances that may impact the system in the standard operating procedures manual. CC ID 14972
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; Article 13 3.(b)(ii)]
    Operational management Preventive
    Include what the system was tested and validated for in the standard operating procedures manual. CC ID 14969
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; Article 13 3.(b)(ii)]
    Operational management Preventive
    Update operating procedures that contribute to user errors. CC ID 06935 Operational management Corrective
    Include the intended purpose in the standard operating procedures manual. CC ID 14967
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: its intended purpose; Article 13 3.(b)(i)]
    Operational management Preventive
    Include information on system performance in the standard operating procedures manual. CC ID 14965
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: Article 13 3.(b)
    The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used; Article 13 3.(b)(v)]
    Operational management Preventive
    Include contact details in the standard operating procedures manual. CC ID 14962
    [The instructions for use shall contain at least the following information: the identity and the contact details of the provider and, where applicable, of its authorised representative; Article 13 3.(a)]
    Operational management Preventive
    Include that explicit management authorization must be given for the use of all technologies and their documentation in the Acceptable Use Policy. CC ID 01351
    [The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities. Article 46 3.]
    Operational management Preventive
    Establish, implement, and maintain an Intellectual Property Right program. CC ID 00821
    [Providers of general-purpose AI models shall: put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; Article 53 1.(c)]
    Operational management Preventive
    Establish, implement, and maintain Intellectual Property Rights protection procedures. CC ID 11512
    [Providers of general-purpose AI models shall: put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; Article 53 1.(c)]
    Operational management Preventive
    Comply with all implemented policies in the organization's compliance framework. CC ID 06384
    [In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. Article 8 1.
    Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I. Article 8 2.
    Providers of high-risk AI systems shall: ensure that their high-risk AI systems are compliant with the requirements set out in Section 2; Article 16 ¶ 1 (a)
    Providers of high-risk AI systems shall: comply with the registration obligations referred to in Article 49(1); Article 16 ¶ 1 (i)
    Providers of high-risk AI systems shall: ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882. Article 16 ¶ 1 (l)
    {quality management system} The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation. Article 17 2.
    For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the obligation to put in place a quality management system, with the exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services law. To that end, any harmonised standards referred to in Article 40 shall be taken into account. Article 17 4.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct. Article 22 3.(e)
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in of Chapter V, Sections 2 and 3, of this Regulation, to the extent that those standards cover those requirements or obligations. Article 40 1.
    High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations. Article 41 3.
    High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to comply with the relevant requirements laid down in Article 10(4). Article 42 1.
    High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements. Article 42 2.
    {keep up to date} By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate. Article 47 4.
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and Article 53 1.(b)(i)
    Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 55 2.
    For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law. Article 26 5. ¶ 2]
    Operational management Preventive
    Establish, implement, and maintain a customer service program. CC ID 00846 Operational management Preventive
    Include incident monitoring procedures in the Incident Management program. CC ID 01207 Operational management Preventive
    Record actions taken by investigators during a forensic investigation in the forensic investigation report. CC ID 07095 Operational management Preventive
    Include corrective actions in the forensic investigation report. CC ID 17070
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Operational management Preventive
    Document the justification for not reporting incidents to interested personnel and affected parties. CC ID 16547 Operational management Preventive
    Document any potential harm in the incident finding when concluding the incident investigation. CC ID 13830 Operational management Preventive
    Log incidents in the Incident Management audit log. CC ID 00857
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; Article 55 1.(c)]
    Operational management Preventive
    Include corrective actions in the incident management audit log. CC ID 16466 Operational management Preventive
    Establish, implement, and maintain an Incident Response program. CC ID 00579 Operational management Preventive
    Create an incident response report. CC ID 12700 Operational management Preventive
    Include a root cause analysis of the incident in the incident response report. CC ID 12701
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Operational management Preventive
    Establish, implement, and maintain a disability accessibility program. CC ID 06191
    [The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements. Article 50 5.]
    Operational management Preventive
    Establish, implement, and maintain web content accessibility guidelines. CC ID 14949 Operational management Preventive
    Refrain from creating instructions for content that rely on sensory characteristics of components. CC ID 15124 Operational management Preventive
    Include contact details in the registration database. CC ID 15109
    [The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. Article 71 5.]
    Operational management Preventive
    Include personal data in the registration database, as necessary. CC ID 15108
    [The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. Article 71 5.]
    Operational management Preventive
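    The data-minimisation wording of Article 71 5. ("personal data only in so far as necessary") can be enforced with a simple allow-list check on registration contact records. The field names below are assumptions chosen to illustrate the idea, not fields specified by the Regulation.

    ```python
    # Minimal sketch of an EU-database registration contact restricted to the
    # personal data Article 71 5. permits: the names and contact details of the
    # natural persons responsible for registering the system who have legal
    # authority to represent the provider or the deployer.
    REQUIRED_FIELDS = {"name", "email", "role"}

    def validate_registration_contact(contact: dict) -> bool:
        """Return True only when the record carries exactly the allow-listed
        fields, mirroring the 'only in so far as necessary' constraint."""
        return set(contact) == REQUIRED_FIELDS

    ok = validate_registration_contact(
        {"name": "Jane Doe", "email": "jane@example.com", "role": "provider representative"}
    )
    print(ok)  # True
    ```

    Rejecting records with extra keys (rather than silently dropping them) makes over-collection visible at the point of entry.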
    Withdraw the technical documentation assessment certificate when the artificial intelligence system is not in compliance with requirements. CC ID 15099
    [Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. Article 44 3. ¶ 1]
    Operational management Preventive
    Define a high-risk artificial intelligence system. CC ID 14959
    [{high-risk artificial intelligence system} Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; Article 6 1.(a)
    {high-risk artificial intelligence system} Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I. Article 6 1.(b)
    {high-risk artificial intelligence system} By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. Article 6 3. ¶ 1
    {not be considered a high-risk artificial intelligence system} {assigned task} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to perform a narrow procedural task; Article 6 3. ¶ 2 (a)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to improve the result of a previously completed human activity; Article 6 3. ¶ 2 (b)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or Article 6 3. ¶ 2 (c)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III. Article 6 3. ¶ 2 (d)
    {high-risk artificial intelligence system} Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons. Article 6 3. ¶ 3]
    Operational management Preventive
    Establish, implement, and maintain a declaration of conformity. CC ID 15038
    [Providers of high-risk AI systems shall: draw up an EU declaration of conformity in accordance with Article 47; Article 16 ¶ 1 (g)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates. Article 47 3.
    {keep up to date} By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate. Article 47 4.]
    Operational management Preventive
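    Article 47 1. requires the EU declaration of conformity to be written and machine-readable. One way a provider might satisfy the machine-readable requirement is a structured document such as the JSON sketch below; the keys loosely follow the information listed in Annex V but are illustrative assumptions, not a normative template.

    ```python
    import json

    # Hypothetical machine-readable EU declaration of conformity for one
    # high-risk AI system. Key names are assumptions for illustration.
    declaration = {
        "ai_system": {"name": "ExampleScreen", "version": "1.2.0"},  # identifies the system (Art. 47 1.)
        "provider": {"name": "Example Corp", "address": "1 Example Street, Brussels"},
        "statement": "The high-risk AI system meets the requirements set out in Section 2.",
        "harmonised_standards": [],   # references to standards applied, where any
        "language": "en",             # easily understood by competent authorities (Art. 47 2.)
        "retention_years": 10,        # kept at authorities' disposal for 10 years (Art. 47 1.)
    }

    serialized = json.dumps(declaration, indent=2)
    print(json.loads(serialized)["retention_years"])  # 10
    ```

    Keeping the declaration in a structured format also makes it straightforward to regenerate and re-sign when the provider updates it "as appropriate" under Article 47 4.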
    Include compliance requirements in the declaration of conformity. CC ID 15105
    [Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates. Article 47 3.]
    Operational management Preventive
    Translate the declaration of conformity into an official language. CC ID 15103
    [The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.
    Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned. Article 21 1.]
    Operational management Preventive
    Include all required information in the declaration of conformity. CC ID 15101
    [The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.]
    Operational management Preventive
    Include a statement that the artificial intelligence system meets all requirements in the declaration of conformity. CC ID 15100
    [Providers of high-risk AI systems shall: upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2; Article 16 ¶ 1 (k)
    The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.]
    Operational management Preventive
    Identify the artificial intelligence system in the declaration of conformity. CC ID 15098
    [The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.]
    Operational management Preventive
    Establish, implement, and maintain a Configuration Management program. CC ID 00867 System hardening through configuration management Preventive
    Establish, implement, and maintain appropriate system labeling. CC ID 01900
    [Providers of high-risk AI systems shall: indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, the address at which they can be contacted; Article 16 ¶ 1 (b)
    Providers of high-risk AI systems shall: affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48; Article 16 ¶ 1 (h)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use; Article 23 1.(c)
    Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable. Article 23 3.
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    {digital form} For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means. Article 48 2.
    The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate. Article 48 3.
    Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.
    Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfil the requirements of that other law. Article 48 5.]
    System hardening through configuration management Preventive
    Include the identification number of the third party who performed the conformity assessment procedures on all promotional materials. CC ID 15041
    [Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.]
    System hardening through configuration management Preventive
    Include the identification number of the third party who conducted the conformity assessment procedures after the CE marking of conformity. CC ID 15040
    [Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.]
    System hardening through configuration management Preventive
    Establish, implement, and maintain system hardening procedures. CC ID 12001 System hardening through configuration management Preventive
    Establish, implement, and maintain an information management program. CC ID 14315 Records management Preventive
    Establish, implement, and maintain system design requirements. CC ID 06618 Systems design, build, and implementation Preventive
    Establish, implement, and maintain human interface guidelines. CC ID 08662
    [Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence. Article 50 1.]
    Systems design, build, and implementation Preventive
    Include mechanisms for changing authenticators in human interface guidelines. CC ID 14944 Systems design, build, and implementation Preventive
    Include functionality for managing user data in human interface guidelines. CC ID 14928 Systems design, build, and implementation Preventive
    Submit the information system's security authorization package to the appropriate stakeholders, as necessary. CC ID 13987
    [The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities. Article 46 3.]
    Systems design, build, and implementation Preventive
    Establish and maintain technical documentation. CC ID 15005
    [The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. Article 11 1. ¶ 1
    Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the technical documentation as part of the documentation kept under the relevant Union financial services law. Article 18 3.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV; Article 23 1.(b)
    Providers of general-purpose AI models shall: draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities; Article 53 1.(a)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the technical documentation specified in Annex XI has been drawn up and all obligations referred to in Article 53 and, where applicable, Article 55 have been fulfilled by the provider; Article 54 3.(a)
    Providers of general-purpose AI models shall: draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. Article 53 1.(d)
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: Article 53 1.(b)]
    Systems design, build, and implementation Preventive
    Retain technical documentation on the premises where the artificial intelligence system is located. CC ID 15104 Systems design, build, and implementation Preventive
    Include the risk mitigation measures in the technical documentation. CC ID 17246 Systems design, build, and implementation Preventive
    Include the intended outputs of the system in the technical documentation. CC ID 17245 Systems design, build, and implementation Preventive
    Include the limitations of the system in the technical documentation. CC ID 17242 Systems design, build, and implementation Preventive
    Include the types of data used to train the artificial intelligence system in the technical documentation. CC ID 17241 Systems design, build, and implementation Preventive
    Include all required information in the technical documentation. CC ID 15094
    [The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment. Article 11 1. ¶ 2
    Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up containing all the information set out in paragraph 1, as well as the information required under those legal acts. Article 11 2.
    Providers of general-purpose AI models shall: draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities; Article 53 1.(a)
    The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). Article 72 3.
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and Article 53 1.(b)(i)
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: contain, at a minimum, the elements set out in Annex XII; Article 53 1.(b)(ii)]
    Systems design, build, and implementation Preventive
    Include information that demonstrates compliance with requirements in the technical documentation. CC ID 15088
    [The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment. Article 11 1. ¶ 2]
    Systems design, build, and implementation Preventive
    Establish, implement, and maintain system acquisition contracts. CC ID 14758 Acquisition or sale of facilities, technology, and services Preventive
    Include how to access information from the dispute resolution body in the consumer complaint management program. CC ID 13816 Acquisition or sale of facilities, technology, and services Preventive
    Include any requirements for using information from the dispute resolution body in the consumer complaint management program. CC ID 13815 Acquisition or sale of facilities, technology, and services Preventive
    Establish, implement, and maintain notice and take-down procedures. CC ID 09963 Acquisition or sale of facilities, technology, and services Preventive
    Establish, implement, and maintain a privacy framework that protects restricted data. CC ID 11850 Privacy protection for information and data Preventive
    Establish, implement, and maintain a personal data choice and consent program. CC ID 12569
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Privacy protection for information and data Preventive
    Date the data subject's consent. CC ID 17233
    [The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative. Article 61 2.]
    Privacy protection for information and data Preventive
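One way to document informed consent for real-world testing so that it is dated (Article 61(2)) and can be withdrawn at any time without justification (Article 60(5)) is sketched below. All names and fields are illustrative assumptions, not a prescribed record format.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hedged sketch of an informed-consent record for testing in real world
# conditions. Field and method names are assumptions for illustration.
@dataclass
class InformedConsent:
    subject_id: str
    consent_date: date                   # consent must be dated and documented
    copy_given_to_subject: bool = False  # a copy goes to the subject or legal representative
    withdrawn_on: Optional[date] = None

    def withdraw(self, when: date) -> None:
        """Revoke consent; the subject need not provide any justification.

        Withdrawal can trigger immediate, permanent deletion of the subject's
        personal data on request, but does not retroactively invalidate
        testing activities already carried out before this date.
        """
        self.withdrawn_on = when

    @property
    def active(self) -> bool:
        return self.withdrawn_on is None

consent = InformedConsent("subject-042", date(2026, 3, 1), copy_given_to_subject=True)
consent.withdraw(date(2026, 4, 15))
```

Keeping the withdrawal date alongside the original consent date preserves the distinction Article 60(5) draws: activities before withdrawal remain valid, while processing after it must stop.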
    Establish, implement, and maintain data request procedures. CC ID 16546 Privacy protection for information and data Preventive
    Establish and maintain disclosure authorization forms for authorization of consent to use personal data. CC ID 13433 Privacy protection for information and data Preventive
    Include procedures for revoking authorization of consent to use personal data in the disclosure authorization form. CC ID 13438 Privacy protection for information and data Preventive
    Include the identity of the person seeking consent in the disclosure authorization. CC ID 13999 Privacy protection for information and data Preventive
    Include the recipients of the disclosed personal data in the disclosure authorization form. CC ID 13440 Privacy protection for information and data Preventive
    Include the signature of the data subject and the signing date in the disclosure authorization form. CC ID 13439 Privacy protection for information and data Preventive
    Include the identity of the data subject in the disclosure authorization form. CC ID 13436 Privacy protection for information and data Preventive
    Include the types of personal data to be disclosed in the disclosure authorization form. CC ID 13442 Privacy protection for information and data Preventive
    Include how personal data will be used in the disclosure authorization form. CC ID 13441 Privacy protection for information and data Preventive
    Include agreement termination information in the disclosure authorization form. CC ID 13437 Privacy protection for information and data Preventive
    Highlight the section regarding data subject's consent from other sections in contracts and agreements. CC ID 13988 Privacy protection for information and data Preventive
    Establish, implement, and maintain a personal data accountability program. CC ID 13432 Privacy protection for information and data Preventive
    Establish, implement, and maintain approval applications. CC ID 16778 Privacy protection for information and data Preventive
    Include required information in the approval application. CC ID 16628 Privacy protection for information and data Preventive
    Establish, implement, and maintain a personal data use limitation program. CC ID 13428 Privacy protection for information and data Preventive
    Establish, implement, and maintain a personal data use purpose specification. CC ID 00093 Privacy protection for information and data Preventive
    Establish, implement, and maintain restricted data use limitation procedures. CC ID 00128 Privacy protection for information and data Preventive
    Establish and maintain a record of processing activities when processing restricted data. CC ID 12636
    [{was necessary} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data. Article 10 5.(f)]
    Privacy protection for information and data Preventive
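The Article 10(5)(f) condition quoted above requires the record of processing activities to state both why special-category data was strictly necessary for bias detection and why other data would not suffice. A minimal sketch of such a record, with illustrative field names that are assumptions rather than any prescribed schema, might look like this:

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: a record-of-processing entry capturing the extra
# justification Article 10(5)(f) requires when special categories of personal
# data are processed for bias detection and correction.
@dataclass
class ProcessingRecord:
    purpose: str
    data_categories: List[str]
    special_categories: bool = False
    bias_detection_justification: str = ""  # why special-category data was strictly necessary
    alternatives_considered: str = ""       # why other data could not achieve the objective

    def validate(self) -> None:
        """Reject records that omit the mandatory Article 10(5)(f) reasoning."""
        if self.special_categories and not (
            self.bias_detection_justification and self.alternatives_considered
        ):
            raise ValueError(
                "Article 10(5)(f): the record must state why special-category "
                "processing was strictly necessary and why other data would not suffice."
            )

record = ProcessingRecord(
    purpose="Bias detection in a high-risk recruitment AI system",
    data_categories=["application text", "demographic attributes"],
    special_categories=True,
    bias_detection_justification=(
        "Needed to measure disparate error rates across protected groups."
    ),
    alternatives_considered=(
        "Proxy variables were evaluated and found insufficient to detect the bias."
    ),
)
record.validate()  # raises ValueError if the mandatory justification is missing
```

Validating the justification fields at record-creation time is one practical way to keep the Regulation (EU) 2016/679-style processing record and the AI Act's additional bias-detection conditions in a single artifact.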
    Refrain from maintaining a record of processing activities if the data processor employs a limited number of persons. CC ID 13378 Privacy protection for information and data Preventive
    Refrain from maintaining a record of processing activities if the personal data relates to criminal records. CC ID 13377 Privacy protection for information and data Preventive
    Refrain from maintaining a record of processing activities if the data being processed is restricted data. CC ID 13376 Privacy protection for information and data Preventive
    Refrain from maintaining a record of processing activities if it could result in a risk to the data subject's rights or data subject's freedom. CC ID 13375 Privacy protection for information and data Preventive
    Document the conditions for the use or disclosure of Individually Identifiable Health Information by a covered entity to another covered entity. CC ID 00210 Privacy protection for information and data Preventive
    Disclose Individually Identifiable Health Information for research use when the appropriate requirements are included in the approval documentation or waiver documentation. CC ID 06257 Privacy protection for information and data Preventive
    Document the conditions for the disclosure of Individually Identifiable Health Information by an organization providing healthcare services to organizations other than business associates or other covered entities. CC ID 00201 Privacy protection for information and data Preventive
    Document how Individually Identifiable Health Information is used and disclosed when authorization has been granted. CC ID 00216 Privacy protection for information and data Preventive
    Define and implement valid authorization control requirements. CC ID 06258 Privacy protection for information and data Preventive
    Document the redisclosing restricted data exceptions. CC ID 00170 Privacy protection for information and data Preventive
    Establish, implement, and maintain a data handling program. CC ID 13427 Privacy protection for information and data Preventive
    Establish, implement, and maintain data handling policies. CC ID 00353 Privacy protection for information and data Preventive
    Establish, implement, and maintain data and information confidentiality policies. CC ID 00361 Privacy protection for information and data Preventive
    Establish, implement, and maintain a personal data transfer program. CC ID 00307
    [{have not transmitted} {have not transferred} {have not accessed} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties; Article 10 5.(d)]
    Privacy protection for information and data Preventive
    Include procedures for transferring personal data from one data controller to another data controller in the personal data transfer program. CC ID 00351 Privacy protection for information and data Preventive
    Include procedures for transferring personal data to third parties in the personal data transfer program. CC ID 00333 Privacy protection for information and data Preventive
    Document transfer disagreements by the data subject in writing. CC ID 00348 Privacy protection for information and data Preventive
    Define the personal data transfer exceptions for transferring personal data to another country when adequate protection level standards are not met. CC ID 00315 Privacy protection for information and data Preventive
    Define the personal data transfer exceptions for transferring personal data to another organization when adequate protection level standards are not met. CC ID 00336 Privacy protection for information and data Preventive
    Establish, implement, and maintain Internet interactivity data transfer procedures. CC ID 06949 Privacy protection for information and data Preventive
    Define the organization's liability based on the applicable law. CC ID 00504
    [{be liable} {refrain from imposing} Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law. Article 57 12.
    The provider or prospective provider shall be liable under applicable Union and national liability law for any damage caused in the course of their testing in real world conditions. Article 60 9.]
    Privacy protection for information and data Preventive
    Define the sanctions and fines available for privacy rights violations based on applicable law. CC ID 00505
    [{be liable} {refrain from imposing} Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law. Article 57 12.
    The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99. Article 80 4.
    Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance authority establishes that the AI system was misclassified by the provider as non-high-risk in order to circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Article 99. Article 80 7.]
    Privacy protection for information and data Preventive
    Define the appeal process based on the applicable law. CC ID 00506
    [An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available. Article 44 3. ¶ 2]
    Privacy protection for information and data Preventive
    Establish, implement, and maintain software exchange agreements with all third parties. CC ID 11615 Third Party and supply chain oversight Preventive
    Establish, implement, and maintain rules of engagement with third parties. CC ID 13994 Third Party and supply chain oversight Preventive
    Include the purpose in the information flow agreement. CC ID 17016 Third Party and supply chain oversight Preventive
    Include the type of information being transmitted in the information flow agreement. CC ID 14245 Third Party and supply chain oversight Preventive
    Include the costs in the information flow agreement. CC ID 17018 Third Party and supply chain oversight Preventive
    Include the security requirements in the information flow agreement. CC ID 14244 Third Party and supply chain oversight Preventive
    Include the interface characteristics in the information flow agreement. CC ID 14240 Third Party and supply chain oversight Preventive
    Include text about participation in the organization's testing programs in third party contracts. CC ID 14402 Third Party and supply chain oversight Preventive
    Include the contract duration in third party contracts. CC ID 16221 Third Party and supply chain oversight Preventive
    Include cryptographic keys in third party contracts. CC ID 16179 Third Party and supply chain oversight Preventive
    Include bankruptcy provisions in third party contracts. CC ID 16519 Third Party and supply chain oversight Preventive
    Include cybersecurity supply chain risk management requirements in third party contracts. CC ID 15646 Third Party and supply chain oversight Preventive
    Include requirements to cooperate with competent authorities in third party contracts. CC ID 17186 Third Party and supply chain oversight Preventive
    Include compliance with the organization's data usage policies in third party contracts. CC ID 16413 Third Party and supply chain oversight Preventive
    Include financial reporting in third party contracts, as necessary. CC ID 13573 Third Party and supply chain oversight Preventive
    Include on-site visits in third party contracts. CC ID 17306 Third Party and supply chain oversight Preventive
    Include the dispute resolution body's contact information in the terms and conditions in third party contracts. CC ID 13813 Third Party and supply chain oversight Preventive
    Include a usage limitation of restricted data clause in third party contracts. CC ID 13026 Third Party and supply chain oversight Preventive
    Include end-of-life information in third party contracts. CC ID 15265 Third Party and supply chain oversight Preventive
    Employ third parties to carry out testing programs, as necessary. CC ID 13178 Monitoring and measurement Preventive
    Employ third parties when implementing a risk assessment, as necessary. CC ID 16306 Audits and risk management Detective
    Engage appropriate parties to assist with risk assessments, as necessary. CC ID 12153 Audits and risk management Preventive
    Assign key stakeholders to review and approve supply chain risk management procedures. CC ID 13199 Audits and risk management Preventive
    Disallow opting out of wearing or displaying identification cards or badges. CC ID 13030 Physical and environmental protection Preventive
    Direct each employee to be responsible for their identification card or badge. CC ID 12332 Physical and environmental protection Preventive
    Assign employees the responsibility for controlling their identification badges. CC ID 12333 Physical and environmental protection Preventive
    Define and assign the authorized representative's roles and responsibilities. CC ID 15033
    [The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. Article 22 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 22 3.
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the provider has appointed an authorised representative in accordance with Article 22(1). Article 23 1.(d)
    Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. Article 54 1.
    The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. Article 54 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 54 3.]
    Human Resources management Preventive
    Hire third parties to conduct training, as necessary. CC ID 13167 Human Resources management Preventive
    Establish and maintain a steering committee to guide the security awareness program. CC ID 12149 Human Resources management Preventive
    Encourage interested personnel to obtain security certification. CC ID 11804 Human Resources management Preventive
    Implement measures to enable personnel assigned to human oversight to be aware of the possibility of automatically relying or over-relying on outputs to make decisions. CC ID 15091
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; Article 14 4.(b)]
    Operational management Preventive
    Refrain from discriminating against data subjects who have exercised privacy rights. CC ID 13435 Privacy protection for information and data Preventive
  • IT Impact Zone
    11
    Leadership and high level objectives CC ID 00597 Leadership and high level objectives IT Impact Zone
    Monitoring and measurement CC ID 00636 Monitoring and measurement IT Impact Zone
    Audits and risk management CC ID 00677 Audits and risk management IT Impact Zone
    Technical security CC ID 00508 Technical security IT Impact Zone
    Human Resources management CC ID 00763 Human Resources management IT Impact Zone
    Operational management CC ID 00805 Operational management IT Impact Zone
    System hardening through configuration management CC ID 00860 System hardening through configuration management IT Impact Zone
    Records management CC ID 00902 Records management IT Impact Zone
    Systems design, build, and implementation CC ID 00989 Systems design, build, and implementation IT Impact Zone
    Privacy protection for information and data CC ID 00008 Privacy protection for information and data IT Impact Zone
    Third Party and supply chain oversight CC ID 08807 Third Party and supply chain oversight IT Impact Zone
  • Investigate
    14
    Monitor and evaluate the effectiveness of detection tools. CC ID 13505 Monitoring and measurement Detective
    Determine if high rates of retail payment activities are from Originating Depository Financial Institutions. CC ID 13546 Monitoring and measurement Detective
    Review retail payment service reports, as necessary. CC ID 13545 Monitoring and measurement Detective
    Evaluate the effectiveness of threat and vulnerability management procedures. CC ID 13491 Audits and risk management Detective
    Identify changes to in scope systems that could threaten communication between business units. CC ID 13173 Audits and risk management Detective
    Detect anomalies in physical barriers. CC ID 13533 Physical and environmental protection Detective
    Report anomalies in the visitor log to appropriate personnel. CC ID 14755 Physical and environmental protection Detective
    Analyze the behaviors of individuals involved in the incident during incident investigations. CC ID 14042 Operational management Detective
    Identify the affected parties during incident investigations. CC ID 16781 Operational management Detective
    Interview suspects during incident investigations, as necessary. CC ID 14041 Operational management Detective
    Interview victims and witnesses during incident investigations, as necessary. CC ID 14038 Operational management Detective
    Refrain from altering the state of compromised systems when collecting digital forensic evidence. CC ID 08671
    [The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action. Article 73 6. ¶ 2]
    Operational management Detective
    Assess consumer complaints and litigation. CC ID 16521 Acquisition or sale of facilities, technology, and services Preventive
    Analyze requirements for processing personal data in contracts. CC ID 12550 Privacy protection for information and data Detective
  • Log Management
    26
    Establish, implement, and maintain monitoring and logging operations. CC ID 00637
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: monitoring the operation of high-risk AI systems referred to in Article 26(5). Article 12 2.(c)
    For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)]
    Monitoring and measurement Detective
    Enable monitoring and logging operations on all assets that meet the organizational criteria to maintain event logs. CC ID 06312
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification; Article 12 2.(a)]
    Monitoring and measurement Preventive
    Operationalize key monitoring and logging concepts to ensure the audit trails capture sufficient information. CC ID 00638
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: facilitating the post-market monitoring referred to in Article 72; and Article 12 2.(b)]
    Monitoring and measurement Detective
    Correlate log entries to security controls to verify the security control's effectiveness. CC ID 13207 Monitoring and measurement Detective
    Enable logging for all systems that meet a traceability criteria. CC ID 00640
    [High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system. Article 12 1.]
    Monitoring and measurement Detective
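    The Article 12(1) requirement quoted above (automatic recording of events over the lifetime of the system) can be sketched as a minimal append-only event log. This is an illustrative assumption only: the `EventLogger` class, its field names, and the JSON export format are hypothetical and are not prescribed by the Regulation.

    ```python
    import json
    from datetime import datetime, timezone

    class EventLogger:
        """Illustrative append-only event log for a high-risk AI system.

        Hypothetical sketch: the class name, entry fields, and export
        format are assumptions, not mandated by Regulation (EU) 2024/1689.
        """

        def __init__(self):
            self._records = []

        def record(self, event_type, detail):
            # Timestamp each event so the log supports traceability over
            # the lifetime of the system (cf. Article 12(1)).
            entry = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "event_type": event_type,
                "detail": detail,
            }
            self._records.append(entry)
            return entry

        def export(self):
            # Serialized form that could be produced for a competent
            # authority upon a reasoned request (cf. Article 21(2)).
            return json.dumps(self._records, indent=2)

    log = EventLogger()
    log.record("inference", {"input_id": "req-001", "risk_flag": False})
    log.record("anomaly", {"description": "output confidence below threshold"})
    print(json.loads(log.export())[1]["event_type"])  # anomaly
    ```

    An append-only structure with per-entry timestamps is one simple way to satisfy "automatic recording of events (logs) over the lifetime of the system"; real deployments would additionally need tamper-evidence and the retention controls referenced elsewhere in this mapping.
    
    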
    Log account usage times. CC ID 07099 Monitoring and measurement Detective
    Restrict access to logs to authorized individuals. CC ID 01342
    [Upon a reasoned request by a competent authority, providers shall also give the requesting competent authority, as applicable, access to the automatically generated logs of the high-risk AI system referred to in Article 12(1), to the extent such logs are under their control. Article 21 2.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider; Article 22 3.(c)]
    Monitoring and measurement Preventive
    Include the user's location in the system record. CC ID 16996 Technical security Preventive
    Log the individual's address in the facility access list. CC ID 16921 Physical and environmental protection Preventive
    Log the contact information for the person authorizing access in the facility access list. CC ID 16920 Physical and environmental protection Preventive
    Log the organization's name in the facility access list. CC ID 16919 Physical and environmental protection Preventive
    Log the individual's name in the facility access list. CC ID 16918 Physical and environmental protection Preventive
    Log the purpose in the facility access list. CC ID 16982 Physical and environmental protection Preventive
    Log the level of access in the facility access list. CC ID 16975 Physical and environmental protection Preventive
    Record the purpose of the visit in the visitor log. CC ID 16917 Physical and environmental protection Preventive
    Record the date and time of departure in the visitor log. CC ID 16897 Physical and environmental protection Preventive
    Record the type of identification used in the visitor log. CC ID 16916 Physical and environmental protection Preventive
    Log when the cabinet is accessed. CC ID 11674 Physical and environmental protection Detective
    Include the requestor's name in the physical access log. CC ID 16922 Physical and environmental protection Preventive
    Include who the incident was reported to in the incident management audit log. CC ID 16487 Operational management Preventive
    Include the information that was exchanged in the incident management audit log. CC ID 16995 Operational management Preventive
    Provide the reference database used to verify input data in the logging capability. CC ID 15018
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the reference database against which input data has been checked by the system; Article 12 3.(b)]
    System hardening through configuration management Preventive
    Configure the log to capture user queries and searches. CC ID 16479
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the input data for which the search has led to a match; Article 12 3.(c)]
    System hardening through configuration management Preventive
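    The Annex III point 1(a) logging minimums quoted above name two concrete data points: the reference database against which input data was checked (Article 12 3.(b)) and the input data for which the search led to a match (Article 12 3.(c)). A hedged sketch of a log entry carrying both fields, with a hypothetical function name and field names, might look like this:

    ```python
    from datetime import datetime, timezone

    def log_identification_event(log, reference_db_id, query_input, matched):
        """Append one Annex III point 1(a)-style logging entry.

        Hypothetical sketch: records the reference database checked
        (Article 12 3.(b)) and, when a match occurred, the input data
        that led to it (Article 12 3.(c)). Field names are assumptions.
        """
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reference_database": reference_db_id,
            # Record the query input only when the search produced a
            # match, mirroring the wording of Article 12 3.(c).
            "matched_input": query_input if matched else None,
            "match": matched,
        }
        log.append(entry)
        return entry

    events = []
    log_identification_event(events, "watchlist-eu-v3",
                             {"template_id": "t-42"}, matched=True)
    log_identification_event(events, "watchlist-eu-v3",
                             {"template_id": "t-43"}, matched=False)
    print(sum(1 for e in events if e["match"]))  # 1
    ```
    
    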
    Capture and maintain logs as official records. CC ID 06319
    [Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law. Article 26 6. ¶ 2]
    Records management Preventive
    Log the disclosure of personal data. CC ID 06628 Privacy protection for information and data Preventive
    Log the modification of personal data. CC ID 11844 Privacy protection for information and data Preventive
  • Monitor and Evaluate Occurrences
    15
    Analyze organizational objectives, functions, and activities. CC ID 00598 Leadership and high level objectives Preventive
    Include monitoring and analysis capabilities in the quality management program. CC ID 17153 Leadership and high level objectives Preventive
    Monitor the usage and capacity of critical assets. CC ID 14825 Monitoring and measurement Detective
    Monitor the usage and capacity of Information Technology assets. CC ID 00668
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Monitoring and measurement Detective
    Monitor systems for errors and faults. CC ID 04544 Monitoring and measurement Detective
    Monitor systems for inappropriate usage and other security violations. CC ID 00585
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)]
    Monitoring and measurement Detective
    Monitor for and react to when suspicious activities are detected. CC ID 00586
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Monitoring and measurement Detective
    Establish, implement, and maintain network monitoring operations. CC ID 16444 Monitoring and measurement Preventive
    Monitor and review retail payment activities, as necessary. CC ID 13541 Monitoring and measurement Detective
    Log account usage durations. CC ID 12117 Monitoring and measurement Detective
    Monitor personnel and third parties for compliance to the organizational compliance framework. CC ID 04726 Monitoring and measurement Detective
    Monitor the effectiveness of the cybersecurity risk management program. CC ID 16831 Audits and risk management Preventive
    Monitor the effectiveness of the cybersecurity supply chain risk management program. CC ID 16828 Audits and risk management Preventive
    Log the exit of a staff member to a facility or designated rooms within the facility. CC ID 11675 Physical and environmental protection Preventive
    Establish, implement, and maintain a post-market monitoring system. CC ID 15050
    [Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system. Article 72 1.
    The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities. Article 72 2.
    The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). Article 72 3.]
    Operational management Preventive
  • Physical and Environmental Protection
    11
    Create security zones in facilities, as necessary. CC ID 16295 Physical and environmental protection Preventive
    Configure the access control system to grant access only during authorized working hours. CC ID 12325 Physical and environmental protection Preventive
    Issue photo identification badges to all employees. CC ID 12326 Physical and environmental protection Preventive
    Report lost badges, stolen badges, and broken badges to the Security Manager. CC ID 12334 Physical and environmental protection Preventive
    Include name, date of entry, and validity period on disposable identification cards or badges. CC ID 12331 Physical and environmental protection Preventive
    Enforce dual control for badge assignments. CC ID 12328 Physical and environmental protection Preventive
    Enforce dual control for accessing unassigned identification cards or badges. CC ID 12327 Physical and environmental protection Preventive
    Refrain from imprinting the company name or company logo on identification cards or badges. CC ID 12282 Physical and environmental protection Preventive
    Use vandal resistant light fixtures for all security lighting. CC ID 16130 Physical and environmental protection Preventive
    Physically segregate business areas in accordance with organizational standards. CC ID 16718 Physical and environmental protection Preventive
    Ensure the storage conditions for artificial intelligence systems refrain from compromising compliance. CC ID 15030
    [{storage conditions} Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2. Article 23 4.
    {storage conditions} Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2. Article 24 3.]
    Operational management Detective
  • Process or Activity
    49
    Request extensions for submissions to governing bodies, as necessary. CC ID 16955 Leadership and high level objectives Preventive
    Review and approve the use of continuous security management systems. CC ID 13181 Monitoring and measurement Preventive
    Include the correlation and analysis of information obtained during testing in the continuous monitoring program. CC ID 14250 Monitoring and measurement Detective
    Identify risk management measures when testing in scope systems. CC ID 14960
    [High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section. Article 9 6.]
    Monitoring and measurement Detective
    Request an extension of the validity period of technical documentation assessment certificates, as necessary. CC ID 17228
    [Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid. Article 44 2.]
    Monitoring and measurement Preventive
    Define the validity period for technical documentation assessment certificates. CC ID 17227
    [Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid. Article 44 2.]
    Monitoring and measurement Preventive
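    The Article 44 2. validity ceilings quoted above (five years for Annex I systems, four years for Annex III systems) reduce to simple date arithmetic. The helper below is a hypothetical sketch: only the 5-year and 4-year ceilings come from the Regulation; the function name and the annex keys are illustrative assumptions.

    ```python
    from datetime import date

    # Maximum validity periods under Article 44 2. (in years).
    MAX_VALIDITY_YEARS = {"annex_i": 5, "annex_iii": 4}

    def certificate_expiry(issue_date, annex):
        """Latest permissible expiry for a technical documentation
        assessment certificate issued on issue_date.

        Hypothetical helper; extensions upon re-assessment (also capped
        at the same per-period ceilings) are not modelled here.
        """
        years = MAX_VALIDITY_YEARS[annex]
        try:
            return issue_date.replace(year=issue_date.year + years)
        except ValueError:
            # Issue date of 29 February: target year is not a leap
            # year, so fall back to 28 February.
            return issue_date.replace(year=issue_date.year + years, day=28)

    print(certificate_expiry(date(2025, 3, 1), "annex_iii"))  # 2029-03-01
    ```
    
    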
    Ensure protocols are free from injection flaws. CC ID 16401 Monitoring and measurement Preventive
    Approve the vulnerability management program. CC ID 15722 Monitoring and measurement Preventive
    Correct compliance violations. CC ID 13515
    [Providers of high-risk AI systems shall: take the necessary corrective actions and provide information as required in Article 20; Article 16 ¶ 1 (j)
    Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly. Article 20 1.
    {not be} A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken. Article 24 4.
    Where, in the course of that evaluation, the market surveillance authority or, where applicable the market surveillance authority in cooperation with the national public authority referred to in Article 77(1), finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation. Article 79 2. ¶ 2
    The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. Article 79 4.
    {high-risk AI system} Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period the market surveillance authority may prescribe. Article 80 2.
    The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99. Article 80 4.
    The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. Article 80 5.
    The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1. Article 82 2.
    Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe: Article 83 1.
    Where, having performed an evaluation under Article 79, after consulting the relevant national public authority referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system complies with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe. Article 82 1.]
    Monitoring and measurement Corrective
    Convert data into standard units before reporting metrics. CC ID 15507 Monitoring and measurement Corrective
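One way to operationalise this control is to normalise every raw measurement into a declared standard unit before it enters a metrics report. The sketch below is illustrative only; the unit names, conversion factors, and metric names are assumptions, not part of the control or the Regulation.

```python
# Hypothetical sketch for CC ID 15507: convert measurements to standard
# units (here, milliseconds and bytes) before reporting metrics.

# conversion factors into the chosen standard unit
TO_STANDARD = {
    ("s", "ms"): 1000.0,
    ("ms", "ms"): 1.0,
    ("KiB", "B"): 1024.0,
    ("B", "B"): 1.0,
}

def to_standard(value: float, unit: str, standard: str) -> float:
    """Convert a raw measurement into the standard reporting unit."""
    try:
        return value * TO_STANDARD[(unit, standard)]
    except KeyError:
        raise ValueError(f"no conversion defined from {unit} to {standard}")

# raw metrics arrive in mixed units; convert them all before reporting
raw = [("latency", 0.25, "s"), ("latency", 180.0, "ms")]
report = [(name, to_standard(v, u, "ms")) for name, v, u in raw]
# report -> [("latency", 250.0), ("latency", 180.0)]
```

Keeping the conversion table in one place makes the reported figures comparable across sources and auditable against the declared standard unit.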
    Review documentation to determine the effectiveness of in scope controls. CC ID 16522 Audits and risk management Preventive
    Establish, implement, and maintain Data Protection Impact Assessments. CC ID 14830
    [Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680. Article 26 9.]
    Audits and risk management Preventive
    Assess the potential level of business impact risk associated with the loss of personnel. CC ID 17172 Audits and risk management Detective
    Assess the potential level of business impact risk associated with individuals. CC ID 17170 Audits and risk management Detective
    Assess the potential level of business impact risk associated with non-compliance. CC ID 17169 Audits and risk management Detective
    Assess the potential level of business impact risk associated with the natural environment. CC ID 17171 Audits and risk management Detective
    Approve the risk acceptance level, as necessary. CC ID 17168 Audits and risk management Preventive
    Analyze supply chain risk management procedures, as necessary. CC ID 13198 Audits and risk management Detective
    Assign virtual escorting to authorized personnel. CC ID 16440 Technical security Preventive
    Implement physical identification processes. CC ID 13715 Physical and environmental protection Preventive
    Refrain from using knowledge-based authentication for in-person identity verification. CC ID 13717 Physical and environmental protection Preventive
    Restrict physical access mechanisms to authorized parties. CC ID 16924 Physical and environmental protection Preventive
    Use systems in accordance with the standard operating procedures manual. CC ID 15049
    [Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 53 4.]
    Operational management Preventive
    Provide support for information sharing activities. CC ID 15644 Operational management Preventive
    Contain the incident to prevent further loss. CC ID 01751 Operational management Corrective
    Conduct incident investigations, as necessary. CC ID 13826
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Operational management Detective
    Provide the reasons for adverse decisions made by artificial intelligence systems. CC ID 17253 Operational management Preventive
    Authorize artificial intelligence systems for use under defined conditions. CC ID 17210
    [{not be taken} {adverse effect} The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system. Article 5 3. ¶ 2]
    Operational management Preventive
    Discard the outputs of the artificial intelligence system when authorizations are denied. CC ID 17225
    [In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.]
    Operational management Preventive
    Ensure the artificial intelligence system performs at an acceptable level of accuracy, robustness, and cybersecurity. CC ID 15024
    [High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. Article 15 1.
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. Article 55 1.(d)]
    Operational management Preventive
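The lifecycle-performance obligation above can be supported by routinely comparing measured accuracy against the level declared for the system. The following minimal sketch assumes a single accuracy metric and an illustrative threshold; neither value comes from the Regulation.

```python
# Hypothetical sketch for Article 15-style monitoring: flag when measured
# accuracy falls below the declared level. Threshold is illustrative.

DECLARED_ACCURACY = 0.95  # assumed declared level, not a regulatory value

def check_accuracy(y_true, y_pred, threshold=DECLARED_ACCURACY):
    """Return (accuracy, compliant) for one evaluation window."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    return accuracy, accuracy >= threshold

acc, ok = check_accuracy([1, 0, 1, 1], [1, 0, 1, 0])
# acc == 0.75, ok is False -> a non-compliant window that should trigger
# the provider's corrective-action and incident processes
```

In practice this check would run on each evaluation window across the system's lifecycle, with failures feeding the corrective-action workflow rather than merely being logged.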
    Take into account the nature of the situation when determining the possibility of using 'real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement. CC ID 15020
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used; Article 5 2.(a)]
    Operational management Preventive
    Use a remote biometric identification system under defined conditions. CC ID 15016
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    {post-remote biometric identification system} Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. Article 26 10. ¶ 1]
    Operational management Preventive
    Implement measures to enable personnel assigned to human oversight to intervene or interrupt the operation of the artificial intelligence system. CC ID 15093
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state. Article 14 4.(e)]
    Operational management Preventive
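The 'stop' button required by Article 14 4.(e) can be approximated in software as an interrupt flag that the oversight role sets and the processing loop checks, halting the system in a safe state. This is a minimal sketch under those assumptions; all names are hypothetical.

```python
# Illustrative 'stop' control for human oversight: a shared event lets the
# overseer interrupt the inference loop so it halts in a safe state.
import threading

stop_requested = threading.Event()

def safe_shutdown():
    """Bring the system to a halt in a defined safe state."""
    print("system halted in safe state")

def inference_loop(batches):
    for batch in batches:
        if stop_requested.is_set():  # overseer pressed the 'stop' button
            safe_shutdown()
            return "stopped"
        # ... process batch ...
    return "completed"

# the human overseer's stop button simply sets the event:
stop_requested.set()
result = inference_loop(range(10))
# result == "stopped"
```

The essential property is that the check happens at every safe interruption point, so a stop request never leaves the system mid-operation in an undefined state.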
    Reassess the designation of artificial intelligence systems. CC ID 17230
    [Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision. Article 52 5.]
    Operational management Preventive
    Take into account the consequences for the rights and freedoms of persons when using ‘real-time’ remote biometric identification systems for law enforcement. CC ID 14957
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. Article 5 2.(b)]
    Operational management Preventive
    Allow the use of 'real-time' remote biometric identification systems for law enforcement under defined conditions. CC ID 14955
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: Article 5 2.]
    Operational management Preventive
    Refrain from using remote biometric identification systems under defined conditions. CC ID 14953
    [The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: Article 5 1.(h)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; Article 5 1.(h)(i)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; Article 5 1.(h)(ii)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. Article 5 1.(h)(iii)
    For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted. Article 26 10. ¶ 2
    {not be used} {not be taken} {adverse effect} In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems. Article 26 10. ¶ 3]
    Operational management Preventive
    Prohibit the use of artificial intelligence systems under defined conditions. CC ID 14951
    [The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm; Article 5 1.(a)
    The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm; Article 5 1.(b)
    The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: Article 5 1.(c)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity; Article 5 1.(d)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; Article 5 1.(e)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons; Article 5 1.(f)
    {religious beliefs} The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement; Article 5 1.(g)
    In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Operational management Preventive
    Identify the components in a set of web pages that consistently have the same functionality. CC ID 15116 Systems design, build, and implementation Preventive
    Confirm the individual's identity before granting an opt-out request. CC ID 16813 Privacy protection for information and data Preventive
    Approve the approval application unless applicant has been convicted. CC ID 16603 Privacy protection for information and data Preventive
    Provide the supervisory authority with any information requested by the supervisory authority. CC ID 12606
    [Notified bodies shall inform the notifying authority of the following: any circumstances affecting the scope of or conditions for notification; Article 45 1.(c)
    Notified bodies shall inform the notifying authority of the following: any request for information which they have received from market surveillance authorities regarding conformity assessment activities; Article 45 1.(d)
    Notified bodies shall inform the notifying authority of the following: on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting. Article 45 1.(e)
    Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data. Article 5 4.
    Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk. Article 52 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide the AI Office, upon a reasoned request, with all the information and documentation, including that referred to in point (b), necessary to demonstrate compliance with the obligations in this Chapter; Article 54 3.(c)
    The provider of the general-purpose AI model concerned, or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading. Article 91 5.
    The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned. Article 92 5.]
    Privacy protection for information and data Preventive
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is to support the treatment of the individual. CC ID 11969 Privacy protection for information and data Preventive
    Refrain from installing software on an individual's computer unless acting in accordance with a court order. CC ID 14000 Privacy protection for information and data Preventive
    Remove or uninstall software from an individual's computer, as necessary. CC ID 13998 Privacy protection for information and data Preventive
    Remove or uninstall software from an individual's computer when consent is revoked. CC ID 13997 Privacy protection for information and data Preventive
    Define the fee structure for the appeal process. CC ID 16532 Privacy protection for information and data Preventive
    Define the time requirements for the appeal process. CC ID 16531 Privacy protection for information and data Preventive
    Formalize client and third party relationships with contracts or nondisclosure agreements. CC ID 00794
    [The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence. Article 25 4. ¶ 1]
    Third Party and supply chain oversight Detective
  • Records Management
    25
    Maintain vulnerability scan reports as organizational records. CC ID 12092 Monitoring and measurement Preventive
    Archive appropriate records, logs, and database tables. CC ID 06321
    [Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data. Article 19 1.
    Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation kept under the relevant financial services law. Article 19 2.]
    Records management Preventive
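The Article 19 retention rule above (logs kept for at least six months, longer where other Union or national law requires) can be enforced with a simple purge gate. The sketch below uses an approximate 183-day floor and a `legal_hold` flag as assumptions; names are hypothetical.

```python
# Minimal sketch of an Article 19-style retention floor: a log entry may
# only be purged once it is older than ~six months and not subject to any
# other legal retention requirement.
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # approximate six-month floor

def may_delete(log_created_at: datetime, now: datetime,
               legal_hold: bool = False) -> bool:
    """True only if the entry is past the retention floor and not on hold."""
    if legal_hold:
        return False
    return now - log_created_at >= MIN_RETENTION

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
old = datetime(2024, 5, 1, tzinfo=timezone.utc)   # ~245 days old
new = datetime(2024, 11, 1, tzinfo=timezone.utc)  # ~61 days old
# may_delete(old, now) -> True; may_delete(new, now) -> False
```

A purge job built on such a gate deletes nothing before the floor, which is the direction the Regulation cares about; longer sector-specific retention (e.g. financial services law under Article 19 2.) is modelled here by the hold flag.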
    Retain records in accordance with applicable requirements. CC ID 00968
    [The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the technical documentation referred to in Article 11; Article 18 1.(a)
    Providers of high-risk AI systems shall: keep the documentation referred to in Article 18; Article 16 ¶ 1 (d)
    Providers of high-risk AI systems shall: when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19; Article 16 ¶ 1 (e)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the documentation concerning the quality management system referred to in Article 17; Article 18 1.(b)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the documentation concerning the changes approved by notified bodies, where applicable; Article 18 1.(c)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the decisions and other documents issued by the notified bodies, where applicable; Article 18 1.(d)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the EU declaration of conformity referred to in Article 47. Article 18 1.(e)
    Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data. Article 19 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: keep at the disposal of the competent authorities and national authorities or bodies referred to in Article 74(10), for a period of 10 years after the high-risk AI system has been placed on the market or put into service, the contact details of the provider that appointed the authorised representative, a copy of the EU declaration of conformity referred to in Article 47, the technical documentation and, if applicable, the certificate issued by the notified body; Article 22 3.(b)
    Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU declaration of conformity referred to in Article 47. Article 23 5.
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and national competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, and the contact details of the provider that appointed the authorised representative; Article 54 3.(b)
    Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data. Article 26 6. ¶ 1]
    Records management Preventive
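The retention obligations cited above set two distinct clocks: ten years from placing on the market or putting into service for provider documentation (Articles 18, 22(3)(b), 23(5), 47(1), 54(3)(b)), and at least six months for automatically generated logs (Articles 19(1), 26(6)). A minimal sketch of how such a schedule might be encoded in a records management system follows; the record-type names and the `RETENTION` mapping are hypothetical, not prescribed by the Regulation.

```python
from datetime import date, timedelta

# Hypothetical retention schedule reflecting the periods quoted above.
# Ten years runs from placing on the market / putting into service;
# logs must be kept at least six months unless other Union or
# national law provides otherwise.
RETENTION = {
    "technical_documentation": timedelta(days=365 * 10),
    "quality_management_system_docs": timedelta(days=365 * 10),
    "notified_body_decisions": timedelta(days=365 * 10),
    "eu_declaration_of_conformity": timedelta(days=365 * 10),
    "automatic_logs": timedelta(days=183),  # six-month minimum only
}

def earliest_disposal_date(record_type: str, start: date) -> date:
    """Earliest date a record may be disposed of under the schedule."""
    return start + RETENTION[record_type]
```

Note that the six-month figure is a floor: the logs must be kept "for a period appropriate to the intended purpose", which may well be longer.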
    Establish and maintain access controls for all records. CC ID 00371
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations; Article 10 5.(c)
    {training data} {validation data} {testing data} Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other relevant technical means and tools enabling remote access. Article 74 12.]
    Records management Preventive
    Collect and retain disclosure authorizations for each data subject. CC ID 13434 Privacy protection for information and data Preventive
    Refrain from destroying records being inspected or reviewed. CC ID 13015 Privacy protection for information and data Preventive
    Include the data protection officer's contact information in the record of processing activities. CC ID 12640 Privacy protection for information and data Preventive
    Include the data processor's contact information in the record of processing activities. CC ID 12657 Privacy protection for information and data Preventive
    Include the data processor's representative's contact information in the record of processing activities. CC ID 12658 Privacy protection for information and data Preventive
    Include a general description of the implemented security measures in the record of processing activities. CC ID 12641 Privacy protection for information and data Preventive
    Include a description of the data subject categories in the record of processing activities. CC ID 12659 Privacy protection for information and data Preventive
    Include the purpose of processing restricted data in the record of processing activities. CC ID 12663
    [{was necessary} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data. Article 10 5.(f)]
    Privacy protection for information and data Preventive
    Include the personal data processing categories in the record of processing activities. CC ID 12661 Privacy protection for information and data Preventive
    Include the time limits for erasing each data category in the record of processing activities. CC ID 12690 Privacy protection for information and data Preventive
    Include the data recipient categories to whom restricted data has been or will be disclosed in the record of processing activities. CC ID 12664 Privacy protection for information and data Preventive
    Include a description of the personal data categories in the record of processing activities. CC ID 12660 Privacy protection for information and data Preventive
    Include the joint data controller's contact information in the record of processing activities. CC ID 12639 Privacy protection for information and data Preventive
    Include the data controller's representative's contact information in the record of processing activities. CC ID 12638 Privacy protection for information and data Preventive
    Include documentation of the transferee's safeguards for transferring restricted data in the record of processing activities. CC ID 12643 Privacy protection for information and data Preventive
    Include the identification of transferees for transferring restricted data in the record of processing activities. CC ID 12642 Privacy protection for information and data Preventive
    Include the data controller's contact information in the record of processing activities. CC ID 12637 Privacy protection for information and data Preventive
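The record-of-processing-activities fields enumerated in the controls above (CC IDs 12637-12664 and 12690) mirror Article 30 of Regulation (EU) 2016/679. One way to keep those fields together in a single structure is sketched below; the class and field names are illustrative choices matching the controls, not a mandated schema.

```python
from dataclasses import dataclass, field

# Illustrative structure for a record of processing activities (ROPA).
# Field names are hypothetical, chosen to mirror the controls above.
@dataclass
class ProcessingRecord:
    controller_contact: str                       # CC ID 12637
    controller_representative_contact: str        # CC ID 12638
    joint_controller_contact: str                 # CC ID 12639
    dpo_contact: str                              # CC ID 12640
    processor_contact: str                        # CC ID 12657
    processor_representative_contact: str         # CC ID 12658
    purposes: list[str] = field(default_factory=list)                  # CC ID 12663
    data_subject_categories: list[str] = field(default_factory=list)   # CC ID 12659
    personal_data_categories: list[str] = field(default_factory=list)  # CC ID 12660
    processing_categories: list[str] = field(default_factory=list)     # CC ID 12661
    recipient_categories: list[str] = field(default_factory=list)      # CC ID 12664
    erasure_time_limits: dict[str, str] = field(default_factory=dict)  # CC ID 12690
    security_measures_summary: str = ""           # CC ID 12641
    transferees: list[str] = field(default_factory=list)               # CC ID 12642
    transfer_safeguards: list[str] = field(default_factory=list)       # CC ID 12643
```

For processing of special categories of personal data under Article 10(5)(f) of the AI Act, the record would additionally carry the reasons why such processing was strictly necessary for bias detection and correction.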
    Refrain from disclosing Individually Identifiable Health Information when in violation of territorial or federal law. CC ID 11966 Privacy protection for information and data Preventive
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is permitted with the consent of the data subject. CC ID 11970 Privacy protection for information and data Preventive
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is permitted by law. CC ID 11976 Privacy protection for information and data Preventive
    Authorize the transfer of restricted data in accordance with organizational standards. CC ID 16428 Privacy protection for information and data Preventive
  • Systems Continuity
    3
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular
    IMPACT ZONE    CLASS
    Execute fail-safe procedures when an emergency occurs. CC ID 07108
    [{backup plans} The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. Article 15 4. ¶ 2]
    Operational and Systems Continuity Preventive
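Article 15(4) permits robustness to be achieved "through technical redundancy solutions, which may include backup or fail-safe plans". A minimal illustration of that fail-over pattern follows; the wrapper and the component functions are placeholders, not anything the Regulation specifies.

```python
# Minimal fail-over sketch for Article 15(4)-style technical redundancy:
# try the primary component, fall back to a backup, and finally return a
# conservative fail-safe default. All callables here are placeholders.
def with_failover(primary, backup, fail_safe_value):
    def run(x):
        for component in (primary, backup):
            try:
                return component(x)
            except Exception:
                continue  # in practice: record the fault for the Article 12 logs
        return fail_safe_value  # fail-safe: a known-conservative answer
    return run

def flaky_primary(x):
    raise RuntimeError("primary unavailable")

# Usage: primary fails, backup answers; if both fail, 0 is returned.
scorer = with_failover(flaky_primary, lambda x: x * 2, fail_safe_value=0)
```

The choice of fail-safe value is itself a design decision that the risk management measures under Article 9 would need to justify.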
    Approve or deny third party recovery plans, as necessary. CC ID 17124 Third Party and supply chain oversight Preventive
    Review third party recovery plans. CC ID 17123 Third Party and supply chain oversight Detective
  • Systems Design, Build, and Implementation
    10
    Separate foreground from background when designing and building content. CC ID 15125 Operational management Preventive
    Establish, implement, and maintain an artificial intelligence system. CC ID 14943
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.]
    Operational management Preventive
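Article 50(2) requires outputs of generative systems to be "marked in a machine-readable format and detectable as artificially generated or manipulated". For text pipelines, one simple machine-readable approach is a provenance envelope carried with the content. The sketch below illustrates the idea only; the key names are hypothetical, and real deployments would follow a recognised standard such as C2PA content credentials rather than an ad hoc format.

```python
import json

# Illustrative provenance envelope: the payload travels with a
# machine-readable marker identifying it as AI-generated.
# Keys are hypothetical, for demonstration only.
def mark_as_generated(content: str, generator: str) -> str:
    envelope = {
        "content": content,
        "provenance": {"ai_generated": True, "generator": generator},
    }
    return json.dumps(envelope)

def is_marked_generated(serialized: str) -> bool:
    """Detect the machine-readable marker; unmarked input yields False."""
    try:
        data = json.loads(serialized)
        return bool(data.get("provenance", {}).get("ai_generated"))
    except (json.JSONDecodeError, AttributeError):
        return False
```

For image, audio and video content, robust watermarking rather than detachable metadata is the more common technical direction, consistent with the Article's reference to the state of the art.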
    Include mitigation measures to address biased output during the development of artificial intelligence systems. CC ID 15047
    [High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures. Article 15 4. ¶ 3]
    Operational management Corrective
    Implement an acceptable level of accuracy, robustness, and cybersecurity in the development of artificial intelligence systems. CC ID 15022
    [High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. Article 15 1.]
    Operational management Preventive
    Develop artificial intelligence systems involving the training of models with data sets that meet the quality criteria. CC ID 14996
    [{training data} {validation data} {testing data} High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used. Article 10 1.]
    Operational management Preventive
    Initiate the System Development Life Cycle planning phase. CC ID 06266 Systems design, build, and implementation Preventive
    Design and develop built-in redundancies, as necessary. CC ID 13064
    [{be resilient} {technical measures} High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. Article 15 4. ¶ 1]
    Systems design, build, and implementation Preventive
    Initiate the System Development Life Cycle development phase or System Development Life Cycle build phase. CC ID 06267 Systems design, build, and implementation Preventive
    Allow personal data collected for other purposes to be used to develop and test artificial intelligence systems in regulatory sandboxes under defined conditions. CC ID 15044
    [In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems; Article 59 1.(a)(i)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures; Article 59 1.(a)(ii)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: energy sustainability; Article 59 1.(a)(iii)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: safety and resilience of transport systems and mobility, critical infrastructure and networks; Article 59 1.(a)(iv)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: efficiency and quality of public administration and public services; Article 59 1.(a)(v)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data; Article 59 1.(b)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing; Article 59 1.(c)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data; Article 59 1.(d)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: providers can further share the originally collected data only in accordance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox; Article 59 1.(e)
    {do not lead} {do not affect} In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data; Article 59 1.(f)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law; Article 59 1.(h)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV; Article 59 1.(i)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities. Article 59 1.(j)
    {technical measures} In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period; Article 59 1.(g)]
    Systems design, build, and implementation Preventive
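The sandbox provisions quoted above are cumulative: Article 59(1) allows personal data collected for other purposes to be processed "when all of the following conditions are met". That all-or-nothing structure can be made explicit as a checklist; the condition identifiers below are hypothetical shorthand for points (a) through (j), not terms from the Regulation.

```python
# Hypothetical checklist for the cumulative Article 59(1) conditions:
# sandbox processing is permissible only if every condition holds.
ARTICLE_59_1_CONDITIONS = (
    "substantial_public_interest_area",            # (a)
    "non_personal_data_insufficient",              # (b)
    "risk_monitoring_and_response_mechanisms",     # (c)
    "isolated_protected_environment",              # (d)
    "sharing_limited_to_data_protection_law",      # (e)
    "no_decisions_affecting_data_subjects",        # (f)
    "technical_and_organisational_safeguards",     # (g)
    "processing_logs_kept",                        # (h)
    "process_description_in_technical_docs",       # (i)
    "project_summary_published",                   # (j)
)

def sandbox_processing_permitted(assessment: dict) -> bool:
    """True only when every Article 59(1) condition is assessed as met."""
    return all(assessment.get(c, False) for c in ARTICLE_59_1_CONDITIONS)
```

Because `all()` short-circuits on the first unmet condition, a single failed assessment is sufficient to bar the processing, which matches the legal structure of the paragraph.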
    Initiate the System Development Life Cycle implementation phase. CC ID 06268 Systems design, build, and implementation Preventive
  • Technical Security
    24
    Establish, implement, and maintain log analysis tools. CC ID 17056 Monitoring and measurement Preventive
    Identify cybersecurity events in event logs and audit logs. CC ID 13206 Monitoring and measurement Detective
    Erase payment applications when suspicious activity is confirmed. CC ID 12193 Monitoring and measurement Corrective
    Conduct Red Team exercises, as necessary. CC ID 12131 Monitoring and measurement Detective
    Test security systems and associated security procedures, as necessary. CC ID 11901
    [{testing in real-world conditions} Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5. Article 60 1. ¶ 1]
    Monitoring and measurement Detective
    Establish, implement, and maintain a port scan baseline for all in scope systems. CC ID 12134 Monitoring and measurement Detective
    Prevent adversaries from disabling or compromising security controls. CC ID 17057 Monitoring and measurement Preventive
    Perform vulnerability scans prior to installing payment applications. CC ID 12192 Monitoring and measurement Detective
    Implement scanning tools, as necessary. CC ID 14282 Monitoring and measurement Detective
    Establish, implement, and maintain an exception management process for vulnerabilities that cannot be remediated. CC ID 13859 Monitoring and measurement Corrective
    Establish, implement, and maintain a network activity baseline. CC ID 13188 Monitoring and measurement Detective
    Analyze the organization's information security environment. CC ID 13122 Audits and risk management Preventive
    Control all methods of remote access and teleworking. CC ID 00559
    [{training data} {validation data} {testing data} Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other relevant technical means and tools enabling remote access. Article 74 12.]
    Technical security Preventive
    Refrain from allowing individuals to self-enroll into multifactor authentication from untrusted devices. CC ID 17173 Technical security Preventive
    Implement phishing-resistant multifactor authentication techniques. CC ID 16541 Technical security Preventive
    Limit the source addresses from which remote administration is performed. CC ID 16393 Technical security Preventive
    Implement audio protection controls on telephone systems in controlled areas. CC ID 16455 Physical and environmental protection Preventive
    Shut down systems when an integrity violation is detected, as necessary. CC ID 10679
    [Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Operational management Corrective
    Restrict the exporting of files and directories, as necessary. CC ID 16315 System hardening through configuration management Preventive
    Establish, implement, and maintain a notification system for opt-out requests. CC ID 16880 Privacy protection for information and data Preventive
    Implement technical controls that limit processing restricted data for specific purposes. CC ID 12646 Privacy protection for information and data Preventive
    Render unrecoverable sensitive authentication data after authorization is approved. CC ID 11952 Privacy protection for information and data Preventive
    Encrypt, truncate, or tokenize data fields, as necessary. CC ID 06850 Privacy protection for information and data Preventive
    Implement security measures to protect personal data. CC ID 13606
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations; Article 10 5.(c)]
    Privacy protection for information and data Preventive
  • Testing
    25
    Test compliance controls for proper functionality. CC ID 00660
    [High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section. Article 9 6.]
    Monitoring and measurement Detective
    Enable security controls which were disabled to conduct testing. CC ID 17031 Monitoring and measurement Preventive
    Disable dedicated accounts after testing is complete. CC ID 17033 Monitoring and measurement Preventive
    Protect systems and data during testing in the production environment. CC ID 17198 Monitoring and measurement Preventive
    Define the criteria to conduct testing in the production environment. CC ID 17197
    [{high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider has drawn up a real-world testing plan and submitted it to the market surveillance authority in the Member State where the testing in real world conditions is to be conducted; Article 60 4.(a)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the market surveillance authority in the Member State where the testing in real world conditions is to be conducted has approved the testing in real world conditions and the real-world testing plan; where the market surveillance authority has not provided an answer within 30 days, the testing in real world conditions and the real-world testing plan shall be understood to have been approved; where national law does not provide for a tacit approval, the testing in real world conditions shall remain subject to an authorisation; Article 60 4.(b)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider, with the exception of providers or prospective providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, and high-risk AI systems referred to in point 2 of Annex III has registered the testing in real world conditions in accordance with Article 71(4) with a Union-wide unique single identification number and with the information specified in Annex IX; the provider or prospective provider of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, has registered the testing in real-world conditions in the secure non-public section of the EU database according to Article 49(4), point (d), with a Union-wide unique single identification number and with the information specified therein; the provider or prospective provider of high-risk AI systems referred to in point 2 of Annex III has registered the testing in real-world conditions in accordance with Article 49(5); Article 60 4.(c)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider conducting the testing in real world conditions is established in the Union or has appointed a legal representative who is established in the Union; Article 60 4.(d)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: data collected and processed for the purpose of the testing in real world conditions shall be transferred to third countries only provided that appropriate and applicable safeguards under Union law are implemented; Article 60 4.(e)
    {high-risk AI systems} {outside AI regulatory sandbox} {no longer than necessary} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not longer than six months, which may be extended for an additional period of six months, subject to prior notification by the provider or prospective provider to the market surveillance authority, accompanied by an explanation of the need for such an extension; Article 60 4.(f)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the subjects of the testing in real world conditions who are persons belonging to vulnerable groups due to their age or disability, are appropriately protected; Article 60 4.(g)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the testing in real world conditions is effectively overseen by the provider or prospective provider, as well as by deployers or prospective deployers through persons who are suitably qualified in the relevant field and have the necessary capacity, training and authority to perform their tasks; Article 60 4.(j)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded. Article 60 4.(k)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more deployers or prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their decision to participate, and given the relevant instructions for use of the AI system referred to in Article 13; the provider or prospective provider and the deployer or prospective deployer shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions under this Regulation and under other applicable Union and national law; Article 60 4.(h)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the subjects of the testing in real world conditions have given informed consent in accordance with Article 61, or in the case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subjects, and their personal data shall be deleted after the test is performed; Article 60 4.(i)]
    Monitoring and measurement Preventive
    Suspend testing in a production environment, as necessary. CC ID 17231
    [Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Monitoring and measurement Preventive
    Test in scope systems for segregation of duties, as necessary. CC ID 13906 Monitoring and measurement Detective
    Include test requirements for the use of production data in the testing program. CC ID 17201 Monitoring and measurement Preventive
    Include test requirements for the use of human subjects in the testing program. CC ID 16222
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Monitoring and measurement Preventive
    Test the in scope system in accordance with its intended purpose. CC ID 14961
    [The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. Article 9 8.
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; Article 55 1.(a)]
    Monitoring and measurement Preventive
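Article 9(8), cited above, requires testing against metrics and probabilistic thresholds that are defined *before* testing and are appropriate to the system's intended purpose. A minimal sketch of such a release gate follows; the metric names, threshold values, and function names are all hypothetical illustrations, not anything prescribed by the Regulation.

```python
# Hypothetical sketch: gate a high-risk AI system's release on metrics and
# thresholds fixed prior to testing, in the spirit of Article 9(8).
# All names and values below are illustrative assumptions.

# Thresholds defined before any test run, appropriate to the intended purpose.
PREDEFINED_THRESHOLDS = {
    "accuracy": 0.95,             # minimum acceptable accuracy
    "false_positive_rate": 0.02,  # maximum acceptable false-positive rate
}

def passes_predefined_thresholds(measured: dict) -> bool:
    """Return True only if every predefined metric threshold is met."""
    if measured["accuracy"] < PREDEFINED_THRESHOLDS["accuracy"]:
        return False
    if measured["false_positive_rate"] > PREDEFINED_THRESHOLDS["false_positive_rate"]:
        return False
    return True

# Example: results from one test run during development.
test_run = {"accuracy": 0.97, "false_positive_rate": 0.01}
print(passes_predefined_thresholds(test_run))  # True: this run clears the gate
```

Because the thresholds are fixed in advance, a failing run cannot be "passed" by adjusting the bar after the fact, which is the point of defining the metrics prior to testing.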
    Perform network testing in accordance with organizational standards. CC ID 16448 Monitoring and measurement Preventive
    Test user accounts in accordance with organizational standards. CC ID 16421 Monitoring and measurement Preventive
    Opt out of third party conformity assessments when the system meets harmonized standards. CC ID 15096
    [Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter. Article 43 3. ¶ 3]
    Monitoring and measurement Preventive
    Perform conformity assessments, as necessary. CC ID 15095
    [Providers of high-risk AI systems shall: ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service; Article 16 ¶ 1 (f)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of the high-risk AI system; Article 23 1.(a)
    For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. Article 43 3. ¶ 1
    High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. Article 43 4. ¶ 1
    By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Monitoring and measurement Detective
    Use dedicated user accounts when conducting penetration testing. CC ID 13728 Monitoring and measurement Detective
    Remove dedicated user accounts after penetration testing is concluded. CC ID 13729 Monitoring and measurement Corrective
    Conduct scanning activities in a test environment. CC ID 17036 Monitoring and measurement Preventive
    Test the system for unvalidated input. CC ID 01318
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Monitoring and measurement Detective
    Document and maintain test results. CC ID 17028
    [{high-risk artificial intelligence system} A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment. Article 6 4.]
    Monitoring and measurement Preventive
    Determine the effectiveness of in scope controls. CC ID 06984
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.]
    Audits and risk management Detective
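Article 50(2), cited above, requires outputs of generative AI systems to be "marked in a machine-readable format and detectable as artificially generated or manipulated", with solutions that are effective, interoperable, and robust. One possible shape for such a marker is sketched below as a JSON sidecar binding a provenance claim to a content hash. The field names and sidecar approach are hypothetical illustrations; real deployments would follow an interoperable standard such as C2PA/Content Credentials rather than an ad hoc format.

```python
# Hypothetical sketch: a machine-readable "artificially generated" marker,
# in the spirit of Article 50(2). Field names and format are assumptions.
import hashlib
import json

def mark_as_ai_generated(content: bytes, generator: str) -> str:
    """Return a JSON marker binding this content's hash to a provenance claim."""
    marker = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(marker, sort_keys=True)

def is_marked_ai_generated(content: bytes, marker_json: str) -> bool:
    """Detect the marker and verify it matches this exact content."""
    marker = json.loads(marker_json)
    return (marker.get("ai_generated") is True
            and marker.get("content_sha256")
                == hashlib.sha256(content).hexdigest())

text = b"A synthetic news summary produced by a model."
marker = mark_as_ai_generated(text, "example-model-v1")
print(is_marked_ai_generated(text, marker))            # True
print(is_marked_ai_generated(b"edited text", marker))  # False: hash mismatch
```

Tying the marker to a content hash makes it detectable by machines and gives a crude robustness property: the claim no longer verifies if the content is silently altered.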
    Perform risk assessments for all target environments, as necessary. CC ID 06452
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; Article 55 1.(b)]
    Audits and risk management Preventive
    Determine the effectiveness of risk control measures. CC ID 06601
    [The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. Article 9 4.]
    Audits and risk management Detective
    Conduct web accessibility testing in accordance with organizational standards. CC ID 16950 Operational management Preventive
    Establish, implement, and maintain sandboxes. CC ID 14946 Systems design, build, and implementation Preventive
    Refrain from storing data elements containing payment card full magnetic stripe data. CC ID 04757 Privacy protection for information and data Detective
  • Training
    25
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular
    IMPACT ZONE    CLASS
    Provide new hires limited network access to complete computer-based training. CC ID 17008 Human Resources management Preventive
    Submit applications for professional certification. CC ID 16192 Human Resources management Preventive
    Approve training plans, as necessary. CC ID 17193 Human Resources management Preventive
    Train personnel to recognize conditions of diseases or sicknesses, as necessary. CC ID 14383 Human Resources management Detective
    Include prevention techniques in training regarding diseases or sicknesses. CC ID 14387 Human Resources management Preventive
    Include the concept of disease clusters when training to recognize conditions of diseases or sicknesses. CC ID 14386 Human Resources management Preventive
    Train personnel to identify and communicate symptoms of exposure to disease or sickness. CC ID 14385 Human Resources management Detective
    Establish, implement, and maintain a list of tasks to include in the training plan. CC ID 17101 Human Resources management Preventive
    Designate training facilities in the training plan. CC ID 16200 Human Resources management Preventive
    Include insider threats in the security awareness program. CC ID 16963 Human Resources management Preventive
    Conduct personal data processing training. CC ID 13757 Human Resources management Preventive
    Include in personal data processing training how to provide the contact information for the categories of personal data the organization may disclose. CC ID 13758 Human Resources management Preventive
    Complete security awareness training prior to being granted access to information systems or data. CC ID 17009 Human Resources management Preventive
    Include media protection in the security awareness program. CC ID 16368 Human Resources management Preventive
    Include identity and access management in the security awareness program. CC ID 17013 Human Resources management Preventive
    Include the encryption process in the security awareness program. CC ID 17014 Human Resources management Preventive
    Include physical security in the security awareness program. CC ID 16369 Human Resources management Preventive
    Include data management in the security awareness program. CC ID 17010 Human Resources management Preventive
    Include e-mail and electronic messaging in the security awareness program. CC ID 17012 Human Resources management Preventive
    Include updates on emerging issues in the security awareness program. CC ID 13184 Human Resources management Preventive
    Include cybersecurity in the security awareness program. CC ID 13183 Human Resources management Preventive
    Include implications of non-compliance in the security awareness program. CC ID 16425 Human Resources management Preventive
    Include social networking in the security awareness program. CC ID 17011 Human Resources management Preventive
    Include the acceptable use policy in the security awareness program. CC ID 15487 Human Resources management Preventive
    Train all personnel and third parties on how to recognize and report system failures. CC ID 13475 Human Resources management Preventive
Common Controls and mandates by Classification
240 Mandated Controls - bold    64 Implied Controls - italic    779 Implementation

There are three types of Common Control classifications: corrective, detective, and preventive. Common Controls at the top level have the default assignment of Impact Zone.

Number of Controls
1083 Total
  • Corrective
    21
    Report errors and faults to the appropriate personnel, as necessary. CC ID 14296
    [Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Monitoring and measurement Communicate
    Erase payment applications when suspicious activity is confirmed. CC ID 12193 Monitoring and measurement Technical Security
    Remove dedicated user accounts after penetration testing is concluded. CC ID 13729 Monitoring and measurement Testing
    Disallow the use of payment applications when a vulnerability scan report indicates vulnerabilities are present. CC ID 12188 Monitoring and measurement Configuration
    Establish, implement, and maintain an exception management process for vulnerabilities that cannot be remediated. CC ID 13859 Monitoring and measurement Technical Security
    Correct compliance violations. CC ID 13515
    [Providers of high-risk AI systems shall: take the necessary corrective actions and provide information as required in Article 20; Article 16 ¶ 1 (j)
    Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly. Article 20 1.
    {not be} A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken. Article 24 4.
    Where, in the course of that evaluation, the market surveillance authority or, where applicable the market surveillance authority in cooperation with the national public authority referred to in Article 77(1), finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation. Article 79 2. ¶ 2
    The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. Article 79 4.
    {high-risk AI system} Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period the market surveillance authority may prescribe. Article 80 2.
    The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99. Article 80 4.
    The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market. Article 80 5.
    The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI systems concerned that it has made available on the Union market within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1. Article 82 2.
    Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe: Article 83 1.
    Where, having performed an evaluation under Article 79, after consulting the relevant national public authority referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system complies with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue delay, within a period it may prescribe. Article 82 1.]
    Monitoring and measurement Process or Activity
    Convert data into standard units before reporting metrics. CC ID 15507 Monitoring and measurement Process or Activity
    Report compliance monitoring statistics to the Board of Directors and other critical stakeholders, as necessary. CC ID 00676
    [Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly. Article 20 1.
    {not be} A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken. Article 24 4.]
    Monitoring and measurement Actionable Reports or Measurements
    Purchase insurance on behalf of interested personnel and affected parties. CC ID 16571 Audits and risk management Acquisition/Sale of Assets or Services
    Document residual risk in a residual risk report. CC ID 13664 Audits and risk management Establish/Maintain Documentation
    Document all lost badges in a lost badge list. CC ID 12448 Physical and environmental protection Establish/Maintain Documentation
    Update operating procedures that contribute to user errors. CC ID 06935 Operational management Establish/Maintain Documentation
    Contain the incident to prevent further loss. CC ID 01751 Operational management Process or Activity
    Share incident information with interested personnel and affected parties. CC ID 01212
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; Article 55 1.(c)
    Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.
    Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred. Article 73 1.
    For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulations (EU) 2017/745 and (EU) 2017/746, the notification of serious incidents shall be limited to those referred to in Article 3, point (49)(c) of this Regulation, and shall be made to the national competent authority chosen for that purpose by the Member States where the incident occurred. Article 73 10.]
    Operational management Data and Information Management
    Shut down systems when an integrity violation is detected, as necessary. CC ID 10679
    [Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Operational management Technical Security
    Include mitigation measures to address biased output during the development of artificial intelligence systems. CC ID 15047
    [High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures. Article 15 4. ¶ 3]
    Operational management Systems Design, Build, and Implementation
    Withdraw authorizations that are unjustified. CC ID 15035 Operational management Business Processes
    Process product return requests. CC ID 11598 Acquisition or sale of facilities, technology, and services Acquisition/Sale of Assets or Services
    Include any reasons for delay if notifying the supervisory authority after the time limit. CC ID 12675 Privacy protection for information and data Communicate
    Notify the subject of care when a lack of availability of health information systems might have adversely affected their care. CC ID 13990 Privacy protection for information and data Communicate
    Refrain from disseminating and communicating with individuals that have opted out of direct marketing communications. CC ID 13708 Privacy protection for information and data Communicate
  • Detective
    77
    KEY:    Primary Verb     Primary Noun     Secondary Verb     Secondary Noun     Limiting Term
    Mandated - bold    Implied - italic    Implementation - regular IMPACT ZONE TYPE
    Monitor the usage and capacity of critical assets. CC ID 14825 Monitoring and measurement Monitor and Evaluate Occurrences
    Monitor the usage and capacity of Information Technology assets. CC ID 00668
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Monitoring and measurement Monitor and Evaluate Occurrences
    Monitor systems for errors and faults. CC ID 04544 Monitoring and measurement Monitor and Evaluate Occurrences
    Establish, implement, and maintain monitoring and logging operations. CC ID 00637
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: monitoring the operation of high-risk AI systems referred to in Article 26(5). Article 12 2.(c)
    For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)]
    Monitoring and measurement Log Management
    Monitor and evaluate system telemetry data. CC ID 14929 Monitoring and measurement Actionable Reports or Measurements
    Monitor systems for inappropriate usage and other security violations. CC ID 00585
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; Article 14 4.(a)]
    Monitoring and measurement Monitor and Evaluate Occurrences
    Operationalize key monitoring and logging concepts to ensure the audit trails capture sufficient information. CC ID 00638
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: facilitating the post-market monitoring referred to in Article 72; and Article 12 2.(b)]
    Monitoring and measurement Log Management
    Correlate log entries to security controls to verify the security control's effectiveness. CC ID 13207 Monitoring and measurement Log Management
    Identify cybersecurity events in event logs and audit logs. CC ID 13206 Monitoring and measurement Technical Security
    Enable logging for all systems that meet a traceability criteria. CC ID 00640
    [High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system. Article 12 1.]
    Monitoring and measurement Log Management
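Article 12 1. requires that high-risk AI systems technically allow automatic recording of events (logs) over the system's lifetime. As a minimal illustrative sketch only — the class name, file format, and field names below are assumptions, not anything the Act prescribes — an append-only, timestamped event log could look like this:

```python
import json
import time
from pathlib import Path

class EventLogger:
    """Append-only event log for an AI system (illustrative sketch;
    the schema and names here are assumptions, not mandated by the Act)."""

    def __init__(self, path: str):
        self.path = Path(path)

    def record(self, event_type: str, detail: dict) -> None:
        entry = {
            "timestamp": time.time(),   # when the event was recorded
            "event_type": event_type,   # e.g. "inference", "anomaly"
            "detail": detail,
        }
        # One JSON object per line, appended so earlier entries are never rewritten.
        with self.path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

logger = EventLogger("ai_system_events.log")
logger.record("inference", {"input_id": "req-001", "confidence": 0.97})
```

Appending rather than overwriting is what makes the record usable for the traceability and post-market monitoring purposes referred to in Article 12 2.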
    Monitor for and react to when suspicious activities are detected. CC ID 00586
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Monitoring and measurement Monitor and Evaluate Occurrences
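Among the measures Article 15 5. contemplates for adversarial examples is detecting inputs that the model was never trained to handle. One simple (and, by itself, insufficient) screen is to flag inputs outside the value ranges observed in the training data. All names and thresholds below are illustrative assumptions, not a prescribed technique:

```python
def detect_out_of_range(x, train_min, train_max, tolerance=0.1):
    """Flag an input vector whose features fall outside the range seen
    during training, widened by a tolerance margin. A toy screen for
    suspicious inputs; real defenses combine several such measures."""
    for xi, lo, hi in zip(x, train_min, train_max):
        span = hi - lo
        if xi < lo - tolerance * span or xi > hi + tolerance * span:
            return True  # out of the plausible range: flag for review
    return False

# Hypothetical per-feature bounds observed on the training set.
train_min = [0.0, 0.0]
train_max = [1.0, 10.0]

print(detect_out_of_range([0.5, 5.0], train_min, train_max))   # in range -> False
print(detect_out_of_range([0.5, 50.0], train_min, train_max))  # flagged -> True
```

A flagged input would then feed the "detect, respond to, resolve" steps the Article lists, for example by routing the request to human review instead of the model.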
    Monitor and evaluate the effectiveness of detection tools. CC ID 13505 Monitoring and measurement Investigate
    Monitor and review retail payment activities, as necessary. CC ID 13541 Monitoring and measurement Monitor and Evaluate Occurrences
    Determine if high rates of retail payment activities are from Originating Depository Financial Institutions. CC ID 13546 Monitoring and measurement Investigate
    Review retail payment service reports, as necessary. CC ID 13545 Monitoring and measurement Investigate
    Include the correlation and analysis of information obtained during testing in the continuous monitoring program. CC ID 14250 Monitoring and measurement Process or Activity
    Log account usage times. CC ID 07099 Monitoring and measurement Log Management
    Log account usage durations. CC ID 12117 Monitoring and measurement Monitor and Evaluate Occurrences
    Report inappropriate usage of user accounts to the appropriate personnel. CC ID 14243 Monitoring and measurement Communicate
    Test compliance controls for proper functionality. CC ID 00660
    [High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section. Article 9 6.]
    Monitoring and measurement Testing
    Conduct Red Team exercises, as necessary. CC ID 12131 Monitoring and measurement Technical Security
    Test security systems and associated security procedures, as necessary. CC ID 11901
    [{testing in real-world conditions} Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5. Article 60 1. ¶ 1]
    Monitoring and measurement Technical Security
    Test in scope systems for segregation of duties, as necessary. CC ID 13906 Monitoring and measurement Testing
    Identify risk management measures when testing in scope systems. CC ID 14960
    [High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section. Article 9 6.]
    Monitoring and measurement Process or Activity
    Perform conformity assessments, as necessary. CC ID 15095
    [Providers of high-risk AI systems shall: ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service; Article 16 ¶ 1 (f)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of the high-risk AI system; Article 23 1.(a)
    For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply. Article 43 3. ¶ 1
    High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. Article 43 4. ¶ 1
    By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Monitoring and measurement Testing
    Establish, implement, and maintain a port scan baseline for all in scope systems. CC ID 12134 Monitoring and measurement Technical Security
    Use dedicated user accounts when conducting penetration testing. CC ID 13728 Monitoring and measurement Testing
    Perform vulnerability scans prior to installing payment applications. CC ID 12192 Monitoring and measurement Technical Security
    Implement scanning tools, as necessary. CC ID 14282 Monitoring and measurement Technical Security
    Test the system for unvalidated input. CC ID 01318
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Monitoring and measurement Testing
    Monitor personnel and third parties for compliance to the organizational compliance framework. CC ID 04726 Monitoring and measurement Monitor and Evaluate Occurrences
    Report on the percentage of needed external audits that have been completed and reviewed. CC ID 11632 Monitoring and measurement Actionable Reports or Measurements
    Establish, implement, and maintain a network activity baseline. CC ID 13188 Monitoring and measurement Technical Security
    Determine the effectiveness of in scope controls. CC ID 06984
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.]
    Audits and risk management Testing
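Article 50 2. requires that synthetic outputs be marked in a machine-readable format and be detectable as artificially generated. The sketch below is a deliberately simplified illustration of the idea — a metadata wrapper with an integrity hash — not a compliant implementation; production systems would rely on recognized standards such as C2PA provenance metadata or robust watermarking, and every field name here is an assumption:

```python
import hashlib

MARKER_KEY = "ai_generated"  # illustrative field name, not a standard

def mark_output(text: str, model_id: str) -> dict:
    """Wrap generated text with machine-readable provenance metadata
    plus a content hash so tampering breaks detectability."""
    return {
        "content": text,
        MARKER_KEY: True,
        "model_id": model_id,
        "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
    }

def is_marked(payload: dict) -> bool:
    """Detect the marker and verify the content still matches its hash."""
    if not payload.get(MARKER_KEY):
        return False
    digest = hashlib.sha256(payload["content"].encode()).hexdigest()
    return digest == payload["content_sha256"]

record = mark_output("A synthetic paragraph.", model_id="demo-model-1")
print(is_marked(record))  # True: marked and untampered
```

A sidecar wrapper like this fails the Article's robustness expectation as soon as the text is copied out of it, which is exactly why the provision points providers toward the "generally acknowledged state of the art" rather than ad-hoc schemes.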
    Review audit reports to determine the effectiveness of in scope controls. CC ID 12156 Audits and risk management Audits and Risk Management
    Interview stakeholders to determine the effectiveness of in scope controls. CC ID 12154 Audits and risk management Audits and Risk Management
    Review policies and procedures to determine the effectiveness of in scope controls. CC ID 12144 Audits and risk management Audits and Risk Management
    Evaluate personnel status changes to determine the effectiveness of in scope controls. CC ID 16556 Audits and risk management Audits and Risk Management
    Determine whether individuals performing a control have the appropriate staff qualifications, staff clearances, and staff competencies. CC ID 16555 Audits and risk management Audits and Risk Management
    Document and justify any exclusions from the scope of the risk management activities in the risk management program. CC ID 15336 Audits and risk management Business Processes
    Employ third parties when implementing a risk assessment, as necessary. CC ID 16306 Audits and risk management Human Resources Management
    Review the risk profiles, as necessary. CC ID 16561 Audits and risk management Audits and Risk Management
    Conduct external audits of risk assessments, as necessary. CC ID 13308 Audits and risk management Audits and Risk Management
    Evaluate the effectiveness of threat and vulnerability management procedures. CC ID 13491 Audits and risk management Investigate
    Assess the potential level of business impact risk associated with the loss of personnel. CC ID 17172 Audits and risk management Process or Activity
    Assess the potential level of business impact risk associated with individuals. CC ID 17170 Audits and risk management Process or Activity
    Identify changes to in scope systems that could threaten communication between business units. CC ID 13173 Audits and risk management Investigate
    Assess the potential level of business impact risk associated with non-compliance. CC ID 17169 Audits and risk management Process or Activity
    Assess the potential level of business impact risk associated with reputational damage. CC ID 15335 Audits and risk management Audits and Risk Management
    Assess the potential level of business impact risk associated with the natural environment. CC ID 17171 Audits and risk management Process or Activity
    Perform a gap analysis to review in scope controls for identified risks and implement new controls, as necessary. CC ID 00704
    [In identifying the most appropriate risk management measures, the following shall be ensured: elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system; Article 9 5. ¶ 2 (a)]
    Audits and risk management Establish/Maintain Documentation
    Determine the effectiveness of risk control measures. CC ID 06601
    [The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements. Article 9 4.]
    Audits and risk management Testing
    Analyze the impact of artificial intelligence systems on society. CC ID 16317 Audits and risk management Audits and Risk Management
    Analyze the impact of artificial intelligence systems on individuals. CC ID 16316 Audits and risk management Audits and Risk Management
    Analyze supply chain risk management procedures, as necessary. CC ID 13198 Audits and risk management Process or Activity
    Detect anomalies in physical barriers. CC ID 13533 Physical and environmental protection Investigate
    Report anomalies in the visitor log to appropriate personnel. CC ID 14755 Physical and environmental protection Investigate
    Log when the cabinet is accessed. CC ID 11674 Physical and environmental protection Log Management
    Train personnel to recognize conditions of diseases or sicknesses, as necessary. CC ID 14383 Human Resources management Training
    Train personnel to identify and communicate symptoms of exposure to disease or sickness. CC ID 14385 Human Resources management Training
    Conduct incident investigations, as necessary. CC ID 13826
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Operational management Process or Activity
    Analyze the behaviors of individuals involved in the incident during incident investigations. CC ID 14042 Operational management Investigate
    Identify the affected parties during incident investigations. CC ID 16781 Operational management Investigate
    Interview suspects during incident investigations, as necessary. CC ID 14041 Operational management Investigate
    Interview victims and witnesses during incident investigations, as necessary. CC ID 14038 Operational management Investigate
    Refrain from altering the state of compromised systems when collecting digital forensic evidence. CC ID 08671
    [The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action. Article 73 6. ¶ 2]
    Operational management Investigate
    Grant registration after competence and integrity is verified. CC ID 16802 Operational management Behavior
    Assess the trustworthiness of artificial intelligence systems. CC ID 16319 Operational management Business Processes
    Ensure the transport conditions for artificial intelligence systems refrain from compromising compliance. CC ID 15031
    [{storage conditions} Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2. Article 23 4.
    {storage conditions} Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2. Article 24 3.]
    Operational management Business Processes
    Ensure the storage conditions for artificial intelligence systems refrain from compromising compliance. CC ID 15030
    [{storage conditions} Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2. Article 23 4.
    {storage conditions} Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2. Article 24 3.]
    Operational management Physical and Environmental Protection
    Ensure data sets have the appropriate characteristics. CC ID 15000
    [{training data} {validation data} {testing data} {be representative} {be complete} {be error free} Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof. Article 10 3.]
    Records management Data and Information Management
    Ensure data sets are complete, are accurate, and are relevant. CC ID 14999
    [{training data} {validation data} {testing data} {be representative} {be complete} {be error free} Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof. Article 10 3.]
    Records management Data and Information Management
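Article 10 3. asks that training, validation, and testing data sets be relevant, sufficiently representative, and, to the best extent possible, complete and free of errors. As a hedged sketch of what automated checks toward those properties might look like — the field names, group notion, and output format are all assumptions for illustration:

```python
def dataset_checks(rows, required_fields, group_field, expected_groups):
    """Basic completeness and representativeness checks on a data set,
    returned as a list of human-readable issues. Illustrative only;
    real checks would also cover statistical properties per Article 10 3."""
    issues = []
    # Completeness: every row carries a value for every required field.
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing {field}")
    # Representativeness: every expected group appears at least once.
    seen = {row.get(group_field) for row in rows}
    for group in expected_groups:
        if group not in seen:
            issues.append(f"group not represented: {group}")
    return issues

rows = [
    {"age": 34, "region": "north"},
    {"age": None, "region": "south"},
]
print(dataset_checks(rows, ["age", "region"], "region", ["north", "south", "east"]))
# -> ['row 1: missing age', 'group not represented: east']
```

An empty issue list would not prove compliance, but a non-empty one identifies exactly the kind of "relevant data gaps or shortcomings" that Article 10 2.(h) requires the governance practices to surface and address.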
    Analyze the digital content hosted by the organization for any electronic material associated with the take-down request. CC ID 09974 Acquisition or sale of facilities, technology, and services Business Processes
    Analyze requirements for processing personal data in contracts. CC ID 12550 Privacy protection for information and data Investigate
    Refrain from storing data elements containing payment card full magnetic stripe data. CC ID 04757 Privacy protection for information and data Testing
    Formalize client and third party relationships with contracts or nondisclosure agreements. CC ID 00794
    [The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence. Article 25 4. ¶ 1]
    Third Party and supply chain oversight Process or Activity
    Review third party recovery plans. CC ID 17123 Third Party and supply chain oversight Systems Continuity
  • IT Impact Zone
    11
    Leadership and high level objectives CC ID 00597 Leadership and high level objectives IT Impact Zone
    Monitoring and measurement CC ID 00636 Monitoring and measurement IT Impact Zone
    Audits and risk management CC ID 00677 Audits and risk management IT Impact Zone
    Technical security CC ID 00508 Technical security IT Impact Zone
    Human Resources management CC ID 00763 Human Resources management IT Impact Zone
    Operational management CC ID 00805 Operational management IT Impact Zone
    System hardening through configuration management CC ID 00860 System hardening through configuration management IT Impact Zone
    Records management CC ID 00902 Records management IT Impact Zone
    Systems design, build, and implementation CC ID 00989 Systems design, build, and implementation IT Impact Zone
    Privacy protection for information and data CC ID 00008 Privacy protection for information and data IT Impact Zone
    Third Party and supply chain oversight CC ID 08807 Third Party and supply chain oversight IT Impact Zone
  • Preventive
    974
    Establish, implement, and maintain a reporting methodology program. CC ID 02072 Leadership and high level objectives Business Processes
    Establish, implement, and maintain an internal reporting program. CC ID 12409
    [Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. Article 26 7.]
    Leadership and high level objectives Business Processes
    Define the thresholds for escalation in the internal reporting program. CC ID 14332 Leadership and high level objectives Establish/Maintain Documentation
    Define the thresholds for reporting in the internal reporting program. CC ID 14331 Leadership and high level objectives Establish/Maintain Documentation
    Establish, implement, and maintain an external reporting program. CC ID 12876 Leadership and high level objectives Communicate
    Include reporting to governing bodies in the external reporting plan. CC ID 12923
    [Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 22 3.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor. Article 22 4.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 54 3.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor. Article 54 5.]
    Leadership and high level objectives Communicate
    Submit confidential treatment applications to interested personnel and affected parties. CC ID 16592 Leadership and high level objectives Communicate
    Include the reasons for objections to public disclosure in confidential treatment applications. CC ID 16594 Leadership and high level objectives Establish/Maintain Documentation
    Include contact information for the interested personnel and affected parties the report was filed with in the confidential treatment application. CC ID 16595 Leadership and high level objectives Establish/Maintain Documentation
    Include the information that was omitted in the confidential treatment application. CC ID 16593 Leadership and high level objectives Establish/Maintain Documentation
    Request extensions for submissions to governing bodies, as necessary. CC ID 16955 Leadership and high level objectives Process or Activity
    Analyze organizational objectives, functions, and activities. CC ID 00598 Leadership and high level objectives Monitor and Evaluate Occurrences
    Establish, implement, and maintain data governance and management practices. CC ID 14998
    [{training data} {validation data} {testing data} Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: Article 10 2.]
    Leadership and high level objectives Establish/Maintain Documentation
    Address shortcomings of the data sets in the data governance and management practices. CC ID 15087
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. Article 10 2.(h)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include any shortcomings of the data sets in the data governance and management practices. CC ID 15086
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed. Article 10 2.(h)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include bias for data sets in the data governance and management practices. CC ID 15085
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; Article 10 2.(f)
    Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: appropriate measures to detect, prevent and mitigate possible biases identified according to point (f); Article 10 2.(g)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include the data source in the data governance and management practices. CC ID 17211
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; Article 10 2.(b)]
    Leadership and high level objectives Data and Information Management
    Include a data strategy in the data governance and management practices. CC ID 15304 Leadership and high level objectives Establish/Maintain Documentation
    Include data monitoring in the data governance and management practices. CC ID 15303 Leadership and high level objectives Establish/Maintain Documentation
    Include an assessment of the data sets in the data governance and management practices. CC ID 15084
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: an assessment of the availability, quantity and suitability of the data sets that are needed; Article 10 2.(e)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include assumptions for the formulation of data sets in the data governance and management practices. CC ID 15083
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent; Article 10 2.(d)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include data collection for data sets in the data governance and management practices. CC ID 15082
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection; Article 10 2.(b)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include data preparations for data sets in the data governance and management practices. CC ID 15081
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation; Article 10 2.(c)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include design choices for data sets in the data governance and management practices. CC ID 15080
    [Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: the relevant design choices; Article 10 2.(a)]
    Leadership and high level objectives Establish/Maintain Documentation
    Establish, implement, and maintain a data classification scheme. CC ID 11628 Leadership and high level objectives Establish/Maintain Documentation
    Take into account the characteristics of the geographical, behavioral and functional setting for all datasets. CC ID 15046
    [Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used. Article 10 4.]
    Leadership and high level objectives Data and Information Management
    Establish, implement, and maintain a Quality Management framework. CC ID 07196 Leadership and high level objectives Establish/Maintain Documentation
    Establish, implement, and maintain a Quality Management policy. CC ID 13694
    [{put in place} Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: Article 17 1.]
    Leadership and high level objectives Establish/Maintain Documentation
    Include a commitment to satisfy applicable requirements in the Quality Management policy. CC ID 13700
    [Quality management system shall include at least the following aspects: a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system; Article 17 1.(a)]
    Leadership and high level objectives Establish/Maintain Documentation
    Tailor the Quality Management policy to support the organization's strategic direction. CC ID 13699
    [{quality management system} The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation. Article 17 2.]
    Leadership and high level objectives Establish/Maintain Documentation
    Include a commitment to continual improvement of the Quality Management system in the Quality Management policy. CC ID 13698 Leadership and high level objectives Establish/Maintain Documentation
    Document the measurements used by Quality Assurance and Quality Control testing. CC ID 07200
    [Quality management system shall include at least the following aspects: techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system; Article 17 1.(c)]
    Leadership and high level objectives Establish/Maintain Documentation
    Establish, implement, and maintain a Quality Management program. CC ID 07201
    [{put in place} Providers of high-risk AI systems shall: have a quality management system in place which complies with Article 17; Article 16 ¶ 1 (c)
    {put in place} Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: Article 17 1.]
    Leadership and high level objectives Establish/Maintain Documentation
    Notify affected parties and interested personnel of quality management system approvals that have been refused, suspended, or withdrawn. CC ID 15045
    [Each notified body shall inform the other notified bodies of: quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued; Article 45 2.(a)]
    Leadership and high level objectives Communicate
    Notify affected parties and interested personnel of quality management system approvals that have been issued. CC ID 15036
    [Each notified body shall inform the other notified bodies of: quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued; Article 45 2.(a)
    Notified bodies shall inform the notifying authority of the following: any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII; Article 45 1.(a)]
    Leadership and high level objectives Communicate
    Include quality objectives in the Quality Management program. CC ID 13693 Leadership and high level objectives Establish/Maintain Documentation
    Include monitoring and analysis capabilities in the quality management program. CC ID 17153 Leadership and high level objectives Monitor and Evaluate Occurrences
    Include records management in the quality management system. CC ID 15055
    [Quality management system shall include at least the following aspects: systems and procedures for record-keeping of all relevant documentation and information; Article 17 1.(k)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include risk management in the quality management system. CC ID 15054
    [Quality management system shall include at least the following aspects: the risk management system referred to in Article 9; Article 17 1.(g)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include data management procedures in the quality management system. CC ID 15052
    [Quality management system shall include at least the following aspects: systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems; Article 17 1.(f)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include a post-market monitoring system in the quality management system. CC ID 15027
    [Quality management system shall include at least the following aspects: the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72; Article 17 1.(h)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include operational roles and responsibilities in the quality management system. CC ID 15028
    [Quality management system shall include at least the following aspects: an accountability framework setting out the responsibilities of the management and other staff with regard to all the aspects listed in this paragraph. Article 17 1.(m)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include resource management in the quality management system. CC ID 15026
    [Quality management system shall include at least the following aspects: resource management, including security-of-supply related measures; Article 17 1.(l)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include communication protocols in the quality management system. CC ID 15025
    [Quality management system shall include at least the following aspects: the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties; Article 17 1.(j)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include incident reporting procedures in the quality management system. CC ID 15023
    [Quality management system shall include at least the following aspects: procedures related to the reporting of a serious incident in accordance with Article 73; Article 17 1.(i)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include technical specifications in the quality management system. CC ID 15021
    [Quality management system shall include at least the following aspects: technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be used to ensure that the high-risk AI system complies with those requirements; Article 17 1.(e)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include system testing standards in the Quality Management program. CC ID 01018
    [Quality management system shall include at least the following aspects: techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system; Article 17 1.(b)
    {test procedure} Quality management system shall include at least the following aspects: examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out; Article 17 1.(d)]
    Leadership and high level objectives Establish/Maintain Documentation
    Include explanations, compensating controls, or risk acceptance in the compliance exceptions document. CC ID 01631
    [Where providers of high-risk AI systems or general-purpose AI models do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto. Article 41 5.]
    Leadership and high level objectives Establish/Maintain Documentation
    Document and evaluate the decision outcomes from the decision-making process. CC ID 06918
    [Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. Article 44 3. ¶ 1
    Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision. Article 52 5.]
    Leadership and high level objectives Establish/Maintain Documentation
    Establish, implement, and maintain an audit and accountability policy. CC ID 14035 Monitoring and measurement Establish/Maintain Documentation
    Include compliance requirements in the audit and accountability policy. CC ID 14103 Monitoring and measurement Establish/Maintain Documentation
    Include coordination amongst entities in the audit and accountability policy. CC ID 14102 Monitoring and measurement Establish/Maintain Documentation
    Include the purpose in the audit and accountability policy. CC ID 14100 Monitoring and measurement Establish/Maintain Documentation
    Include roles and responsibilities in the audit and accountability policy. CC ID 14098 Monitoring and measurement Establish/Maintain Documentation
    Include management commitment in the audit and accountability policy. CC ID 14097 Monitoring and measurement Establish/Maintain Documentation
    Include the scope in the audit and accountability policy. CC ID 14096 Monitoring and measurement Establish/Maintain Documentation
    Disseminate and communicate the audit and accountability policy to interested personnel and affected parties. CC ID 14095 Monitoring and measurement Communicate
    Establish, implement, and maintain audit and accountability procedures. CC ID 14057 Monitoring and measurement Establish/Maintain Documentation
    Disseminate and communicate the audit and accountability procedures to interested personnel and affected parties. CC ID 14137 Monitoring and measurement Communicate
    Enable monitoring and logging operations on all assets that meet the organizational criteria to maintain event logs. CC ID 06312
    [In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or in a substantial modification; Article 12 2.(a)]
    Monitoring and measurement Log Management
    Review and approve the use of continuous security management systems. CC ID 13181 Monitoring and measurement Process or Activity
    Establish, implement, and maintain an intrusion detection and prevention program. CC ID 15211 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain an intrusion detection and prevention policy. CC ID 15169 Monitoring and measurement Establish/Maintain Documentation
    Prohibit the sale or transfer of a debt caused by identity theft. CC ID 16983 Monitoring and measurement Acquisition/Sale of Assets or Services
    Establish, implement, and maintain an event logging policy. CC ID 15217 Monitoring and measurement Establish/Maintain Documentation
    Include the system components that generate audit records in the event logging procedures. CC ID 16426 Monitoring and measurement Data and Information Management
    Overwrite the oldest records when audit logging fails. CC ID 14308 Monitoring and measurement Data and Information Management
    Include identity information of suspects in the suspicious activity report. CC ID 16648 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain log analysis tools. CC ID 17056 Monitoring and measurement Technical Security
    Enable the logging capability to capture enough information to ensure the system is functioning according to its intended purpose throughout its life cycle. CC ID 15001 Monitoring and measurement Configuration
    Establish, implement, and maintain network monitoring operations. CC ID 16444 Monitoring and measurement Monitor and Evaluate Occurrences
    Include error details, identifying the root causes, and mitigation actions in the testing procedures. CC ID 11827
    [Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain a testing program. CC ID 00654
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; Article 55 1.(a)]
    Monitoring and measurement Behavior
    Establish, implement, and maintain a security assessment and authorization policy. CC ID 14031 Monitoring and measurement Establish/Maintain Documentation
    Include coordination amongst entities in the security assessment and authorization policy. CC ID 14222 Monitoring and measurement Establish/Maintain Documentation
    Include the scope in the security assessment and authorization policy. CC ID 14220 Monitoring and measurement Establish/Maintain Documentation
    Include the purpose in the security assessment and authorization policy. CC ID 14219 Monitoring and measurement Establish/Maintain Documentation
    Disseminate and communicate the security assessment and authorization policy to interested personnel and affected parties. CC ID 14218 Monitoring and measurement Communicate
    Include management commitment in the security assessment and authorization policy. CC ID 14189 Monitoring and measurement Establish/Maintain Documentation
    Include compliance requirements in the security assessment and authorization policy. CC ID 14183 Monitoring and measurement Establish/Maintain Documentation
    Include roles and responsibilities in the security assessment and authorization policy. CC ID 14179 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain security assessment and authorization procedures. CC ID 14056 Monitoring and measurement Establish/Maintain Documentation
    Disseminate and communicate security assessment and authorization procedures to interested personnel and affected parties. CC ID 14224 Monitoring and measurement Communicate
    Employ third parties to carry out testing programs, as necessary. CC ID 13178 Monitoring and measurement Human Resources Management
    Enable security controls which were disabled to conduct testing. CC ID 17031 Monitoring and measurement Testing
    Document improvement actions based on test results and exercises. CC ID 16840 Monitoring and measurement Establish/Maintain Documentation
    Disable dedicated accounts after testing is complete. CC ID 17033 Monitoring and measurement Testing
    Protect systems and data during testing in the production environment. CC ID 17198 Monitoring and measurement Testing
    Delete personal data upon data subject's withdrawal from testing. CC ID 17238
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Monitoring and measurement Data and Information Management
    Define the criteria to conduct testing in the production environment. CC ID 17197
    [{high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider has drawn up a real-world testing plan and submitted it to the market surveillance authority in the Member State where the testing in real world conditions is to be conducted; Article 60 4.(a)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the market surveillance authority in the Member State where the testing in real world conditions is to be conducted has approved the testing in real world conditions and the real-world testing plan; where the market surveillance authority has not provided an answer within 30 days, the testing in real world conditions and the real-world testing plan shall be understood to have been approved; where national law does not provide for a tacit approval, the testing in real world conditions shall remain subject to an authorisation; Article 60 4.(b)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider, with the exception of providers or prospective providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, and high-risk AI systems referred to in point 2 of Annex III has registered the testing in real world conditions in accordance with Article 71(4) with a Union-wide unique single identification number and with the information specified in Annex IX; the provider or prospective provider of high-risk AI systems referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, has registered the testing in real-world conditions in the secure non-public section of the EU database according to Article 49(4), point (d), with a Union-wide unique single identification number and with the information specified therein; the provider or prospective provider of high-risk AI systems referred to in point 2 of Annex III has registered the testing in real-world conditions in accordance with Article 49(5); Article 60 4.(c)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the provider or prospective provider conducting the testing in real world conditions is established in the Union or has appointed a legal representative who is established in the Union; Article 60 4.(d)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: data collected and processed for the purpose of the testing in real world conditions shall be transferred to third countries only provided that appropriate and applicable safeguards under Union law are implemented; Article 60 4.(e)
    {high-risk AI systems} {outside AI regulatory sandbox} {no longer than necessary} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not longer than six months, which may be extended for an additional period of six months, subject to prior notification by the provider or prospective provider to the market surveillance authority, accompanied by an explanation of the need for such an extension; Article 60 4.(f)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the subjects of the testing in real world conditions who are persons belonging to vulnerable groups due to their age or disability, are appropriately protected; Article 60 4.(g)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the testing in real world conditions is effectively overseen by the provider or prospective provider, as well as by deployers or prospective deployers through persons who are suitably qualified in the relevant field and have the necessary capacity, training and authority to perform their tasks; Article 60 4.(j)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded. Article 60 4.(k)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more deployers or prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their decision to participate, and given the relevant instructions for use of the AI system referred to in Article 13; the provider or prospective provider and the deployer or prospective deployer shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions under this Regulation and under other applicable Union and national law; Article 60 4.(h)
    {high-risk AI systems} {outside AI regulatory sandbox} Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met: the subjects of the testing in real world conditions have given informed consent in accordance with Article 61, or in the case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subjects, and their personal data shall be deleted after the test is performed; Article 60 4.(i)]
    Monitoring and measurement Testing
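    The duration limit in Article 60 4.(f) (at most six months, extendable once by a further six months after prior notification to the market surveillance authority) can be sketched as a simple check. This is an illustrative helper, not part of the Regulation; the function and constant names are this sketch's own, and whole-month arithmetic is a simplification.

    ```python
    from datetime import date

    # Article 60 4.(f): testing in real world conditions may last at most
    # six months, extendable by an additional six months subject to prior
    # notification. Names and month arithmetic are illustrative only.
    MAX_MONTHS = 6

    def months_between(start: date, end: date) -> int:
        """Whole months elapsed between two dates (approximate)."""
        return (end.year - start.year) * 12 + (end.month - start.month)

    def testing_window_permitted(start: date, end: date,
                                 extension_notified: bool) -> bool:
        """True if the proposed testing window fits within the cap."""
        limit = MAX_MONTHS * 2 if extension_notified else MAX_MONTHS
        return 0 <= months_between(start, end) <= limit
    ```

    For example, an eleven-month window is only permissible where the six-month extension has been notified in advance.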
    Obtain the data subject's consent to participate in testing in real-world conditions. CC ID 17232
    [For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to their participation; Article 61 1.(a)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject or subjects’ participation; Article 61 1.(b)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having to provide any justification; Article 61 1.(c)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or decisions of the AI system; Article 61 1.(d)
    For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained from the subjects of testing prior to their participation in such testing and after their having been duly informed with concise, clear, relevant, and understandable information regarding: the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 60(4) point (c), and the contact details of the provider or its legal representative from whom further information can be obtained. Article 61 1.(e)]
    Monitoring and measurement Behavior
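    The five disclosures required by Article 61 1.(a) to (e) before informed consent is sought can be captured as a simple record. A minimal sketch follows; the class and field names are invented for illustration and are not terms from the Regulation.

    ```python
    from dataclasses import dataclass, fields

    # Illustrative consent record covering the disclosures that Article 61 1.
    # requires before a subject's informed consent is obtained.
    @dataclass
    class InformedConsentRecord:
        nature_and_objectives: str    # Art. 61 1.(a)
        conditions_and_duration: str  # Art. 61 1.(b)
        rights_and_withdrawal: str    # Art. 61 1.(c)
        reversal_arrangements: str    # Art. 61 1.(d)
        test_id_and_contact: str      # Art. 61 1.(e)
        consent_given: bool = False

    def disclosures_complete(record: InformedConsentRecord) -> bool:
        """Consent should only be sought once every disclosure is non-empty."""
        return all(
            getattr(record, f.name).strip()
            for f in fields(record)
            if f.name != "consent_given"
        )
    ```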
    Suspend testing in a production environment, as necessary. CC ID 17231
    [Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Monitoring and measurement Testing
    Define the test requirements for each testing program. CC ID 13177
    [The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. Article 9 8.]
    Monitoring and measurement Establish/Maintain Documentation
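    Article 9 8. requires testing against prior defined metrics and probabilistic thresholds appropriate to the system's intended purpose. A minimal sketch of such a threshold gate follows; the metric names and values are invented examples, since the appropriate metrics depend on the system under test.

    ```python
    # Illustrative "prior defined metrics and probabilistic thresholds"
    # (Article 9 8.). Metric names and limits are examples only.
    THRESHOLDS = {
        "accuracy": 0.95,             # minimum acceptable
        "false_positive_rate": 0.02,  # maximum acceptable
    }

    def passes_thresholds(measured: dict) -> bool:
        """Compare measured metrics against the predefined thresholds."""
        return (
            measured["accuracy"] >= THRESHOLDS["accuracy"]
            and measured["false_positive_rate"] <= THRESHOLDS["false_positive_rate"]
        )
    ```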
    Include test requirements for the use of production data in the testing program. CC ID 17201 Monitoring and measurement Testing
    Include test requirements for the use of human subjects in the testing program. CC ID 16222
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Monitoring and measurement Testing
    Test the in scope system in accordance with its intended purpose. CC ID 14961
    [The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system. Article 9 8.
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks; Article 55 1.(a)]
    Monitoring and measurement Testing
    Perform network testing in accordance with organizational standards. CC ID 16448 Monitoring and measurement Testing
    Notify interested personnel and affected parties prior to performing testing. CC ID 17034 Monitoring and measurement Communicate
    Test user accounts in accordance with organizational standards. CC ID 16421 Monitoring and measurement Testing
    Include mechanisms for emergency stops in the testing program. CC ID 14398 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain conformity assessment procedures. CC ID 15032
    [For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on: the internal control referred to in Annex VI; or Article 43 1.(a)
    For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on: the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII. Article 43 1.(b)
    In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where: Article 43 1. ¶ 1
    For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. Article 43 2.]
    Monitoring and measurement Establish/Maintain Documentation
    Share conformity assessment results with affected parties and interested personnel. CC ID 15113
    [{high-risk artificial intelligence system} A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment. Article 6 4.
    Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive conformity assessment results. Article 45 3.]
    Monitoring and measurement Communicate
    Notify affected parties and interested personnel of technical documentation assessment certificates that have been issued. CC ID 15112
    [Each notified body shall inform the other notified bodies of: Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued. Article 45 2.(b)
    Notified bodies shall inform the notifying authority of the following: any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII; Article 45 1.(a)]
    Monitoring and measurement Communicate
    Notify affected parties and interested personnel of technical documentation assessment certificates that have been refused, withdrawn, suspended or restricted. CC ID 15111
    [Each notified body shall inform the other notified bodies of: Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued. Article 45 2.(b)]
    Monitoring and measurement Communicate
    Create technical documentation assessment certificates in an official language. CC ID 15110
    [Certificates issued by notified bodies in accordance with Annex VII shall be drawn-up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established. Article 44 1.]
    Monitoring and measurement Establish/Maintain Documentation
    Request an extension of the validity period of technical documentation assessment certificates, as necessary. CC ID 17228
    [Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid. Article 44 2.]
    Monitoring and measurement Process or Activity
    Define the validity period for technical documentation assessment certificates. CC ID 17227
    [Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid. Article 44 2.]
    Monitoring and measurement Process or Activity
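    The validity ceilings in Article 44 2. (no more than five years for AI systems covered by Annex I, four years for those covered by Annex III, with each extension capped the same way) can be sketched as a date calculation. The function and key names below are illustrative assumptions, and the year arithmetic ignores leap-day edge cases.

    ```python
    from datetime import date

    # Article 44 2. validity ceilings: 5 years (Annex I), 4 years (Annex III).
    # Keys and helper name are this sketch's own.
    MAX_YEARS = {"annex_i": 5, "annex_iii": 4}

    def latest_expiry(issued: date, annex: str) -> date:
        """Latest permissible expiry for a certificate issued on `issued`."""
        return issued.replace(year=issued.year + MAX_YEARS[annex])
    ```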
    Opt out of third party conformity assessments when the system meets harmonized standards. CC ID 15096
    [Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter. Article 43 3. ¶ 3]
    Monitoring and measurement Testing
    Define the test frequency for each testing program. CC ID 13176
    [{testing in real-world conditions} Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world conditions at any time before the placing on the market or the putting into service of the AI system on their own or in partnership with one or more deployers or prospective deployers. Article 60 2.]
    Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain a stress test program for identification cards or badges. CC ID 15424 Monitoring and measurement Establish/Maintain Documentation
    Ensure protocols are free from injection flaws. CC ID 16401 Monitoring and measurement Process or Activity
    Prevent adversaries from disabling or compromising security controls. CC ID 17057 Monitoring and measurement Technical Security
    Establish, implement, and maintain a business line testing strategy. CC ID 13245 Monitoring and measurement Establish/Maintain Documentation
    Include facilities in the business line testing strategy. CC ID 13253 Monitoring and measurement Establish/Maintain Documentation
    Include electrical systems in the business line testing strategy. CC ID 13251 Monitoring and measurement Establish/Maintain Documentation
    Include mechanical systems in the business line testing strategy. CC ID 13250 Monitoring and measurement Establish/Maintain Documentation
    Include Heating Ventilation and Air Conditioning systems in the business line testing strategy. CC ID 13248 Monitoring and measurement Establish/Maintain Documentation
    Include emergency power supplies in the business line testing strategy. CC ID 13247 Monitoring and measurement Establish/Maintain Documentation
    Include environmental controls in the business line testing strategy. CC ID 13246 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain a vulnerability management program. CC ID 15721 Monitoring and measurement Establish/Maintain Documentation
    Conduct scanning activities in a test environment. CC ID 17036 Monitoring and measurement Testing
    Disseminate and communicate the vulnerability scan results to interested personnel and affected parties. CC ID 16418 Monitoring and measurement Communicate
    Maintain vulnerability scan reports as organizational records. CC ID 12092 Monitoring and measurement Records Management
    Approve the vulnerability management program. CC ID 15722 Monitoring and measurement Process or Activity
    Assign ownership of the vulnerability management program to the appropriate role. CC ID 15723 Monitoring and measurement Establish Roles
    Document and maintain test results. CC ID 17028
    [{high-risk artificial intelligence system} A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment. Article 6 4.]
    Monitoring and measurement Testing
    Include the pass or fail test status in the test results. CC ID 17106 Monitoring and measurement Establish/Maintain Documentation
    Include time information in the test results. CC ID 17105 Monitoring and measurement Establish/Maintain Documentation
    Include a description of the system tested in the test results. CC ID 17104 Monitoring and measurement Establish/Maintain Documentation
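    The three documentation controls above (pass or fail status, time information, and a description of the system tested) can be combined into a single test-result record. A minimal sketch, with field names of this sketch's own choosing:

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative test-result record: status, timestamp, system description.
    @dataclass
    class TestResult:
        system_description: str
        executed_at: datetime
        passed: bool

        def summary(self) -> str:
            status = "PASS" if self.passed else "FAIL"
            return f"[{status}] {self.system_description} @ {self.executed_at.isoformat()}"
    ```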
    Disseminate and communicate the test results to interested personnel and affected parties. CC ID 17103
    [{fundamental rights impact assessment} Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify. Article 27 3.]
    Monitoring and measurement Communicate
    Establish, implement, and maintain a compliance monitoring policy. CC ID 00671
    [The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities. Article 72 2.]
    Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain a metrics policy. CC ID 01654 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain an approach for compliance monitoring. CC ID 01653 Monitoring and measurement Establish/Maintain Documentation
    Identify and document instances of non-compliance with the compliance framework. CC ID 06499
    [Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.]
    Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain disciplinary action notices. CC ID 16577 Monitoring and measurement Establish/Maintain Documentation
    Include a copy of the order in the disciplinary action notice. CC ID 16606 Monitoring and measurement Establish/Maintain Documentation
    Include the sanctions imposed in the disciplinary action notice. CC ID 16599 Monitoring and measurement Establish/Maintain Documentation
    Include the effective date of the sanctions in the disciplinary action notice. CC ID 16589 Monitoring and measurement Establish/Maintain Documentation
    Include the requirements that were violated in the disciplinary action notice. CC ID 16588 Monitoring and measurement Establish/Maintain Documentation
    Include responses to charges from interested personnel and affected parties in the disciplinary action notice. CC ID 16587 Monitoring and measurement Establish/Maintain Documentation
    Include the reasons for imposing sanctions in the disciplinary action notice. CC ID 16586 Monitoring and measurement Establish/Maintain Documentation
    Disseminate and communicate the disciplinary action notice to interested personnel and affected parties. CC ID 16585 Monitoring and measurement Communicate
    Include required information in the disciplinary action notice. CC ID 16584 Monitoring and measurement Establish/Maintain Documentation
    Include a justification for actions taken in the disciplinary action notice. CC ID 16583 Monitoring and measurement Establish/Maintain Documentation
    Include a statement on the conclusions of the investigation in the disciplinary action notice. CC ID 16582 Monitoring and measurement Establish/Maintain Documentation
    Include the investigation results in the disciplinary action notice. CC ID 16581 Monitoring and measurement Establish/Maintain Documentation
    Include a description of the causes of the actions taken in the disciplinary action notice. CC ID 16580 Monitoring and measurement Establish/Maintain Documentation
    Include the name of the person responsible for the charges in the disciplinary action notice. CC ID 16579 Monitoring and measurement Establish/Maintain Documentation
    Include contact information in the disciplinary action notice. CC ID 16578 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain occupational health and safety management metrics program. CC ID 15915 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain a privacy metrics program. CC ID 15494 Monitoring and measurement Establish/Maintain Documentation
    Establish, implement, and maintain environmental management system performance metrics. CC ID 15191 Monitoring and measurement Actionable Reports or Measurements
    Establish, implement, and maintain waste management metrics. CC ID 16152 Monitoring and measurement Actionable Reports or Measurements
    Establish, implement, and maintain emissions management metrics. CC ID 16145 Monitoring and measurement Actionable Reports or Measurements
    Establish, implement, and maintain financial management metrics. CC ID 16749 Monitoring and measurement Actionable Reports or Measurements
    Delay the reporting of incident management metrics, as necessary. CC ID 15501 Monitoring and measurement Communicate
    Establish, implement, and maintain an Electronic Health Records measurement metrics program. CC ID 06221 Monitoring and measurement Establish/Maintain Documentation
    Include transfer procedures in the log management program. CC ID 17077 Monitoring and measurement Establish/Maintain Documentation
    Restrict access to logs to authorized individuals. CC ID 01342
    [Upon a reasoned request by a competent authority, providers shall also give the requesting competent authority, as applicable, access to the automatically generated logs of the high-risk AI system referred to in Article 12(1), to the extent such logs are under their control. Article 21 2.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider; Article 22 3.(c)]
    Monitoring and measurement Log Management
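    The control above restricts log access to authorised individuals, while Article 21 2. obliges providers to give a competent authority access upon a reasoned request. A minimal access-check sketch follows; the role names are invented for illustration.

    ```python
    # Illustrative log-access gate: authorised roles only, plus a competent
    # authority acting on a reasoned request (Article 21 2.). Role names
    # are assumptions of this sketch.
    AUTHORISED_ROLES = {"provider_admin", "audit_officer"}

    def may_access_logs(role: str, reasoned_request: bool = False) -> bool:
        if role == "competent_authority":
            return reasoned_request  # access only upon a reasoned request
        return role in AUTHORISED_ROLES
    ```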
    Establish, implement, and maintain a cross-organizational audit sharing agreement. CC ID 10595 Monitoring and measurement Establish/Maintain Documentation
    Include a commitment to cooperate with applicable statutory bodies in the Statement of Compliance. CC ID 12370
    [Importers shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the risks posed by it. Article 23 7.
    Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation. Article 25 2.
    Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation. Article 26 12.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to the high-risk AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system; Article 22 3.(d)
    Distributors shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system made available on the market by the distributors, in particular to reduce or mitigate the risk posed by it. Article 24 6.
    Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national competent authorities in the exercise of their competences and powers pursuant to this Regulation. Article 53 3.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to the general-purpose AI model, including when the model is integrated into AI systems placed on the market or put into service in the Union. Article 54 3.(d)
    The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing the competent authorities of such action. Article 73 6. ¶ 2]
    Audits and risk management Establish/Maintain Documentation
    Accept the attestation engagement when all preconditions are met. CC ID 13933 Audits and risk management Business Processes
    Review documentation to determine the effectiveness of in scope controls. CC ID 16522 Audits and risk management Process or Activity
    Establish, implement, and maintain a risk management program. CC ID 12051
    [A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. Article 9 1.]
    Audits and risk management Establish/Maintain Documentation
    Include the scope of risk management activities in the risk management program. CC ID 13658 Audits and risk management Establish/Maintain Documentation
    Integrate the risk management program with the organization's business activities. CC ID 13661 Audits and risk management Business Processes
    Integrate the risk management program into daily business decision-making. CC ID 13659 Audits and risk management Business Processes
    Include managing mobile risks in the risk management program. CC ID 13535 Audits and risk management Establish/Maintain Documentation
    Take into account if the system will be accessed by or have an impact on children in the risk management program. CC ID 14992
    [When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups. Article 9 9.]
    Audits and risk management Audits and Risk Management
    Include regular updating in the risk management system. CC ID 14990
    [{continuous life cycle} The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: Article 9 2.]
    Audits and risk management Business Processes
    Establish, implement, and maintain a risk management policy. CC ID 17192 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain risk management strategies. CC ID 13209
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a). Article 9 2.(d)]
    Audits and risk management Establish/Maintain Documentation
    Include off-site storage of supplies in the risk management strategies. CC ID 13221 Audits and risk management Establish/Maintain Documentation
    Include data quality in the risk management strategies. CC ID 15308 Audits and risk management Data and Information Management
    Include minimizing service interruptions in the risk management strategies. CC ID 13215 Audits and risk management Establish/Maintain Documentation
    Include off-site storage in the risk mitigation strategies. CC ID 13213 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain the risk assessment framework. CC ID 00685 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain a risk assessment program. CC ID 00687 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain insurance requirements. CC ID 16562 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate insurance options to interested personnel and affected parties. CC ID 16572 Audits and risk management Communicate
    Disseminate and communicate insurance requirements to interested personnel and affected parties. CC ID 16567 Audits and risk management Communicate
    Address cybersecurity risks in the risk assessment program. CC ID 13193 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain fundamental rights impact assessments. CC ID 17217
    [Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of: Article 27 1.
    The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information. Article 27 2.]
    Audits and risk management Audits and Risk Management
    Include the categories of data used by the system in the fundamental rights impact assessment. CC ID 17248 Audits and risk management Establish/Maintain Documentation
    Include metrics in the fundamental rights impact assessment. CC ID 17249 Audits and risk management Establish/Maintain Documentation
    Include the benefits of the system in the fundamental rights impact assessment. CC ID 17244 Audits and risk management Establish/Maintain Documentation
    Include user safeguards in the fundamental rights impact assessment. CC ID 17255 Audits and risk management Establish/Maintain Documentation
    Include the outputs produced by the system in the fundamental rights impact assessment. CC ID 17247 Audits and risk management Establish/Maintain Documentation
    Include the purpose in the fundamental rights impact assessment. CC ID 17243 Audits and risk management Establish/Maintain Documentation
    Include monitoring procedures in the fundamental rights impact assessment. CC ID 17254 Audits and risk management Establish/Maintain Documentation
    Include risk management measures in the fundamental rights impact assessment. CC ID 17224
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms. Article 27 1.(f)]
    Audits and risk management Establish/Maintain Documentation
    Include human oversight measures in the fundamental rights impact assessment. CC ID 17223
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the implementation of human oversight measures, according to the instructions for use; Article 27 1.(e)]
    Audits and risk management Establish/Maintain Documentation
    Include risks in the fundamental rights impact assessment. CC ID 17222
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13; Article 27 1.(d)]
    Audits and risk management Establish/Maintain Documentation
    Include affected parties in the fundamental rights impact assessment. CC ID 17221
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: the categories of natural persons and groups likely to be affected by its use in the specific context; Article 27 1.(c)]
    Audits and risk management Establish/Maintain Documentation
    Include the frequency in the fundamental rights impact assessment. CC ID 17220
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; Article 27 1.(b)]
    Audits and risk management Establish/Maintain Documentation
    Include the usage duration in the fundamental rights impact assessment. CC ID 17219
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; Article 27 1.(b)]
    Audits and risk management Establish/Maintain Documentation
    Include system use in the fundamental rights impact assessment. CC ID 17218
    [{fundamental rights impact assessment} For that purpose, deployers shall perform an assessment consisting of: a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose; Article 27 1.(a)]
    Audits and risk management Establish/Maintain Documentation
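    The six assessment components enumerated above (Article 27 1.(a) through (f)) can be captured as a single record that a deployer completes before first use. The sketch below is illustrative only; the Regulation prescribes the content of the assessment, not any particular data structure, and all field names here are assumptions.

    ```python
    # Hypothetical record of the Article 27(1)(a)-(f) assessment components.
    # Field names are illustrative, not prescribed by the Regulation.
    from dataclasses import dataclass, field


    @dataclass
    class FundamentalRightsImpactAssessment:
        process_description: str          # (a) deployer's processes using the system
        usage_period_and_frequency: str   # (b) intended period and frequency of use
        affected_groups: list[str]        # (c) persons and groups likely affected
        specific_risks: list[str]         # (d) risks of harm to the groups in (c)
        human_oversight_measures: str     # (e) implementation of human oversight
        mitigation_measures: list[str] = field(default_factory=list)
        # (f) measures if risks materialise, incl. governance and complaint mechanisms

        def is_complete(self) -> bool:
            """All six components must be present before the first use (Article 27(1)-(2))."""
            return all([
                self.process_description,
                self.usage_period_and_frequency,
                self.affected_groups,
                self.specific_risks,
                self.human_oversight_measures,
                self.mitigation_measures,
            ])
    ```

    Under Article 27(2) the record would be revisited whenever any element changes or is no longer up to date, which is why a mutable structure rather than a one-off report may be a reasonable design choice.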
    Establish, implement, and maintain Data Protection Impact Assessments. CC ID 14830
    [Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680. Article 26 9.]
    Audits and risk management Process or Activity
    Include an assessment of the relationship between the data subject and the parties processing the data in the Data Protection Impact Assessment. CC ID 16371 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the Data Protection Impact Assessment to interested personnel and affected parties. CC ID 15313 Audits and risk management Communicate
    Include consideration of the data subject's expectations in the Data Protection Impact Assessment. CC ID 16370 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain a risk assessment policy. CC ID 14026 Audits and risk management Establish/Maintain Documentation
    Include compliance requirements in the risk assessment policy. CC ID 14121 Audits and risk management Establish/Maintain Documentation
    Include coordination amongst entities in the risk assessment policy. CC ID 14120 Audits and risk management Establish/Maintain Documentation
    Include management commitment in the risk assessment policy. CC ID 14119 Audits and risk management Establish/Maintain Documentation
    Include roles and responsibilities in the risk assessment policy. CC ID 14118 Audits and risk management Establish/Maintain Documentation
    Include the scope in the risk assessment policy. CC ID 14117 Audits and risk management Establish/Maintain Documentation
    Include the purpose in the risk assessment policy. CC ID 14116 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the risk assessment policy to interested personnel and affected parties. CC ID 14115 Audits and risk management Communicate
    Analyze the organization's information security environment. CC ID 13122 Audits and risk management Technical Security
    Engage appropriate parties to assist with risk assessments, as necessary. CC ID 12153 Audits and risk management Human Resources Management
    Employ risk assessment procedures that take into account risk factors. CC ID 16560 Audits and risk management Audits and Risk Management
    Include risks to critical personnel and assets in the threat and risk classification scheme. CC ID 00698
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; Article 9 2.(a)]
    Audits and risk management Audits and Risk Management
    Approve the threat and risk classification scheme. CC ID 15693 Audits and risk management Business Processes
    Disseminate and communicate the risk assessment procedures to interested personnel and affected parties. CC ID 14136 Audits and risk management Communicate
    Perform risk assessments for all target environments, as necessary. CC ID 06452
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; Article 55 1.(b)]
    Audits and risk management Testing
    Include the probability and potential impact of pandemics in the scope of the risk assessment. CC ID 13241 Audits and risk management Establish/Maintain Documentation
    Document any reasons for modifying or refraining from modifying the organization's risk assessment when the risk assessment has been reviewed. CC ID 13312 Audits and risk management Establish/Maintain Documentation
    Create a risk assessment report based on the risk assessment results. CC ID 15695 Audits and risk management Establish/Maintain Documentation
    Notify the organization upon completion of the external audits of the organization's risk assessment. CC ID 13313 Audits and risk management Communicate
    Establish, implement, and maintain a risk assessment awareness and training program. CC ID 06453
    [In identifying the most appropriate risk management measures, the following shall be ensured: provision of information required pursuant to Article 13 and, where appropriate, training to deployers. Article 9 5. ¶ 2 (c)]
    Audits and risk management Business Processes
    Disseminate and communicate information about risks to all interested personnel and affected parties. CC ID 06718
    [In identifying the most appropriate risk management measures, the following shall be ensured: provision of information required pursuant to Article 13 and, where appropriate, training to deployers. Article 9 5. ¶ 2 (c)
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1]
    Audits and risk management Behavior
    Include recovery of the critical path in the Business Impact Analysis. CC ID 13224 Audits and risk management Establish/Maintain Documentation
    Include acceptable levels of data loss in the Business Impact Analysis. CC ID 13264 Audits and risk management Establish/Maintain Documentation
    Include Recovery Point Objectives in the Business Impact Analysis. CC ID 13223 Audits and risk management Establish/Maintain Documentation
    Include the Recovery Time Objectives in the Business Impact Analysis. CC ID 13222 Audits and risk management Establish/Maintain Documentation
    Include pandemic risks in the Business Impact Analysis. CC ID 13219 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the Business Impact Analysis to interested personnel and affected parties. CC ID 15300 Audits and risk management Communicate
    Establish, implement, and maintain a risk register. CC ID 14828 Audits and risk management Establish/Maintain Documentation
    Analyze and quantify the risks to in scope systems and information. CC ID 00701
    [The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose; Article 9 2.(a)
    The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72; Article 9 2.(c)
    The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse; Article 9 2.(b)
    Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken. Article 20 2.]
    Audits and risk management Audits and Risk Management
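    The four steps quoted above from Article 9(2) — (a) identification and analysis of foreseeable risks, (b) estimation and evaluation under intended use and foreseeable misuse, (c) evaluation of other risks from post-market monitoring data, and (d) adoption of targeted measures — describe a continuous iterative cycle. The following is a minimal sketch of how that cycle might be modelled; the class names, the 1-5 severity scale, and the fixed severity assigned to monitoring findings are all assumptions for illustration, not anything the Regulation specifies.

    ```python
    # Illustrative model of the Article 9(2) iterative risk management cycle.
    # Severity scale and method names are assumptions, not regulatory terms.
    from dataclasses import dataclass, field


    @dataclass
    class Risk:
        description: str
        severity: int                       # illustrative 1-5 scale
        measures: list[str] = field(default_factory=list)


    @dataclass
    class RiskManagementSystem:
        register: list[Risk] = field(default_factory=list)

        def identify(self, description: str, severity: int) -> None:
            """Step (a): record a known or reasonably foreseeable risk."""
            self.register.append(Risk(description, severity))

        def evaluate_post_market(self, monitoring_findings: list[str]) -> None:
            """Step (c): feed post-market monitoring data back into the cycle."""
            for finding in monitoring_findings:
                self.identify(finding, severity=3)  # assumed default severity

        def adopt_measures(self, description: str, measures: list[str]) -> None:
            """Step (d): attach targeted measures to an identified risk."""
            for risk in self.register:
                if risk.description == description:
                    risk.measures.extend(measures)

        def needs_review(self) -> list[Risk]:
            """Risks still lacking measures, surfaced for the required
            'regular systematic review and updating'."""
            return [r for r in self.register if not r.measures]
    ```

    The point of the sketch is the loop: monitoring data re-enters through `identify`, so the register is never a one-time artefact but is maintained across the entire lifecycle of the high-risk AI system.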
    Approve the risk acceptance level, as necessary. CC ID 17168 Audits and risk management Process or Activity
    Document the results of the gap analysis. CC ID 16271 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain a risk treatment plan. CC ID 11983
    [In identifying the most appropriate risk management measures, the following shall be ensured: where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated; Article 9 5. ¶ 2 (b)
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk; Article 55 1.(b)]
    Audits and risk management Establish/Maintain Documentation
    Include roles and responsibilities in the risk treatment plan. CC ID 16991
    [With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used. Article 9 5. ¶ 3]
    Audits and risk management Establish/Maintain Documentation
    Include time information in the risk treatment plan. CC ID 16993 Audits and risk management Establish/Maintain Documentation
    Include allocation of resources in the risk treatment plan. CC ID 16989 Audits and risk management Establish/Maintain Documentation
    Include the date of the risk assessment in the risk treatment plan. CC ID 16321 Audits and risk management Establish/Maintain Documentation
    Include the release status of the risk assessment in the risk treatment plan. CC ID 16320 Audits and risk management Audits and Risk Management
    Include requirements for monitoring and reporting in the risk treatment plan, as necessary. CC ID 13620 Audits and risk management Establish/Maintain Documentation
    Include a description of usage in the risk treatment plan. CC ID 11977
    [With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used. Article 9 5. ¶ 3]
    Audits and risk management Establish/Maintain Documentation
    Document all constraints applied to the risk treatment plan, as necessary. CC ID 13619 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the risk treatment plan to interested personnel and affected parties. CC ID 15694 Audits and risk management Communicate
    Review and approve material risks documented in the residual risk report, as necessary. CC ID 13672
    [The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. Article 9 5. ¶ 1]
    Audits and risk management Business Processes
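    Article 9(5), quoted above, imposes two distinct judgements: the residual risk associated with each hazard must be acceptable, and so must the overall residual risk of the system. A hedged sketch of that two-level check follows; the numeric thresholds and the sum-based aggregation of overall risk are pure assumptions for illustration — the Regulation does not prescribe any scoring scheme.

    ```python
    # Hypothetical two-level acceptability check per Article 9(5), first subparagraph.
    # Thresholds and the sum() aggregation are illustrative assumptions only.
    PER_HAZARD_THRESHOLD = 2   # assumed acceptance level on a 1-5 scale
    OVERALL_THRESHOLD = 4      # assumed ceiling for aggregate residual risk


    def residual_risk_acceptable(residual_by_hazard: dict[str, int]) -> bool:
        """True only if every individual hazard AND the aggregate are acceptable."""
        per_hazard_ok = all(
            score <= PER_HAZARD_THRESHOLD for score in residual_by_hazard.values()
        )
        overall_ok = sum(residual_by_hazard.values()) <= OVERALL_THRESHOLD
        return per_hazard_ok and overall_ok
    ```

    Keeping the two conditions separate mirrors the text: a system can fail the overall judgement even when no single hazard exceeds its own threshold, so neither check subsumes the other.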
    Establish, implement, and maintain an artificial intelligence risk management program. CC ID 16220 Audits and risk management Establish/Maintain Documentation
    Include diversity and equal opportunity in the artificial intelligence risk management program. CC ID 16255 Audits and risk management Establish/Maintain Documentation
    Analyze the impact of artificial intelligence systems on business operations. CC ID 16356 Audits and risk management Business Processes
    Establish, implement, and maintain a cybersecurity risk management program. CC ID 16827 Audits and risk management Audits and Risk Management
    Include a commitment to continuous improvement in the cybersecurity risk management program. CC ID 16839 Audits and risk management Establish/Maintain Documentation
    Monitor the effectiveness of the cybersecurity risk management program. CC ID 16831 Audits and risk management Monitor and Evaluate Occurrences
    Establish, implement, and maintain a cybersecurity risk management policy. CC ID 16834 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the cybersecurity risk management policy to interested personnel and affected parties. CC ID 16832 Audits and risk management Communicate
    Disseminate and communicate the cybersecurity risk management program to interested personnel and affected parties. CC ID 16829 Audits and risk management Communicate
    Establish, implement, and maintain a cybersecurity risk management strategy. CC ID 11991
    [The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks. Article 15 5. ¶ 2]
    Audits and risk management Establish/Maintain Documentation
    Include a risk prioritization approach in the Cybersecurity Risk Management Strategy. CC ID 12276 Audits and risk management Establish/Maintain Documentation
    Include defense in depth strategies in the cybersecurity risk management strategy. CC ID 15582 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the cybersecurity risk management strategy to interested personnel and affected parties. CC ID 16825 Audits and risk management Communicate
    Acquire cyber insurance, as necessary. CC ID 12693 Audits and risk management Business Processes
    Establish, implement, and maintain a cybersecurity supply chain risk management program. CC ID 16826 Audits and risk management Establish/Maintain Documentation
    Establish, implement, and maintain cybersecurity supply chain risk management procedures. CC ID 16830 Audits and risk management Establish/Maintain Documentation
    Monitor the effectiveness of the cybersecurity supply chain risk management program. CC ID 16828 Audits and risk management Monitor and Evaluate Occurrences
    Establish, implement, and maintain a supply chain risk management policy. CC ID 14663 Audits and risk management Establish/Maintain Documentation
    Include compliance requirements in the supply chain risk management policy. CC ID 14711 Audits and risk management Establish/Maintain Documentation
    Include coordination amongst entities in the supply chain risk management policy. CC ID 14710 Audits and risk management Establish/Maintain Documentation
    Include management commitment in the supply chain risk management policy. CC ID 14709 Audits and risk management Establish/Maintain Documentation
    Include roles and responsibilities in the supply chain risk management policy. CC ID 14708 Audits and risk management Establish/Maintain Documentation
    Include the scope in the supply chain risk management policy. CC ID 14707 Audits and risk management Establish/Maintain Documentation
    Include the purpose in the supply chain risk management policy. CC ID 14706 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the supply chain risk management policy to all interested personnel and affected parties. CC ID 14662 Audits and risk management Communicate
    Establish, implement, and maintain a supply chain risk management plan. CC ID 14713 Audits and risk management Establish/Maintain Documentation
    Include processes for monitoring and reporting in the supply chain risk management plan. CC ID 15619 Audits and risk management Establish/Maintain Documentation
    Include dates in the supply chain risk management plan. CC ID 15617 Audits and risk management Establish/Maintain Documentation
    Include implementation milestones in the supply chain risk management plan. CC ID 15615 Audits and risk management Establish/Maintain Documentation
    Include roles and responsibilities in the supply chain risk management plan. CC ID 15613 Audits and risk management Establish/Maintain Documentation
    Include supply chain risk management procedures in the risk management program. CC ID 13190 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the supply chain risk management procedures to all interested personnel and affected parties. CC ID 14712 Audits and risk management Communicate
    Assign key stakeholders to review and approve supply chain risk management procedures. CC ID 13199 Audits and risk management Human Resources Management
    Disseminate and communicate the risk management policy to interested personnel and affected parties. CC ID 13792 Audits and risk management Communicate
    Establish, implement, and maintain a disclosure report. CC ID 15521 Audits and risk management Establish/Maintain Documentation
    Disseminate and communicate the disclosure report to interested personnel and affected parties. CC ID 15667
    [{market surveillance authority} Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment. Article 26 10. ¶ 6]
    Audits and risk management Communicate
    Establish, implement, and maintain a digital identity management program. CC ID 13713 Technical security Establish/Maintain Documentation
    Establish, implement, and maintain an authorized representatives policy. CC ID 13798
    [Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. Article 22 1.]
    Technical security Establish/Maintain Documentation
    Include authorized representative life cycle management requirements in the authorized representatives policy. CC ID 13802 Technical security Establish/Maintain Documentation
    Include termination procedures in the authorized representatives policy. CC ID 17226
    [The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor. Article 22 4.
    The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor. Article 54 5.]
    Technical security Establish/Maintain Documentation
    Include any necessary restrictions for the authorized representative in the authorized representatives policy. CC ID 13801 Technical security Establish/Maintain Documentation
    Include suspension requirements for authorized representatives in the authorized representatives policy. CC ID 13800 Technical security Establish/Maintain Documentation
    Include the authorized representative's life span in the authorized representatives policy. CC ID 13799 Technical security Establish/Maintain Documentation
    Grant access to authorized personnel or systems. CC ID 12186
    [Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled: access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Chapter III, Section 2; and Article 74 13.(a)
    {testing procedures} Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when both of the following conditions are fulfilled: testing or auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient. Article 74 13.(b)
    The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned. Article 92 5.]
    Technical security Configuration
    Disseminate and communicate the access control log to interested personnel and affected parties. CC ID 16442 Technical security Communicate
    Include the user identifiers of all personnel who are authorized to access a system in the system record. CC ID 15171 Technical security Establish/Maintain Documentation
    Include identity information of all personnel who are authorized to access a system in the system record. CC ID 16406 Technical security Establish/Maintain Documentation
    Include the user's location in the system record. CC ID 16996 Technical security Log Management
    Include the date and time that access was reviewed in the system record. CC ID 16416 Technical security Data and Information Management
    Include the date and time that access rights were changed in the system record. CC ID 16415 Technical security Establish/Maintain Documentation
    Control all methods of remote access and teleworking. CC ID 00559
    [{training data} {validation data} {testing data} Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other relevant technical means and tools enabling remote access. Article 74 12.]
    Technical security Technical Security
    Assign virtual escorting to authorized personnel. CC ID 16440 Technical security Process or Activity
    Include information security requirements in the remote access and teleworking program. CC ID 15704 Technical security Establish/Maintain Documentation
    Refrain from allowing individuals to self-enroll into multifactor authentication from untrusted devices. CC ID 17173 Technical security Technical Security
    Implement phishing-resistant multifactor authentication techniques. CC ID 16541 Technical security Technical Security
    Document and approve requests to bypass multifactor authentication. CC ID 15464 Technical security Establish/Maintain Documentation
    Limit the source addresses from which remote administration is performed. CC ID 16393 Technical security Technical Security
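CC ID 16393 above can be implemented as a source-address allowlist check. A minimal sketch using only the Python standard library — the network ranges are illustrative placeholders, not recommended values:

```python
import ipaddress

# Illustrative allowlist of approved management networks; real values are site-specific.
ADMIN_SOURCES = [ipaddress.ip_network(n) for n in ("10.0.42.0/24", "203.0.113.8/32")]

def remote_admin_allowed(source_ip: str) -> bool:
    """Permit remote administration only from an approved source network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ADMIN_SOURCES)
```

In practice this check belongs at the network boundary (firewall rules, bastion host) rather than in application code; the function only illustrates the decision logic.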
    Establish, implement, and maintain a facility physical security program. CC ID 00711
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. Article 55 1.(d)]
    Physical and environmental protection Establish/Maintain Documentation
    Establish, implement, and maintain opening procedures for businesses. CC ID 16671 Physical and environmental protection Establish/Maintain Documentation
    Establish, implement, and maintain closing procedures for businesses. CC ID 16670 Physical and environmental protection Establish/Maintain Documentation
    Establish and maintain a contract for accessing facilities that transmit, process, or store restricted data. CC ID 12050 Physical and environmental protection Establish/Maintain Documentation
    Refrain from providing access to facilities that transmit, process, or store restricted data until the contract for accessing the facility is signed. CC ID 12311 Physical and environmental protection Behavior
    Include identification cards or badges in the physical security program. CC ID 14818 Physical and environmental protection Establish/Maintain Documentation
    Implement audio protection controls on telephone systems in controlled areas. CC ID 16455 Physical and environmental protection Technical Security
    Establish, implement, and maintain security procedures for virtual meetings. CC ID 15581 Physical and environmental protection Establish/Maintain Documentation
    Create security zones in facilities, as necessary. CC ID 16295 Physical and environmental protection Physical and Environmental Protection
    Establish, implement, and maintain floor plans. CC ID 16419 Physical and environmental protection Establish/Maintain Documentation
    Include network infrastructure and cabling infrastructure on the floor plan. CC ID 16420 Physical and environmental protection Establish/Maintain Documentation
    Post floor plans of critical facilities in secure locations. CC ID 16138 Physical and environmental protection Communicate
    Configure the access control system to grant access only during authorized working hours. CC ID 12325 Physical and environmental protection Physical and Environmental Protection
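The time-of-day restriction in CC ID 12325 reduces to a window check against the configured working hours. A sketch — the hours shown are illustrative, not drawn from the catalog:

```python
from datetime import datetime, time

# Illustrative authorized working hours; actual hours come from the access policy.
WORKING_HOURS = (time(7, 0), time(19, 0))

def within_authorized_hours(moment: datetime) -> bool:
    """Grant access only between the configured opening and closing times."""
    start, end = WORKING_HOURS
    return start <= moment.time() <= end
```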
    Log the individual's address in the facility access list. CC ID 16921 Physical and environmental protection Log Management
    Log the contact information for the person authorizing access in the facility access list. CC ID 16920 Physical and environmental protection Log Management
    Log the organization's name in the facility access list. CC ID 16919 Physical and environmental protection Log Management
    Log the individual's name in the facility access list. CC ID 16918 Physical and environmental protection Log Management
    Log the purpose in the facility access list. CC ID 16982 Physical and environmental protection Log Management
    Log the level of access in the facility access list. CC ID 16975 Physical and environmental protection Log Management
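The six facility-access-list controls above each name one field to log. A sketch assembling them into a single row — the key names and example values are illustrative:

```python
from datetime import datetime, timezone

def facility_access_entry(name, address, organization, authorizer_contact,
                          purpose, access_level):
    """Assemble one facility-access-list row covering the logged fields above."""
    return {
        "name": name,                         # individual's name (CC ID 16918)
        "address": address,                   # individual's address (CC ID 16921)
        "organization": organization,         # organization's name (CC ID 16919)
        "authorized_by": authorizer_contact,  # contact of the authorizer (CC ID 16920)
        "purpose": purpose,                   # purpose of access (CC ID 16982)
        "access_level": access_level,         # level of access (CC ID 16975)
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

entry = facility_access_entry(
    "Jane Doe", "1 Example Street", "ACME Ltd",
    "security-manager@example.com", "scheduled maintenance", "escorted",
)
```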
    Disallow opting out of wearing or displaying identification cards or badges. CC ID 13030 Physical and environmental protection Human Resources Management
    Implement physical identification processes. CC ID 13715 Physical and environmental protection Process or Activity
    Refrain from using knowledge-based authentication for in-person identity verification. CC ID 13717 Physical and environmental protection Process or Activity
    Issue photo identification badges to all employees. CC ID 12326 Physical and environmental protection Physical and Environmental Protection
    Establish, implement, and maintain lost or damaged identification card procedures, as necessary. CC ID 14819 Physical and environmental protection Establish/Maintain Documentation
    Report lost badges, stolen badges, and broken badges to the Security Manager. CC ID 12334 Physical and environmental protection Physical and Environmental Protection
    Direct each employee to be responsible for their identification card or badge. CC ID 12332 Physical and environmental protection Human Resources Management
    Include name, date of entry, and validity period on disposable identification cards or badges. CC ID 12331 Physical and environmental protection Physical and Environmental Protection
    Include error handling controls in identification issuance procedures. CC ID 13709 Physical and environmental protection Establish/Maintain Documentation
    Include an appeal process in the identification issuance procedures. CC ID 15428 Physical and environmental protection Business Processes
    Include information security in the identification issuance procedures. CC ID 15425 Physical and environmental protection Establish/Maintain Documentation
    Establish, implement, and maintain post-issuance update procedures for identification cards or badges. CC ID 15426 Physical and environmental protection Establish/Maintain Documentation
    Enforce dual control for badge assignments. CC ID 12328 Physical and environmental protection Physical and Environmental Protection
    Enforce dual control for accessing unassigned identification cards or badges. CC ID 12327 Physical and environmental protection Physical and Environmental Protection
    Refrain from imprinting the company name or company logo on identification cards or badges. CC ID 12282 Physical and environmental protection Physical and Environmental Protection
    Assign employees the responsibility for controlling their identification badges. CC ID 12333 Physical and environmental protection Human Resources Management
    Charge a fee for replacement of identification cards or badges, as necessary. CC ID 17001 Physical and environmental protection Business Processes
    Establish, implement, and maintain a door security standard. CC ID 06686 Physical and environmental protection Establish/Maintain Documentation
    Restrict physical access mechanisms to authorized parties. CC ID 16924 Physical and environmental protection Process or Activity
    Establish, implement, and maintain a window security standard. CC ID 06689 Physical and environmental protection Establish/Maintain Documentation
    Use vandal resistant light fixtures for all security lighting. CC ID 16130 Physical and environmental protection Physical and Environmental Protection
    Establish, implement, and maintain a camera operating policy. CC ID 15456 Physical and environmental protection Establish/Maintain Documentation
    Disseminate and communicate the camera installation policy to interested personnel and affected parties. CC ID 15461 Physical and environmental protection Communicate
    Record the purpose of the visit in the visitor log. CC ID 16917 Physical and environmental protection Log Management
    Record the date and time of entry in the visitor log. CC ID 13255 Physical and environmental protection Establish/Maintain Documentation
    Record the date and time of departure in the visitor log. CC ID 16897 Physical and environmental protection Log Management
    Record the type of identification used in the visitor log. CC ID 16916 Physical and environmental protection Log Management
    Log the exit of a staff member to a facility or designated rooms within the facility. CC ID 11675 Physical and environmental protection Monitor and Evaluate Occurrences
    Include the requestor's name in the physical access log. CC ID 16922 Physical and environmental protection Log Management
    Physically segregate business areas in accordance with organizational standards. CC ID 16718 Physical and environmental protection Physical and Environmental Protection
    Execute fail-safe procedures when an emergency occurs. CC ID 07108
    [{backup plans} The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans. Article 15 4. ¶ 2]
    Operational and Systems Continuity Systems Continuity
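Article 15(4)'s "backup or fail-safe plans" can be illustrated with a minimal redundancy wrapper. The function names below are hypothetical stand-ins, not anything the Regulation specifies:

```python
def robust_predict(primary, fallback, x):
    """Try the primary model; on failure, hand over to a redundant backup path."""
    try:
        return primary(x)
    except Exception:
        return fallback(x)  # fail-safe plan: degraded but available output

def flaky_model(x):
    raise RuntimeError("primary unavailable")

def backup_model(x):
    return "safe default"

result = robust_predict(flaky_model, backup_model, 42)  # falls back to "safe default"
```

Real fail-safe design would also log the failover and alert an operator; the wrapper only shows the redundancy principle.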
    Establish, implement, and maintain high level operational roles and responsibilities. CC ID 00806 Human Resources management Establish Roles
    Define and assign the authorized representative's roles and responsibilities. CC ID 15033
    [The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. Article 22 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 22 3.
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the provider has appointed an authorised representative in accordance with Article 22(1). Article 23 1.(d)
    Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union. Article 54 1.
    The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider. Article 54 2.
    The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: Article 54 3.]
    Human Resources management Human Resources Management
    Train all personnel and third parties, as necessary. CC ID 00785
    [Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used. Article 4 ¶ 1]
    Human Resources management Behavior
    Provide new hires limited network access to complete computer-based training. CC ID 17008 Human Resources management Training
    Include evidence of experience in applications for professional certification. CC ID 16193 Human Resources management Establish/Maintain Documentation
    Include supporting documentation in applications for professional certification. CC ID 16195 Human Resources management Establish/Maintain Documentation
    Submit applications for professional certification. CC ID 16192 Human Resources management Training
    Hire third parties to conduct training, as necessary. CC ID 13167 Human Resources management Human Resources Management
    Approve training plans, as necessary. CC ID 17193 Human Resources management Training
    Include prevention techniques in training regarding diseases or sicknesses. CC ID 14387 Human Resources management Training
    Include the concept of disease clusters when training to recognize conditions of diseases or sicknesses. CC ID 14386 Human Resources management Training
    Establish, implement, and maintain a list of tasks to include in the training plan. CC ID 17101 Human Resources management Training
    Designate training facilities in the training plan. CC ID 16200 Human Resources management Training
    Include portions of the visitor control program in the training plan. CC ID 13287 Human Resources management Establish/Maintain Documentation
    Include insider threats in the security awareness program. CC ID 16963 Human Resources management Training
    Conduct personal data processing training. CC ID 13757 Human Resources management Training
    Include in personal data processing training how to provide the contact information for the categories of personal data the organization may disclose. CC ID 13758 Human Resources management Training
    Complete security awareness training prior to being granted access to information systems or data. CC ID 17009 Human Resources management Training
    Establish, implement, and maintain a security awareness and training policy. CC ID 14022 Human Resources management Establish/Maintain Documentation
    Include compliance requirements in the security awareness and training policy. CC ID 14092 Human Resources management Establish/Maintain Documentation
    Include coordination amongst entities in the security awareness and training policy. CC ID 14091 Human Resources management Establish/Maintain Documentation
    Establish, implement, and maintain security awareness and training procedures. CC ID 14054 Human Resources management Establish/Maintain Documentation
    Disseminate and communicate the security awareness and training procedures to interested personnel and affected parties. CC ID 14138 Human Resources management Communicate
    Include management commitment in the security awareness and training policy. CC ID 14049 Human Resources management Establish/Maintain Documentation
    Include roles and responsibilities in the security awareness and training policy. CC ID 14048 Human Resources management Establish/Maintain Documentation
    Include the scope in the security awareness and training policy. CC ID 14047 Human Resources management Establish/Maintain Documentation
    Include the purpose in the security awareness and training policy. CC ID 14045 Human Resources management Establish/Maintain Documentation
    Include configuration management procedures in the security awareness program. CC ID 13967 Human Resources management Establish/Maintain Documentation
    Include media protection in the security awareness program. CC ID 16368 Human Resources management Training
    Document security awareness requirements. CC ID 12146 Human Resources management Establish/Maintain Documentation
    Include identity and access management in the security awareness program. CC ID 17013 Human Resources management Training
    Include the encryption process in the security awareness program. CC ID 17014 Human Resources management Training
    Include physical security in the security awareness program. CC ID 16369 Human Resources management Training
    Include data management in the security awareness program. CC ID 17010 Human Resources management Training
    Include e-mail and electronic messaging in the security awareness program. CC ID 17012 Human Resources management Training
    Include updates on emerging issues in the security awareness program. CC ID 13184 Human Resources management Training
    Include cybersecurity in the security awareness program. CC ID 13183 Human Resources management Training
    Include implications of non-compliance in the security awareness program. CC ID 16425 Human Resources management Training
    Include social networking in the security awareness program. CC ID 17011 Human Resources management Training
    Include the acceptable use policy in the security awareness program. CC ID 15487 Human Resources management Training
    Include a requirement to train all new hires and interested personnel in the security awareness program. CC ID 11800 Human Resources management Establish/Maintain Documentation
    Include remote access in the security awareness program. CC ID 13892 Human Resources management Establish/Maintain Documentation
    Document the goals of the security awareness program. CC ID 12145 Human Resources management Establish/Maintain Documentation
    Compare current security awareness assessment reports to the security awareness baseline. CC ID 12150 Human Resources management Establish/Maintain Documentation
    Establish and maintain a steering committee to guide the security awareness program. CC ID 12149 Human Resources management Human Resources Management
    Document the scope of the security awareness program. CC ID 12148 Human Resources management Establish/Maintain Documentation
    Establish, implement, and maintain a security awareness baseline. CC ID 12147 Human Resources management Establish/Maintain Documentation
    Encourage interested personnel to obtain security certification. CC ID 11804 Human Resources management Human Resources Management
    Train all personnel and third parties on how to recognize and report system failures. CC ID 13475 Human Resources management Training
    Establish, implement, and maintain an environmental management system awareness program. CC ID 15200 Human Resources management Establish/Maintain Documentation
    Establish, implement, and maintain a Code of Conduct. CC ID 04897
    [Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations representing them or by both, including with the involvement of any interested stakeholders and their representative organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems. Article 95 3.]
    Human Resources management Establish/Maintain Documentation
    Establish, implement, and maintain a code of conduct for financial recommendations. CC ID 16649 Human Resources management Establish/Maintain Documentation
    Include anti-coercion requirements and anti-tying requirements in the Code of Conduct. CC ID 16720 Human Resources management Establish/Maintain Documentation
    Include limitations on referrals for products and services in the Code of Conduct. CC ID 16719 Human Resources management Behavior
    Include classifications of ethics violations in the Code of Conduct. CC ID 14769 Human Resources management Establish/Maintain Documentation
    Include definitions of ethics violations in the Code of Conduct. CC ID 14768 Human Resources management Establish/Maintain Documentation
    Include exercising due professional care in the Code of Conduct. CC ID 14210 Human Resources management Establish/Maintain Documentation
    Include health and safety provisions in the Code of Conduct. CC ID 16206 Human Resources management Establish/Maintain Documentation
    Include responsibilities to the public trust in the Code of Conduct. CC ID 14209 Human Resources management Establish/Maintain Documentation
    Include environmental responsibility criteria in the Code of Conduct. CC ID 16209 Human Resources management Establish/Maintain Documentation
    Include social responsibility criteria in the Code of Conduct. CC ID 16210 Human Resources management Establish/Maintain Documentation
    Include labor rights criteria in the Code of Conduct. CC ID 16208 Human Resources management Establish/Maintain Documentation
    Include the employee's legal responsibilities and rights in the Terms and Conditions of employment. CC ID 15701 Human Resources management Establish/Maintain Documentation
    Establish, implement, and maintain a Governance, Risk, and Compliance framework. CC ID 01406 Operational management Establish/Maintain Documentation
    Analyze the effect of the Governance, Risk, and Compliance capability to achieve organizational objectives. CC ID 12809
    [{is transparent} High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3. Article 13 1.]
    Operational management Audits and Risk Management
    Establish, implement, and maintain an information security program. CC ID 00812 Operational management Establish/Maintain Documentation
    Establish, implement, and maintain operational control procedures. CC ID 00831 Operational management Establish/Maintain Documentation
    Include alternative actions in the operational control procedures. CC ID 17096
    [Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 53 4.
    Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 55 2.]
    Operational management Establish/Maintain Documentation
    Establish, implement, and maintain a Standard Operating Procedures Manual. CC ID 00826
    [{is accessible} {is comprehensible} High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers. Article 13 2.]
    Operational management Establish/Maintain Documentation
    Use systems in accordance with the standard operating procedures manual. CC ID 15049
    [Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 53 4.]
    Operational management Process or Activity
    Include system use information in the standard operating procedures manual. CC ID 17240 Operational management Establish/Maintain Documentation
    Include metrics in the standard operating procedures manual. CC ID 14988
    [The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use. Article 15 3.]
    Operational management Establish/Maintain Documentation
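Article 15(3)'s declared accuracy levels and metrics could be recorded as a machine-readable fragment of the instructions for use. All keys and values below are illustrative only — the Regulation does not prescribe a schema:

```python
import json

# Hypothetical metrics block for a high-risk AI system's instructions for use.
instructions_for_use_metrics = {
    "accuracy": {
        "declared_level": 0.94,            # illustrative declared accuracy level
        "metrics": {                       # relevant accuracy metrics
            "f1_macro": 0.91,
            "precision": 0.93,
            "recall": 0.90,
        },
        "evaluation_conditions": "held-out test set, model v1.2",
    }
}

serialized = json.dumps(instructions_for_use_metrics, indent=2)
```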
    Include maintenance measures in the standard operating procedures manual. CC ID 14986
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Operational management Establish/Maintain Documentation
    Include logging procedures in the standard operating procedures manual. CC ID 17214
    [The instructions for use shall contain at least the following information: where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to properly collect, store and interpret the logs in accordance with Article 12. Article 13 3.(f)]
    Operational management Establish/Maintain Documentation
    Include the expected lifetime of the system in the standard operating procedures manual. CC ID 14984
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Operational management Establish/Maintain Documentation
    Include resources in the standard operating procedures manual. CC ID 17212
    [The instructions for use shall contain at least the following information: the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including as regards software updates; Article 13 3.(e)]
    Operational management Establish/Maintain Documentation
    Include technical measures used to interpret output in the standard operating procedures manual. CC ID 14982
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output; Article 13 3.(b)(iv)
    The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately; Article 13 3.(b)(vii)
    The instructions for use shall contain at least the following information: the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers; Article 13 3.(d)]
    Operational management Establish/Maintain Documentation
    Include human oversight measures in the standard operating procedures manual. CC ID 17213
    [The instructions for use shall contain at least the following information: the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of the high-risk AI systems by the deployers; Article 13 3.(d)]
    Operational management Establish/Maintain Documentation
    Include predetermined changes in the standard operating procedures manual. CC ID 14977
    [The instructions for use shall contain at least the following information: the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any; Article 13 3.(c)]
    Operational management Establish/Maintain Documentation
    Include specifications for input data in the standard operating procedures manual. CC ID 14975
    [{training data} {validation data} {testing data} The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the high-risk AI system; Article 13 3.(b)(vi)
    Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system. Article 26 4.]
    Operational management Establish/Maintain Documentation
    Include risks to health and safety or fundamental rights in the standard operating procedures manual. CC ID 14973
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights referred to in Article 9(2); Article 13 3.(b)(iii)]
    Operational management Establish/Maintain Documentation
    Include circumstances that may impact the system in the standard operating procedures manual. CC ID 14972
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; Article 13 3.(b)(ii)]
    Operational management Establish/Maintain Documentation
    Include what the system was tested and validated for in the standard operating procedures manual. CC ID 14969
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity; Article 13 3.(b)(ii)]
    Operational management Establish/Maintain Documentation
    Adhere to operating procedures as defined in the Standard Operating Procedures Manual. CC ID 06328
    [{technical measures} Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6. Article 26 1.]
    Operational management Business Processes
    Include the intended purpose in the standard operating procedures manual. CC ID 14967
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: its intended purpose; Article 13 3.(b)(i)]
    Operational management Establish/Maintain Documentation
    Include information on system performance in the standard operating procedures manual. CC ID 14965
    [The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: Article 13 3.(b)
    The instructions for use shall contain at least the following information: the characteristics, capabilities and limitations of performance of the high-risk AI system, including: when appropriate, its performance regarding specific persons or groups of persons on which the system is intended to be used; Article 13 3.(b)(v)]
    Operational management Establish/Maintain Documentation
    Include contact details in the standard operating procedures manual. CC ID 14962
    [The instructions for use shall contain at least the following information: the identity and the contact details of the provider and, where applicable, of its authorised representative; Article 13 3.(a)]
    Operational management Establish/Maintain Documentation
    Establish, implement, and maintain information sharing agreements. CC ID 15645 Operational management Business Processes
    Provide support for information sharing activities. CC ID 15644 Operational management Process or Activity
    Include that explicit management authorization must be given for the use of all technologies and their documentation in the Acceptable Use Policy. CC ID 01351
    [The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities. Article 46 3.]
    Operational management Establish/Maintain Documentation
    Establish, implement, and maintain an Intellectual Property Right program. CC ID 00821
    [Providers of general-purpose AI models shall: put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; Article 53 1.(c)]
    Operational management Establish/Maintain Documentation
    Establish, implement, and maintain Intellectual Property Rights protection procedures. CC ID 11512
    [Providers of general-purpose AI models shall: put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790; Article 53 1.(c)]
    Operational management Establish/Maintain Documentation
    Implement and comply with the Governance, Risk, and Compliance framework. CC ID 00818 Operational management Business Processes
    Comply with all implemented policies in the organization's compliance framework. CC ID 06384
    [In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. Article 8 1.
    Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to their product into documentation and procedures that already exist and are required under the Union harmonisation legislation listed in Section A of Annex I. Article 8 2.
    Providers of high-risk AI systems shall: ensure that their high-risk AI systems are compliant with the requirements set out in Section 2; Article 16 ¶ 1 (a)
    Providers of high-risk AI systems shall: comply with the registration obligations referred to in Article 49(1); Article 16 ¶ 1 (i)
    Providers of high-risk AI systems shall: ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882. Article 16 ¶ 1 (l)
    {quality management system} The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation. Article 17 2.
    For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the obligation to put in place a quality management system, with the exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services law. To that end, any harmonised standards referred to in Article 40 shall be taken into account. Article 17 4.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct. Article 22 3.(e)
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in of Chapter V, Sections 2 and 3, of this Regulation, to the extent that those standards cover those requirements or obligations. Article 40 1.
    High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations. Article 41 3.
    High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to comply with the relevant requirements laid down in Article 10(4). Article 42 1.
    High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements. Article 42 2.
    {keep up to date} By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate. Article 47 4.
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and Article 53 1.(b)(i)
    Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission. Article 55 2.
    For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law. Article 26 5. ¶ 2]
    Operational management Establish/Maintain Documentation
    Classify assets according to the Asset Classification Policy. CC ID 07186
    [A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; Article 51 1.(a)
    A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the following conditions: based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII. Article 51 1.(b)]
    Operational management Establish Roles
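The two classification conditions of Article 51(1) quoted above can be sketched as a simple check. This is an illustrative helper only: the dataclass, field names, and function are hypothetical, and the 10^25 FLOP presumption threshold is the one stated in Article 51(2) for high-impact capabilities.

```python
# Hypothetical sketch of the Article 51(1) classification logic.
# Names and structure are illustrative, not taken from the Regulation.
from dataclasses import dataclass

# Article 51(2): high-impact capabilities are presumed when the cumulative
# amount of computation used for training exceeds 10^25 FLOPs.
TRAINING_COMPUTE_THRESHOLD_FLOP = 1e25


@dataclass
class GPAIModel:
    training_compute_flop: float
    commission_designation: bool = False  # Article 51(1)(b) Commission decision


def has_systemic_risk(model: GPAIModel) -> bool:
    """True if either Article 51(1) condition is met."""
    high_impact = model.training_compute_flop > TRAINING_COMPUTE_THRESHOLD_FLOP
    return high_impact or model.commission_designation
```

For example, a model trained with 2e25 FLOPs would be classified under point (a), while a smaller model could still be classified under point (b) via a Commission designation.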
    Classify virtual systems by type and purpose. CC ID 16332 Operational management Business Processes
    Establish, implement, and maintain a customer service program. CC ID 00846 Operational management Establish/Maintain Documentation
    Establish, implement, and maintain an Incident Management program. CC ID 00853 Operational management Business Processes
    Include incident monitoring procedures in the Incident Management program. CC ID 01207 Operational management Establish/Maintain Documentation
    Record actions taken by investigators during a forensic investigation in the forensic investigation report. CC ID 07095 Operational management Establish/Maintain Documentation
    Include corrective actions in the forensic investigation report. CC ID 17070
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Operational management Establish/Maintain Documentation
    Redact restricted data before sharing incident information. CC ID 16994 Operational management Data and Information Management
    Notify interested personnel and affected parties of an extortion payment in the event of a cybersecurity event. CC ID 16539 Operational management Communicate
    Notify interested personnel and affected parties of the reasons for the extortion payment, along with any alternative solutions. CC ID 16538 Operational management Communicate
    Document the justification for not reporting incidents to interested personnel and affected parties. CC ID 16547 Operational management Establish/Maintain Documentation
    Report to breach notification organizations the reasons for a delay in sending breach notifications. CC ID 16797 Operational management Communicate
    Report to breach notification organizations the distribution list to which the organization will send data loss event notifications. CC ID 16782 Operational management Communicate
    Document any potential harm in the incident finding when concluding the incident investigation. CC ID 13830 Operational management Establish/Maintain Documentation
    Destroy investigative materials, as necessary. CC ID 17082 Operational management Data and Information Management
    Log incidents in the Incident Management audit log. CC ID 00857
    [In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; Article 55 1.(c)]
    Operational management Establish/Maintain Documentation
    Include who the incident was reported to in the incident management audit log. CC ID 16487 Operational management Log Management
    Include the information that was exchanged in the incident management audit log. CC ID 16995 Operational management Log Management
    Include corrective actions in the incident management audit log. CC ID 16466 Operational management Establish/Maintain Documentation
    Establish, implement, and maintain incident reporting time frame standards. CC ID 12142
    [The report referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident. Article 73 2. ¶ 1
    {be no later than} Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident as defined in Article 3, point (49)(b), the report referred to in paragraph 1 of this Article shall be provided immediately, and not later than two days after the provider or, where applicable, the deployer becomes aware of that incident. Article 73 3.
    {be no later than} Notwithstanding paragraph 2, in the event of the death of a person, the report shall be provided immediately after the provider or the deployer has established, or as soon as it suspects, a causal relationship between the high-risk AI system and the serious incident, but not later than 10 days after the date on which the provider or, where applicable, the deployer becomes aware of the serious incident. Article 73 4.]
    Operational management Communicate
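The three reporting deadlines quoted above (15 days for serious incidents under Article 73(2), 2 days for widespread infringements under Article 73(3), 10 days in the event of a death under Article 73(4)) can be expressed as a small deadline calculator. The enum and function are hypothetical helpers for illustration; note that each paragraph also requires reporting "immediately", so these dates are outer bounds only.

```python
# Illustrative sketch of the Article 73 outer reporting deadlines.
# The enum and helper are hypothetical, not part of the Regulation.
from datetime import date, timedelta
from enum import Enum


class IncidentKind(Enum):
    SERIOUS = 15      # Article 73(2): not later than 15 days after awareness
    WIDESPREAD = 2    # Article 73(3): not later than 2 days
    DEATH = 10        # Article 73(4): not later than 10 days


def latest_report_date(awareness: date, kind: IncidentKind) -> date:
    """Latest permissible report date; reporting must in any event
    happen immediately once the causal link is established."""
    return awareness + timedelta(days=kind.value)
```

For example, `latest_report_date(date(2025, 1, 1), IncidentKind.WIDESPREAD)` yields 2025-01-03.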
    Establish, implement, and maintain an Incident Response program. CC ID 00579 Operational management Establish/Maintain Documentation
    Create an incident response report. CC ID 12700 Operational management Establish/Maintain Documentation
    Include a root cause analysis of the incident in the incident response report. CC ID 12701
    [Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment of the incident, and corrective action. Article 73 6. ¶ 1]
    Operational management Establish/Maintain Documentation
    Mitigate reported incidents. CC ID 12973
    [The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws. Article 15 5. ¶ 3]
    Operational management Actionable Reports or Measurements
    Establish, implement, and maintain a disability accessibility program. CC ID 06191
    [The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements. Article 50 5.]
    Operational management Establish/Maintain Documentation
    Separate foreground from background when designing and building content. CC ID 15125 Operational management Systems Design, Build, and Implementation
    Establish, implement, and maintain web content accessibility guidelines. CC ID 14949 Operational management Establish/Maintain Documentation
    Conduct web accessibility testing in accordance with organizational standards. CC ID 16950 Operational management Testing
    Configure focus order in a meaningful way. CC ID 15206 Operational management Configuration
    Configure keyboard interfaces to provide all the functionality that is available for the associated website content. CC ID 15151 Operational management Configuration
    Programmatically set the states, properties, and values of user interface components. CC ID 15150 Operational management Configuration
    Notify users of changes to user interface components. CC ID 15149 Operational management Configuration
    Refrain from designing content in a way that is known to cause seizures or physical reactions. CC ID 15203 Operational management Configuration
    Configure content to be compatible with various user agents and assistive technologies. CC ID 15147 Operational management Configuration
    Configure content to be interpreted by various user agents and assistive technologies. CC ID 15146 Operational management Configuration
    Provide captions for prerecorded audio content. CC ID 15204 Operational management Configuration
    Ensure user interface component names include the same text that is presented visually. CC ID 15145 Operational management Configuration
    Configure user interface components to operate device motion and user motion functionality. CC ID 15144 Operational management Configuration
    Configure single pointer functionality to organizational standards. CC ID 15143 Operational management Configuration
    Configure the keyboard operable user interface so the keyboard focus indicator is visible. CC ID 15142 Operational management Configuration
    Provide users with alternative methods to inputting data in online forms. CC ID 16951 Operational management Data and Information Management
    Provide users the ability to disable user motion and device motion. CC ID 15205 Operational management Configuration
    Refrain from duplicating attributes in website content using markup languages. CC ID 15141 Operational management Configuration
    Use unique identifiers when using markup languages. CC ID 15140 Operational management Configuration
    Programmatically determine the status messages to convey to users. CC ID 15139 Operational management Configuration
    Advise users on how to navigate content. CC ID 15138 Operational management Communicate
    Allow users the ability to move focus with the keyboard. CC ID 15136 Operational management Configuration
    Avoid using images of text to convey information. CC ID 15202 Operational management Configuration
    Allow users to pause, stop, or hide moving, blinking or scrolling information. CC ID 15135 Operational management Configuration
    Display website content without loss of information or functionality and without requiring scrolling in two dimensions. CC ID 15134 Operational management Configuration
    Use images of text to convey information, as necessary. CC ID 15132 Operational management Configuration
    Refrain from using color as the only visual means to distinguish content. CC ID 15130 Operational management Configuration
    Refrain from restricting content to a single display orientation. CC ID 15129 Operational management Configuration
    Use text to convey information on web pages, as necessary. CC ID 15128 Operational management Configuration
    Configure the contrast ratio to organizational standards. CC ID 15127 Operational management Configuration
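Organizational contrast standards are commonly expressed in terms of the WCAG 2.x contrast-ratio formula, which this control can be verified against. The helper below is an illustrative sketch of that formula, assuming 8-bit sRGB inputs; it is not mandated by the Regulation itself.

```python
# Illustrative WCAG 2.x contrast-ratio computation (assumed standard,
# not prescribed by the Regulation). Inputs are 8-bit sRGB tuples.
def _linearize(channel_8bit: int) -> float:
    # sRGB gamma expansion per the WCAG relative-luminance definition
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """Ratio between the lighter and darker luminance, from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)
```

Black text on a white background yields the maximum ratio of 21:1; WCAG level AA conventionally requires at least 4.5:1 for normal text.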
    Programmatically determine the correct reading sequence. CC ID 15126 Operational management Configuration
    Refrain from creating instructions for content that rely on sensory characteristics of components. CC ID 15124 Operational management Establish/Maintain Documentation
    Programmatically determine the information, structure, and relationships conveyed through the presentation. CC ID 15123 Operational management Configuration
    Provide audio descriptions for all prerecorded video content. CC ID 15122 Operational management Configuration
    Provide alternative forms of CAPTCHA, as necessary. CC ID 15121 Operational management Configuration
    Provide alternatives for time-based media. CC ID 15119 Operational management Configuration
    Configure non-text content to be ignored by assistive technology when it is pure decoration or not presented to users. CC ID 15118 Operational management Configuration
    Configure non-text content with a descriptive identification. CC ID 15117 Operational management Configuration
    Provide text alternatives for non-text content, as necessary. CC ID 15078 Operational management Configuration
    Implement functionality for a single pointer so an up-event reverses the outcome of a down-event. CC ID 15076 Operational management Configuration
    Implement functionality for a single pointer so the completion of a down-event is essential. CC ID 15075 Operational management Configuration
    Implement functionality to abort or undo the function when using a single pointer. CC ID 15074 Operational management Configuration
    Implement functionality for a single pointer so the up-event signals the completion of a function. CC ID 15073 Operational management Configuration
    Implement functionality for a single pointer so the down-event is not used to execute any part of a function. CC ID 15072 Operational management Configuration
    Allow users the ability to use various input devices. CC ID 15071 Operational management Configuration
    Implement mechanisms to allow users the ability to bypass repeated blocks of website content. CC ID 15068 Operational management Configuration
    Implement flashes below the general flash and red flash thresholds on web pages. CC ID 15067 Operational management Configuration
    Configure content to be presentable in a manner that is clear and conspicuous to all users. CC ID 15066
    [The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements. Article 50 5.]
    Operational management Configuration
    Configure non-text content that is a control or accepts user input with a name that describes its purpose. CC ID 15065 Operational management Configuration
    Allow users the ability to modify time limits in website content a defined number of times. CC ID 15064 Operational management Configuration
    Provide users with a simple method to extend the time limits set by content. CC ID 15063 Operational management Configuration
    Allow users the ability to disable time limits set by content. CC ID 15062 Operational management Configuration
    Warn users before time limits set by content are about to expire. CC ID 15061 Operational management Configuration
    Allow users the ability to modify time limits set by website or native applications. CC ID 15060 Operational management Configuration
    Provide users time to read and use website content, as necessary. CC ID 15059 Operational management Configuration
    Activate keyboard shortcuts on user interface components only when the appropriate component has focus. CC ID 15058 Operational management Configuration
    Provide users a mechanism to turn off keyboard shortcuts, as necessary. CC ID 15057 Operational management Configuration
    Configure all functionality to be accessible with a keyboard. CC ID 15056 Operational management Configuration
    Establish, implement, and maintain a registration database. CC ID 15048
    [The data listed in Sections A and B of Annex VIII shall be entered into the EU database by the provider or, where applicable, by the authorised representative. Article 71 2.
    The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or who acts on behalf of, a public authority, agency or body, in accordance with Article 49(3) and (4). Article 71 3.]
    Operational management Data and Information Management
    Implement access restrictions for information in the registration database. CC ID 17235
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Operational management Data and Information Management
    Include registration numbers in the registration database. CC ID 17272 Operational management Data and Information Management
    Include electronic signatures in the registration database. CC ID 17281 Operational management Data and Information Management
    Include other registrations in the registration database. CC ID 17274 Operational management Data and Information Management
    Include the owners and shareholders in the registration database. CC ID 17273 Operational management Data and Information Management
    Include contact details in the registration database. CC ID 15109
    [The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. Article 71 5.]
    Operational management Establish/Maintain Documentation
    Include personal data in the registration database, as necessary. CC ID 15108
    [The EU database shall contain personal data only in so far as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable. Article 71 5.]
    Operational management Establish/Maintain Documentation
    Publish the registration information in the registration database in an official language. CC ID 17280 Operational management Data and Information Management
    Make the registration database available to the public. CC ID 15107
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Operational management Communicate
    Maintain non-public information in a protected area in the registration database. CC ID 17237
    [For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable, referred to in: Article 49 4.]
    Operational management Data and Information Management
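The two registry controls above (a public, machine-readable section and a secure non-public section for Article 49(4) registrations) can be illustrated with a minimal sketch. Field names such as `system_name` and the `restricted` flag are assumptions for illustration; Annex VIII defines the actual required entries.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RegistrationRecord:
    # Field names are illustrative; Annex VIII lists the actual required entries.
    system_name: str
    provider: str
    contact_email: str
    restricted: bool  # e.g. law-enforcement systems under Article 49(4)

def export_public(records):
    """Return a machine-readable (JSON) export of the public section only.

    Records flagged as restricted (the secure non-public section) are excluded,
    mirroring the Article 71(4) carve-out for Article 49(4) registrations.
    """
    public = [asdict(r) for r in records if not r.restricted]
    for entry in public:
        entry.pop("restricted")  # internal flag, not part of the public view
    return json.dumps(public, indent=2, ensure_ascii=False)

records = [
    RegistrationRecord("triage-assist-v2", "Acme Health", "reg@acme.example", False),
    RegistrationRecord("border-screen-x", "GovTech", "ai@gov.example", True),
]
print(export_public(records))  # only the non-restricted record appears
```

A real EU database would of course enforce access control on the restricted section rather than merely filtering an export; the sketch only shows the public/non-public split in a machine-readable form.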
    Impose conditions or restrictions on the termination or suspension of a registration. CC ID 16796 Operational management Business Processes
    Publish the IP addresses being used by each external customer in the registration database. CC ID 16403 Operational management Data and Information Management
    Update registration information upon changes. CC ID 17275 Operational management Data and Information Management
    Maintain the accuracy of registry information published in registration databases. CC ID 16402
    [For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct. Article 22 3.(e)]
    Operational management Data and Information Management
    Maintain ease of use for information in the registration database. CC ID 17239
    [{be publicly available} {machine-readable format} {navigation} With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider has given consent for also making the information accessible to the public. Article 71 4.]
    Operational management Data and Information Management
    Include all required information in the registration database. CC ID 15106 Operational management Data and Information Management
    Establish, implement, and maintain an artificial intelligence system. CC ID 14943
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.]
    Operational management Systems Design, Build, and Implementation
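Article 50(2)'s duty to mark synthetic outputs in a machine-readable, detectable way can be sketched as a provenance envelope. This is a deliberately simplified illustration; the field names (`ai_generated`, `generator`) are assumptions, and real deployments would use an interoperable standard such as embedded content credentials rather than an ad-hoc JSON schema.

```python
import json
from datetime import datetime, timezone

def mark_synthetic(content: str, model_id: str) -> str:
    """Wrap generated content in a machine-readable provenance envelope."""
    envelope = {
        "content": content,
        "ai_generated": True,  # detectable, machine-readable flag
        "generator": model_id,  # illustrative field name
        "marked_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope, ensure_ascii=False)

def is_marked_synthetic(payload: str) -> bool:
    """Detect the marker; unmarked or non-JSON payloads return False."""
    try:
        return bool(json.loads(payload).get("ai_generated"))
    except (json.JSONDecodeError, AttributeError):
        return False
```

The detection side matters as much as the marking side: Article 50(2) requires that outputs be *detectable* as artificially generated, which is why the sketch pairs the marker with a checker.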
    Provide affected parties with an explanation of the role of artificial intelligence in decision making. CC ID 17236
    [Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken. Article 86 1.]
    Operational management Communicate
    Provide the reasons for adverse decisions made by artificial intelligence systems. CC ID 17253 Operational management Process or Activity
    Authorize artificial intelligence systems for use under defined conditions. CC ID 17210
    [{not be taken} {adverse effect} The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system. Article 5 3. ¶ 2]
    Operational management Process or Activity
    Refrain from notifying users when images, videos, or audio have been artificially generated or manipulated if use of the artificial intelligence system is authorized by law. CC ID 15051
    [Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Article 50 4. ¶ 1
    Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. Article 50 4. ¶ 2]
    Operational management Communicate
    Establish, implement, and maintain a post-market monitoring system. CC ID 15050
    [Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system. Article 72 1.
    The post-market monitoring system shall actively and systematically collect, document and analyse relevant data which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are law-enforcement authorities. Article 72 2.
    The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). Article 72 3.]
    Operational management Monitor and Evaluate Occurrences
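An Article 72-style post-market monitoring system can be sketched as a collector that accumulates performance observations from deployers and other sources over the system's lifetime and evaluates continued compliance. The accuracy metric and the 0.9 floor are assumptions for illustration only; the Regulation prescribes neither.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PostMarketMonitor:
    """Illustrative collector for post-market monitoring (Article 72 sketch)."""
    accuracy_floor: float = 0.9  # assumed threshold, not prescribed by the Act
    observations: list = field(default_factory=list)

    def record(self, source: str, accuracy: float) -> None:
        # Data "may be provided by deployers or collected through other sources".
        self.observations.append({"source": source, "accuracy": accuracy})

    def compliant(self) -> bool:
        """Evaluate continuous compliance against the assumed accuracy floor."""
        if not self.observations:
            return True  # nothing observed yet
        return mean(o["accuracy"] for o in self.observations) >= self.accuracy_floor
```

A real monitoring system would follow the Commission's post-market monitoring plan template (due by 2 February 2026) and feed into the technical documentation of Annex IV; the sketch only shows the collect-document-analyse loop.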
    Limit artificial intelligence system authorizations to the period during which conformity assessment procedures are being carried out. CC ID 15043
    [By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Operational management Business Processes
    Terminate authorizations for artificial intelligence systems when conformity assessment procedures are complete. CC ID 15042 Operational management Business Processes
    Authorize artificial intelligence systems to be put into service for exceptional reasons while conformity assessment procedures are being conducted. CC ID 15039
    [In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.]
    Operational management Business Processes
    Discard the outputs of the artificial intelligence system when authorizations are denied. CC ID 17225
    [In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.]
    Operational management Process or Activity
    Authorize artificial intelligence systems to be placed on the market for exceptional reasons while conformity assessment procedures are being conducted. CC ID 15037
    [By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay. Article 46 1.]
    Operational management Business Processes
    Prohibit artificial intelligence systems from being placed on the market when they are not in compliance with the requirements. CC ID 15029
    [Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect. Article 24 2.
    Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect. Article 23 2.]
    Operational management Acquisition/Sale of Assets or Services
    Ensure the artificial intelligence system performs at an acceptable level of accuracy, robustness, and cybersecurity. CC ID 15024
    [High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. Article 15 1.
    In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk shall: ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model. Article 55 1.(d)]
    Operational management Process or Activity
    Implement an acceptable level of accuracy, robustness, and cybersecurity in the development of artificial intelligence systems. CC ID 15022
    [High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle. Article 15 1.]
    Operational management Systems Design, Build, and Implementation
    Take into account the nature of the situation when determining the possibility of using ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement. CC ID 15020
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used; Article 5 2.(a)]
    Operational management Process or Activity
    Notify users when images, videos, or audio on the artificial intelligence system has been artificially generated or manipulated. CC ID 15019
    [Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences. Article 50 2.
    Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Article 50 4. ¶ 1
    Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. Article 50 4. ¶ 2]
    Operational management Communicate
    Refrain from notifying users of artificial intelligence systems that use biometric categorization for law enforcement, where permitted by law. CC ID 15017
    [{applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Operational management Communicate
    Use a remote biometric identification system under defined conditions. CC ID 15016
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    {post-remote biometric identification system} Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. Article 26 10. ¶ 1]
    Operational management Process or Activity
    Notify users when they are using an artificial intelligence system. CC ID 15015
    [Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply. Article 26 11.
    Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives. Article 26 7.
    {applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Operational management Communicate
    Receive prior authorization for the use of a remote biometric identification system. CC ID 15014
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    {post-remote biometric identification system} Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence. Article 26 10. ¶ 1]
    Operational management Business Processes
    Prohibit artificial intelligence systems that deploy subliminal techniques from being placed on the market. CC ID 15012 Operational management Acquisition/Sale of Assets or Services
    Prohibit artificial intelligence systems that use social scores for unfavorable treatment from being placed on the market. CC ID 15010 Operational management Acquisition/Sale of Assets or Services
    Prohibit artificial intelligence systems that evaluate or classify the trustworthiness of individuals from being placed on the market. CC ID 15008 Operational management Acquisition/Sale of Assets or Services
    Prohibit artificial intelligence systems that exploit vulnerabilities of a specific group of persons from being placed on the market. CC ID 15006 Operational management Acquisition/Sale of Assets or Services
    Refrain from making a decision based on system output unless verified by at least two natural persons. CC ID 15004
    [{not be taken} {adverse effect} The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system. Article 5 3. ¶ 2
    {human oversight} {not taken} For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority. Article 14 5. ¶ 1
    The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate. Article 14 5. ¶ 2
    {not be used} {not be taken} {adverse effect} In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems. Article 26 10. ¶ 3]
    Operational management Business Processes
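Article 14(5)'s four-eyes requirement, that no action be taken on a biometric identification unless separately verified by at least two competent natural persons, can be sketched as a gate over recorded verifications. The record structure and the `competent` flag are assumptions standing in for the competence, training and authority checks a real system would perform.

```python
def confirm_identification(match_id: str, verifications: list[dict]) -> bool:
    """Return True only if at least two *distinct* competent natural persons
    have separately confirmed the identification (Article 14(5) sketch)."""
    confirmed_by = {
        v["person"]
        for v in verifications
        if v["match_id"] == match_id and v["confirmed"] and v["competent"]
    }
    # A set of person identifiers ensures the same person counted twice
    # does not satisfy the two-person requirement.
    return len(confirmed_by) >= 2
```

Note the deduplication via a set: two confirmations from the same individual must not count as two verifiers, since the Regulation requires verification by at least two natural persons.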
    Establish, implement, and maintain human oversight over artificial intelligence systems. CC ID 15003
    [High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use. Article 14 1.
    Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section. Article 14 2.
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: Article 14 3.
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; Article 14 3.(a)
    {human oversight} The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures: measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer. Article 14 3.(b)
    Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support. Article 26 2.]
    Operational management Behavior
    Implement measures to enable personnel assigned to human oversight to intervene or interrupt the operation of the artificial intelligence system. CC ID 15093
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state. Article 14 4.(e)]
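The 'stop' control required by Article 14(4)(e) can be pictured as a software interlock. The following is a minimal illustrative sketch only, not an implementation prescribed by the Regulation; the class name, the doubling "inference" step, and the safe-state label are all hypothetical.

```python
import threading

class OverseenAISystem:
    """Hypothetical wrapper sketch: a natural person assigned to human
    oversight can interrupt the system via a 'stop' control that brings
    it to a defined safe state (cf. Article 14(4)(e))."""

    def __init__(self):
        self._stop = threading.Event()
        self.state = "running"

    def request_stop(self):
        # The overseer's 'stop button'.
        self._stop.set()

    def process(self, inputs):
        outputs = []
        for item in inputs:
            if self._stop.is_set():
                self.state = "halted_safe"  # come to a halt in a safe state
                break
            outputs.append(item * 2)        # placeholder inference step
        return outputs

system = OverseenAISystem()
system.request_stop()             # overseer intervenes before processing
print(system.process([1, 2, 3]))  # → []
print(system.state)               # → halted_safe
```

The design point the article implies is that the stop path must be checked inside the processing loop itself, so an interrupt takes effect mid-run rather than only between runs.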
    Operational management Process or Activity
    Implement measures to enable personnel assigned to human oversight to remain aware of the possible tendency to automatically rely or over-rely on outputs when making decisions (automation bias). CC ID 15091
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; Article 14 4.(b)]
    Operational management Human Resources Management
    Implement measures to enable personnel assigned to human oversight to interpret output correctly. CC ID 15089
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available; Article 14 4.(c)]
    Operational management Data and Information Management
    Implement measures to enable personnel assigned to human oversight to decide to refrain from using the artificial intelligence system or to disregard, override, or reverse the output. CC ID 15079
    [For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system; Article 14 4.(d)]
    Operational management Behavior
    Enable users to interpret the artificial intelligence system's output and use it appropriately. CC ID 15002
    [{is transparent} High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Section 3. Article 13 1.]
    Operational management Business Processes
    Develop artificial intelligence systems involving the training of models with data sets that meet the quality criteria. CC ID 14996
    [{training data} {validation data} {testing data} High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used. Article 10 1.]
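Article 10(1) presupposes three distinct data sets. As a minimal sketch only: the function below partitions records into training, validation and testing sets, with a trivial "no missing values" gate standing in for the substantive quality criteria of Article 10(2) to (5). The function name, split ratios, and quality check are assumptions for illustration, not requirements of the Regulation.

```python
import random

def split_dataset(records, seed=0, train=0.8, val=0.1):
    """Partition records into training, validation and testing sets
    after a placeholder quality gate (drop rows with missing values)."""
    clean = [r for r in records if None not in r]  # placeholder quality check
    rng = random.Random(seed)                      # deterministic shuffle
    shuffled = clean[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

data = [(i, i % 2) for i in range(100)] + [(None, 0)]  # one defective row
train_set, val_set, test_set = split_dataset(data)
print(len(train_set), len(val_set), len(test_set))  # → 80 10 10
```

Keeping the three sets disjoint, as here, is what allows the testing set to serve as an independent check on the trained model.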
    Operational management Systems Design, Build, and Implementation
    Withdraw the technical documentation assessment certificate when the artificial intelligence system is not in compliance with requirements. CC ID 15099
    [Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision. Article 44 3. ¶ 1]
    Operational management Establish/Maintain Documentation
    Reassess the designation of artificial intelligence systems. CC ID 17230
    [Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers may request reassessment at the earliest six months after the designation decision. Where the Commission, following its reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request reassessment at the earliest six months after that decision. Article 52 5.]
    Operational management Process or Activity
    Define a high-risk artificial intelligence system. CC ID 14959
    [{high-risk artificial intelligence system} Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I; Article 6 1.(a)
    {high-risk artificial intelligence system} Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled: the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I. Article 6 1.(b)
    {high-risk artificial intelligence system} By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. Article 6 3. ¶ 1
    {not be considered a high-risk artificial intelligence system} {assigned task} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to perform a narrow procedural task; Article 6 3. ¶ 2 (a)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to improve the result of a previously completed human activity; Article 6 3. ¶ 2 (b)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or Article 6 3. ¶ 2 (c)
    {not be considered a high-risk artificial intelligence system} The first subparagraph shall apply where any of the following conditions is fulfilled: the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III. Article 6 3. ¶ 2 (d)
    {high-risk artificial intelligence system} Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons. Article 6 3. ¶ 3]
    Operational management Establish/Maintain Documentation
    Take into account the consequences for the rights and freedoms of persons when using ‘real-time’ remote biometric identification systems for law enforcement. CC ID 14957
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences. Article 5 2.(b)]
    Operational management Process or Activity
    Allow the use of 'real-time' remote biometric identification systems for law enforcement under defined conditions. CC ID 14955
    [The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements: Article 5 2.]
    Operational management Process or Activity
    Document the use of remote biometric identification systems. CC ID 17215
    [{post-remote biometric identification system} Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities. Article 26 10. ¶ 5]
    Operational management Business Processes
    Notify interested personnel and affected parties of the use of remote biometric identification systems. CC ID 17216
    [{post-remote biometric identification system} Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities. Article 26 10. ¶ 5]
    Operational management Communicate
    Refrain from using remote biometric identification systems under defined conditions. CC ID 14953
    [The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: Article 5 1.(h)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; Article 5 1.(h)(i)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; Article 5 1.(h)(ii)
    {is necessary} The following AI practices shall be prohibited: the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. Article 5 1.(h)(iii)
    For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted. Article 26 10. ¶ 2
    {not be used} {not be taken} {adverse effect} In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems. Article 26 10. ¶ 3]
    Operational management Process or Activity
    Prohibit the use of artificial intelligence systems under defined conditions. CC ID 14951
    [The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm; Article 5 1.(a)
    The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm; Article 5 1.(b)
    The following AI practices shall be prohibited: the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: Article 5 1.(c)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity; Article 5 1.(d)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; Article 5 1.(e)
    The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons; Article 5 1.(f)
    {religious beliefs} The following AI practices shall be prohibited: the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or categorizing of biometric data in the area of law enforcement; Article 5 1.(g)
    In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded. Article 46 2.
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions. Article 60 7.]
    Operational management Process or Activity
    Establish, implement, and maintain a declaration of conformity. CC ID 15038
    [Providers of high-risk AI systems shall: draw up an EU declaration of conformity in accordance with Article 47; Article 16 ¶ 1 (g)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates. Article 47 3.
    {keep up to date} By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate. Article 47 4.]
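Article 47(1) allows the EU declaration of conformity to be machine readable. As a hedged sketch of what such a record might look like: the JSON field names below loosely follow the content headings of Annex V but are this sketch's own invention; the exact required content is defined by the Regulation, and the system name, provider, and dates are fictitious placeholders.

```python
import json

# Hypothetical field layout loosely following Annex V; not an official schema.
declaration = {
    "ai_system": {"name": "ExampleRiskScorer", "version": "1.4.2"},
    "provider": {"name": "Example Provider Ltd", "address": "Example Street 1"},
    "statement": ("This EU declaration of conformity is issued under the sole "
                  "responsibility of the provider; the high-risk AI system "
                  "meets the requirements set out in Section 2."),
    "harmonised_standards": [],          # references, where applicable
    "notified_body": None,               # name, ID, certificate, if involved
    "place_and_date_of_issue": {"place": "Brussels", "date": "2025-01-01"},
}

# Serialise for retention at the disposal of national competent authorities.
record = json.dumps(declaration, indent=2)
print(record.splitlines()[0])  # → {
```

A machine-readable record like this is straightforward to keep current (Article 47(4)) and to submit on request, since it can be regenerated and diffed alongside the technical documentation.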
    Operational management Establish/Maintain Documentation
    Include compliance requirements in the declaration of conformity. CC ID 15105
    [Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates. Article 47 3.]
    Operational management Establish/Maintain Documentation
    Translate the declaration of conformity into an official language. CC ID 15103
    [The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.
    Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned. Article 21 1.]
    Operational management Establish/Maintain Documentation
    Disseminate and communicate the declaration of conformity to interested personnel and affected parties. CC ID 15102
    [Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system shall provide that authority with all the information and documentation regarding their actions pursuant to paragraphs 1 to 4 necessary to demonstrate the conformity of that system with the requirements set out in Section 2. Article 24 5.
    Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities. Article 23 6.
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official languages of the institutions of the Union as indicated by the Member State concerned. Article 21 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider; Article 22 3.(c)]
    Operational management Communicate
    Include all required information in the declaration of conformity. CC ID 15101
    [The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.]
    Operational management Establish/Maintain Documentation
    Include a statement that the artificial intelligence system meets all requirements in the declaration of conformity. CC ID 15100
    [Providers of high-risk AI systems shall: upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2; Article 16 ¶ 1 (k)
    The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available. Article 47 2.]
    Operational management Establish/Maintain Documentation
    Identify the artificial intelligence system in the declaration of conformity. CC ID 15098
    [The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.]
    Operational management Establish/Maintain Documentation
    Establish, implement, and maintain a Configuration Management program. CC ID 00867
    System hardening through configuration management Establish/Maintain Documentation
    Establish, implement, and maintain configuration control and Configuration Status Accounting. CC ID 00863
    System hardening through configuration management Business Processes
    Establish, implement, and maintain appropriate system labeling. CC ID 01900
    [Providers of high-risk AI systems shall: indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable, their name, registered trade name or registered trade mark, the address at which they can be contacted; Article 16 ¶ 1 (b)
    Providers of high-risk AI systems shall: affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48; Article 16 ¶ 1 (h)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use; Article 23 1.(c)
    Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable. Article 23 3.
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    {digital form} For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means. Article 48 2.
    The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate. Article 48 3.
    Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.
    Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfil the requirements of that other law. Article 48 5.]
    System hardening through configuration management Establish/Maintain Documentation
    Include the identification number of the third party who performed the conformity assessment procedures on all promotional materials. CC ID 15041
    [Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.]
    System hardening through configuration management Establish/Maintain Documentation
    Include the identification number of the third party who conducted the conformity assessment procedures after the CE marking of conformity. CC ID 15040
    [Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking. Article 48 4.]
    System hardening through configuration management Establish/Maintain Documentation
    Establish, implement, and maintain system hardening procedures. CC ID 12001 System hardening through configuration management Establish/Maintain Documentation
    Configure the system security parameters to prevent system misuse or information misappropriation. CC ID 00881
    [High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. Article 15 5. ¶ 1]
    System hardening through configuration management Configuration
    Configure Hypertext Transfer Protocol headers in accordance with organizational standards. CC ID 16851 System hardening through configuration management Configuration
    Configure Hypertext Transfer Protocol security headers in accordance with organizational standards. CC ID 16488 System hardening through configuration management Configuration
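The two controls above call for HTTP headers to match organizational standards. A minimal sketch of how such a standard might be checked, assuming an illustrative set of required security headers (the header names and values below are examples of an organizational baseline, not values mandated by any control text above):

```python
# Illustrative organizational baseline of required HTTP security headers.
# The specific headers and values are assumptions for this sketch.
REQUIRED_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "Content-Security-Policy": "default-src 'self'",
}

def missing_security_headers(response_headers: dict) -> list:
    """Return names of required security headers absent from a response,
    comparing header names case-insensitively per HTTP semantics."""
    present = {name.lower() for name in response_headers}
    return [name for name in REQUIRED_HEADERS if name.lower() not in present]
```

A compliance scan could run such a check against each deployed endpoint and flag any non-empty result for remediation.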
    Configure "Enable Structured Exception Handling Overwrite Protection (SEHOP)" to organizational standards. CC ID 15385 System hardening through configuration management Configuration
    Configure Microsoft Attack Surface Reduction rules in accordance with organizational standards. CC ID 16478 System hardening through configuration management Configuration
    Configure "Remote host allows delegation of non-exportable credentials" to organizational standards. CC ID 15379 System hardening through configuration management Configuration
    Configure "Configure enhanced anti-spoofing" to organizational standards. CC ID 15376 System hardening through configuration management Configuration
    Configure "Block user from showing account details on sign-in" to organizational standards. CC ID 15374 System hardening through configuration management Configuration
    Configure "Configure Attack Surface Reduction rules" to organizational standards. CC ID 15370 System hardening through configuration management Configuration
    Configure "Turn on e-mail scanning" to organizational standards. CC ID 15361 System hardening through configuration management Configuration
    Configure "Prevent users and apps from accessing dangerous websites" to organizational standards. CC ID 15359 System hardening through configuration management Configuration
    Configure "Enumeration policy for external devices incompatible with Kernel DMA Protection" to organizational standards. CC ID 15352 System hardening through configuration management Configuration
    Configure "Prevent Internet Explorer security prompt for Windows Installer scripts" to organizational standards. CC ID 15351 System hardening through configuration management Configuration
    Store state information from applications and software separately. CC ID 14767 System hardening through configuration management Configuration
    Configure the "aufs storage" to organizational standards. CC ID 14461 System hardening through configuration management Configuration
    Configure the "AppArmor Profile" to organizational standards. CC ID 14496 System hardening through configuration management Configuration
    Configure the "device" argument to organizational standards. CC ID 14536 System hardening through configuration management Configuration
    Configure the "Docker" group ownership to organizational standards. CC ID 14495 System hardening through configuration management Configuration
    Configure the "Docker" user ownership to organizational standards. CC ID 14505 System hardening through configuration management Configuration
    Configure "Allow upload of User Activities" to organizational standards. CC ID 15338 System hardening through configuration management Configuration
    Configure the "ulimit" to organizational standards. CC ID 14499 System hardening through configuration management Configuration
    Configure the computer-wide, rather than per-user, use of Microsoft Spynet Reporting for Windows Defender properly. CC ID 05282 System hardening through configuration management Configuration
    Configure the "Turn off Help Ratings" setting. CC ID 05285 System hardening through configuration management Configuration
    Configure the "Decoy Admin Account Not Disabled" policy properly. CC ID 05286 System hardening through configuration management Configuration
    Configure the "Anonymous access to the registry" policy properly. CC ID 05288 System hardening through configuration management Configuration
    Configure the File System Checker and Popups setting. CC ID 05289 System hardening through configuration management Configuration
    Configure the System File Checker setting. CC ID 05290 System hardening through configuration management Configuration
    Configure the System File Checker Progress Meter setting. CC ID 05291 System hardening through configuration management Configuration
    Configure the Protect Kernel object attributes properly. CC ID 05292 System hardening through configuration management Configuration
    Verify crontab files are owned by an appropriate user or group. CC ID 05305 System hardening through configuration management Configuration
    Restrict the exporting of files and directories, as necessary. CC ID 16315 System hardening through configuration management Technical Security
    Verify the /etc/syslog.conf file is owned by an appropriate user or group. CC ID 05322 System hardening through configuration management Configuration
    Verify the traceroute executable is owned by an appropriate user or group. CC ID 05323 System hardening through configuration management Configuration
    Verify the /etc/passwd file is owned by an appropriate user or group. CC ID 05325 System hardening through configuration management Configuration
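The ownership-verification controls above (crontab files, /etc/syslog.conf, the traceroute executable, /etc/passwd) can be automated with a small check. A minimal sketch on Unix-like systems, assuming the expected owner and group are supplied by the organization's hardening standard:

```python
import grp
import os
import pwd

def file_owned_by(path: str, user: str, group: str) -> bool:
    """Check that a file is owned by the expected user and group
    (e.g. root:root for /etc/passwd under a typical hardening baseline)."""
    st = os.stat(path)
    return (pwd.getpwuid(st.st_uid).pw_name == user
            and grp.getgrgid(st.st_gid).gr_name == group)
```

Each "Verify ... is owned by an appropriate user or group" control then reduces to one call per path, with the expected principals drawn from the applicable configuration standard.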
    Configure the "Prohibit Access of the Windows Connect Now Wizards" setting. CC ID 05380 System hardening through configuration management Configuration
    Configure the "Allow remote access to the PnP interface" setting. CC ID 05381 System hardening through configuration management Configuration
    Configure the "Do not create system restore point when new device driver installed" setting. CC ID 05382 System hardening through configuration management Configuration
    Configure the "Turn Off Access to All Windows Update Feature" setting. CC ID 05383 System hardening through configuration management Configuration
    Configure the "Turn Off Automatic Root Certificates Update" setting. CC ID 05384 System hardening through configuration management Configuration
    Configure the "Turn Off Event Views 'Events.asp' Links" setting. CC ID 05385 System hardening through configuration management Configuration
    Configure the "Turn Off Internet File Association Service" setting. CC ID 05389 System hardening through configuration management Configuration
    Configure the "Turn Off Registration if URL Connection is Referring to Microsoft.com" setting. CC ID 05390 System hardening through configuration management Configuration
    Configure the "Turn off the 'Order Prints' Picture task" setting. CC ID 05391 System hardening through configuration management Configuration
    Configure the "Turn Off Windows Movie Maker Online Web Links" setting. CC ID 05392 System hardening through configuration management Configuration
    Configure the "Turn Off Windows Movie Maker Saving to Online Video Hosting Provider" setting. CC ID 05393 System hardening through configuration management Configuration
    Configure the "Don't Display the Getting Started Welcome Screen at Logon" setting. CC ID 05394 System hardening through configuration management Configuration
    Configure the "Turn off Windows Startup Sound" setting. CC ID 05395 System hardening through configuration management Configuration
    Configure the "Prevent IIS Installation" setting. CC ID 05398 System hardening through configuration management Configuration
    Configure the "Turn off Active Help" setting. CC ID 05399 System hardening through configuration management Configuration
    Configure the "Turn off Untrusted Content" setting. CC ID 05400 System hardening through configuration management Configuration
    Configure the "Turn off downloading of enclosures" setting. CC ID 05401 System hardening through configuration management Configuration
    Configure "Allow indexing of encrypted files" to organizational standards. CC ID 05402 System hardening through configuration management Configuration
    Configure the "Prevent indexing uncached Exchange folders" setting. CC ID 05403 System hardening through configuration management Configuration
    Configure the "Turn off Windows Calendar" setting. CC ID 05404 System hardening through configuration management Configuration
    Configure the "Turn off Windows Defender" setting. CC ID 05405 System hardening through configuration management Configuration
    Configure the "Turn off the communication features" setting. CC ID 05410 System hardening through configuration management Configuration
    Configure the "Turn off Windows Meeting Space" setting. CC ID 05413 System hardening through configuration management Configuration
    Configure the "Turn on Windows Meeting Space auditing" setting. CC ID 05414 System hardening through configuration management Configuration
    Configure the "Disable unpacking and installation of gadgets that are not digitally signed" setting. CC ID 05415 System hardening through configuration management Configuration
    Configure the "Override the More Gadgets Link" setting. CC ID 05416 System hardening through configuration management Configuration
    Configure the "Turn Off User Installed Windows Sidebar Gadgets" setting. CC ID 05417 System hardening through configuration management Configuration
    Configure the "Turn off Downloading of Game Information" setting. CC ID 05419 System hardening through configuration management Configuration
    Set the noexec_user_stack flag on the user stack properly. CC ID 05439 System hardening through configuration management Configuration
    Configure the "restrict guest access to system log" policy, as appropriate. CC ID 06047 System hardening through configuration management Configuration
    Configure the Trusted Platform Module (TPM) platform validation profile, as appropriate. CC ID 06056 System hardening through configuration management Configuration
    Enable or disable the standby states, as appropriate. CC ID 06060 System hardening through configuration management Configuration
    Configure the Trusted Platform Module startup options properly. CC ID 06061 System hardening through configuration management Configuration
    Configure the "Obtain Software Package Updates with apt-get" setting to organizational standards. CC ID 11375 System hardening through configuration management Configuration
    Configure the "display a banner before authentication" setting for "LightDM" to organizational standards. CC ID 11385 System hardening through configuration management Configuration
    Configure Logging settings in accordance with organizational standards. CC ID 07611 System hardening through configuration management Configuration
    Provide the reference database used to verify input data in the logging capability. CC ID 15018
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the reference database against which input data has been checked by the system; Article 12 3.(b)]
    System hardening through configuration management Log Management
    Configure the log to capture the user's identification. CC ID 01334
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5). Article 12 3.(d)]
    System hardening through configuration management Configuration
    Configure the log to capture a date and time stamp. CC ID 01336
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: recording of the period of each use of the system (start date and time and end date and time of each use); Article 12 3.(a)]
    System hardening through configuration management Configuration
    Configure all logs to capture auditable events or actionable events. CC ID 06332 System hardening through configuration management Configuration
    Configure the log to capture user queries and searches. CC ID 16479
    [For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum: the input data for which the search has led to a match; Article 12 3.(c)]
    System hardening through configuration management Log Management
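The logging controls above collectively mirror the minimum fields Article 12(3) lists for Annex III point 1(a) systems: the period of each use, the reference database checked, the input data that led to a match, and the natural persons who verified the results. A minimal sketch of a log record carrying those fields (the field names and JSON encoding are illustrative assumptions, not prescribed by the Regulation):

```python
import json
from datetime import datetime

def make_use_log_entry(start: datetime, end: datetime,
                       reference_db: str, matched_input: str,
                       verifier_ids: list) -> str:
    """Assemble one use-log record with the minimum content of
    Article 12(3): period of use (start/end), the reference database
    against which input data was checked, the input data for which the
    search led to a match, and the identification of the natural persons
    who verified the results. Field names here are illustrative."""
    return json.dumps({
        "use_start": start.isoformat(),
        "use_end": end.isoformat(),
        "reference_database": reference_db,
        "matched_input": matched_input,
        "result_verified_by": verifier_ids,
    })
```

In practice such records would be appended to a tamper-evident log store so that each field remains auditable over the retention period.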
    Establish, implement, and maintain an information management program. CC ID 14315 Records management Establish/Maintain Documentation
    Archive appropriate records, logs, and database tables. CC ID 06321
    [Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data. Article 19 1.
    Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation kept under the relevant financial services law. Article 19 2.]
    Records management Records Management
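Article 19(1), cited above, requires automatically generated logs to be kept "at least six months" unless Union or national law provides otherwise. A minimal sketch of a purge-eligibility check under that floor (the 183-day figure is an illustrative reading of "six months", not a number from the Regulation, and any longer period required by other law would override it):

```python
from datetime import date, timedelta

# Illustrative reading of the Article 19(1) six-month minimum.
MIN_LOG_RETENTION = timedelta(days=183)

def may_purge(log_created: date, today: date) -> bool:
    """A log record may be purged only once the minimum retention
    period has elapsed since it was created."""
    return today - log_created >= MIN_LOG_RETENTION
```

A retention job would evaluate this check per record and retain anything still inside the window, alongside any stricter requirement from applicable financial-services or data-protection law.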
    Retain records in accordance with applicable requirements. CC ID 00968
    [The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the technical documentation referred to in Article 11; Article 18 1.(a)
    Providers of high-risk AI systems shall: keep the documentation referred to in Article 18; Article 16 ¶ 1 (d)
    Providers of high-risk AI systems shall: when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19; Article 16 ¶ 1 (e)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the documentation concerning the quality management system referred to in Article 17; Article 18 1.(b)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the documentation concerning the changes approved by notified bodies, where applicable; Article 18 1.(c)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the decisions and other documents issued by the notified bodies, where applicable; Article 18 1.(d)
    The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: the EU declaration of conformity referred to in Article 47. Article 18 1.(e)
    Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal data. Article 19 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: keep at the disposal of the competent authorities and national authorities or bodies referred to in Article 74(10), for a period of 10 years after the high-risk AI system has been placed on the market or put into service, the contact details of the provider that appointed the authorised representative, a copy of the EU declaration of conformity referred to in Article 47, the technical documentation and, if applicable, the certificate issued by the notified body; Article 22 3.(b)
    Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU declaration of conformity referred to in Article 47. Article 23 5.
    The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. Article 47 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and national competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, and the contact details of the provider that appointed the authorised representative; Article 54 3.(b)
    Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data. Article 26 6. ¶ 1]
    Records management Records Management
    Capture and maintain logs as official records. CC ID 06319
    [Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law. Article 26 6. ¶ 2]
    Records management Log Management
    Establish and maintain access controls for all records. CC ID 00371
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations; Article 10 5.(c)
    {training data} {validation data} {testing data} Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other relevant technical means and tools enabling remote access. Article 74 12.]
    Records management Records Management
    Initiate the System Development Life Cycle planning phase. CC ID 06266 Systems design, build, and implementation Systems Design, Build, and Implementation
    Establish, implement, and maintain system design requirements. CC ID 06618 Systems design, build, and implementation Establish/Maintain Documentation
    Design and develop built-in redundancies, as necessary. CC ID 13064
    [{be resilient} {technical measures} High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. Article 15 4. ¶ 1]
    Systems design, build, and implementation Systems Design, Build, and Implementation
    Initiate the System Development Life Cycle development phase or System Development Life Cycle build phase. CC ID 06267 Systems design, build, and implementation Systems Design, Build, and Implementation
    Establish, implement, and maintain human interface guidelines. CC ID 08662
    [Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence. Article 50 1.]
    Systems design, build, and implementation Establish/Maintain Documentation
    Ensure users can navigate content. CC ID 15163 Systems design, build, and implementation Configuration
    Create text content using language that is readable and is understandable. CC ID 15167 Systems design, build, and implementation Configuration
    Ensure user interface components are operable. CC ID 15162 Systems design, build, and implementation Configuration
    Implement mechanisms to review, confirm, and correct user submissions. CC ID 15160 Systems design, build, and implementation Configuration
    Allow users to reverse submissions. CC ID 15168 Systems design, build, and implementation Configuration
    Provide a mechanism to control audio. CC ID 15158 Systems design, build, and implementation Configuration
    Allow modification of style properties without loss of content or functionality. CC ID 15156 Systems design, build, and implementation Configuration
    Programmatically determine the name and role of user interface components. CC ID 15148 Systems design, build, and implementation Configuration
    Programmatically determine the language of content. CC ID 15137 Systems design, build, and implementation Configuration
    Provide a mechanism to dismiss content triggered by mouseover or keyboard focus. CC ID 15164 Systems design, build, and implementation Configuration
    Configure repeated navigational mechanisms to occur in the same order unless overridden by the user. CC ID 15166 Systems design, build, and implementation Configuration
    Refrain from activating a change of context when changing the setting of user interface components, as necessary. CC ID 15165 Systems design, build, and implementation Configuration
    Provide users a mechanism to remap keyboard shortcuts. CC ID 15133 Systems design, build, and implementation Configuration
    Identify the components in a set of web pages that consistently have the same functionality. CC ID 15116 Systems design, build, and implementation Process or Activity
    Provide captions for live audio content. CC ID 15120 Systems design, build, and implementation Configuration
    Programmatically determine the purpose of each data field that collects information from the user. CC ID 15114 Systems design, build, and implementation Configuration
    Provide labels or instructions when content requires user input. CC ID 15077 Systems design, build, and implementation Configuration
    Allow users to control auto-updating information, as necessary. CC ID 15159 Systems design, build, and implementation Configuration
    Use headings on all web pages and labels in all content that describes the topic or purpose. CC ID 15070 Systems design, build, and implementation Configuration
    Display website content triggered by mouseover or keyboard focus. CC ID 15152 Systems design, build, and implementation Configuration
    Ensure the purpose of links can be determined through the link text. CC ID 15157 Systems design, build, and implementation Configuration
    Use a unique title that describes the topic or purpose for each web page. CC ID 15069 Systems design, build, and implementation Configuration
    Allow the use of time limits, as necessary. CC ID 15155 Systems design, build, and implementation Configuration
    Include mechanisms for changing authenticators in human interface guidelines. CC ID 14944 Systems design, build, and implementation Establish/Maintain Documentation
    Refrain from activating a change of context in a user interface component. CC ID 15115 Systems design, build, and implementation Configuration
    Include functionality for managing user data in human interface guidelines. CC ID 14928 Systems design, build, and implementation Establish/Maintain Documentation
    Establish, implement, and maintain sandboxes. CC ID 14946 Systems design, build, and implementation Testing
    Allow personal data collected for other purposes to be used to develop and test artificial intelligence systems in regulatory sandboxes under defined conditions. CC ID 15044
    [In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: public safety and public health, including disease detection, diagnosis, prevention, control and treatment and improvement of health care systems; Article 59 1.(a)(i)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection against pollution, green transition measures, climate change mitigation and adaptation measures; Article 59 1.(a)(ii)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: energy sustainability; Article 59 1.(a)(iii)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: safety and resilience of transport systems and mobility, critical infrastructure and networks; Article 59 1.(a)(iv)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or legal person and in one or more of the following areas: efficiency and quality of public administration and public services; Article 59 1.(a)(v)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other non-personal data; Article 59 1.(b)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing; Article 59 1.(c)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the prospective provider and only authorised persons have access to those data; Article 59 1.(d)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: providers can further share the originally collected data only in accordance with Union data protection law; any personal data created in the sandbox cannot be shared outside the sandbox; Article 59 1.(e)
    {do not lead} {do not affect} In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data; Article 59 1.(f)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox, unless provided otherwise by Union or national law; Article 59 1.(h)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV; Article 59 1.(i)
    In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities. Article 59 1.(j)
    {technical measures} In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met: any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period; Article 59 1.(g)]
    Systems design, build, and implementation Systems Design, Build, and Implementation
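The Article 59(1) quotes above share one gating structure: sandbox processing of repurposed personal data is permitted only when all of points (a) through (j) are met. The sketch below encodes that conjunction; the condition keys paraphrase the enumerated points and are illustrative names, not terms defined by the Regulation.

```python
# Hedged sketch of the Article 59(1) gate: every enumerated condition must
# hold before personal data lawfully collected for other purposes may be
# processed in the AI regulatory sandbox. Keys are paraphrases, not legal text.
ARTICLE_59_1_CONDITIONS = (
    "substantial_public_interest_area",    # (a) public safety/health, environment, energy, transport, public services
    "personal_data_necessary",             # (b) anonymised/synthetic data would not suffice
    "monitoring_and_response_mechanisms",  # (c) detect and mitigate high risks, stop processing if needed
    "isolated_protected_environment",      # (d) functionally separate, access limited to authorised persons
    "sharing_restricted",                  # (e) sandbox-created personal data not shared outside the sandbox
    "no_effect_on_data_subjects",          # (f) no measures or decisions affecting data subjects
    "technical_organisational_measures",   # (g) protection, and deletion after participation ends
    "processing_logs_kept",                # (h) logs kept for the duration of participation
    "process_description_kept",            # (i) training/testing/validation rationale documented
    "project_summary_published",           # (j) summary on the competent authorities' website
)

def sandbox_processing_permitted(conditions: dict) -> bool:
    """True only if every Article 59(1) condition is met; a missing key counts as unmet."""
    return all(conditions.get(key, False) for key in ARTICLE_59_1_CONDITIONS)
```

Because the Regulation requires "all of the following conditions", a single unmet or undocumented condition makes processing impermissible, which is why missing keys default to `False`.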
    Initiate the System Development Life Cycle implementation phase. CC ID 06268 Systems design, build, and implementation Systems Design, Build, and Implementation
    Submit the information system's security authorization package to the appropriate stakeholders, as necessary. CC ID 13987
    [The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities. Article 46 3.]
    Systems design, build, and implementation Establish/Maintain Documentation
    Establish and maintain technical documentation. CC ID 15005
    [The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. Article 11 1. ¶ 1
    Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the technical documentation as part of the documentation kept under the relevant Union financial services law. Article 18 3.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider; Article 22 3.(a)
    Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV; Article 23 1.(b)
    Providers of general-purpose AI models shall: draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities; Article 53 1.(a)
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: verify that the technical documentation specified in Annex XI has been drawn up and all obligations referred to in Article 53 and, where applicable, Article 55 have been fulfilled by the provider; Article 54 3.(a)
    Providers of general-purpose AI models shall: draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. Article 53 1.(d)
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: Article 53 1.(b)]
    Systems design, build, and implementation Establish/Maintain Documentation
    Retain technical documentation on the premises where the artificial intelligence system is located. CC ID 15104 Systems design, build, and implementation Establish/Maintain Documentation
    Include the risk mitigation measures in the technical documentation. CC ID 17246 Systems design, build, and implementation Establish/Maintain Documentation
    Include the intended outputs of the system in the technical documentation. CC ID 17245 Systems design, build, and implementation Establish/Maintain Documentation
    Include the limitations of the system in the technical documentation. CC ID 17242 Systems design, build, and implementation Establish/Maintain Documentation
    Include the types of data used to train the artificial intelligence system in the technical documentation. CC ID 17241 Systems design, build, and implementation Establish/Maintain Documentation
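The four "Include ... in the technical documentation" controls above (CC IDs 17246, 17245, 17242, 17241) can be read as a minimum record structure. The sketch below is illustrative only — field names are assumptions, and Annex IV of the Regulation defines the actual minimum content.

```python
# Illustrative record collecting the documentation elements named by the
# controls above. Not a substitute for the Annex IV element list.
from dataclasses import dataclass
from typing import List

@dataclass
class TechnicalDocumentation:
    risk_mitigation_measures: List[str]  # CC ID 17246
    intended_outputs: List[str]          # CC ID 17245
    limitations: List[str]               # CC ID 17242
    training_data_types: List[str]       # CC ID 17241

    def missing_elements(self) -> List[str]:
        """Name any element still empty, to flag documentation kept incomplete."""
        return [name for name, value in vars(self).items() if not value]
```

A completeness check like `missing_elements()` fits the requirement that documentation be drawn up before placing on the market and kept up to date: empty sections are surfaced rather than silently shipped.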
    Include all required information in the technical documentation. CC ID 15094
    [The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment. Article 11 1. ¶ 2
    Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up containing all the information set out in paragraph 1, as well as the information required under those legal acts. Article 11 2.
    Providers of general-purpose AI models shall: draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of providing it, upon request, to the AI Office and the national competent authorities; Article 53 1.(a)
    The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2). Article 72 3.
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and Article 53 1.(b)(i)
    Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: contain, at a minimum, the elements set out in Annex XII; Article 53 1.(b)(ii)]
    Systems design, build, and implementation Establish/Maintain Documentation
    Include information that demonstrates compliance with requirements in the technical documentation. CC ID 15088
    [The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment. Article 11 1. ¶ 2]
    Systems design, build, and implementation Establish/Maintain Documentation
    Disseminate and communicate technical documentation to interested personnel and affected parties. CC ID 17229
    [Providers of general-purpose AI models shall: draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information and documentation shall: Article 53 1.(b)
    Providers of general-purpose AI models shall: draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office. Article 53 1.(d)]
    Systems design, build, and implementation Communicate
    Establish, implement, and maintain system acquisition contracts. CC ID 14758 Acquisition or sale of facilities, technology, and services Establish/Maintain Documentation
    Disseminate and communicate the system documentation to interested personnel and affected parties. CC ID 14285
    [Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that: the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use; Article 23 1.(c)
    Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3). Article 24 1.
    Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities. Article 23 6.]
    Acquisition or sale of facilities, technology, and services Communicate
    Register new systems with the program office or other applicable stakeholder. CC ID 13986
    [In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay. Article 5 2. ¶ 1
    Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 71. Article 49 1.
    Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71. Article 49 2.
    Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies, offices or agencies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 71. Article 49 3.
    Acquisition or sale of facilities, technology, and services Business Processes
    Establish, implement, and maintain a consumer complaint management program. CC ID 04570
    [In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities. Article 85 ¶ 2
    Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. A complaint shall be duly reasoned and indicate at least: Article 89 2.]
    Acquisition or sale of facilities, technology, and services Business Processes
    Document consumer complaints. CC ID 13903
    [{natural persons} Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market surveillance authority. Article 85 ¶ 1
    A complaint shall be duly reasoned and indicate at least: the point of contact of the provider of the general-purpose AI model concerned; Article 89 2.(a)
    A complaint shall be duly reasoned and indicate at least: a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream provider considers that the provider of the general-purpose AI model concerned infringed this Regulation; Article 89 2.(b)
    {is relevant} A complaint shall be duly reasoned and indicate at least: any other information that the downstream provider that sent the request considers relevant, including, where appropriate, information gathered on its own initiative. Article 89 2.(c)]
    Acquisition or sale of facilities, technology, and services Business Processes
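Article 89(2), quoted above, fixes the minimum content of a downstream provider's complaint: the point of contact of the general-purpose AI model provider (point (a)), a reasoned description of the facts, provisions, and alleged infringement (point (b)), and any other information the complainant considers relevant (point (c)). A sketch of that record, with assumed field names:

```python
# Illustrative complaint record per Article 89(2); field names are assumptions.
from dataclasses import dataclass

@dataclass
class DownstreamComplaint:
    provider_point_of_contact: str        # Article 89 2.(a)
    description_of_facts: str             # Article 89 2.(b)
    provisions_concerned: str             # Article 89 2.(b)
    reason_for_infringement: str          # Article 89 2.(b)
    other_relevant_information: str = ""  # Article 89 2.(c) is open-ended

    def is_duly_reasoned(self) -> bool:
        """Points (a) and (b) are mandatory; (c) may be empty."""
        return all([self.provider_point_of_contact.strip(),
                    self.description_of_facts.strip(),
                    self.provisions_concerned.strip(),
                    self.reason_for_infringement.strip()])
```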
    Assess consumer complaints and litigation. CC ID 16521 Acquisition or sale of facilities, technology, and services Investigate
    Notify the complainant about their rights after receiving a complaint. CC ID 16794 Acquisition or sale of facilities, technology, and services Communicate
    Include how to access information from the dispute resolution body in the consumer complaint management program. CC ID 13816 Acquisition or sale of facilities, technology, and services Establish/Maintain Documentation
    Include any requirements for using information from the dispute resolution body in the consumer complaint management program. CC ID 13815 Acquisition or sale of facilities, technology, and services Establish/Maintain Documentation
    Post contact information in an easily seen location at facilities. CC ID 13812 Acquisition or sale of facilities, technology, and services Communicate
    Provide users a list of the available dispute resolution bodies. CC ID 13814 Acquisition or sale of facilities, technology, and services Communicate
    Post the dispute resolution body's contact information on the organization's website. CC ID 13811 Acquisition or sale of facilities, technology, and services Communicate
    Disseminate and communicate the consumer complaint management program to interested personnel and affected parties. CC ID 16795 Acquisition or sale of facilities, technology, and services Communicate
    Establish, implement, and maintain notice and take-down procedures. CC ID 09963 Acquisition or sale of facilities, technology, and services Establish/Maintain Documentation
    Establish, implement, and maintain a privacy framework that protects restricted data. CC ID 11850 Privacy protection for information and data Establish/Maintain Documentation
    Establish, implement, and maintain a personal data choice and consent program. CC ID 12569
    [Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking their informed consent and may request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out. Article 60 5.]
    Privacy protection for information and data Establish/Maintain Documentation
    Provide a copy of the data subject's consent to the data subject. CC ID 17234
    [The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative. Article 61 2.]
    Privacy protection for information and data Communicate
    Date the data subject's consent. CC ID 17233
    [The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal representative. Article 61 2.]
    Privacy protection for information and data Establish/Maintain Documentation
    Establish, implement, and maintain data request procedures. CC ID 16546 Privacy protection for information and data Establish/Maintain Documentation
    Refrain from discriminating against data subjects who have exercised privacy rights. CC ID 13435 Privacy protection for information and data Human Resources Management
    Refrain from charging a fee to implement an opt-out request. CC ID 13877 Privacy protection for information and data Business Processes
    Establish and maintain disclosure authorization forms for authorization of consent to use personal data. CC ID 13433 Privacy protection for information and data Establish/Maintain Documentation
    Include procedures for revoking authorization of consent to use personal data in the disclosure authorization form. CC ID 13438 Privacy protection for information and data Establish/Maintain Documentation
    Include the identity of the person seeking consent in the disclosure authorization. CC ID 13999 Privacy protection for information and data Establish/Maintain Documentation
    Include the recipients of the disclosed personal data in the disclosure authorization form. CC ID 13440 Privacy protection for information and data Establish/Maintain Documentation
    Include the signature of the data subject and the signing date in the disclosure authorization form. CC ID 13439 Privacy protection for information and data Establish/Maintain Documentation
    Include the identity of the data subject in the disclosure authorization form. CC ID 13436 Privacy protection for information and data Establish/Maintain Documentation
    Include the types of personal data to be disclosed in the disclosure authorization form. CC ID 13442 Privacy protection for information and data Establish/Maintain Documentation
    Include how personal data will be used in the disclosure authorization form. CC ID 13441 Privacy protection for information and data Establish/Maintain Documentation
    Include agreement termination information in the disclosure authorization form. CC ID 13437 Privacy protection for information and data Establish/Maintain Documentation
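Taken together, the "Include ... in the disclosure authorization form" controls above (CC IDs 13999, 13440, 13439, 13436, 13442, 13441, 13437) enumerate a form's required fields. The sketch below just mirrors that enumeration; the field names are assumptions, not terms prescribed by any cited control.

```python
# Illustrative disclosure authorization form; one field per enumerated control.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class DisclosureAuthorizationForm:
    person_seeking_consent: str     # CC ID 13999
    data_subject_identity: str      # CC ID 13436
    recipients: List[str]           # CC ID 13440
    personal_data_types: List[str]  # CC ID 13442
    intended_uses: List[str]        # CC ID 13441
    termination_information: str    # CC ID 13437
    subject_signature: str          # CC ID 13439
    signing_date: date              # CC ID 13439

    def is_complete(self) -> bool:
        """Every enumerated element must be present before the form is collected and retained."""
        return all(bool(value) for value in vars(self).values())
```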
    Offer incentives for consumers to opt-in to provide their personal data to the organization. CC ID 13781 Privacy protection for information and data Business Processes
    Refrain from using coercive financial incentive programs to entice opt-in consent. CC ID 13795 Privacy protection for information and data Business Processes
    Allow data subjects to opt out and refrain from granting an authorization of consent to use personal data. CC ID 00391 Privacy protection for information and data Data and Information Management
    Treat an opt-out direction by an individual joint consumer as applying to all associated joint consumers. CC ID 13452 Privacy protection for information and data Business Processes
    Treat opt-out directions separately for each customer relationship the data subject establishes with the organization. CC ID 13454 Privacy protection for information and data Business Processes
    Establish, implement, and maintain an opt-out method in accordance with organizational standards. CC ID 16526 Privacy protection for information and data Data and Information Management
    Establish, implement, and maintain a notification system for opt-out requests. CC ID 16880 Privacy protection for information and data Technical Security
    Comply with opt-out directions by the data subject, unless otherwise directed by compliance requirements. CC ID 13451 Privacy protection for information and data Business Processes
    Confirm the individual's identity before granting an opt-out request. CC ID 16813 Privacy protection for information and data Process or Activity
    Highlight the section regarding data subject's consent from other sections in contracts and agreements. CC ID 13988 Privacy protection for information and data Establish/Maintain Documentation
    Allow consent requests to be provided in any official language. CC ID 16530 Privacy protection for information and data Business Processes
    Notify interested personnel and affected parties of the reasons the opt-out request was refused. CC ID 16537 Privacy protection for information and data Communicate
    Collect and retain disclosure authorizations for each data subject. CC ID 13434 Privacy protection for information and data Records Management
    Refrain from requiring consent to collect, use, or disclose personal data beyond specified, legitimate reasons in order to receive products and services. CC ID 13605 Privacy protection for information and data Data and Information Management
    Refrain from obtaining consent through deception. CC ID 13556 Privacy protection for information and data Data and Information Management
    Give individuals the ability to change the uses of their personal data. CC ID 00469 Privacy protection for information and data Data and Information Management
    Notify data subjects of the implications of withdrawing consent. CC ID 13551 Privacy protection for information and data Data and Information Management
    Establish, implement, and maintain a personal data accountability program. CC ID 13432 Privacy protection for information and data Establish/Maintain Documentation
    Require data controllers to be accountable for their actions. CC ID 00470 Privacy protection for information and data Establish Roles
    Notify the supervisory authority. CC ID 00472
    [Notified bodies shall inform the notifying authority of the following: any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII; Article 45 1.(b)
    Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data. Article 5 4.
    Providers of high-risk AI systems shall: take the necessary corrective actions and provide information as required in Article 20; Article 16 ¶ 1 (j)
    Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect. Article 24 2.
    Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect. Article 23 2.
    Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities. Article 26 5. ¶ 1
    Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor. Article 26 8.
    Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk. Article 52 1.
    Providers or prospective providers shall notify the national market surveillance authority in the Member State where the testing in real world conditions is to be conducted of the suspension or termination of the testing in real world conditions and of the final outcomes. Article 60 8.]
    Privacy protection for information and data Behavior
    Establish, implement, and maintain approval applications. CC ID 16778 Privacy protection for information and data Establish/Maintain Documentation
    Define the requirements for approving or denying approval applications. CC ID 16780 Privacy protection for information and data Business Processes
    Submit approval applications to the supervisory authority. CC ID 16627 Privacy protection for information and data Communicate
    Include required information in the approval application. CC ID 16628 Privacy protection for information and data Establish/Maintain Documentation
    Extend the time limit for approving or denying approval applications. CC ID 16779 Privacy protection for information and data Business Processes
    Approve the approval application unless the applicant has been convicted. CC ID 16603 Privacy protection for information and data Process or Activity
    Provide the supervisory authority with any information requested by the supervisory authority. CC ID 12606
    [Notified bodies shall inform the notifying authority of the following: any circumstances affecting the scope of or conditions for notification; Article 45 1.(c)
    Notified bodies shall inform the notifying authority of the following: any request for information which they have received from market surveillance authorities regarding conformity assessment activities; Article 45 1.(d)
    Notified bodies shall inform the notifying authority of the following: on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting. Article 45 1.(e)
    Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data. Article 5 4.
    Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes known that it will be met. That notification shall include the information necessary to demonstrate that the relevant requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of which it has not been notified, it may decide to designate it as a model with systemic risk. Article 52 1.
    For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks: provide the AI Office, upon a reasoned request, with all the information and documentation, including that referred to in point (b), necessary to demonstrate compliance with the obligations in this Chapter; Article 54 3.(c)
    The provider of the general-purpose AI model concerned, or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading. Article 91 5.
    The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned. Article 92 5.]
    Privacy protection for information and data Process or Activity
    Notify the supervisory authority of the safeguards employed to protect the data subject's rights. CC ID 12605 Privacy protection for information and data Communicate
    Respond to questions about submissions in a timely manner. CC ID 16930 Privacy protection for information and data Communicate
    Establish, implement, and maintain a personal data use limitation program. CC ID 13428 Privacy protection for information and data Establish/Maintain Documentation
    Establish, implement, and maintain a personal data use purpose specification. CC ID 00093 Privacy protection for information and data Establish/Maintain Documentation
    Dispose of media and restricted data in a timely manner. CC ID 00125
    [For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted. Article 5 3. ¶ 1
    To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first; Article 10 5.(e)
    If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted. Article 26 10. ¶ 2]
    Privacy protection for information and data Data and Information Management
    Refrain from destroying records being inspected or reviewed. CC ID 13015 Privacy protection for information and data Records Management
    Notify the data subject after their personal data is disposed, as necessary. CC ID 13502 Privacy protection for information and data Communicate
    Establish, implement, and maintain restricted data use limitation procedures. CC ID 00128 Privacy protection for information and data Establish/Maintain Documentation
    Establish and maintain a record of processing activities when processing restricted data. CC ID 12636
    [{was necessary} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data. Article 10 5.(f)]
    Privacy protection for information and data Establish/Maintain Documentation
    Refrain from maintaining a record of processing activities if the data processor employs a limited number of persons. CC ID 13378 Privacy protection for information and data Establish/Maintain Documentation
    Refrain from maintaining a record of processing activities if the personal data relates to criminal records. CC ID 13377 Privacy protection for information and data Establish/Maintain Documentation
    Refrain from maintaining a record of processing activities if the data being processed is restricted data. CC ID 13376 Privacy protection for information and data Establish/Maintain Documentation
    Refrain from maintaining a record of processing activities if it could result in a risk to the data subject's rights or data subject's freedom. CC ID 13375 Privacy protection for information and data Establish/Maintain Documentation
    Include the data protection officer's contact information in the record of processing activities. CC ID 12640 Privacy protection for information and data Records Management
    Include the data processor's contact information in the record of processing activities. CC ID 12657 Privacy protection for information and data Records Management
    Include the data processor's representative's contact information in the record of processing activities. CC ID 12658 Privacy protection for information and data Records Management
    Include a general description of the implemented security measures in the record of processing activities. CC ID 12641 Privacy protection for information and data Records Management
    Include a description of the data subject categories in the record of processing activities. CC ID 12659 Privacy protection for information and data Records Management
    Include the purpose of processing restricted data in the record of processing activities. CC ID 12663
    [{was necessary} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data. Article 10 5.(f)]
    Privacy protection for information and data Records Management
    Include the personal data processing categories in the record of processing activities. CC ID 12661 Privacy protection for information and data Records Management
    Include the time limits for erasing each data category in the record of processing activities. CC ID 12690 Privacy protection for information and data Records Management
    Include the data recipient categories to whom restricted data has been or will be disclosed in the record of processing activities. CC ID 12664 Privacy protection for information and data Records Management
    Include a description of the personal data categories in the record of processing activities. CC ID 12660 Privacy protection for information and data Records Management
    Include the joint data controller's contact information in the record of processing activities. CC ID 12639 Privacy protection for information and data Records Management
    Include the data controller's representative's contact information in the record of processing activities. CC ID 12638 Privacy protection for information and data Records Management
    Include documentation of the transferee's safeguards for transferring restricted data in the record of processing activities. CC ID 12643 Privacy protection for information and data Records Management
    Include the identification of transferees for transferring restricted data in the record of processing activities. CC ID 12642 Privacy protection for information and data Records Management
    Include the data controller's contact information in the record of processing activities. CC ID 12637 Privacy protection for information and data Records Management
    Process restricted data lawfully and carefully. CC ID 00086
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data; Article 10 5.(a)
    To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation; Article 10 5.(b)]
    Privacy protection for information and data Establish Roles
    Implement technical controls that limit processing restricted data for specific purposes. CC ID 12646 Privacy protection for information and data Technical Security
    Process personal data pertaining to a patient's health in order to treat those patients. CC ID 00200 Privacy protection for information and data Data and Information Management
    Refrain from disclosing Individually Identifiable Health Information when in violation of territorial or federal law. CC ID 11966 Privacy protection for information and data Records Management
    Document the conditions for the use or disclosure of Individually Identifiable Health Information by a covered entity to another covered entity. CC ID 00210 Privacy protection for information and data Establish/Maintain Documentation
    Disclose Individually Identifiable Health Information for a covered entity's own use. CC ID 00211 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for a healthcare provider's treatment activities by a covered entity. CC ID 00212 Privacy protection for information and data Data and Information Management
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is permitted with the consent of the data subject. CC ID 11970 Privacy protection for information and data Records Management
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is to support the treatment of the individual. CC ID 11969 Privacy protection for information and data Process or Activity
    Rely upon the warranty of the covered entity that the record disclosure request for Individually Identifiable Health Information is permitted by law. CC ID 11976 Privacy protection for information and data Records Management
    Disclose Individually Identifiable Health Information for payment activities between covered entities or healthcare providers. CC ID 00213 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for Treatment, Payment, and Health Care Operations activities when both covered entities have a relationship with the data subject. CC ID 00214 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for Treatment, Payment, and Health Care Operations activities between a covered entity and a participating healthcare provider when the information is collected from the data subject and a third party. CC ID 00215 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information in accordance with agreed upon restrictions. CC ID 06249 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information in accordance with the privacy notice. CC ID 06250 Privacy protection for information and data Data and Information Management
    Disclose permitted Individually Identifiable Health Information for facility directories. CC ID 06251 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for cadaveric organ donation purposes, eye donation purposes, or tissue donation purposes. CC ID 06252 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for medical suitability determinations. CC ID 06253 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for armed forces personnel appropriately. CC ID 06254 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information in order to provide public benefits by government agencies. CC ID 06255 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for fundraising. CC ID 06256 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information for research use when the appropriate requirements are included in the approval documentation or waiver documentation. CC ID 06257 Privacy protection for information and data Establish/Maintain Documentation
    Document the conditions for the disclosure of Individually Identifiable Health Information by an organization providing healthcare services to organizations other than business associates or other covered entities. CC ID 00201 Privacy protection for information and data Establish/Maintain Documentation
    Disclose Individually Identifiable Health Information when the data subject cannot physically or legally provide consent and the disclosing organization is a healthcare provider. CC ID 00202 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information to provide appropriate treatment to the data subject when the disclosing organization is a healthcare provider. CC ID 00203 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information when it is not contrary to the data subject's wish prior to becoming unable to provide consent and the disclosing organization is a healthcare provider. CC ID 00204 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information that is reasonable or necessary for the disclosure purpose when the disclosing organization is a healthcare provider. CC ID 00205 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information consistent with the law when the disclosing organization is a healthcare provider. CC ID 00206 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information in order to carry out treatment when the disclosing organization is a healthcare provider. CC ID 00207 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information in order to carry out treatment when the data subject has provided consent and the disclosing organization is a healthcare provider. CC ID 00208 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information in order to carry out treatment when the data subject's guardian or representative has provided consent and the disclosing organization is a healthcare provider. CC ID 00209 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information when the disclosing organization is a healthcare provider that supports public health and safety activities. CC ID 06248 Privacy protection for information and data Data and Information Management
    Disclose Individually Identifiable Health Information in order to report abuse or neglect when the disclosing organization is a healthcare provider. CC ID 06819 Privacy protection for information and data Data and Information Management
    Refrain from disclosing Individually Identifiable Health Information related to reproductive health care, as necessary. CC ID 17250 Privacy protection for information and data Business Processes
    Document how Individually Identifiable Health Information is used and disclosed when authorization has been granted. CC ID 00216 Privacy protection for information and data Establish/Maintain Documentation
    Define and implement valid authorization control requirements. CC ID 06258 Privacy protection for information and data Establish/Maintain Documentation
    Obtain explicit consent for authorization to release Individually Identifiable Health Information. CC ID 00217 Privacy protection for information and data Data and Information Management
    Obtain explicit consent for authorization to release psychotherapy notes. CC ID 00218 Privacy protection for information and data Data and Information Management
    Cease the use or disclosure of Individually Identifiable Health Information under predetermined conditions. CC ID 17251 Privacy protection for information and data Business Processes
    Refrain from using Individually Identifiable Health Information related to reproductive health care, as necessary. CC ID 17256 Privacy protection for information and data Business Processes
    Refrain from using Individually Identifiable Health Information to determine eligibility or continued eligibility for credit. CC ID 00219 Privacy protection for information and data Data and Information Management
    Process personal data after the data subject has granted explicit consent. CC ID 00180 Privacy protection for information and data Data and Information Management
    Process personal data in order to perform a legal obligation or exercise a legal right. CC ID 00182 Privacy protection for information and data Data and Information Management
    Process personal data relating to criminal offenses when required by law. CC ID 00237 Privacy protection for information and data Data and Information Management
    Process personal data in order to prevent personal injury or damage to the data subject's health. CC ID 00183 Privacy protection for information and data Data and Information Management
    Process personal data in order to prevent personal injury or damage to a third party's health. CC ID 00184 Privacy protection for information and data Data and Information Management
    Process personal data for statistical purposes or scientific purposes. CC ID 00256 Privacy protection for information and data Data and Information Management
    Process personal data during legitimate activities with safeguards for the data subject's legal rights. CC ID 00185 Privacy protection for information and data Data and Information Management
    Process traffic data in a controlled manner. CC ID 00130 Privacy protection for information and data Data and Information Management
    Process personal data for health insurance, social insurance, state social benefits, social welfare, or child protection. CC ID 00186 Privacy protection for information and data Data and Information Management
    Process personal data when it is publicly accessible. CC ID 00187 Privacy protection for information and data Data and Information Management
    Process personal data for direct marketing and other personalized mail programs. CC ID 00188 Privacy protection for information and data Data and Information Management
    Refrain from processing personal data for marketing or advertising to children. CC ID 14010 Privacy protection for information and data Business Processes
    Process personal data for the purposes of employment. CC ID 16527 Privacy protection for information and data Data and Information Management
    Process personal data for justice administration, lawsuits, judicial decisions, and investigations. CC ID 00189 Privacy protection for information and data Data and Information Management
    Process personal data for debt collection or benefit payments. CC ID 00190 Privacy protection for information and data Data and Information Management
    Process personal data in order to advance the public interest. CC ID 00191 Privacy protection for information and data Data and Information Management
    Process personal data for surveys, archives, or scientific research. CC ID 00192 Privacy protection for information and data Data and Information Management
    Process personal data absent consent for journalistic purposes, artistic purposes, or literary purposes. CC ID 00193 Privacy protection for information and data Data and Information Management
    Process personal data for academic purposes or religious purposes. CC ID 00194 Privacy protection for information and data Data and Information Management
    Process personal data when it is used by a public authority for National Security policy or criminal policy. CC ID 00195 Privacy protection for information and data Data and Information Management
    Refrain from storing data in newly created files or registers which directly or indirectly reveals the restricted data. CC ID 00196 Privacy protection for information and data Data and Information Management
    Follow legal obligations while processing personal data. CC ID 04794
    [{applicable requirements} Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law. Article 50 3.]
    Privacy protection for information and data Data and Information Management
    Start personal data processing only after the needed notifications are submitted. CC ID 04791 Privacy protection for information and data Data and Information Management
    Limit the redisclosure and reuse of restricted data. CC ID 00168
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation; Article 10 5.(b)]
    Privacy protection for information and data Data and Information Management
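Article 10 5.(b), quoted above, names pseudonymisation among the privacy-preserving measures required before special categories of personal data may be processed for bias detection. One common pseudonymisation technique is keyed hashing (HMAC): identifiers are replaced with surrogates that stay linkable across records but cannot be reversed without the secret key. This is a minimal sketch only; the function and key names are illustrative, and a real deployment would keep the key in a key-management system, not in source code.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash.

    Deterministic: the same identifier always yields the same pseudonym,
    so records remain linkable for bias analysis, but the mapping cannot
    be reversed without the secret key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative key only; store real keys in a key-management system.
key = b"example-key-held-in-a-kms"

p1 = pseudonymise("maria.lopez@example.com", key)
p2 = pseudonymise("maria.lopez@example.com", key)

assert p1 == p2            # deterministic: record linkage preserved
assert "maria" not in p1   # direct identifier no longer visible
```

Because the output is a fixed-length digest, the pseudonymised field also satisfies the technical re-use limitation in the same paragraph: downstream parties holding only pseudonyms cannot recover the original identifiers.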
    Refrain from redisclosing or reusing restricted data. CC ID 00169 Privacy protection for information and data Data and Information Management
    Document the redisclosing restricted data exceptions. CC ID 00170 Privacy protection for information and data Establish/Maintain Documentation
    Redisclose restricted data when the data subject consents. CC ID 00171 Privacy protection for information and data Data and Information Management
    Redisclose restricted data when it is for criminal law enforcement. CC ID 00172 Privacy protection for information and data Data and Information Management
    Redisclose restricted data in order to protect public revenue. CC ID 00173 Privacy protection for information and data Data and Information Management
    Redisclose restricted data in order to assist a Telecommunications Ombudsman. CC ID 00174 Privacy protection for information and data Data and Information Management
    Redisclose restricted data in order to prevent a life-threatening emergency. CC ID 00175 Privacy protection for information and data Data and Information Management
    Redisclose restricted data when it deals with installing, maintaining, operating, or providing access to a Public Telecommunications Network or a telecommunication facility. CC ID 00176 Privacy protection for information and data Data and Information Management
    Redisclose restricted data in order to preserve human life at sea. CC ID 00177 Privacy protection for information and data Data and Information Management
    Establish, implement, and maintain a data handling program. CC ID 13427 Privacy protection for information and data Establish/Maintain Documentation
    Establish, implement, and maintain data handling policies. CC ID 00353 Privacy protection for information and data Establish/Maintain Documentation
    Establish, implement, and maintain data and information confidentiality policies. CC ID 00361 Privacy protection for information and data Establish/Maintain Documentation
    Establish, implement, and maintain record structures to support information confidentiality. CC ID 00360
    [Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78. Article 45 4.]
    Privacy protection for information and data Data and Information Management
    Include passwords, Personal Identification Numbers, and card security codes in the personal data definition. CC ID 04699 Privacy protection for information and data Configuration
    Store payment card data in secure chips, if possible. CC ID 13065 Privacy protection for information and data Configuration
    Refrain from storing data elements containing sensitive authentication data after authorization is approved. CC ID 04758 Privacy protection for information and data Configuration
    Render unrecoverable sensitive authentication data after authorization is approved. CC ID 11952 Privacy protection for information and data Technical Security
    Automate the disposition process for records that contain "do not store" data or "delete after transaction process" data. CC ID 06083 Privacy protection for information and data Data and Information Management
    Log the disclosure of personal data. CC ID 06628 Privacy protection for information and data Log Management
    Log the modification of personal data. CC ID 11844 Privacy protection for information and data Log Management
    Encrypt, truncate, or tokenize data fields, as necessary. CC ID 06850 Privacy protection for information and data Technical Security
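The control above offers three interchangeable field-level protections. As a minimal sketch of two of them, truncation keeps only the displayable tail of a value, while tokenization swaps the value for a random surrogate that only a vault can map back. The names `truncate_pan`, `tokenize`, and `_vault` are illustrative; a production system would use a hardened tokenization service rather than an in-process dictionary.

```python
import secrets

# In-memory token vault for illustration; a real deployment would use a
# dedicated, access-controlled tokenization service.
_vault: dict[str, str] = {}

def truncate_pan(pan: str) -> str:
    """Keep only the last four digits of a primary account number."""
    return "*" * (len(pan) - 4) + pan[-4:]

def tokenize(value: str) -> str:
    """Replace a sensitive field with a random surrogate token."""
    token = secrets.token_urlsafe(16)
    _vault[token] = value  # only the vault can map the token back
    return token

def detokenize(token: str) -> str:
    return _vault[token]

pan = "4111111111111111"
masked = truncate_pan(pan)   # -> '************1111'
token = tokenize(pan)

assert masked.endswith("1111")
assert detokenize(token) == pan
```

Truncation is irreversible and suited to display contexts; tokenization is reversible through the vault and suited to systems that must later recover the original field under strict access control.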
    Implement security measures to protect personal data. CC ID 13606
    [To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations; Article 10 5.(c)]
    Privacy protection for information and data Technical Security
    Establish, implement, and maintain a personal data transfer program. CC ID 00307
    [{have not transmitted} {have not transferred} {have not accessed} To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur: the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties; Article 10 5.(d)]
    Privacy protection for information and data Establish/Maintain Documentation
    Obtain consent from an individual prior to transferring personal data. CC ID 06948 Privacy protection for information and data Data and Information Management
    Include procedures for transferring personal data from one data controller to another data controller in the personal data transfer program. CC ID 00351 Privacy protection for information and data Establish/Maintain Documentation
    Refrain from requiring independent recourse mechanisms when transferring personal data from one data controller to another data controller. CC ID 12528 Privacy protection for information and data Business Processes
    Notify data subjects when their personal data is transferred. CC ID 00352 Privacy protection for information and data Behavior
    Include procedures for transferring personal data to third parties in the personal data transfer program. CC ID 00333 Privacy protection for information and data Establish/Maintain Documentation
    Notify data subjects of the geographic locations of the third parties when transferring personal data to third parties. CC ID 14414 Privacy protection for information and data Communicate
    Provide an adequate data protection level by the transferee prior to transferring personal data to another country. CC ID 00314 Privacy protection for information and data Data and Information Management
    Refrain from restricting personal data transfers to member states of the European Union. CC ID 00312 Privacy protection for information and data Data and Information Management
    Prohibit personal data transfers when security is inadequate. CC ID 00345 Privacy protection for information and data Data and Information Management
    Meet the use limitation exceptions in order to transfer personal data. CC ID 00346 Privacy protection for information and data Data and Information Management
    Refrain from transferring personal data past the first transfer. CC ID 00347 Privacy protection for information and data Data and Information Management
    Document transfer disagreements by the data subject in writing. CC ID 00348 Privacy protection for information and data Establish/Maintain Documentation
    Allow the data subject the right to object to the personal data transfer. CC ID 00349 Privacy protection for information and data Data and Information Management
    Authorize the transfer of restricted data in accordance with organizational standards. CC ID 16428 Privacy protection for information and data Records Management
    Follow the instructions of the data transferrer. CC ID 00334 Privacy protection for information and data Behavior
    Define the personal data transfer exceptions for transferring personal data to another country when adequate protection level standards are not met. CC ID 00315 Privacy protection for information and data Establish/Maintain Documentation
    Include publicly available information as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00316 Privacy protection for information and data Data and Information Management
    Include transfer agreements between data controllers and third parties when it is for the data subject's interest as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00317 Privacy protection for information and data Data and Information Management
    Include personal data for the health field and for treatment as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00318 Privacy protection for information and data Data and Information Management
    Include personal data for journalistic purposes or private purposes as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00319 Privacy protection for information and data Data and Information Management
    Include personal data for important public interest as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00320 Privacy protection for information and data Data and Information Management
    Include consent by the data subject as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00321 Privacy protection for information and data Data and Information Management
    Include personal data used for a contract as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00322 Privacy protection for information and data Data and Information Management
    Include personal data for protecting the data subject or the data subject's interests, such as saving his/her life or providing healthcare as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00323 Privacy protection for information and data Data and Information Management
    Include personal data that is necessary to fulfill international law obligations as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00324 Privacy protection for information and data Data and Information Management
    Include personal data used for legal investigations as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00325 Privacy protection for information and data Data and Information Management
    Include personal data that is authorized by a legislative act as a personal data transfer exception for transferring personal data to another country outside an adequate data protection level. CC ID 00326 Privacy protection for information and data Data and Information Management
    Require transferees to implement adequate data protection levels for the personal data. CC ID 00335 Privacy protection for information and data Data and Information Management
    Refrain from requiring a contract between the data controller and trusted third parties when personal information is transferred. CC ID 12527 Privacy protection for information and data Business Processes
    Define the personal data transfer exceptions for transferring personal data to another organization when adequate protection level standards are not met. CC ID 00336 Privacy protection for information and data Establish/Maintain Documentation
    Include personal data that is publicly available information as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00337 Privacy protection for information and data Data and Information Management
    Include personal data that is used for journalistic purposes or private purposes as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00338 Privacy protection for information and data Data and Information Management
    Include personal data that is used for important public interest as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00339 Privacy protection for information and data Data and Information Management
    Include consent by the data subject as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00340 Privacy protection for information and data Data and Information Management
    Include personal data that is used for a contract as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00341 Privacy protection for information and data Data and Information Management
    Include personal data that is used for protecting the data subject or the data subject's interests, such as providing healthcare or saving his/her life as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00342 Privacy protection for information and data Data and Information Management
    Include personal data that is used for a legal investigation as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00343 Privacy protection for information and data Data and Information Management
    Include personal data that is authorized by a legislative act as a personal data transfer exception for transferring personal data to a third party outside adequate data protection levels. CC ID 00344 Privacy protection for information and data Data and Information Management
    Notify data subjects about organizational liability when transferring personal data to third parties. CC ID 12353 Privacy protection for information and data Communicate
    Notify the data subject of any personal data changes during the personal data transfer. CC ID 00350 Privacy protection for information and data Behavior
    Establish, implement, and maintain Internet interactivity data transfer procedures. CC ID 06949 Privacy protection for information and data Establish/Maintain Documentation
    Obtain consent prior to storing cookies on an individual's browser. CC ID 06950 Privacy protection for information and data Data and Information Management
    Obtain consent prior to downloading software to an individual's computer. CC ID 06951 Privacy protection for information and data Data and Information Management
    Refrain from installing software on an individual's computer unless acting in accordance with a court order. CC ID 14000 Privacy protection for information and data Process or Activity
    Remove or uninstall software from an individual's computer, as necessary. CC ID 13998 Privacy protection for information and data Process or Activity
    Remove or uninstall software from an individual's computer when consent is revoked. CC ID 13997 Privacy protection for information and data Process or Activity
    Obtain consent prior to tracking Internet traffic patterns or browsing history of an individual. CC ID 06961 Privacy protection for information and data Data and Information Management
    Develop remedies and sanctions for privacy policy violations. CC ID 00474 Privacy protection for information and data Data and Information Management
    Define the organization's liability based on the applicable law. CC ID 00504
    [{be liable} {refrain from imposing} Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law. Article 57 12.
    The provider or prospective provider shall be liable under applicable Union and national liability law for any damage caused in the course of their testing in real world conditions. Article 60 9.]
    Privacy protection for information and data Establish/Maintain Documentation
    Define the sanctions and fines available for privacy rights violations based on applicable law. CC ID 00505
    [{be liable} {refrain from imposing} Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for their participation and follow in good faith the guidance given by the national competent authority, no administrative fines shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided guidance for compliance, no administrative fines shall be imposed regarding that law. Article 57 12.
    The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this Article, the provider shall be subject to fines in accordance with Article 99. Article 80 4.
    Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance authority establishes that the AI system was misclassified by the provider as non-high-risk in order to circumvent the application of requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Article 99. Article 80 7.]
    Privacy protection for information and data Establish/Maintain Documentation
    Define the appeal process based on the applicable law. CC ID 00506
    [An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available. Article 44 3. ¶ 2]
    Privacy protection for information and data Establish/Maintain Documentation
    Define the fee structure for the appeal process. CC ID 16532 Privacy protection for information and data Process or Activity
    Define the time requirements for the appeal process. CC ID 16531 Privacy protection for information and data Process or Activity
    Disseminate and communicate instructions for the appeal process to interested personnel and affected parties. CC ID 16544 Privacy protection for information and data Communicate
    Disseminate and communicate a written explanation of the reasons for appeal decisions to interested personnel and affected parties. CC ID 16542 Privacy protection for information and data Communicate
    Write contractual agreements in clear and conspicuous language. CC ID 16923 Third Party and supply chain oversight Acquisition/Sale of Assets or Services
    Establish, implement, and maintain software exchange agreements with all third parties. CC ID 11615 Third Party and supply chain oversight Establish/Maintain Documentation
    Establish, implement, and maintain rules of engagement with third parties. CC ID 13994 Third Party and supply chain oversight Establish/Maintain Documentation
    Include the purpose in the information flow agreement. CC ID 17016 Third Party and supply chain oversight Establish/Maintain Documentation
    Include the type of information being transmitted in the information flow agreement. CC ID 14245 Third Party and supply chain oversight Establish/Maintain Documentation
    Include the costs in the information flow agreement. CC ID 17018 Third Party and supply chain oversight Establish/Maintain Documentation
    Include the security requirements in the information flow agreement. CC ID 14244 Third Party and supply chain oversight Establish/Maintain Documentation
    Include the interface characteristics in the information flow agreement. CC ID 14240 Third Party and supply chain oversight Establish/Maintain Documentation
    Include text about participation in the organization's testing programs in third party contracts. CC ID 14402 Third Party and supply chain oversight Establish/Maintain Documentation
    Include the contract duration in third party contracts. CC ID 16221 Third Party and supply chain oversight Establish/Maintain Documentation
    Include cryptographic keys in third party contracts. CC ID 16179 Third Party and supply chain oversight Establish/Maintain Documentation
    Include bankruptcy provisions in third party contracts. CC ID 16519 Third Party and supply chain oversight Establish/Maintain Documentation
    Include cybersecurity supply chain risk management requirements in third party contracts. CC ID 15646 Third Party and supply chain oversight Establish/Maintain Documentation
    Include requirements to cooperate with competent authorities in third party contracts. CC ID 17186 Third Party and supply chain oversight Establish/Maintain Documentation
    Include compliance with the organization's data usage policies in third party contracts. CC ID 16413 Third Party and supply chain oversight Establish/Maintain Documentation
    Include financial reporting in third party contracts, as necessary. CC ID 13573 Third Party and supply chain oversight Establish/Maintain Documentation
    Include on-site visits in third party contracts. CC ID 17306 Third Party and supply chain oversight Establish/Maintain Documentation
    Include training requirements in third party contracts. CC ID 16367 Third Party and supply chain oversight Acquisition/Sale of Assets or Services
    Include location requirements in third party contracts. CC ID 16915 Third Party and supply chain oversight Acquisition/Sale of Assets or Services
    Include the dispute resolution body's contact information in the terms and conditions in third party contracts. CC ID 13813 Third Party and supply chain oversight Establish/Maintain Documentation
    Include a usage limitation of restricted data clause in third party contracts. CC ID 13026 Third Party and supply chain oversight Establish/Maintain Documentation
    Include end-of-life information in third party contracts. CC ID 15265 Third Party and supply chain oversight Establish/Maintain Documentation
    Approve or deny third party recovery plans, as necessary. CC ID 17124 Third Party and supply chain oversight Systems Continuity
    Disseminate and communicate third party contracts to interested personnel and affected parties. CC ID 17301 Third Party and supply chain oversight Communicate