
AI Auditing Framework: Draft Guidance for Organizations

AI applications increasingly permeate many aspects of our lives. We understand the distinct benefits that AI can bring, but also the risks it can pose to the rights and freedoms of individuals.


This is why we have developed a framework for auditing AI, focusing on best practices for data protection compliance – whether you design your own AI system, or implement one from a third party. It provides a solid methodology to audit AI applications and ensure they process personal data fairly. It comprises:

  • auditing tools and procedures that we will use in audits and investigations; and
  • this detailed guidance on AI and data protection, which includes indicative risk and control measures that you can deploy when you use AI to process personal data.

What is the draft guidance?

The draft guidance sets out best practice for data protection compliance for artificial intelligence (“AI”). It clarifies how to assess the data protection risks posed by AI and identifies technical and organisational measures that can be put in place to help mitigate these risks.

Who does the draft guidance apply to?

The draft guidance applies broadly – to both companies that design, build and deploy their own AI systems and those that use AI developed by third parties.

The draft guidance explicitly states that it is intended for two audiences: those with a compliance focus, such as DPOs and general counsel, and technology specialists, such as machine learning experts, data scientists, software developers/engineers, and cyber security and IT risk managers.

It stresses the importance of considering the data protection implications of AI at every stage of development, from training to deployment, and highlights that compliance specialists and DPOs must be involved in AI projects from the earliest stages to address the relevant risks.

Also read: Guidance on the AI auditing framework – ICO


Primary concepts:

1. Accountability and governance

The AI Auditing Framework highlights that the accountability principle makes organisations responsible for their AI systems’ compliance with data protection requirements. They must assess and mitigate the risks posed by such systems, document and demonstrate how the system is compliant, and justify the choices they have made.

The AI Auditing Framework recommends that organisations align their internal structures, roles and responsibility maps, training, policies and incentives with their overall AI governance and risk management strategy. It notes that senior management, including data protection officers, are accountable for understanding and addressing data protection by design and by default in the organisation’s culture and processes, including in relation to the use of AI, where this can be more complex.

  • Data Protection Impact Assessments (“DPIAs”). There is a strong focus on the importance of DPIAs in the draft guidance, and the AI Auditing Framework notes that organizations are under a legal obligation to complete a DPIA if they use AI systems to process personal data. The AI Auditing Framework states that DPIAs should not be seen as a mere “box-ticking compliance” exercise and that they can act as road-maps to identify and control the risks that AI can pose (a minimal machine-readable sketch of such a road-map follows this list).
  • Controller/Processor Relationship. The draft guidance emphasizes the importance and challenges of understanding and identifying controller/processor relationships in the context of AI systems. It highlights that as AI involves processing personal data at several different phases, an entity may be a controller or joint controller for some stages and a processor for others.
  • “AI-related trade-offs”. Interestingly, the draft guidance recognizes that the use of AI is likely to involve necessary “trade-offs”. For example, training a model on additional data points may improve its statistical accuracy and thereby enhance fairness, but increasing the volume of personal data in the training set also increases the privacy risk.
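Because the guidance frames DPIAs as road-maps for identifying and controlling risk, it can help to keep the assessment in a structured, reviewable form. The Python sketch below is one hypothetical way to do that; the field names, example risk entries and helper method are illustrative assumptions, not a format the draft guidance prescribes.

# A minimal, illustrative DPIA risk register for an AI system.
# All field names and example entries are hypothetical, not an
# ICO-mandated format.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str   # the data protection risk identified
    likelihood: str    # e.g. "low", "medium", "high"
    severity: str      # impact on individuals if the risk materialises
    mitigation: str    # the control measure deployed ("" if none yet)
    owner: str         # who is accountable for the mitigation

@dataclass
class DPIARecord:
    system_name: str
    processing_purpose: str
    lawful_basis: str
    risks: list = field(default_factory=list)

    def open_risks(self):
        """Risks still awaiting a documented mitigation."""
        return [r for r in self.risks if not r.mitigation]

dpia = DPIARecord(
    system_name="loan-scoring-model",
    processing_purpose="credit risk assessment",
    lawful_basis="legitimate interests",
    risks=[
        Risk("training data contains special category fields",
             "medium", "high",
             "remove special category fields before training", "DPO"),
    ],
)
print(len(dpia.open_risks()))  # 0: every identified risk has a mitigation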

2. Fair, lawful and transparent processing

The draft guidance sets out specific recommendations and guidance on how the principles of lawfulness, fairness, and transparency apply to AI.

  • Lawfulness. The draft guidance highlights that the development and deployment of AI systems involve processing personal data in different ways for different purposes, and the AI Auditing Framework emphasizes the importance of distinguishing each distinct processing operation involved and identifying an appropriate lawful basis for each.
  • Fairness. The draft guidance promotes two key concepts in relation to fairness: statistical accuracy and addressing bias and discrimination (both illustrated in the sketch after this list).
  • Transparency. The draft guidance recognizes that the ability to explain AI is one of the key challenges in ensuring compliance, but does not go into further detail on how to address the transparency principle.
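To make those two fairness concepts concrete, the sketch below computes statistical accuracy and one simple bias indicator, the demographic parity gap, for a binary classifier. The data, group labels and metric choice are illustrative assumptions; the draft guidance does not prescribe specific metrics.

# Illustrative sketch: statistical accuracy plus a simple bias
# indicator (demographic parity gap) for a binary classifier.
from collections import defaultdict

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    by_group = defaultdict(list)
    for pred, group in zip(y_pred, groups):
        by_group[group].append(pred)
    rates = [sum(p) / len(p) for p in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical predictions for applicants in two demographic groups.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))                # 0.75
print(demographic_parity_gap(y_pred, groups))  # 0.0: equal positive rates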

3. Data minimization and security

  • Security. The draft guidance highlights that using AI to process personal data can increase known security risks. For instance, the AI Auditing Framework notes that the large amounts of personal data often needed to train AI systems increase the potential for loss or misuse of such data. In addition, the complexity of AI systems, which often rely heavily on third-party code and/or relationships with suppliers, introduces new potential for security breaches and software vulnerabilities. The draft guidance includes information on the types of attacks to which AI systems are likely to be particularly vulnerable and the types of security measures controllers should consider implementing to guard against such attacks. 
  • Data Minimization. Whilst the AI Auditing Framework recognizes that large amounts of data are generally required for AI, it emphasizes that the data minimization principle will still apply, and AI systems should not process more personal data than is needed for their purpose. Further, whilst models may need to retain data for training purposes, any training data that is no longer required should be erased.
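As a concrete illustration of the minimisation point, the sketch below keeps only the features a model actually needs and drops direct identifiers before a record enters the training set. The field names and feature list are assumptions made up for the example.

# Illustrative sketch of data minimisation for a training set: retain
# only the features the model needs and drop direct identifiers.
FEATURES_NEEDED = {"age_band", "income_band", "tenure_months"}
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def minimise(record):
    """Return a copy of the record containing only required features."""
    return {k: v for k, v in record.items() if k in FEATURES_NEEDED}

raw = {
    "name": "Jane Tan",
    "email": "jane@example.com",
    "phone": "+65 0000 0000",
    "age_band": "30-39",
    "income_band": "medium",
    "tenure_months": 18,
}

training_row = minimise(raw)
assert not (DIRECT_IDENTIFIERS & training_row.keys())
del raw  # once the source data is no longer required, erase it
print(training_row)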

4. The exercise of individual rights

The draft guidance also addresses the specific challenges that AI systems pose to ensuring individuals have effective mechanisms for exercising their personal data rights.

  • Training Data. The AI Auditing Framework states that converting personal data into a different format does not necessarily take the data out of scope of data protection legislation.
  • Access, rectification and erasure. The draft guidance confirms that requests for access, rectification or erasure of training data should not be considered unfounded or excessive simply because they may be more difficult to fulfill (for example in the context of personal data contained in a large training data set). 
  • Portability. The draft guidance clarifies that whilst personal data used to train a model is likely to be considered to have been “provided” by the individuals and therefore subject to the right to data portability, pre-processing methods often significantly change the data from its original form, so the transformed data may no longer count as “provided” and may fall outside the scope of that right.
  • Right to be informed. Individuals should be informed if their personal data is going to be used to train an AI system. However, the AI Auditing Framework recognizes that where a data set has been stripped of personal identifiers and contact addresses, it may be impossible or involve disproportionate effort to provide the information directly to individuals.
  • Solely automated decisions with legal or similar effect. The draft guidance sets out specific steps that should be taken to fulfill rights related to automated decision making. For example, the system requirements needed to allow meaningful human review should be taken into account from the design phase onwards, and human reviewers should receive appropriate training and support, together with the authority to override the AI system’s decision if necessary (see the sketch below).
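One hypothetical shape for that design-phase requirement is sketched below: automated decisions whose confidence falls under a threshold are routed to a reviewer who can override the model. The threshold, field names and routing logic are illustrative assumptions, not steps the draft guidance specifies.

# Illustrative sketch of meaningful human review designed in from the
# start: low-confidence automated decisions are routed to a reviewer
# who has the authority to override the model.
from dataclasses import dataclass
from typing import Optional

REVIEW_THRESHOLD = 0.90  # hypothetical: decisions below this need a human

@dataclass
class Decision:
    outcome: str                       # e.g. "approve" / "decline"
    confidence: float                  # model confidence in the outcome
    reviewed_by: Optional[str] = None
    overridden: bool = False

def decide(model_outcome, confidence, human_review):
    decision = Decision(model_outcome, confidence)
    if confidence < REVIEW_THRESHOLD:
        reviewed = human_review(model_outcome)
        decision.reviewed_by = "human reviewer"
        if reviewed != model_outcome:
            decision.outcome = reviewed   # the reviewer overrides the model
            decision.overridden = True
    return decision

# A reviewer disagrees with a marginal automated decline.
result = decide("decline", 0.72, human_review=lambda outcome: "approve")
print(result.outcome, result.overridden)  # approve True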

Also Read: How to Make Data Protection Addendum Template in Simple Way

What should organisations do now?

While the draft guidance is not yet in final form, it nevertheless provides an indication of the ICO’s current thinking and the steps it will expect organisations to take to mitigate the privacy risks AI presents.

It will therefore be important to follow the development of the draft guidance carefully. In addition, at this stage it would be prudent to review how you currently develop and deploy AI systems and how you process personal data in this context to help you prepare for when the draft guidance is finalized. Some practical steps to take at this stage include:

  • Reviewing existing accountability and governance frameworks around your use of AI models, including your current approach to DPIAs in this context. In particular, DPIAs for existing projects or services may need to be conducted or updated, and risk mitigation measures identified, documented and implemented;
  • Considering your current approach to developing, training and deploying AI models and how you will demonstrate compliance with the core data protection principles, particularly the requirements of fairness, lawfulness, transparency, and data minimization;
  • Reviewing the security measures you currently employ to protect AI systems, and updating these if necessary depending on the level of risk; and
  • Ensuring you have appropriate policies and processes for addressing data subjects’ rights in the AI context, including in relation to solely automated decision-making.

Also read: ICO Consultation on Draft AI Auditing Framework Guidance for Organizations
