The Rise of AI: Are your data privacy systems future-fit?

South Africa

Since the dawn of the new millennium, technological development has progressed at an unprecedented pace, changing the way we live and the way we think. Through the emergence of technologies such as the internet, cloud computing and smartphones (amongst others), even the most widely used conventional concepts that define the way we live become relics of the past in a matter of years and, in some instances, months.

The development and prevalence of disruptive, cutting-edge technologies and tools, which are broad-based in their scope, have transformed the manner in which organisations carry out their commercial operations. By optimising certain technologies and tools, organisations are able to (amongst other things):

  • automate certain back-office functions such as accounting, payroll and record keeping, which significantly improves efficiency;
  • create secure environments for maintaining sensitive business and/or consumer information; and
  • aggregate, analyse and process data to assist organisations in exploiting novel communication tools (such as social media) to increase sales and profitability.

Even with the prevalence and importance of the abovementioned tools, there is arguably no technological development that has, in recent times, garnered more attention than the emergence of complex Artificial Intelligence (“AI”) systems.

Subject-matter specialists view the emergence of AI systems as an inherent byproduct of significantly improved computing power and the increasing availability of data, which together drive the enhanced analytical capabilities of AI systems. Notwithstanding the foregoing, for organisations, the rapid development and integration of AI systems into their business operations pose numerous challenges, specifically in the management and use of data/personal information belonging to employees, customers and suppliers, as discussed in more detail below.

What is AI? 

Broadly speaking, AI refers to systems that exhibit intelligent behaviour and are capable of rapidly analysing various activities and environments in order to make independent decisions in pursuit of a specific objective. Conventional AI systems are characterised by their ability to perform activities typically associated with the human mind, such as perceiving, learning, interacting with an environment, problem solving and, in certain instances, exhibiting creativity.

For many organisations, the technical attributes intrinsic to AI systems offer a myriad of benefits, transforming complex business processes into streamlined solutions that require little to no human intervention. Despite these benefits, the compliance obligations placed on organisations with respect to the processing and management of data/personal information present numerous legal risks and challenges, which must be considered in detail prior to the deployment of AI systems and solutions, as discussed in greater detail below.

AI and South African Data Privacy legislation

As a starting point, it is imperative that local organisations understand the two broad categories of AI system that bear relevance to the processing and management of the personal information/data they hold when integrating and deploying AI solutions. Generative AI models use algorithms to generate content based on the analysis of patterns in data, and are capable of learning to improve their own output. Conversely, applied AI models use machine-learning algorithms to analyse data and make predictions and/or decisions based on the data processed.

What becomes apparent from the examples of AI models set out above is that data plays an integral role in defining the operational parameters of AI systems and realising their benefits: it is through the processing and analysis of such data that organisations are able to undertake sophisticated analysis of large volumes of information to the benefit of their commercial operations. In this regard, it is important to note that South Africa has no comprehensive legislative framework regulating the integration and use of AI and machine-learning technologies. There are, however, provisions in the Protection of Personal Information Act No. 4 of 2013 (“POPI”) that impose certain obligations on the manner in which organisations are able to utilise and deploy AI systems.

While POPI does not explicitly address the comprehensive operational parameters and capabilities of modern AI systems, it does regulate the processing of data/personal information by automated means. Section 71(1) of POPI provides that a data subject may not be subjected to a decision that results in legal consequences for them, where that decision is based solely on the automated processing of personal information intended to provide a profile of the data subject. As an example, the use of automated decision-making tools to perform a credit assessment establishing the creditworthiness of a given data subject is generally not permitted in terms of section 71 of POPI, unless the decision:

  • has been taken in connection with the conclusion or execution of a contract, and—
    • the request of the data subject in terms of the contract has been met; or
    • appropriate measures have been taken to protect the data subject’s legitimate interests; or
  • is governed by a law or code of conduct in which appropriate measures are specified for protecting the legitimate interests of data subjects. 

In addition to the above, the South African Information Regulator has noted its concerns with the use of AI platforms, and has acknowledged that it still needs to better understand the technical issues associated with such platforms if it is to ensure that the personal information of data subjects is not compromised. As such, if organisations do not implement appropriate data protection compliance measures and programmes in conjunction with the integration and deployment of AI systems, they risk the issuance of enforcement notices, sanctions and fines for deploying AI systems that do not comply with the prescripts of POPI. In this regard, and to assist organisations in deploying AI systems that comply with the provisions of POPI, organisations should consider taking the following measures:

  • conducting a POPI impact assessment on the AI-related systems that they utilise, to ensure that such processing activities are carried out within the prescripts and parameters provided in POPI; or
  • preparing and delivering to the Information Regulator a prior authorisation application in terms of section 57 of POPI, which generally requires responsible parties to obtain the prior authorisation of the Information Regulator if they intend to process any unique identifiers of data subjects (i) for a purpose other than the one intended at the collection of such personal information; and (ii) with the aim of linking the information with information processed by other responsible parties.

While the immense potential of AI systems may present meaningful opportunities for local organisations to optimise and/or diversify their commercial operations, that potential cannot be realised without the implementation of appropriate controls to ensure compliance with POPI and any other governing legislation that may apply to AI systems. It is, accordingly, of paramount importance that organisations adopt a pragmatic and meticulous approach in assessing the risks associated with the integration of AI systems and AI-related solutions, to ensure that such systems and solutions yield the intended results without exposing the organisation to further regulatory risk.