The Australian Government has released a discussion paper (“Discussion Paper”) regarding the interaction between the Australian Consumer Law (“ACL”) and Artificial Intelligence (“AI”)-enabled goods and services, seeking public feedback from 15 October 2024 to 12 November 2024.
The Discussion Paper seeks feedback on the following:
- The effectiveness of the ACL in protecting Australian consumers and businesses from potential consumer law risks associated with AI-enabled goods and services;
- The application of the ACL principles to AI-enabled goods and services;
- The remedies available to consumers of AI-enabled goods and services under the ACL; and
- The procedure for apportioning liability among manufacturers and suppliers of AI-enabled goods and services.
A. Current Position Under Australian Law
Currently, the ACL provides for:
- General protections prohibiting misleading or deceptive conduct (sections 18 and 19), unconscionable conduct (sections 20 to 22) and unfair contract terms (sections 23 to 28A); as well as
- Specific protections addressing certain defined business practices, such as prohibitions on false or misleading representations (sections 4, 29 to 38) and unsolicited supplies (sections 39 to 43), as well as consumer guarantees (sections 51 to 68) and manufacturer liability for goods with safety defects (sections 138 to 150).
The ACL provides remedies such as repair, replacement, refund, resupply, or contract cancellation for goods or services that do not meet consumer guarantees. Consumers typically approach suppliers, who have primary responsibility for these remedies. Suppliers can seek indemnities from manufacturers if goods fail to meet quality standards.
As its principles and provisions are technology-neutral, the ACL has thus far remained applicable to goods and services developed since its enactment.
B. Challenges with AI-Enabled Goods and Services
The complexity and unique algorithms present in AI can make it difficult to identify failures and responsible parties. Self-learning AI can act unpredictably, complicating access to remedies. For instance, the introduction of AI into the causal chain between a safety defect and the loss or damage suffered may complicate the determination of whether a consumer can successfully bring an action against the manufacturer for safety defects in an AI-enabled good. An example provided in the Discussion Paper relates to an AI-enabled vacuum cleaner that malfunctions through a purported failure in its speech-recognition voice command AI technology. To establish the causal link between the vacuum’s safety defect and the loss suffered, the consumer may be required to demonstrate that the manufacturer’s training data sets were incompatible with the situations in which the product was to be deployed. This may prove difficult, however, as the consumer would not have access to those training data sets or to information regarding how testing was conducted.
Next, AI may, of its own volition, provide or suggest individualised content to guide user preferences and purchasing decisions. AI may also cause AI-enabled goods and services to evolve into goods and services that differ from what the manufacturer or supplier originally envisioned. In such scenarios, care must be taken to ensure that AI-enabled functions in goods or services do not provide false, misleading or deceptive content to consumers. An example provided in the Discussion Paper relates to Air Canada’s airline chatbot “inaccurately explaining the airline’s bereavement travel policy”. In such an instance, questions arise as to how to identify the root cause of the failure and determine the responsible parties.
C. Solutions Considered
The Discussion Paper considers and proposes various solutions to mitigate the risks associated with AI-enabled goods and services.
For digital products that raise concerns regarding the safety and operability of the software provided, additional consumer guarantees relating to “cyber security, interoperability and [software updates]” may be required.
The Australian Government is considering mandatory guardrails for AI in high-risk settings. Proposed measures include accountability processes, risk management, testing, transparency, and record-keeping obligations to prevent and mitigate harms from AI deployment.
Greater oversight may also be required to ensure compliance with product safety laws. To this end, pending the introduction of mandatory guardrails for AI safety, Australian organisations may adopt the Voluntary AI Safety Standard.
The Discussion Paper also invites respondents to raise any other pertinent issues and developments relating to AI-enabled goods and services and the ACL for consideration.
Members of the public who are interested in responding to the Discussion Paper may do so by email or post.
Click here to find out more about the Discussion Paper.
Click here to find out more about the ACL.
The information provided above does not, and is not intended to, constitute legal advice pertaining to the Discussion Paper and the ACL; the information, content, and materials set out above are based on our reading of the Discussion Paper and the ACL and are for general informational purposes only.