Ethical aspects of artificial intelligence – what employers should consider

Germany

This article describes the ethical aspects that employers should consider when using AI in their company.

AI technologies are not only impacting our everyday lives (often more than we even realise), but have also already left their mark on the working environment. They aid decision-making, improve productivity, and enable unprecedented efficiency. As is so often the case, though, great progress also brings with it new challenges. These include using artificial intelligence (AI) responsibly while taking ethical factors into account. For employers in particular, this is crucial for increasing acceptance and maintaining a trusting relationship with employees. 

There are no uniform principles for the ethical evaluation of AI, but numerous initiatives are contributing to the social and scientific ethics debate. For example, the EU Commission's 2019 Ethics Guidelines for Trustworthy AI declared the four basic principles of "respect for human autonomy", "prevention of harm", "fairness" and "explicability" to be the general basis of AI ethics. In its multidisciplinary expert report "Humans and Machines – Challenges of Artificial Intelligence" of 20 March 2023, the German Ethics Council commented on the ethical issues in the relationship between humans and machines, stressing that it is fundamental that human intelligence, self-determination, and responsibility are not undermined by the use of AI. More recently, leading US tech companies such as Google and Microsoft agreed on voluntary safeguards to mitigate the risks posed by artificial intelligence.

Below we take a closer look at some key principles that – from a practitioner's point of view – can be critical to ensuring successful application of AI in companies.

Why is a risk assessment essential?

Before companies start implementing AI solutions in their work processes, it is essential that they carry out a comprehensive risk assessment. This helps identify and, where possible, minimise potential negative effects on both the company and the individual employees. Employers should therefore consider risk assessment as an integral part of any AI project and allocate sufficient time and resources to this process. By doing this, they can not only help minimise potential negative impacts, but can also ensure a smooth transition to the new technologies.

How can transparency be ensured?

Due to the "black box phenomenon", there is a risk that AI systems and their decisions may not be comprehensible and thus not transparent. Both the function and the processing methods should be disclosed and understandable so that people affected by AI decisions can understand those decisions. 

If employers want to use an AI-based candidate selection tool, for example, they should know the criteria the system uses for selection and how it assesses them. This not only builds trust, but also allows decisions to be understood and challenged if necessary.

Employees working with AI systems should understand their functions and decision-making processes. For example, if a department uses an AI-driven analytics tool for sales forecasting, staff should know what data the system uses to create the forecast and how it analyses them.
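As a purely illustrative sketch (not a prescribed method), the following Python snippet shows one way such transparency could be supported in practice: a hypothetical candidate-selection model is probed with permutation importance so that HR staff can see which criteria actually drive its decisions. All feature names and data are invented for demonstration purposes.

```python
# Illustrative sketch only: probing a hypothetical candidate-selection model
# with permutation importance to make its selection criteria visible.
# Feature names and data are invented for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["years_experience", "skill_test_score", "certifications"]

# Synthetic data standing in for historical hiring decisions
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Report how strongly each criterion influences the model's decisions,
# so that HR staff and affected candidates can understand and challenge them.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```

Documenting such reports regularly makes it easier to explain individual decisions to affected candidates and to spot criteria that should not play a role in the first place.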

What makes the use of AI fair?

AI systems must be free of prejudice and discrimination. But the challenge goes deeper than that: the AI system should also actively help to promote equal opportunities and fairness, for example in the recruitment process.

On a technical level, this requires AI systems to be designed with care and diligence. A central aspect of this is training the AI, which requires large amounts of data, usually drawn from the real world. It is therefore extremely important that the data used are themselves free of prejudice and non-discriminatory. While it may be virtually impossible to find completely neutral data, bias in the training data can and must be minimised. For example, a selection tool should not be trained on a disproportionate amount of data from male applicants; otherwise the AI could learn to favour men in the selection process.
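The following minimal sketch illustrates this point: it checks whether a hypothetical applicant dataset over-represents one gender and balances the groups before the data are used for training. The column names and figures are assumptions made purely for illustration.

```python
# Illustrative sketch only: checking whether training data for a hypothetical
# hiring model over-represent one gender, and down-sampling to balance them.
# Column names and figures are invented for demonstration purposes.
import pandas as pd

applicants = pd.DataFrame({
    "gender": ["m"] * 700 + ["f"] * 300,   # disproportionately male sample
    "hired":  [1, 0] * 350 + [1, 0] * 150,
})

counts = applicants["gender"].value_counts()
print(counts)  # reveals the imbalance before training

# Down-sample each group to the size of the smallest group so that
# no gender dominates the training data.
n_min = counts.min()
balanced = applicants.groupby("gender").sample(n=n_min, random_state=0)
print(balanced["gender"].value_counts())
```

Balancing the raw data is only one of several possible measures; depending on the use case, reweighting or dedicated fairness metrics may be more appropriate.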

What does the principle of responsibility mean?

It is fundamentally important that there is always human responsibility for decisions made by AI systems. This is a key element in taking a responsible approach to AI solutions. 

To ensure responsibility, it is necessary to establish clear communication and reporting processes that make AI decision-making and operations transparent. This should include not only the direct effects of a decision, but also its long-term consequences. It may be helpful to conduct continuous checks to ensure that the AI system is working as it should and to identify any unexpected or undesirable results.
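Such continuous checks can start very simply, for example by periodically comparing the share of positive AI decisions in the current period with a reference period and escalating to a human review when the deviation exceeds a defined tolerance. The sketch below uses an invented threshold and invented decision data purely to illustrate the idea.

```python
# Illustrative sketch only: a periodic check comparing the share of positive
# AI decisions in the current period against a reference period.
# The threshold and data are invented assumptions for demonstration purposes.
import numpy as np

def decision_rate_shift(reference: np.ndarray, current: np.ndarray) -> float:
    """Absolute difference in the share of positive decisions."""
    return abs(float(current.mean()) - float(reference.mean()))

reference_decisions = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])  # e.g. last quarter
current_decisions   = np.array([1, 1, 1, 0, 1, 1, 1, 0, 1, 1])  # e.g. this quarter

ALERT_THRESHOLD = 0.2  # assumed tolerance; to be defined per use case
shift = decision_rate_shift(reference_decisions, current_decisions)
if shift > ALERT_THRESHOLD:
    print(f"Human review required: decision rate shifted by {shift:.0%}")
```

The appropriate metrics and thresholds depend on the specific system and should be defined as part of the risk assessment described above.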

How can human autonomy be preserved?

AI systems should be designed to respect and not interfere with human autonomy. AI should be seen as a tool that serves people, not the other way around. Employees using AI-based tools for data analysis should therefore be able to critically examine the conclusions proposed by the AI and ultimately make their own decisions.

This autonomy should also be embedded in the corporate culture. Employees should be aware that although AI can help improve efficiency and productivity, it is ultimately people who are responsible for using these technologies. Training and further education can help employees better understand both how AI works and their own role in the context of these technologies.

Continuous learning and improvement processes

The effective use of AI also requires a high level of technical know-how. In addition to recruiting suitable employees with the necessary expertise, existing employees should be equipped to work with AI through targeted training and further education measures. This can also help to reduce fears and uncertainties, while promoting the acceptance and effective use of AI technology.

How can companies use AI ethically?

In order to fully comply with the ethical requirements for using AI, it may also make sense to establish internal structures in the area of corporate digital responsibility (CDR). These can help to develop a company-wide philosophy and strategy for handling AI and to review and adapt it continuously. This way, ethical issues and challenges that arise in the context of AI can be addressed systematically.

Conclusion

Taking ethical issues into account when implementing and using AI is key for employers. This includes conducting risk assessments, ensuring transparency and fairness, maintaining human responsibility, and respecting human autonomy. In addition, a continuous learning and improvement process should be implemented to promote the understanding and effective use of AI. By establishing internal CDR structures, companies can develop and implement a consistent and ethically responsible AI strategy. Last but not least, responsible use of AI systems strengthens employees' trust in the organisation, which can act as a competitive advantage in times of skills shortages.