
9 Ethical Issues Associated with AI at Work

The integration of Artificial Intelligence (AI) into the workplace poses serious ethical dilemmas for our generation. As AI drives transformation across the globe, we highlight the key ethical questions facing the developers, governments, and businesses tasked with integrating smart technologies.

1. What happens after job obsolescence?

The biggest occupational dilemma associated with AI is job obsolescence. Both blue-collar and white-collar jobs are already being replaced by intelligent systems. The key question remains: what do we expect workers to do with their lives when smart machines take over their jobs?

2. How will wealth created by AI be distributed amongst the population?

AI-driven automation stands to make companies and shareholders very wealthy. In a bleak future, adoption of these powerful technologies could create mass global inequality between the few and the many. So how will the wealth created by AI be distributed amongst a population that is also expected to be the consumers of goods and services?

3. How will AI affect our interactions and behaviour at work?

AI is ushering in a new age characterised by humans working in collaboration with smart technologies. This has already heightened levels of anxiety and fear amongst the workforce. Such emotions have been attributed to loss of control, impending job loss, disruption of human relationships, and the loss of human empathy. How do we face a situation where smart robots can potentially dictate the actions and behaviour of the human workforce?

4. How do we guard against unintended consequences of AI?

As AI infiltrates more of our working lives, it is important to recognise not only the benefits but also the unintended consequences. The concerns surrounding AI include machine bias, loss of empathy, loss of critical thinking, loss of human control, and the creation of new workplace hazards. Leveraging AI requires serious conversations about these pitfalls before the genie is let out of the bottle.

5. How do we guard against machine mistakes at work?

A commonly touted benefit of AI is its ability to make better decisions. This claim assumes that smart systems work flawlessly and pose no physical danger to humans. The alternative has serious implications for the nature of work, particularly where machines and humans work side by side. How do we guard against artificial stupidity at work? And who takes responsibility for any injury or loss caused by a smart machine's mistakes?

6. How do we eliminate bias by AI?

The idea of “racist robots” has already been mooted by the media. If we feed smart machines information that reflects our own prejudices, we should not be surprised if they mimic our worst failings back to us. How do we prevent software from mistakenly singling out certain demographics in important areas such as justice, finance, education, and work?

7. How do we stop AI sabotage?

As with any digital system, AI is written in code, and that code can be manipulated to cause adverse effects. An AI malfunction could have far-reaching consequences for both a company and its employees. So how do AI developers secure their systems against data breaches by rogue employees and cyber attacks by external saboteurs?

8. Will we stay in control of complex intelligent systems?

At this point most people have heard of the AI singularity and the threat it poses. Many experts believe that advanced AI will seek to avoid human intervention and create situations where it cannot be stopped. So how can humans keep the upper hand over systems that have the potential to be thousands of times smarter than we are?

9. Will robots have rights at work?

Many experts believe that by 2025 AI-driven robots will perform half of all productive functions in the workplace. This has not gone unnoticed by the working population, and there have already been recorded attacks on autonomous delivery vehicles and autonomous police vehicles. Whilst these attacks currently carry no specific legal consequence, more advanced AI with human-like intelligence may need legal standards to protect it against violence, discrimination, and exploitation.


Garry McGauran is an author and editor at Emerging Tech Safety. He has 17 years' experience as a prototype risk assessor, design safety consultant, and academic research advisor, as well as heading up his own drone inspection service. He is a freelance safety consultant serving the tech, industrial, and utility sectors in Ireland and the UK.


Do you want to:

Understand how emerging technologies are affecting modern work?

Learn about the effects of emerging risks on worker safety and wellbeing?

Be aware of the technologies that are improving risk management and occupational H&S?

Keep up-to-date with work trends driven by climate change and other global issues?

Consider subscribing to our newsletter: