The Moral Dilemma of Automation

by Leanne Elich | 7 Dec 2021 | Business Tools, Sales Psychology


Are we stuck?
Companies all over the world are automating everything from logistics optimisation, internal processes and external customer service to daily operations driven by Artificial Intelligence. Is this because humans aren't up to the job?

With all our frailties, vulnerabilities and risks, are we on the verge of being replaced by machines that exhibit none of these?

Today, we are using technology to make decisions that we humans can't agree upon, deliver messages to avoid difficult conversations and kick-start sales processes to expedite revenue.

Technology may be reprogramming our attitude towards communication.

Is it time to talk about the boundless landscape of artificial intelligence?

Considering the way technology has shaped our habits, we need to address this new frontier and the moral dilemma of automation.

What are the pressing issues we should be concerned about?

1. Unemployment. Where to next?
As we've invented ways to automate jobs and create a superconductor of efficiency, are we at risk of replacing ourselves? Technically speaking, it does seem that way; however, automation has also made room for people to assume more complex roles, moving from physical work to the cognitive labour that characterises strategic and developmental business acumen.

2. Communication. How do machines affect our behaviour and interaction?
Artificially intelligent bots are becoming better and better at modelling conversations and behaviours. However, communication is one of the most human of processes. It is not just about patterns, templates, data and processes; it's about socialising, speaking, touching, listening and using all of our senses to genuinely engage.

Nowhere is this more important in business than in connecting to close a sale or provide customer service. Yet more and more parts of business are becoming automated as we look for ways to increase our reach and market share. At the same time, more and more of us recognise the automated triggers and the outcomes they are designed to deliver.

With the exponential increase in digital communication and the virtual world, never before have people so craved the consistency of human interaction.

People like connecting with people.
People like communicating with people.
People like buying from people.
People like to share experiences with people.

These are the innate characteristics of the human race that cannot be replaced.

3. Artificial Stupidity. Have we thought about mistakes?
Intelligence comes from learning, whether you are human or machine. Learning also comes from mistakes, and as humans we can control the impact they make, whether by introducing a correction, seeking further advice, brainstorming with colleagues or making a split-second decision to 'pull the plug'. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned and that we have a safety net of redundancies.

Remember: garbage in equals garbage out.
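To make that concrete, here is a minimal, purely illustrative Python sketch. The "lead scoring" scenario, the lead sources and every number in it are hypothetical, invented for this example rather than drawn from any real system: the same naive model, trained once on correctly logged records and once on badly logged ones, ends up giving opposite advice.

```python
# Hypothetical illustration of "garbage in, garbage out":
# a naive model that scores lead sources by their historical close rate.

def close_rate(records, source):
    """Naive 'model': the historical close rate for a given lead source."""
    outcomes = [won for s, won in records if s == source]
    return sum(outcomes) / len(outcomes)

# Clean history: (lead_source, closed_deal) pairs logged correctly.
clean = [("referral", 1), ("referral", 1), ("cold_call", 0), ("referral", 1)]

# Garbage history: the same leads, but several outcomes were logged incorrectly.
garbage = [("referral", 0), ("referral", 1), ("cold_call", 1), ("referral", 0)]

print(close_rate(clean, "referral"), close_rate(clean, "cold_call"))
# 1.0 0.0  -> the model says back referrals

print(close_rate(garbage, "referral"), close_rate(garbage, "cold_call"))
# ~0.33 1.0 -> the same model now says back cold calls
```

The point is not the model but the data: no amount of computation recovers a signal that was corrupted on the way in, which is exactly why the safety net of checks and redundancies matters.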

4. Racist Robots. Can AI be biased?
Although artificial intelligence can process information at a speed and scale far beyond that of humans, algorithms cannot always be trusted to be fair and neutral. AI image recognition, for example, is still inconsistent at identifying people, objects and scenes. It can go badly wrong when a camera misreads faces along racial lines, or when software used to predict future criminals shows bias against people of colour.

We shouldn’t forget that AI systems are created by humans who have their own biases, influences and judgments.
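As a simple, hedged illustration of how that bias creeps in, here is an entirely synthetic Python sketch. The groups, risk scores and threshold are invented for this example: a cut-off tuned only on one group's historical data is then applied to another group whose scores were recorded differently, and the errors fall unevenly.

```python
# Entirely synthetic sketch of inherited bias in a risk-scoring rule.
# Each record is a (risk_score, actually_reoffended) pair.

group_a = [(0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1)]
group_b = [(0.5, 0), (0.6, 0), (0.7, 1), (0.9, 1)]  # non-offenders scored higher here

THRESHOLD = 0.45  # chosen because it separates Group A's data perfectly

def false_positive_rate(records, threshold):
    """Share of people who did NOT reoffend but are still flagged as high risk."""
    non_offenders = [score for score, reoffended in records if reoffended == 0]
    flagged = [score for score in non_offenders if score >= threshold]
    return len(flagged) / len(non_offenders)

print(false_positive_rate(group_a, THRESHOLD))  # 0.0 -> looks 'fair' when tested on Group A
print(false_positive_rate(group_b, THRESHOLD))  # 1.0 -> every innocent person in Group B is flagged
```

The algorithm does exactly what it was told; the unfairness was already baked into the data and into the judgments of the humans who framed the problem.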

5. Evil genius. How do we protect ourselves?
There will always be a risk of artificial intelligence getting into the wrong hands, so that what was intentionally designed for good ends up being used for evil.

Undoubtedly, the machine itself wouldn't be the instigator of evil but rather the mechanism of it. After enough data input and number crunching, an AI system could spit out a formula that does, in fact, eliminate cancer, but by killing everyone on the planet.

It would certainly have achieved its goal, but with detrimental consequences.

It seems terribly convenient to install a machine to mimic the behaviours and interactions of a human being. While automation certainly has its place and technological progress means a better life for everyone, its responsible use and implementation are up to us.

So the question remains: how do we engineer human morality into automation?

ABOUT THE AUTHOR

Dr Leanne Elich, PhD, GAICD. CEO, Leanne Elich Consulting

Leanne Elich is a Sales Psychology & Business Strategist with expertise in Neuroscience and Behavioural Science sales techniques. Leanne and her team work with individuals, teams and organisations to create powerful sales strategies to ethically influence consumer behaviour and create visibility in a busy market.