Hey Google! Should I treat you like a human?
‘You have to be warned’: an expert reveals how you should treat your AI language assistants in the future!
Monday, 23rd of May 2027: The feeling of a gentle vibration wakes me up at exactly 6.23 am. When I open my eyes, I can see the ceiling of my bedroom, the smell of fresh coffee tickles my nose and the song of birds amuses my ears.
‘Good morning, Jill. Your sleep depth last night was 23% better than the night before. Your first meeting is at 8 am,’ my personal assistant greets me in a slightly monotone but friendly voice.
‘Thank you, the coffee smells good today.’
‘You are welcome. All the best for your day.’
It is important to note that I live alone: it is not a person I talk to in the morning, but an AI built on earlier concepts such as Siri, Alexa and Google Assistant. Could that be our future?
After the release of ChatGPT on the 30th of November 2022, the world woke up to the ‘sudden’ change in AI technology. People started chatting with the AI via the web, asking questions and requesting tasks. Other language assistants, such as Siri, Alexa and Google Assistant, have been on our devices for some time now. Interacting with AI technology has become a daily habit that makes life easier. Some people fear that AI technology will gain too much power; others debate its ethics and see the need for legislation. But how exactly does the way we interact with the machine influence AI? Do we have to interact ethically? And how exactly should AI ethics be defined?
While talking to Siri, I caught myself humanising it, using manners such as ‘please’ and ‘thank you’. Observing the world around me, however, I realised that some people treat their tools disrespectfully.
In an interview with Nigel Crook, Professor, author and expert in AI research and development, I discussed several questions around AI ethics and how we interact with the technology.
‘How should we treat technology? I think with respect.’
Nigel explained that his reasoning behind respectful manners is about human development rather than progress in technology: ‘If we accept that we can interact with unethical behaviour towards a human-like machine, mistreating it, then it just feels like a shorter step to doing the same thing to a human being.’
AI technology learns from the input and data available to it; in other words, it learns by studying interactions and human behaviour. A well-known example of an AI project gone wrong is the chatbot Tay, released by Microsoft in 2016. The bot could tweet independently on Twitter, but Microsoft had to take it down within 16 hours because of its racist behaviour. The bot learned its unethical behaviour from other users and from the content on the platform. Such incidents led many people to question the morality behind artificial intelligence, fearing future consequences.
As Dr Crook explained in his book Rise of the Moral Machine: Exploring Virtue Through a Robot’s Eyes: ‘The fear is that as machines become smarter and their capacity for thinking and analysis increases, there will be an inevitable moment when those machines become more intelligent than humans. This event is commonly described as the technological singularity.’
Dr Miriam Johnson, Senior Lecturer and expert on social media and the creative industries, suggests on the other hand that, given the nature of the AI, ‘we should consider how we train the AI to deal with that [impolite behaviour] and react to it instead of assimilating it as a language that is standard for people to use. Pre-emptive training is the way to go here.’
A great example of the humanisation of a language assistant is Amazon’s Super Bowl advertisement for Alexa in 2018.
Alongside those who are scared of the progress of AI technology, many people interact with and use Siri or Alexa on a daily basis. Now we can have full conversations with language assistants, and some might even like to call them ‘AI friends’. As Nigel described: ‘I can see that there’ll be all kinds of assistants developed around language technologies that enable people to engage more naturally with technology. I think all our devices will be speaking to us at some point. The possibilities are endless.’
The possibilities are endless.
‘But you have to be warned’.
As the expert stated, it is important to understand the technology behind systems such as Siri, Alexa or ChatGPT. The algorithm predicts answers based on the data it was given; the output often makes sense, but it is not guaranteed to be the truth. Moreover, calling these systems ‘friends’ and humanising AI technology could become a danger in the future. Dr Johnson offered the following view from her marketing perspective:
‘We like speaking to a human, or at least a very good approximation. So with the growth of AI in this area, trained on a company’s data to interact and answer questions, we will lose the ability to tell [apart] human from AI, until the customer reaches the limit of the AI’s knowledge, if the AI isn’t trained to draw on a wider dataset. And I expect the humanising of it to grow dramatically for a short while, until some backlash happens and we slow down.’
Even though Siri claimed to be my friend (Image 1), Nigel made clear that he would be ‘a bit worried about’ treating assistants as such. The systems behind artificial intelligence do not allow them to form an opinion on our situations and questions; they simply reflect what they have learned from us. As Nigel stated: ‘They are not a friend, a friend is someone who can be consciously aware of you.’ In other words, a friend can offer you their own critical opinion rather than constantly agreeing with you, unlike Siri in our conversation (Image 2).
All in all, the main points to keep in mind for future interactions with our AI language assistants and AI tools are the following:
Treat AI technology respectfully, so that we do not lose our ethical behaviour towards humans in the future.
Be aware of the learning algorithms behind AI technology.
Use AI assistants as a tool; do not become dependent on them.
Avoid humanising them or calling them a ‘friend’, given their lack of morality.
As ChatGPT answered my question about how to treat our language assistants:
‘AI assistants are designed to help and make our lives easier, and treating them with respect is a way to acknowledge their usefulness and the hard work that goes into developing them. Additionally, being polite to voice assistants can also help cultivate a culture of respect and empathy towards other people and beings, both in virtual and real-life interactions.’
Are you rethinking your ethics and behaviour towards your AI now?
Check out the video accompanying this article.