Intimate robot interactions require new ethical rules

The television series Humans, starring Gemma Chan (above left), explores the complexity of relationships between advanced robots and humans. A new paper says this area requires new ethics. | Photo: Still from the series.

A new paper calls for fresh ethics around human-robot interactions as artificial intelligence is asked to take on more complex roles.

A group of researchers from Australia, Denmark and the UK argue that AI systems are increasingly acting as tutors, mental health providers and romantic partners.

“This increasing ubiquity requires a careful consideration of the ethics of AI to ensure that human interests and welfare are protected,” the paper, published in The Conversation, argues.

“How you interact with your doctor differs from how you interact with your romantic partner or your boss.

“What is deemed appropriate behaviour of a parent towards her child, for instance, differs from what is appropriate between business colleagues. In the same way, appropriate behaviour for an AI system depends upon whether that system is acting as a tutor, a health care provider, or a love interest.”

The paper was written by Associate Director Brian D. Earp of the University of Oxford, Assistant Professor Sebastian Porsdam Mann of the University of Copenhagen, and Associate Professor Simon Laham of the University of Melbourne.

They argue that, as AI systems take on more social roles, questions need to be asked about how the relational context in which humans interact with them impacts ethical considerations.

“When a chatbot insists upon changing the subject after its human interaction partner reports feeling depressed, the appropriateness of this action hinges in part on the relational context of the exchange,” the paper said.

“If the chatbot is serving in the role of a friend or romantic partner, then clearly the response is inappropriate – it violates the relational norm of care, which is expected for such relationships.

“If, however, the chatbot is in the role of a tutor or business advisor, then perhaps such a response is reasonable or even professional.”

The paper says developers and designers of AI systems should consider not just abstract ethical questions but relationship-specific ones.

These include questions such as:

  • Is a particular chatbot fulfilling relationship-appropriate functions?
  • Is the mental health chatbot sufficiently responsive to the user’s needs?
  • Is the tutor showing an appropriate balance of care, hierarchy and transaction?

“Users of AI systems should be aware of potential vulnerabilities tied to AI use in particular relational contexts,” the paper says.

“Becoming emotionally dependent upon a chatbot in a caring context, for example, could be bad news if the AI system cannot sufficiently deliver on the caring function.”

The full article is on The Conversation website.