Hey Siri, how persuasive is AI?

People are increasingly using information provided by artificial intelligence – such as virtual assistants like Apple’s Siri – but is there a difference in the way we react to messages from AI and those from humans? The answer to that question has implications for the way business uses AI and how it’s regulated.
Marketing researcher Dr TaeWoo Kim of UTS Business School and colleague Adam Duhachek of the University of Illinois have run a number of experiments to tease out the differences.
“We found AI messages are more persuasive when they highlight ‘how’ an action should be performed, rather than ‘why’,” Dr Kim says. “For example, people were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen.”
In another experiment, the researchers considered whether people ascribe “free will” to AI. Dr Kim explains: “Someone is given $100 and offers to split it with you. They’ll get $80 and you’ll get $20. If you reject this offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair.”
In this latest study, however, the researchers found people were much more likely to accept this “unfair” offer if proposed by AI. “This is because we don’t think an AI developed to serve humans has a malicious intent to exploit us – it’s just an algorithm,” Dr Kim says.
The fact that people will accept unfair offers from AI concerns him. “It might mean this phenomenon could be used maliciously,” he says. For example, a mortgage company might try to charge unfairly high interest rates by framing the decision as being calculated by an algorithm.
In other work, the researchers found people tend to disclose personal information more willingly to AI technology than to a human.
“We told participants to imagine they’re at the doctor for a urinary tract infection. Half spoke to a human doctor, and half to an AI doctor. We told them the doctor would ask a few questions to find the best treatment and it was up to the individual how much personal information they provided,” Dr Kim says.
Participants disclosed more personal information to the AI doctor than the human one, including answers to potentially embarrassing questions. “We found this was because people don’t think AI judges our behaviour,” Dr Kim says.
However, the researchers found that giving AI human-like features or a human name could change the way we react – AI could then better persuade people on questions of “why”, those “unfair” offers were less likely to be accepted, and people started to feel judged and disclose less information.
“We are likely to see more and different types of AI and robots in future. They might cook, serve, sell us cars, tend to us at the hospital and even sit on a dining table as a dating partner,” Dr Kim says. “It’s important to understand how AI influences our decisions, so we can regulate AI to protect ourselves from possible harms.”
TaeWoo Kim & Adam Duhachek, “Artificial Intelligence and Persuasion: A Construal-Level Account”, Psychological Science, doi.org/10.1177/0956797620904985
UTS Experts: TaeWoo Kim