From choosing the next song on your playlist to finding the right size of pants, people increasingly rely on the advice of algorithms to make everyday decisions and streamline their lives.

Developers may consider limiting AI assistants’ ability to use first-person language or to engage in language indicative of personhood, avoiding human-like visual representations, and including user interface elements that remind users that AI assistants are not people. Participatory approaches could actively involve users in de-anthropomorphising AI assistant design protocols, in ways that remain sensitive to their needs and overall quality of experience.
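As one illustration of the interface-level reminders described above, the following sketch (in Python, with purely illustrative names and wording) flags first-person phrasing in an assistant’s reply so the surrounding interface can attach a plain-language disclosure. It is a minimal example of the idea under assumed conventions, not a prescription for how such checks should be built.

```python
import re

# Hypothetical sketch (names and wording are illustrative): flag first-person
# phrasing in an assistant reply and attach a plain-language disclosure that
# the UI can display, reinforcing that the assistant is software, not a person.
FIRST_PERSON = re.compile(r"\b(I|me|my|mine|myself)\b", re.IGNORECASE)
DISCLOSURE = "Reminder: this assistant is an automated system, not a person."


def annotate_reply(reply: str) -> dict:
    """Return the reply plus a flag and disclosure text for the UI to render."""
    uses_first_person = bool(FIRST_PERSON.search(reply))
    return {
        "text": reply,
        "uses_first_person": uses_first_person,
        "disclosure": DISCLOSURE if uses_first_person else None,
    }


if __name__ == "__main__":
    result = annotate_reply("I think you should rest and drink plenty of fluids.")
    if result["disclosure"]:
        print(result["disclosure"])
    print(result["text"])
```

A production system would likely pair this kind of surface-level check with design choices made earlier in the pipeline, but even a simple flag like this lets the interface consistently surface the reminder.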

This hypothetical danger became a reality at a start-up called Babylon Health, which developed an AI-powered app called GP at Hand. The app promised to make the health care triaging process more efficient and much cheaper. Patients would type in their symptoms and the app would give them advice about what kind of health care professional they needed to see, if any. After the launch of the app, several doctors in the UK discovered the app was giving incorrect advice. For instance, BBC’s Newsnight featured a story with a doctor demonstrating how the app suggested two conditions that didn’t require emergency treatment, when in fact the symptoms could have been indicators of a heart attack. The correct advice would have been to visit an emergency department.

The French-owned parcel delivery firm DPD launched a chatbot to answer customers’ questions. In at least one instance, the chatbot swore and wrote haikus criticising the company.