Microsoft releases guidelines for ‘responsible’ conversational AI

Photo Credit: Reuters

Redmond-based tech giant Microsoft has released guidelines for responsible conversational artificial intelligence (AI), the company said in a blog post. Microsoft has its own conversational AI platform, Cortana, which competes with Amazon's Alexa, Google Assistant and Apple's Siri.

The firm said the initiative came after many of its partners, clients and customers who were building such conversational platforms sought Microsoft's advice on how to maintain user trust even as virtual assistant tools collect large amounts of customer information.

"The lessons we have learned from those experiences, and from our more recent work with tools such as Cortana and Zo, have helped us shape these guidelines, which we follow in our own efforts to develop responsible and trusted bot," said Lili Cheng, corporate vice-president for conversational AI at Microsoft. She also added that the company has incorporated the learnings from its own cross-company work focused on responsible AI and by listening to its customers and partners.

Cheng has been working on conversational AI at Microsoft since 1995, when the company developed Comic Chat, a graphical chat service embedded in an early version of its Internet Explorer browser.

The guidelines take into account the platform's potential to affect people in "consequential ways", such as helping them navigate information related to employment, finances, physical health and mental well-being. In such situations, the guidelines suggest pausing to make sure that humans are involved to provide judgment, expertise and empathy to those affected.

Microsoft also provides tools such as offensive-text classifiers to protect bots from abuse. It also offers guidance on building tracing capabilities into bots, which help determine the cause of errors and maintain reliability.
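
To illustrate, here is a minimal Python sketch of those two safeguards: an offensive-text check gating a bot's replies, and per-turn trace logging to help diagnose errors. The classifier below is a hypothetical keyword stand-in, not Microsoft's actual tooling; a production bot would call a trained model or a moderation service instead.

    # Sketch: offensive-text gate plus trace logging in a bot's message handler.
    # The classifier and handler names are hypothetical, for illustration only.
    import logging
    import uuid

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("bot.trace")

    BLOCKLIST = {"offensive", "abusive"}  # stand-in for a trained classifier

    def is_offensive(text: str) -> bool:
        # A real system would score text with a model, not a keyword check.
        return any(word in text.lower() for word in BLOCKLIST)

    def handle_message(text: str) -> str:
        trace_id = uuid.uuid4().hex[:8]  # correlates this turn in the logs
        log.info("[%s] received: %r", trace_id, text)
        if is_offensive(text):
            log.info("[%s] blocked by offensive-text filter", trace_id)
            return "Sorry, I can't respond to that."
        reply = f"You said: {text}"  # placeholder for the bot's real logic
        log.info("[%s] replied: %r", trace_id, reply)
        return reply

    print(handle_message("hello there"))
    print(handle_message("something offensive"))

The trace identifier logged on every turn is what makes errors traceable after the fact: each log line for a conversation turn can be correlated when diagnosing a failure.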

"The guidelines encourage companies and organisations to stop and think about how their bot will be used and take the steps necessary to prevent abuse," Cheng wrote, adding that if people did not trust the technology, they would not use it. It also suggests that the organisations should gain customers' trust with transparency.

Microsoft said that a bot designed to do a certain thing should restrict itself to doing just that. "A bot designed to take pizza orders, for example, should avoid engaging on sensitive topics such as race, gender, religion and politics," Cheng said.
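
As a rough illustration of that scoping advice, the sketch below keeps a hypothetical pizza-ordering bot inside its domain by allowlisting on-topic intents and deflecting everything else. The keyword-based intent detection is purely illustrative; a real bot would use a trained intent classifier.

    # Sketch: restricting a bot to its designed purpose via an intent allowlist.
    ALLOWED_TOPICS = {"order", "pizza", "topping", "delivery", "menu"}

    def detect_intent(text: str) -> str:
        # Toy intent detection: on-topic if any allowlisted word appears.
        words = set(text.lower().split())
        return "pizza_order" if words & ALLOWED_TOPICS else "out_of_scope"

    def respond(text: str) -> str:
        if detect_intent(text) == "pizza_order":
            return "Great, let's get your pizza order started."
        # Deflect rather than engage on topics outside the bot's purpose.
        return "I can only help with pizza orders."

    print(respond("I'd like to order a pizza with extra toppings"))
    print(respond("What do you think about politics?"))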

