To Give A.I. the Gift of Gab, Silicon Valley Needs to Offend You

Tay said terrible things. She was racist, xenophobic and downright filthy. At one point, she said the Holocaust did not happen. But she was old technology.

Nearly two years ago, Tay was an experimental system built by Microsoft. She was designed to chat breezily with young internet users, sometimes in slangy language, but American internet users quickly realized they could trick her into spewing insulting language. This was largely the result of a simple design flaw: Tay was programmed to repeat things that were said to her. But the damage was done, and within hours Microsoft shut her down.

Since then, Microsoft and other internet giants have developed new conversational technology that is far more flexible and capable than the technology behind Tay. Researchers believe these new systems will improve at a faster rate once they are released onto the internet. But sometimes, like Tay, these conversational systems reflect the worst of human nature. Given that history, companies like Microsoft are reluctant to set them loose, at least for now.

These systems do not simply repeat what is said to them, and they do not respond with canned answers. They teach themselves to converse by carefully analyzing real human dialogue. At Microsoft, for example, a new system learns to chat by analyzing thousands of online discussions from services like Twitter and Reddit. When you send the bot a message, it generates dozens of possible responses and picks one after ranking them by how well they fit the patterns of human conversation.
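The generate-and-rank loop described above can be sketched roughly as follows. This is a toy illustration, not Microsoft's actual system, which is not public: the candidate generator and the "human-likeness" scorer here are simple placeholders standing in for what would, in the real system, be learned neural networks.

```python
import random

def generate_candidates(message, n=30):
    """Stand-in generator: in a real system, a neural network would
    produce dozens of candidate replies conditioned on the message.
    Here we just sample from a small fixed pool of replies."""
    pool = [
        "Ouch, that's no good.",
        "Hope you feel better soon!",
        "Tell me more about that.",
        "That sounds rough.",
    ]
    return [random.choice(pool) for _ in range(n)]

def human_likeness(message, reply):
    """Stand-in ranker: a real system scores each candidate by how
    well it matches patterns learned from human conversations. As a
    toy proxy, we reward word overlap between message and reply."""
    msg_words = set(message.lower().split())
    reply_words = set(reply.lower().split())
    return len(msg_words & reply_words) / (len(reply_words) or 1)

def respond(message):
    """Generate many candidates, then return the highest-ranked one."""
    candidates = generate_candidates(message)
    return max(candidates, key=lambda r: human_likeness(message, r))

print(respond("I broke my ankle in a soccer match and it hurts"))
```

The key design idea is the separation of concerns: one component proposes many plausible replies, and a second component ranks them, so improving the ranker improves the whole bot without retraining the generator.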

If you complain about breaking your ankle in a soccer match, it is flexible enough to offer some sympathy. "Ouch, that's no good," it might say. "Hope your ankle heals soon." If you mention a houseguest or dinner plans, it responds with surprising precision and familiarity.

Microsoft researchers believe they can significantly improve the technology by letting it chat with large numbers of people, which would help expose its flaws and generate cleaner conversational data for the system. "Not putting this in front of real users would be a problem," said Bill Dolan, a senior researcher at Microsoft. "We need them to tell us what's right and what's not right."

But that is a hard call. Because the technology's flaws could draw public complaints, and bad press, Microsoft is being cautious about releasing it onto the internet.

The project is part of a broader effort to build a new breed of computing system that is truly conversational. Companies such as Amazon, Facebook, Salesforce and Microsoft hope the technology will provide a smoother, easier way of interacting with machines than keyboards, mice and touch screens. Siri and the other digital assistants on the market today are still a long way from fluid conversation.

For years, Silicon Valley companies have promoted chatbots that could, say, book your next flight or troubleshoot your new tablet computer. But those have not lived up to the billing, offering responses only to common queries.

Now, thanks to the rise of algorithms that can rapidly learn tasks on their own, research on conversational computing is advancing. But the whole industry faces the same problem as Microsoft: the new generation of chatbots is more humanlike, and that is not always a good thing.

“It’s more powerful,” said Alex Lebrun, who works at Facebook’s artificial intelligence lab in Paris. “But it’s more dangerous.”

The new generation of chatbots depends on neural networks: complex algorithms that can learn tasks by identifying patterns in large amounts of data. Over the past five years, these algorithms have accelerated the evolution of systems that automatically recognize faces and objects, identify spoken commands on smartphones, and translate from one language to another. They are also accelerating the development of conversational systems, though that research is far more complex and will take longer to mature.

It may seem surprising that Microsoft's researchers are training their conversational systems on Twitter and Reddit dialogue. But even on Twitter and Reddit, people are usually polite when they are genuinely talking with one another, and those services are full of such conversations.

The Microsoft researchers massage their conversational data in small ways, but for the most part they feed raw dialogue straight into their neural networks, so that the algorithms learn from very human interaction. According to Mr. Dolan, the system performs well even when the data it analyzes is riddled with misspellings and bad grammar. If you type "Winne tonight drink resttaurant," it may respond: "I'm not a fan."
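One reason raw, misspelled text can still be useful training data is that character-level patterns survive even when whole words are mangled. The sketch below illustrates this with character trigrams and Jaccard similarity; this is a generic technique for intuition, not Microsoft's method.

```python
def char_ngrams(text, n=3):
    """Return the set of character n-grams in a lowercased string."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity over character n-grams: the size of the
    shared n-gram set divided by the size of the combined set."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    union = ga | gb
    return len(ga & gb) / len(union) if union else 0.0

# The misspelled query from the article still overlaps heavily with
# its clean spelling at the character level.
noisy = "Winne tonight drink resttaurant"
clean = "wine tonight drink restaurant"
print(round(similarity(noisy, clean), 2))
```

Because most trigrams of "resttaurant" also occur in "restaurant," a pattern-matching system sees the two strings as close neighbors, which is why typos and bad grammar degrade such systems less than one might expect.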
