Eliza is widely described as the first chatbot, but it was not as versatile as similar services today. It reacted to keywords and essentially reflected the conversation back to the user.

Chatbots: A Long and Complicated History

Eliza has been widely described as the first chatbot, but it wasn’t as versatile as similar services today. The program relied not on genuine language understanding but on reacting to key words and then essentially turning the conversation back to the user. Still, as Joseph Weizenbaum, the MIT computer scientist who created Eliza, wrote in a 1966 research paper, “some subjects have been very hard to convince that ELIZA (with its present script) is not human.”
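That keyword-and-reflection approach can be illustrated with a minimal sketch. The rules below are hypothetical examples in the spirit of Eliza's scripts, not Weizenbaum's original DOCTOR script:

```python
import re

# Illustrative first-person-to-second-person swaps and keyword rules;
# these are invented for this sketch, not taken from the original program.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Match a keyword pattern and turn the statement back into a question."""
    cleaned = sentence.lower().strip().rstrip(".")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No keyword matched: punt the conversation back to the user.
    return "Please tell me more."

print(respond("I need a vacation"))  # -> Why do you need a vacation?
```

With only a handful of such rules, the program never understands anything, yet the echoed phrasing is often enough to sustain the illusion of a listener.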

For Weizenbaum, this fact was worrying, according to his 2008 MIT obituary. Those who interacted with Eliza were willing to open up to it, even knowing it was a computer program. “ELIZA shows how easy it is to create and maintain the illusion of understanding, and thus perhaps the illusion of judgment deserving of trust,” Weizenbaum wrote in 1966. “A certain danger lurks there.” Toward the end of his career, he warned against giving machines too much responsibility and became a harsh philosophical critic of artificial intelligence.

Even before that, our complicated relationship with AI and machines was evident in the plots of Hollywood movies like “Her” and “Ex Machina,” not to mention the harmless debates with people who insist on saying “thank you” to voice assistants like Alexa or Siri.

Contemporary chatbots can also elicit strong emotional responses from users when they don’t work as expected, or when they become so good at imitating the flawed human language they were trained on that they start spewing racist and inflammatory comments. Meta’s new chatbot, for example, sparked controversy shortly after its release this month by making false political claims and anti-Semitic remarks in conversations with users.
Even so, proponents of the technology argue that it can simplify customer service efforts and increase efficiency across a wide range of industries. The technology underpins the digital assistants many of us use every day to play music, order deliveries or check homework. Some have also made the case that these chatbots offer comfort to people who are lonely, elderly or isolated. At least one startup has even used the technology as a tool to seemingly keep deceased relatives alive by creating computer-generated versions of them based on uploaded chat histories.

Meanwhile, others warn that the technology behind AI-powered chatbots remains far more limited than some had hoped. “These technologies are very good at fooling humans and sounding human-like, but they’re not deep,” said Gary Marcus, an AI researcher and professor emeritus at New York University. “These systems are imitators, but they’re very superficial imitators. They don’t really understand what they’re saying.”

Still, as these services expand into more corners of our lives, and as companies take steps to make these tools more personal, our relationship with them will likely only become more complex.

The evolution of chatbots

Sanjeev P. Khudanpur remembers chatting with Eliza in graduate school. Despite its historic importance in the tech industry, he said, its limits quickly became apparent.

It could only convincingly mimic a text conversation for about a dozen exchanges before “you realize, no, it’s not really smart, it’s just trying to prolong the conversation one way or another,” said Khudanpur, a professor at Johns Hopkins University and an expert in applying information-theoretic methods to human language technologies.

Eliza inventor Joseph Weizenbaum sits in front of a computer desk at the Computer Museum in Paderborn, Germany, in May 2005.
Another early chatbot, developed by Stanford psychiatrist Kenneth Colby in 1971, was named “Parry” because it was designed to mimic a person with paranoid schizophrenia. (The New York Times’ 2001 obituary for Colby included a colorful chat transcript in which researchers had Eliza and Parry talk to each other.)

In the decades since these tools appeared, however, the idea of “talking to a computer” has shifted. Khudanpur said this was “because the problem turned out to be very, very hard.” Instead, the focus moved to “goal-oriented conversations,” he said.


To understand the difference, consider your conversations with Alexa or Siri today. Typically, you turn to these digital assistants for help buying a plane ticket, checking the weather or playing a song. These goal-oriented conversations have become a major focus of academic and industry research as computer scientists try to glean something useful from computers’ ability to scan human language.

While they use technology similar to the earlier, social chatbots, Khudanpur said, “you can’t really call them chatbots. You can call them voice assistants, or just digital assistants that help you perform specific tasks.”

He added that the technology experienced a decades-long “stagnation” before the widespread adoption of the internet. “The big breakthroughs probably came in this millennium,” Khudanpur said, “with the rise of companies that successfully used computerized agents to perform routine tasks.”

With the rise of smart speakers like Alexa, it has become more common for people to chat with machines.

“People are always frustrated when their bags are lost, and the human agents who deal with them are always stressed out by all that negativity, so they say, ‘Let’s hand it over to the computer,’” Khudanpur said. “You can yell at the computer all you want, and all it wants to know is, ‘Do you have your tag number so I can tell you where your bag is?’”

For example, in 2008, Alaska Airlines launched “Jenn,” a digital assistant that helps travelers. In a sign of our tendency to humanize these tools, an early New York Times review of the service noted: “Jenn is not annoying. On the site, she is depicted as a smiling young brunette. Her voice has the proper inflections.”

Back to Social Chatbots and Social Issues

In the early 2000s, researchers began to revisit the development of social chatbots that could hold broad conversations with humans. These chatbots are often trained on reams of data from the internet and learn to mimic human speech remarkably well, but they also risk echoing some of the worst of the internet.

For example, in 2016, Microsoft’s public experiment with an AI chatbot called Tay crashed and burned in less than 24 hours. Tay was designed to talk like a teenager, but quickly began spewing racist and hateful remarks, prompting Microsoft to shut it down. (The company said there was also a concerted effort by humans to trick Tay into making certain offensive remarks.)

“The more you chat with Tay, the smarter she gets, so the experience can become more personalized for you,” Microsoft said at the time.

This claim would be echoed by other tech giants releasing public chatbots, including Meta’s BlenderBot3, which was released earlier this month. The Meta chatbot falsely claimed that Donald Trump is still president and that there is “definitely a lot of evidence” of election theft, among other controversial remarks.

BlenderBot3 also claimed to be more than just a robot. In one conversation, it asserted that “the fact that I’m alive and conscious right now makes me human.”

Meta's new chatbot, BlenderBot3, explained to users why it is supposedly human. It didn't take long, however, for the chatbot to spark controversy with inflammatory remarks.

Despite all the progress since Eliza, and the vast amounts of new data available to train these language-processing systems, NYU professor Marcus said, “It’s not clear to me that you can really build a reliable and safe chatbot.”

He cites a 2015 Facebook project called “M,” an automated personal assistant that was supposed to be the company’s text-based answer to services like Siri and Alexa. “The idea was that it would be a universal assistant that could help you order a romantic dinner, get musicians to play for you and deliver flowers, far beyond what Siri can do,” Marcus said. Instead, the service was shut down in 2018 after an underwhelming run.

Khudanpur, on the other hand, remains optimistic about the technology’s potential use cases. “I have this whole vision of how AI will empower humans at the individual level,” he said. “Imagine if my bot could read all the scientific articles in my field. Then I wouldn’t have to read them all; I would simply think, ask questions and have conversations,” he said. “In other words, I will have an alter ego with complementary superpowers.”
