
The Publishing Project

AI and Assistants

As an undergraduate, I did fairly extensive research on early text-based virtual communities, particularly the ways people interacted and communicated in those text-based virtual realities. That research may color my view of Duplex and other technologies that make it harder for people to know who, or what, they are speaking with.

This background makes the argument that we have the "right" to know we're speaking to a bot harder for me to accept. Unless you already know the person you're speaking with, how can you really tell who you are communicating with? How do you know that the knowledge they claim is factually correct, or that they even possess it to begin with?

The issue, as I see it #

I think the problem began with the Duplex demo at Google I/O this year. It wasn't a complete app demo, and it was a very limited showcase of what both the Assistant and a new technology called Duplex can do.

People jumped the gun and accused Google, and Amazon before it, of trying to create machines that fool humans, or of not putting enough care into the ethics and morals of such interactions.

Enhancements to Google Assistant #

Google Assistant for Android and, in a much more limited capacity, iOS will be enhanced with interesting features:

New voices

Google's WaveNet technology is bringing improvements to computer voices in general and to the Assistant's voices in particular, creating system voices that sound more natural and human-like than before. Assistant has added six new voices using this technology.

Also as part of WaveNet, John Legend's voice will be coming to some Assistant contexts later this year.
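
To make the idea a little more concrete, here is a toy sketch (my own code, not Google's) of WaveNet's core building block: a stack of dilated causal convolutions whose receptive field grows exponentially with depth, which is what lets the model work on raw audio one sample at a time.

```python
import numpy as np

def causal_conv(x, w, dilation):
    """1-D causal convolution: the output at time t only sees samples at or before t."""
    pad = (len(w) - 1) * dilation
    padded = np.concatenate([np.zeros(pad), x])  # left-pad so no future samples leak in
    return np.array([
        sum(w[k] * padded[t + pad - k * dilation] for k in range(len(w)))
        for t in range(len(x))
    ])

# Stacking layers with dilations 1, 2, 4, 8 grows the receptive field
# exponentially; the real WaveNet adds gated activations, residual
# connections, and learned weights on top of this skeleton.
signal = np.random.randn(32)
out = signal
for dilation in (1, 2, 4, 8):
    out = np.tanh(causal_conv(out, w=np.array([0.5, 0.5]), dilation=dilation))
print(out.shape)  # (32,) -- one output per input sample
```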

Continued Conversation

You no longer have to trigger Assistant with hotwords like "Hey Google" and "OK Google" before every sentence, allowing you to have a more natural conversation. Assistant will be able to distinguish between when you are talking to it and when you are talking to another person.
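
A toy model of what this might look like under the hood; the window length and the addressed_to_assistant flag are my assumptions, standing in for the real acoustic and semantic classifiers Google uses:

```python
import time

FOLLOW_UP_WINDOW = 8.0  # seconds the mic stays open after a response (assumed value)

class ContinuedConversation:
    """Toy state machine: one hotword opens a window; follow-ups need no hotword."""
    def __init__(self):
        self.window_expires = 0.0

    def hear(self, utterance, addressed_to_assistant=True):
        now = time.monotonic()
        if utterance.lower().startswith(("hey google", "ok google")):
            self.window_expires = now + FOLLOW_UP_WINDOW
            return "handling: " + utterance
        if now < self.window_expires and addressed_to_assistant:
            self.window_expires = now + FOLLOW_UP_WINDOW  # refresh the window
            return "handling follow-up: " + utterance
        return None  # ignored: side conversation or window closed

assistant = ContinuedConversation()
print(assistant.hear("OK Google, what's the weather?"))
print(assistant.hear("And tomorrow?"))                                # no hotword needed
print(assistant.hear("Pass the salt", addressed_to_assistant=False))  # ignored
```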

Multiple Actions

Multiple Actions allows Google's Assistant to perform several actions from the same voice command, using coordination reduction to figure out exactly what the user means even with longer commands.
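
Coordination reduction is the linguistic pattern where clauses share a verb ("turn on the lights and the coffee maker"). A naive, entirely hypothetical sketch of how a compound command might be split into separate actions:

```python
def split_command(command):
    """Naive coordination reduction: split on 'and'; if a clause has no
    verb of its own, borrow the verb phrase from the previous clause."""
    verbs = ("turn on", "turn off", "play", "set", "remind")  # toy verb list
    clauses = [c.strip() for c in command.lower().split(" and ")]
    actions, last_verb = [], None
    for clause in clauses:
        verb = next((v for v in verbs if clause.startswith(v)), None)
        if verb:
            last_verb = verb
            actions.append(clause)
        elif last_verb:
            actions.append(f"{last_verb} {clause}")  # distribute the shared verb
    return actions

print(split_command("Turn on the living room lights and the coffee maker"))
# ['turn on the living room lights', 'turn on the coffee maker']
```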

Pretty Please

Google Assistant is adding skills that compliment users (especially kids) and provide positive reinforcement when they converse with the Assistant using polite words like "please" and "thank you".
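
A deliberately simple sketch of how such a skill might work; the word list and the wording of the reply are mine, not Google's:

```python
POLITE_WORDS = ("please", "thank you", "thanks")

def handle_request(utterance):
    """Stand-in for real request fulfillment."""
    return "Here's the weather for today."

def respond(utterance):
    """Prepend positive reinforcement when the user asks politely."""
    reply = handle_request(utterance)
    if any(word in utterance.lower() for word in POLITE_WORDS):
        reply = "Thanks for asking so nicely! " + reply
    return reply

print(respond("Please tell me the weather"))
```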

I think it's the first of these features that makes people nervous when it comes to the Assistant. If the voices on your devices become harder to distinguish from a human's, or if we can have long conversations where the agent/assistant interprets what we're saying without us having to take specific actions, it all leads to apprehension and fear.

One aspect of this "fear the intelligent machine" reaction is that, in giving agency and independent action capabilities to a piece of software, we take away that agency from ourselves… We don't make reservations ourselves, we let the machine make them for us… What will we give up next?

But it pays to remember that the assistant will not do anything on its own. Even if you somehow used the assistant to perform a highly controversial action, there was still human agency behind it; no self-directed AI actually pulled the trigger.

That's where the real concerns should be… How do we ensure human control over the actions of our AI agents?

Duplex #

As described in Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone, the blog post announcing the technology:

The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.

One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.

These paragraphs describing the technology tell us a few things:

  • It's task-specific and trained in specific, closed domains (see the sketch after this list)
  • It is not an open-ended conversational agent. It won't initiate communication on its own
  • Using a more human voice makes it easier to interact without having to face the notion that you're dealing with a machine, and the potential rejection that talking to a machine may cause
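
As a rough illustration of what a closed-domain constraint means in practice, here's a hypothetical gate; the domain names are my own stand-ins, not Google's:

```python
SUPPORTED_DOMAINS = {"restaurant_reservation", "salon_appointment", "holiday_hours"}

def route(request_domain, utterance):
    """Toy closed-domain gate: engage only in deeply trained domains and
    refuse everything else rather than attempt open-ended conversation."""
    if request_domain not in SUPPORTED_DOMAINS:
        return "unsupported: this system cannot carry out general conversations"
    return f"starting {request_domain} call: {utterance!r}"

print(route("restaurant_reservation", "Book a table for two at 7pm"))
print(route("philosophy_chat", "What is consciousness?"))  # refused
```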

Further down, the article makes a statement that I find interesting and that prompts a series of questions:

The system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (like scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.

  • Are these operators always available when Duplex is making a call?
  • Are these operators similar to the drivers in a "self-driving" car?
  • How do hand-offs to a human operator happen? How does the machine evaluate a call as 'unusually complex'?
  • How has the human at the other end of the conversation reacted when Duplex handed the call to a human? Did the call complete as expected?
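
The post doesn't answer these questions, but a minimal sketch of what a confidence-based hand-off could look like, with a threshold and names that are entirely my assumptions, might be:

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed cut-off; the real criteria aren't public

def run_call(turns, operator_available=True):
    """Toy self-monitoring loop: handle the call while confident,
    escalate to a human operator when confidence drops too low."""
    for utterance, confidence in turns:
        if confidence < CONFIDENCE_THRESHOLD:
            if operator_available:
                return f"hand-off to operator at: {utterance!r}"
            return "call ended: no operator available"
    return "call completed autonomously"

call = [
    ("We open at 9", 0.95),
    ("Well, it depends on the party size and whether you want the patio", 0.40),
]
print(run_call(call))  # hand-off to operator at the ambiguous turn
```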

It is also important to remember that, like Assistant, Duplex can only be triggered by human interaction. You tell Assistant that you want to make a reservation and Assistant uses Duplex to make it; nothing happens without a human in the loop to initiate the AI action.

Ethical And Moral Considerations #

As much as I'm for technologies like Google Assistant and Duplex, there's a lot to be worried about. But I also think that right now we're worried about the wrong things, considering only the potential wrongs these technologies may do.

DeepMind created a research group around Ethics & Society. One area of research for this group is privacy, transparency, and fairness. I think this is where the biggest short-term value lies, both for AI researchers and for the communities they seek to serve. If we can show that AI agents, both monitored and autonomous, can treat privacy and transparency the same way humans can, it will go a long way toward calming some of the fears people have. It won't settle them completely, as people still say they want to talk to "real people" and "not to a machine".

Another research area that I find intriguing is AI morality and values. It asks one of the questions I've always had about technology, not just AI: how can we program morals and ethics into a system when we, or the people who will actually control the AI, may not share those morals or may hold a different ethical viewpoint? As our machines become smarter than we are, better networked than we are, and achieve global reach, there will be a commensurately larger need for machines that can interpret and act upon our values (whatever they may be). Perhaps the most important question is what will happen when AIs with different morals, programmed by people with different moral, religious, and ethical backgrounds, come into conflict with each other. Or with humans?

Will we have unleashed one or more Skynets?

Ethics and morals are hard to regulate and impossible to program. While it's true that we can agree on some basic standards, it's also true that even those agreed-upon standards are not universal.

Furthermore, whose morals, ethics, and biases should we feed into our AI tools? Are they the same for autonomous AI? What happens if the ethical subroutines in the machine are different from mine?

An interesting example of programming morals is Asimov's Three Laws of Robotics, which first appeared in "Runaround" (in the March 1942 issue of Astounding Science Fiction). The three laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

This is as close as we've managed to get to codifying the behavior of autonomous robots and artificial intelligence; but it's also interesting to see that in many of Asimov's stories, robots circumvent the laws in one form or another, whether through their own actions or those of a human involved.
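
Purely to make the difficulty concrete, here's a toy encoding of the laws as a priority-ordered check. Every predicate name is my invention, and the predicates themselves (deciding whether an action "harms a human") are precisely the part nobody knows how to write:

```python
def permitted(action):
    """Toy priority-ordered check of the Three Laws: First Law outranks
    Second, Second outranks Third. The predicates are where Asimov's
    plots, and our real AI agents, get into trouble."""
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False, "violates the First Law"
    if action.get("disobeys_human_order"):
        return False, "violates the Second Law"
    if action.get("endangers_self"):
        return False, "violates the Third Law"
    return True, "permitted"

print(permitted({"disobeys_human_order": True}))  # (False, 'violates the Second Law')
print(permitted({"endangers_self": False}))       # (True, 'permitted')
```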

How bulletproof will the morals we're allowed to program into our AI agents be?

Deception is not exclusive to machines. In Identity and Deception in the Virtual Community and Being Real, Judith Donath discusses identity and deception in the early virtual communities of the 1990s. Donath points out that:

“One can have...?” Who is this “one”? It is, of course, the embodied self, the body that is synonymous with identity, the body at the keyboard. The two worlds are not truly disjoint, but their relationship is murky and intricate. A single person can create multiple electronic identities, their only link their common progenitor, a link which is invisible in the virtual world. A man can create a female identity; a high-school student can claim to be an expert in virology. If the two worlds were truly unconnected, the result might be simply a wholly independent society, wherein a virtual identity would be taken on its own terms. Yet this is not the case. Other explorers in virtual space, males in the real world, develop relationships with the ostensible female, and are shocked to discover her “actual” gender. The virtual virologist’s pronouncements on AIDS breakthroughs and treatments are assumed to be backed by real-world knowledge. Do virtual personas inherit the qualities – and responsibilities – of their creators?

From Identity and Deception in the Virtual Community

AI doesn't necessarily make the issue worse, though some people have decided that it's far worse when a machine does the deceiving on human instructions and we can't tell the difference between human and machine. Is it just a difference of degree that makes us doubt the machine more than the person who sent us the spear-phishing email to try to compromise the business we work for?

In the late 1990s and early 2000s, scientists and the public worried that we were becoming so familiar with technology that we were starting to lose the ability to communicate with other humans, but I wonder whether treating this as a problem is actually a generational issue.

This is not to say we should be complacent either. The fact that we're getting comfortable with the technology doesn't mean we should become lax in understanding and enforcing a common set of best practices for how we interact with AI and how AI behaves when interacting with us.

Kate Crawford presented an interesting lecture at the University of Washington's Tech Policy Lab on the social and political implications of AI.

We have to ask ourselves whether we're bringing our biases into AI and machine learning; I don't think it's possible not to. Why is it that, as Crawford points out, most biases in AI are reflections of systemic biases that have been present in our culture since before the latest wave of AI came into the popular consciousness?

Who would benefit the most? #

So far I've fallen into the trap of talking only about the negative potential of AI and conversational agents.

The first group that comes to mind is those who rely on computer-assisted speech for day-to-day interactions.

Another group that would benefit from having a conversational agent is people who suffer from social anxiety disorders or who, like the hikikomori in Japan, choose to have little or no contact with the outside world.

How do we reconcile the needs of groups like this with our desire for privacy and the need we all have to communicate?

Do the good aspects overwhelm the bad? How do we use machine learning, deep learning, and other AI applications to ameliorate the underlying systemic biases that caused the technological biases we are becoming aware of?

Conclusion: Bringing It All Together #

AI is an interesting and dangerous topic. For the purpose of this final discussion, I think we can group it into two areas: human/AI interaction and fully autonomous AI.

Human/AI interaction is the most recent and, apparently, the most controversial. We're afraid that the machine will "trick" us when talking to us, rather than seeing the conversation as the mutually beneficial exchange it is.

There are at least three issues that we need to address when talking about human/AI interaction:

  • How do we set ethical, moral, and legal boundaries for AI?
  • How do we build human trust in AI?
  • As conversational UI and AI developers, how do we address users' concerns?

Autonomous AI presents a different set of challenges, colored to a larger degree by biases, both cultural and against AI. Autonomous, self-sufficient AI is too wide a topic to cover here, but it's important to keep in mind.

How we deal with these issues and build comfort with the technologies will have huge implications for society and the individual.
