Thursday’s posts look at sociolinguistics or child language acquisition: accents, stereotypes and how children learn to speak

One thing I love about studying language (at least, the side of language I look at: the social, communication side) is that you’re looking at real people, and people are fascinating. You can’t study social constraints on language without talking to people, and I love talking to people, so it’s a perfect match.

But there’s another dimension closing in: artificial intelligence.

It all started off with Eliza, a chatbot playing the part of a therapist. If you talk to her, you’ll notice that she doesn’t actually take part in the conversation so much as repeat what you say back as a question or respond with a stock phrase (“Go on”, “Does it please you to…”, and so on). This is quite frustrating, even allowing for the fact that the conversation is meant to be therapy, where most of the talking is done by the ‘patient’. Try asking for an explicit opinion (we tried asking her opinion on abortion), and she has a number of tricks (“We’re talking about you, not me”) to avoid giving one.
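To give a rough idea of how little is going on under the hood, here’s a toy sketch in Python of that reflect-or-deflect behaviour. It’s very much a simplification, not Weizenbaum’s actual script: the pattern rule, pronoun table and stock phrases are my own invented examples.

```python
import random
import re

# Hypothetical pronoun swaps so a statement can be turned back on the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# Stock deflections of the kind Eliza falls back on.
STOCK_PHRASES = ["Go on.", "Why do you say that?",
                 "We're talking about you, not me."]

def reflect(statement: str) -> str:
    """Swap first- and second-person words so the statement points back at the user."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in statement.split())

def respond(statement: str) -> str:
    """Echo 'I feel/think ...' statements back as questions; otherwise deflect."""
    match = re.match(r"i (feel|think) (.*)",
                     statement.strip().rstrip("."), re.IGNORECASE)
    if match:
        return f"Why do you {match.group(1).lower()} {reflect(match.group(2))}?"
    return random.choice(STOCK_PHRASES)

print(respond("I feel frustrated talking to a machine."))
# -> "Why do you feel frustrated talking to a machine?"
```

The point of the sketch is that nothing in it understands anything: it only matches patterns and shuffles words, which is exactly why the conversation feels so hollow.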

Since Eliza, things have developed a fair amount. Siri, for example, can take part in a conversation with actual responses, drawing on information from the internet, your calendar, and so on. She also has voice recognition, which feels more realistic than typing as you have to with Eliza.

Other robots, such as the Nabaztag, retain information and give feedback about your lifestyle. This means that people with dementia, for example, have something to remind them whether they’ve eaten lunch when they can’t remember themselves.

Isn’t there something slightly creepy about them, though?

What about this one?

[Image: Mori’s Uncanny Valley graph]

This is an example of the uncanny valley: when something inhuman resembles a human closely, but not closely enough to be completely realistic, it becomes creepy to look at. Scientists are trying to break through this valley, but haven’t quite managed it yet. You can find out more about the Uncanny Valley here.

Beyond the uncanny aspect of talking robots, I find the concept strange because there’s no personal element: surely that’s why we talk to other people?

I can see robots being useful for certain groups of people, and I can understand Siri being interesting and funny.

However, robots are programmed. They can be useful, but they don’t have natural personalities. They can’t have opinions of their own. And being inhuman means they can’t hold a natural conversation beyond what they’ve been programmed for.

If robots don’t have personalities (Except Sonny in I, Robot. Obviously he doesn’t actually exist…), what’s the use in talking to them?
