There are now a wide variety of chatbots. You can now order a pizza with a bot. You can plan a weekend getaway with a bot. You can even play Pokémon GO, without playing Pokémon GO, with a bot.

The world’s big tech companies all agree that conversational interfaces are the next big thing. They’ve spent much of 2016 telling us about the amazing things we’ll be doing with bots in the near future.

But there’s one thing they haven’t said much about: personality.

Hey Siri, how do you really feel?

The first bot to become a household name was Siri, Apple’s ubiquitous digital assistant. We liked Siri. She had a name. She even had a sense of humor. But it would be a stretch to say she had, or has, a personality. She’s polite, straightforward … and just a little bit dull.

The same is true of Microsoft’s Cortana, Google’s Assistant (who doesn’t even have a name), and most of the news and travel bots that have cropped up recently.

It’s not surprising. These bots need to work for everyone, and we all have different ideas about what’s funny and what’s annoying. Better to play it safe than create a personality some people can’t stand.

That’s doubly true when the bot doesn’t work very well. Nothing makes tech frustration more frustrating than a peppy personality, as CNN recently discovered. But the right personality can make a bot much more engaging. It’s a powerful tool — if you can get it right.

A little more conversation

The golden rule of writing conversational interfaces is: Tell people what to do next.

“What do you want to do today?” is a question with twenty different answers. Say I want to buy a red cable-knit sweater, like the one Ken Bone wore at the presidential debate. Do I respond, “I want to buy something” or “Shop for sweaters” or “Red cable-knit” or “Ken Bone”?

If your AI can deal with all of these responses, great. If not, you’re going to frustrate a lot of people. That’s why early bots talk like this:

Describe what you’re looking for. For example, “blue hats”.

Technical constraints have led us to simple, neutral copy. But you can keep things clear while injecting some warmth:

What are you looking for today? Blue hats, maybe? Size 9 shoes? Or our biggest discounts?

Or you can dial things up:

Everyone looks great in a blue hat. Type “blue hats” if you’d like to see some.

Now, the interaction feels more engaging — and it’s just as clear.
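The constraint driving this kind of copy is easy to see in code. Here’s a rough sketch of how a simple keyword-matching bot works (the product names, keywords, and replies are invented for illustration; real bots use NLU models rather than substring checks, but the principle is the same):

```python
# A minimal keyword-matching bot: it only understands phrases it has
# been explicitly taught, so its prompt must steer users toward them.
# All keywords and responses here are illustrative, not a real API.

RESPONSES = {
    "blue hats": "Here are our blue hats.",
    "size 9 shoes": "Here are shoes in size 9.",
    "discounts": "Here are our biggest discounts.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    # The fallback prompt does the heavy lifting: it tells the user
    # exactly which phrases the bot can understand.
    return ("What are you looking for today? Blue hats, maybe? "
            "Size 9 shoes? Or our biggest discounts?")

print(reply("Show me blue hats"))        # matches a known keyword
print(reply("I want to buy something"))  # falls through to the prompt
```

Notice that the warmth lives entirely in the strings: the matching logic is identical whether the fallback is neutral or playful, which is why adding personality costs so little once the copy is clear.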

People power

There are two corner stores near where I live in London. I always go to the same one, because I like the guy who works there. It’s not a decision I’ve thought much about. But over the years, it must have brought that store a nice influx of cash.

People choose couches and laptops and wedding dresses based on which stores have nicer sales staff, simpler websites, or better call centers. Soon, bots will be an important piece in this puzzle. You may even find yourself saying “Alexa found it for me” instead of “I bought it on Amazon.”

Some companies will probably ask us to choose which bot we talk to, like choosing the voice on a GPS in your car. (Siri already comes in male and female versions, though for now they say the same things.) Others will invent characters with rich back stories, or ask us to chat with dogs or superheroes or Roman centurions. A few of these experiences will be fun. Most of them will be terrible.

But being able to buy insurance from Homer Simpson or a sardonic poodle isn’t really what it’s about.

Passing the test

As bot tech matures — and as we get used to talking to bots — the kind of interactions we have will change. Open-ended questions like “What are you looking for?” will go from being a bad idea to a great one, because they’ll let a bot learn more about the customer. Just like a real sales assistant, a bot will pick up on extra information and use it.

You: “I’m going to a wedding next month, and I need a dress to wear.”

Bot: “Great. Any color in particular?”

You: “I’m thinking blue.”

Bot: “Here are our most popular blue wedding dresses. Let me know if you’re looking for a bag, too!”
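Under the hood, an exchange like this is usually modeled as slot filling: the bot extracts attributes from each message, remembers them, and asks only for what’s still missing. A minimal sketch of that idea (the slot names, keywords, and replies are all invented for illustration):

```python
# Slot-filling sketch: the bot accumulates what it has learned
# ("wedding", "blue") across turns and prompts for missing details.
# Slot names, keyword lists, and replies are hypothetical.

KEYWORDS = {
    "occasion": ["wedding", "party", "work"],
    "color": ["blue", "red", "black"],
}

def update_slots(state: dict, message: str) -> None:
    """Fill any slots whose keywords appear in the message."""
    text = message.lower()
    for slot, words in KEYWORDS.items():
        for word in words:
            if word in text:
                state[slot] = word

def next_reply(state: dict) -> str:
    """Ask for the first missing slot, or show results when complete."""
    if "occasion" not in state:
        return "What's the occasion?"
    if "color" not in state:
        return "Great. Any color in particular?"
    return (f"Here are our most popular {state['color']} "
            f"{state['occasion']} dresses.")

state = {}
update_slots(state, "I'm going to a wedding next month, and I need a dress.")
print(next_reply(state))   # occasion filled, so it asks about color
update_slots(state, "I'm thinking blue.")
print(next_reply(state))   # both slots filled: shows results
```

The point is that the conversational state, not any single message, drives the reply. That’s what lets the bot handle the open-ended opener instead of demanding a rigid command.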

This only happens if we talk naturally to bots. And that only happens if those bots have enough personality to cultivate a natural conversation. This is a key point: Personality isn’t just a layer of gloss that sits on top of a bot’s functionality. Done right, it can transform what a bot does.

In 1950, Alan Turing devised the Turing test: a machine passes when people can’t tell whether they’re conversing with a human or a program. Right now, bots are a long way from passing it. But personality will be a big part of what bridges the gap. Soon, the conversation will be worth having.