The Next Chapter in Artificial Intelligence: Sense and Sensibility

Jane Austen wrote her classic “Sense and Sensibility” in 1811. Who could have imagined that more than two centuries later the story would help guide us in using artificial intelligence (AI) to serve our customers?

In the novel, the Dashwood sisters represent reason and emotion — sense and sensibility — respectively. Elinor Dashwood is the embodiment of rational thinking and logical decisions; her younger sister Marianne exhibits admirable sensitivity but also emotional excess. Though Austen initially seems to favor Elinor’s good sense, as the story unfolds she shows uncertainty as to whether sense or sensibility should triumph.

Indeed, by the end we accept that both are important — that intelligence informed by emotional empathy is the ideal human behavior. And that conclusion should be our model for artificial intelligence.

With AI technology, software is programmed to “think” like a human and mimic the way a person acts. For what purpose? To contribute value to our business endeavors, and to our lives.

AI is being applied in widely divergent ways. Among other things, the technology is being used to help law enforcement uncover fraudulent crime and theft claims, to predict a mortgage borrower’s likelihood of default, to inform autonomous driving and even to create works of art.

All of those applications rely on the quality of the data used for machine learning to properly understand the world and the humans who inhabit it.

Yet none of them really push AI to extraordinary accomplishments. Even the artwork seems to be an uninspired version or copy of works created by human artists who have come before.

Since we humans are a mix of thoughts and emotions — sense and sensibility — AI needs both to be the best human it can be. In fact, it should be a mimic of ideal human behavior — or even superhuman behavior.

In “Sense and Sensibility,” we learn that reason and emotions are critical to human interactions. In order to take AI to extraordinary heights, we need to integrate both logic and emotion into the technology.

It may be most important to do that in the health care sector, where AI is emerging in early applications that may eventually mature into tools that could save lives.

Think about the Wong-Baker FACES pain-rating scale, which was developed more than 30 years ago. For children especially, it is difficult to recognize and communicate pain level, and the facial expressions on the chart offered them a good way to let physicians know how much pain they were experiencing. While that chart is still relevant, we have come a long way in applying technology to incorporate emotion and sentiment analysis throughout our health care practices.

Scientists are developing AI robots that are equipped with speech-recognition technology and natural language processing for sentiment analysis that can form diagnoses and suggest appropriate courses of action. They can even detect emotions based on heart sound analysis and use concept mining to engage in an empathetic conversation with patients.

I wonder where the abilities to detect emotion and generate empathetic responses were in my recent interactions with my online pharmacy provider. What began as a simple transaction became a stressed-out call where neither sense nor sensibility were evident. Warning: author rant ahead, proceed with caution.

I had placed two prescription refills online to be sent to our travel address, and one of those refills required an additional approval from our concierge physician. I did not expect that to be a difficult order to fill because I had left ample time for it to be processed. But a week later I learned that both prescriptions were on hold, and the one that did not require extra approval (which happened to be the one for which a dosage interruption was considered dangerous) was about to run out. I called the customer service contact center to straighten out the issue and to ask them to split the two prescriptions and expedite delivery of the one that was about to run out. Let’s just say the interaction was not successful.

The service rep advised me that splitting the two prescriptions would involve an additional two-week delay but said I could go to my physician and get an emergency dose. I asked if that could be expedited, please, and also asked if there was another option that recognized we were traveling. Apparently not, because the rep just responded by repeating the options that didn’t work, apparently reading from a script. Where was the sense in that?

I asked to talk to a supervisor. The rep said no until I asked three times (sounds like a prewired script decision to me). At last, I was speaking with a supervisor, a person who could help solve my problem. I anticipated the reasoned decision-making as well as the sensibility that I was sure would come with a more senior role. I explained the problem and asked the supervisor for his understanding and assistance. Instead of offering helpful alternatives, he was belligerent and told me he had been listening all along (really) and said that the rep had offered to help but I had refused her offer.

AI sentiment analysis would clearly have helped in that situation.

The rep and the supervisor would have been advised that I was upset and asking for their help, and they could have suggested a remedy that would ensure the meds arrived in time. Most importantly, this would have been integrated into the script. While I am amazed that we apparently need AI to program humanity into humans, I can see that we clearly must do that.
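The escalation flag described above can be sketched with a minimal lexicon-based sentiment scorer. Everything here — the word lists, the weights and the escalation threshold — is an illustrative assumption, not any contact-center vendor's actual model:

```python
# Minimal lexicon-based sentiment sketch. The word lists, weights and
# escalation threshold are illustrative assumptions, not a production model.

NEGATIVE = {"upset": -2, "stressed": -2, "refused": -1, "delay": -1, "problem": -1}
POSITIVE = {"help": 1, "thanks": 1, "resolved": 2, "expedite": 1}

def sentiment_score(utterance: str) -> int:
    """Sum lexicon weights for the words in a customer utterance."""
    score = 0
    for word in utterance.lower().split():
        word = word.strip(".,!?")
        score += NEGATIVE.get(word, 0) + POSITIVE.get(word, 0)
    return score

def should_escalate(utterance: str, threshold: int = -2) -> bool:
    """Flag the call for a scripted empathy/escalation path."""
    return sentiment_score(utterance) <= threshold

print(should_escalate("I am upset about this delay, please help"))
```

A production system would use a trained model rather than a hand-built lexicon, but the integration point is the same: a running score on the caller's words that, past a threshold, routes the conversation to an empathy-first branch of the script instead of a repeat of the standard options.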

How did the call end? I thanked the supervisor for his and the rep’s help (yes sarcasm) and said I would pursue other avenues. The supervisor said, “Have a nice day,” proving he too was quite familiar with sarcasm. Ironically, while AI does still have difficulty recognizing sarcasm, the advances in sentiment algorithms would have helped this contact center do better inquiry handling and exception management from the start.

[Following my call, we had our physician speak to his contact center rep — a different path in the call tree — and he split the order and expedited the meds.]

Whether relying on human intelligence or AI, actions must be based on the available data. The better the data, the more informed the decision. In a CMSWire article titled “Debunking 4 Popular Myths About Robots” that I wrote in 2017, I highlighted some important applications that rely on this premise.

I have since learned of other applications that could change all of our lives for the better.

For example, London-based IntelligentX Brewing, which advertises “beer that learns,” is using AI to personalize taste by collecting and analyzing data for flavor preferences and then tweaking its recipe.

Not to be outdone, the “Beer Fingerprinting” project uses machine learning combined with high-tech sensors to collect data and interpret subtle aromas — often linked to emotional responses — to create intelligent beer. The information gained from this system is used to explore new brewing organisms, ultimately leading to the creation of new beers.

Just as the Dashwood sisters discover by the end of “Sense and Sensibility,” both reason and emotion are important guides. 

As we apply AI for better patient care or customer support, or even to brew a better beer, we use intelligent technologies to drive logical process flows and inform our actions with emotional empathy. By combining rules, patterns, data collection and analysis, and techniques like machine learning and natural language processing, AI can “mimic” the way our best human selves might think and should react.
