In the Digital Age, Companies Need a Human Touch

Today, a customer’s entire experience with a company — from first inquiry to final transaction — can and often does occur entirely online. Many consumers seem to prefer it that way, too. According to data collected by Nextiva, 57% of surveyed respondents said they would rather contact companies via email or social media than through voice-based customer support.

The shift to a digitally driven business model isn’t just convenient for consumers — it comes with considerable benefits for businesses, too. In one report published by Juniper Research, analysts projected that automated systems would save companies a collective $8 billion in customer support costs by 2022. That’s a compelling financial argument. Smart Insights estimates that 34% of companies have already undergone a digital transformation, while researchers for Seagate anticipate that over 60% of CEOs globally will begin prioritizing digital strategies to improve customer experience by the end of this year.

Integrating technology into our day-to-day business operations is a no-brainer, given its potential to lower costs, boost convenience, and improve the consumer experience. However, in our race to keep pace with the digital age, we may be leaving one critical aspect of customer service behind: human connection.

As much as consumers appreciate the convenience and speed that digital tools and systems facilitate, they also need to feel a genuine human connection and know that there are people behind the AI-powered customer hotline. As one writer put it in an article for Business Insider, “A satisfactory customer experience depends on how well a company can relate to a customer on an emotional level. To create memorable experiences, employees who are curious and have a genuine desire to assist the customers can set brands apart.”

Business consultant Chris Malone and social psychologist Susan T. Fiske researched this emotional connection for their book, The Human Brand: How We Relate to People, Products, and Companies. In the text, they write that consumers gravitate toward companies much as they do toward friends: if they perceive a business as emotionally warm and welcoming, but not particularly competent, they will still enjoy the experience and like the brand. In contrast, if a company is competent but cold in its customer engagement, consumers tend to visit only when circumstances demand it.

The ideal, they say, is for companies to be both warm and competent. Within the context of our digitally-driven world, striking that balance means integrating consumer service technology without wholly excising human personality.

Businesses need to identify when their AI-powered chatbot or customer service channel crosses the line from useful to canned and frustrating. Sometimes, a robot voice just isn’t helpful enough; according to statistics published by American Express, 40% of customers prefer talking to a human service representative when they struggle with complex problems. Consumers should always have the means to reach a human representative if they can’t solve their problems through automated channels.

People want to connect with a brand that has personality, voice, and empathy — even across digital channels. Social media platforms have evolved into significant touchpoints for businesses today; researchers for Nextiva estimate that over 35% of consumers in the U.S. reached out to a business through social media in 2017. Their expectations, too, are significant — 48% of the customers who contact a company via social media expect a response to their questions or complaints within 24 hours. Even so, a cold, automated response may be just as ineffective as no response at all.

Emerging technology is not a be-all, end-all, unquestionable solution to our problems. Businesses need to treat digital channels with all the personality, empathy, and care that they would offer during a client call. If they rely on canned responses or AI bots, they may find their consumer pool shrinking as customers defect to companies with more perceived warmth.

Technology is convenient, yes — however, the convenience it creates should never come at the cost of human connection.

Originally published on ScoreNYC

Published November 26th, 2019

AI Fails and What They Teach Us About Emerging Technology

These days, we’ve become all but desensitized to the miraculous convenience of AI. We’re not surprised when we open Netflix to find feeds immediately and perfectly tailored to our tastes, and we’re not taken aback when Facebook’s facial recognition tech picks our face out of a group-picture lineup. Ten years ago, we might have made a polite excuse and beat a quick retreat if we heard a friend asking an invisible person to dim the lights or report the weather. Now, we barely blink — and perhaps wonder if we should get an Echo Dot, too. 

We have become so accustomed to AI quietly incorporating itself into almost every aspect of our day-to-day lives that we no longer place hard limits on our perception of what’s possible. Rather than greet new claims about AI’s capabilities with disbelief, we regard them with interested surprise and think — could I use that?

But what happens when AI doesn’t work as well as we expect? What happens when our near-boundless faith in AI’s usefulness is misplaced, and the high-tech tools we’ve begun to rely on start cracking under the weight of the responsibilities we delegate to them?

Let’s consider an example.

AI Can’t Cure Cancer — Or Can It? An IBM Case Study 

When IBM’s Watson debuted in 2014, it charmed investors, consumers, and tech aficionados alike. Proponents boasted that Watson’s information-gathering capabilities would make it an invaluable resource for doctors who might otherwise not have the time or opportunity to keep up with the constant influx of medical knowledge. During a demo that same year, Watson dazzled industry professionals and investors by analyzing an eclectic collection of symptoms and offering a series of potential diagnoses, each ranked by the system’s confidence and linked to relevant medical literature. The AI’s apparent command of rare diseases and its ability to provide diagnostic conclusions were both impressive and inspiring.

Watson’s positive impression spurred investment. Encouraged by the AI’s potential, MD Anderson, a cancer center within the University of Texas, signed a multi-million dollar contract with IBM to apply Watson’s cognitive computing capabilities to its fight against cancer. Watson for Oncology was meant to parse enormous quantities of case data and provide novel insights that would help doctors provide better and more effective care to cancer patients. 

Unfortunately, the tool didn’t exactly deliver on its marketing pitch. 

In 2017, auditors at the University of Texas submitted a caustic report claiming that Watson not only cost MD Anderson over $62 million but also failed to achieve its goals. Doctors lambasted the tool for its propensity to give bad advice; in one memorable case reported by The Verge, the AI suggested that a patient with severe bleeding receive a drug that would worsen the condition. Luckily, the patient was hypothetical, and no real people were hurt; however, users were still understandably frustrated by Watson’s apparent ineptitude. As one particularly scathing doctor said in a report for IBM, “This product is a piece of s—. We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.”

But is the project’s failure to deliver on its hype all Watson’s fault? Not exactly. 

Watson’s main flaw lay in implementation, not technology. When the project began, doctors entered real patient data as intended. However, Watson’s guidelines changed often enough that updating those cases became a chore, and users soon switched to hypothetical examples. This meant that Watson could only make suggestions based on the treatment preferences and information provided by a few doctors, rather than actual data from an entire cancer center, thereby skewing the advice it provided.

Moreover, the AI’s ability to discern connections is only useful to a point. It can note a pattern between a patient with a given illness, their condition, and the medications prescribed, but any conclusions drawn from such analysis would be tenuous at best. The AI cannot definitively determine whether a link is correlation, causation, or mere coincidence — and thus risks providing diagnostic conclusions without evidence-based backing.

Given the lack of user support and the shortage of real information, is it any surprise that Watson failed to deliver innovative answers? 

What Does Watson’s Failure Teach Us?

Watson’s problem is more human than it is technical. There are three major lessons that we can pull from the AI’s crash: 

We Need to Check Our Expectations.

We tend to believe that AI and emerging technologies can achieve what their developers say they can. However, as Watson’s inability to separate correlation from causation demonstrates, the potential we read in marketing copy can be overinflated. As users, we need to understand emerging technology better, and regard it with healthy skepticism, before we begin relying on it.

Tools Must Be Well-Integrated. 

If doctors had been able to use the Watson interface without continually needing to revise their submissions for new guidelines, they might have provided more real patient information and used the tool more often than they did. This, in turn, may have allowed Watson to be more effective in the role it was assigned. Considering the needs of the human user is just as important as considering the technical requirements of the tool (if not more so). 

We Must Be Careful.

If the scientists at MD Anderson hadn’t been so careful, or if they had followed Watson blindly, real patients could have been at risk. We can never allow our faith in an emerging tool to be so inflated that we lose sight of the people it’s meant to help. 

Emerging technology is exciting, yes — but we also need to take the time to address the moral and practical implications of how we bring that seemingly capable technology into our lives. At the very least, it would seem wise to temper our faith with a little more skepticism.

Published September 3rd, 2019