AI Fails and What They Teach Us About Emerging Technology

These days, we’ve become all but desensitized to the miraculous convenience of AI. We’re not surprised when we open Netflix to find feeds immediately and perfectly tailored to our tastes, and we’re not taken aback when Facebook’s facial recognition tech picks our face out of a group-picture lineup. Ten years ago, we might have made a polite excuse and beaten a quick retreat if we heard a friend asking an invisible person to dim the lights or report the weather. Now, we barely blink — and perhaps wonder if we should get an Echo Dot, too.

We have become so accustomed to AI quietly incorporating itself into almost every aspect of our day-to-day lives that we’ve stopped putting hard walls around our perception of what’s possible. Rather than greeting new claims about AI’s capabilities with disbelief, we regard them with interested surprise and think: could I use that?

But what happens when AI doesn’t work as well as we expect? What happens when our near-boundless faith in AI’s usefulness is misplaced, and the high-tech tools we’ve begun to rely on start cracking under the weight of the responsibilities we delegate? 

Let’s consider an example.

AI Can’t Cure Cancer — Or Can It? An IBM Case Study 

When IBM’s Watson made its commercial debut in 2014, it charmed investors, consumers, and tech aficionados alike. Proponents boasted that Watson’s information-gathering capabilities would make it an invaluable resource for doctors who might otherwise not have the time or opportunity to keep up with the constant influx of medical knowledge. During a demo that same year, Watson dazzled industry professionals and investors by analyzing an eclectic collection of symptoms and offering a series of potential diagnoses, each ranked by the system’s confidence and linked to relevant medical literature. The AI’s clear knowledge of rare diseases and its ability to provide diagnostic conclusions were both impressive and inspiring.

The positive impression Watson made spurred investment. Encouraged by the AI’s potential, MD Anderson, a cancer center within the University of Texas, signed a multimillion-dollar contract with IBM to apply Watson’s cognitive computing capabilities to its fight against cancer. Watson for Oncology was meant to parse enormous quantities of case data and provide novel insights that would help doctors deliver better and more effective care to cancer patients.

Unfortunately, the tool didn’t exactly deliver on its marketing pitch. 

In 2017, auditors at the University of Texas submitted a caustic report claiming that Watson not only cost MD Anderson over $62 million but also failed to achieve its goals. Doctors lambasted the tool for its propensity to give bad advice; in one memorable case reported by The Verge, the AI suggested that a patient with severe bleeding receive a drug that would worsen their condition. Luckily, the patient was hypothetical, and no real people were hurt; still, users were understandably annoyed by Watson’s apparent ineptitude. As one particularly scathing doctor said in a report for IBM, “This product is a piece of s—. We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.”

But is the project’s failure to deliver on its hype all Watson’s fault? Not exactly. 

Watson’s main flaw lay in implementation, not technology. When the project began, doctors entered real patient data as intended. However, Watson’s guidelines changed often enough that updating those cases became a chore, and users soon switched to hypothetical examples. This meant that Watson could only make suggestions based on the treatment preferences and information provided by a few doctors, rather than on actual data from an entire cancer center, which skewed the advice it provided.

Moreover, the AI’s ability to discern connections is only useful up to a point. It can note a pattern linking a patient’s illness, their condition, and the medications prescribed, but any conclusions drawn from such analysis would be tenuous at best. The AI cannot definitively determine whether a link reflects causation, mere correlation, or coincidence, and it thus risks providing diagnostic conclusions without evidence-based backing.
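To see why pattern-matching alone can mislead, consider a minimal, hypothetical simulation in Python (it reflects nothing about Watson’s actual models or data). Here a confounder, disease severity, drives both which drug a doctor prescribes and the patient’s outcome, so a system that only correlates treatments with outcomes will “conclude” that the drug is harmful even though, in this toy data, it has no effect at all.

```python
import random

# Hypothetical records: severe cases get Drug A, and severity (not the
# drug) determines the outcome. The drug itself does nothing here.
random.seed(0)
records = []
for _ in range(10_000):
    severe = random.random() < 0.5                  # confounder
    drug_a = severe                                 # prescribing habit
    died = random.random() < (0.40 if severe else 0.05)
    records.append((drug_a, died))

def death_rate(rows):
    return sum(died for _, died in rows) / len(rows)

on_a = [r for r in records if r[0]]
off_a = [r for r in records if not r[0]]
print(f"Death rate on Drug A:  {death_rate(on_a):.1%}")   # ~40%
print(f"Death rate off Drug A: {death_rate(off_a):.1%}")  # ~5%
# A naive correlation engine flags Drug A as dangerous; in truth,
# severity explains the entire difference.
```

Untangling that kind of confounding takes controlled trials or deliberate causal analysis, which is precisely the evidence-based backing a correlation engine cannot supply on its own.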

Given the lack of buy-in from its users and the shortage of real data, is it any surprise that Watson failed to deliver innovative answers?

What Does Watson’s Failure Teach Us?

Watson’s problem is more human than it is technical. There are three major lessons that we can pull from the AI’s crash: 

We Need to Check Our Expectations.

We tend to believe that AI and other emerging technologies can achieve whatever their developers say they can. However, as Watson’s inability to separate correlation from causation demonstrates, the potential we read about in marketing copy can be overinflated. As users, we need to approach emerging technology with better understanding, and a healthy skepticism, before we begin relying on it.

Tools Must Be Well-Integrated. 

If doctors had been able to use the Watson interface without continually needing to revise their submissions for new guidelines, they might have provided more real patient information and used the tool more often than they did. This, in turn, may have allowed Watson to be more effective in the role it was assigned. Considering the needs of the human user is just as important as considering the technical requirements of the tool (if not more so). 

We Must Be Careful.

If the scientists at MD Anderson hadn’t been so careful, or if they had followed Watson blindly, real patients could have been at risk. We can never allow our faith in an emerging tool to be so inflated that we lose sight of the people it’s meant to help. 

Emerging technology is exciting, yes, but we also need to take the time to address the moral and practical implications of how we bring that seemingly capable technology into our lives. At the very least, it would seem wise to temper our faith with a little more skepticism.


5 Startling Ways Humans Are Completely Phone-Dependent

Smartphones have become a crutch, a portable hub, for many users in our permanently plugged-in society. Although they can make our lives infinitely easier, the control and influence they exert over our habits can be alarming. Even the limitations of current technology (battery life, for example) affect us in an exaggerated fashion.

I’m often put out when I see people checking their phones during dinner, for example; it’s as if basic etiquette has been erased by a base desire for connection. It’s true that smartphones have some great qualities that improve society and humanity, but they are also driving mass dependence.

Here are five surprising ways in which people rely on their smartphones.

1. Information Directory

Many people use their phone as a kind of external hard drive for storing vital information, like phone numbers and other contact details. Your phone may also store passwords and house other critical access information, as phones are often used for money management and health monitoring.

Even something as simple and powerful as your location can be monitored by your phone and used to personalize directions for your convenience. If your phone dies while you’re out and about, you could lose directions to where you’re going and not know what number to call to let your friends know.

According to research by Canadian psychologists published in Computers in Human Behavior, “those who think more intuitively and less analytically when given reasoning problems were more likely to rely on their Smartphones (i.e., extended mind) for information in their everyday lives.” In other words, the more we offload information-gathering to our devices, the less we may exercise effortful, analytical thinking of our own.

2. Internet Access

Some people rely on their phone for internet access, choosing to forgo service from home internet providers like Fios, Comcast, or Time Warner in favor of a simple cellular data plan. In this case, the phone serves as a conduit to the vast and increasingly vital data stream that is the internet, an umbilical cord that makes it almost impossible to disconnect.

Separation from phones, then, can lead to a perceived loss of information. According to Psychology Today, “having virtually any fact available at our fingertips creates an enriched environment that may make it more difficult to process information when we’re cut off.”

Our realities have been so changed by access to the internet, whether through Google or Snapchat, that losing that access has become akin to losing a sense like taste or smell: without it, the world is totally different.

3. Communication

For all that smartphones now offer a dizzying array of ways to connect (via phone, video conference, text, email, social media, and so on), they also seem to serve as a buffer against face-to-face communication. People rely on their phones more and more to communicate virtually, in many cases minimizing in-person interaction. And people are handling increasingly intimate and delicate matters via these digital channels.

This shift is clearly having an impact, but its full extent remains to be seen, as does its root cause. Maybe phones offer too many communication options. Maybe people opt to connect with more people via these channels than they could reasonably manage in person. Or maybe people prefer these channels because they allow connections that are either more superficial or deeper than in-person meetings.

Whatever the case, the ability to communicate digitally has had a measurable effect on people. The way we talk has changed, and studies have found that mobile communication correlates with an increase in face-to-face social anxiety among school-age children.

4. Digital Rather Than Physical

Just as virtual interaction has increased with the presence of smartphones, so have online alternatives to physical chores, like shopping. The convenience of the smartphone makes it easier to order something online than to visit the actual store. Thus, the burgeoning digital network is reducing humans’ physical footprint.

The impact of this is manifold. It may seem like an oversimplification to claim it has made us lazy, but the sheer number of mobile services available supports the claim: people can use their phones to delegate errands, order food, buy groceries, tour houses, and acquire movies, music, and entertainment, all without leaving the couch.

That doesn’t mean we’re literally dependent on our phones for these things, but it does make physical shopping feel like an inconvenience.

5. Camera

Although virtual reality is now possible with your phone, looking at everything through the camera lens is its own kind of virtual reality. As phones became an increasingly essential part of everyday life, the camera came along for the ride. Now built into almost every smartphone, the camera creates a filter for reality, a Pokémon Go-like overlay, a digital portal.

With a camera accessible at almost all times, pictures, videos, and live streams have become an increasingly important stand-in for real life, fueling the immediacy of social networks. When you go about daily life with a camera in hand, you end up looking through a particular kind of lens, one that can prevent you from fully partaking in the moment. You may even end up conflating your memories of an event with the photos and videos that recorded it.

Altogether, it’s clear that mobile technology has become something like an extra limb, with new senses we’ve grown very accustomed to. In some contexts this may seem like a superpower, but we’d all do well to keep in mind that there is more to life than tech; if our dependence runs deep enough, we might be missing it.
