What Does It Matter If AI Writes Poetry?

Robots might take our jobs, but they (probably) won’t replace our wordsmiths.

These days, concerns about the steady proliferation of AI-powered workers underlie a near-constant, if quiet, discussion about which positions will be lost in the shuffle. According to a report published earlier this year by the Brookings Institution, roughly a quarter of jobs in the United States are at “high risk” of automation. The risk is especially acute in fields such as food service, production operations, transportation, and administrative support — all sectors built on repetitive work. However, some in creatively driven disciplines feel that the thoughtful nature of their work protects them from automation.

Take A Giant Pile of Ash, the AI-generated Harry Potter chapter in which Ron, among other absurdities, begins to eat Hermione’s family. It’s certainly a memorable piece of writing — both for its utter lack of cohesion and its familiarity. The tone and language almost mimic J.K. Rowling’s style — if J.K. Rowling lost all sense and decided to create cannibalistic characters, that is. Passages like these are both comedic and oddly comforting. They amuse us, reassure us of humans’ literary superiority, and prove to us that our written voices can’t be replaced — not yet.

However, not everything produced by AI is as ludicrous as A Giant Pile of Ash. Some pieces teeter on the edge of sophistication. Journalist John A. Tures tested the quality of AI-written text for the Observer. His findings? Computers can condense long articles into blurbs well enough, if with errors and the occasional missed point. As Tures described, “It’s like using Google Translate to convert this into a different language, another robot we probably didn’t think about as a robot.” It’s not perfect, he writes, but neither is it entirely off the mark.

Moreover, he notes, some news organizations are already using AI text bots to do low-level reporting. The Washington Post, for example, uses a bot called Heliograf to handle local stories that human reporters might not have the time to cover. Tures notes that these bots are generally effective at writing grammatically accurate copy quickly, but tend to lose points on understanding the broader context and meaningful nuance within a topic. “They are vulnerable to not understanding satires, spoofs or mistakes,” he writes.

And yet, even with its flaws, this technology is significantly more capable than those who look only at comedic misfires like A Giant Pile of Ash might believe. In an article for the Financial Times, writer Marcus du Sautoy reflects on his experience with AI writing, commenting, “I even employed code to get AI to write 350 words of my current book. No one has yet identified the algorithmically generated passage (which I’m not sure I’m so pleased about, given that I’m hoping, as an author, to be hard to replace).”

Du Sautoy does note that AI struggles to create overarching narratives and often loses track of broader ideas. The technology is far from being able to write a novel — but still, even though he tucks his unease at how seamlessly the AI’s passage blended into his work into a parenthetical aside, the point he makes is essential. AI is coming dangerously close to being able to mimic the appearance of literature, if not the substance.

Take Google’s POEMPORTRAITS as an example. In early spring, engineers working in partnership with Google’s Arts & Culture Lab rolled out an algorithm that could write poetry. The project leaders, Ross Goodwin and Es Devlin, trained an algorithm to write poems by supplying the program with over 25 million words written by 19th-century poets. As Devlin describes in a blog post, “It works a bit like predictive text: it doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model.”
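Devlin’s “predictive text” comparison points at a real family of techniques. As a rough, hypothetical sketch (the actual POEMPORTRAITS system used a far more sophisticated neural model trained on those 25 million words), here is a minimal word-level Markov chain in Python that builds a statistical model of its training material and samples new phrases from it, without copying whole passages:

```python
import random
from collections import defaultdict

# Toy stand-in for a "predictive text" poetry model (illustrative only):
# a word-level Markov chain that maps each two-word context to the words
# observed after it, then samples from those counts to generate new text.

def train(words, order=2):
    """Build a statistical model: context tuple -> observed next words."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, seed, length=20):
    """Repeatedly sample a plausible next word for the current context."""
    out = list(seed)
    for _ in range(length):
        candidates = model.get(tuple(out[-len(seed):]))
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

# A real model would train on millions of words of 19th-century verse.
corpus = "the dark divided spring of the unknown hours of the dark arts".split()
print(generate(train(corpus), seed=("the", "dark")))
```

Scaled up from a dozen words to millions, a model like this starts producing lines that sound like its training poets without quoting them — which is exactly the behavior Devlin describes.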

When users donate a word and a self-portrait, the program overlays an AI-written poem over a colorized, Instagrammable version of their photograph. The poems themselves aren’t half bad on the first read; Devlin’s goes: “This convergence of the unknown hours, arts and splendor of the dark divided spring.”

As Devlin herself puts it, the poetry produced is “surprisingly poignant, and at other times nonsensical.” The AI-provided poem sounds pretty, but is at best vague, and at worst devoid of meaning altogether. It’s a notable lapse, because poetry, at its heart, is about creating meaning and crafting implication through artful word selection. Turn-of-phrase beauty matters — but it’s in no way the most important part of writing verse. In this context, AI-provided poetry seems hollow and shallow, without the depth or meaning that drives literary tradition.

In other words, even beautiful phrases will miss the point if they don’t have a point to begin with.

In his article for the Observer, John A. Tures asked a journalism conference attendee his thoughts on what robots struggle with when it comes to writing. “He pointed out that robots don’t handle literature well,” Tures writes, “covering the facts, and maybe reactions, but not reason. It wouldn’t understand why something matters. Can you imagine a robot trying to figure out why To Kill A Mockingbird matters?”

Robots are going to move into writing eventually — the forward march is already underway. Prose and poetry aren’t as protected within the creative employment bastion as we think they are; over time, we could see robots taking over roles that journalists used to hold. In our fake-news-dominated social media landscape, bad actors could even weaponize the technology to flood our media feeds and message boards. It’s undoubtedly dangerous — but that’s a risk that’s been talked about before.

Instead, I find myself wondering about the quieter, less immediately impactful risks. I’m worried that when AI writes what we read, our ability to think deeply about ourselves and our society will slowly erode.

Societies and individuals grow only when they are pushed to question themselves: to think, to delve into the why behind their literature. We’re taught why To Kill a Mockingbird matters because that process of deep reading and introspection makes us think about ourselves, our communities, and what it means to want to change. In a world where so much of our communication is distilled down into tweet-optimized headlines and blurbs, where we’re not taking the time to read below a headline or first paragraph, these shallow, AI-penned lines are problematic — not because they exist, but because they do not spur thought.

“This convergence of the unknown hours, arts and splendor of the dark divided spring.”

The line sounds beautiful; it even evokes a vague image. Yet it has no underlying message — although, to be fair, it wasn’t meant to make a point beyond coherency. It’s shallow entertainment under a thin veil of sophistication. It fails at overarching narratives, misses nuance, and never grasps the heartbeat of human history, empathy, and understanding.

If it doesn’t have that foundation to create a message, what does it have? When we get to a place where AI is writing for us — and make no mistake, that time will come — are we going to be thinking less? Will there be less depth to our stories, less thought inspired by their twists? Will literature become an echo chamber rather than a path forward? At the very least, will these stories take over our Twitter feeds and Facebook newsfeeds, pulling us away from human-crafted stories that push us to think?

I worry that’s the case. But then again, maybe I’m wrong — maybe reading about how an AI thinks that Ron ate Hermione’s family provides enough dark and hackneyed comedy to shore up our belief that AI will never step beyond its assigned role as a ludicrous word-hacker.

For now, at least.


AI Fails and What They Teach Us About Emerging Technology

These days, we’ve become all but desensitized to the miraculous convenience of AI. We’re not surprised when we open Netflix to find feeds immediately and perfectly tailored to our tastes, and we’re not taken aback when Facebook’s facial recognition tech picks our face out of a group-picture lineup. Ten years ago, we might have made a polite excuse and beaten a quick retreat upon hearing a friend ask an invisible person to dim the lights or report the weather. Now, we barely blink — and perhaps wonder if we should get an Echo Dot, too.

We have become so accustomed to AI quietly incorporating itself into almost every aspect of our day-to-day lives that we’ve stopped placing hard limits on what we believe is possible. Rather than greet new claims about AI’s capabilities with disbelief, we regard them with interested surprise and think — could I use that?

But what happens when AI doesn’t work as well as we expect? What happens when our near-boundless faith in AI’s usefulness is misplaced, and the high-tech tools we’ve begun to rely on start cracking under the weight of the responsibilities we delegate? 

Let’s consider an example.

AI Can’t Cure Cancer — Or Can It? An IBM Case Study 

When IBM’s Watson debuted in 2014, it charmed investors, consumers, and tech aficionados alike. Proponents boasted that Watson’s information-gathering capabilities would make it an invaluable resource for doctors who might otherwise not have the time or opportunity to keep up with the constant influx of medical knowledge. During a demo that same year, Watson dazzled industry professionals and investors by analyzing an eclectic collection of symptoms and offering a series of potential diagnoses, each ranked by the system’s confidence and linked to relevant medical literature. The AI’s apparent command of rare diseases and its ability to provide diagnostic conclusions were both impressive and inspiring.

Watson’s positive impression spurred investment. Encouraged by the AI’s potential, MD Anderson, a cancer center within the University of Texas, signed a multi-million dollar contract with IBM to apply Watson’s cognitive computing capabilities to its fight against cancer. Watson for Oncology was meant to parse enormous quantities of case data and provide novel insights that would help doctors provide better and more effective care to cancer patients. 

Unfortunately, the tool didn’t exactly deliver on its marketing pitch. 

In 2017, auditors at the University of Texas submitted a caustic report claiming that Watson not only cost MD Anderson over $62 million but also failed to achieve its goals. Doctors lambasted the tool for its propensity to give bad advice; in one memorable case reported by the Verge, the AI suggested that a patient with severe bleeding receive a drug that would worsen their condition. Luckily, the patient was hypothetical, and no real people were hurt; however, users were still understandably annoyed by Watson’s apparent ineptitude. As one particularly scathing doctor said in a report for IBM, “This product is a piece of s—. We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.”

But is the project’s failure to deliver on its hype all Watson’s fault? Not exactly. 

Watson’s main flaw was with implementation, not technology. When the project began, doctors entered real patient data as intended. However, Watson’s guidelines changed often enough that updating those cases became a chore; soon, users switched to hypothetical examples. This meant that Watson could only make suggestions based on the treatment preferences and information provided by a few doctors, rather than the actual data from an entire cancer center, thereby skewing the advice it provided. 

Moreover, the AI’s ability to discern connections is only useful to a point. It can note a pattern between a patient with a given illness, their condition, and the medications prescribed, but any conclusions drawn from such analysis would be tenuous at best. The AI cannot definitively determine whether a link is correlation, causation, or mere coincidence — and thus risks providing diagnostic conclusions without evidence-based backing.
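To see why that matters, consider a deliberately contrived sketch in Python (the data is invented and has nothing to do with Watson’s real inputs): a confounder, disease severity, drives both which patients receive a hypothetical drug and how they fare, so a naive pattern-finder concludes the drug is deadly even though it does nothing at all.

```python
import random

# Hypothetical illustration of confounding (invented data, illustrative only):
# sicker patients are the ones who get the drug, and sicker patients die more
# often, so drug use correlates with death even though the drug is inert.
random.seed(0)
patients = []
for _ in range(10_000):
    severe = random.random() < 0.5
    got_drug = severe                                     # prescribed by severity
    died = random.random() < (0.40 if severe else 0.05)   # driven by severity alone
    patients.append((got_drug, died))

def death_rate(group):
    return sum(died for _, died in group) / len(group)

treated = [p for p in patients if p[0]]
untreated = [p for p in patients if not p[0]]
print(f"death rate with drug:    {death_rate(treated):.2%}")    # ~40%
print(f"death rate without drug: {death_rate(untreated):.2%}")  # ~5%
```

A system that only counts co-occurrences “discovers” an eightfold jump in mortality among treated patients; a clinician, knowing why the drug was prescribed, sees the correlation for what it is.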

Given the lack of user support and the shortage of real information, is it any surprise that Watson failed to deliver innovative answers? 

What Does Watson’s Failure Teach Us?

Watson’s problem is more human than it is technical. There are three major lessons that we can pull from the AI’s crash: 

We Need to Check Our Expectations.

We tend to believe that AI and emerging technology can achieve whatever their developers say they can. However, as Watson’s inability to separate correlation and causation demonstrates, the potential we read in marketing copy can be overinflated. As users, we need to approach emerging technology with more understanding and skepticism before we begin relying on it. 

Tools Must Be Well-Integrated. 

If doctors had been able to use the Watson interface without continually needing to revise their submissions for new guidelines, they might have provided more real patient information and used the tool more often than they did. This, in turn, may have allowed Watson to be more effective in the role it was assigned. Considering the needs of the human user is just as important as considering the technical requirements of the tool (if not more so). 

We Must Be Careful.

If the scientists at MD Anderson hadn’t been so careful, or if they had followed Watson blindly, real patients could have been at risk. We can never allow our faith in an emerging tool to be so inflated that we lose sight of the people it’s meant to help. 

Emerging technology is exciting, yes — but we also need to take the time to address the moral and practical implications of how we bring that seemingly capable technology into our lives. At the very least, it would seem wise to be a little more skeptical in our faith. 


How AI Will Help Build Better Cities

A “smart city,” as we think of it now, is not a singular, centrally controlled entity but a whole collection of intelligently designed machines and functions. Essential aspects of city life like traffic flow, energy distribution, and pedestrian behavior will one day be monitored, understood, and acted upon by smart machines with the goal of improving the way we live. AI has already transformed so many aspects of city life, and one day it may guide an even greater proportion of municipal functions. Here’s a look at just a few of the ways this will happen.

Traffic

Even in a public transportation haven like New York or Chicago, traffic congestion is a major issue. AI can provide a major boost to the work of city engineers, making a drive through the city less of a hassle and reducing the overall time spent on the road. It can collect and analyze traffic data in real time, and eventually even provide routing solutions for autonomous vehicles.

Not only that, the same data can give drivers real-time updates on open parking, making the desperate search for a spot downtown a thing of the past. Smart traffic signals that observe and analyze vehicle flow can keep drivers moving without wasting time at automated red lights. With full integration with self-driving cars, it’s not a stretch to imagine a daily commute happening with little to no input from drivers.
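As a concrete (and heavily simplified, purely hypothetical) sketch of the idea, here is a toy adaptive signal in Python that, instead of cycling on a fixed timer, gives each green phase to whichever approach has the longest observed queue; production systems weigh far richer data, but the principle is the same:

```python
# Toy adaptive traffic signal (illustrative only, not a deployed system):
# each cycle, cars arrive, and the green phase goes to the approach with
# the longest queue rather than rotating on a fixed schedule.

CARS_SERVED_PER_GREEN = 8  # assumed throughput of one green phase

def step(queues, arrivals):
    """One signal cycle: register arrivals, then serve the busiest approach."""
    for approach, count in arrivals.items():
        queues[approach] = queues.get(approach, 0) + count
    green = max(queues, key=queues.get)          # busiest approach wins
    queues[green] = max(0, queues[green] - CARS_SERVED_PER_GREEN)
    return green

queues = {"north": 3, "south": 5, "east": 12, "west": 2}
for arrivals in [{"east": 4}, {"north": 6}, {"south": 1}]:
    green = step(queues, arrivals)
    print(f"green: {green:<5}  queues: {queues}")
```

Even this crude rule keeps the busiest direction flowing; a real smart signal would fold in sensor feeds, time of day, pedestrian demand, and coordination with neighboring intersections.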

Power

As cities grow, the need for power increases exponentially. One of the most consistent challenges of city management is ensuring that every citizen has their energy needs met, and while green solutions have already made an impact in reducing waste, AI can take the next step in bringing our cities closer to fully self-sufficient energy.

Our power grid is aching for a modern overhaul, and one may just be in store, thanks to smart grid initiatives bringing AI to the distribution and management of energy. The efficiency of a smart machine means that the power of the future will be delivered with less of the waste and redundancy that marks our present grid. The U.S. Department of Energy recognizes the potential of such technology, having made the development of a smart grid an official directive in 2010.

Safety

Artificial intelligence can not only make driving safer but also improve conditions on the sidewalks and in the alleyways. The city of the future looks to be not just more efficient, but safer, too.

In its best form, AI will allow city officials to better monitor neighborhoods and districts whose problems have historically flown under the radar. Police departments nationwide have already adopted ShotSpotter technology to better crack down on gun crime, with promising results for holistic, community-based solutions to the issues facing urban communities.

While concerns about privacy are valid and important, video surveillance with the proper protocols in place could give police a huge boost in fighting street crime with the help of AI. Such tech is still in its nascent stages, but one day police may use intelligent analysis to spot suspicious behavior that indicates a violent crime about to happen, or to follow a suspect through crowds in the city streets. Crack AI researchers are already on the case.

 

If all this talk of AI-infused cities sounds like science fiction, it isn’t. In fact, we in the U.S. have some catching up to do. Earlier this year, Chinese e-commerce giant Alibaba’s Smart City platform saw its first rollout outside China, as Kuala Lumpur adopted the AI data-analysis program. While the city mostly uses the tech for operational tasks like transportation, such a commitment to forward-thinking technology points to a future where big cities welcome AI assistance with open arms.

Cities are often described as the best expression of America’s melting pot: a huge variety of people, with disparate origins, interests, and dreams, all coming together around one principle, that we work better together than apart. Thanks to the efficiency and safety that AI enables, our cities of the future may fulfill that promise better than ever imagined.
