We Can’t Afford to Sell Out on AI Ethics

Today, we use AI with the expectation that it will make us better than we are — faster, more efficient, more competitive, more accurate. Businesses in nearly every industry apply artificial intelligence tools to achieve goals that we would, only a decade or two ago, have derided as moonshot dreams. But as we incorporate AI into our decision-making processes, we can never forget that even as it magnifies our capabilities, it can just as plainly expose our flaws.

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included,” AI researcher Kate Crawford once wrote for the New York Times. “Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

The need for greater inclusivity and ethics-centric research in AI development is well-established — which is why it was so shocking to read about Google’s seemingly senseless firing of AI ethicist Timnit Gebru.

For years, Gebru has been an influential voice in AI research and inclusivity. She cofounded the Black in AI affinity group and is an outspoken advocate for diversity in the tech industry. In 2018, she co-wrote Gender Shades, an oft-cited investigation into how commercial facial-recognition systems misidentify women with darker skin. The team Gebru built at Google included several notable researchers and was one of the most diverse working in the AI sector.

“I can’t imagine anybody else who would be safer than me,” Gebru said in an interview with the Washington Post. “I was super visible. I’m well known in the research community, but also the regulatory space. I have a lot of grass-roots support — and this is what happened.”

So what, exactly, happened?

In November of 2020, Gebru and her team completed a research paper examining the potential risks inherent to large language models, which can be used to discern basic meaning from text and, in some cases, generate new and convincing copy.

Gebru and her team found three major areas of concern. The first was environmental: relying on large language models could lead to a significant increase in energy consumption and, by extension, our carbon footprint.

The second related to unintended bias: because large language models require massive amounts of data mined from the Internet, “racist, sexist, and otherwise abusive language” can slip into the training process. Lastly, Gebru’s team pointed out that as large language models become more adept at mimicking human language, they could be used to manufacture dangerously convincing misinformation online.

The paper was exhaustively cited and reviewed by more than thirty large-language-model experts, bias researchers, critics, and model users. So it came as a shock when Gebru’s team received instructions from HR to either retract the paper or remove the researchers’ names from the submission. Gebru addressed the feedback and asked for an explanation of why retraction was necessary. She received nothing beyond vague, anonymous feedback and further instructions to retract the paper. Again, Gebru responded to the feedback — but to no avail. She was informed that she had a week to rescind her work.

The back and forth was exhausting for Gebru, who had spent months struggling to improve diversity and advocate for the underrepresented at Google. (Black women make up only 1.9 percent of Google’s workforce.) To be silenced while furthering research on AI ethics and the potential consequences of bias in machine learning felt deeply ironic.

Frustrated, she sent an email detailing her experience to an internal listserv, Google Brain Women and Allies. Shortly thereafter, she was dismissed from Google for “conduct not befitting a Google manager.” Amid the fallout, Google AI head Jeff Dean claimed that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” that undermined the risks she outlined — a shocking accusation, given the breadth of the paper’s citations.

To Gebru, Google’s reaction felt like corporate censorship.

“[Jeff’s email] talks about how our research [paper on large language models] had gaps, it was missing some literature,” she told MIT’s Technology Review. “[The email doesn’t] sound like they’re talking to people who are experts in their area. This is not peer review. This is not reviewer #2 telling you, ‘Hey, there’s this missing citation.’ This is a group of people, who we don’t know, who are high up because they’ve been at Google for a long time, who, for some unknown reason and process that I’ve never seen ever, were given the power to shut down this research.”

“You’re not going to have papers that make the company happy all the time and don’t point out problems,” Gebru concluded in another interview for Wired. “That’s antithetical to what it means to be that kind of researcher.”

We know that diversity and ethics-focused research are crucial to the development of truly effective and unbiased AI technologies. In this context, firing Gebru — a Black, female researcher with extensive accolades for her work in AI ethics — for doing her job is senseless. There is no way to view Google’s actions as anything other than corporate censorship.

For context — in 2018, Google developed BERT, a large language model, and used it to improve how its search engine interprets queries. Last year, the company made headlines with techniques that would allow it to train a 1.6-trillion-parameter language model four times as quickly as previously possible. Large language models offer a lucrative avenue of exploration for Google; having them questioned by an in-house research team could be embarrassing at best, and limiting at worst.

In an ideal world, Google would have incorporated Gebru’s research findings into its actions and sought ways to mitigate the risks she identified. Instead, it attempted to compel her to revise the paper to include cherry-picked “positive” research and downplay her findings. Think about that for a moment — that kind of interference is roughly analogous to a pharmaceutical company asking researchers to fudge the statistics on a new drug’s side effects. Such intervention is not only unethical; it opens the door to real harm.

Then, when that interference failed, Google leadership worked to silence and discredit Gebru. As one writer for Wired concludes, that decision proves that “however sincere a company like Google’s promises may seem — corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.”

Gebru is an undeniably strong person, an authority in her field with a robust support network. She had the force of personality to stand her ground against Google. But what if someone who wasn’t quite as well-respected, supported, and brave stood in her position? How much valuable research could be quashed due to corporate politicking? It’s a frightening thought.

The Gebru fallout tells us in no uncertain terms that we need to give real consideration to how much editorial control tech companies should have over research, even when they employ the researchers who produce it. If left unchecked, corporate censorship could usher in the worst iteration of AI: one that amplifies our biases, harms the already underserved, and dismisses fairness in favor of profit.

This article was originally published on BeingHuman.ai

April 16th, 2021 | Technology

CEOs: AI Is Not A Magic Wand

The technology holds great promise, no question — but deployment must be done strategically, and with the understanding that you likely won’t see gains on your first attempt to integrate it.

If you achieve the improbable often enough, even the impossible stops feeling quite so out of reach.

Over the last several decades, artificial intelligence has permeated almost every American business sector. Its proponents position AI as the tech-savvy executive leader’s magic wand — a tool that can wave away inefficiency and spark new solutions in a pinch. Its apparent success has winched up our suspension of disbelief to ever-loftier heights; now, even if AI tools aren’t a perfect fix to a given challenge, we expect them to provide some significant benefit to our problem-solving efforts.

This false vision of AI’s capability as a one-size-fits-all tool is deeply problematic, but it’s not hard to see where the misunderstanding started. AI tools have accomplished a great deal across a shockingly wide variety of industries.

In pharma, AI helps researchers home in on new drug candidates; in sustainable agriculture, it can be used to optimize water and waste management; and in marketing, AI chatbots have revolutionized the norms of customer service and made it easier than ever for customers to find straightforward answers to their questions quickly.

Market research provides similar backing for AI’s versatility and value. In 2018, PwC released a report noting that the value derived from AI’s impact on consumer behavior (e.g., through product personalization or greater efficiency) could top $9.1 trillion by 2030.

McKinsey researchers similarly note that 63 percent of executives whose companies have adopted AI say that the change has “provided an uptick in revenue in the business areas where it is used,” with respondents from high performers nearly three times likelier than those from other companies to report revenue gains of more than 10 percent. Forty-four percent say that the use of AI has reduced costs.

Findings like these paint a vision of AI as having an almost universal, plug-and-play ability to improve business outcomes. We’ve become so used to AI being a “fix” that our tendency to be strategic about how we deploy such tools has waned.

Earlier this year, a joint study conducted by the Boston Consulting Group and MIT Sloan Management Review found that only 11 percent of the firms that have deployed artificial intelligence see a “sizable” return on their investments.

This is alarming, given the sheer volume of money being poured into AI. Take the healthcare industry as an example: in 2019, surveyed healthcare executives estimated that their organizations would invest an average of $39.7 million in AI over the following five years. To not receive a substantial return on that money would be disappointing, to say the very least.

As reported by Wired, the MIT/BCG report “is one of the first to explore whether companies are benefiting from AI. Its sobering finding offers a dose of realism amid recent AI hype. The report also offers some clues as to why some companies are profiting from AI and others appear to be pouring money down the drain.”

What, then, is the main culprit? According to researchers, it seems to be a lack of strategic direction during the implementation process.

“The people that are really getting value are stepping back and letting the machine tell them what they can do differently,” Sam Ransbotham, a professor at Boston College who co-authored the report, commented. “The gist is not blindly applying AI.”

The study’s researchers found that the most successful companies used their early experiences with AI tools — good or ill — to improve their business practices and better orient artificial intelligence within their operations. Of those that took this approach, 73 percent said that they saw returns on their investments. Companies that paired this learning mindset with efforts to improve their algorithms also tended to see better returns than those that took a plug-and-play approach.

“The idea that either humans or machines are going to be superior, that’s the same sort of fallacious thinking,” Ransbotham told reporters.

Scientific American writers Griffin McCutcheon, John Malloy, Caitlyn Hall, and Nivedita Mahesh put Ransbotham’s point another way in an article titled — tellingly — “AI Isn’t the Solution to All of Our Problems.” They write:

“The belief that AI is a cure-all tool that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous as it prevents other effective solutions from being implemented earlier or even explored. Instead, we need to both build AI responsibly and understand where it can be reasonably applied.”

In other words: we need to stop viewing AI as a fix-it tool and start treating it as a consultant to collaborate with over months or years. While there’s little doubt that artificial intelligence can help business leaders cultivate profit and improve their businesses, deployment of the technology must be done strategically — and with the understanding that the business probably won’t see the gains it hopes for on its first attempt to integrate AI.

If business leaders genuinely intend to make the most of the opportunity that artificial intelligence presents, they should be prepared to workshop their approach. Adopt a flexible, experimental, and strategic mindset. Be ready to adjust your business operations to address any inefficiencies or opportunities the technology may spotlight — and, by that same token, take the initiative to continually hone your algorithms for greater accuracy. AI can provide guidance and inspiration, but it won’t offer outright answers.

Businesses are investing millions — often tens of millions — in AI technology. Why not take the time to learn how to use it properly?

This article was originally published on ChiefExecutive.net

January 23rd, 2021 | Business, Technology

What Does It Matter If AI Writes Poetry?

Robots might take our jobs, but they (probably) won’t replace our wordsmiths.

These days, concerns about the slow proliferation of AI-powered workers underlie a near-constant, if quiet, discussion about which positions will be lost in the shuffle. According to a report published earlier this year by the Brookings Institution, roughly a quarter of jobs in the United States are at “high risk” of automation. The risk is especially acute in fields such as food service, production operations, transportation, and administrative support — all sectors that rely on repetitive work. However, some in creatively driven disciplines feel that the thoughtful nature of their work protects them from automation.

Take, for example, the much-shared AI-assisted Harry Potter parody that fans have shorthanded as A Giant Pile of Ash, in which a predictive-text algorithm has Ron eating Hermione’s family. It’s certainly a memorable passage — both for its utter lack of cohesion and its familiarity. The tone and language almost mimic J.K. Rowling’s style — if J.K. Rowling lost all sense and decided to create cannibalistic characters, that is. Passages like these are both comedic and oddly comforting. They amuse us, reassure us of humans’ literary superiority, and prove to us that our written voices can’t be replaced — not yet.

However, not everything produced by AI is as ludicrous as A Giant Pile of Ash. Some pieces teeter on the edge of sophistication. Journalist John A. Tures experimented with the quality of AI-written text for the Observer. His findings? Computers can condense long articles into blurbs well enough, if with errors and the occasional missed point. As Tures described, “It’s like using Google Translate to convert this into a different language, another robot we probably didn’t think about as a robot.” It’s not perfect, he writes, but neither is it entirely off the mark.

Moreover, he notes, some news organizations are already using AI text bots to do low-level reporting. The Washington Post, for example, uses a bot called Heliograf to handle local stories that human reporters might not have the time to cover. Tures notes that these bots are generally effective at writing grammatically accurate copy quickly, but tend to lose points on understanding the broader context and meaningful nuance within a topic. “They are vulnerable to not understanding satires, spoofs or mistakes,” he writes.

And yet, even with their flaws, this technology is significantly more capable than those who look only at comedic misfires like A Giant Pile of Ash might believe. In an article for the Financial Times, writer Marcus du Sautoy reflects on his experience with AI writing, commenting, “I even employed code to get AI to write 350 words of my current book. No one has yet identified the algorithmically generated passage (which I’m not sure I’m so pleased about, given that I’m hoping, as an author, to be hard to replace).”

Du Sautoy does note that AI struggles to create overarching narratives and often loses track of broader ideas. The technology is far from being able to write a novel — but still, even though he relegates his unease about the AI blending seamlessly into his work to a parenthetical aside, the point he makes is essential. AI is coming dangerously close to being able to mimic the appearance of literature, if not the substance.

Take Google’s POEMPORTRAITS as an example. In early spring, engineers working in partnership with Google’s Arts & Culture Lab rolled out an algorithm that could write poetry. The project leaders, Ross Goodwin and Es Devlin, trained an algorithm to write poems by supplying the program with over 25 million words written by 19th-century poets. As Devlin describes in a blog post, “It works a bit like predictive text: it doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model.”
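
To make Devlin’s “predictive text” analogy concrete, here is a minimal, purely illustrative sketch: learn which words tend to follow which in a training corpus, then sample a new line from those statistics. It uses a toy word-level Markov chain on a made-up scrap of text, not the neural network behind POEMPORTRAITS, and it is meant only to show why the output can sound fluent without meaning anything.

```python
import random
from collections import defaultdict

# Toy illustration of "predictive text": learn which words tend to follow
# which, then sample a new line from those statistics. POEMPORTRAITS uses a
# neural network trained on roughly 25 million words of 19th-century verse;
# this sketch substitutes a tiny made-up corpus and a word-level Markov chain
# so the underlying idea fits in a few lines.

corpus = (
    "the dark divided spring returns to the unknown hours "
    "and the splendor of the dark arts converges on the spring"
).split()

# Map each word to the words observed immediately after it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Sample a short line by repeatedly choosing an observed next word."""
    word, line = seed, [seed]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:  # dead end: this word was never followed by anything
            break
        word = random.choice(followers)
        line.append(word)
    return " ".join(line)

print(generate("the"))
```

Even this crude version hints at why the results can read as pretty but hollow: each word is chosen only because it has plausibly followed the previous one somewhere in the training text.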

When users donate a word and a self-portrait, the program lays an AI-written poem over a colorized, Instagrammable version of their photograph. The poems themselves aren’t half bad on a first read; Devlin’s goes: “This convergence of the unknown hours, arts and splendor of the dark divided spring.”

As Devlin herself puts it, the poetry produced is “surprisingly poignant, and at other times nonsensical.” The AI-provided poem sounds pretty, but is at best vague, and at worst devoid of meaning altogether. That is a notable lapse, because poetry, at its heart, is about creating meaning and crafting implication through artful word selection. A beautiful turn of phrase matters — but it is in no way the most important part of writing verse. In this context, AI-provided poetry seems hollow and shallow, without the depth or meaning that drives literary tradition.

In other words, even beautiful phrases will miss the point if they don’t have a point to begin with.

In his article for the Observer, John A. Tures asked a journalism conference attendee his thoughts on what robots struggle with when it comes to writing. “He pointed out that robots don’t handle literature well,” Tures writes, “covering the facts, and maybe reactions, but not reason. It wouldn’t understand why something matters. Can you imagine a robot trying to figure out why To Kill A Mockingbird matters?”

Robots are going to edge into writing eventually — the forward march is already happening. Prose and poetry aren’t as protected within the bastion of creative employment as we think they are; over time, we could see robots taking over roles that journalists used to hold. In our fake-news-dominated social media landscape, bad actors could even weaponize the technology to flood our media feeds and message boards. It’s undoubtedly dangerous — but that’s a risk that’s been talked about before.

Instead, I find myself wondering about the quieter, less immediately impactful risks. I’m worried that when AI writes what we read, our ability to think deeply about ourselves and our society will slowly erode.

Societies and individuals grow only when they are pushed to question themselves: to think, to delve into the why behind their literature. We’re taught why To Kill a Mockingbird matters because that process of deep reading and introspection makes us think about ourselves, our communities, and what it means to want to change. In a world where so much of our communication is distilled into tweet-optimized headlines and blurbs, where we rarely take the time to read past a headline or first paragraph, these shallow, AI-penned lines are problematic — not because they exist, but because they do not spur thought.

“This convergence of the unknown hours, arts and splendor of the dark divided spring.”

The line sounds beautiful; it even evokes a vague image. Yet it has no underlying message — although, to be fair, it wasn’t meant to make a point beyond coherence. It’s shallow entertainment under a thin veil of sophistication. It can’t sustain an overarching narrative, doesn’t capture nuance, and fails to grasp the heartbeat of human history, empathy, and understanding.

If it doesn’t have that foundation to create a message, what does it have? When we get to a place where AI is writing for us — and make no mistake, that time will come — will we be thinking less? Will there be less depth to our stories, less thought inspired by their twists? Will literature become an echo chamber rather than a path forward? At the very least, will these stories take over our Twitter feeds and Facebook newsfeeds, pulling us away from human-crafted stories that push us to think?

I worry that’s the case. But then again, maybe I’m wrong — maybe reading about how an AI thinks that Ron ate Hermione’s family provides enough dark and hackneyed comedy to reinforce our belief that AI will never step beyond its assigned role as a ludicrous word-hacker.

For now, at least.

April 17th, 2020 | Culture, Technology

AI Fails and What They Teach Us About Emerging Technology

These days, we’ve become all but desensitized to the miraculous convenience of AI. We’re not surprised when we open Netflix to find feeds immediately and perfectly tailored to our tastes, and we’re not taken aback when Facebook’s facial recognition tech picks our face out of a group-picture lineup. Ten years ago, we might have made a polite excuse and beaten a quick retreat if we heard a friend asking an invisible person to dim the lights or report the weather. Now, we barely blink — and perhaps wonder if we should get an Echo Dot, too.

We have become so accustomed to AI quietly incorporating itself into almost every aspect of our day-to-day lives that our sense of what is possible no longer has hard limits. Rather than greet new claims about AI’s capabilities with disbelief, we regard them with interested surprise and think — could I use that?

But what happens when AI doesn’t work as well as we expect? What happens when our near-boundless faith in AI’s usefulness is misplaced, and the high-tech tools we’ve begun to rely on start cracking under the weight of the responsibilities we delegate? 

Let’s consider an example.

AI Can’t Cure Cancer — Or Can It? An IBM Case Study 

When IBM’s Watson debuted in 2014, it charmed investors, consumers, and tech aficionados alike. Proponents boasted that Watson’s information-gathering capabilities would make it an invaluable resource for doctors who might otherwise not have the time or opportunity to keep up with the constant influx of medical knowledge. During a demo that same year, Watson dazzled industry professionals and investors by analyzing an eclectic collection of symptoms and offering a series of potential diagnoses, each ranked by the system’s confidence and linked to relevant medical literature. The AI’s apparent command of rare diseases and its ability to provide diagnostic conclusions were both impressive and inspiring.

Watson’s positive impression spurred investment. Encouraged by the AI’s potential, MD Anderson, a cancer center within the University of Texas, signed a multi-million dollar contract with IBM to apply Watson’s cognitive computing capabilities to its fight against cancer. Watson for Oncology was meant to parse enormous quantities of case data and provide novel insights that would help doctors provide better and more effective care to cancer patients. 

Unfortunately, the tool didn’t exactly deliver on its marketing pitch. 

In 2017, auditors at the University of Texas submitted a caustic report claiming that Watson not only cost MD Anderson over $62 million but also failed to achieve its goals. Doctors lambasted the tool for its propensity to give bad advice; in one memorable case reported by The Verge, the AI suggested that a patient with severe bleeding receive a drug that would worsen their condition. Luckily, the patient was hypothetical and no real people were hurt; however, users were still understandably annoyed by Watson’s apparent ineptitude. As one particularly scathing doctor said in a report for IBM, “This product is a piece of s—. We bought it for marketing and with hopes that you would achieve the vision. We can’t use it for most cases.”

But is the project’s failure to deliver on its hype all Watson’s fault? Not exactly. 

Watson’s main flaw lay in implementation, not technology. When the project began, doctors entered real patient data as intended. However, Watson’s guidelines changed often enough that keeping those cases up to date became a chore; soon, users switched to hypothetical examples. This meant that Watson could only make suggestions based on the treatment preferences and information provided by a few doctors, rather than actual data from an entire cancer center, which skewed the advice it gave.

Moreover, the AI’s ability to discern connections is only useful up to a point. It can note a pattern linking a patient’s illness, their condition, and the medications prescribed, but any conclusions drawn from such analysis would be tenuous at best. The AI cannot definitively determine whether a link reflects correlation, causation, or mere coincidence — and thus risks providing diagnostic conclusions without evidence-based backing.
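
To see why pattern-finding alone falls short, consider a small, entirely hypothetical example (a sketch of the statistical pitfall, not of Watson’s actual system): a hidden factor drives both a drug’s dose and the length of a hospital stay, so the two correlate strongly even though neither causes the other.

```python
import random

# Entirely hypothetical data-generating process: a hidden confounder,
# "severity", drives both the dose of a drug and the length of a hospital
# stay. The drug has no effect on the stay in this toy model, yet the two
# variables end up strongly correlated.

random.seed(0)
severity = [random.uniform(0, 1) for _ in range(1000)]
dose = [s * 10 + random.gauss(0, 1) for s in severity]       # depends only on severity
stay_days = [s * 14 + random.gauss(0, 2) for s in severity]  # also depends only on severity

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Prints a coefficient close to 1.0 even though neither variable causes the other.
print(f"correlation(dose, stay_days): {pearson(dose, stay_days):.2f}")
```

Nothing in the numbers themselves distinguishes this from a genuine causal link; that requires knowledge of how the data were generated, which no amount of pattern-matching can supply on its own.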

Given the lack of user support and the shortage of real information, is it any surprise that Watson failed to deliver innovative answers? 

What Does Watson’s Failure Teach Us?

Watson’s problem is more human than it is technical. There are three major lessons that we can pull from the AI’s crash: 

We Need to Check Our Expectations.

We tend to believe that AI and other emerging technologies can achieve whatever their developers say they can. However, as Watson’s inability to separate correlation from causation demonstrates, the potential we read about in marketing copy can be overinflated. As users, we need to understand emerging technology better, and treat its claims with healthy skepticism, before we begin relying on it. 

Tools Must Be Well-Integrated. 

If doctors had been able to use the Watson interface without continually needing to revise their submissions for new guidelines, they might have provided more real patient information and used the tool more often than they did. This, in turn, may have allowed Watson to be more effective in the role it was assigned. Considering the needs of the human user is just as important as considering the technical requirements of the tool (if not more so). 

We Must Be Careful.

If the scientists at MD Anderson hadn’t been so careful, or if they had followed Watson blindly, real patients could have been at risk. We can never allow our faith in an emerging tool to be so inflated that we lose sight of the people it’s meant to help. 

Emerging technology is exciting, yes — but we also need to take the time to address the moral and practical implications of how we bring that seemingly capable technology into our lives. At the very least, it would seem wise to be a little more skeptical in our faith. 

September 3rd, 2019 | Uncategorized

How AI Will Help Build Better Cities

A “smart city,” as we think of it now, is not a singular, centrally controlled entity but a whole collection of intelligently designed machines and functions. Essential aspects of city life like traffic flow, energy distribution, and pedestrian behavior will one day be monitored, understood, and acted upon by smart machines with the goal of improving the way we live. AI has already transformed so many aspects of city life, and one day it may guide an even greater proportion of municipal functions. Here’s a look at just a few of the ways this will happen.

Traffic

Even in a public transportation haven like New York or Chicago, traffic congestion is a major issue. AI can provide a major boost to the work of city engineers, making a drive through the city less of a hassle and reducing the overall time spent on the road. AI can collect and analyze traffic data as it’s happening, and eventually even provide routing solutions for autonomous vehicles.

Not only that, but this data can give drivers real-time information on open parking, making the desperate search for a spot downtown a thing of the past. Smart traffic signals that observe and analyze vehicle flow can keep drivers moving without wasting time at automated red lights. Once such systems are fully integrated with self-driving cars, it’s not a stretch to imagine a daily commute happening with little to no input from drivers.
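
As a purely hypothetical illustration of that idea (no particular city system or vendor is implied), the sketch below scores each approach to an intersection from live observations and gives the next green light to the most pressured one; every number and name is invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of a data-driven signal controller: instead of cycling
# on a fixed timer, it scores each approach from live observations (queue
# length plus how long the front vehicle has waited) and gives the next green
# to the highest-pressure approach. Real adaptive-control systems are far more
# sophisticated; this only shows flow data, not a clock, driving the decision.

@dataclass
class Approach:
    name: str
    queued_vehicles: int    # e.g., counted by cameras or induction loops
    seconds_waiting: float  # wait time of the vehicle at the head of the queue

def choose_green_phase(approaches: list[Approach]) -> Approach:
    """Favor long queues, but boost approaches that have waited a long time
    so that low-volume side streets are never starved."""
    def pressure(a: Approach) -> float:
        return a.queued_vehicles + 0.2 * a.seconds_waiting
    return max(approaches, key=pressure)

observed = [
    Approach("northbound", queued_vehicles=12, seconds_waiting=10),
    Approach("eastbound", queued_vehicles=3, seconds_waiting=95),
]
print(choose_green_phase(observed).name)  # "eastbound" despite its shorter queue
```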

Power

As cities grow, the need for power increases exponentially. One of the most consistent challenges of city management is ensuring that every citizen has their energy needs met, and while green solutions have already made an impact in reducing waste, AI can take the next step in bringing our cities closer to fully self-sufficient energy.

Our power grid is aching for a modern overhaul, and one may just be in store, thanks to smart-grid initiatives that bring AI to how energy is distributed and managed. The efficiency of a smart machine means that the power of the future will be delivered with less of the waste and redundancy that marks our present grid. The U.S. Department of Energy recognizes the potential of such technology, having made the development of a smart grid an official directive in 2010.

Safety

Artificial intelligence can make driving safer, and it can improve conditions on the sidewalks and in the alleyways, too. The city of the future looks to be not only more efficient, but safer as well.

In its best form, AI will allow city officials to better monitor neighborhoods and districts whose problems have historically flown under the radar. Police departments nationwide have already adopted ShotSpotter technology to better crack down on gun crime, with promising results for holistic, community-based solutions to the issues facing urban communities.

Concerns about privacy are valid and important, but video surveillance with the proper protocols in place could give police a huge boost in fighting street crime with the help of AI. Such tech is still in its nascent stages, yet one day police may use intelligent analysis to spot suspicious behavior that could indicate a violent crime about to happen, or to follow a suspect through crowds in the city streets. Crack AI researchers are already on the case.

 

If all this talk of AI-infused cities sounds like science fiction, it isn’t. In fact, we in the U.S. have some catching up to do. Earlier this year, Chinese e-commerce giant Alibaba rolled out its Smart City platform outside its home country for the first time, as Kuala Lumpur adopted the AI data-analysis program. While the city mostly uses the tech for operational tasks like transportation, such a commitment to forward-thinking technology points to a future in which big cities welcome AI assistance with open arms.

Cities are often described as the best expression of America’s melting pot: a huge variety of people, with disparate origins, interests, and dreams, all coming together around the principle that we work better together than apart. Our cities of the future will likely fulfill that promise better than ever imagined, thanks to the gains in efficiency and safety that AI enables.

May 16th, 2018 | Technology