We Can’t Afford to Sell Out on AI Ethics

Today, we use AI with the expectation that it will make us better than we are: faster, more efficient, more competitive, more accurate. Businesses in nearly every industry apply artificial intelligence tools to achieve goals that we would, only a decade or two ago, have derided as moonshot dreams. But even as we incorporate AI into our decision-making processes, we can never forget that just as it magnifies our capabilities, it can also lay our flaws bare.

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included,” AI researcher Kate Crawford once wrote for the New York Times. “Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

The need for greater inclusivity and ethics-centric research in AI development is well-established — which is why it was so shocking to read about Google’s seemingly senseless firing of AI ethicist Timnit Gebru.

For years, Gebru has been an influential voice in AI research and inclusivity. She cofounded the Black in AI affinity group and is an outspoken advocate for diversity in the tech industry. In 2018, she co-authored the oft-cited “Gender Shades” study, an investigation into how commercial facial-analysis systems misclassify women with darker skin. The team Gebru helped build at Google included several notable researchers and was one of the most diverse working in the AI sector.

“I can’t imagine anybody else who would be safer than me,” Gebru shared in an interview with the Washington Post. “I was super visible. I’m well known in the research community, but also the regulatory space. I have a lot of grass-roots support — and this is what happened.”

So what, exactly, happened?

In November of 2020, Gebru and her team completed a research paper examining the potential risks inherent to large language models, which can be used to discern basic meaning from text and, in some cases, to generate new and convincing copy.

Gebru and her team found three major areas of concern. The first was environmental: relying on large language models could lead to a significant increase in energy consumption and, by extension, in our carbon footprint.

The second related to unintended bias; because large language models require massive amounts of data mined from the Internet, “racist, sexist, and otherwise abusive language” could accidentally be included during the training process. Lastly, Gebru’s team pointed out that as large language models become more adept at mimicking language, they could be used to manufacture dangerously convincing misinformation online.

The paper drew on an exhaustive body of citations and was reviewed by more than thirty large-language-model experts, bias researchers, critics, and model users. So it came as a shock when Gebru’s team received instructions from HR to either retract the paper or remove the researchers’ names from the submission. Gebru addressed the feedback and asked for an explanation of why retraction was necessary. She received no response other than vague, anonymous feedback and further instructions to retract the paper. Again, Gebru addressed the feedback, but to no avail. She was informed that she had a week to rescind her work.

The back-and-forth was exhausting for Gebru, who had spent months struggling to improve diversity and advocate for the underrepresented at Google. (Black women make up only 1.9 percent of Google’s workforce.) To be silenced while furthering research on AI ethics and the potential consequences of bias in machine learning felt deeply ironic.

Frustrated, she sent an email detailing her experience to an internal listserv, Google Brain Women and Allies. Shortly thereafter, she was dismissed from Google for “conduct not befitting a Google Manager.” Amid the fallout, Google AI head Jeff Dean claimed that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” that undermined the risks she outlined, a shocking accusation given the breadth of the paper’s research.

To Gebru, Google’s reaction felt like corporate censorship.

“[Jeff’s email] talks about how our research [paper on large language models] had gaps, it was missing some literature,” she told MIT’s Technology Review. “[The email doesn’t] sound like they’re talking to people who are experts in their area. This is not peer review. This is not reviewer #2 telling you, ‘Hey, there’s this missing citation.’ This is a group of people, who we don’t know, who are high up because they’ve been at Google for a long time, who, for some unknown reason and process that I’ve never seen ever, were given the power to shut down this research.”

“You’re not going to have papers that make the company happy all the time and don’t point out problems,” Gebru concluded in another interview for Wired. “That’s antithetical to what it means to be that kind of researcher.”

We know that diversity and rigorous research are crucial to the development of truly effective and unbiased AI technologies. In this context, firing Gebru, a Black female researcher with extensive accolades for her work in AI ethics, for doing her job is senseless. There is no option but to view Google’s actions as corporate censorship.

For context: in 2018, Google developed BERT, a large language model, and used it to improve its search results. Last year, the company made headlines by developing techniques that allowed it to train a 1.6-trillion-parameter language model four times as quickly as was previously possible. Large language models offer a lucrative avenue of exploration for Google; having them questioned by an in-house research team could be embarrassing at best and limiting at worst.

In an ideal world, Google would have incorporated Gebru’s research findings into its actions and sought ways to mitigate the risks she identified. Instead, it attempted to compel her to revise the paper to include cherry-picked “positive” research and downplay her findings. Think about that for a moment: that kind of interference is roughly analogous to a pharmaceutical company asking researchers to fudge the statistics on a new drug’s side effects. Such intervention is not only unethical; it opens the door to real harm.

Then, when that interference failed, Google leadership worked to silence and discredit Gebru. As one writer for Wired concludes, that decision proves that “however sincere a company like Google’s promises may seem — corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.”

Gebru is an undeniably strong person, an authority in her field with a robust support network. She had the force of personality to stand her ground against Google. But what if someone less well-respected, less supported, and less brave had been in her position? How much valuable research could be quashed by corporate politicking? It’s a frightening thought.

The Gebru fallout tells us in no uncertain terms that we need to give real consideration to how much editorial control tech companies should have over research, even when they employ the researchers who produce it. Left unchecked, corporate censorship could usher in the worst iteration of AI: one that amplifies our biases, harms the already underserved, and dismisses fairness in favor of profit.

This article was originally published on BeingHuman.ai

April 16, 2021 | Technology

CEOs: AI Is Not A Magic Wand

The technology holds great promise, no question. But deployment must be strategic, undertaken with the understanding that you likely won’t see gains on your first attempt to integrate it.

If you achieve the improbable often enough, even the impossible stops feeling quite so out of reach.

Over the last several decades, artificial intelligence has permeated almost every American business sector. Its proponents position AI as the tech-savvy executive leader’s magic wand — a tool that can wave away inefficiency and spark new solutions in a pinch. Its apparent success has winched up our suspension of disbelief to ever-loftier heights; now, even if AI tools aren’t a perfect fix to a given challenge, we expect them to provide some significant benefit to our problem-solving efforts.

This false vision of AI’s capability as a one-size-fits-all tool is deeply problematic, but it’s not hard to see where the misunderstanding started. AI tools have accomplished a great deal across a shockingly wide variety of industries.

In pharma, AI helps researchers home in on new drug candidates; in sustainable agriculture, it can be used to optimize water and waste management; and in marketing, AI chatbots have revolutionized the norms of customer service and made it easier than ever for customers to find straightforward answers to their questions quickly.

Market research lends similar support to AI’s versatility and value. In 2018, PwC released a report noting that the value derived from AI’s impact on consumer behavior (i.e., through product personalization or greater efficiency) could top $9.1 trillion by 2030.

McKinsey researchers similarly note that 63 percent of executives whose companies have adopted AI say that the change has “provided an uptick in revenue in the business areas where it is used,” with respondents from high performers nearly three times likelier than those from other companies to report revenue gains of more than 10 percent. Forty-four percent say that the use of AI has reduced costs.

Findings like these paint a vision of AI as having an almost universal, plug-and-play ability to improve business outcomes. We’ve become so used to AI being a “fix” that our tendency to be strategic about how we deploy such tools has waned.

Earlier this year, a joint study conducted by the Boston Consulting Group and MIT Sloan Management Review found that only 11 percent of the firms that have deployed artificial intelligence see a “sizable” return on their investments.

This is alarming, given the sheer sums being poured into AI. Take the healthcare industry as an example: in 2019, surveyed healthcare executives estimated that their organizations would invest an average of $39.7 million in AI over the following five years. To not receive a substantial return on that money would be disappointing, to say the very least.

As reported by Wired, the MIT/BCG report “is one of the first to explore whether companies are benefiting from AI. Its sobering finding offers a dose of realism amid recent AI hype. The report also offers some clues as to why some companies are profiting from AI and others appear to be pouring money down the drain.”

What, then, is the main culprit? According to researchers, it seems to be a lack of strategic direction during the implementation process.

“The people that are really getting value are stepping back and letting the machine tell them what they can do differently,” Sam Ransbotham, a professor at Boston College who co-authored the report, commented. “The gist is not blindly applying AI.”

The study’s researchers found that the most successful companies used their early experiences with AI tools, good or ill, to improve their business practices and better orient artificial intelligence within their operations. Of those that took this approach, 73 percent said they saw returns on their investments. Companies that paired this learning mindset with efforts to improve their algorithms also tended to see better returns than those that took a plug-and-play approach.

“The idea that either humans or machines are going to be superior, that’s the same sort of fallacious thinking,” Ransbotham told reporters.

Scientific American writers Griffin McCutcheon, John Malloy, Caitlyn Hall, and Nivedita Mahesh put Ransbotham’s point another way in an article titled — tellingly — “AI Isn’t the Solution to All of Our Problems.” They write:

“The belief that AI is a cure-all tool that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous as it prevents other effective solutions from being implemented earlier or even explored. Instead, we need to both build AI responsibly and understand where it can be reasonably applied.”

In other words: we need to stop viewing AI as a fix-it tool and start treating it as a consultant to collaborate with over months or years. While there’s little doubt that artificial intelligence can help business leaders cultivate profit and improve their businesses, deployment of the technology must be strategic, and undertaken with the understanding that the business probably won’t see the gains it hopes for on its first attempt to integrate AI.

If business leaders genuinely intend to make the most of the opportunity that artificial intelligence presents, they should be prepared to workshop their approach. Adopt a flexible, experimental, and strategic mindset. Be ready to adjust your business operations to address any inefficiencies or opportunities the technology spotlights, and, by the same token, take the initiative to continually hone your algorithms for greater accuracy. AI can provide guidance and inspiration, but it won’t offer outright answers.

Businesses are investing millions — often tens of millions — in AI technology. Why not take the time to learn how to use it properly?

This article was originally published on ChiefExecutive.net

January 23, 2021 | Business, Technology

What Does It Matter If AI Writes Poetry?

Robots might take our jobs, but they (probably) won’t replace our wordsmiths.

These days, concerns about the slow proliferation of AI-powered workers underlie a near-constant, if quiet, discussion about which positions will be lost in the shuffle. According to a report published by the Brookings Institution, roughly a quarter of jobs in the United States are at “high risk” of automation. The risk is especially acute in fields such as food service, production operations, transportation, and administrative support, all sectors that rely on repetitive work. However, some in creatively driven disciplines feel that the thoughtful nature of their work protects them from automation.

Consider “Harry Potter and the Portrait of What Looked Like a Large Pile of Ash,” a chapter that the writing collective Botnik Studios generated with predictive-text tools trained on the original Harry Potter novels, and in which Ron, among other oddities, eats Hermione’s family. It’s certainly a memorable piece of writing, both for its utter lack of cohesion and for its familiarity. The tone and language almost mimic J.K. Rowling’s style (if J.K. Rowling lost all sense and decided to create cannibalistic characters, that is). Passages like these are both comedic and oddly comforting. They amuse us, reassure us of humans’ literary superiority, and prove to us that our written voices can’t be replaced. Not yet.

However, not everything produced by AI is as ludicrous as the Pile of Ash chapter. Some pieces teeter on the edge of sophistication. Journalist John A. Tures experimented with the quality of AI-written text for the Observer. His findings? Computers can condense long articles into blurbs well enough, if with errors and the occasional missed point. As Tures described, “It’s like using Google Translate to convert this into a different language, another robot we probably didn’t think about as a robot.” It’s not perfect, he writes, but neither is it entirely off the mark.
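For readers curious what that kind of machine condensation looks like in practice, here is a minimal sketch that runs an off-the-shelf summarization model from the open-source Hugging Face transformers library over a short sample passage. It is an illustration only, not the tool Tures tested; the model name and the sample text are simply assumptions chosen for the example.

```python
# A minimal sketch of machine summarization, assuming the open-source
# Hugging Face "transformers" library and a publicly available model.
# Illustrative only -- not the specific system Tures evaluated.
from transformers import pipeline

# Load a general-purpose summarization model (weights download on first run).
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Sample input text, invented for this example.
article = (
    "Artificial intelligence has moved into newsrooms, where text-generating "
    "bots now draft short reports on local sports scores, election tallies, "
    "and earnings releases. Editors say the bots are fast and grammatical, "
    "but they still miss context, nuance, and the occasional satire."
)

# Condense the passage into a blurb, capping the output length.
blurb = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(blurb[0]["summary_text"])
```

Even a simple pipeline like this tends to bear out Tures’s observation: the blurb is usually grammatical and roughly on topic, but anything subtle in the original is liable to fall away.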

Moreover, he notes, some news organizations are already using AI text bots to do low-level reporting. The Washington Post, for example, uses a bot called Heliograf to handle local stories that human reporters might not have the time to cover. Tures notes that these bots are generally effective at quickly writing grammatically accurate copy, but they tend to lose points on understanding the broader context and meaningful nuance of a topic. “They are vulnerable to not understanding satires, spoofs or mistakes,” he writes.

And yet, even with its flaws, this technology is significantly more capable than those who look only at comedic misfires like the Pile of Ash chapter might believe. In an article for the Financial Times, writer Marcus du Sautoy reflects on his experience with AI writing, commenting, “I even employed code to get AI to write 350 words of my current book. No one has yet identified the algorithmically generated passage (which I’m not sure I’m so pleased about, given that I’m hoping, as an author, to be hard to replace).”

Du Sautoy does note that AI struggles to create overarching narratives and often loses track of broader ideas; the technology is far from being able to write a novel. Still, even though he frames his unease at how seamlessly the AI-generated passage blends into his work as a parenthetical afterthought, the point he makes is essential. AI is coming dangerously close to being able to mimic the appearance of literature, if not the substance.

Take Google’s POEMPORTRAITS as an example. In early spring, engineers working in partnership with Google’s Arts & Culture Lab rolled out an algorithm that could write poetry. The project leaders, Ross Goodwin and Es Devlin, trained an algorithm to write poems by supplying the program with over 25 million words written by 19th-century poets. As Devlin describes in a blog post, “It works a bit like predictive text: it doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model.”
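To make the “predictive text” idea concrete, here is a toy sketch of the underlying statistical intuition: count which word tends to follow which in a training corpus, then sample new lines from those counts. It is a deliberately simplified stand-in, not the model Goodwin and Devlin actually built, and the few lines of Keats below are only a placeholder for their 25 million words of training text.

```python
# A toy sketch of the "predictive text" intuition behind a project like
# POEMPORTRAITS: learn which word tends to follow which in a training corpus,
# then sample new lines from those statistics. This is NOT the actual model
# Goodwin and Devlin built -- just a minimal bigram illustration, with a few
# lines of Keats standing in for their 25 million words of 19th-century verse.
import random
from collections import Counter, defaultdict

TRAINING_TEXT = """
season of mists and mellow fruitfulness
close bosom friend of the maturing sun
conspiring with him how to load and bless
with fruit the vines that round the thatch eves run
"""

def train(text):
    """For every word, count the words that follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate_line(model, seed, length=8):
    """Build a short line by repeatedly sampling a likely next word."""
    line = [seed]
    for _ in range(length - 1):
        candidates = model.get(line[-1])
        if not candidates:
            break
        next_words, counts = zip(*candidates.items())
        line.append(random.choices(next_words, weights=counts)[0])
    return " ".join(line)

model = train(TRAINING_TEXT)
print(generate_line(model, seed="of"))
```

Scaled up from a simple word-pair counter to a neural network trained on millions of words, the same next-word logic produces the more fluid, if often hollow, lines Devlin describes.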

When users donate a word and a self-portrait, the program overlays an AI-written poem over a colorized, Instagrammable version of their photograph. The poems themselves aren’t half bad on the first read; Devlin’s goes: “This convergence of the unknown hours, arts and splendor of the dark divided spring.”

As Devlin herself puts it, the poetry produced is “surprisingly poignant, and at other times nonsensical.” The AI-generated poem sounds pretty, but it is at best vague and at worst devoid of meaning altogether. It’s a notable lapse, because poetry, at its heart, is about creating meaning and crafting implication through artful word selection. A beautiful turn of phrase matters, but it is in no way the most important part of writing verse. In this context, AI-generated poetry seems hollow and shallow, without the depth of meaning that drives literary tradition.

In other words, even beautiful phrases will miss the point if they don’t have a point to begin with.

In his article for the Observer, John A. Tures asked a journalism conference attendee his thoughts on what robots struggle with when it comes to writing. “He pointed out that robots don’t handle literature well,” Tures writes, “covering the facts, and maybe reactions, but not reason. It wouldn’t understand why something matters. Can you imagine a robot trying to figure out why To Kill A Mockingbird matters?”

Robots are going to venture into writing eventually; the forward march is already happening. Prose and poetry aren’t as well protected within the bastion of creative employment as we think they are; over time, we could see robots taking over roles that journalists used to hold. In our fake-news-dominated social media landscape, bad actors could even weaponize the technology to flood our media feeds and message boards. That is undoubtedly dangerous, but it’s a risk that has been discussed before.

Instead, I find myself wondering about the quieter, less immediately impactful risks. I’m worried that when AI writes what we read, our ability to think deeply about ourselves and our society will slowly erode.

Societies and individuals grow only when they are pushed to question themselves, to think, to delve into the why behind their literature. We’re taught why To Kill a Mockingbird matters because that process of deep reading and introspection makes us think about ourselves, our communities, and what it means to want to change. In a world where so much of our communication is distilled into tweet-optimized headlines and blurbs, where we don’t take the time to read past a headline or first paragraph, these shallow, AI-penned lines are problematic not because they exist, but because they do not spur thought.

“This convergence of the unknown hours, arts and splendor of the dark divided spring.”

The line sounds beautiful; it even evokes a vague image. Yet it carries no underlying message, and although, to be fair, it was never meant to make a point beyond coherence, it isn’t making one. It’s shallow entertainment under a thin veil of sophistication: it fails at overarching narrative, misses nuance, and never grasps the heartbeat of human history, empathy, and understanding.

If it doesn’t have that foundation to create a message, what does it have? When we get to a place where AI is writing for us (and make no mistake, that time will come), are we going to be thinking less? Will there be less depth to our stories, less thought inspired by their twists? Will they become an echo chamber rather than a path forward? At the very least, will these stories take over our Twitter and Facebook feeds, pulling us away from human-crafted stories that push us to think?

I worry that’s the case. But then again, maybe I’m wrong; maybe reading about how an AI thinks that Ron ate Hermione’s family provides enough dark, hackneyed comedy to sustain our belief that AI will never step beyond its assigned role as a ludicrous word-hacker.

For now, at least.

April 17, 2020 | Culture, Technology