We Can’t Afford to Sell Out on AI Ethics

Today, we use AI with the expectation that it will make us better than we are — faster, more efficient, more competitive, more accurate. Businesses in nearly every industry apply artificial intelligence tools to achieve goals that we would have, only a decade or two ago, derided as moonshot dreams. But as we incorporate AI into our decision-making processes, we can never forget that even as it magnifies our capabilities, it can just as plainly magnify our flaws.

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included,” AI researcher Kate Crawford once wrote for the New York Times. “Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.”

The need for greater inclusivity and ethics-centric research in AI development is well-established — which is why it was so shocking to read about Google’s seemingly senseless firing of AI ethicist Timnit Gebru.

For years, Gebru has been an influential voice in AI research and inclusivity. She cofounded the Black in AI affinity group and is an outspoken advocate for diversity in the tech industry. In 2018, she co-wrote Gender Shades, an oft-cited investigation showing that commercial facial-recognition systems misclassified darker-skinned women at far higher rates than lighter-skinned men. The team Gebru built at Google included several notable researchers and was one of the most diverse working in the AI sector.

“I can’t imagine anybody else who would be safer than me,” Gebru shared in an interview with the Washington Post. “I was super visible. I’m well known in the research community, but also the regulatory space. I have a lot of grass-roots support — and this is what happened.”

So what, exactly, happened?

In November of 2020, Gebru and her team completed a research paper that examined the potential risks inherent in large language models, which can be used to discern basic meaning from text and, in some cases, generate new and convincing copy.

Gebru and her team found three major areas of concern. The first was environmental: relying on large language models could lead to a significant increase in energy consumption and, by extension, in our carbon footprint.

The second related to unintended bias; because large language models require massive amounts of data mined from the Internet, “racist, sexist, and otherwise abusive language” could accidentally be included during the training process. Lastly, Gebru’s team pointed out that as large language models become more adept at mimicking language, they could be used to manufacture dangerously convincing misinformation online.

The paper was exhaustively cited and peer-reviewed by more than thirty large-language-model experts, bias researchers, critics, and model users. So it came as a shock when Gebru’s team received instructions from HR to either retract the paper or remove the researchers’ names from the submission. Gebru addressed the feedback and asked why retraction was necessary. She received no response beyond vague, anonymous feedback and further instructions to retract the paper. Again, Gebru addressed the feedback — but to no avail. She was informed that she had a week to rescind her work.

The back and forth was exhausting for Gebru, who had spent months struggling to improve diversity and advocate for the underrepresented at Google. (Only 1.9 percent of Google’s employees are Black women.) To be silenced while furthering research on AI ethics and the potential consequences of bias in machine learning felt deeply ironic.

Frustrated, she sent an email detailing her experience to an internal listserv, Google Brain Women and Allies. Shortly thereafter, she was dismissed from Google for “conduct not befitting a Google manager.” Amid the fallout, Google AI head Jeff Dean claimed that the paper “didn’t meet our bar for publication” and “ignored too much relevant research” that undermined the risks she outlined — a shocking accusation, given the paper’s breadth of research.

To Gebru, Google’s reaction felt like corporate censorship.

“[Jeff’s email] talks about how our research [paper on large language models] had gaps, it was missing some literature,” she told MIT’s Technology Review. “[The email doesn’t] sound like they’re talking to people who are experts in their area. This is not peer review. This is not reviewer #2 telling you, ‘Hey, there’s this missing citation.’ This is a group of people, who we don’t know, who are high up because they’ve been at Google for a long time, who, for some unknown reason and process that I’ve never seen ever, were given the power to shut down this research.”

“You’re not going to have papers that make the company happy all the time and don’t point out problems,” Gebru concluded in another interview for Wired. “That’s antithetical to what it means to be that kind of researcher.”

We know that diversity and rigorous research are crucial to the development of truly effective and unbiased AI technologies. In this context, firing Gebru — a Black female researcher with extensive accolades for her work in AI ethics — for doing her job is senseless. There can be no other option but to view Google’s actions as corporate censorship.

For context — in 2018, Google developed BERT, a large language model, and used it to better interpret search queries. In early 2021, the company made headlines with training techniques that allowed it to train a 1.6-trillion-parameter language model four times as quickly as previously possible. Large language models offer a lucrative avenue of exploration for Google; having them questioned by an in-house research team could be embarrassing at best, and limiting at worst.

In an ideal world, Google would have incorporated Gebru’s research findings into its actions and sought ways to mitigate the risks she identified. Instead, it attempted to compel her to revise the paper to include cherry-picked “positive” research and downplay her findings. Think about that for a moment — that kind of interference is roughly analogous to a pharmaceutical company asking researchers to fudge the statistics on a new drug’s side effects. Such intervention is not only unethical; it opens the door to real harm.

Then, when that interference failed, Google leadership worked to silence and discredit Gebru. As one writer for Wired concludes, that decision proves that “however sincere a company like Google’s promises may seem — corporate-funded research can never be divorced from the realities of power, and the flows of revenue and capital.”

Gebru is an undeniably strong person, an authority in her field with a robust support network. She had the force of personality to stand her ground against Google. But what if someone less well-respected, less supported, and less brave had been in her position? How much valuable research could be quashed by corporate politicking? It’s a frightening thought.

The Gebru fallout tells us in no uncertain terms that we need to give real consideration to how much editorial control tech companies should have over research, even if they employ the researchers who produce it. If left unchecked, corporate censorship could usher in the worst iteration of AI: one that writes our biases large, harms the already underserved, and dismisses fairness in favor of profit.

This article was originally published on BeingHuman.ai


The Ethics of Bitcoin: Is the Cryptocurrency Better for Banking?

If you’re anything like me, you’re equal parts fascinated and befuddled by the evolving world of cryptocurrency, and Bitcoin in particular.

For those of us used to paper and plastic, the idea of a decentralized, digital payment can seem pretty pie in the sky. But many are quick to call it the currency of the future, and if the buzz is any indication, it could be. According to Realtime Bitcoin, there are more than 16.5 million Bitcoins in circulation. The current exchange rate is one Bitcoin to US $3,917.83. That puts the total amount in circulation at almost US $65 billion.
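To see where that figure comes from, here is the back-of-the-envelope arithmetic as a quick Python sketch, using only the snapshot numbers quoted above:

```python
# Back-of-the-envelope Bitcoin market capitalization,
# based on the snapshot figures quoted in the paragraph above.
coins_in_circulation = 16_500_000   # ~16.5 million BTC
usd_per_bitcoin = 3_917.83          # exchange rate at the time

market_cap = coins_in_circulation * usd_per_bitcoin
print(f"Total value in circulation: ${market_cap:,.0f}")
# Total value in circulation: $64,644,195,000 -- i.e., almost US $65 billion
```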

Created sometime between 2008 and 2009, Bitcoin only took off in 2013, when it hit an all-time high — at the time — of US $1,100. Over the next few years, the price fluctuated. Recently, however, the virtual coin has garnered resurgent interest, skyrocketing to an all-time high of US $4,522.13 in August.

But what caused the newfound appreciation for the cryptocurrency? And what concerns should we have about the ethics of Bitcoin? Technology that seems amazing often poses ethical quandaries we need to engage with, as I’ve discussed in regard to AI.

Here’s a look at the current state of Bitcoin and what it means for banking, both today and in the future.

Bitcoin’s Surge

There are a few clear reasons for the recent surge in Bitcoin’s price. First, its underlying blockchain technology has been of special interest to some major players in finance. Morgan Stanley, Goldman Sachs, and JP Morgan believe this technology may improve the trading of loans, securities, and derivatives.

Second, Japan and China have begun to embrace the cryptocurrency. In April, regulators in Japan introduced rules to integrate Bitcoin into the regular banking system (rather than peg it as an outlaw currency). This change has prompted many investors to swap their yen for Bitcoin.

In addition, Chinese authorities, who have been critical of Bitcoin in the past, have recently grown more tolerant of the currency. This has made Bitcoin-related investments in the region far less risky and far more attractive.

Thanks to these developments, Bitcoin has taken a step forward in legitimacy. People will be less likely to hold it purely for speculation and more likely to start buying actual things with it.

But this raises an important question: Will Bitcoin, blockchain, and other cryptocurrencies bring us to a more ethical level of banking? Or will the challenges of these new systems create an equally murky financial system?

A Case for Bitcoin

Trust plays a key role in finance today. But what if we eliminated the need for trust in conducting business transactions? A successful transaction would be guaranteed, no matter who you were dealing with.

Garrick Hileman, an economic historian at the London School of Economics and University of Cambridge, points out, “A big part of the problem with Lehman Brothers in 2008 came from counterparty risk and the fact that settlement could not be counted on.”

With the advent of blockchain technology and smart contracts (computer programs set to execute a transaction once certain criteria are met), it could be possible to take trust out of the equation entirely. Transactions carry a built-in guarantee because collateral is posted up front rather than withheld. Potentially, this could help us avoid a Lehman-style situation in the future.
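To make the idea concrete, here is a minimal sketch of how a smart contract behaves, written as a toy Python class rather than code for any real blockchain platform; the names, amounts, and logic are illustrative assumptions, not an actual protocol:

```python
class EscrowContract:
    """Toy model of a smart contract: collateral is locked when the
    contract is created and released automatically once the agreed
    condition is met -- no trusted middleman decides the outcome."""

    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.locked = True  # collateral is posted up front, not withheld

    def settle(self, delivery_confirmed: bool) -> str:
        # The program itself, not either party, determines where funds go.
        if not self.locked:
            return "already settled"
        self.locked = False
        if delivery_confirmed:
            return f"release {self.amount} BTC to {self.seller}"
        return f"refund {self.amount} BTC to {self.buyer}"


contract = EscrowContract(buyer="Alice", seller="Bob", amount=2.5)
print(contract.settle(delivery_confirmed=True))  # release 2.5 BTC to Bob
```

Because settlement here is mechanical, neither party has to trust the other to honor the deal; that is exactly the counterparty risk Hileman describes.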

Bitcoin also offers the advantage of cutting costs. Right now, banks put a lot of money into the transaction process. Part of the reason is that much of banking is still done manually and is saturated with paperwork, which occupies both time and resources. With an automated system verified by blockchain technology and smart contracts, we could save billions in capital, conduct transactions more quickly, and do it all at near-zero marginal cost.

While the engineering behind this technology is not yet ready to be rolled out in banks and other financial institutions, the prospect of automated settlement, greater transparency, and lower overheads points to a more stable financial sector.

The Challenges

Cryptocurrency doesn’t come without its challenges. Though it has its proponents, some go as far as to call it “evil.” And this isn’t without reason. Those who argue against cryptocurrency have raised concerns about the anonymity of its transactions. Case in point: Bitcoin has long been associated with shady business dealings and entities such as Silk Road (which was shut down in late 2013).

This anonymity, they say, allows the currency to be used for criminal activity in ways that other currencies cannot. It could be argued that this actually encourages unethical transactions.

However, it’s important to note that the anonymity isn’t absolute. Transactions conducted using Bitcoin are made public on the blockchain. That means the parties involved can be linked to their Bitcoin addresses, although making that link often takes considerable effort. A good example is Silk Road founder Ross Ulbricht: investigators ultimately broke his anonymity and discovered his identity, but it took both time and resources.
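A toy model makes the distinction clearer. In the sketch below (hypothetical addresses and amounts, not real chain data), every transfer between pseudonymous addresses is public and traceable by anyone; what remains hard is tying an address to a person:

```python
# Toy public ledger: every transfer between pseudonymous addresses is
# visible to anyone, much like transactions on the real Bitcoin blockchain.
ledger = [
    ("addr_1a2b", "addr_9f8e", 0.75),   # (sender, receiver, amount in BTC)
    ("addr_9f8e", "addr_4c3d", 0.50),
    ("addr_1a2b", "addr_4c3d", 1.20),
]

def trace_outflows(address: str) -> list:
    """Follow coins leaving an address. Anyone can do this; the slow,
    resource-intensive part is linking an address to a real identity."""
    return [(receiver, amount) for sender, receiver, amount in ledger
            if sender == address]

print(trace_outflows("addr_1a2b"))
# [('addr_9f8e', 0.75), ('addr_4c3d', 1.2)]
```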

In short, we don’t want to create a lawless market. That means there need to be additional measures put in place to ensure that the government, the technology, and the banks are in close contact. We must protect the ethics of cryptocurrency.

What It All Means

Finance often falls into ethically questionable territory. That’s why banking needs a solution that is available to all parties, affordable, and verifiable, so that there is accountability across the board.

The structure of cryptocurrencies and blockchain technology allows for ethical banking at scale: combine the digital efficiency of the currency with the scalability of computers and networks, and let existing rules and regulations ensure that the consumer is adequately protected.

We’ll just have to wait and see on which side the Bitcoin lands.
