Artificial intelligence has persisted in the fantasies of futurists, the pages of novelists, and the minds of mathematicians for nearly a century. Even before the rise of computers, the idea of non-human sentience was the stuff of legend and conjecture: from the intelligent automatons of Greek mythology to Turing's theory of computation, images of sentient machines have rarely entered public conversation without the questions of ethics they inevitably raise.

Even by definition, artificial intelligence is a malleable term: coined in 1956, it typically refers to intelligence exhibited by computers or machines, as well as the academic field that studies and builds them. Depending on the extent of the so-called "intelligence" and the form the machine takes, AI is as much a subject of great optimism as it is of fear. It means different things to different people and industries: a symbol of both hope and dread, friend and foe, savior and destroyer.

AI: Ethics in Fiction

2001: A Space Odyssey, Bill Lile via Flickr

The implications of AI today pose more questions than ever, but works of fiction have been raising these concerns, especially ethical ones, for two centuries. The sympathetic but flawed monster of Mary Shelley's Frankenstein was a pivotal illustration of man-made intelligence and a meditation on the morality of man "playing God." Corpse reanimation, fortunately, remains an impossibility. But AI in general? Not so much.

With the dawn of modern computing, the AI of the 20th and 21st centuries is less green and hulking than metallic and calculating: think HAL from 2001: A Space Odyssey, the title character of The Terminator, the agents of The Matrix, or any number of other fictional representations. Throw a stone into the pool of science fiction and you're bound to hit a robot; more often than not, it's one with human-like or even superhuman intelligence.

These fictional works tend to dramatize our greatest fears about technology. In 2001, HAL tries to kill the humans aboard the Jupiter-bound spacecraft. In the 2015 film Ex Machina, the seductive robot Ava ends up killing her creator; in Her, a digital operating system (voiced by a breathy Scarlett Johansson) leaves Joaquin Phoenix's character for a network of romance beyond human understanding. Ethics and consequences have always been the philosophical core of artificial intelligence in fiction, played out in books and movies time and again with no signs of slowing.

Rapid Advancements

In reality, AI may seem, on the surface, less glamorous than the robotic villains and lovers of feature films. At the forefront of sophistication, IBM's Watson can win Jeopardy! and write original, data-driven recipes. Machines can (and do) perform human jobs in fields ranging from manufacturing and journalism to high-frequency trading and customer service. These are just a few examples of the gradual growth of artificial intelligence: unsexy, sure, but far from insignificant.

Each of these innovations, though seemingly innocuous compared to homicidal robots, carries ethical concerns of its own. Take Google's self-driving cars as an example: should an autonomous vehicle be programmed to protect its driver when other lives are at risk, or to sacrifice the driver if doing so saves more lives? It comes down to coding with a conscience; the question is not so much whether to program intelligent algorithms, but how.
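To make the dilemma concrete, here is a deliberately simplified, hypothetical sketch in Python. The policy names, maneuvers, and casualty counts are invented for illustration and do not come from any real vehicle's software; the point is only that an ethical stance ultimately reduces to an ordinary branch in code:

```python
from dataclasses import dataclass
from enum import Enum


class Policy(Enum):
    """Two hypothetical ethical stances a manufacturer might encode."""
    PROTECT_OCCUPANTS = "protect occupants"      # driver-first
    MINIMIZE_TOTAL_HARM = "minimize total harm"  # utilitarian


@dataclass
class Maneuver:
    name: str
    occupant_casualties: int
    bystander_casualties: int

    @property
    def total_casualties(self) -> int:
        return self.occupant_casualties + self.bystander_casualties


def choose_maneuver(options: list[Maneuver], policy: Policy) -> Maneuver:
    """Pick a maneuver; the 'ethics' is simply whichever sort key we use."""
    if policy is Policy.PROTECT_OCCUPANTS:
        # Rank by harm to occupants first, total harm as tiebreaker.
        key = lambda m: (m.occupant_casualties, m.total_casualties)
    else:
        # Rank by total harm first, harm to occupants as tiebreaker.
        key = lambda m: (m.total_casualties, m.occupant_casualties)
    return min(options, key=key)


if __name__ == "__main__":
    options = [
        Maneuver("swerve into barrier", occupant_casualties=1, bystander_casualties=0),
        Maneuver("brake and continue", occupant_casualties=0, bystander_casualties=2),
    ]
    for policy in Policy:
        print(f"{policy.value}: {choose_maneuver(options, policy).name}")
```

Both branches are trivial to implement; the hard part, and the real ethical work, is deciding which sort key a manufacturer, a regulator, or society at large is willing to sign off on.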

Another example is the smart weaponry under development by militaries across the world: some systems, like smart drones, are already in use, capable of selecting targets based on their programming and machine learning. This reduces the need for human soldiers, but it gives machines enormous power without the values or judgment to match.

Tech giants including Google, Apple, Facebook, and Microsoft are also investing heavily in artificial intelligence, especially in machine learning, deep learning, and image, speech, and emotion recognition. These investments have already begun reshaping tools like Siri, OK Google, and the Facebook news feed. The growing sophistication and personalization of these tools is ostensibly in service of a better user experience, but it serves corporate and monetary interests too.

Regulating Development

Michael Cordedda via Flickr

With AI technology advancing fast, it’s important that ethical standards evolve just as quickly to prevent potentially disastrous side effects. The job of most researchers and companies is simply to develop intelligent technology, but whose job should it be to ensure computers prioritize ethics as much as profit or success? Should a machine’s values be defined only by the biases of its creator, or adhere to a higher standard?

Industry experts are more aware than ever of the very real ethical concerns artificial intelligence poses, especially if and when machines reach human-level intelligence. Efforts to keep AI peaceful and safe are thus taking numerous forms: in December 2015, for example, technology leaders pledged $1 billion to a nonprofit initiative called OpenAI, whose stated aim is to advance digital intelligence in ways that benefit humanity as a whole. In theory, opening access to important AI ideas and developments will keep the industry transparent and accountable.

Organizations like the Future of Humanity Institute are also focusing their efforts on ensuring that new technology is safe, ethical, and beneficial to humans and the world. Its AI safety program addresses the "control problem" of artificial intelligence: in other words, ensuring that advanced AI systems can be created and deployed without risk to humanity as they grow smarter.

Austin-based Lucid is one AI company with an ethics advisory panel in place to apply best practices, so that new products are built with social, cultural, and moral values specifically in mind. Google's recently acquired DeepMind reportedly has a similar board. This is a start, but some believe such codes should eventually be set nationally or even internationally, rather than just internally.

Though it may seem paranoid to some, top scientists including Stephen Hawking and Elon Musk have urged the scientific community to regulate and monitor AI development or face dire consequences: the risk that computers outpace humanity and are entrusted with moral responsibilities that put people at risk. And that's to say nothing of the question of at what point, if any, robots might deserve rights of their own. Are we obligated to care for and respect smart machines, as we do animals? Or will they end up taking care of us?

Looking Forward

Whether or not AI reaches that level of advancement anytime soon, science fiction draws closer to reality with every dollar put into the field; billions have been invested already, and the figure is ballooning rapidly.

Self-aware machines may be a long way from existence; indeed, it may never be possible to mimic human thought and emotion entirely. Open questions aside, some experts predict that human-level intelligence, or greater, will be a reality within the century.

As robots acquire skills ranging from learning to reasoning, today's Dr. Frankenstein could conceivably be at work in his or her own lab within this lifetime. Will we be prepared to ensure that new intelligence, before it is unleashed, has the ethics it needs to coexist with its creators and the planet? The territory is uncharted, but the more we explore it, the sooner we'll have our answer.

Featured Image: Health Blog via Flickr