AI poses either the greatest risk or the greatest benefit to humanity

For years, some of the brightest individuals in the world have cautioned about the future of Artificial Intelligence (AI). Influential figures, such as Tesla and SpaceX CEO Elon Musk, have repeatedly and publicly warned about the potential risks of superintelligent AI.

The potential risks of an AI that can emulate and surpass the intelligence of the human mind are vast. “Robots will be able to do everything better than us,” Elon Musk said during his speech at the National Governors Association 2017 Summer Meeting. “I have exposure to the most cutting-edge AI, and I think people should be concerned by it.” 

Elon Musk isn’t the only one worried about AI; the late theoretical physicist Stephen Hawking also expressed concern about its future. Dr. Hawking stated in an interview with the BBC, “The development of full artificial intelligence could spell the end of the human race.” But where do these claims come from? And if this is the future, why are we still developing the technology?

The potential benefits of AI are as vast as its risks. AI can take over tedious and repetitive tasks, increasing productivity in the fields that depend on them. For example, banks commonly require multiple document checks before approving a loan, an immensely time-consuming task for bank staff. Using AI, a bank can expedite the document verification process, helping both its employees and its customers. By handing off such tedious jobs, humans could focus on more pressing issues, such as climate change and world hunger, among the many problems humanity has brought upon itself.

However, can we ensure that these positive outcomes actually come to pass? Many have proposed ways to monitor or control AI, such as limiting its capabilities or translating human ethics into a set of rules it must follow. However, there are no definitive answers to this issue yet. For now, we are well aware that the future of AI is bright but that, if mishandled, it could very well be our downfall.

New feats like AI-generated art have already captured the world’s attention. This technology uses an algorithm that “learns” art “by analyzing thousands of images” to identify a specific aesthetic. Afterward, it generates new images that adhere to the aesthetic it has learned.

Many of these programs use algorithms called “generative adversarial networks,” or GANs for short. First introduced in 2014 by computer scientist Ian Goodfellow, these algorithms get the name “adversarial” from their two competing parts: one part generates candidate pictures, while the other judges whether those pictures match the real examples it was trained on. These algorithms are fascinating because they can replicate art styles of nearly any genre or period. For example, should the user request a portrait in the style of the Renaissance, they would simply need to provide pieces of art from that period, and the AI would attempt to imitate the style and create a unique piece of art.
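To make the idea of two competing networks more concrete, the hypothetical sketch below shows a toy GAN written in Python with the PyTorch library. The image size, network layers, and training settings are all assumptions chosen for brevity; real art generators are far larger and more sophisticated.

```python
# A minimal sketch of a generative adversarial network (GAN) in PyTorch.
# This is an illustrative toy, not the code behind any particular art
# generator: the image size, layer sizes, and settings are all assumptions.

import torch
import torch.nn as nn

LATENT_DIM = 64          # size of the random "noise" vector the generator starts from
IMAGE_PIXELS = 28 * 28   # toy image size (assumed), flattened into a vector

# The generator: turns random noise into an image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMAGE_PIXELS), nn.Tanh(),   # pixel values in [-1, 1]
)

# The discriminator (the "adversary"): guesses whether an image is a real
# example from the training set or a fake produced by the generator.
discriminator = nn.Sequential(
    nn.Linear(IMAGE_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability the image is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One round of the two-sided game on a batch of real training images."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator: reward it for telling real from fake.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: reward it for fooling the discriminator.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example: one step on a batch of 16 stand-in "paintings" (random data here).
training_step(torch.randn(16, IMAGE_PIXELS))
```

The back-and-forth is the whole point: as the judge gets better at spotting fakes, the generator is forced to produce ever more convincing images.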

These programs are evidence of the power of AI: they can produce artwork in mere seconds, where an artist’s hand might take hours or days. One can only imagine the potential of a more advanced AI that extends into architecture, 3D modeling, and beyond.

There are plenty of applications for AI technology in other fields as well. For example, a more advanced use of AI in science and biology is DeepMind’s AlphaFold. DeepMind is a London-based AI company owned by Google that develops a wide range of AI programs. According to nature.com, this advanced AI network was recently used to “predict the structures of more than 200 million proteins from some 1 million species.” In other words, it predicted the shapes of nearly all known proteins on Earth, many of them with remarkable accuracy. This landmark achievement allows biologists to learn new things about proteins and to establish relationships and similarities between them.

DeepMind also publishes a confidence score for each prediction, so biologists know how much to trust a given structure before building on it. All of this data lives in a database that is free to the public, set up by DeepMind in partnership with the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL–EBI), an intergovernmental research institution near Cambridge, UK.
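For readers curious about what “free to the public” means in practice, here is a small hypothetical Python sketch of how one might download a predicted structure from the AlphaFold database over the web. The web address pattern, the example protein ID (P69905, a human hemoglobin subunit), and the JSON field names are assumptions based on the database’s public interface and may change; check alphafold.ebi.ac.uk for current documentation.

```python
# A sketch of fetching an AlphaFold prediction from the free public database.
# The URL pattern and JSON field names are assumptions and may change over time.

import requests

UNIPROT_ID = "P69905"  # example protein: a human hemoglobin subunit

# Ask the database for the predicted structure associated with this protein.
response = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}",
    timeout=30,
)
response.raise_for_status()
entry = response.json()[0]  # the service returns a list of predictions

# Print an identifier and download the 3D model file itself, if a link is given.
print("Entry:", entry.get("entryId"))
pdb_url = entry.get("pdbUrl")
if pdb_url:
    structure = requests.get(pdb_url, timeout=30)
    with open(f"{UNIPROT_ID}.pdb", "wb") as f:
        f.write(structure.content)
    print(f"Saved predicted structure to {UNIPROT_ID}.pdb")
```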

These AI creations are only the tip of the iceberg; the future of AI has much in store for the world. However, what if this technology becomes hostile to human life? What will happen to our species? Maybe Dr. Stephen Hawking said it best: “[AI] will either be the best thing that’s ever happened to us, or it will be the worst thing.”

Article by Quinn Lam-Vu of Robert Frost Middle School

Photo courtesy of Wikimedia Commons
