Aidan Munoz
Professor Horgan
Seminar in Science Writing
20 March 2024
Determinator 2: Our A.I. Judgement Day
How do we determine how far is too far? When do the capabilities of the things we create become too advanced for us to control? Skynet, from the famous Terminator franchise, is a fictional, conscious artificial neural network that waged war on humanity with an army of Terminators (extremely advanced murderous cyborgs). Now, this is obviously a dramatized and very extreme version of what the future of A.I. could be, but it does make us question what realistic dangers we may face in the near future. A system that imitates human action and reaction, running thousands of algorithms in mere seconds, is a powerful source of information and creativity. What issues, or benefits, can this create when used in artistry? Medical fields? Automobiles? Courts of law? It forces humanity to question our ethics and morals.
Ethically, humanity needs to decide what rules and laws apply to artificial intelligence before its advancement is too far gone for us to make retroactive decisions. On a surface level: who gets the legal credit, and the blame, for imitating an artist's likeness in visual and auditory works? A level above that: who becomes responsible when a miscalculation or bias in an A.I. working in the medical field, or in control of a self-driving car, causes damage, harm, or death? And of course, if A.I. begins to develop an intelligence greater than our own, and subsequently a "fabricated" consciousness, how do we decide whether to accept or reject that idea? In a paper hosted by the National Library of Medicine, the authors ask:
“Can machines think? . . . The development process of AI includes perceptual intelligence, cognitive intelligence, and decision-making intelligence. Perceptual intelligence means that a machine has the basic abilities of vision, hearing, touch, etc., which are familiar to humans. Cognitive intelligence is a higher-level ability of induction, reasoning and acquisition of knowledge . . . Decision intelligence requires the use of applied data science, social science, decision theory, and managerial science to expand data science, so as to make optimal decisions.”
So, if this were to be the case, and an A.I. model were able to replicate an expression of feeling and consciousness indiscernible from that of a human being, where would we draw the line? If we, as humans, cannot even settle the mind-body problem for ourselves, how can we decide this for a machine? In the end, A.I. serves as an exceptionally useful tool for our societal advancement and survival, and eventually these positive implementations may bleed into nearly every aspect of our lives.
Perhaps the most important field artificial intelligence is beginning to encroach upon is medicine. The Organisation for Economic Co-operation and Development (OECD) is an intergovernmental organization with 38 member countries, including the United States, focused on stimulating economic progress and world trade. The OECD has identified numerous positive factors, and a small handful of negative ones, that this implementation can bring. A.I. is expected to save lives through enhanced communication, protection of digital infrastructure from security threats, and the use of advanced health data assets and patient history to greatly improve the accuracy of patient diagnosis. With the European Alliance for Access to Safe Medicines attributing 30% of medical errors across Europe in 2023 to communication failure, these factors are definitely enticing. However, the OECD also emphasizes the risks: unclear accountability in A.I. management, disruption of the health workforce, social and economic disparities in access, and algorithms that carry bias or lack transparency. As with most things, there is no sure-fire way to reduce the risks to zero, but with slow, careful implementation, there is no telling how successful, or how detrimental, artificial intelligence could be for our healthcare.
Beyond healthcare, the march toward the dystopian future we may one day live in is being led by OpenAI, the company behind ChatGPT. Recently, OpenAI and Figure (another A.I. company) released a demo video unveiling their new humanoid robot, "Figure 01." The robot is meant to imitate the basic body shape of a human, perform human-like movement and speech, and eventually join the labor force. It can apply common-sense reasoning to images, uses an end-to-end neural network for fast, dexterous manipulation, and has stable dynamics that let it complete actions and tasks safely. It converts speech to text and its own text back to speech on the fly, it can adjust its own behavior because all of its actions are autonomous, and it feeds its on-board camera images into a model trained by OpenAI to process an entire conversation and formulate the proper response. Although the video itself does the explanation far more justice, the idea of these humanoid robots already pushing their way into the workforce is both awesome and frightening.
However, as Eliezer Yudkowsky, a decision theorist leading research at the Machine Intelligence Research Institute, puts it, "The key issue is not 'human-competitive' intelligence . . . it's what happens after AI gets smarter-than-human intelligence." One key example of this fear is the over-dramatized "shutdown" of Facebook's A.I. system while it was still in development. Facebook's Artificial Intelligence Research group got two artificial agents to negotiate with one another, eventually deciding to shut them down after the agents developed their own shorthand way of communicating that kept the researchers out of the loop (Novet). Although researchers on the team clarified that the agents had continued to follow the model of negotiation and resolution, media outlets blew the story out of proportion and created an air of fear that A.I. capabilities beyond our understanding were on the horizon (Novet). Whether that fear is justified is a debate in itself, but what matters is acknowledging how fearful people became at the mere idea of this superhuman level of intelligence. With streamlined humanoid robots possessing extremely advanced intelligence and communication possibly coming to fruition, it seems only a matter of time before mass hysteria over the topic kicks in. And who knows... maybe "Terminator" was our foreshadowing.
Works Cited
"AI in Health: Huge Potential, Huge Risks." OECD, www.oecd.org/health/AI-in-health-huge-potential-huge-risks.pdf. Accessed 20 Mar. 2024.
"Artificial Intelligence: A Powerful Paradigm for Scientific Research." The Innovation, 28 Oct. 2021, www.ncbi.nlm.nih.gov/pmc/articles/PMC8633405/.
Novet, Jordan. "Facebook AI Researcher Slams 'Irresponsible' Reports about Smart Bot Experiment." CNBC, 1 Aug. 2017, www.cnbc.com/2017/08/01/facebook-ai-experiment-did-not-end-because-bots-invented-own-language.html.
"Master Plan." Figure, www.figure.ai/master-plan. Accessed 20 Mar. 2024.
"OpenAI's New 'AGI Robot' Stuns the Entire Industry (Figure 01 Breakthrough)." YouTube, 13 Mar. 2024, www.youtube.com/watch?v=GiKvPJSOUmE&t=272s.
Yudkowsky, Eliezer. "The Only Way to Deal with the Threat from AI? Shut It Down." Time, 29 Mar. 2023, time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/.