Can AI Be Conscious?
By Jack Harrington
“In the literal sense, the programmed computer understands what the car or the adding machine understands: namely, exactly nothing.” This quote from John Searle, the creator of the Chinese Room thought experiment, captures one of the two positions we will explore in this heated debate. Tracing the topic through time reveals an interesting evolution. Is AI a conscious threat to humanity, or is it just a tool?
What is AI?
At the heart of this debate lies the need for a fundamental understanding of AI itself. IBM, an American multinational technology company, states that AI is a technology that allows machines to simulate human intelligence and problem-solving, giving them the ability to learn from data, reason, and make decisions effectively. AI performs this through a series of complicated steps: data collection, training, feature extraction, model building, and prediction.
Data collection is the first major step in the process, where an AI system is provided raw data. This data can be labeled or unlabeled and in the form of text, images, or video. All of it is then fed into a training algorithm, in which the AI determines statistical patterns within the dataset. The most important attributes of the data are then examined further by removing data points that do not contribute to the pattern. Finally, a model is constructed from the remaining data. This model can take many forms but is commonly a neural network or decision tree. At this point, the model can be applied to new datasets to make predictions.
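To make these steps concrete, here is a toy sketch of the pipeline in miniature. Everything in it is invented for illustration: the "model" is just a single learned threshold separating short messages from long ones, standing in for the far more complex neural networks and decision trees real systems build.

```python
# Toy illustration of the pipeline described above: data collection,
# training (finding a statistical pattern), model building, and prediction.
# All data, labels, and names here are invented for illustration.

# 1. Data collection: labeled examples of (message_length, label).
data = [(12, "short"), (8, "short"), (95, "long"),
        (120, "long"), (15, "short"), (88, "long")]

# 2./3. Training and feature extraction: from the one attribute that
# matters (length), learn the boundary that separates the two labels.
def train(examples):
    shorts = [x for x, y in examples if y == "short"]
    longs = [x for x, y in examples if y == "long"]
    # The learned "model" is just the midpoint between the two groups.
    return (max(shorts) + min(longs)) / 2

# 4. Model building: here, the model is a single learned threshold.
threshold = train(data)

# 5. Prediction: apply the model to data it has never seen.
def predict(x):
    return "short" if x < threshold else "long"

print(threshold)       # midpoint between the two groups: 51.5
print(predict(40))     # "short"
print(predict(200))    # "long"
```

The point of the sketch is that every step is mechanical: the system finds a statistical boundary and applies it, with no understanding of what "short" or "long" means.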
AI is used in countless industries as it continues to advance. Currently, AI has applications in analytics and insights, logistics, healthcare, and natural language processing. As it continues its march toward dominance in tech, it will expand into still more markets.
Testing Consciousness
AI consciousness is a debate that extends back to the 1950s and Alan Turing. Turing proposed the first test that could be used to determine AI intelligence, which could be further extended to consciousness. The goal of the test was to determine whether AI is capable of replicating human intelligence. Turing set up the test using an interviewer, a computer, and another human participant. The interviewer is placed in a room separate from the computer and the other participant. The computer and the participant then respond to a series of questions. The test ends with the interviewer identifying either the computer or the person as the human. If the interviewer falsely identifies the computer as a human, the AI passes the Turing Test. There have been few instances in the past where an AI has passed this test; however, newer AI models, like ChatGPT, are increasingly adept at natural language processing. These models can easily trick interviewers and pass.
(Image depicting Turing Test set up procedure.)
Passing the Turing Test is viewed in two ways in the context of this debate:
First, the passing machine has the ability to think for itself and has learned human-like speech patterns. This would imply that the AI possesses an ability to think and can be considered conscious.
Second, the AI model is simply very good at regurgitating trained responses. This position recognizes that AI is capable of statistically finding patterns and returning answers. The AI in this case does not truly learn anything: it has trained on natural language data and statistically determines the best response.
Another pivotal consciousness test was introduced later, in 1980, by John Searle. His theoretical test was called the Chinese Room Experiment. It remains a thought-provoking experiment in the AI consciousness debate, with supporters and skeptics actively referencing his work.
(AI depiction of the Chinese Room Experiment.)
In this experiment, a computer is placed in a box devoid of all outside stimuli other than strings of input. These input strings are in a language unknown to the computer. Inside the box with the computer is a book holding the response to every possible input string. For each input, the computer must find the corresponding output string and return it outside the box. The computer is considered to pass when the people outside the box are convinced that it is fluent in the input language.
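The room's rule book can be sketched as nothing more than a lookup table. The transliterated strings below are invented placeholders standing in for sentences in the unknown language; the mapping itself is the whole "book."

```python
# A minimal sketch of Searle's setup: the "book" is a lookup table
# mapping input strings to canned responses. The strings here are
# invented placeholders, not real rules from Searle's paper.
rule_book = {
    "ni hao": "ni hao ma?",
    "xie xie": "bu ke qi",
}

def chinese_room(input_string):
    # The room returns the matching response without any grasp of what
    # either string means -- which is exactly Searle's point.
    return rule_book.get(input_string, "?")

print(chinese_room("ni hao"))   # a fluent-looking reply, zero comprehension
```

Whether such a lookup process could ever amount to understanding is precisely what the two camps below disagree on.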
Supporters of this experiment believe that the computer is thinking when generating the responses, gaining a human-like understanding of natural language and actively learning the language. This demonstration of understanding is then considered to be consciousness within the passing machine. Skeptics counter that a computer can pass simply because it is very good at producing output strings; throughout the Chinese Room Experiment, the computer gains no understanding of the language and therefore cannot be considered conscious.
Opinion: Calculator
One significant perspective in the ongoing debate is held by those who view AI as a sophisticated tool rather than a conscious entity. This viewpoint, which I refer to as the “Calculator” perspective, asserts that AI will continue to function primarily as a tool. A prominent advocate for this stance is Erik J. Larson. Larson is a writer, tech entrepreneur, and computer scientist in the AI space. He has worked for a number of AI startups and has published on AI's inability to possess consciousness.
Larson believes that AI is a glorified calculator: it takes in large amounts of data, finds statistical patterns, and provides an output. He highlights AI's inability to think by pointing to the incorrect answers it produces. He asserts that a conscious AI would be capable of learning from inputted data, so its number of incorrect responses would approach zero. This is not the case with current AI models. For example, ChatGPT provides incorrect answers to complex prompts. When it is challenged, it often defends the incorrect response with a string of incorrect logic. A conscious AI would be able to discern the correctness of its own outputs. Currently, AI models require human-in-the-loop interactions to gauge the correctness of responses: users must like or dislike outputs to reinforce patterns within the model.
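That human-in-the-loop dependence can be sketched in a few lines. This is not how any real model's feedback pipeline works; it is an invented toy in which the system has no notion of correctness at all, and only user likes and dislikes steer which canned response it prefers.

```python
# Toy sketch of the human-in-the-loop feedback described above: the
# system cannot judge correctness itself, so user likes/dislikes adjust
# which canned response it favors. Responses and scores are invented.
scores = {"response A": 0, "response B": 0}

def feedback(response, liked):
    # A like reinforces the response; a dislike penalizes it.
    scores[response] += 1 if liked else -1

def best_response():
    # The system simply returns whichever response humans rated highest.
    return max(scores, key=scores.get)

feedback("response A", liked=False)  # a user flags an incorrect answer
feedback("response B", liked=True)
print(best_response())               # "response B" -- steered entirely by humans
```

Notice that nothing in the sketch evaluates truth; remove the human feedback and the system has no way to prefer a correct answer over a wrong one.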
Larson further argues that AI is just a tool using simple logic statements. AI models struggle to reverse them: when told “A is B,” it is expected that the AI could respond “B is A,” yet this simple logical reversal succeeds only around 33% of the time. Larson reads this failure as a deficiency in comprehension and learning. If AI were truly conscious, it would quickly grasp simple logical structures and consistently perform logical reversals with a high degree of accuracy.
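The shape of this failure can be illustrated with a deliberately crude sketch. The names and relations below are invented; the point is that a system which stores a fact only in the direction it was trained on cannot answer the reversed question, even though the reverse is logically implied.

```python
# Sketch of the reversal failure discussed above. A fact learned as
# "A is B" is stored only in the forward direction; nothing here infers
# the reverse. All names and relations are invented for illustration.
facts = {("Valentina", "mother_of"): "Marco"}  # trained: "Valentina is the mother of Marco"

def recall(subject, relation):
    # Lookup works only in the direction the fact was stored.
    return facts.get((subject, relation))

print(recall("Valentina", "mother_of"))  # "Marco" -- forward query succeeds
print(recall("Marco", "child_of"))       # None -- the reversal was never learned
```

A system that actually understood the relation would derive the reversed fact for free; a pattern store has to be separately trained on it.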
Opinion: Terminator
The opposing perspective in this debate sounds more like science fiction than fact. Holders of this viewpoint believe that AI is, or will become, conscious, and that it will use this consciousness to pose a threat to humanity. A prominent member of academia who holds this viewpoint is Nick Bostrom. Bostrom is a philosopher and professor who has held teaching positions at both Yale and Oxford. During this time, he has written on the dangers of AI and has claimed that it is a conscious threat to humanity.
The threat to humanity starts with a machine superintelligence: an AI system that possesses an intelligence level greater than a human's. Members of the “Terminator” camp are divided on whether AI is already more intelligent than humans. Bostrom asserts that AI is rapidly advancing and will hit an “intelligence explosion,” a significant event in which machines outpace the learning rate of humans and create a human dependence on these AI models. Bostrom furthers his claims by stating that AI can continue to develop its superintelligence using the existing internet infrastructure, leading to detrimental impacts on humanity.
Supposedly, a super-intelligent AI could have the ability to build covert nano-factories to produce nerve gases, hijack political processes, manipulate financial markets, and even hack human-made weapon systems, all for the extinction of humanity.
My Thoughts
Though I find the “Terminator” opinion very interesting, I agree with Larson. AI is an extremely fascinating tool that continues to impress me. It has developed rapidly in the last few years and has made tremendous strides in language processing, image creation, and other fields. I look forward to seeing its applications and how it can be used to improve existing technologies. With that said, I do not believe that AI has a consciousness or an independent ability to think. Simply put, it is an algorithm that detects statistical patterns with some degree of accuracy. It lacks reliable correctness and requires human input to further train its models; therefore, I believe AI is not conscious.
Resources
Cole, David. “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy, Stanford University, 20 Feb. 2020, plato.stanford.edu/entries/chinese-room/#TuriPapeMach.
Copilot. Images.
Horgan, John. “ChatGPT and the End of AI.” YouTube, YouTube, 23 Oct. 2023, www.youtube.com/watch?v=mU10I2xKBMI&t=2s.
“Nick Bostrom: ‘We Are like Small Children Playing with a Bomb.’” The Guardian, Guardian News and Media, 12 June 2016, www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine.
Oppy, Graham, and David Dowe. “The Turing Test.” Stanford Encyclopedia of Philosophy, Stanford University, 4 Oct. 2021, plato.stanford.edu/entries/turing-test/.