
Artificial Intelligence: A Step to a Dystopia or Utopia

By Eric Kim


We often think of humans as intelligent beings. Yet when it comes to applying general knowledge to unfamiliar domains, a curious lapse in logic arises: we seem unable to apply an axiom to subjects that we fear. This is the problem with the question of consciousness in artificial intelligence (AI). We believe we are utterly different from AI, when in fact we have quite a lot in common with it. Given this similarity, it seems machines could one day be brought to consciousness, just as human beings are.


We often dismiss computers because they are controlled by a “simple” two-symbol system, and we delude ourselves into believing that humans are far more complex and superior. Yet just as computers use binary code, we humans can also be described on a binary scale.


You may be confused or even enraged by this connection between the dualistic algorithms of computers and those of people. From Plato to Descartes to Lao Tzu, many influential philosophers have emphasized dualism, the belief that the world is made up of contrasting forces. Proponents of this view hold that the world is a mixture of one concept and its opposite. For instance, if Heaven is seen as purely made of concept “A,” while Hell is seen as purely made of the opposite, concept “B,” then Earth would be a neutral, balanced concoction of “A” and “B.”


But in a computer, that “A” and “B” would correspond to the 0s and 1s of its dualistic algorithm: “0” would represent the good and “1” the evil. So when we see a string like 01010010111011010100, it represents a concoction of good and evil.
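To make the analogy concrete, here is a minimal Python sketch (my own illustration, not part of the original essay) showing that a bit string can be read both as a blend of two opposing symbols and as a single value:

```python
# The 20-bit string from the essay, mixing "0" (good) and "1" (evil).
bits = "01010010111011010100"

# Count the blend of the two opposing symbols.
zeros = bits.count("0")
ones = bits.count("1")
print(zeros, ones)  # prints 10 10: an evenly balanced "concoction"

# The same string, read as one binary number.
print(int(bits, 2))
```

This particular string happens to contain ten of each symbol, which fits the essay's picture of Earth as a balanced mixture of the two.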


This capacity for layered thought can even be expressed mathematically. If we nest a function inside itself, there are numerous possibilities, such as f(f(x)), f(f(f(x))), and f(f(f(f(x)))), each representing a thought encapsulated in another thought. To put that into perspective, it is equivalent to saying “I think that I think that I think that this apple is red,” with “I think” repeated several times. This nesting of thought about thought is what we call metacognition. Our ability to understand such a statement reflects the uniqueness of human intelligence: when we say “I think that I think that I think,” we do not have to first recite “I think” and then “I think that I think” to know what the full statement means. Algorithms work in a similar way. They do not plod from “A” to “B” to “C”; rather, they can skip far more intermediate steps than any human can, potentially exceeding the capacity of human intelligence.
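The nested-function idea above can be sketched in a few lines of Python. This is my own illustration, with a hypothetical "thought" step f; the point is that one loop reaches any nesting depth directly, without spelling out each intermediate composition:

```python
def f(x):
    # A hypothetical single "thought" step; here it simply adds one.
    return x + 1

def nest(func, depth, x):
    """Apply func to its own output `depth` times: f(f(...f(x)...))."""
    for _ in range(depth):
        x = func(x)
    return x

print(nest(f, 1, 0))  # f(0) -> 1
print(nest(f, 3, 0))  # f(f(f(0))) -> 3
```

Just as the essay describes, the machine never recites the shorter compositions first; it jumps straight to whatever depth is asked for.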


But noting this, we need not become Luddites. We should not hesitate to embrace this capacity. Such intelligence will exist in the absence of emotion, and thus, contrary to popular belief, robots devoid of emotion will have no will for destructive conquest. Therefore, we should welcome the rise of this technology and focus instead on the people who abuse it. If a car crash occurs, is it the car or the driver’s fault? We must rationally identify the party at fault, not scapegoat one at random. Then we will see that AI is a reflection of us. Ultimately, it is not about the technology; the problem lies with us and our perspective: we will either stay in our own safe bubbles or venture into the millions of possibilities that we could create with AI.

