Do Computers Think Creatively?

July 21, 2016

by Jonathan Bartlett, The Blyth Institute

In his essays on “Robots and Rationality” and the “Chinese Room,” Tim Stratton explained why it is unreasonable to think of machines and computers as rational beings. One of his primary points is that rationality requires understanding and intentionality, two things that machines do not have. In this article, I want to tackle a different question: can computers think creatively? Many artificial intelligence researchers think that the difference between human thinking and artificial intelligence is merely one of degree, and that once we build big enough machines, they will be able to think exactly like a human.

In order to make that claim, though, one needs a definition of what it means to “think like a human.” In 1950, mathematician Alan Turing proposed the “Turing Test” for artificial intelligence. According to Turing, artificial intelligence will have succeeded when a computer can be built such that a human, conversing with it through a typed session, cannot tell whether the other party is a computer or another human. However, it turns out that this criterion is fatally flawed. A computer tricking a human into believing it is human turns out to be neither a particularly difficult nor a particularly interesting achievement, and it says little about whether the machine has attained human intelligence.

In 2014, an event was held at the Royal Society of London for judges to interact with a variety of humans and computer programs, to see if they could tell which was which. The entry that won the “most human computer” award was a program named Eugene Goostman, which was able to convince 33% of the judges that it was human. How did “Eugene Goostman” do this? The programmers presented Goostman as a 13-year-old boy from Ukraine. This way, his knowledge and logic mistakes were covered up by the idea that he was only 13, and his language mistakes were covered up by the idea that he was not a native English speaker. Thus, while “Eugene Goostman” was widely hailed in the press (and by its creators) as the first program to pass the Turing Test, the attempt, though cute, was fairly meaningless. What it proved was not that the AI was very human-like, but that the Turing Test is not a workable tool for assessing the quality of AI.

It turns out that, despite what their programmers may claim, these programs really are just computers. Microsoft unleashed a learning AI named Tay onto Twitter, and when a few people decided to have a little fun with her, they turned Tay into a Hitler-loving Jew hater in under 24 hours.

Now, I should say that I have no doubt that these particular problems will be solved. I am sure that eventually someone will produce a conversational AI that can fool most people. I am also sure that Microsoft will eventually figure out how to prevent Tay from going rogue and invading Poland. However, at the end of the day, what separates human thought from computation isn’t that humans are able to fool other humans into thinking that we are human, but that human thinking is creative, while artificial intelligence is entirely derivative of previous creativity. Some researchers have termed the ability to think creatively the “Lovelace Test,” named after Lady Lovelace, who worked with Charles Babbage on early computers. Lovelace believed that computers could not have minds because they could not originate anything.

So what is creativity, and how do we know that humans have it and computers don’t? It turns out that the answer was discovered even prior to Alan Turing’s seminal work that led to modern computers. In 1931, Kurt Gödel published his “Incompleteness Theorems.” At their core, these theorems state two things. First, given any consistent starting set of basic truths (called axioms) rich enough to express arithmetic, there are statements, expressible entirely in the system’s own terms, that are true but cannot be logically derived from those axioms. Second, no such system can prove its own consistency. This may seem very obscure, but it has profound implications. In the late 1800s and early 1900s, mathematicians led by David Hilbert were trying to find the core axioms of mathematics, from which they could deduce everything else, or at least verify mathematical proofs automatically. But, as it turns out, mathematical truth doesn’t work like that. You cannot develop a fixed set of rules (known as a “formal axiomatic system”) to do the work for you. Nonetheless, this has not stopped humans from discovering such truths themselves. This means that while computers by their nature are limited to a fixed set of rules for their computation, humans are not. Humans can take an “outside” perspective to solve problems and find truths.
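
For readers who want the formal version, the two theorems are conventionally stated along the following lines (this is the standard textbook formulation, sketched here in LaTeX; the symbols F, G_F, and Cons(F) are the usual notation from logic, not anything specific to this article):

```latex
% Standard statements of Goedel's incompleteness theorems.
\textbf{First Incompleteness Theorem.} If $F$ is a consistent,
effectively axiomatized formal system strong enough to express
elementary arithmetic, then there is a sentence $G_F$ such that
\[
  F \nvdash G_F \qquad\text{and}\qquad F \nvdash \lnot G_F,
\]
even though $G_F$ is true of the natural numbers.

\textbf{Second Incompleteness Theorem.} For the same class of
systems, if $F$ is consistent, then
\[
  F \nvdash \mathrm{Cons}(F),
\]
where $\mathrm{Cons}(F)$ is the sentence formalizing ``$F$ is
consistent'' inside $F$ itself.
```

The point relevant here is the gap this opens between what is true and what any fixed rule system can derive; humans routinely recognize truths that the rules themselves cannot reach.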

The reason artificial intelligence is so deceptively successful is not that computers are able to take the same creative steps that humans can, but that humans, after taking their own creative steps, can then go back and create rules that will generate those results. Thus, humans always take the creative steps, and the steps that artificial intelligences take are always derivative. Computers take derivative steps much faster and more easily than humans can, but computers cannot take the creative steps.

This truth has shown up in a number of powerful ways. In his book How to Measure Anything, Douglas Hubbard notes that, given a set of parameters for evaluation, computers are almost always better than humans at figuring out the relative importance of those parameters and applying them consistently. However, he also notes that humans are far better than computers at generating the list of parameters to use for evaluation in the first place. For instance, if you are sick, it is quite possible that a computer will be better than a doctor at applying diagnostic criteria to determine what illness you have. However, it is almost certain that a doctor will be better than the computer at determining which criteria the computer should use for the diagnosis. In artificial intelligence, this is known as the “frame problem”: figuring out which pieces of information are relevant to a problem is an insurmountable task for a computer, at least in the general case. Yet it is something humans do every day.
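
A toy sketch can make Hubbard’s division of labor concrete. In the snippet below (purely illustrative; the feature names, cases, and weights are all made up, and this is not any real diagnostic system), the machine’s job of weighting the parameters is a routine optimization loop, while the FEATURES list itself, the frame-setting step, has to be handed to it by a human:

```python
# Toy sketch of Hubbard's division of labor, with made-up data.
# The FEATURES list is the human contribution: deciding *which*
# observations matter. Given that list, finding good weights is
# routine optimization of the kind computers excel at.

FEATURES = ["fever", "cough", "fatigue"]   # human-chosen frame

# Hypothetical training cases: feature values plus a known diagnosis (1/0).
CASES = [
    ([1.0, 1.0, 0.0], 1),
    ([1.0, 0.0, 1.0], 1),
    ([0.0, 1.0, 0.0], 0),
    ([0.0, 0.0, 1.0], 0),
]

def predict(weights, values):
    """Weighted-sum score for a single case."""
    return sum(w * v for w, v in zip(weights, values))

def learn_weights(cases, steps=1000, learning_rate=0.1):
    """Crude gradient descent on squared error: the mechanical part."""
    weights = [0.0] * len(FEATURES)
    for _ in range(steps):
        for values, label in cases:
            error = predict(weights, values) - label
            for i, value in enumerate(values):
                weights[i] -= learning_rate * error * value
    return weights

weights = learn_weights(CASES)
print({name: round(w, 2) for name, w in zip(FEATURES, weights)})
# The weights fall out of the loop automatically; nothing in the loop
# can ever add a fourth feature the programmer forgot.
```

The optimization is entirely derivative: it can only rearrange importance among the parameters it was given, never notice that a relevant parameter is missing.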

In so-called “evolutionary computing,” this same issue arises. Humans must decide which parameters are evolvable and how the system is to be evaluated. Only after those things are set can the evolutionary algorithms successfully find solutions. Evolvability depends heavily on the programmer’s choices: picking parameters that can evolve well, and picking selection criteria that will move the program in the desired direction. In a highly touted paper in Nature that claims to demonstrate the evolution of complex features, the authors note that without selection criteria that rewarded the intermediate steps along the way, the digital organisms never evolved the features the researchers were looking for.
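
To see where the human choices sit in such a system, here is a minimal genetic algorithm sketch (illustrative only; this is not the Avida system from the Nature paper, and the target, genome encoding, population size, and mutation rate are all invented for the example):

```python
import random

# Toy genetic algorithm. Every value marked "human-chosen" is supplied
# by the programmer before any "evolution" runs; the loop itself only
# recombines what it was given.

TARGET = [1] * 20                 # human-chosen: what counts as success
GENOME_LENGTH = len(TARGET)       # human-chosen: the evolvable parameters
MUTATION_RATE = 0.02              # human-chosen: how variation is generated
POPULATION_SIZE = 100

def fitness(genome):
    """Human-chosen selection criterion: similarity to the target."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome):
    """Flip each bit with probability MUTATION_RATE."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LENGTH:
        print(f"perfect genome found at generation {generation}")
        break
    # Selection: keep the fittest half, refill with mutated survivors.
    survivors = population[:POPULATION_SIZE // 2]
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + offspring
```

Change the fitness function, and the “evolution” heads somewhere else entirely; the creative work was done when the programmer decided what to reward.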

Peter Thiel, co-founder of PayPal, noted in his book Zero to One that,

computers are far more different from people than any two people are different from each other; men and machines are good at fundamentally different things. People have intentionality – we form plans and make decisions in complicated situations. We’re less good at making sense of enormous amounts of data. Computers are exactly the opposite: they excel at efficient data processing, but they struggle to make basic judgments that would be simple for any human. . . .

In 2012, one of [Google’s] supercomputers made headlines when, after scanning 10 million thumbnails of YouTube videos, it learned to identify a cat with 75% accuracy. That seems impressive – until you remember that an average four-year-old can do it flawlessly. When a cheap laptop beats the smartest mathematicians at some tasks but even a supercomputer with 16,000 CPUs can’t beat a child at others, you can tell that humans and computers are not just more or less powerful than each other – they are categorically different. (pp. 143-144)

As I mentioned, there is no particular task that I think computers will never be able to do. Whenever humans figure out how to do something, they can then teach computers to do it efficiently. What humans can do that computers cannot is be creative: to find ways to automate and think about things that have never been automated before.

While this article has touched on some technical topics, a skeptic of this position may want more formalized reasons, which are outside the scope of an article such as this. Those interested in a more rigorous presentation of these ideas can find it in Chapter 5 of the book Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft.

If this sort of discussion interests you, we will be covering similar topics in the upcoming online conference on Alternatives to Methodological Naturalism. You can find more at http://am-nat.org/


NOTES

  • Thiel, Peter and Blake Masters. Zero to One: Notes on Startups, or How to Build the Future. Crown Business. 2014.
  • Hubbard, Douglas W. How to Measure Anything: Finding the Value of Intangibles in Business. Wiley. 2010.
  • Larson, Erik J. “Eugene Goostman is a Fraud.” Evolution News. July 9, 2014. http://www.evolutionnews.org/2014/07/eugene_goostman087641.html
  • Johnson, Sam. “An Interview with Jewish Artificial Intelligence, Eugene Goostman.” Huffington Post. June 24, 2014. http://www.huffingtonpost.com/sam-johnson/an-interview-with-jewish-_b_5497972.html
  • Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59:433-460. 1950.
  • Bringsjord, Selmer, et al. “Creativity, the Turing Test, and the (Better) Lovelace Test.” Minds and Machines 11:3-27. 2001.
  • Stratton, Tim. “Robots and Rationality: Not the Droids You’re Looking For.” Free Thinking Ministries.
  • Lenski, Richard E., et al. “The Evolutionary Origin of Complex Features.” Nature 423:139-144. 2003.
  • Moore, Gregory. “The Incomplete Gödel.” American Scientist 93(5). 2005. http://www.americanscientist.org/bookshelf/pub/the-incomplete-g-del
  • Bartlett, Jonathan. “Using Turing Oracles in Cognitive Models of Problem-Solving.” In Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft, edited by Bartlett et al. 2014. Pages 99-122.
  • Weinberger, Matt. “Microsoft Apologizes for its Racist Chatbot’s ‘Wildly Inappropriate and Reprehensible Words’”. Business Insider. March 25, 2016. http://www.businessinsider.com/microsoft-apologizes-for-tay-twitter-meltdown-2016-3
