Theories of Intelligence

Howard Gardner, a developmental psychologist at Harvard, became famous for popularizing his theory of "Multiple Intelligences". According to Gardner, there are eight different types of intelligence (with intelligence defined as "the capacity to solve problems or to fashion products that are valued in one or more cultural settings"):

  1. Linguistic Intelligence
  2. Logical-mathematical Intelligence
  3. Musical Intelligence
  4. Bodily-kinesthetic Intelligence
  5. Spatial Intelligence
  6. Interpersonal Intelligence
  7. Intrapersonal Intelligence
  8. Naturalistic Intelligence

Theories of Artificial Intelligence

Alan Turing

Alan Turing was one of the forerunners of Computer Science. In a paper entitled "Computing Machinery and Intelligence", he proposed that computers had the potential to be as intelligent and "human" as humans are. Moreover, he designed an "Imitation Game" (now known as the Turing Test) to tell whether a computer had achieved intelligence. In this "imitation game", a person must distinguish between two entities (one a computer, and the other a human) communicating with him from another room. If the player cannot determine which is the computer with an accuracy better than random guessing, then the computer has passed the test and is "intelligent".

John McCarthy

John McCarthy was a professor of Computer Science. Among his many accomplishments was the invention of the programming language LISP (short for "List Processing"). LISP was not originally designed to be implemented in machine code -- it was a theoretical construct, showing one of the simplest possible ways to build a Turing-complete language for expressing algorithms. However, it was later found that LISP could be implemented in machine code, and it was commonly used in the AI research community for a long time.

Marvin Minsky

Marvin Minsky is a cognitive scientist, philosopher, and computer scientist who did foundational work in the field of artificial neural networks.

Norbert Wiener

Norbert Wiener was a philosopher and a mathematician who did early work in the field of "cybernetics" or the study of "systems" (Wiener defined it as "the scientific study of control and communication in the animal and the machine"). He said that all intelligence could be conceptualized as collections of feedback loops.

Chatbot Project

We worked on a project to make an AI that would have a conversation with a user. We used Python's random module to have the chatbot pick a random response from a list. We also used the "in" operator in Python to have the chatbot respond differently to different input. For example, if the program senses the word "stupid" in the user's input, it will respond "saying that something is stupid is silly". We also used a loop so the chatbot either keeps responding to user questions or stops responding, depending on user input.
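The actual project code is linked below; as a minimal sketch of the three ideas described above (random.choice, the "in" operator, and a loop), with placeholder response lists that aren't the ones from the real chatbot:

```python
import random

# Canned fallback replies -- placeholder text, not the original project's.
GENERIC_REPLIES = ["Interesting!", "Tell me more.", "Why do you say that?"]

def respond(user_input):
    """Pick a reply: keyword checks first, then a random fallback."""
    text = user_input.lower()
    if "stupid" in text:   # the "in" operator checks for a substring
        return "saying that something is stupid is silly"
    if "bye" in text:
        return None        # signals the loop to stop
    return random.choice(GENERIC_REPLIES)

# Demo conversation with scripted lines standing in for input():
for line in ["hello", "that's stupid", "bye"]:
    reply = respond(line)
    if reply is None:
        break
    print("Bot:", reply)
```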

Here is the code for my chatbot.

Here is a picture of my chatbot chatting.

Importing

Tuesday morning, we made a pair of functions that ask the user if he or she would like to play again, and then end the program if the user answers no. We thought these would be useful in many of the projects we're working on this year, so we learned how to import functions from one Python module into another Python module.
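A sketch of how such a pair of functions might look -- the module name `replay.py` and the function names here are hypothetical, not the ones from class:

```python
# replay.py -- hypothetical module name; the real project's names may differ.

def is_yes(answer):
    """Interpret a typed answer as yes."""
    return answer.strip().lower() in ("y", "yes")

def wants_to_play_again(ask=input):
    """Ask until we get a recognizable answer; return True to keep playing."""
    while True:
        answer = ask("Play again? (yes/no) ").strip().lower()
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False
        print("Please answer yes or no.")

# Another module can then reuse these functions with:
#     from replay import wants_to_play_again
# and end its main loop when wants_to_play_again() returns False.
```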

Turing Test

Tuesday afternoon, we ran an actual Turing test with our chatbots. A student would leave the room, and the rest of the class would ask him or her questions in an attempt to divine whether the entity answering was the student's chatbot or the student. The class guessed wrong 3 out of 5 times, so overall, the students selected did a good job.

Factbot project

On Wednesday, we used the Wolfram|Alpha API to create a factbot. It asks the user if he or she has a question, queries Wolfram|Alpha with the input, and then prints the answer Wolfram|Alpha sends back. After that, it asks if the user has another question -- if the user says yes, it asks for more input, and if the user says no, the program stops.
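The real code is pictured below; as a rough sketch of the same ask-query-print loop, using Wolfram|Alpha's Short Answers API over plain urllib (the AppID here is a placeholder -- a real one comes from the Wolfram|Alpha developer portal, and the class may have used a different API or client library):

```python
import urllib.parse
import urllib.request

APP_ID = "DEMO-APPID"  # placeholder, not a working AppID

def build_query_url(question, app_id=APP_ID):
    """Build a request URL for Wolfram|Alpha's Short Answers API."""
    return ("https://api.wolframalpha.com/v1/result?"
            + urllib.parse.urlencode({"appid": app_id, "i": question}))

def ask_wolfram(question):
    """Send the question and return the plain-text answer."""
    with urllib.request.urlopen(build_query_url(question)) as response:
        return response.read().decode("utf-8")

def main():
    # Interactive loop (needs a network connection and a valid AppID).
    while True:
        question = input("What is your question? ")
        print(ask_wolfram(question))
        if input("Another question? (yes/no) ").lower().startswith("n"):
            break
```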

Here is a picture of the code for the factbot.

Here is a picture of the factbot working.

We also made the factbot speak and write its questions and answers to a text file. Here is a picture of the code we used.
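A sketch of those two additions, assuming pyttsx3 (one common offline text-to-speech library -- the class may have used a different one) for the speaking part and a plain append-mode file for the logging part:

```python
# pyttsx3 is optional here: if it isn't installed or can't start,
# the factbot simply stays silent.
try:
    import pyttsx3
    _engine = pyttsx3.init()
except Exception:
    _engine = None

def speak(text):
    """Read text aloud if a speech engine is available."""
    if _engine is not None:
        _engine.say(text)
        _engine.runAndWait()

def log_exchange(question, answer, path="factbot_log.txt"):
    """Append one question/answer pair to a text file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write("Q: " + question + "\n")
        f.write("A: " + answer + "\n")
```

Each answer from the API can then be passed to both `speak` and `log_exchange` before the loop asks for the next question.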

Watson

Today we watched a Nova special on an IBM supercomputer ("Watson") designed to play Jeopardy. The main problem the designers of this supercomputer had to overcome was its lack of context. Humans are very good at processing natural language, and they're also very good at quickly making inductive conclusions; because of this, the computer makes mistakes that a human never would. For example, at one point, Watson answered "Who is Richard Nixon?" to a clue asking for a first lady. Humans in the English-speaking world know that a "First Lady" must be female, because "lady" is a name for a woman, and they would know that "Richard" is a male name. The researchers making Watson were able to approximate this "common sense" (at least with regard to answering Jeopardy questions) through machine learning. In machine learning, a computer is given tools for finding patterns and correlations, and then given a gigantic data set of some classification. For example, a computer learning to recognize a cow might be given the ability to perceive color and shape, and then shown many images of cows. This technique ended up giving Watson the ability to compete on equal footing with Jeopardy champions like Ken Jennings, but it still wasn't as good at figuring out context clues as a human was.
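The cow example can be sketched as a toy nearest-neighbor classifier: the "tools" are two invented numeric features (a color score and a shape score), the "gigantic data set" is a handful of made-up labeled examples, and a new image gets the label of its closest match. This is far simpler than anything Watson does, but it shows the pattern-from-data idea:

```python
import math

# Toy labeled data set: (color score, shape score) -> label.
# All feature values are invented for illustration.
TRAINING_DATA = [
    ((0.9, 0.8), "cow"),
    ((0.8, 0.9), "cow"),
    ((0.2, 0.3), "not cow"),
    ((0.1, 0.2), "not cow"),
]

def classify(features, data=TRAINING_DATA):
    """Label a new example with the label of its nearest neighbor."""
    def distance(example):
        point, _label = example
        return math.dist(point, features)
    _point, label = min(data, key=distance)
    return label
```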