Week 1 - Reading
Chapter 1
- The definitions on top are concerned with thought processes and reasoning, whereas the ones on the bottom address behavior. The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure called rationality. A system is rational if it does the “right thing,” given what it knows.
- The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.
- The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to construct precise and testable theories of the human mind.
- An agent is just something that acts (agent comes from the Latin agere, to do). Of course, all computer programs do something, but computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
- This book therefore concentrates on general principles of rational agents and on components for constructing them.
- Descartes was a strong advocate of the power of reasoning in understanding the world, a philosophy now called rationalism, and one that counts Aristotle and Leibniz as members. But Descartes was also a proponent of dualism.
- Given a physical mind that manipulates knowledge, the next problem is to establish the source of knowledge.
- The final element in the philosophical picture of the mind is the connection between knowledge and action. This question is vital to AI because intelligence requires action as well as reasoning.
- The first nontrivial algorithm is thought to be Euclid’s algorithm for computing greatest common divisors.
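As a quick illustration (my own sketch, not from the book), Euclid's algorithm repeatedly replaces the pair (a, b) with (b, a mod b) until the remainder is zero:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the greatest common divisor is unchanged
    when the pair (a, b) is replaced by (b, a % b)."""
    while b != 0:
        a, b = b, a % b
    return abs(a)

assert gcd(48, 36) == 12
assert gcd(17, 5) == 1  # coprime inputs have GCD 1
```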
- Gödel’s incompleteness theorem showed that in any formal theory as strong as Peano arithmetic (the elementary theory of natural numbers), there are true statements that are undecidable in the sense that they have no proof within the theory.
- Neuroscience is the study of the nervous system, particularly the brain. Although the exact way in which the brain enables thought is one of the great mysteries of science, the fact that it does enable thought has been appreciated for thousands of years because of the evidence that strong blows to the head can lead to mental incapacitation.
- The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness or, in the pithy words of John Searle (1992), brains cause minds.
- The fact that a program can find a solution in principle does not mean that the program contains any of the mechanisms needed to find it in practice.
Chapter 2
- An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
- In general, an agent’s choice of action at any given instant can depend on the entire percept sequence observed to date, but not on anything it hasn’t perceived.
- Internally, the agent function for an artificial agent will be implemented by an agent program.
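To make the function/program distinction concrete, here is a minimal Python sketch in the spirit of the table-driven agent the chapter uses to illustrate agent programs; the specific percepts and table entries below are made-up placeholders:

```python
def make_table_driven_agent(table):
    """Return an agent program that implements an agent function
    given explicitly as a table from percept sequences to actions."""
    percepts = []  # the percept sequence observed so far

    def agent_program(percept):
        percepts.append(percept)
        # Look up the entire percept history; a real table would be
        # astronomically large, which is why this is only conceptual.
        return table.get(tuple(percepts), "NoOp")

    return agent_program

# Hypothetical table fragment for a two-square vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # -> Right
print(agent(("B", "Dirty")))  # -> Suck
```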
- A rational agent is one that does the right thing—conceptually speaking, every entry in the table for the agent function is filled out correctly.
- As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
- For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
- Our definition of rationality does not require omniscience, then, because the rational choice depends only on the percept sequence to date.
- To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy.
- For the acronymically minded, we call this the PEAS (Performance, Environment, Actuators, Sensors) description.
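As a sketch, the chapter's automated-taxi example can be written down as a simple PEAS structure (the exact wording of the entries here is paraphrased from memory):

```python
# PEAS description of an automated taxi driver (illustrative sketch).
taxi_peas = {
    "Performance measure": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "keyboard"],
}
```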
- In contrast, some software agents (or software robots or softbots) exist in rich, unlimited domains.
- Episodic vs. sequential: In an episodic task environment, the agent’s experience is divided into atomic episodes.
- Known vs. unknown: This distinction refers not to the environment itself but to the agent’s (or designer’s) state of knowledge about the “laws of physics” of the environment, that is, the outcomes of its actions.
- Discrete vs. continuous: This distinction applies to the state of the environment, to the way time is handled, and to the agent’s percepts and actions.
- Static vs. dynamic: If the environment can change while the agent is deliberating, it is dynamic for that agent; otherwise it is static.
- Such experiments are often carried out not for a single environment but for many environments drawn from an environment class.
- The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
- Simple reflex agents work only if the correct decision can be made on the basis of only the current percept—that is, only if the environment is fully observable. Even a little bit of unobservability can cause serious trouble.
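The chapter's two-square vacuum world makes this concrete; here is a Python sketch of that reflex agent, where each percept is a (location, status) pair:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-square vacuum world: the
    chosen action depends only on the current percept."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```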
- The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences.
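A structural sketch of my own (component behaviors are stubs passed in as callables) showing how the problem generator fits alongside the other learning-agent components named in the chapter:

```python
class LearningAgent:
    """Skeleton of the learning-agent architecture: the performance
    element chooses actions, the critic gives feedback relative to a
    performance standard, the learning element uses that feedback to
    improve the performance element, and the problem generator
    suggests exploratory actions that yield informative experiences."""

    def __init__(self, performance_element, critic, learning_element, problem_generator):
        self.performance_element = performance_element
        self.critic = critic
        self.learning_element = learning_element
        self.problem_generator = problem_generator

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(feedback, self.performance_element)
        # Sometimes deliberately try something new and informative
        # rather than the currently best-known action.
        exploratory_action = self.problem_generator(percept)
        if exploratory_action is not None:
            return exploratory_action
        return self.performance_element(percept)
```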
- In an atomic representation each state of the world is indivisible—it has no internal structure.
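A tiny illustration of my own: with an atomic representation an algorithm can only compare whole states for equality, since a state carries no visible attributes.

```python
# Atomic representation: each state is an opaque, indivisible label,
# e.g. a city name in a route-finding problem.
atomic_state = "Arad"

# For contrast, what the chapter goes on to call a factored
# representation exposes internal attributes an algorithm can inspect.
factored_state = {"city": "Arad", "fuel_level": 0.75, "time": "09:00"}
```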
Chapter 26
- Alan Turing, in his famous paper “Computing Machinery and Intelligence” (1950), suggested that instead of asking whether machines can think, we should ask whether machines can pass a behavioral intelligence test, which has come to be called the Turing Test.
- The inability to capture everything in a set of logical rules is called the qualification problem in AI.
- Turing calls this the argument from consciousness—the machine has to be aware of its own mental states and actions.
- Philosophical efforts to solve this mind–body problem are directly relevant to the question of whether machines could have real minds.
- They have focused in particular on intentional states. These are states, such as believing, knowing, desiring, fearing, and so on, that refer to some aspect of the external world.
- The theory of functionalism says that a mental state is any intermediate causal condition between input and output.
- Although we cannot rule out the second possibility, it reduces consciousness to what philosophers call an epiphenomenal role—something that happens, but casts no shadow, as it were, on the observable world.
- In the Chinese Room thought experiment, the person following the rule book does not understand Chinese; therefore, there is no understanding of Chinese. Hence, according to Searle, running the right program does not necessarily generate understanding.
- Running through all the debates about strong AI—the elephant in the debating room, so to speak—is the issue of consciousness.
- This explanatory gap has led some philosophers to conclude that humans are simply incapable of forming a proper understanding of their own consciousness.