Q. Can computers know?
A. This is largely a question of definition. If a camera
looked at a table, we could say it "knows" that there are
four containers of liquid on the table (which was true).
This is definitely a very old school approach to AI and one that I don't find very convincing. If a computer with a camera is looking at a table with four containers of liquid, to say that it "knows" "there are four containers of liquid on the table" presupposes that it "knows" what a "container" is, what a "liquid" is, and what a "table" is, and that it can recognize and point out each of these things in its 640x480 grid of RGB values and describe the essential properties of a container, a liquid, and a table. Even that presupposes that things like "container", "liquid" and "table" have "essential properties".
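The point about the 640x480 grid can be made concrete. A camera frame, as a sketch in Python with NumPy (the shape and dtype here are just illustrative), is nothing but an array of integers; "container", "liquid", and "table" appear nowhere in the data:

```python
import numpy as np

# A hypothetical 640x480 camera frame: three 8-bit channels per pixel.
# There is no "table", "liquid", or "container" anywhere in this data,
# just 921,600 raw numbers between 0 and 255.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

print(frame.shape)  # rows, columns, RGB channels
print(frame.size)   # total count of raw values
print(frame.dtype)  # each value is just an unsigned byte
```

Any claim that the camera "knows" what is on the table has to explain how categories like "container" get picked out of this undifferentiated grid in the first place.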
(That's an interesting question: what makes a table a table? Not all tables are made out of the same material or have the same color. Not all tables have four legs (or legs at all!) and not all have a flat surface. Not all tables are the same height or width or are used for the same purposes. What, then, makes some particular table "a table"?)
We don't say that a camera "knows" there are three glasses of water on the table when it takes a picture, any more than we say a newborn baby "knows", when he looks at a chessboard, that Kasparov has a mate in three.
I guess I shouldn't take McCarthy's comments at a freshman seminar as representative of his most thorough theories on AI, but I found that one answer particularly naive.
One can only express so much in a short sentence, but McCarthy's reply does sum up one version of a traditional cognitivist understanding of perception: the idea that knowledge is raw data and that thought is processing. It seems to me that much of traditional AI is unfairly dismissed, just as behaviorism was largely unfairly dismissed by the AI people. Such is the peril of a science being ruled by trends in the absence of strong findings (unlike, say, physics, the paradigm of a "real" science).
But I also think there are some very important ideas in the more recent work that began with connectionism and led to embodied/enactivist approaches. That work suggests a definition along these lines: knowing is an organism's ability to interact effectively with its environment, which would involve correctly predicting the results of actions performed on an object. That implies that both the organism and the environment are involved in knowledge.
The camera has no knowledge of the table because it has never lifted the table and felt its weight, never become aware of its ability to throw it (and how far), to set it down (and how its weight affects how quickly it hits the floor), or to use its surface as a stable place for setting other objects. All of these interactions build the perceptual skills necessary to know and understand the table, which is to say, to have a trained neural network controlling and planning behavior and categorizing experience against these trained expectations.
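The interactionist definition above can be sketched as a toy program. Everything here (the `Agent` class, the action names, the "knows" criterion) is a hypothetical illustration, not a real architecture: an agent "knows" an object only insofar as its history of interaction lets it predict what its actions will do to that object.

```python
class Agent:
    """A toy agent whose 'knowledge' is a learned forward model."""

    def __init__(self):
        # Maps an action on the object to its expected outcome,
        # filled in only through actual interaction.
        self.model = {}

    def experience(self, action, outcome):
        """Interact with the object and remember what happened."""
        self.model[action] = outcome

    def predicts(self, action):
        """Expected outcome of an action, or None if never experienced."""
        return self.model.get(action)

    def knows(self, interactions):
        """'Knows' the object iff it predicts every interaction correctly."""
        return all(self.predicts(a) == o for a, o in interactions.items())


# How a particular (hypothetical) table actually behaves under action:
table = {"lift": "heavy", "push": "slides", "place_cup_on": "stable"}

camera = Agent()  # has only ever imaged the table, never acted on it
body = Agent()
for action, outcome in table.items():
    body.experience(action, outcome)

print(camera.knows(table))  # False: no interaction history, no predictions
print(body.knows(table))    # True: its predictions match its experience
```

The point of the sketch is only that "knowledge" here is a relation between the agent's action repertoire and the object's behavior, not a property of a stored snapshot.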
(That seems to imply some guiding principles for implementing AI: (1) basic locomotion and physical interaction with the world is an important and non-trivial problem, and (2) there needs to be a linking theory that extends basic-level knowledge to novel, abstract categories of knowledge grounded in the earlier type. For (1), much neuroscience work is relevant, including constructivist/modeling approaches and the behavior-based AI paradigm; for (2), one major linking paradigm is the one that started with Rosch, Lakoff, Fauconnier, Gibbs, etc., currently under the headings of conceptual metaphor and blending theory, cognitive linguistics, and embodied cognitive science.)
Knowledge about the world or some part of it potentially gives the ability to purposefully change it, or influence it if you wish. At least I'd be more comfortable with this definition of knowledge that distinguishes a camera from a human.