Proposal: “The Problem of Knowledge and Data”

Despite a few technical snags, my proposal is in the hands of my committee members and thus beyond my control for the next two months. Time to shift gears and focus on paper-units for a while.

Since I put up a draft of my abstract, I thought I should go for broke and post my proposal as well. It’s shifted a bit in focus… lemme see if I can summarize it (this version has no abstract):

Knowledge. Kinda important issue for artificial intelligence and cognitive science. But a slippery beast—we don’t really know that much about what it could or should or might be. And one particularly intriguing question is how the detailed, transient, particular signals of an intelligent agent’s sensorimotor experience are related to the abstract, stable, and general summary information that we call knowledge. This is the aforementioned problem of knowledge and data.

Now, in artificial intelligence research, there’s been a lot of work on the representation of knowledge: in particular, what kind of structure should be used and what should go in it. Then, given different kinds of representations, how the knowledge is grounded (or given meaning) and how its content might be verified (and what it means for it to be true in the first place). That research has been helpful and interesting. But there’s an even-more-basic question about knowledge representation, which is what the knowledge is about—what does the representation represent? This choice about what knowledge refers to has consequences.

So although we’ve spent lots of time on figuring out the problems and advantages of different representational schemes, there hasn’t been as much talk about different referents. But I think it matters, for grounding and verification and usability. In the proposal I briefly talk about the differences between taking an objective stance, which says knowledge is about the objects and laws of the physical world, and taking an empirical stance, which says knowledge is about patterns in sensorimotor data. 

That’s the setup. The actual thesis work I’m proposing has two parts. First, I want to analyze what we’ve done in AI for knowledge, particularly with respect to how knowledge and data interact. What are the strengths and weaknesses of different choices of referent and approaches to the problem of knowledge and data? After getting a clear handle on what we’ve got, I want to see what I can do. I want to implement a predictive representation specifically for general knowledge representation and see if our various tools for abstraction can actually turn around some of the current weaknesses in empirical representations. 

That’s the gist. It’s more carefully laid out in the proposal. Let me know what you think! Love it? Hate it? Reserving judgement over whether or not this is actually a comp sci thesis? After incubating the ideas for ages I’m looking forward to hearing what people think!

I’ll be giving a practice candidacy talk in a Tea-Time-Talk soonish. Patrick will keep us all posted…

The Problem of Knowledge and Data (pdf)

The logic works regardless of what the variables are…

A random thought, from reading this article on cheating (Mike’s fault)

and bumping into this quote: “Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are.”

Which is standard stuff but it struck me that the whole problem of this view is the problem of definition (Plato’s problem in the Margolis and Laurence survey).

Logic works regardless of what the statements are as long as the variables mean what you wanted them to mean. “If P, then Q. P, therefore Q.” This is true so long as the entities you want to sub in for P and Q can properly take a true or false value. So we get told modus ponens as if it’s “this is a universal truth” and well, it’s more like 1+1=2, isn’t it? That *can* be one of the universals. Doesn’t have to be (dangit, I have to read up on Gödel one of these days). And even so, it rather hinges on the definition of 1 and 2. One cup of water and one cup of sugar doesn’t make two cups of anything.
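To make the point concrete, here’s a quick sketch (mine, not from the article) of what “the logic works regardless of what the statements are” actually buys you: modus ponens is valid in the truth-table sense, but only once you’ve granted that P and Q are the kind of things that take a truth value at all.

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# Modus ponens: from (P -> Q) and P, conclude Q.
# The form is valid if, in every row of the truth table where
# both premises hold, the conclusion holds as well.
valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p
)
print(valid)  # True
```

The check succeeds for every substitution of truth values, which is exactly the sense in which the variables don’t matter. What the table can’t tell you is whether the sentence you want to plug in for P really is a crisp true-or-false proposition in the first place.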

Reading further (the article’s quite interesting) I see I’m not alone in this wait-a-minute reaction and now I have to look up the Wason selection task and David Buller’s critique of it.

Anyway, apparently, “meaning matters” is going to be my new hobby-horse.
