bryan: Am I in the right room now?
caro: moderator testing
tom: Nope--Bryan, please log out and log in again. Then you should be in the right place.
ted: Hi everybody. I suppose it's about that time.
caro: Introducing Ted O'Connor, who will graduate this June with a B.S. in Computer Science from Rose-Hulman. Welcome, Ted! Please computationally enlighten us on the matter of entities and propositions.
caro: There seems to me to be a great deal of similarity between your theory, Bryan Register's theory, and the theory Tom Radcliffe and I have been developing. Would you say that's the case, or not, after the discussion with Bryan just now?
ted: Yes, I'd say that the theories are very similar. I think that we're all approaching the same sort of position, but perhaps in different ways or from different perspectives. I know I very much enjoyed reading Bryan's paper; it struck me as very congruent with how I had been thinking about the topic.
caro: From Bryan: You cite Rand defining 'perception' as a mental process. You identify percepts with entities. But you then suggest (p. 3) that entities are in the mind. Earlier remarks suggest (and obviously I agree) that entities exist only in relation to a mind, but that's radically different from saying that entities are in the mind. This seems to be not only unmotivated by your earlier remarks, but extremely counterintuitive. Are you just saying, obliquely, that entities exist only in relation to minds? It seems that that view is consistent with the rest of your remarks.
caro: Easier for me, people, if you stop _labeling_ your questions and key questions.
caro: Sorry, that was meant for the other window. I don't know how this is happening. Let me log off and try again.
ted: I honestly don't see any fundamental, much less radical, difference in the two statements.
caro: I'm going to take these terminological questions first. Tom, remind me to come back to fuzziness.
caro: caro test
caro: From Bryan (who I'm sure will want to remark on your last response to his question): You say (and you're sure right!) that anything that is, is in some way. But you then infer that anything that is, is encoded in some way. While it seems that maybe anything mental might have to be encoded, it doesn't seem that anything else needs to be. (Do I 'encode' a chair when I make one?) Maybe you're just using 'encode' in a way unfamiliar to me? (Likely, since I know next-to-no CS!)
ted: In the paper, I infer that entities are encoded in some way. I think this claim generalizes to any mental content, but not necessarily further.
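(A minimal sketch of the encoding claim, for readers who want it concrete. Nothing here is from Ted's paper beyond the term "EC-vector"; the fixed-length feature tuple and every other name below are illustrative assumptions, since Ted's claim is only that mental content has *some* encoding.)

    # Illustrative Python only: one possible realization of "encoded in some way."
    from dataclasses import dataclass

    @dataclass
    class ECVector:
        """Hypothetical encoded form of a percept/entity (term from the paper)."""
        features: tuple  # e.g., activations over some feature space

    def encode(measurements):
        # Stand-in for whatever process the mind actually performs; the point
        # is only that the result has some definite form.
        return ECVector(features=tuple(measurements))

    sprite_percept = encode([0.9, 0.1, 0.4])  # a percept, now encoded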
caro: Bryan's follow-up to your response to his first question: Okay, that's weird to me. Let me give some examples to suggest why. This Sprite is a Sprite (TM) only in relation to the Coca-Cola company, which is a bunch of people and equipment. But that doesn't imply that it is in them. It's also a drink only in relation to the mouth and stomach of its eventual consumer. But that doesn't imply that it's in my mouth or stomach. It seems to me that there's a very large difference between being in something, and existing only in relation to that thing.
ted: The Sprite is an entity only in relation to the minds that have distinguished the relevant portion of existence from the rest of existence. At the same time, the Sprite does exist in your mind. The process of entity-formation doesn't have any causal efficacy beyond the mind that forms the entity.
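(To make the "no causal efficacy beyond the mind" point concrete, here is a toy model, not anything from the paper: forming an entity changes only the agent's representation, while the world-state is untouched. All names are hypothetical.)

    # Toy model: entity-formation draws a boundary in the agent, not the world.
    world = {"field": [0.2, 0.9, 0.8, 0.1]}  # undifferentiated "existence"

    class Agent:
        def __init__(self):
            self.entities = []  # boundaries this particular mind has drawn

        def form_entity(self, start, end):
            # Distinguish a portion of existence from the rest of existence.
            self.entities.append((start, end))

    a, b = Agent(), Agent()
    a.form_entity(1, 3)  # agent a's "Sprite"
    assert world == {"field": [0.2, 0.9, 0.8, 0.1]}  # the world is unchanged
    assert a.entities != b.entities  # the boundary exists only in agent a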
caro: From Bryan: You say that organisms need a way to have a concept occur to them even when no member (or whatever) of that concept is handy, and that seems right. But you then suggest that organisms will need to engage in some process intentionally designed to bring this about. But could an organism have formed the intention to bring it about that the concept DINNER occur to it without the concept DINNER having already occurred to it? That is, don't you have to have some notion of the desired end-state in mind before you can form the intention to bring about that end-state? And where the desired end-state is that you be in some mental state, you have to have that mental state in mind as a goal before forming the intention to bring it about. But further, do we really use words, ourselves, to bring about mental states in ourselves? Do I say, 'Dinner!' to get myself to think about dinner? Further, you (rightly) want the theory to generalize to other organisms, AIs, etc. What about organisms which don't have language? How did they get themselves to have concepts occur to themselves in the absence of members (or whatever) of those concepts?
ted: Well, while I think that the organism needs to engage in some sort of activity which will bring this about, I don't think that the activity needs to be intentionally designed to do so. We build up all sorts of strange associations over time. For instance, smelling certain kinds of flowers immediately brings up memories of Carolyn's garden/porch. Now, I didn't intentionally design my smelling activity to bring about this experience, but I've built up an association between these things. So it could be with words. The catch is, perhaps unlike other kinds of associations, words are very easy for us to manipulate.
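(A sketch of the undesigned association-building Ted describes, under invented assumptions: simple co-occurrence counts stand in for whatever the real strengthening mechanism is, and all names are hypothetical.)

    # Sketch: associations accumulate as a side effect of experience, not design.
    from collections import defaultdict

    association_strength = defaultdict(int)  # (stimulus, memory) -> count

    def experience(stimulus, co_occurring_memory):
        # Each co-occurrence strengthens the link; no one designs this.
        association_strength[(stimulus, co_occurring_memory)] += 1

    def recall(stimulus):
        # Whatever is most strongly associated comes to mind unbidden.
        linked = {m: n for (s, m), n in association_strength.items() if s == stimulus}
        return max(linked, key=linked.get) if linked else None

    for _ in range(20):
        experience("flower-smell", "Carolyn's garden")
    print(recall("flower-smell"))  # -> "Carolyn's garden"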
caro: Phil would like to know what term you think we might use for that which is out there, independent of us. Do you have such a word that you favor?
ted: Existence.
ted: I suppose I should expand on that a bit more. I favor any singular term along the lines of 'existence,' but I don't think it really matters. The main point is that it is singular; we're the ones that divide it up.
caro: From Bryan: I accept that concepts can occur to us for reasons other than our intentionally bringing this about, and also other than a member of the concept coming in view. But you say that "the agent can conjure up some other entity that it... associates with the desired EC-vector". You make the agent do it on purpose. But how could an agent purpose to do this, without having in mind the state which it's its purpose to bring about?
ted: Perhaps I should change the "associates" there to "has associated"? Clearly, when we're acquiring language as children, we are associating words and concepts in some way. The particular way in which this happens is very dependent on the kind of agent in question. I don't know nearly enough about psychology to even speculate about how humans do it.
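(A sketch of how "has associated" defuses the realtime worry, on the assumption, which is ours rather than Ted's, that an old association works like a lookup table: the word is a handle formed in the past, so retrieving the concept now needs no present act of design. Names are hypothetical.)

    # Sketch: saying a word to oneself retrieves the EC-vector linked to it
    # during acquisition; no present-tense intention to associate is needed.
    word_to_ec_vector = {}

    def acquire(word, ec_vector):
        word_to_ec_vector[word] = ec_vector  # association formed once, long ago

    def conjure(word):
        # The agent need not already have the concept "in mind" --
        # the word is the handle, and retrieval does the rest.
        return word_to_ec_vector.get(word)

    acquire("dinner", (0.7, 0.2, 0.9))  # childhood
    print(conjure("dinner"))  # now, on demand: (0.7, 0.2, 0.9)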
caro: From Bryan: But you bring it about in realtime: you *now* call to mind something which you've in the past associated with the thing which you *now* want to bring to mind. How can you do it *now* without *now* knowing what it is that you want to bring to mind? And knowing what it is that you want to bring to mind seems to imply that you've brought it to mind.
ted: The process doesn't necessarily have to be entirely one-way, from word to concept, either. Perhaps there's some kind of process involving feedback. I know that I often have a vague, pre-linguistic notion of what I'm trying to think before I (linguistically) think it.
caro: And then when you linguistically think it, doing so feeds back and helps you develop the concept? Or maybe develop the thought? Or what?
ted: (Wait just a second; IE cleared my almost completed answer from the input box.)
caro: IE?! IE?!!
ted: It develops the thought, yes. Perhaps the "final," fully formed thought is some kind of equilibrium condition.
But I should note that this is entirely speculative: I think a tremendous amount of psychological information is necessary to really do justice to Bryan's question. I'm just saying that it happens in some manner, and I'm actively avoiding saying much about whatever manner it is.
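(A toy rendering of the feedback-to-equilibrium speculation. Ted explicitly leaves the mechanism open, so the numeric update rule below is invented; it only shows what "the final thought as an equilibrium condition" could mean: two states adjust each other until nothing moves.)

    # Toy fixed-point loop: the vague notion and its wording pull on each
    # other until they agree; the fixed point is the "final" thought.
    def refine(vague, worded, rate=0.5, tol=1e-6):
        while abs(vague - worded) > tol:
            worded += rate * (vague - worded)  # wording bends toward the notion
            vague += rate * (worded - vague)   # the notion sharpens toward the words
        return vague

    print(refine(vague=0.9, worded=0.1))  # converges to a shared value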
caro: I like that terminology. I was just thinking something similar during that brief exchange with Bryan on whether one has to understand concepts first, or propositions. Sure, it's hard to say what exactly is going on without psychological data. But we can look at the same phenomenon from all sorts of different directions and characterize it in different ways--unless, of course, we're realists. They never get to have any fun.
caro: From Tom: There's all kinds of weird feedback that goes on. One thing that I considered while reading Ted's paper is that the sound of a word can sometimes suggest its content to us, as native speakers of a language. This is why synthetic words like "slithy" make a kind of sense, because the sounds themselves are suggestive. I think this goes beyond folk etymology, and is something that would be difficult to capture computationally. These sorts of non-idealities are typical in natural systems--we can locate sounds in three dimensions even though we only have two ears, because of really complex interactions between our ears, our shoulders, and the sound field. This kind of thing means we should be cautious about simplified, formal models that hope to capture cognition.
ted: I completely agree with Tom's points.
caro: Ted, do you see a continuum in degrees of implicitness of concepts...sort of like focusing a lens till it goes from fuzzy to sharp?
caro: (that one was from Phil)
ted: I don't think I understand the question. Could you expand and/or clarify the question, Phil?
caro: While we wait for Phil to reformulate, here's Jamie: Ted, do your EC-vectors only tell us the denotation of a concept (i.e., what entities it points out) and not the connotation (i.e., what is implied by the concept)? I would include Tom's example of "slithy" as part of the concept's connotation. Another is that thinking about the concept of POODLE gives me the shakes. If connotation is missing from your account, is this a problem?
ted: I don't think connotations need to be kept as part of the vector. They could be similar to the word-associations discussed earlier. For instance, when you think about the concept of POODLE, for one reason or another, you've associated this experience with fear.
caro: Here's Phil's reformulation: Ted, in IOE there is a discussion of how a child gets a concept more and more clear in his mind...the last steps are affixing a word and a definition (I may have this out of order). As children (and as adults) do we 'kind of know' what we mean by, say, 'democracy'--a word our schoolteachers like to use--but can't yet define. We use it... then know a bit more. We gain more and more control of our concepts through greater and greater a) explicitness, b) definition, c) integration with other concepts. The 'net' becomes wider and the integration sharper and more focused.
ted: OK, that sounds reasonable. We certainly refine our concepts over time, for a variety of reasons.
caro: Jamie's rejoinder: So you're not modelling concepts--just the denotative component of concepts.
ted: Well, no. I'm saying that there's no need to include connotations "in" the concept itself: we get connotations for free in virtue of our ability to associate concepts with one another.
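(A sketch of the division of labor Ted proposes, with every name invented for illustration: the concept itself carries only denotation, and connotation falls out of the same general association store used for word-associations, hence "for free.")

    # Sketch: denotation lives "in" the concept; connotation is read off a
    # separate, general-purpose association store.
    concepts = {"POODLE": (0.3, 0.8, 0.5)}  # EC-vector: denotation only

    associations = {("POODLE", "fear"): 0.9,           # acquired connotations,
                    ("slithy", "slimy + lithe"): 0.6}  # kept outside the concept

    def denotation(name):
        return concepts[name]  # what the concept points out

    def connotations(name):
        # Whatever this agent happens to have associated with the concept.
        return [tag for (c, tag), w in associations.items() if c == name and w > 0]

    print(denotation("POODLE"))    # (0.3, 0.8, 0.5)
    print(connotations("POODLE"))  # ['fear'] -- the shakes, for Jamie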
tom: From Jamie: Interesting. Would you say that connotation would be a distinct function of the mind that depends on concepts, but is not a part of "the conceptual faculty"?
caro: test caro
caro: Sorry about that. Shall we wrap up here with Ted's answer, and retire to the play room?
ted: Not necessarily. I would say that it is a function of the mind that depends on concepts, but I don't know whether or not it's part of the conceptual faculty.
ted: Well, before we retire to "the play room," I want to thank everyone for participating. It's been a very interesting two hours.
caro: Looks like I'm not allowed to play. Let me go be somebody else.
caro: Might as well. I can't get in anyway!