bryan: okay, now?
moderator: moderator testing
tom: Yes, Bryan, that works. Caro, I'm back as an ordinary user now. You and Ted should be speakers.
bryan: Question. You cite Rand defining 'perception' as a mental process. You identify percepts with entities. But you then suggest (p. 3) that entities are in the mind. Earlier remarks suggest (and obviously I agree) that entities exist only in relation to a mind, but that's radically different from saying that entities are in the mind. This seems to be not only unmotivated by your earlier remarks, but extremely counterintuitive. Are you just saying, obliquely, that entities exist only in relation to minds? It seems that that view is consistent with the rest of your remarks.
tom: I think the paper is very interesting, particularly where it touches on linear optimization, which I'd like to ask more about. But I'm also interested in knowing why Ted didn't explicitly touch on the role of fuzzy logic as the logic of propositions as he's defined them. I've long thought that this was a natural area of contact between Objectivism and the rest of the world, despite Objectivists' unwillingness to entertain any non-Aristotelian logical notions. Does Ted agree that the fuzziness of his EC-vectors is of the same kind as the fuzziness of fuzzy sets, or is the similarity in his view just an accident of language?
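(Tom's contrast between Aristotelian and fuzzy membership can be made concrete with a minimal sketch. The predicate, threshold, and membership function below are illustrative assumptions only, not anything from Ted's paper or from fuzzy-set theory's canonical examples:)

```python
# Illustrative sketch: crisp (Aristotelian) membership vs. fuzzy membership.
# The "tall" predicate and its numbers are invented for illustration.

def crisp_tall(height_cm: float) -> bool:
    """Classical two-valued membership: one is tall or one is not."""
    return height_cm >= 180

def fuzzy_tall(height_cm: float) -> float:
    """Fuzzy membership: a degree of tallness in [0, 1],
    rising linearly between 160 cm and 200 cm."""
    return min(1.0, max(0.0, (height_cm - 160) / 40))

print(crisp_tall(179))   # False: crisp logic draws a sharp line
print(fuzzy_tall(179))   # 0.475: fuzzy logic grades the borderline case
```

The question to Ted, restated in these terms: does an EC-vector's "fuzziness" behave like the graded membership function above, or is the resemblance only verbal?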
bryan: Another question. You say (and you're sure right!) that anything that is, is in some way. But you then infer that anything that is, is encoded in some way. While it seems that maybe anything mental might have to be encoded, it doesn't seem that anything else needs to be. (Do I 'encode' a chair when I make one?) Maybe you're just using 'encode' in a way unfamiliar to me? (Likely, since I know next-to-no CS!)
phil: key question: if entities are in the mind, what word do we have (or do we) for what's out there independent of the mind?
phil: rephrase: if entities only exist _in relation to the mind_, what is it, what do we call it that exists prior to, independent of, not in relation to minds... that exists before humans evolved, for example?
jamie: Phil: 'existents'
moderator: moderator test
phil: existents is wider than entities--it includes actions, attributes, and relationships. entities are merely one type of existent
moderator: OK, I assume all's well again. Maybe I'll have to make myself yet another user.
moderator: Phil. Yes. And?
bryan: Follow-up. Okay, that's weird to me. Let me give some examples to suggest why. This Sprite is a Sprite (TM) only in relation to the Coca-Cola Company, which is a bunch of people and equipment. But that doesn't imply that it is in them. It's also a drink only in relation to the mouth and stomach of its eventual consumer. But that doesn't imply that it's in my mouth or stomach. It seems to me that there's a very large difference between being in something, and existing only in relation to that thing.
jamie: Phil: True, but Rand stresses that EXISTENT is a metaphysical notion and ENTITY is the related epistemological notion. (See, for example, "Journals of Ayn Rand", p. 701. [Which I don't have in front of me])
phil: If entities exist in the mind, then so presumably would actions, relationships, attributes... all existents. So none of these would be 'intrinsic' or indep of mind. So: my question still stands--what is it that exists out there, not in relation to mind? My point, of course, is that all the existent terms are the ones we use for mind-indep phenomena, and this whole discussion is abusing the terminology, in my opinion... especially if my question can't be answered.
bryan: That doesn't tell me what encoding is. But now you're saying that entities are mental content. Does that mean that they're the contents of mental states, or that they *are* mental states? Same question as before from a different point of view.
phil: Jamie, Rand's view was that entity was metaphysical not epistemological. I would rely on ITOE, not on her earlier ruminations in Journals, when she was still working out her ideas and debating things with herself.
bryan: (Carolyn, these aren't strung together, they're just things I noticed as reading. Plug in as appropriate.) You say that organisms need a way to have a concept occur to them even when no member (or whatever) of that concept is handy, and that seems right. But you then suggest that organisms will need to engage in some process intentionally designed to bring this about. But could an organism have formed the intention to bring it about that the concept DINNER occurred to it without the concept DINNER having already occurred to it? That is, don't you have to have some notion of the desired end-state in mind before you can form the intention to bring about that end-state, and where the desired end-state is that you be in some mental state, you have to have that mental state in mind as a goal before forming the intention to bring it about. But further, do we really use words, ourselves, to bring about mental states in ourselves? Do I say, 'Dinner!' to get myself to think about dinner? Further, you (rightly) want the theory to generalize to other organisms, AIs, etc. What about organisms which don't have language? How did they get themselves to have concepts occur to themselves in the absence of members (or whatever) of those concepts?
frank: READ LATER AT YOUR LEISURE: I have very little to say in this conference, since I was greatly humbled in discussing a question I raised, "How many entities are there in this room?" with Carolyn some months ago. I was beating a drum for Mario Bunge's _The Furniture of the World_ but came away agreeing with Carolyn that entities do not exist in the world but rather in the *objective* interface between the world and our brains. This put paid to a big hunk of my world view, and I'm unlikely to take a few years off to revise it! Meanwhile, my guardian angel placed a copy of a book, _Conceptual Spaces_, on the table in Reiter's Scientific Books (almost worth a trip to DC to visit), making me realize that the issues are even more difficult than I had realized. Read it (it takes no more than a good course in modern algebra) and good luck (better: hard work!). Here's the jacket blurb for Peter Gärdenfors, _Conceptual Spaces: The Geometry of Thought_ (MIT, 2K): "Within cognitive science, two approaches currently dominate the problem of modeling representations. The symbolic approach views cognition as computation involving symbolic manipulation. Connectionism, a special case of associationism, models associations using artificial neuron networks. Peter Gärdenfors offers his theory of conceptual representations as a bridge between the symbolic and connectionist approaches. (paragraph) Symbolic representation is particularly weak at modeling concept learning, which is paramount for understanding many cognitive phenomena. Concept learning is closely tied to the notion of similarity, which is also poorly served by the symbolic approach. Gärdenfors's theory of conceptual spaces presents a framework for representing information on the conceptual level. A conceptual space is built up from geometrical structures based on a number of quality dimensions.
The main applications of the theory are on the constructive side of cognitive science: as a constructive model the theory can be applied to the development of artificial systems capable of solving cognitive tasks. Gärdenfors also shows how conceptual spaces can serve as an explanatory framework for a number of empirical theories, in particular those concerning concept formation, induction, and semantics. His aim is to present a coherent research program that can be used as a basis for more detailed investigations.
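(A minimal sketch of what the blurb means by "geometrical structures based on a number of quality dimensions": points live in a space of quality dimensions, similarity is inverse distance, and concepts come out as regions around prototypes. The dimensions, prototype values, and nearest-prototype rule below are assumptions for illustration, not Gärdenfors's worked examples:)

```python
# Illustrative conceptual space with two quality dimensions:
# (hue angle in degrees, sweetness on a 0-1 scale). Values are invented.
import math

prototypes = {
    "lemon":  (60.0, 0.2),
    "orange": (30.0, 0.7),
    "cherry": (0.0,  0.8),
}

def distance(a, b):
    """Similarity as (inverse) Euclidean distance in the quality space."""
    return math.dist(a, b)

def categorize(point):
    """Assign a point to its nearest prototype. This rule carves the
    space into convex (Voronoi) regions -- the geometric picture of
    concepts the blurb alludes to."""
    return min(prototypes, key=lambda name: distance(point, prototypes[name]))

print(categorize((25.0, 0.6)))  # 'orange': nearest prototype wins
```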
jamie: Phil: That is a good question, but I think Ted is merely assuming the theories of Ray-Radcliffe, Rand, and (maybe) Register. I think your question should be directed to them.
bryan: Phil: When Rand says things like that, it seems that she is using 'metaphysical' to mean 'intrinsic' and 'epistemological' to mean 'non-intrinsic', or maybe even just to mean 'objective'. So I think that Ted (who misspelled his name at the top of his paper, incidentally 8^), Tom, Carolyn, and I all have different reasons why she's just wrong about that. She might have noticed this if she hadn't been using words for subject matters as though they were words for ontological statuses of traits of things.
jamie: Phil: I am using ITOE as my primary source. I just find that passage in the Journals as illuminating (and I think it was written after ITOE... but I could be wrong.) In ITOE, Rand's view is that entities are epistemological (see my comment, "Metaphysical and Intentional Entities") and existents are metaphysical.
bryan: Of course it lacks *causal* efficacy. But it might have a sort of ontological efficacy which supervenes on the causal relation. It's like this. By stepping up next to someone, I make it the case that they have a new property: being-such-that-Bryan-is-right-in-front-of-them. But I didn't do any *causal* work. It seems that entityhood should be like that, riding on but not influencing in any way any causal relations.
moderator: You did do causal work!
phil: Bryan et al, we are losing sight of my question...which was not about what Rand said, but about what term should we use for what is metaphysical, intrinsic, out there...whatever you want to call it. One needs to either have an answer to this or say...there is nothing (independent) out there.
jamie: Phil: Is this a question for Ted?
bryan: Well, yeah, I did the causal work of stepping up in front of someone. But I didn't do anything *extra* to bring it about that my victim had this new property. Likewise, we causally interact with noumenal stuff by way of our perceptual apparatus. That's what gives us a certain experience. But something else happened, which supervened on the causal interaction, which was that the stuff acquired the status of being an entity. Likewise, something's being beautiful is not directly a causal consequence of my listening to it, but I do non-causally bring it about that the thing is beautiful by a relation supervening on the causal one. So beauty isn't intrinsic to things, but it's also not in the eye of the beholder. It's in the things (out there) in virtue of their having been apprehended by the eye of the beholder.
phil: Yes...if he is agreeing that 'entities are in the mind'. (It's probably also a question for the rest of you who agree on this point to ponder offline after this discussion.)
agnes: Your claim "there doesn't appear to be any need to encode entity-vectors and concept-vectors differently" inscribes within the connectionist tradition. (vs. representational cognitive science). Can you imagine any kind of supervenience/emergence within mental processes and how would you explain phenomena like self-consciousness and its special status within mental activity?
bryan: Ted's example doesn't speak to the point. I accept that concepts can occur to us for reasons other than our intentionally bringing this about, and also other than a member of the concept coming in view. But you say that "the agent can conjure up some other entity that it... associates with the desired EC-vector". You make the agent do it on purpose. But how could an agent purpose to do this, without having in mind the state which it's its purpose to bring about?
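(Bryan's circularity worry can be restated against a toy version of the mechanism he quotes. The cue table and lookup functions below are invented for illustration; they are not Ted's actual encoding of EC-vectors:)

```python
# Toy version of "the agent can conjure up some other entity that it
# associates with the desired EC-vector". All names here are invented.

# Learned associations from cue entities to concepts.
associations = {
    "bell":  "DINNER",
    "alarm": "WAKING",
}

def conjure(cue):
    """Bring a concept to mind via an associated cue entity, even when
    no member of the concept is present."""
    return associations.get(cue)

def choose_cue(desired_concept):
    """The step Bryan questions: selecting a cue *in order to* reach a
    concept requires searching by that concept -- i.e., already having
    the desired concept in mind before the conjuring begins."""
    for cue, concept in associations.items():
        if concept == desired_concept:
            return cue
    return None

print(choose_cue("DINNER"))  # 'bell'
print(conjure("bell"))       # 'DINNER'
```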
bryan: Phil: as an aside, I've got such a phrase that'll do the trick, and it's: all of the tropes other than similarity and distinctness tropes.
bryan: If an entity is the same as the existence of that entity (which is something Rand says), and entities are in our minds, then so is existence.
moderator: Agnes, can you choose another word for 'inscribes'? That one doesn't make sense to me there.
phil: But I was asking what word applies to the _things_ which are out there--existence applies to _everything that exists out there_ doesn't it?
tom: An entity can be seen as a particular categorization of mind-independent reality. The categorization is in our mind, the reality it is categorizing is not.
phil: Bryan, I'd have to get a clearer understanding of your terminology to see if it does what I'm asking...so I'm not sure of it.
moderator: Bryan, take it a little easier on Ted while he's speaker. He's not Saul Kripke.
jamie: Howdy Ted! Is your paper presenting a theory of concepts, an analogy to better understand Rand's or Ray-Radcliff's theory, or a way to apply the Objectivist theory to AI?
phil: Tom, so do you want to make a distinction between entity-as-it-is-out-there and entity-as-grasped-by-our-consciousness? Makes sense.
bryan: Tom: as often, I can't tell whether our disagreement is semantic or not. It seems to me that an entity is an entity in virtue of a particular categorization of mind-independent reality, and that the categorization is in our mind. But it strikes me as an unnecessarily odd use of language to say that the entity is in our mind. I think that there's a general semantic problem here which I also discovered while reading a lot of Marxist theory last semester, and it is this. Often, when a writer wants to emphasize the contextuality or relationality of something, she starts writing as though some dependent factor were identical to something on which it's dependent, or is a property of the thing on which it's dependent, or is *literally* internal to the thing on which it's dependent. It seems to me that this makes things unnecessarily hard. Entities are out there. Why? We put them there.
(Text continues in next file)