Conscious-Robots.com Forum  


Raúl (Moderator)
Re:Philosophical Zombies (p-zombie) - 2007/06/15 19:16 Hi again. About the Hofstadter at Stanford video, I don’t really have any more information; I just found the speech itself. Anyhow, if I find out anything else I’ll let you know.
Here are some links to investigate:


- The Singularity Summit at Stanford: http://sss.stanford.edu/
- What is the singularity? http://sss.stanford.edu/overview/whatisthesingularity/
- SSS Videos: http://www.singinst.org/media/
- The Singularity Institute: http://www.singinst.org/

I couldn’t find the article written by Hofstadter: “Who will be we in 2093?” or “Who will be we in 2493?”…

Well, you have convinced me to read “I Am a Strange Loop”, so I will. Hopefully soon…

I agree with you about the inaccuracy of current AI models. I think any biological system is far more complex than any artificial model we implement (as Hofstadter points out in his talk). However, we need to build “silly” tiny models in order to learn how to build the “big one”, the one that could lead us to the singularity. If we are on the wrong path, at least it is fun to discover it on our own.

I am trying to follow your argument about Hofstadter’s ideas. I also believe that a simulation of a mind on a computer could be a mind: that would be the idea of a virtual machine able to simulate a mind. Of course, the exact requirements of such a simulation process are not clear to me, and the same applies to the virtual machine able to run it. These arguments are always blurry, as we don’t really have any significant empirical results on the matter. In this scenario, where a mind lives thanks to a virtual machine running on whatever artificial hardware, I agree that the mind does not fully reside in the brain (the hardware), but I don’t see how it could reside outside the realm of the virtual machine. What I believe is that, thanks to its relation with the environment, an “external” causal link is established. Therefore, depending on how you define “to reside”, the mind is somehow residing (I would say having a relation) outside its brain.

Saying that a simulation of a mind in a mind would be a mind sounds completely different to me. I don’t think this case is equivalent to the simulation of a mind on a computer. If we again focus on the necessary virtual machine, the only way I can conceive of a simulation of a mind in a mind is that the original mind is able to run a virtual machine.

From a different standpoint, I think you are not referring to the same process in both cases. Let’s assign names to these hypothetical assumptions:

- MindOverComputer is a mind simulated on a computer.
- MindOverMind is a mind simulated in a mind.

The problem is that when you talk about the representation in your mind of, for instance, a friend of yours, it is not a MindOverMind. It is just a perception, or set of perceptions, of your mind. You are not simulating his or her mind on any virtual machine. Or maybe you mean trying to put yourself in someone else’s place? Anyhow, that doesn’t seem to be the same process as the one taking place in a MindOverComputer.
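The layered picture sketched above (a mind as a process hosted by a virtual machine, which may itself be hosted by another machine) can be illustrated with a toy sketch. All names here (`VirtualMachine`, `toy_mind`, the state dictionary) are my own illustrative inventions, not any real cognitive-modelling API, and the "mind" is deliberately trivial:

```python
class VirtualMachine:
    """A minimal host: repeatedly applies a transition function to a state."""
    def __init__(self, transition, state):
        self.transition = transition
        self.state = state

    def step(self):
        self.state = self.transition(self.state)
        return self.state


def toy_mind(state):
    """A trivially simple 'mind': it just counts its own update steps."""
    return {"ticks": state["ticks"] + 1}


# The simulated mind, hosted on one virtual machine...
inner = VirtualMachine(toy_mind, {"ticks": 0})

def host_step(vm):
    # The outer machine's whole job is to advance the inner machine.
    vm.step()
    return vm

# ...which is itself hosted on another virtual machine.
outer = VirtualMachine(host_step, inner)
outer.step()
outer.step()
print(inner.state["ticks"])  # 2
```

The point of the sketch is only that the inner process advances identically regardless of how many hosting layers sit beneath it, which is what makes "where does the mind reside?" a slippery question.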

When you try to guess your friend’s responses, you are just using your internal model of your friend, which is different from a simulation. Basically, the experience is only yours; your friend cannot feel anything from your internal process unless you communicate it somehow. Analogously, the man on the bench that you see from the corner of your eye is a stimulus that your perception machinery recognizes as a human subject; therefore it is assigned a model (your internal model for humans, including consciousness and so on). This stereotype or approximation is an internal model that you can develop as you incorporate more data (perceptions) into it. That is the personification process you mentioned.
I still perceive a qualitative gap between personification and strange loops. Well, it could just be true that thinking of red dragons renders them real: real in the sense that they are part of our common culture (i.e. they become more widely copied memes as they grow more popular, that is, as their strange loop is more present in minds). Do you think there is a relation between memes (cultural self-replicating units) and strange loops? Does Hofstadter mention anything about memetics in his book?

To address the last questions you asked, I think we need to tackle the real meaning of reality. From my point of view, in principle, consciousness is a hallucination. Therefore, it is not real. Consequently, anything produced consciously is not real (from an absolute point of view). The good news is that consciousness is an adaptive hallucination; otherwise, we wouldn’t be alive… So we don’t have access to reality, but we have a good tool for approximating it: consciousness. Does it make sense? Did I generate more minds?


Raúl Arrabales Moreno. conscious-robots.com/raul
  The administrator has disabled public write access. Please, register to participate in the forum.
Plato Demosthenes (Platinum Boarder)
Re:Philosophical Zombies (p-zombie) - 2007/06/25 16:47 If we are on the wrong path, at least it is fun to discover it on our own.
I couldn't agree more!

When you try to guess your friend’s responses, you are just using your internal model of your friend, which is different from a simulation. Basically, the experience is only yours; your friend cannot feel anything from your internal process unless you communicate it somehow.
Doesn't the computer simulation of the mind also belong only to the computer?

In response to your account of personification, you say that it stems from sense-data. Are one's prior thoughts included in that sense-data, say, for the author who makes up a character from their own mind and assigns it a mind?

I'm not sure what the relation between memes and strange loops is. Hofstadter only mentions them twice, according to the index, and only in passing. He seems to accept them as real, but does not think that they are the only structures operating in the mind. However, real in the sense of memes is not at all what I mean by the made-up mind being "real". What I mean is that the imagined minds could successfully use the Cogito Ergo Sum argument to prove their existence. If existence follows from sentience (as Descartes claimed), and if it is true that internal representations of minds are sentient in their own right (as Hofstadter claimed), then those internal representations exist in their own right. Though they may also exist as cultural memes, thoughts, etc., they also exist in the sense that they can apply Descartes' principle to themselves.


Does it make sense?
No, to be rather honest, your theory makes no sense whatsoever. ...which is why I am highly interested in finding out more. What do you mean by saying that consciousness is a hallucination? What is it a hallucination of? What is hallucinating it? Is it hallucinating itself? What sort of reality does it have, then? Could you, perhaps, start a new topic about your own theories and speculations on consciousness? I'm very curious to understand it better.
Did I generate more minds?
Still not positive about this answer, but I am certain that you have stretched mine!
Raúl (Moderator)
Re:Philosophical Zombies (p-zombie) - 2007/06/26 13:13 Yes, I think one's prior thoughts are included in that sense-data (even some unconscious perception is included, I would say). I am not sure I understood your claim correctly; it seems to me that you are saying that a mind imagined by a human mind would have exactly the same properties (and associated implications) as a proper human mind.

What I say is that the underlying mechanisms that generate the MindOverComputer and the MindOverMind are not the same (a proper human mind would be a special case of MindOverComputer where the computer is a human brain). I am considering the MindOverMind as just another “regular” thought that the MindOverComputer could have. And I think you claim that the MindOverMind is actually a complete mind that can have its own qualia, did I get it right?


No, to be rather honest, your theory makes no sense whatsoever. ...which is why I am highly interested in finding out more. What do you mean by saying that consciousness is a hallucination? What is it a hallucination of? What is hallucinating it? Is it hallucinating itself? What sort of reality does it have, then? Could you, perhaps, start a new topic about your own theories and speculations on consciousness? I'm very curious to understand it better.

I read about this hallucination idea some time ago. I think it is interesting, as it explains consciousness from an evolutionary point of view. When I say hallucination I refer to conscious experience: awake conscious experience is a hallucination in the same way dreams are. The key difference is that the awake hallucination takes as input information from the real world, while the dreaming process gets input which is not necessarily taking place in reality. Anyhow, as I’d like to understand it better, I’ll open a new thread on the matter as you suggest, and try to find the key references from which I took this idea. Maybe you are able to change my mind!

Plato Demosthenes (Platinum Boarder)
Re:Philosophical Zombies (p-zombie) - 2007/06/26 19:10 it seems to me that you are saying that a mind imagined by a human mind would have exactly the same properties (and associated implications) as a proper human mind.
Well, yes, essentially. But what do you mean by a "proper" human mind? And what counts as imagined? Hofstadter himself seemed very general about those notions, and it seems that his arguments (of which what I have given is only a very boiled-down form) would apply to a very broad scope. What sort of limitations do you suppose the theory would have, if the general stance and methods are taken to be true?
And I think you claim that the MindOverMind is actually a complete mind that can have its own qualia, did I get it right?
Yes, though it depends on the accuracy of the MindOverMind representation. Just as the MindOverComputer would be less of a mind, in a way, if it were only a rough sketch of that particular mind, the MindOverMind would vary in accuracy depending on how much of the representation is stereotype and assumption compared to how much is their actual mind. Of course, with a zombie, stereotype and assumption would be just as valid as the "actual mind" would be in other cases, since the approximation would be their "actual mind". Either way, though, it would seem to me that the MindOverMind would have its own qualia.
Maybe you are able to change my mind!
Jokes and sarcasm must be very easy to make in the cognitive science field, mustn't they? Another perk of the field.
Raúl (Moderator)
Re:Philosophical Zombies (p-zombie) - 2007/06/29 12:03 I've opened a new thread on the matter of hallucinations. Thanks for the suggestion. Link:

http://www.conscious-robots.com/en/forums-./consciousness-theories/consciousness-as-hallucination/view.html
Plato Demosthenes (Platinum Boarder)
Re:Philosophical Zombies (p-zombie) - 2007/07/10 00:06 Would a definition of consciousness as "that which has the potential to use the logic in Cogito Ergo Sum" be a start to pinning down the concept? It would involve having some sense of qualia along with some rational processing. I don't mean just saying it, but being able to actually apply it to oneself. The logic of C.E.S. is that either you exist, or you don't. Well, if you don't, then there has to be some reason why you think that you do, i.e., there must be a "demon" (which may be your own unconscious) deceiving you. If there is a demon deceiving you, then you must exist to be deceived. Thus, through proof by contradiction, you must exist. Alternatively, you are doubting your existence by having to reason through this. Yet there must be a doubter: you. So, you exist.
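The proof by contradiction can be put in schematic form (a rough propositional sketch of the argument above; the letters E and D are my own labels, not anything from Descartes):

```latex
% E: "I exist";  D: "I am deceived (or I doubt)"
% P1: being deceived or doubting requires a subject:           D \to E
% P2: if I did not exist, something would have to be deceiving
%     me into thinking that I do:                              \neg E \to D
\begin{align*}
  \neg E \;\Rightarrow\; D \;\Rightarrow\; E,
  \quad \text{contradicting } \neg E, \qquad \therefore\; E.
\end{align*}
```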
Cogito Ergo Sum, though, just begs the question: what does it mean to think?
Hmm... so even though it would not be a good definition of consciousness, mightn't it be a good test for other definitions of it?
Raúl (Moderator)
Re:Philosophical Zombies (p-zombie) - 2007/07/10 18:35 Yeah, could it be expressed in terms of being an observer, i.e. a conscious observer?
Because you are an observer you can be deceived, and you can observe (some of) your own thoughts.
The question is: what does it really mean to be an observer?
Plato Demosthenes (Platinum Boarder)
Re:Philosophical Zombies (p-zombie) - 2007/07/23 04:56 I don't think that an observer and a conscious mind are exactly the same thing, though. All conscious minds are observers, but the reverse is not true. Consider an infrared detector that opens a door when it senses the right blocking of its beam, thus observing it; in the normal (non-pantheistic) sense, the detector would not be classified as sentient. However, "conscious observer" may pin down the concept more. The consciousness cannot simply be there, an inert concept, but must actually respond and change dynamically. Maybe this is too restrictive a definition, but it would be more practical.
Raúl (Moderator)
Re:Philosophical Zombies (p-zombie) - 2007/07/24 16:54 I agree with the point you’ve made on observers. An infrared detector is not observing the same way a human does, and yet, as far as I know (correct me if I am wrong), they both will have the same effect on a quantum system – this is actually more related to quantum theories of consciousness.

Now that you mention pantheism, the idea of consciousness being a kind of property of matter comes to mind. I am reluctant to accept that hypothesis, as I don’t see how the degree of consciousness could vary from one kind of matter to another. But I know there are some authors who advocate the idea of consciousness being a property present in a stone or an infrared detector. They argue that the infrared detector is only “aware” of two states: beam blocked and beam unblocked. According to this hypothesis, as a system grows in the number of states that it “is aware of”, it possesses a higher consciousness… This account would imply that philosophical zombies are not possible. What do you think about it? Do you know of any relevant researcher supporting this account?
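The "number of states" grading can be made concrete with a toy measure: count how many bits a system needs to distinguish its states. This is entirely my own illustrative sketch, not a published metric, and the example state counts are made up:

```python
import math

def awareness_bits(num_states: int) -> float:
    """Toy grading: bits needed to distinguish a system's states."""
    return math.log2(num_states)

# Hypothetical systems and their (assumed) distinguishable state counts.
infrared_detector = 2   # beam blocked / beam unblocked
thermostat = 16         # say, 16 distinguishable temperature bands

print(awareness_bits(infrared_detector))  # 1.0
print(awareness_bits(thermostat))         # 4.0
```

On this toy scale the detector sits at one bit and richer systems sit higher; whether any such state-counting actually captures consciousness is, of course, exactly the contested point.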
