• AnarchoEngineer@lemmy.dbzer0.com · 1 day ago

    Anything dealing with perception is going to be somewhat circular and vague. Qualia are the elements of perception and by their nature it seems they are incommunicable by any means.

    Awareness in my mind deals with the lowest level of abstract thinking. Can you recognize this thing and both compare and contrast it with other things, learning about its relation to other things on a basic level?

    You could hardcode a computer to recognize its own process. But it’s not comparing itself to other processes, experiencing similarities and dissimilarities. Furthermore unless it has some way to change at least the other processes that are not itself, it can’t really learn its own features/abilities.

    A cat can tell its paws are its own, likely in part because it can move them. If you gave a cat shoes, would the cat think the shoes are part of itself? No. And yet the cat can learn that, in certain ways, it can act as though the shoes are part of itself, the same way we recognize that tools are not us but are within our control.

    We notice that there is a self that is unlike our environment in that it does not control the environment directly, and then there are the actions of the self that can influence or be influenced directly by the environment. And that there are things which we do not control at all directly.

    That is the delineation I’m talking about. It’s more the delineation of control than just “this is me and that isn’t” because the term “self” is arbitrary.

    We as social beings correlate self with identity, with the way we think we act compared to others, but to be conscious of one’s own existence only requires that you can sense your own actions and learn to delineate between this thing that appears within your control and those things that are not. Your definition of self depends on where you’ve learned to think the lines are.

    If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built in sense; it doesn’t create conscious self awareness.
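
    (For concreteness, here is roughly what that “built-in sense” amounts to in a real system, a single hardcoded read of the process id, with no comparison or learning attached to it; this tiny sketch is only illustrative.)

        import os

        # The process "sensing" its own identity: one hardcoded value,
        # never compared with anything and never learned from.
        my_pid = os.getpid()
        print(f"I am process {my_pid}")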

    Furthermore, on the note of aliens, I think a better question to ask is “what do you think ‘self’ is?” Because that will determine your answer. If you think a system must be consciously aware of all the processes that make it up, I doubt you’ll ever find a life form like that. The reason those systems are subconscious is because that’s the most efficient way to be. Furthermore, those processes are mostly useful only to the self internally, and not so much the rest of reality.

    To be aware of self is to be aware of how the self relates to that which is not part of it. Knowing more about your own processes could help with this if you experienced those same processes outside of the self (like noticing how other members of your society behave similarly to you), but fundamentally, you’re not necessarily creating a more accurate idea of self awareness just by having more senses of your automatic bodily processes.

    It is equally important, if not more so, to experience more that is not the self rather than to experience more of what would be described as self, because it’s what’s outside that you use to measure and understand what’s inside.

    • m_‮f@discuss.online · 21 hours ago

      I made another comment pointing this out for a similar definition, but OK so awareness is being able to “recognize”, and recognize in turn means “To realize or discover the nature of something” (using Wiktionary, but pick your favorite dictionary), and “realize” means “To become aware of or understand”, completing the loop. I point that out, because IMO the circularity means the whole thing is useless from an empirical perspective and should be discarded. I also think qualia is just philosophical navel-gazing for what it’s worth, much like common definitions of “awareness”. I think it’s perfectly possible in theory to read someone’s brain to see how something is represented and then twiddle someone else’s brain in the same way to cause the same experience, or compare the two to see if they’re equivalent.

      As far as a computer process recognizing itself, it certainly can compare itself to other processes. It can e.g. iterate through the list of processes and kill everything that isn’t itself. It can look at processes and say “this other process consumes more memory than I do”. It’s super primitive and hardcoded, but why doesn’t that count? I also think learning is separate but related. If we take the definition of “consciousness” as a world model or representation, learning is simply how you expand that world model based on input. Something can have a world model without any ability to learn, such as a chess engine. It models chess very well and better than humans, but is incapable of learning anything else, i.e. expanding its world model beyond chess.
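
      A minimal sketch of that sort of hardcoded comparison (using the third-party psutil library; the memory comparison is just an arbitrary example, not a claim about what such a program “experiences”):

          import os
          import psutil  # third-party library, assumed installed for this sketch

          me = psutil.Process(os.getpid())
          my_mem = me.memory_info().rss

          for proc in psutil.process_iter(["pid", "name", "memory_info"]):
              if proc.info["pid"] == me.pid:
                  continue  # skip "myself"
              mem = proc.info["memory_info"]
              if mem is not None and mem.rss > my_mem:
                  print(f"{proc.info['name']} (pid {proc.info['pid']}) uses more memory than I do")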

      > If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built in sense; it doesn’t create conscious self awareness.

      I think we largely agree then, other than my quibble about learning not being necessary. A lot of people want to reject the idea of machines being conscious, but I’ve reached the “Sure, why not?” stage. To be a useful definition though, we need to go beyond that and start asking questions like “Conscious of what?”

      • AnarchoEngineer@lemmy.dbzer0.com · 19 hours ago

        I think you’re getting hung up on the words rather than the content. While our definitions of terms may be rather vague, the properties I described are not cyclically defined.

        To be aware of the difference between self and non-self means to be able to sense stimuli originating from the self, sense stimuli not from the self, and learn relationships between them.

        As long as aspects of the self (like current and past thoughts) can be sensed, that is, encoded into a representation the mind can work with directly (in our case, neural spike trains); as long as there are senses that compare those encodings with other current or past senses; and as long as the mind can learn patterns in those encodings (as spiking neural nets can), then conscious awareness should be able to arise. (If you’re curious about the kind of learning that needs to happen, look into the Tolman-Eichenbaum machine, though non-spiking ones aren’t really capable of self learning.)
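
        To make that less abstract, here’s a toy, non-spiking sketch of the loop I mean (all names and numbers are made up for illustration): the agent senses its own actions, senses what the environment does next, and learns which events reliably follow its own actions, i.e. which parts of the world are “within its control”.

            import random
            from collections import Counter

            # Toy sketch only (hypothetical names): an agent that senses its own
            # actions (self stimuli) and environmental events (non-self stimuli),
            # and learns which events tend to follow its own actions.

            class ToyEnvironment:
                def event_after(self, action):
                    # "press" reliably produces a light; otherwise events are random noise
                    if action == "press" and random.random() < 0.9:
                        return "light_on"
                    return random.choice(["light_on", "noise", "nothing"])

            class SelfDelineatingAgent:
                def __init__(self, actions):
                    self.actions = actions
                    self.after_action = {a: Counter() for a in actions}  # event counts per action

                def step(self, env):
                    action = random.choice(self.actions)   # sense of my own action
                    event = env.event_after(action)        # sense of the environment
                    self.after_action[action][event] += 1  # learn the relationship between them

                def controls(self, action, event):
                    # crude attribution: the event follows this action much more often
                    # than it follows the agent's other actions
                    def p(a):
                        total = sum(self.after_action[a].values())
                        return self.after_action[a][event] / total if total else 0.0
                    others = [p(a) for a in self.actions if a != action]
                    return bool(others) and p(action) > 2 * (sum(others) / len(others))

            agent = SelfDelineatingAgent(["press", "wait"])
            env = ToyEnvironment()
            for _ in range(1000):
                agent.step(env)
            print(agent.controls("press", "light_on"))  # likely True: the light gets attributed to "self"

        A spiking version would replace the counters with synaptic plasticity, but the delineation-by-control idea is the same.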

        I hope that’s a clear enough “empirical” explanation for you.

        As for qualia, you are entirely wrong. What you describe would not prove that my raw experience of green is the same as your green, only that we both have qualia that can arise from the color green. You can say it’s not pragmatic to think about that which cannot be known, and I’ll agree that qualia must be represented in a physical way and thus be recreatable in that person’s brain, but the complexity of human brains actually precludes the ability to define what is the qualia and what are other thoughts. The differences between individuals likely preclude the ability to say “oh, when these neurons are active it means this,” because other people have different neural structures. Similar? Absolutely. Similar enough that for any experience you could find exactly the same neurons firing the same way as in someone else? Absolutely not.

        Your last statements make it seem like you don’t understand the difference between learning and knowledge. LLMs don’t learn when you use them. Neither do most modern chess models. They don’t learn at all unless they are being trained by an outside process that gives them an input, expects an output, and then computes the weight changes needed to get closer to the answer via gradient descent.

        A typical ANN trained this way does not learn from new experiences; furthermore, it is not capable of referencing its own thoughts, because it doesn’t have any.
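
        As a minimal illustration of that point (plain numpy, a linear model; nothing to do with any particular LLM or chess engine): the weights only move inside the external training loop, and merely using the model afterwards changes nothing.

            import numpy as np

            rng = np.random.default_rng(0)
            X = rng.normal(size=(100, 3))
            true_w = np.array([1.5, -2.0, 0.5])
            y = X @ true_w + 0.1 * rng.normal(size=100)

            w = np.zeros(3)
            for _ in range(200):                        # training: an outside loop supplies inputs,
                grad = 2 * X.T @ (X @ w - y) / len(y)   # expected outputs, and the gradient-descent
                w -= 0.1 * grad                         # weight updates

            before = w.copy()
            _ = X @ w                                   # inference: "using" the model...
            assert np.allclose(w, before)               # ...leaves the weights untouched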

        The self is that which acts. Did you know LLMs aren’t capable of being aware they took any action? Chess engines can’t do that either. There is no comparison mechanism between what was, what is, and what made that change. They cannot be self aware, in the same way a program hardcoded to kill processes other than itself is unaware: they literally lack any direct sense of their own actions. Once again, the program not only needs to be able to sense that information, it also needs a sense which compares that sensation to other sensations and learns the differences, changing the way it responds to those stimuli. You need learning.

        I don’t reject the idea of machines being conscious; in fact, I’m literally trying to make a conscious machine just to see if I can (which, yeah, to most people sounds insane). But I do not think we agree on much else, because learning is absolutely essential for anything to be capable of a conscious action.

          • m_‮f@discuss.online · 11 hours ago

          I think pointing out the circular definition is important, because even in this comment, you’ve said “To be aware of the difference between self and non-self means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, …”. Sure, but that doesn’t provide a useful framework IMO.

          For qualia, I’m not concerned about the complexity of the human brain, or different neural structures. It might be hard with our current knowledge and technology, but that’s just a skill issue. I think it’s likely that at some point, humankind will be able to compare two brains with different neural structures, or even wildly different substrates like human brain vs animal, alien, AI, whatever. We’ll have a coherent way of comparing representations across those and deciding if they’re equivalent, and that’s good enough for me.

          I think we agree on LLMs and chess engines, they don’t learn as you use them. I’ve worked with both under the hood, and my point is exactly that: they’re a good demonstration that awareness (i.e. to me, having a world model) and learning are related but different.

          Anyways, I’m interested in hearing more about your project if it’s publicly available somewhere.

            • AnarchoEngineer@lemmy.dbzer0.com · 9 hours ago

            If you don’t think my framework is useful, could you provide a more useful alternative or explain exactly where it fails? If you can, it’d be a great help.

            As for “skill issue”: while I think generalized comparisons of brains are possible (in fact, we have some now), I think you might be underestimating the nature of chaotic systems, or assuming that consciousness, wherever it exists, will arise with equivalent qualia.

            There is nothing saying that our brains process qualia in exactly the same way; quite the opposite. And yet we can reach the same capabilities of thought even with large-scale neurodivergences: the blind can still experience the world without sight, and those with synesthesia can experience and understand reality even if their brains process multiple stimuli as the same qualia. It is very possible that there are multiple different paths to consciousness, each with unique neurological behaviors that only make sense within the original mind and may have no analog in another.

            The more I look into the functions of the brain (by the way, I am by no means an expert and this is not my field), the more I realize many of our current models are limited by our desire to classify things discretely. The brain is an absolute mess. That is what makes it so hard to understand, but also what makes it so powerful.

            It may not be possible to isolate qualia at all. It may not be possible to isolate certain thoughts or memories from the other circumstances in which they are recalled. There might not be elemental spike trains for a certain sense that are disjoint from other senses. And if that is the case, different individuals may well have different couplings of qualia, making them impossible to compare directly.

            The idea that processing areas of the brain may be entangled in different ways across individuals (which, by the way, we do see in the brain; place-neuron remapping is a simple example) means that even among members of the same species it likely won’t be possible to directly compare raw experiences, because the hardware required to process a specific experience in one individual might not exist in another individual’s mind.

            Discrete ideas like communicable knowledge/relationships should (IMO) be possible to isolate well enough that you could theoretically implant them into any being capable of abstract thought, but raw experiences (i.e. qualia) most likely will not have this property.


            Also, the project isn’t available online, and it’s a mess: it’s not my field, and I have an irrational desire to build everything from scratch because I want to understand exactly how it’s implemented. Hey, it’s a personal hobby project, don’t judge lol

            So far I’ve mostly only replicated the research of others. I have tried some experiments with my own ideas, but spiking neural nets are difficult to simulate on normal hardware, and I need a significant number of neurons, so currently I’m working on designing a more efficient implementation than the ones I’ve previously written.
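
            For a sense of why it’s expensive: even the simplest spiking unit, a leaky integrate-and-fire neuron, has to have its membrane potential stepped at a fine time resolution whether or not it spikes. A rough sketch (parameters are arbitrary, not from my project):

                import numpy as np

                dt, tau = 0.1, 10.0                     # ms per step, membrane time constant (ms)
                v_rest, v_reset, v_thresh = -65.0, -70.0, -50.0

                n_neurons = 1000
                v = np.full(n_neurons, v_rest)
                spikes = []

                for step in range(10_000):              # 1 s of simulated time at 0.1 ms resolution
                    i_in = np.random.normal(1.8, 0.5, n_neurons)     # noisy input drive
                    v += dt * (-(v - v_rest) + i_in * tau) / tau     # leaky integration
                    fired = v >= v_thresh
                    spikes.append(np.flatnonzero(fired))
                    v[fired] = v_reset                               # reset after each spike

            Scaling that to millions of neurons, plus synapses and plasticity, is where ordinary hardware starts to hurt.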

            After that, my plan is to experiment with my own designs for a spiking artificial hippocampus implementation. If my ideas are sound, I should be able to use similar systems to implement both short- and long-term memory storage.

            If that succeeds, I’ll move on to the main event of focus and attention, which I also have some ideas for, but that really requires the other systems to be functional.

            I probably won’t get that far, but hey, it’s at least interesting to think about, and it’s honestly fun to watch a neural net learn patterns in real time, even if it’s kinda slow.
