I think pointing out the circular definition is important, because even in this comment, you’ve said “To be aware of the difference between self means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, …”. Sure, but that doesn’t provide a useful framework IMO.
For qualia, I’m not concerned about the complexity of the human brain, or different neural structures. It might be hard with our current knowledge and technology, but that’s just a skill issue. I think it’s likely that at some point, humankind will be able to compare two brains with different neural structures, or even wildly different substrates like human brain vs animal, alien, AI, whatever. We’ll have a coherent way of comparing representations across those and deciding if they’re equivalent, and that’s good enough for me.
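To make “compare representations” a bit more concrete: one crude tool that already exists is representational similarity analysis, which sidesteps mismatched substrates by comparing how each system arranges the same set of stimuli rather than matching units one-to-one. Here’s a minimal sketch with random arrays standing in for recorded responses; the names and sizes are just illustrative, not a claim about how the real comparison would be done.

```python
# Minimal sketch of representational similarity analysis (RSA).
# The random arrays below stand in for recorded responses; nothing here
# is measured from a real brain or model.
import numpy as np

def dissimilarity_matrix(responses):
    """responses: (n_stimuli, n_features) activity per stimulus.
    Returns an (n_stimuli, n_stimuli) matrix of correlation distances."""
    return 1.0 - np.corrcoef(responses)

def representation_similarity(responses_a, responses_b):
    """Correlate the upper triangles of the two dissimilarity matrices.
    The feature counts can differ; only the stimulus set must match."""
    rdm_a = dissimilarity_matrix(responses_a)
    rdm_b = dissimilarity_matrix(responses_b)
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)
system_a = rng.normal(size=(20, 300))    # e.g. neurons in one substrate
system_b = rng.normal(size=(20, 5000))   # e.g. units in a very different one
print(representation_similarity(system_a, system_b))
```

The point isn’t that this settles anything about qualia, just that “compare representations across wildly different substrates” already has concrete, if crude, instantiations to build on.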
I think we agree on LLMs and chess engines: they don’t learn as you use them. I’ve worked with both under the hood, and that’s exactly my point: they’re a good demonstration that awareness (which, to me, means having a world model) and learning are related but different.
Anyways, I’m interested in hearing more about your project if it’s publicly available somewhere.
If you don’t think my framework is useful, could you provide a more useful alternative, or explain exactly where it fails? If you can, it’d be a great help.
As for “skill issue”: while I think generalized comparisons of brains are possible (in fact, we have some now), I think you might be underestimating the nature of chaotic systems, or you may be assuming that consciousness, wherever it exists, will arise with equivalent qualia.
There is nothing saying our brains process qualia in exactly the same way; quite the opposite. And yet we can reach the same capabilities of thought even with large-scale neurodivergences. The blind can still experience the world without their sense of sight, and those with synesthesia can experience and understand reality even though their brains process multiple stimuli as the same qualia. It is very possible that there are multiple paths to consciousness, each with unique neurological behaviors that only make sense within their original mind and may have no analog in another.
The more I look into the functions of the brain (btw, I am by no means an expert and this is not my field), the more I realize many of our current models are limited by our desire to classify things discretely. The brain is an absolute mess. That is what makes it so hard to understand, but also what makes it so powerful.
It may not be possible to isolate qualia at all. It may not be possible to isolate a particular thought or memory from the circumstances in which it is recalled. There might not be elemental spike trains for a given sense that are disjoint from other senses. And if that is the case, different individuals may well have different couplings of qualia, making them impossible to compare directly.
If processing areas of the brain can be entangled in different ways across individuals (and we do see this in the brain; place-cell remapping is a simple example), then even among members of the same species it likely won’t be possible to directly compare raw experiences, because the hardware required to process a specific experience in one individual might simply not exist in another individual’s mind.
Discrete ideas like communicable knowledge and relationships should (imo) be possible to isolate well enough that you could theoretically implant them into any being capable of abstract thought, but raw experiences (i.e. qualia) most likely won’t have this property.
Also, the project isn’t available online, and it’s a mess: it’s not my field, and I have an irrational desire to build everything from scratch because I want to understand exactly how it’s implemented. Hey, it’s a personal hobby project, don’t judge lol
So far I’ve mostly only replicated the research of others. I have tried some experiments with my own ideas, but spiking neural nets are difficult to simulate on normal hardware, and I need a significant number of neurons, so currently I’m working on designing a more efficient implementation than the ones I’ve previously written.
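For anyone curious why it’s slow, here’s a toy leaky integrate-and-fire loop (not my actual code; every constant is an arbitrary placeholder) that shows where the cost comes from: a dense update over every synapse at every timestep, whether or not anything spiked.

```python
# Toy leaky integrate-and-fire (LIF) network, naive dense version.
# Illustrates the cost only; all constants are placeholders.
import numpy as np

n_neurons, n_steps = 2_000, 1_000
dt, tau = 1e-3, 20e-3            # 1 ms steps, 20 ms membrane time constant
v_thresh, v_reset = 1.0, 0.0

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1 / np.sqrt(n_neurons), size=(n_neurons, n_neurons))
v = np.zeros(n_neurons)

for _ in range(n_steps):
    spikes = v >= v_thresh                         # which neurons fired
    v[spikes] = v_reset                            # reset the ones that fired
    drive = rng.normal(1.2, 0.2, size=n_neurons)   # stand-in external current
    recurrent = weights @ spikes                   # dense O(n^2) work each step
    v += (dt / tau) * (drive - v) + recurrent      # leaky integration
```

Event-driven or sparse approaches only touch the synapses of neurons that actually fired in a given step, which is the kind of saving a more efficient implementation chases.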
After that, my plan is to experiment with my own designs for a spiking artificial hippocampus. If my ideas are sound, I should be able to use similar systems to implement both short- and long-term memory storage.
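As a point of reference (this is not my design, just the textbook pair-based STDP rule), here’s the kind of timing-based plasticity spiking nets typically use to strengthen or weaken synapses; the constants are arbitrary.

```python
# Generic pair-based spike-timing-dependent plasticity (STDP) rule.
# Pre-before-post potentiates the synapse; post-before-pre depresses it.
# Constants are arbitrary placeholders.
import numpy as np

A_plus, A_minus, tau_ms = 0.01, 0.012, 20.0

def stdp_dw(t_pre_ms, t_post_ms):
    """Weight change for one pre/post spike pair, given spike times in ms."""
    dt = t_post_ms - t_pre_ms
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_ms)
    return -A_minus * np.exp(dt / tau_ms)

w = 0.5
for _ in range(100):
    w = max(0.0, w + stdp_dw(t_pre_ms=10.0, t_post_ms=15.0))  # pre fires first
print("repeated pre-then-post:", w)                           # synapse strengthens

w = 0.5
for _ in range(100):
    w = max(0.0, w + stdp_dw(t_pre_ms=15.0, t_post_ms=10.0))  # post fires first
print("repeated post-then-pre:", w)                           # synapse weakens
```

Plasticity rules in this family are one common way spiking nets end up storing associations.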
If that succeeds, I’ll move on to the main event, focus and attention, which I also have some ideas for, but that really requires the other systems to be functional first.
I probably won’t get that far, but hey, it’s at least interesting to think about, and it’s honestly fun to watch a neural net learn patterns in real time, even if it’s kinda slow.