
>I don't see any distinction beyond the complexity of the information content. That thought about an apple carries an implicit context consisting of...

Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

But information is a poor measurement here. The quantity of information in a system tells you how many distinctions can be made using the state of the system. But information doesn't tell you how such distinctions are made and what is ultimately picked out. For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.
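The point that information quantity only counts distinctions, without fixing what they refer to, can be sketched with a toy example (my illustration, not from either comment):

```python
import math

# 3 bits of state distinguish 2**3 = 8 alternatives -- that is all
# the information measure tells you.
n_states = 8
bits = math.log2(n_states)  # 3.0

# The mapping from state to referent is supplied from outside the
# system: the very same 3-bit state could index apples or memory
# addresses (hypothetical labelings, for illustration only).
apple_labels = ["apple_%d" % i for i in range(n_states)]
address_labels = [hex(0x1000 + i) for i in range(n_states)]

# Same entropy, different referents -- the state alone doesn't
# "intrinsically pick out" either target.
assert len(apple_labels) == len(address_labels) == 2 ** int(bits)
```

Nothing in the 3 bits themselves decides between the two labelings; that choice lives in whatever process interprets the state.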

So in a structure that has intrinsic content, the process of picking out the targets of reference is inherent as well. This means that structural information about how concepts relate to each other is also inherent, such that there is a single mapping between the structure as a whole and the universe of concepts. This requires a flexible graph structure so that general relationships can be captured. It's no wonder that the only place we currently find intrinsically contentful structures is brains.

>I honestly can't think of any non-contextual expressions besides axioms, so I don't accept the distinction you're trying to make.

Do the thoughts in your head require external validators to endow them with meaning, or do they intrinsically have meaning owing to their content? If the latter, that should raise your credence that such non-contextual expressions are possible in principle. Denying the notion of intrinsic content because you can't currently write one down is short-sighted.

>Phenomenal consciousness is "real" in the sense that it can drive us to talk about phenomenal consciousness... This seems to be essentially what Frankish says

In the paper you link, Frankish is circumspect about his theory being eliminative about phenomenal consciousness:

>Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist. We might call this eliminativism about phenomenal consciousness. The term is not ideal, however, suggesting as it does that belief in phenomenal consciousness is simply a theoretical error, that rejection of phenomenal realism is part of a wider rejection of folk psychology, and that there is no role at all for talk of phenomenal properties — claims that are not essential to the approach. Another label is ‘irrealism’, but that too has unwanted connotations; illusions themselves are real and may have considerable power. I propose ‘illusionism’ as a more accurate and inclusive name, and I shall refer to the problem of explaining why experiences seem to have phenomenal properties as the illusion problem.

So he seems to accept "eliminativism about phenomenal consciousness" as an accurate description, just one with unwanted connotations. But later on he takes a more unequivocal stance[1]:

>I do not slide into eliminativism about phenomenal consciousness; I explicitly, vocally, and enthusiastically embrace it! Qualia, phenomenal properties, and their ilk do not exist!

[1] https://twitter.com/keithfrankish/status/1182770161251749890



> Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

I agree 100% that computational complexity can be used to make a meaningful distinction. It's not clear that that's the case here, though. That is, I agree that the quantity of information is worlds apart, but if the information content is all of the same kind and requires no special changes to the computational model needed to process it, then I don't think the magnitude of information is relevant.

> For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.

I don't think this distinction is meaningful, because of the descriptor "intrinsic". It suggests that an agent's thoughts are somehow divorced from the environment that bootstrapped them, as if the thoughts somehow originated themselves.

The referent of one of my thoughts is an abstract internal model of that thing, formed from my sensory memories of it. So if "intrinsically contentful" information is simply information that refers to an internal model generated from sensory data, then even a dumb AI-driven non-player character (NPC) in a video game would have thoughts with intrinsic content, since it acts on internal models built from sensing its game environment.

> But to deny the notion of intrinsic content because you can't currently write one down is short-sighted.

Maybe, but I'm not yet convinced that there's real substance to the distinction you're trying to make. I'm all for making meaningful distinctions, and perhaps "mental states" driven by internal sensory-built models is such a distinction, but I'm not sure "thought" or "information with intrinsic content" are good labels for it. "Thought" seems like overstepping given the NPC analogy, and per the above, "intrinsic" doesn't seem like a good fit either.



