
Yeah! Perhaps a bit naively, as a Highly Opinionated Person (HOP) on this topic I was ready for this to have something controversial to say about the nature of intelligence.

It's not out of the ordinary for even Anglosphere philosophers to fall into a kind of essentialism about intelligence, but I think the treatment of it here is extremely careful and thoughtful, at least on first glance.

I suppose I would challenge the following, which I've also sometimes heard from philosophers:

>However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.

I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally? Or that it isn't already equivalent to something out there in the vector space that machines already utilize? People keep rotating through essentialist concepts, each supposedly reflecting an intangible "human element" that shifts the conversation onto non-computational ground, and each turning out to reproduce the errors of every previous variation of intelligence essentialism.

My favorite familiar example is baseball, where people say human umpires create a "human element" by changing the strike zone situationally (e.g. tightening it on an 0-2 count in a big spot, widening it on a 3-0 count), completely forgetting that you could have machines make those adjustments more accurately too, if you really wanted to.

Anyway, I have my usual bones to pick, but overall I think it's a very thoughtful treatment that I wouldn't say is borne of the layperson confusions that frequently dog these convos.



Yep I think that is an interesting point! I definitely think there are important ways in which human intelligence is embodied, but yeah - if we are modeling intelligence as a function, there's no obvious reason to think that whatever influence embodiment has on the output can't be "compressed" in the same way – after all, it doesn't matter generally how ANY of the reasoning that AI is learning to reproduce is _actually_ done. I suppose, though, that that gets at the later emphasis:
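The "intelligence as a function" framing above can be made concrete. If embodiment only matters through its effect on observable outputs, then in principle any learner that fits recorded input-output pairs can approximate it without ever seeing the mechanism. A toy sketch (all names and the target function here are illustrative, not from any real system):

```python
# Toy illustration of the "intelligence as a function" framing:
# an approximator learns only from recorded input-output pairs and
# never sees HOW the target function computes its answers.

def embodied_response(x: float) -> float:
    """Stand-in for some process whose internals we can't inspect."""
    return x * x - 2 * x + 1  # the learner never sees this formula

# "Dataset of recorded experiences": observed inputs and outputs only.
dataset = [(x / 10, embodied_response(x / 10)) for x in range(-50, 51)]

def approximate(x: float) -> float:
    """1-nearest-neighbor: return the output of the closest recorded input."""
    return min(dataset, key=lambda pair: abs(pair[0] - x))[1]

# The approximation tracks the target on unseen inputs, despite
# knowing nothing about the mechanism that produced the data.
for query in (0.33, -1.7, 3.14):
    print(query, embodied_response(query), approximate(query))
```

The point of the sketch is only that the approximator is indifferent to whether the target's outputs came from embodied experience, a formula, or anything else; whether that indifference carries over to intelligence is exactly the question under dispute.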

> Drawing an overly close equivalence between human intelligence and AI risks succumbing to a functionalist perspective, where people are valued based on the work they can perform

One might concede that AI can produce a good enough simulation of an embodied intelligence, while emphasizing that the value of human intelligence per se is not reducible to its effectiveness as an input-output function. But I agree the Vatican's statement seems to go beyond that.


As an aside, and more out of curiosity, I want to mention a tiny niche corner of CogSci I once came across on YouTube. There was a conference on a fringe branch of consciousness studies where a group of philosophers holds that there is a qualitative difference of experience based on material substrate.

That is to say, one view of consciousness suggests that if you froze a snapshot of a human brain in the process of experiencing and then transferred every single observable physical quantity into a simulation running on a completely different material (e.g. from carbon to silicon), then the reproduced consciousness would be unaware of the swap and would continue completely unaffected. This would be a consequence of substrate independence, which is the predominant view, as far as I can tell, in both science and philosophy of mind.

I was fascinated that there was an entire conference dedicated to the opposite view. They contend that there would be a discernible and qualitative difference in the experience of the consciousness. That is, the new mind running in the simulation might "feel" the difference.

Of course, there is no experiment we can perform as of now so it is all conjecture. And this opposing view is a fringe of a fringe. It's just something I wanted to share. It's nice to realize that there are many ways to challenge our assumptions about consciousness. Consider how strongly you may feel about substrate independence and then realize: we don't actually have any proof and reasonable people hold conferences challenging this assumption.


It's going to sound rather hubristic, being that I'm just a random internet commenter and not a conference of philosophers, but this seems... nonsensical? I don't understand how it isn't obvious that the new consciousness instance would be unaware of the swap, or that nevertheless the perspective of the original instance would be completely disconnected from that of the new one.

It seems to be a question that many apparently smart people discuss endlessly for some reason, so I guess I'm not surprised by this proposal in particular, but it's really mystifying to me that anybody other than soulists thinks there's any room for doubt about it whatsoever.


Completely agree. I'm interested in the detour, perhaps as much fascinated by the human psychology that prompts people to invest in these debates as by anything about the question itself. We have psychology of science and political psychology, so a version of that which attempts to predict how philosophers come to their dispositions seems like a worthy venture as well.


And then Marvin Minsky asked: what if you substitute one cell at a time with an exactly functioning electronic duplicate? At what point does this shift occur?


Related to that are Searle's "Chinese Room" argument and the question of "Mind uploading" (can you up/download mental states): https://plato.stanford.edu/entries/chinese-room/#ChinRoomArg...

https://en.wikipedia.org/wiki/Mind_uploading and Chapter 8 about Mind Uploading in https://www.researchgate.net/profile/Alfredo-Pereira-Junior/...

The related Reddit conversation https://www.reddit.com/r/Futurology/comments/2ew9i2/would_it...


Sounds like an experimental question. Maybe 99%, maybe 1%, maybe never.

Can you suggest another way to answer your question other than performing an experiment? Can you describe how to perform an experiment to answer your question?

Would you agree to be the subject of such an experiment?


>I have heard this claim frequently, that intelligence is "embodied" in a way that computers overlook, but if that turns out to be critical, well, who is to say that something like this "embodied" context can't also be modeled computationally?

Well, Searle argued against it when presenting the Chinese Room argument, but I disagree with his take.

I personally believe in the virtual mind argument with an internal simulated experience that is then acted upon externally.

Moreover, if this is the key to human-like intelligence and learning in the real world, I do believe that AI would very quickly surpass our limitations. Humans are not only embodied; we are prisoners of our embodiment, and we only get one body. I don't see any particular reason why a model would be trapped in one body when it could 'hivemind' or control a massive number of bodies/sensors to sense and interact with the environment. The end product would be an experience far different from what a human experiences and would likely be a superorganism in itself.


Experience is biological and analog; computers are digital. That's the core of the problem. It doesn't matter how many samples you take, it's still not the full experience. Witness vinyl.


This is more of a just-so story than an actual argument, and I would say it's exactly the kind of essentialism I was talking about previously. In fact, the versions of the argument typically put forward by Anglosphere philosophers, and in this case by the Vatican, are actually more nuanced. The reference to the "embodied" nature of cognition at least introduces a concept that supports a meaningful argument, one that can be engaged with or falsified.

It could be, at the end of the day, that there is something important about the biological basis of experience and the role it plays in supporting cognition. But simply stipulating that it works that way doesn't represent forward motion in the conversation.


That's not a very good answer, imho.



