Talk:Artificial consciousness

{{WikiProject banner shell|class=C|1=

{{WikiProject Robotics |importance=mid |attention=yes}}

{{WikiProject Cognitive science |importance=High}}

{{WikiProject Computer science |importance=}}

{{WikiProject Effective Altruism |importance=mid}}

{{WikiProject Neuroscience |importance=Low}}

{{WikiProject Philosophy |mind=yes |science=yes |contemporary=yes |importance=mid}}

{{WikiProject Transhumanism|importance=mid}}

}}

{{Annual readership|days=90}}

{{Archives|Blasphemy, NPOV, AI vs AC, notreal|auto=short|search=yes|index=User:ClueBot III/Master Detailed Indices/Talk:Artificial consciousness|bot=ClueBot III|age=365}}

{{User:ClueBot III/ArchiveThis|age=8760|archiveprefix=Talk:Artificial consciousness/Archive|numberstart=13|maxarchsize=120000|header={{Automatic archive navigator}}|minkeepthreads=8|minarchthreads=1|format= %%i}}

{{Archive basics

|archive = Talk:Artificial consciousness/Archive %(counter)d

|counter = 13

|headerlevel = 2

|maxarchivesize = 120K

|archiveheader = {{Aan}}

}}

"Self-simulation"

Hi, "self-simulation" is a concept and methode advocated by Hod Lipson as pre-stage to self-awareness of robots.{{cite web|url=https://www.quantamagazine.org/hod-lipson-is-building-self-aware-robots-201907-11/|title=Curious About Consciousness? Ask the Self-Aware Machines|access-date=2019-10-21|date=2019-07-09|author=John Pavlus|website=Quanta Magazine}}

Now I do not know where that could fit: does it belong in this article, or is it worth an article of its own? Nsae Comp (talk) 16:37, 21 October 2019 (UTC)

:{{reply to|Nsae Comp}} I started writing [https://en.wikipedia.org/w/index.php?title=Artificial_consciousness&diff=935654956&oldid=935653716 a description] of this "self-modeling" concept, but it's far from complete. Jarble (talk) 04:09, 17 December 2020 (UTC)

{{reflist-talk}}

== Intriguing work re. MC ==

Hi, if anyone hasn't seen this, it appears that Orch-OR may indeed be correct and verifiable.

It's actually a very accurate model from certain points of view, such as the action of xenon and other anaesthetics on consciousness.

It's entirely possible that the technology to make a conscious machine already exists, but what is lacking is the specific program and model to run on limited hardware. — Preceding unsigned comment added by 88.81.156.140 (talk) 08:15, 9 April 2021 (UTC)

== Difficult to understand for a non-technical audience ==

This article feels like it is written for a technical audience; it's really hard for newcomers to understand. I understand the endeavor to be technically accurate, and maybe it's also a complicated subject in itself. For example, the definition "Define that which would have to be synthesized were consciousness to be found in an engineered artifact" feels very convoluted to me, and I didn't understand the paragraph on the Computational Foundation argument. Alenoach (talk) 22:56, 22 June 2023 (UTC)

:This means: what would have to be made in order to have, for example, a piece of computer software that is conscious. The more we know about what has to be made for that, and then make it, the closer we come to having such software. This is what the research is about. We don't necessarily get anything conscious, but we can get closer.

:It is not necessarily about computer software; it may be something else that is engineered. I hope this explains what Igor Aleksander wanted to say. His definition is perhaps not easy to understand, but it defines what needs to be defined, and there really aren't many better ones. Unless one resorts to a greedy simplification that omits something essential, that is. Tkorrovi (talk) 23:56, 26 April 2025 (UTC)

:I very much want to write easier, better-to-understand explanations, like the one above. But because the Wikipedia rules were applied to the extreme, at times anything not written directly from a peer-reviewed paper was deleted. And such texts are always technical, almost never easy for most people to understand. This is also one reason why I wanted a link to the Everything2 wiki on the topic, where the rules are less restrictive and it is possible to write such explanations there. Yes, there is also a link to my software project; some people like it, and I cannot change that. If some say that this taints everything, that's an easy way to prevent all good efforts. My aim has always been to understand these things and to help people understand them; this is also why, almost alone, a long time ago, I started this article. I also wrote some software for research purposes for the same reason. I'm so sorry for the sin, though I don't really understand why it is considered to be such a great sin. Tkorrovi (talk) 10:34, 27 April 2025 (UTC)

== Integrated information theory ==

I think integrated information theory is a major aspect of the topic and should be discussed in the article. Alenoach (talk) 07:00, 16 August 2023 (UTC)

:Perhaps also Attention schema theory and Global workspace theory. Alenoach (talk) 10:12, 16 August 2023 (UTC)

== May need to be removed ==

I don't feel confident removing a lot of content without discussion, but in my opinion, the section "Implementation proposals" still contains old and non-essential content that has historical value but isn't so useful for readers. For example, the part on "Intelligent Distribution Agent". Perhaps some of it can be moved to other articles like Cognitive architecture. Alenoach (talk) 22:24, 18 August 2024 (UTC)

:Maybe that is true of something in particular. But in general, you say "old and non-essential content that has historical value but that isn't so useful for readers". Would you please explain why old is necessarily bad? Is only that which is said at any particular time true? One old thing that you seem to accept is Daniel Dennett's multiple drafts model; it may just happen to remain true, in spite of being old. Tkorrovi (talk) 09:58, 29 April 2025 (UTC)

== Chatbots like ChatGPT or Bard have been trained to say they are not conscious? ==

The statement "many chatbots like ChatGPT or Bard have been trained to say they are not conscious" is cited to this article:

https://www.noemamag.com/artificial-general-intelligence-is-already-here/

But that article provides no evidence to support this statement. It merely states the same: "ChatGPT and Bard are both trained to respond that they are not conscious."

Therefore, I removed the following statement and its reference:

Additionally, many chatbots like ChatGPT or Bard have been trained to say they are not conscious.{{Cite news |last=Agüera y Arcas |first=Blaise |last2=Norvig |first2=Peter |date=October 10, 2023 |title=Artificial General Intelligence Is Already Here |url=https://www.noemamag.com/artificial-general-intelligence-is-already-here/ |work=Noema}}

Tyler keys (talk) 12:49, 26 October 2024 (UTC)

:It's not something that companies openly declare. But one of the two authors (Blaise Agüera y Arcas) is well placed to make a statement about Bard, since he works at Google. Asking ChatGPT if it is conscious returns an unusually short and categorical negative response. But indeed, the authors don't work at OpenAI and may not have insider knowledge about how ChatGPT was trained. I replaced the sentence with "Additionally, some chatbots have been trained to say they are not conscious." Let me know if it's still not OK. Alenoach (talk) 01:11, 27 October 2024 (UTC)

:Thanks - I neglected to take note of the author's credentials. Tyler keys (talk) 06:33, 28 October 2024 (UTC)

::No problem, it's good that you verify the sources. Alenoach (talk) 17:56, 28 October 2024 (UTC)