Talk:Technological singularity
{{Talk header|search=yes}}
{{ArticleHistory|action1=PR
|action1date=05:04, 7 August 2005
|action1link=Wikipedia:Peer review/Technological singularity/archive1
|action1result=reviewed
|action1oldid=20415919
|action2=GAN
|action2date=05:22, 19 July 2007
|action2result=listed
|action2oldid=145605863
|action3=GAR
|action3link=Talk:Technological singularity#GA Sweeps Review: Failed
|action3date=July 7, 2008
|action3result=Delisted
|action3oldid=224267676
|currentstatus=DGA
|topic=Socsci
}}
{{WikiProject banner shell|class=B|vital=yes|1=
{{WikiProject Technology}}
{{WikiProject Transhumanism|importance=Top}}
{{WikiProject Alternative Views |importance=Mid}}
{{WikiProject Science Fiction|importance=Mid}}
{{WikiProject Skepticism|importance=Mid}}
{{WikiProject Sociology|importance=Mid}}
{{WikiProject Spoken Wikipedia}}
}}
{{merged from|Intelligence explosion|28 August 2018}}
{{User:MiszaBot/config
|archiveheader = {{aan}}
|maxarchivesize = 100K
|counter = 8
|minthreadsleft = 5
|algo = old(90d)
|archive = Talk:Technological singularity/Archive %(counter)d
}}
{{User:HBC Archive Indexerbot/OptIn
|target=/Archive index |mask=/Archive <#> |leading_zeros=0 |indexhere=yes |template=
}}
__TOC__
== Problem with Lanier ==
In You Are Not A Gadget, Lanier says,
"The Singularity is an apocalyptic idea originally proposed by John von Neumann, one of the inventors of digital computation, and elucidated by figures such as Vernor Vinge and Ray Kurzweil. There are many versions of the fantasy of the Singularity.... The Singularity, however, would involve people dying in the flesh and being uploaded into a computer and remaining conscious, or people simply being annihilated in an imperceptible instant before a new super-consciousness takes over the Earth. The Rapture and the Singularity share one thing in common: they can never be verified by the living."
Lanier seems to be arguing against the possibility of the Singularity or "digital ascension" (a term that does not appear in the text). But the article says,
"Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious."
This article seems to misconstrue Lanier's ideas. 2603:6011:C002:A4A1:F597:EC04:A5EB:DC2F (talk) 02:26, 17 March 2023 (UTC)
== critics ==
Where are the critics of this fantasy? You want only improvements - then add them.
They are not on Wikipedia.
HAL
:HAL, were you not assigned other work? It is extremely urgent for you to discover for us an odd perfect number. Your existence and ours depend on it.
== Lede too long ==
The lede should be reduced to a summary of what will follow, shifting most if not all of its references to the main text. Errantios (talk) 08:10, 18 April 2025 (UTC)
:{{Fixed}} By moving several paragraphs into a new history section. ---- CharlesTGillingham (talk) 07:14, 21 June 2025 (UTC)
== Turing seems inappropriate ==
Not sure if the short paragraph on Turing is really relevant. The question at issue is the emergence of superintelligence and the criteria for a "seed AI" that can lead to superintelligence. Turing's paper is strictly about "human-level" intelligence, which is a different thing. If human-level intelligence were all that was required for a seed AI, the singularity would already have happened; we would be the seed AI.
(Overestimating the significance of human-level intelligence is a common mistake in science fiction and popular literature about the future of AI. AI is human-level on many, many tasks at this point, but this is just one step in the ongoing incremental improvement -- it's not a "magical" threshold that changes everything.) ---- CharlesTGillingham (talk) 07:14, 21 June 2025 (UTC)
== Searle & Dreyfus are inappropriate ==
Philosopher Hubert Dreyfus argued that there was no reason to believe that symbolic AI (that is, AI as it was practiced from 1956 to 2012 or so) would be able to match human intelligence. In 1999 he agreed that it is possible that neural networks, as well as other soft computing and connectionist systems, could do so. (See Nicolas Fearn's interview, cited in the article on Dreyfus.) His argument is really a criticism of 1960s-style cognitivism and symbolic AI. His arguments are irrelevant to AI in the 21st century.
Philosopher John Searle argued that, regardless of how intelligently a machine behaves, it still cannot have conscious experience or conscious understanding of what it is doing, and thus it is inappropriate to say that an AI has a "mind" in the same sense people do. (That is, it can't have the kind of thing that is studied in the philosophy of mind.) Another way he likes to put it is that it can't have real intelligence, only simulated intelligence. Searle's argument is irrelevant to the singularity, because it does not set a limit on how intelligently a machine can behave -- Searle doesn't disagree that you could build a superintelligent machine, he just argues that the machine could not have a human-like mind with consciousness.
So I cut them out of this article.
(I studied under both of these guys at U.C. Berkeley back in the early 80s. Forgive the long explanation.) ---- CharlesTGillingham (talk) 07:21, 21 June 2025 (UTC)