Talk:Feedforward neural network

{{WikiProject banner shell|

class=C|

1=

{{WikiProject Computing |importance=Low}}

{{WikiProject Computer science |importance=Low}}

}}

The former use of the label and nomenclature of '''Feedforward''' (vs non-feedforward networks) is both misleading and confusing, and should therefore be changed.

Recurrent networks etc. process time, but they do not have feedback in the sense of negative or positive feedback, where the outputs feed back into the VERY SAME inputs and modify them.

From this perspective all current neural networks including recurrent networks are feedforward:

1) they multiply inputs by weights to get outputs, and

2) backpropagation could not be used otherwise.

Temporal networks just do another pass after some time or processing. Thus I suggest this category be disambiguated by calling it Temporal and Non-Temporal, or something along those lines.
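The claim above — that a recurrent step is itself just a feedforward multiply applied again at the next time step — can be sketched in a few lines. This is only an illustrative sketch; the weights, sizes, and function name here are mine, not from the article:

```python
import math

# A minimal sketch (weights are purely illustrative): one step of a
# plain recurrent unit is just a feedforward multiply of the current
# input and the previous state by fixed weights. The "feedback" is
# another pass through time, not a wire from the output back into the
# very same inputs within one pass.
W_IN, W_REC = 0.5, 0.8   # input->hidden and hidden->hidden weights

def rnn_step(x, h_prev):
    # Weighted sum followed by a squashing nonlinearity: an ordinary
    # feedforward computation.
    return math.tanh(W_IN * x + W_REC * h_prev)

h = 0.0
for x in [1.0, 0.0, -1.0]:   # three time steps
    h = rnn_step(x, h)        # each step is a feedforward pass
```

Unrolled this way, the whole computation is a (deep) feedforward graph, which is exactly why backpropagation (through time) applies to it.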

slp

The last paragraph in Single-layer Perceptron seems irrelevant and out of place since it's solely about Multi-layer Perceptrons. It should be in the next section. --134.100.6.103 (talk) 12:53, 22 July 2013 (UTC)

overview

This is a very basic page and I am hoping to update it. Paskari 14:26, 30 November 2006 (UTC)

Picture

I recently included the picture; if anyone can make a better one, please replace it. Paskari 14:26, 30 November 2006 (UTC)

In the picture describing the XOR function, shouldn't the output be 0 when x=y=1? 132.168.90.105 (talk) 13:40, 3 March 2016 (UTC) Julien Borrel

:It wasn't making sense to me either. It turns out that reaching the threshold yields a 0 or 1 value, so applying the -2 weight results in, for example, 0 or -2 and never -4.

:I hope my changes to the XOR diagram description are correct and readable.

:For the reviewers:

:The equal signs within the triplets represent the results of applying the threshold of 2:

:From input layer on up:

:0,0 -> 0,0,0 -> 0+0+0 -> 0

:0,1 -> 0,1=0,0 -> 1-0+0 -> 1

:1,0 -> (same as 0,1 above) -> 1

:1,1 -> 1,2=1,0 -> 1-2+1 -> 0 JJones587 (talk) 11:33, 15 May 2023 (UTC)
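The trace above can be checked with a small script. This is only one plausible reading of the diagram (each input passing straight through, one hidden unit with threshold 2, and output weights 1, -2, 1); the function names are mine, not from the article:

```python
def step(s, theta):
    """Heaviside threshold unit: fires 1 when the weighted sum reaches theta."""
    return 1 if s >= theta else 0

def xor_net(x, y):
    # Hypothetical reading of the diagram: the two inputs pass straight
    # through, and a middle hidden unit sums them with threshold 2.
    h = step(x + y, 2)           # fires only when both inputs are 1
    # Output unit with weights (1, -2, 1): the -2 path contributes
    # 0 or -2 but never -4, as noted above.
    return step(x - 2 * h + y, 1)

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, "->", xor_net(x, y))
```

Under this reading the four input pairs produce 0, 1, 1, 0, matching the trace.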

ADALINE part

Currently there is no connection in the text between the ADALINE chapter and the previous chapters; what does ADALINE do for feedforward neural networks? I think the part should either be removed or expanded to make that connection. (Background: I'm an MSc-level Computer Science student.)

82.157.46.201 (talk) 14:11, 17 January 2013 (UTC)

References

This really needs more citations. Which paper first used the word feedforward in this context, and which books gave a formal definition (in terms of graph theory)?

multi-layer perceptron

"notable for being able to distinguish data that is not linearly separable"--this implies that being able to classify with non-linearly separable data is the only advantage of multiple hidden layers. But there are other advantages which should be mentioned. Iuvalclejan (talk) 00:27, 26 January 2025 (UTC)