Fusion adaptive resonance theory

{{Technical|date=November 2020}}

'''Fusion adaptive resonance theory''' ('''fusion ART''')<ref>Tan, A.-H., Carpenter, G. A. & Grossberg, S. (2007) [http://www3.ntu.edu.sg/home/ASAHTan/Papers/2007/II%20ISNN07.pdf Intelligence Through Interaction: Towards A Unified Theory for Learning]. In proceedings, D. Liu et al. (Eds.): International Symposium on Neural Networks (ISNN'07), LNCS 4491, Part I, pp. 1098–1107.</ref><ref>{{Cite journal|last1=Tan|first1=A.-H.|last2=Subagdja|first2=B.|last3=Wang|first3=D.|last4=Meng|first4=L.|date=2019|title=Self-organizing neural networks for universal learning and multimodal memory encoding|url=https://linkinghub.elsevier.com/retrieve/pii/S0893608019302370|journal=Neural Networks|language=en|volume=120|pages=58–73|doi=10.1016/j.neunet.2019.08.020|pmid=31537437|s2cid=202703163}}</ref> is a generalization of the self-organizing neural networks known as the original Adaptive Resonance Theory<ref>Carpenter, G.A. & Grossberg, S. (2003), [http://cns.bu.edu/Profiles/Grossberg/CarGro2003HBTNN2.pdf Adaptive Resonance Theory] {{Webarchive|url=https://web.archive.org/web/20060519091948/http://cns.bu.edu/Profiles/Grossberg/CarGro2003HBTNN2.pdf |date=2006-05-19 }}, In Michael A. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, Second Edition (pp. 87–90). Cambridge, MA: MIT Press.</ref> models for learning recognition categories across multiple pattern channels. There is a separate stream of work on fusion ARTMAP,<ref>Y.R. Asfour, G.A. Carpenter, S. Grossberg, and G.W. Lesher. (1993) Fusion ARTMAP: an adaptive fuzzy network for multi-channel classification. In Proceedings of the Third International Conference on Industrial Fuzzy Control and Intelligent Systems (IFIS).</ref><ref>R.F. Harrison and J.M. Borges. (1995) Fusion ARTMAP: Clarification, Implementation and Developments. Research Report No. 589, Department of Automatic Control and Systems Engineering, The University of Sheffield.</ref> which extends fuzzy ARTMAP, an architecture of two fuzzy ART modules connected by an inter-ART map field, to an extended architecture consisting of multiple ART modules.

Fusion ART unifies a number of neural model designs and supports a myriad of learning paradigms, notably unsupervised learning, supervised learning, reinforcement learning, multimodal learning, and sequence learning. In addition, various extensions have been developed for domain knowledge integration,<ref>{{Cite journal|last1=Teng|first1=T.-H.|last2=Tan|first2=A.-H.|last3=Zurada|first3=J. M.|date=2015|title=Self-Organizing Neural Networks Integrating Domain Knowledge and Reinforcement Learning|url=https://ieeexplore.ieee.org/document/6841041|journal=IEEE Transactions on Neural Networks and Learning Systems|volume=26|issue=5|pages=889–902|doi=10.1109/TNNLS.2014.2327636|pmid=25881365|s2cid=4664197|issn=2162-237X}}</ref> memory representation,<ref>{{Cite journal|last1=Wang|first1=W.|last2=Subagdja|first2=B.|last3=Tan|first3=A.-H.|last4=Starzyk|first4=J. A.|date=2012|title=Neural Modeling of Episodic Memory: Encoding, Retrieval, and Forgetting|url=https://ieeexplore.ieee.org/document/6261552|journal=IEEE Transactions on Neural Networks and Learning Systems|volume=23|issue=10|pages=1574–1586|doi=10.1109/TNNLS.2012.2208477|pmid=24808003|s2cid=1337309|issn=2162-237X}}</ref><ref>{{Cite journal|last1=Wang|first1=W.|last2=Tan|first2=A.-H.|last3=Teow|first3=L.-N.|date=2017|title=Semantic Memory Modeling and Memory Interaction in Learning Agents|url=https://ieeexplore.ieee.org/document/7429758|journal=IEEE Transactions on Systems, Man, and Cybernetics: Systems|volume=47|issue=11|pages=2882–2895|doi=10.1109/TSMC.2016.2531683|s2cid=12768875|issn=2168-2216}}</ref> and modelling of high-level cognition.

== Overview ==

Fusion ART is a natural extension of the original adaptive resonance theory (ART)<ref>Grossberg, S. (1987), Competitive learning: From interactive activation to adaptive resonance, Cognitive Science, 11, 23–63.</ref> models developed by Stephen Grossberg and Gail A. Carpenter from a single pattern field to multiple pattern channels. Whereas the original ART models perform unsupervised learning of recognition nodes in response to incoming input patterns, fusion ART learns multi-channel mappings simultaneously across multi-modal pattern channels in an online and incremental manner.

== The learning model ==

Fusion ART employs a multi-channel architecture (as shown below), comprising a category field F_2 connected to a fixed number K of pattern channels or input fields F_1^{c1},\dots,F_1^{cK} through bidirectional conditionable pathways. The model unifies a number of network designs, most notably Adaptive Resonance Theory (ART), Adaptive Resonance Associative Map (ARAM)<ref>{{cite journal|date=1995|title=Adaptive Resonance Associative Map|url=http://www3.ntu.edu.sg/home/ASAHTan/Papers/ARAM%20NN95.pdf|journal=Neural Networks|volume=8|issue=3|pages=437–446|last1=Tan|first1=A.-H.|doi=10.1016/0893-6080(94)00092-z}}</ref> and Fusion Architecture for Learning and COgNition (FALCON),<ref>{{cite journal|date=2008|title=Integrating Temporal Difference Methods and Self-Organizing Neural Networks for Reinforcement Learning with Delayed Evaluative Feedback|url=http://www3.ntu.edu.sg/home/ASAHTan/Papers/2008/TD-FALCON%20TNN%2004359212.pdf|journal=IEEE Transactions on Neural Networks|volume=19|issue=2|pages=230–244|last1=Tan|first1=A.-H.|last2=Lu|first2=N.|last3=Xiao|first3=D.}}</ref> developed over the past decades for a wide range of functions and applications.

[[File:Fusion ART Architecture.jpg|thumb|The fusion ART architecture: a category field F_2 connected to K pattern channels through bidirectional conditionable pathways.]]

Given a set of multimodal patterns, each presented at a pattern channel, the fusion ART pattern encoding cycle comprises five key stages, namely code activation, code competition, activity readout, template matching, and template learning, as described below; a minimal code sketch of the full cycle follows the list.

* Code activation: Given the input activity vectors \vec{I}^{ck}, one for each input field F_1^{ck}, the choice function T_j of each F_2 node j is computed based on the combined overall similarity between the input patterns and the corresponding weight vectors \vec{w}_j^{ck}.
* Code competition: A code competition process follows, under which the F_2 node with the highest choice function value is identified. The winner, indexed J, is the node whose choice function value T_J is the maximum among all F_2 nodes, reflecting a winner-take-all strategy.
* Activity readout: During memory recall, the chosen F_2 node J performs a readout of its weight vectors to the input fields F_1^{ck}.
* Template matching: Before the activity readout is stabilized and node J can be used for learning, a template matching process checks that the weight templates of node J are sufficiently close to their respective input patterns. Specifically, resonance occurs if, for each channel k, the match function of the chosen node J meets its vigilance criterion. If any of the vigilance constraints is violated, a mismatch reset occurs, in which the value of the choice function T_J is set to 0 for the duration of the input presentation. Under the match tracking process, at the beginning of each input presentation the vigilance parameter of each channel ck equals a baseline vigilance. When a mismatch reset occurs, the vigilance values of all pattern channels are increased simultaneously until one of them is slightly larger than its corresponding match function, causing a reset. The search process then selects another F_2 node J under the revised vigilance criterion, until a resonance is achieved.
* Template learning: Once a resonance occurs, for each channel ck, the weight vector \vec{w}_J^{ck} is modified according to a learning rule which moves it towards the input pattern. When an uncommitted node is selected for learning, it becomes committed and a new uncommitted node is added to the F_2 field. Fusion ART thus expands its network architecture dynamically in response to the input patterns.
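
The following Python sketch illustrates the five stages under two simplifying assumptions: fuzzy ART operations (element-wise minimum) are used in every channel, and match tracking is omitted, so vigilance stays at its baseline throughout each presentation. The class name FusionART and the parameter names (gamma for channel weights, alpha for the choice parameter, beta for the learning rate, rho_base for baseline vigilance) are illustrative conventions, not a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

class FusionART:
    """Minimal multi-channel fusion ART sketch using fuzzy ART operations."""

    def __init__(self, channel_dims, gamma, alpha=0.01, beta=1.0, rho_base=None):
        self.dims = list(channel_dims)                 # input dimension per channel
        self.gamma = np.asarray(gamma, dtype=float)    # channel contribution weights
        self.alpha = alpha                             # choice parameter
        self.beta = beta                               # learning rate
        self.rho = rho_base or [0.5] * len(self.dims)  # baseline vigilance per channel
        # Start with one uncommitted node (all-ones templates); it always resonates.
        self.w = [np.ones((1, d)) for d in self.dims]

    def choice(self, I):
        # Code activation: combined similarity of the inputs to every F_2 node.
        T = np.zeros(len(self.w[0]))
        for k, Ik in enumerate(I):
            match = np.minimum(Ik, self.w[k]).sum(axis=1)            # fuzzy AND
            T += self.gamma[k] * match / (self.alpha + self.w[k].sum(axis=1))
        return T

    def readout(self, J, k):
        # Activity readout: node J's template for channel k.
        return self.w[k][J]

    def learn(self, I):
        I = [np.asarray(Ik, dtype=float) for Ik in I]
        T = self.choice(I)
        while True:
            J = int(np.argmax(T))                      # code competition (winner take all)
            # Template matching: every channel must meet its vigilance criterion.
            m = [np.minimum(Ik, self.w[k][J]).sum() / max(Ik.sum(), 1e-9)
                 for k, Ik in enumerate(I)]
            if all(mk >= rk for mk, rk in zip(m, self.rho)):
                break                                  # resonance
            T[J] = 0.0                                 # mismatch reset; search continues
        # Template learning: move each template towards the input pattern.
        for k, Ik in enumerate(I):
            self.w[k][J] = ((1 - self.beta) * self.w[k][J]
                            + self.beta * np.minimum(Ik, self.w[k][J]))
        if J == len(self.w[0]) - 1:                    # an uncommitted node was committed:
            for k, d in enumerate(self.dims):          # append a fresh uncommitted node
                self.w[k] = np.vstack([self.w[k], np.ones(d)])
        return J
</syntaxhighlight>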

== Types of fusion ART ==

The network dynamics described above can be used to support numerous learning operations. The following subsections show how fusion ART can be used for a variety of traditionally distinct learning tasks.

=== Original ART models ===

With a single pattern channel, the fusion ART architecture reduces to the original ART model. Using a selected vigilance value ρ, an ART model learns a set of recognition nodes in response to an incoming stream of input patterns in a continuous manner. Each recognition node in the F_2 field learns to encode a template pattern representing the key characteristics of a set of patterns. ART has been widely used in the context of unsupervised learning for discovering pattern groupings.
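
Continuing the FusionART sketch above, a single-channel instance behaves as a fuzzy ART clusterer. Complement coding of the inputs, a standard fuzzy ART convention not detailed in this article, is assumed here to prevent template erosion:

<syntaxhighlight lang="python">
# Single-channel fusion ART reduces to fuzzy ART (unsupervised clustering).
art = FusionART(channel_dims=[4], gamma=[1.0], rho_base=[0.7])
for x in ([0.2, 0.9], [0.25, 0.85], [0.8, 0.1]):
    x = np.asarray(x)
    I = np.concatenate([x, 1.0 - x])       # complement coding
    print("category:", art.learn([I]))     # similar inputs share a node
</syntaxhighlight>

The first two inputs resonate with the same category node, while the dissimilar third input commits a new one.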

=== Adaptive resonance associative map ===

By synchronizing pattern coding across multiple pattern channels, fusion ART learns to encode associative mappings across distinct pattern spaces. A specific instance of fusion ART with two pattern channels is known as adaptive resonance associative map (ARAM), which learns multi-dimensional supervised mappings from one pattern space to another. An ARAM system consists of an input field F_1^a, an output field F_1^b, and a category field F_2. Given a set of feature vectors presented at F_1^a with their corresponding class vectors presented at F_1^b, ARAM learns a predictive model (encoded by the recognition nodes in F_2) that associates combinations of key features to their respective classes.
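
Continuing the sketch above, a two-channel ARAM can be trained on feature-class pairs and queried by reading out the class channel. The query code below recomputes the choice function on the feature channel only, which a fuller implementation would support directly:

<syntaxhighlight lang="python">
# ARAM as a two-channel fusion ART: channel 0 holds complement-coded
# features and channel 1 holds one-hot class vectors; a class-channel
# vigilance of 1.0 forces distinct classes onto distinct category nodes.
aram = FusionART(channel_dims=[4, 2], gamma=[0.5, 0.5], rho_base=[0.6, 1.0])
for x, y in [([0.1, 0.9], [1.0, 0.0]), ([0.9, 0.2], [0.0, 1.0])]:
    x = np.asarray(x)
    aram.learn([np.concatenate([x, 1.0 - x]), np.asarray(y)])

# Predict: select the best committed node using the feature channel alone,
# then read out its class template.
q = np.asarray([0.15, 0.85])
q = np.concatenate([q, 1.0 - q])
T = np.minimum(q, aram.w[0]).sum(axis=1) / (aram.alpha + aram.w[0].sum(axis=1))
J = int(np.argmax(T[:-1]))       # exclude the trailing uncommitted node
print(aram.readout(J, 1))        # -> [1. 0.], the class of the nearer pattern
</syntaxhighlight>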

Fuzzy ARAM, based on fuzzy ART operations, has been successfully applied to numerous machine learning tasks, including personal profiling,<ref>{{cite book | doi=10.1007/3-540-45571-X_21 | chapter=Predictive Adaptive Resonance Theory and Knowledge Discovery in Databases | title=Knowledge Discovery and Data Mining. Current Issues and New Applications | series=Lecture Notes in Artificial Intelligence | date=2000 | last1=Tan | first1=Ah-Hwee | last2=Soon | first2=Hui-Shin Vivien | volume=1805 | pages=173–176 | isbn=978-3-540-67382-8 }}</ref> document classification,<ref>{{cite journal | last1 = He | first1 = J. | last2 = Tan | first2 = A.-H. | last3 = Tan | first3 = C.-L. | year = 2003 | title = On Machine Learning Methods for Chinese Document Classification | url = http://www3.ntu.edu.sg/home/ASAHTan/Papers/TC%20APIN03.pdf | journal = Applied Intelligence | volume = 18 | issue = 3 | pages = 311–322 | doi = 10.1023/A:1023202221875 | s2cid = 2033181 }}</ref> personalized content management,<ref>{{cite journal | last1 = Tan | first1 = A.-H. | last2 = Ong | first2 = H.-L. | last3 = Pan | first3 = H. | last4 = Ng | first4 = J. | last5 = Li | first5 = Q.-X. | year = 2004 | title = Towards Personalized Web Intelligence | url = http://www3.ntu.edu.sg/home/ASAHTan/Papers/TC%20APIN03.pdf | journal = Knowledge and Information Systems | volume = 6 | issue = 5 | pages = 595–616 | doi = 10.1007/s10115-003-0130-9 | s2cid = 14699173 }}</ref> and DNA gene expression analysis.<ref>{{cite journal | last1 = Tan | first1 = A.-H. | last2 = Pan | year = 2005 | title = Predictive Neural Networks for Gene Expression Data Analysis | url = http://www3.ntu.edu.sg/home/ASAHTan/Papers/Predictive%20NN%20for%20Gene%20Expression%20Analysis.pdf | journal = Neural Networks | volume = 18 | issue = 3 | pages = 297–306 | doi = 10.1016/j.neunet.2005.01.003 | pmid = 15896577 | s2cid = 5058995 }}</ref> In many benchmark experiments, ARAM has demonstrated predictive performance superior to that of many state-of-the-art machine learning systems, including C4.5, backpropagation neural networks, k-nearest neighbour, and support vector machines.

=== Fusion ART with domain knowledge ===

During learning, fusion ART formulates recognition categories of input patterns across multiple channels. The knowledge that fusion ART discovers during learning is compatible with symbolic rule-based representation. Specifically, the recognition categories learned by the F_2 category nodes are compatible with a class of IF-THEN rules that map a set of input attributes (antecedents) in one pattern channel to a disjoint set of output attributes (consequents) in another channel. Due to this compatibility, at any point of the incremental learning process, instructions in the form of IF-THEN rules can be readily translated into the recognition categories of a fusion ART system. The rules are conjunctive in the sense that the attributes in the IF clause and in the THEN clause have an AND relationship. Augmenting a fusion ART network with domain knowledge through explicit instructions serves to improve learning efficiency and predictive accuracy.

The fusion ART rule insertion strategy is similar to that used in Cascade ARTMAP, a generalization of ARTMAP that performs domain knowledge insertion, refinement, and extraction.<ref>{{Cite journal|last=Tan|first=A.-H.|date=1997|title=Cascade ARTMAP: Integrating Neural Computation and Symbolic Knowledge Processing|url=http://www3.ntu.edu.sg/home/ASAHTan/Papers/Cascade%20ARTMAP-TNN97.pdf|journal=IEEE Transactions on Neural Networks|volume=8|issue=2|pages=237–250|doi=10.1109/72.557661|pmid=18255628}}</ref> For direct knowledge insertion, the IF and THEN clauses of each instruction (rule) are translated into a pair of vectors A and B respectively. The derived vector pairs are then used as training patterns for insertion into a fusion ART network. During rule insertion, the vigilance parameters are set to 1 to ensure that each distinct rule is encoded by one category node.
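
A minimal sketch of this rule insertion procedure, continuing the FusionART sketch above; the attribute names are hypothetical, and binary attribute vectors are used without complement coding for brevity (a full implementation would complement-code the vectors to keep subset rules distinct):

<syntaxhighlight lang="python">
# Translate each IF-THEN rule into a vector pair (A, B) and insert it as
# a training pattern; vigilance 1 gives every distinct rule its own node.
antecedents = ["fever", "cough", "rash"]         # hypothetical attributes
consequents = ["flu", "measles"]
rules = [(("fever", "cough"), ("flu",)),
         (("fever", "rash"), ("measles",))]
kb = FusionART(channel_dims=[3, 2], gamma=[0.5, 0.5], rho_base=[1.0, 1.0])
for if_attrs, then_attrs in rules:
    A = np.array([float(a in if_attrs) for a in antecedents])
    B = np.array([float(c in then_attrs) for c in consequents])
    kb.learn([A, B])                             # one category node per rule
</syntaxhighlight>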

=== Fusion architecture for learning and cognition (FALCON) ===

Reinforcement learning is a paradigm wherein an autonomous system learns to adjust its behaviour based on reinforcement signals received from the environment. An instance of fusion ART, known as FALCON (fusion architecture for learning and cognition), learns mappings simultaneously across multi-modal input patterns, involving states, actions, and rewards, in an online and incremental manner. Compared with other ART-based reinforcement learning systems, FALCON presents a truly integrated solution in the sense that there is no implementation of a separate reinforcement learning module or Q-value table. Using competitive coding as the underlying principle of computation, the network dynamics encompasses several learning paradigms, including unsupervised learning, supervised learning, and reinforcement learning.

FALCON employs a three-channel architecture, comprising a category field F_2 and three pattern fields, namely a sensory field F_1^{c1} for representing current states, a motor field F_1^{c2} for representing actions, and a feedback field F_1^{c3} for representing reward values. A class of FALCON networks, known as TD-FALCON, incorporates temporal difference (TD) methods to estimate and learn the value function Q(s,a), which indicates the goodness of taking a certain action a in a given state s.
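
In terms of the FusionART sketch above, a FALCON network is simply a three-channel instance; the field sizes and parameter values here are hypothetical:

<syntaxhighlight lang="python">
# FALCON as a three-channel fusion ART: sensory (state), motor (action),
# and feedback (reward) fields.  A complement-coded reward vector (Q, 1-Q)
# on the feedback channel is a common convention.
falcon_net = FusionART(channel_dims=[8, 4, 2],    # state, action, reward
                       gamma=[1/3, 1/3, 1/3],     # equal channel weighting
                       rho_base=[0.2, 0.2, 0.5])  # per-channel vigilance
</syntaxhighlight>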

The general sense-act-learn algorithm for TD-FALCON is summarized as follows. Given the current state s, the FALCON network is used to predict the value of performing each available action a in the action set A, based on the corresponding state vector \vec{s} and action vector \vec{a}. The value functions are then processed by an action selection strategy (also known as a policy) to select an action. Upon receiving feedback (if any) from the environment after performing the action, a TD formula is used to compute a new estimate of the Q-value for performing the chosen action in the current state. The new Q-value is then used as the teaching signal (represented as reward vector R) for FALCON to learn the association of the current state and the chosen action to the estimated value.
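
The loop below sketches this algorithm under stated assumptions: falcon is a wrapper exposing predict_q(s, a) and learn_q(s, a, q), hypothetical helpers that query and train the three-channel network above; env exposes reset() and step(a); and the TD formula shown is a standard Q-learning update:

<syntaxhighlight lang="python">
import random

def td_falcon_episode(falcon, env, actions, lr=0.5, discount=0.9, eps=0.1):
    # One sense-act-learn episode of TD-FALCON (illustrative sketch).
    s = env.reset()
    done = False
    while not done:
        # Sense: predict the value of every available action in state s.
        q = {a: falcon.predict_q(s, a) for a in actions}
        # Act: an epsilon-greedy action selection policy.
        a = random.choice(actions) if random.random() < eps else max(q, key=q.get)
        s_next, r, done = env.step(a)
        # Learn: the TD formula yields a new Q-value estimate, which becomes
        # the teaching signal (reward vector) on FALCON's feedback channel.
        q_next = 0.0 if done else max(falcon.predict_q(s_next, b) for b in actions)
        falcon.learn_q(s, a, q[a] + lr * (r + discount * q_next - q[a]))
        s = s_next
</syntaxhighlight>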

== References ==

{{reflist}}

[[Category:Theories]]