Glossary of artificial intelligence
{{short description|List of definitions of terms and concepts commonly used in the study of artificial intelligence}}
{{Use dmy dates|date=September 2017}}
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
{{Compact TOC|side=yes|center=yes|nobreak=yes|seealso=yes|refs=yes|}}
{{Artificial intelligence}}
== A ==
{{glossary}}
{{anchor|A* search}}{{term|A* search}}
{{ghat|Pronounced "A-star".}}
{{defn|A {{gli|graph traversal}} and {{gli|pathfinding}} {{gli|algorithm}} which is used in many fields of {{gli|computer science}} due to its completeness, optimality, and optimal efficiency.}}
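For illustration, a minimal Python sketch of A* over an adjacency-list graph (the names <code>a_star</code> and <code>heuristic</code> are illustrative, not from any standard library); when the heuristic never overestimates the remaining cost, the returned path is optimal:
<syntaxhighlight lang="python">
import heapq

def a_star(graph, start, goal, heuristic):
    """Return the cheapest path from start to goal, or None if unreachable.

    graph: dict mapping a node to a list of (neighbour, edge_cost) pairs.
    heuristic(n, goal): estimated remaining cost; if it never overestimates
    (i.e. it is admissible), the returned path is optimal.
    """
    # Frontier ordered by f = g (cost so far) + h (estimated remaining cost).
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for neighbour, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + heuristic(neighbour, goal),
                                          new_g, neighbour, path + [neighbour]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
print(a_star(graph, "A", "D", lambda n, goal: 0))  # ['A', 'B', 'C', 'D'], total cost 3
</syntaxhighlight>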
{{term|abductive logic programming (ALP)}}
{{defn|A high-level knowledge-representation framework that can be used to solve problems declaratively based on {{gli|abductive reasoning}}. It extends normal {{gli|logic programming}} by allowing some predicates to be incompletely defined, declared as abducible predicates.}}
{{term|abductive reasoning}}
{{ghat|Also abduction, abductive inference, or retroduction.{{Cite web |url=https://commens.org/dictionary/term/retroduction |title=Retroduction |website=Commens – Digital Companion to C. S. Peirce |publisher=Mats Bergman, Sami Paavola & João Queiroz |access-date=2014-08-24 |archive-date=5 July 2022 |archive-url=https://web.archive.org/web/20220705094016/http://www.commens.org/dictionary/term/retroduction |url-status=dead }}}}
{{defn|A form of logical inference which starts with an observation or set of observations and then seeks to find the simplest and most likely explanation. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it. For example: {{Cite book |title=Abductive Inference: Computation, Philosophy, Technology |date=1994 |publisher=Cambridge University Press |isbn=978-0521434614 |editor-last=Josephson |editor-first=John R. |location=Cambridge, UK; New York |doi=10.1017/CBO9780511530128 |oclc=28149683 |editor-last2=Josephson |editor-first2=Susan G.}}}}
{{term|ablation}}
{{defn|The removal of a component of an AI system. An ablation study aims to determine the contribution of a component to an AI system by removing the component, and then analyzing the resultant performance of the system.{{Cite book |last=Sheikholeslami |first=Sina |url=https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258413 |title=Ablation Programming for Machine Learning |date=2019}}}}
{{term|abstract data type}}
{{defn|A mathematical model for data types, where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations.}}
{{term|abstraction}}
{{defn|The process of removing physical, spatial, or temporal details{{Cite journal |last1=Colburn |first1=Timothy |last2=Shute |first2=Gary |date=2007-06-05 |title=Abstraction in Computer Science |journal=Minds and Machines |volume=17 |issue=2 |pages=169–184 |doi=10.1007/s11023-007-9061-7 |s2cid=5927969 |issn=0924-6495}} or attributes in the study of objects or systems in order to more closely attend to other details of interest{{Cite journal |last1=Kramer |first1=Jeff |date=2007-04-01 |title=Is abstraction the key to computing? |journal=Communications of the ACM |volume=50 |issue=4 |pages=36–42 |citeseerx=10.1.1.120.6776 |doi=10.1145/1232743.1232745 |s2cid=12481509 |issn=0001-0782}}}}
{{term|accelerating change}}
{{defn|A perceived increase in the rate of technological change throughout history, which may suggest faster and more profound change in the future and may or may not be accompanied by equally profound social and cultural change.}}
{{term|action language}}
{{defn|A language for specifying state transition systems, commonly used to create formal models of the effects of actions on the world.Michael Gelfond, Vladimir Lifschitz (1998) "[https://ep.liu.se/ea/cis/1998/016/ Action Languages]", Linköping Electronic Articles in Computer and Information Science, vol 3, nr 16. Action languages are commonly used in the {{gli|artificial intelligence}} and robotics domains, where they describe how actions affect the states of systems over time, and may be used for automated planning.}}
{{term|action model learning}}
{{defn|An area of {{gli|machine learning}} concerned with the creation and modification of a software agent's knowledge about the effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in a logic-based action description language and used as input for automated planners.}}
{{term|action selection}}
{{defn|A way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment.}}
{{term|activation function}}
{{defn|In {{gli|artificial neural network|artificial neural networks}}, the activation function of a node defines the output of that node given an input or set of inputs.}}
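For illustration, a small framework-independent Python sketch of two common activation functions applied to a node's weighted input (the weights, inputs, and bias below are arbitrary example values):
<syntaxhighlight lang="python">
import math

def sigmoid(x):
    # Squashes any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged and clips negative inputs to 0.
    return max(0.0, x)

# A node's output is its activation function applied to the weighted input sum.
weights, inputs, bias = [0.4, -0.6], [1.0, 2.0], 0.1
z = sum(w * x for w, x in zip(weights, inputs)) + bias
print(sigmoid(z), relu(z))  # z = -0.7, so sigmoid gives about 0.33 and ReLU gives 0.0
</syntaxhighlight>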
{{term|adaptive algorithm}}
{{defn|An algorithm that changes its behavior at the time it is run, based on an a priori defined reward mechanism or criterion.}}
{{term|adaptive neuro fuzzy inference system (ANFIS)}}
{{ghat|Also adaptive network-based fuzzy inference system.}}
{{defn|A kind of {{gli|artificial neural network}} based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s.{{Cite conference |last1=Jang |first1=Jyh-Shing R |year=1991 |title=Fuzzy Modeling Using Generalized Neural Networks and Kalman Filter Algorithm |url=https://aaai.org/Papers/AAAI/1991/AAAI91-119.pdf |conference=Proceedings of the 9th National Conference on Artificial Intelligence, Anaheim, CA, USA, July 14–19 |volume=2 |pages=762–767}}{{Cite journal |last1=Jang |first1=J.-S.R. |s2cid=14345934 |year=1993 |title=ANFIS: adaptive-network-based fuzzy inference system |journal= IEEE Transactions on Systems, Man, and Cybernetics|volume=23 |issue=3 |pages=665–685 |doi=10.1109/21.256541}} Since it integrates both neural networks and {{gli|fuzzy logic}} principles, it has the potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have the learning capability to approximate nonlinear functions.{{Citation |last1=Abraham |first1=A. |title=Fuzzy Systems Engineering: Theory and Practice |volume=181 |pages=53–83 |year=2005 |editor-last=Nedjah |editor-first=Nadia |series=Studies in Fuzziness and Soft Computing |chapter=Adaptation of Fuzzy Inference System Using Neural Learning |place=Germany |publisher=Springer Verlag |citeseerx=10.1.1.161.6135 |doi=10.1007/11339366_3 |isbn=978-3-540-25322-8 |editor2-last=De Macedo Mourelle |editor2-first=Luiza}} Hence, ANFIS is considered to be a universal estimator.Jang, Sun, Mizutani (1997) – Neuro-Fuzzy and Soft Computing – Prentice Hall, pp 335–368, {{ISBN|0-13-261066-3}} To use ANFIS more efficiently and optimally, one can use the best parameters obtained by a genetic algorithm.{{Cite journal |last1=Tahmasebi |first1=P. |year=2012 |title=A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation |journal=Computers & Geosciences |volume=42 |pages=18–27 |bibcode=2012CG.....42...18T |doi=10.1016/j.cageo.2012.02.004 |pmc=4268588 |pmid=25540468}}{{Cite journal |last1=Tahmasebi |first1=P. |year=2010 |title=Comparison of optimized neural network with fuzzy logic for ore grade estimation |url=https://www.researchgate.net/publication/266881168 |journal=Australian Journal of Basic and Applied Sciences |volume=4 |pages=764–772}}}}
{{term|admissible heuristic}}
{{defn|In computer science, specifically in {{gli|algorithm|algorithms}} related to {{gli|pathfinding}}, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path.{{Cite book |last1=Russell |first1=S.J. |title=Artificial Intelligence: A Modern Approach |title-link=Artificial Intelligence: A Modern Approach |last2=Norvig, P. |publisher=Prentice Hall |year=2002 |isbn=978-0-13-790395-5}}}}
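As an illustrative sketch, the Manhattan distance is admissible for 4-connected grid movement with unit step cost, since every move changes exactly one coordinate by one:
<syntaxhighlight lang="python">
def manhattan(a, b):
    # On a grid where each unit-cost move changes one coordinate by 1, at least
    # |dx| + |dy| moves are needed, so this estimate can never overshoot the
    # true remaining cost; it is therefore admissible.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

print(manhattan((0, 0), (3, 4)))  # 7, a lower bound on the real path cost
</syntaxhighlight>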
{{term|affective computing}}
{{ghat|Also artificial emotional intelligence or emotion AI.}}
{{defn|The study and development of systems and devices that can recognize, interpret, process, and simulate human affects. Affective computing is an interdisciplinary field spanning computer science, psychology, and cognitive science.{{Cite conference |last1=Tao |first1=Jianhua |last2=Tieniu Tan |year=2005 |title=Affective Computing: A Review |publisher=Springer |volume=LNCS 3784 |pages=981–995 |doi=10.1007/11573548 |book-title=Affective Computing and Intelligent Interaction}}{{Cite news |last1=El Kaliouby |first1=Rana |url=https://technologyreview.com/s/609071/we-need-computers-with-empathy/ |archive-url=https://wayback.archive-it.org/all/20180707140902/https://technologyreview.com/s/609071/we-need-computers-with-empathy/ |url-status=dead |archive-date=7 July 2018 |title=We Need Computers with Empathy |date=Nov–Dec 2017 |work=Technology Review |issue=6 |volume=120 |page=8 |access-date=6 November 2018 }}}}
{{term|agent architecture}}
{{defn|A blueprint for software agents and {{gli|intelligent control}} systems, depicting the arrangement of components. The architectures implemented by {{gli|intelligent agent|intelligent agents}} are referred to as cognitive architectures.[https://hri.cogs.indiana.edu/publications/aaai04ws.pdf Comparison of Agent Architectures] {{webarchive |url=https://web.archive.org/web/20080827222057/https://hri.cogs.indiana.edu/publications/aaai04ws.pdf |date=August 27, 2008 }}}}
{{term|AI accelerator}}
{{defn|A class of microprocessor{{Cite web |url=https://v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator |title=Intel unveils Movidius Compute Stick USB AI Accelerator |date=2017-07-21 |url-status=dead |archive-url=https://web.archive.org/web/20170811193632/https://v3.co.uk/v3-uk/news/3014293/intel-unveils-movidius-compute-stick-usb-ai-accelerator |archive-date=11 August 2017 |access-date=28 November 2018}} or computer system{{Cite web |url=https://insidehpc.com/2017/06/inspurs-unveils-gx4-ai-accelerator/ |title=Inspurs unveils GX4 AI Accelerator |date=2017-06-21}} designed as hardware acceleration for {{gli|artificial intelligence}} applications, especially {{gli|artificial neural network|artificial neural networks}}, {{gli|machine vision}}, and {{gli|machine learning}}.}}
{{term|AI-complete}}
{{defn|In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or {{gli|artificial general intelligence|strong AI}}.Shapiro, Stuart C. (1992). [https://cse.buffalo.edu/~shapiro/Papers/ai.pdf Artificial Intelligence] In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".) To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.}}
{{term|algorithm}}
{{defn|An unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks.}}
{{term|algorithmic efficiency}}
{{defn|A property of an {{gli|algorithm}} which relates to the amount of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.}}
{{term|algorithmic probability}}
{{defn|In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s.Solomonoff, R., "[https://world.std.com/~rjs/z138.pdf A Preliminary Report on a General Theory of Inductive Inference]", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).}}
{{term|AlphaGo}}
{{defn|A computer program that plays the board game Go.{{Cite news |url=https://bbc.com/news/technology-35785875 |title=Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol |date=2016-03-12 |work=BBC News |access-date=17 March 2016}} It was developed by Alphabet Inc.'s Google DeepMind in London. AlphaGo has several versions including AlphaGo Zero, AlphaGo Master, AlphaGo Lee, etc.{{Cite web |url=https://deepmind.com/research/alphago/ |title=AlphaGo {{!}} DeepMind |website=DeepMind}} In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board.{{Cite web |url=https://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html |title=Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning |date=27 January 2016 |website=Google Research Blog}}{{Cite news |url=https://bbc.com/news/technology-35420579 |title=Google achieves AI 'breakthrough' by beating Go champion |date=27 January 2016 |work=BBC News}}}}
{{term|ambient intelligence (AmI)}}
{{defn|Electronic environments that are sensitive and responsive to the presence of people.}}
{{term|analysis of algorithms}}
{{defn|The determination of the computational complexity of algorithms, that is the amount of time, storage and/or other resources necessary to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity).}}
{{term|analytics}}
{{defn|The discovery, interpretation, and communication of meaningful patterns in data.}}
{{term|answer set programming (ASP)}}
{{defn|A form of declarative programming oriented towards difficult (primarily NP-hard) search problems. It is based on the stable model (answer set) semantics of logic programming. In ASP, search problems are reduced to computing stable models, and answer set solvers—programs for generating stable models—are used to perform search.}}
{{term|ant colony optimization (ACO)}}
{{defn|A probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs.}}
{{term|anytime algorithm}}
{{defn|An {{gli|algorithm}} that can return a valid solution to a problem even if it is interrupted before it ends.}}
{{term|application programming interface (API)}}
{{defn|A set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. An API may be for a web-based system, operating system, database system, computer hardware, or software library.}}
{{term|approximate string matching}}
{{ghat|Also fuzzy string searching.}}
{{defn|The technique of finding strings that match a pattern approximately (rather than exactly). The problem of approximate string matching is typically divided into two sub-problems: finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately.}}
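For illustration, a minimal Python sketch of the Levenshtein (edit) distance, one common measure used for approximate matching; a pattern is typically considered an approximate match when its distance falls below a chosen threshold:
<syntaxhighlight lang="python">
def levenshtein(a, b):
    """Minimum number of single-character edits (insert, delete, substitute)
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute (or match)
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
</syntaxhighlight>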
{{term|approximation error}}
{{defn|The discrepancy between an exact value and some approximation to it.}}
{{term|argumentation framework}}
{{ghat|Also argumentation system.}}
{{defn|A way to deal with contentious information and draw conclusions from it. In an abstract argumentation framework (see Dung 1995), entry-level information is a set of abstract arguments that, for instance, represent data or a proposition. Conflicts between arguments are represented by a binary relation on the set of arguments. In concrete terms, an argumentation framework is represented as a directed graph in which the nodes are the arguments and the arrows represent the attack relation. There exist several extensions of Dung's framework, such as logic-based argumentation frameworks (see Besnard and Hunter 2001) and value-based argumentation frameworks (see Bench-Capon 2002).}}
{{anchor|artificial general intelligence}}{{term|artificial general intelligence (AGI)}}
{{defn|A type of AI that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks.}}
{{term|artificial immune system (AIS)}}
{{defn|A class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving.}}
{{anchor|artificial intelligence}}{{term|artificial intelligence (AI)}}
{{ghat|Also machine intelligence.}}
{{defn|Any intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of "{{gli|intelligent agent|intelligent agents}}": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.
Definition of AI as the study of intelligent agents:
* {{Harvnb|Poole|Mackworth|Goebel|1998|loc=[https://people.cs.ubc.ca/~poole/ci/ch1.pdf p. 1]}}, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.
* {{Harvtxt|Russell|Norvig|2003}}, who prefer the term "rational agent" and write "The whole-agent view is now widely accepted in the field" {{Harv|Russell|Norvig|2003|p=55}}.
* {{Harvnb|Nilsson|1998}}
* {{Harvnb|Legg|Hutter|2007}}.
Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".{{sfn|Russell|Norvig|2009|p=2}}}}
{{term|Artificial Intelligence Markup Language}}
{{defn|An XML dialect for creating natural language software agents.}}
{{term|Association for the Advancement of Artificial Intelligence (AAAI)}}
{{defn|An international, nonprofit, scientific society devoted to promoting research in, and responsible use of, {{gli|artificial intelligence}}. AAAI also aims to increase public understanding of artificial intelligence (AI), improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.{{Cite web |url=https://aaai.org/Organization/bylaws.php |title=AAAI Corporate Bylaws}}}}
{{term|asymptotic computational complexity}}
{{defn|In {{gli|computational complexity theory}}, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of {{gli|algorithm|algorithms}} and computational problems, commonly associated with the usage of the big O notation.}}
{{term|attention mechanism}}
{{defn|{{gli|machine learning|Machine learning}}-based attention is a mechanism mimicking cognitive attention. It calculates "soft" weights for each word, or more precisely for its embedding, in the context window. These weights can be computed either in parallel (such as in {{gli|transformer|transformers}}) or sequentially (such as in {{gli|recurrent neural network|recurrent neural networks}}). "Soft" weights can change during each runtime, in contrast to "hard" weights, which are (pre-)trained and fine-tuned and remain frozen afterwards. Multiple attention heads are used in transformer-based large language models.}}
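For illustration, a minimal NumPy sketch of scaled dot-product self-attention, in which each token's "soft" weights are a softmax over its similarity to every other token (a simplified single-head version, not the implementation of any particular library):
<syntaxhighlight lang="python">
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query row attends to all key rows via softmax-normalised scores
    ("soft" weights), returning a weighted mix of the value rows."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ V

# Self-attention over three token embeddings of dimension 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(X, X, X).shape)  # (3, 4)
</syntaxhighlight>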
{{term|attributional calculus}}
{{defn|A logic and representation system defined by Ryszard S. Michalski. It combines elements of predicate logic, propositional calculus, and multi-valued logic. Attributional calculus provides a formal language for natural induction, an inductive learning process whose results are in forms natural to people.}}
{{anchor|augmented reality}}{{term|augmented reality (AR)}}
{{Main|Augmented reality}}
{{defn|An interactive experience of a real-world environment where the objects that reside in the real world are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.{{cite journal | last1=Cipresso | first1=Pietro | last2=Giglioli | first2=Irene Alice Chicchi | last3=Raya | first3=Mariano Alcañiz | last4=Riva | first4=Giuseppe | title=The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature | journal=Frontiers in Psychology | volume=9 | date=2011-12-07 | pmid=30459681 | doi=10.3389/fpsyg.2018.02086 | page=2086| pmc=6232426 | doi-access=free }}}}
{{anchor|variational autoencoder}}{{term|autoencoder}}
{{defn|A type of {{gli|artificial neural network}} used to learn {{gli|feature learning|efficient codings}} of unlabeled data ({{gli|unsupervised learning}}). A common implementation is the variational autoencoder (VAE).}}
{{term|automata theory}}
{{defn|The study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science).}}
{{term|automated machine learning (AutoML)}}
{{defn|A field of {{gli|machine learning}} (ML) which aims to automatically configure an ML system to maximize its performance (e.g., {{gli|classification}} accuracy).}}
{{term|automated planning and scheduling}}
{{ghat|Also simply AI planning.}}
{{defn|A branch of {{gli|artificial intelligence}} that concerns the realization of strategies or action sequences, typically for execution by {{gli|intelligent agent|intelligent agents}}, autonomous robots and unmanned vehicles. Unlike classical control and {{gli|classification}} problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.{{Citation |last1=Ghallab |first1=Malik |title=Automated Planning: Theory and Practice |url=https://laas.fr/planning/ |year=2004 |publisher=Morgan Kaufmann |isbn=978-1-55860-856-6 |last2=Nau |first2=Dana S. |last3=Traverso |first3=Paolo}}}}
{{term|automated reasoning}}
{{defn|An area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of {{gli|artificial intelligence}}, it also has connections with theoretical computer science, and even philosophy.}}
{{anchor|autonomic computing}}{{term|autonomic computing (AC)}}
{{defn|The self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity from operators and users. Initiated by IBM in 2001, this initiative ultimately aimed to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.{{Citation |last1=Kephart |first1=J.O. |title=The vision of autonomic computing |journal= Computer|volume=36 |pages=41–52 |year=2003 |citeseerx=10.1.1.70.613 |doi=10.1109/MC.2003.1160055 |last2=Chess |first2=D.M.}}}}
{{term|autonomous car}}
{{ghat|Also self-driving car, robot car, and driverless car.}}
{{defn|A vehicle that is capable of sensing its environment and moving with little or no human input.{{Cite conference |last1=Gehrig |first1=Stefan K. |last2=Stein |first2=Fridtjof J. |year=1999 |title=Dead reckoning and cartography using stereo vision for an automated car |conference=IEEE/RSJ International Conference on Intelligent Robots and Systems |location=Kyongju |volume=3 |pages=1507–1512 |doi=10.1109/IROS.1999.811692 |isbn=0-7803-5184-3}}{{Cite news |url=https://reuters.com/article/us-autos-selfdriving-uber-idUSKBN1GV296 |title=Self-driving Uber car kills Arizona woman crossing street |date=20 March 2018 |work=Reuters}}{{Cite journal |last1=Thrun |first1=Sebastian |year=2010 |title=Toward Robotic Cars |journal=Communications of the ACM |volume=53 |issue=4 |pages=99–106 |doi=10.1145/1721654.1721679|s2cid=207177792 }}}}
{{term|autonomous robot}}
{{defn|A robot that performs behaviors or tasks with a high degree of autonomy. Autonomous robotics is usually considered to be a subfield of {{gli|artificial intelligence}}, robotics, and information engineering.{{Cite web |url=https://robots.ox.ac.uk/ |title=Information Engineering Main/Home Page |publisher=University of Oxford |access-date=2018-10-03 |archive-date=3 July 2022 |archive-url=https://web.archive.org/web/20220703164507/http://arno@robots.ox.ac.uk/ |url-status=dead }}}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
== B ==
{{glossary}}
{{term|backpropagation}}
{{defn|A method used in {{gli|artificial neural network|artificial neural networks}} to calculate a gradient that is needed in the calculation of the weights to be used in the network.Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016) Deep Learning. MIT Press. p. 196. {{ISBN|9780262035613}} Backpropagation is shorthand for "the backward propagation of errors", since an error is computed at the output and distributed backwards throughout the network's layers. It is commonly used to train {{gli|deep learning|deep neural networks}},{{Cite journal |last1=Nielsen |first1=Michael A. |year=2015 |title=Chapter 6 |url=https://neuralnetworksanddeeplearning.com/chap6.html |journal=Neural Networks and Deep Learning |access-date=5 July 2022 |archive-date=8 August 2022 |archive-url=https://web.archive.org/web/20220808100458/https://neuralnetworksanddeeplearning.com/chap6.html |url-status=dead }} a term referring to neural networks with more than one {{gli|hidden layer}}.{{Cite web |url=https://ufldl.stanford.edu/wiki/index.php/Deep_Networks:_Overview |title=Deep Networks: Overview – Ufldl |website=ufldl.stanford.edu |access-date=2017-08-04 |archive-date=16 March 2022 |archive-url=https://web.archive.org/web/20220316074444/http://ufldl.stanford.edu/wiki/index.php/Deep_Networks:_Overview |url-status=dead }}}}
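For illustration, a minimal NumPy sketch of backpropagation in a tiny one-hidden-layer network trained on XOR; the layer sizes, learning rate, and iteration count are arbitrary choices for the example:
<syntaxhighlight lang="python">
import numpy as np

# A tiny 2-4-1 network trained on XOR; the backward pass applies the chain
# rule layer by layer to turn the output error into weight gradients.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)    # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)     # chain rule through the hidden layer
    # Gradient-descent updates.
    W2 -= 1.0 * h.T @ d_out
    b2 -= 1.0 * d_out.sum(axis=0, keepdims=True)
    W1 -= 1.0 * X.T @ d_h
    b1 -= 1.0 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically close to [[0], [1], [1], [0]] after training
</syntaxhighlight>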
{{anchor|backpropagation through structure}}{{term|backpropagation through structure (BPTS)}}
{{defn|A gradient-based technique for training {{gli|recurrent neural network|recurrent neural networks}}, proposed in a 1996 paper written by Christoph Goller and Andreas Küchler.{{cite book|first1=Christoph|last1=Goller|first2=Andreas|last2=Küchler|title=Proceedings of International Conference on Neural Networks (ICNN'96)|s2cid=6536466|chapter=Learning Task-Dependent Distributed Representations by Backpropagation Through Structure|citeseerx = 10.1.1.49.1968|year=1996|volume=1|pages=347–352|doi=10.1109/ICNN.1996.548916|isbn=0-7803-3210-5}}}}
{{anchor|backpropagation through time}}{{term|backpropagation through time (BPTT)}}
{{defn|A gradient-based technique for training certain types of {{gli|recurrent neural network|recurrent neural networks}}, such as Elman networks. The algorithm was independently derived by numerous researchers.{{Cite book |last1=Mozer |first1=M. C. |title=Backpropagation: Theory, architectures, and applications |publisher=Hillsdale, NJ: Lawrence Erlbaum Associates |year=1995 |editor-last=Chauvin |editor-first=Y. |pages=137–169 |language=en |chapter=A Focused Backpropagation Algorithm for Temporal Pattern Recognition |access-date=2017-08-21 |editor-last2=Rumelhart |editor-first2=D. |chapter-url=https://www.researchgate.net/publication/243781476}}{{Cite tech report |title=The utility driven dynamic error propagation network |last=Robinson |first=A. J. |last2=Fallside |first2=F. |name-list-style=amp |institution=Cambridge University, Engineering Department |number=CUED/F-INFENG/TR.1 |year=1987 |url=https://bibsonomy.org/bibtex/269a88ecbac9a51cbf0b4be189c412820/idsia}}{{Cite journal |last1=Werbos |first1=Paul J. |year=1988 |title=Generalization of backpropagation with application to a recurrent gas market model |url=https://zenodo.org/record/1258627 |journal=Neural Networks |volume=1 |issue=4 |pages=339–356 |doi=10.1016/0893-6080(88)90007-x}}}}
{{term|backward chaining}}
{{ghat|Also backward reasoning.}}
{{defn|An inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other {{gli|artificial intelligence}} applications.{{Cite book |last1=Feigenbaum |first1=Edward |url=https://archive.org/details/riseofexpertco00feig |title=The Rise of the Expert Company |publisher=Times Books |year=1988 |isbn=978-0-8129-1731-4 |page=[https://archive.org/details/riseofexpertco00feig/page/317 317] |url-access=registration}}}}
{{term|bag-of-words model}}
{{defn|A simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for {{gli|bag-of-words model in computer vision|computer vision}}.{{Cite journal |last1=Sivic |first1=Josef |date=April 2009 |title=Efficient visual search of videos cast as text retrieval |url=https://di.ens.fr/~josef/publications/sivic09a.pdf |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=31 |issue=4 |pages=591–605 |citeseerx=10.1.1.174.6841 |doi=10.1109/TPAMI.2008.111 |pmid=19229077 |s2cid=9899337 |access-date=5 July 2022 |archive-date=31 March 2022 |archive-url=https://web.archive.org/web/20220331011840/https://www.di.ens.fr/~josef/publications/sivic09a.pdf |url-status=dead }} The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a {{gli|feature}} for training a {{gli|classification|classifier}}.McTear et al 2016, p. 167.}}
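For illustration, a minimal Python sketch turning short documents into bag-of-words count vectors over a shared vocabulary (the helper name <code>bag_of_words</code> is illustrative):
<syntaxhighlight lang="python">
from collections import Counter

def bag_of_words(text, vocabulary):
    """Represent text as word counts over a fixed vocabulary, ignoring
    grammar and word order but keeping multiplicity."""
    counts = Counter(text.split())
    return [counts[word] for word in vocabulary]

docs = ["the cat sat on the mat", "the dog sat"]
vocab = sorted({word for doc in docs for word in doc.split()})
print(vocab)                                       # shared vocabulary
print([bag_of_words(doc, vocab) for doc in docs])  # one count vector per document
</syntaxhighlight>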
{{term|bag-of-words model in computer vision}}
{{defn|In computer vision, the bag-of-words model (BoW model) can be applied to image classification, by treating {{gli|feature|image features}} as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features.}}
{{term|batch normalization}}
{{defn|A technique for improving the performance and stability of {{gli|artificial neural network|artificial neural networks}}. It is a technique to provide any layer in a neural network with inputs that are zero mean/unit variance.{{Cite web |url=https://kratzert.github.io/2016/02/12/understanding-the-gradient-flow-through-the-batch-normalization-layer.html |title=Understanding the backward pass through Batch Normalization Layer |website=kratzert.github.io |access-date=24 April 2018}} Batch normalization was introduced in a 2015 paper.{{Cite arXiv |last1=Ioffe |first1=Sergey |last2=Szegedy |first2=Christian |year=2015 |title=Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift |class=cs.LG |eprint=1502.03167}}{{Cite web |url=https://medium.com/deeper-learning/glossary-of-deep-learning-batch-normalisation-8266dcd2fa82 |title=Glossary of Deep Learning: Batch Normalisation |date=2017-06-27 |website=medium.com |access-date=24 April 2018}} It is used to normalize the input layer by adjusting and scaling the activations.}}
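For illustration, a minimal NumPy sketch of the normalization step applied to one mini-batch (training-time statistics only; running averages and other framework details are omitted):
<syntaxhighlight lang="python">
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise each feature (column) over the mini-batch to zero mean and
    unit variance, then apply the learnable scale (gamma) and shift (beta)."""
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return gamma * x_hat + beta

batch = np.array([[1.0, 200.0], [2.0, 180.0], [3.0, 220.0]])
print(batch_norm(batch).round(3))  # each column now has roughly zero mean and unit variance
</syntaxhighlight>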
{{term|Bayesian programming}}
{{defn|A formalism and a methodology for having a technique to specify probabilistic models and solve problems when less than the necessary information is available.}}
{{term|bees algorithm}}
{{defn|A population-based {{gli|search algorithm}} which was developed by Pham, Ghanbarzadeh et al. in 2005.Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S and Zaidi M. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005. It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighborhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies.Pham, D.T., Castellani, M. (2009), [https://pic.sagepub.com/content/223/12/2919.short The Bees Algorithm – Modelling Foraging Behaviour to Solve Continuous Optimisation Problems] {{Webarchive|url=https://web.archive.org/web/20161109125453/https://pic.sagepub.com/content/223/12/2919.short |date=9 November 2016 }}. Proc. ImechE, Part C, 223(12), 2919–2938.{{Cite journal |last1=Pham |first1=D. T. |last2=Castellani |first2=M. |year=2014 |title=Benchmarking and comparison of nature-inspired population-based continuous optimisation algorithms |journal=Soft Computing |volume=18 |issue=5 |pages=871–903 |doi=10.1007/s00500-013-1104-9|s2cid=35138140 }}{{Cite journal |last1=Pham |first1=Duc Truong |last2=Castellani |first2=Marco |year=2015 |title=A comparative study of the Bees Algorithm as a tool for function optimisation |journal=Cogent Engineering |volume=2 |doi=10.1080/23311916.2015.1091540 |doi-access=free}}{{Cite journal |last1=Nasrinpour |first1=H. R. |last2=Massah Bavani |first2=A. |last3=Teshnehlab |first3=M. |year=2017 |title=Grouped Bees Algorithm: A Grouped Version of the Bees Algorithm |journal=Computers |volume=6 |issue=1 |page=5 |doi=10.3390/computers6010005 |doi-access=free}}}}
{{anchor|behavior informatics}}{{term|behavior informatics (BI)}}
{{defn|The informatics of behaviors so as to obtain behavior intelligence and behavior insights.{{Cite journal |last1=Cao |first1=Longbing |year=2010 |title=In-depth Behavior Understanding and Use: the Behavior Informatics Approach |journal=Information Science |volume=180 |issue=17 |pages=3067–3085 |doi=10.1016/j.ins.2010.03.025|arxiv=2007.15516 |s2cid=7400761 }}}}
{{term|behavior tree (BT)}}
{{defn|A mathematical model of plan execution used in computer science, robotics, control systems and video games. They describe switching between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. BTs present some similarities to hierarchical state machines with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes BTs less error-prone and very popular in the game developer community. BTs have been shown to generalize several other control architectures.Colledanchise Michele, and Ögren Petter 2016. [https://michelecolledanchise.com/tro16colledanchise.pdf How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees. In IEEE Transactions on Robotics vol.PP, no.99, pp.1–18 (2016)]{{Cite book|arxiv = 1709.00084|doi = 10.1201/9780429489105|title = Behavior Trees in Robotics and AI|year = 2018|last1 = Colledanchise|first1 = Michele|last2 = Ögren|first2 = Petter|isbn = 9780429950902|s2cid = 27470659}}}}
{{term|belief–desire–intention software model (BDI)}}
{{defn|A software model developed for programming {{gli|intelligent agent|intelligent agents}}. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.}}
{{term|bias–variance tradeoff}}
{{defn|In statistics and {{gli|machine learning}}, the bias–variance tradeoff is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa.}}
{{term|big data}}
{{defn|A term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.{{Cite journal |last1=Breur |first1=Tom |date=July 2016 |title=Statistical Power Analysis and the contemporary "crisis" in social sciences |journal=Journal of Marketing Analytics |volume=4 |issue=2–3 |pages=61–65 |doi=10.1057/s41270-016-0001-3 |issn=2050-3318 |doi-access=free}}}}
{{term|Big O notation}}
{{defn|A mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann,{{Cite book |last1=Bachmann |first1=Paul |url=https://archive.org/stream/dieanalytischeza00bachuoft#page/402/mode/2up |title=Analytische Zahlentheorie |date=1894 |publisher=Teubner |volume=2 |location=Leipzig |language=de |trans-title=Analytic Number Theory |author-link=Paul Bachmann}} Edmund Landau,{{Cite book |last1=Landau |first1=Edmund |url=https://archive.org/details/handbuchderlehre01landuoft |title=Handbuch der Lehre von der Verteilung der Primzahlen |date=1909 |publisher=B. G. Teubner |location=Leipzig |page=883 |language=de |trans-title=Handbook on the theory of the distribution of the primes |author-link=Edmund Landau}} and others, collectively called Bachmann–Landau notation or asymptotic notation.}}
{{term|binary tree}}
{{defn|A tree data structure in which each node has at most two children, which are referred to as the {{visible anchor|left child}} and the {{visible anchor|right child}}. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set.{{Cite book |editor-last=Garnier |editor-first=Rowan |url=https://books.google.com/books?id=WnkZSSc4IkoC&pg=PA620 |title=Discrete Mathematics: Proofs, Structures and Applications, Third Edition |last1=John |first1=Taylor |publisher=CRC Press |year=2009 |isbn=978-1-4398-1280-8 |page=620}} Some authors allow the binary tree to be the empty set as well.{{Cite book |last1=Skiena |first1=Steven S |url=https://books.google.com/books?id=7XUSn0IKQEgC&pg=PA77 |title=The Algorithm Design Manual |publisher=Springer Science & Business Media |year=2009 |isbn=978-1-84800-070-4 |page=77}}}}
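For illustration, a minimal Python sketch of a binary-tree node and a recursive in-order traversal:
<syntaxhighlight lang="python">
class Node:
    """A binary-tree node holding a value and at most two children."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def in_order(node):
    """Recursive in-order traversal: left subtree, then node, then right subtree."""
    if node is None:                 # the empty tree is the base case
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

root = Node(2, Node(1), Node(3))
print(in_order(root))  # [1, 2, 3]
</syntaxhighlight>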
{{term|blackboard system}}
{{defn|An {{gli|artificial intelligence}} approach based on the blackboard architectural model,{{Cite journal |last1=Erman |first1=L. D. |last2=Hayes-Roth |first2=F. |last3=Lesser |first3=V. R. |last4=Reddy |first4=D. R. |year=1980 |title=The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty |journal=ACM Computing Surveys |volume=12 |issue=2 |pages=213 |doi=10.1145/356810.356816|s2cid=118556 }} {{Cite journal |last1=Corkill |first1=Daniel D. |date=September 1991 |title=Blackboard Systems |url=https://bbtech.com/papers/ai-expert.pdf |journal=AI Expert |volume=6 |issue=9 |pages=40–47 |access-date=5 July 2022 |archive-date=16 April 2012 |archive-url=https://web.archive.org/web/20120416034609/http://www.bbtech.com/papers/ai-expert.pdf |url-status=dead }}* {{Cite tech report |first=H. Yenny |last=Nii |title=Blackboard Systems |number=STAN-CS-86-1123 |institution=Department of Computer Science, Stanford University |year=1986 |url=https://i.stanford.edu/pub/cstr/reports/cs/tr/86/1123/CS-TR-86-1123.pdf |access-date=2013-04-12}}{{Cite journal |last1=Hayes-Roth |first1=B.|author1-link=Barbara Hayes-Roth |year=1985 |title=A blackboard architecture for control |journal=Artificial Intelligence |volume=26 |issue=3 |pages=251–321 |doi=10.1016/0004-3702(85)90063-3}} where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem.}}
{{term|Boltzmann machine}}
{{ghat|Also stochastic Hopfield network with hidden units.}}
{{defn|A type of stochastic recurrent neural network and Markov random field.{{Cite journal |last1=Hinton |first1=Geoffrey E. |date=2007-05-24 |title=Boltzmann machine |journal=Scholarpedia |volume=2 |issue=5 |pages=1668 |bibcode=2007SchpJ...2.1668H |doi=10.4249/scholarpedia.1668 |issn=1941-6016 |doi-access=free}} Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks.}}
{{term|Boolean satisfiability problem}}
{{ghat|Also propositional satisfiability problem; abbreviated SATISFIABILITY or SAT.}}
{{defn|The problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a {{=}} TRUE and b {{=}} FALSE, which make (a AND NOT b) {{=}} TRUE. In contrast, "a AND NOT a" is unsatisfiable.}}
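For illustration, a minimal Python sketch that decides satisfiability by brute force, trying every truth assignment (practical SAT solvers are far more sophisticated; this only illustrates the definition):
<syntaxhighlight lang="python">
from itertools import product

def is_satisfiable(formula, variables):
    """Decide satisfiability by trying every TRUE/FALSE assignment.

    formula: a function taking a dict {variable: bool} and returning a bool.
    """
    return any(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

# "a AND NOT b" is satisfiable; "a AND NOT a" is not.
print(is_satisfiable(lambda v: v["a"] and not v["b"], ["a", "b"]))  # True
print(is_satisfiable(lambda v: v["a"] and not v["a"], ["a"]))       # False
</syntaxhighlight>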
{{anchor|boosting}}{{term|boosting}}
{{defn|A {{gli|machine learning}} {{gli|ensemble learning|ensemble}} metaheuristic for primarily reducing {{gli|bias–variance tradeoff|bias (as opposed to variance)}}, by training models sequentially, each one correcting the errors of its predecessor.}}
{{anchor|bagging}}{{anchor|bootstrapping}}{{term|bootstrap aggregating}}
{{ghat|Also bagging or bootstrapping.}}
{{defn|A {{gli|machine learning}} {{gli|ensemble learning|ensemble}} metaheuristic for primarily reducing {{gli|bias–variance tradeoff|variance (as opposed to bias)}}, by training multiple models independently and averaging their predictions.}}
{{term|brain technology}}
{{ghat|Also self-learning know-how system.}}
{{defn|A technology that employs the latest findings in neuroscience. The term was first introduced by the Artificial Intelligence Laboratory in Zurich, Switzerland, in the context of the ROBOY project.[https://nzz.ch/aktuell/zuerich/uebersicht/die-zangengeburt-eines-designierten-stammvaters-1.18029566 NZZ- Die Zangengeburt eines möglichen Stammvaters]. Website Neue Zürcher Zeitung. Seen 16. August 2013. Brain Technology can be employed in robots,[https://roboy.org/mediaundnews.html Official Homepage Roboy] {{Webarchive|url=https://web.archive.org/web/20130803052035/https://roboy.org/mediaundnews.html |date=2013-08-03 }}. Website Roboy. Seen 16. August 2013. know-how management systems[https://starmind.com/ Official Homepage Starmind]. Website Starmind. Seen 16. August 2013. and any other application with self-learning capabilities. In particular, Brain Technology applications allow the visualization of the underlying learning architecture often coined as "know-how maps".}}
{{term|branching factor}}
{{defn|In computing, tree data structures, and game theory, the number of children at each {{gli|node}}, the outdegree. If this value is not uniform, an average branching factor can be calculated.}}
{{term|brute-force search}}
{{ghat|Also exhaustive search or generate and test.}}
{{defn|A very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.}}
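For illustration, a minimal Python sketch of brute-force search on a small subset-sum instance, enumerating every candidate subset and checking it against the problem statement:
<syntaxhighlight lang="python">
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Systematically enumerate every subset and check whether it satisfies
    the problem statement (does it sum to the target?)."""
    for size in range(len(numbers) + 1):
        for candidate in combinations(numbers, size):
            if sum(candidate) == target:
                return candidate
    return None

print(subset_sum_brute_force([3, 9, 8, 4, 5, 7], 15))  # (8, 7)
</syntaxhighlight>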
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
== C ==
{{glossary}}
{{term|capsule neural network (CapsNet)}}
{{defn|A {{gli|machine learning}} system that is a type of {{gli|artificial neural network}} (ANN) that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.{{Cite arXiv |eprint=1710.09829 |class=cs.CV |first1=Sara |last1=Sabour |first2=Nicholas |last2=Frosst |title=Dynamic Routing Between Capsules |date=2017-10-26 |last3=Hinton |first3=Geoffrey E.}}}}
{{anchor|case-based reasoning}}{{term|case-based reasoning (CBR)}}
{{defn|Broadly construed, the process of solving new problems based on the solutions of similar past problems.}}
{{term|chatbot}}
{{ghat|Also smartbot, talkbot, chatterbot, bot, IM bot, interactive agent, conversational interface, or artificial conversational entity.}}
{{defn|A computer program or an {{gli|artificial intelligence}} which conducts a conversation via auditory or textual methods.{{Cite web |url=https://searchdomino.techtarget.com/sDefinition/0,,sid4_gci935566,00.html |title=What is a chatbot? |website=techtarget.com |access-date=30 January 2017}}}}
{{term|cloud robotics}}
{{defn|A field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centred on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centers, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities whilst reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots with an intelligent "brain" in the cloud. The "brain" consists of a data center, a knowledge base, task planners, {{gli|deep learning}}, information processing, environment models, communication support, etc.{{Cite journal |last1=Civera |first1=Javier |last2=Ciocarlie |first2=Matei |last3=Aydemir |first3=Alper |last4=Bekris |first4=Kostas |last5=Sarma |first5=Sanjay |s2cid=16080778 |year=2015 |title=Guest Editorial Special Issue on Cloud Robotics and Automation |journal=IEEE Transactions on Automation Science and Engineering |volume=12 |issue=2 |pages=396–397 |doi=10.1109/TASE.2015.2409511|doi-access=free }}{{Cite web |url=https://roboearth.org/ |title=Robo Earth - Tech News |website=Robo Earth}}{{Cite web |url=https://goldberg.berkeley.edu/cloud-robotics |title=Cloud Robotics and Automation |last1=Goldberg |first1=Ken}}{{Cite web |url=https://sites.google.com/site/ruijiaoli/blogs/page |title=Cloud Robotics-Enable cloud computing for robots |last1=Li |first1=R |access-date=7 December 2014}}}}
{{term|cluster analysis}}
{{ghat|Also clustering.}}
{{defn|The task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including {{gli|machine learning}}, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.}}
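For illustration, a minimal NumPy sketch of k-means, one common clustering method (a simplified version with a fixed iteration count and no empty-cluster handling):
<syntaxhighlight lang="python">
import numpy as np

def k_means(points, k, iterations=20, seed=0):
    """Alternate two steps: assign each point to its nearest centroid, then
    move each centroid to the mean of the points assigned to it."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)   # nearest-centroid label for each point
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

points = np.array([[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]], dtype=float)
labels, centroids = k_means(points, k=2)
print(labels)  # two groups: points near the origin vs. points near (9, 9)
</syntaxhighlight>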
{{term|Cobweb}}
{{defn|An incremental system for hierarchical conceptual clustering. COBWEB was invented by Professor Douglas H. Fisher, currently at Vanderbilt University.{{Cite journal |last1=Fisher |first1=Douglas |year=1987 |title=Knowledge acquisition via incremental conceptual clustering |journal=Machine Learning |volume=2 |issue=2 |pages=139–172 |doi=10.1007/BF00114265 |doi-access=free}}{{Cite conference |last1=Fisher |first1=Douglas H. |date=July 1987 |title=Improving inference through conceptual clustering |conference=AAAI Conference |location=Seattle Washington |pages=461–465 |book-title=Proceedings of the 1987 AAAI Conferences}} COBWEB incrementally organizes observations into a classification tree. Each node in a classification tree represents a class (concept) and is labeled by a probabilistic concept that summarizes the attribute-value distributions of objects classified under the node. This classification tree can be used to predict missing attributes or the class of a new object.{{Cite book |last1=Iba |first1=William |last2=Langley |first2=Pat |title=Formal approaches in categorization |date=2011-01-27 |publisher=Cambridge University Press |isbn=9780521190480 |editor-last=Pothos |editor-first=Emmanuel M. |editor-last2=Wills |editor-first2=Andy J. |location=Cambridge |pages=253–273 |chapter=Cobweb models of categorization and probabilistic concept formation}}}}
{{term|cognitive architecture}}
{{defn|The Institute of Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments."Refer to the ICT website: https://cogarch.ict.usc.edu/}}
{{term|cognitive computing}}
{{defn|In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain{{Cite web |url=https://labs.hpe.com/research/next-next/brain/ |title=Hewlett Packard Labs |access-date=5 July 2022 |archive-date=30 October 2016 |archive-url=https://web.archive.org/web/20161030143900/http://www.labs.hpe.com/research/next-next/brain/ |url-status=dead }}Terdiman, Daniel (2014) .IBM's TrueNorth processor mimics the human brain.https://cnet.com/news/ibms-truenorth-processor-mimics-the-human-brain/Knight, Shawn (2011). [https://techspot.com/news/45138-ibm-unveils-cognitive-computing-chips-that-mimic-human-brain.html IBM unveils cognitive computing chips that mimic human brain] TechSpot: August 18, 2011, 12:00 PMHamill, Jasper (2013). [https://theregister.co.uk/2013/08/08/ibm_unveils_computer_architecture_based_upon_your_brain/ Cognitive computing: IBM unveils software for its brain-like SyNAPSE chips] The Register: August 8, 2013{{Cite journal |last1=Denning. |first1=P.J. |year=2014 |title=Surfing Toward the Future |journal=Communications of the ACM |volume=57 |issue=3 |pages=26–29 |doi=10.1145/2566967|s2cid=20681733 }}{{Cite thesis |last1=Ludwig |first1=Lars |year=2013 |title=Extended Artificial Memory: Toward an integral cognitive theory of memory and technology |url=https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3662 |format=pdf |publisher=Technical University of Kaiserslautern |access-date=2017-02-07}} and helps to improve human decision-making.{{Cite web |url=https://hpl.hp.com/research/ |title=Research at HP Labs |access-date=5 July 2022 |archive-date=7 March 2022 |archive-url=https://web.archive.org/web/20220307214522/https://www.hpl.hp.com/research/ |url-status=dead }} In this sense, CC is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimulus.}}
{{term|cognitive science}}
{{defn|The interdisciplinary scientific study of the mind and its processes.Cognitive science is an interdisciplinary field of researchers from Linguistics, psychology, neuroscience, philosophy, computer science, and anthropology that seek to understand the mind. [https://aft.org/newspubs/periodicals/ae/summer2002/willingham.cfm How We Learn: Ask the Cognitive Scientist]}}
{{term|combinatorial optimization}}
{{defn|In operations research, applied mathematics, and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects.Schrijver, Alexander (February 1, 2006). A Course in Combinatorial Optimization (PDF), page 1.}}
{{term|committee machine}}
{{defn|A type of {{gli|artificial neural network}} using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response.HAYKIN, S. Neural Networks – A Comprehensive Foundation. Second edition. Pearson Prentice Hall: 1999. The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare ensembles of classifiers.}}
{{term|commonsense knowledge}}
{{defn|In {{gli|artificial intelligence}} research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. The first AI program to address common sense knowledge was Advice Taker in 1959 by John McCarthy.{{Cite web |url=https://www-formal.stanford.edu/jmc/mcc59/mcc59.html |title=PROGRAMS WITH COMMON SENSE |website=www-formal.stanford.edu |access-date=2018-04-11}}}}
{{term|commonsense reasoning}}
{{defn|A branch of artificial intelligence concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day.{{Cite magazine |last1=Davis |first1=Ernest |last2=Marcus |first2=Gary |year=2015 |title=Commonsense reasoning |url=https://cacm.acm.org/magazines/2015/9/191169-commonsense-reasoning-and-commonsense-knowledge-in-artificial-intelligence/fulltext |magazine=Communications of the ACM |volume=58 |pages=92–103 |doi=10.1145/2701413 |number=9}}}}
{{term|computational chemistry}}
{{defn|A branch of chemistry that uses computer simulation to assist in solving chemical problems.}}
{{term|computational complexity theory}}
{{defn|Focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer, and is solvable by mechanical application of mathematical steps, such as an algorithm.}}
{{term|computational creativity}}
{{ghat|Also artificial creativity, mechanical creativity, creative computing, or creative computation.}}
{{defn|A multidisciplinary endeavour that includes the fields of {{gli|artificial intelligence}}, cognitive psychology, philosophy, and the arts.}}
{{term|computational cybernetics}}
{{defn|The integration of cybernetics and {{gli|computational intelligence}} techniques.}}
{{term|computational humor}}
{{defn|A branch of computational linguistics and {{gli|artificial intelligence}} which uses computers in humor research.Hulstijn, J, and Nijholt, A. (eds.). Proceedings of the International Workshop on Computational Humor. Number 12 in Twente Workshops on Language Technology, Enschede, Netherlands. University of Twente, 1996.}}
{{anchor|computational intelligence}}{{term|computational intelligence (CI)}}
{{defn|Usually refers to the ability of a computer to learn a specific task from data or experimental observation.}}
{{term|computational learning theory}}
{{defn|In computer science, computational learning theory (or just learning theory) is a subfield of {{gli|artificial intelligence}} devoted to studying the design and analysis of {{gli|machine learning}} algorithms.{{Cite web |url=https://learningtheory.org/ |title=ACL – Association for Computational Learning}}}}
{{term|computational linguistics}}
{{defn|An interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions.}}
{{term|computational mathematics}}
{{defn|The mathematical research in areas of science where computing plays an essential role.}}
{{term|computational neuroscience}}
{{ghat|Also theoretical neuroscience or mathematical neuroscience.}}
{{defn|A branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology, and cognitive abilities of the nervous system.Trappenberg, Thomas P. (2002). Fundamentals of Computational Neuroscience. United States: Oxford University Press Inc. p. 1. {{ISBN|978-0-19-851582-1}}.What is computational neuroscience? Patricia S. Churchland, Christof Koch, Terrence J. Sejnowski. in Computational Neuroscience pp.46–55. Edited by Eric L. Schwartz. 1993. MIT Press {{Cite web |title=Computational Neuroscience Edited by Eric L. Schwartz |url=https://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=7195 |url-status=dead |archive-url=https://web.archive.org/web/20110604124206/https://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=7195 |archive-date=2011-06-04 |access-date=2009-06-11}}{{Cite web |url=https://mitpress.mit.edu/books/theoretical-neuroscience |title=Theoretical Neuroscience |website=The MIT Press |url-status=dead |archive-url=https://web.archive.org/web/20180531150713/https://mitpress.mit.edu/books/theoretical-neuroscience |archive-date=31 May 2018 |access-date=2018-05-24}}{{Cite book |last1=Gerstner |first1=W. |title=Neuronal Dynamics |last2=Kistler, W. |last3=Naud, R. |last4=Paninski, L. |publisher=Cambridge University Press |year=2014 |isbn=9781107447615 |location=Cambridge, UK}}}}
{{term|computational number theory}}
{{ghat|Also algorithmic number theory.}}
{{defn|The study of {{gli|algorithm|algorithms}} for performing number theoretic computations.}}
{{term|computational problem}}
{{defn|In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might be able to solve.}}
{{term|computational statistics}}
{{ghat|Also statistical computing.}}
{{defn|The interface between statistics and {{gli|computer science}}.}}
{{anchor|computer-automated design}}{{term|computer-automated design (CAutoD)}}
{{defn|Design automation usually refers to electronic design automation, or Design Automation which is a Product Configurator. Extending Computer-Aided Design (CAD), automated design and computer-automated design{{Cite journal |last1=Kamentsky |first1=L.A. |last2=Liu |first2=C.-N. |year=1963 |title=Computer-Automated Design of Multifont Print Recognition Logic |url=https://domino.research.ibm.com/tchjr/journalindex.nsf/0/a5cb0910ea78194885256bfa00683e5a?OpenDocument |journal=IBM Journal of Research and Development |volume=7 |issue=1 |page=2 |doi=10.1147/rd.71.0002 |access-date=5 July 2022 |archive-date=3 March 2016 |archive-url=https://web.archive.org/web/20160303202147/http://domino.research.ibm.com/tchjr/journalindex.nsf/0/a5cb0910ea78194885256bfa00683e5a?OpenDocument |url-status=dead }}{{Cite journal |last1=Brncick |first1=M |year=2000 |title=Computer automated design and computer automated manufacture |journal=Phys Med Rehabil Clin N Am |volume=11 |issue=3 |pages=701–13 |doi=10.1016/s1047-9651(18)30806-4 |pmid=10989487}}{{Cite journal |last1=Li |first1=Y. |display-authors=etal |year=2004 |title=CAutoCSD - Evolutionary search and optimisation enabled computer automated control system design |url=https://link.springer.com/article/10.1007%2Fs11633-004-0076-8 |journal=International Journal of Automation and Computing |volume=1 |issue=1 |pages=76–88 |doi=10.1007/s11633-004-0076-8|s2cid=55417415 }} are concerned with a broader range of applications, such as automotive engineering, civil engineering,{{Cite journal |last1=Kramer |first1=GJE |last2=Grierson |first2=DE |title=Computer automated design of structures under dynamic loads |journal=Computers & Structures |year=1989 |volume=32 |issue=2 |pages=313–325 |doi=10.1016/0045-7949(89)90043-6}}{{Cite journal |last1=Moharrami |first1=H |last2=Grierson |first2=DE |title=Computer-Automated Design of Reinforced Concrete Frameworks |journal=Journal of Structural Engineering |year=1993 |volume=119 |issue=7 |pages=2036–2058 |doi=10.1061/(asce)0733-9445(1993)119:7(2036)}}{{Cite journal |last1=Xu |first1=L |last2=Grierson |first2=DE |title=Computer-Automated Design of Semirigid Steel Frameworks |journal=Journal of Structural Engineering |year=1993 |volume=119 |issue=6 |pages=1740–1760 |doi=10.1061/(asce)0733-9445(1993)119:6(1740)}}Barsan, GM; Dinsoreanu, M, (1997). 
Computer-automated design based on structural performance criteria, Mouchel Centenary Conference on Innovation in Civil and Structural Engineering, Aug 19-21, Cambridge England, Innovation in Civil and Structural Engineering, 167–172 composite material design, control engineering,{{Cite journal |last1=Li |first1=Yun |year=1996 |title=Genetic algorithm automated approach to the design of sliding mode control systems |journal=International Journal of Control |volume=63 |issue=4 |pages=721–739 |doi=10.1080/00207179608921865}} dynamic system identification and optimization,{{Cite journal |last1=Li |first1=Yun |last2=Chwee Kim |first2=Ng |last3=Chen Kay |first3=Tan |year=1995 |title=Automation of Linear and Nonlinear Control Systems Design by Evolutionary Computation |url=https://sciencedirect.com/science/article/pii/S1474667017451585/pdf?md5=b7aedf998282848dfcf44a1ea2f003dd&pid=1-s2.0-S1474667017451585-main.pdf |journal=IFAC Proceedings Volumes |volume=28 |issue=16 |pages=85–90 |doi=10.1016/S1474-6670(17)45158-5}} financial systems, industrial equipment, {{gli|mechatronics|mechatronic}} systems, steel construction,Barsan, GM, (1995) Computer-automated design of semirigid steel frameworks according to EUROCODE-3, Nordic Steel Construction Conference 95, JUN 19–21, 787–794 structural optimisation,{{Cite journal |last1=Gray |first1=Gary J. |last2=Murray-Smith |first2=David J. |last3=Li |first3=Yun |display-authors=etal |year=1998 |title=Nonlinear model structure identification using genetic programming |url=https://sciencedirect.com/science/article/pii/S0967066198000872/pdf?md5=5ad89d3029a3ebad83086271f3c78f75&pid=1-s2.0-S0967066198000872-main.pdf |journal=Control Engineering Practice |volume=6 |issue=11 |pages=1341–1352 |doi=10.1016/s0967-0661(98)00087-2}} and the invention of novel systems. More recently, traditional CAD simulation is seen to be transformed to CAutoD by biologically inspired {{gli|machine learning}},{{cite journal | url=https://ieeexplore.ieee.org/document/6052374 | doi=10.1109/MCI.2011.942584 | title=Evolutionary Computation Meets Machine Learning: A Survey | year=2011 | last1=Zhang | first1=Jun | last2=Zhan | first2=Zhi-hui | last3=Lin | first3=Ying | last4=Chen | first4=Ni | last5=Gong | first5=Yue-Jiao | last6=Zhong | first6=Jing-hui | last7=Chung | first7=Henry S.H. | last8=Li | first8=Yun | last9=Shi | first9=Yu-hui | journal=IEEE Computational Intelligence Magazine | volume=6 | issue=4 | pages=68–75 | s2cid=6760276 }} including heuristic search techniques such as evolutionary computation,[https://ti.arc.nasa.gov/m/pub-archive/768h/0768%20(Hornby).pdf Gregory S. Hornby (2003). Generative Representations for Computer-Automated Design Systems, NASA Ames Research Center, Mail Stop 269–3, Moffett Field, CA 94035-1000][https://msu.edu/~jclune/webfiles/publications/2011-CluneLipson-Evolving3DObjectsWithCPPNs-ECAL.pdf J. Clune and H. Lipson (2011). Evolving three-dimensional objects with a generative encoding inspired by developmental biology. Proceedings of the European Conference on Artificial Life. 2011.] and {{gli|swarm intelligence}} algorithms.{{Cite journal |last1=Zhan |first1=Z.H. |display-authors=etal |year=2009 |title=Adaptive Particle Swarm Optimization |journal= IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)|volume=39 |issue=6 |pages=1362–1381 |doi=10.1109/tsmcb.2009.2015956 |pmid=19362911|s2cid=11191625 |url=https://eprints.gla.ac.uk/7645/1/7645.pdf }}}}
{{term|computer audition (CA)}}
{{defn|See {{gli|machine listening}}.}}
{{term|computer science}}
{{defn|The theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of {{gli|algorithm|algorithms}} that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems.{{Cite web |url=https://wordnetweb.princeton.edu/perl/webwn?s=computer%20scientist |title=WordNet Search—3.1 |publisher=Wordnetweb.princeton.edu |access-date=14 May 2012 |archive-date=14 January 2013 |archive-url=https://web.archive.org/web/20130114223028/http://wordnetweb.princeton.edu/perl/webwn?s=computer%20scientist |url-status=dead }}}}
{{term|computer vision}}
{{defn|An interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.Dana H. Ballard; Christopher M. Brown (1982). Computer Vision. Prentice Hall. {{ISBN|0-13-165316-4}}.Huang, T. (1996-11-19). Vandoni, Carlo, E, ed. Computer Vision : Evolution And Promise (PDF). 19th CERN School of Computing. Geneva: CERN. pp. 21–25. {{doi|10.5170/CERN-1996-008.21}}. {{ISBN|978-9290830955}}.Milan Sonka; Vaclav Hlavac; Roger Boyle (2008). Image Processing, Analysis, and Machine Vision. Thomson. {{ISBN|0-495-08252-X}}.}}
{{term|concept drift}}
{{defn|In {{gli|predictive analytics}} and {{gli|machine learning}}, concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes.}}
{{term|connectionism}}
{{defn|An approach in the field of cognitive science that hopes to explain mental phenomena using {{gli|artificial neural network|artificial neural networks}}.{{Cite book |last1=Garson |first1=James |url=https://plato.stanford.edu/archives/fall2018/entries/connectionism/ |title=The Stanford Encyclopedia of Philosophy |date=27 November 2018 |publisher=Metaphysics Research Lab, Stanford University |editor-last=Zalta |editor-first=Edward N. |via=Stanford Encyclopedia of Philosophy}}}}
{{term|consistent heuristic}}
{{defn|In the study of path-finding problems in {{gli|artificial intelligence}}, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the cost of reaching that neighbor.}}
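Formally, if ''c''(''n'', ''n''′) denotes the cost of the edge from a node ''n'' to a neighbouring node ''n''′ and ''G'' is any goal node, the consistency condition is usually stated as:
<math display="block">h(n) \le c(n, n') + h(n'), \qquad h(G) = 0.</math>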
{{term|constrained conditional model (CCM)}}
{{defn|A {{gli|machine learning}} and inference framework that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints.}}
{{term|constraint logic programming}}
{{defn|A form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is {{code|2=prolog|A(X,Y) :- X+Y>0, B(X), C(Y)}}. In this clause, {{code|2=prolog|X+Y>0}} is a constraint; {{code|2=prolog|A(X,Y)}}, {{code|2=prolog|B(X)}}, and {{code|2=prolog|C(Y)}} are literals as in regular logic programming. This clause states one condition under which the statement {{code|2=prolog|A(X,Y)}} holds: {{code|2=prolog|X+Y}} is greater than zero and both {{code|2=prolog|B(X)}} and {{code|2=prolog|C(Y)}} are true.}}
{{term|constraint programming}}
{{defn|A programming paradigm wherein relations between variables are stated in the form of constraints. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found.}}
{{term|constructed language}}
{{ghat|Also conlang.}}
{{defn|A language whose phonology, grammar, and vocabulary are consciously devised, instead of having developed naturally. Constructed languages may also be referred to as artificial, planned, or invented languages.{{Cite web |url=https://eurovision.tv/page/news?id=554&_t=ishtar_for_belgium_to_belgrade |title=Ishtar for Belgium to Belgrade |publisher=European Broadcasting Union |access-date=19 May 2013}}}}
{{term|control theory}}
{{defn|A subfield of mathematics and control systems engineering that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability.}}
{{term|convolutional neural network}}
{{defn|In {{gli|deep learning}}, a convolutional neural network (CNN, or ConvNet) is a class of deep {{gli|neural network}} most commonly applied to image analysis. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing.{{Cite web |url=https://yann.lecun.com/exdb/lenet/ |title=LeNet-5, convolutional neural networks |last1=LeCun |first1=Yann |access-date=16 November 2013}} They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics.Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of annual conference of the Japan Society of Applied Physics.{{Cite journal |last1=Zhang |first1=Wei |year=1990 |title=Parallel distributed processing model with local space-invariant interconnections and its optical architecture |journal=Applied Optics |volume=29 |issue=32 |pages=4790–7 |bibcode=1990ApOpt..29.4790Z |doi=10.1364/AO.29.004790 |pmid=20577468}}}}
{{term|crossover}}
{{ghat|Also recombination.}}
{{defn|In {{gli|genetic algorithm|genetic algorithms}} and {{gli|evolutionary computation}}, a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and analogous to the crossover that happens during sexual reproduction in biological organisms. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions are typically {{gli|mutation|mutated}} before being added to the population.}}
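As an illustration, single-point crossover over two equal-length parent sequences can be sketched as follows; the function name and the example parents are illustrative rather than taken from any particular library:
<syntaxhighlight lang="python">
import random

def single_point_crossover(parent_a, parent_b, rng=random):
    """Combine two equal-length parents at a randomly chosen cut point."""
    assert len(parent_a) == len(parent_b)
    point = rng.randrange(1, len(parent_a))   # cut strictly inside the sequence
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

offspring = single_point_crossover([0, 1, 1, 0, 1], [1, 0, 0, 1, 0])
</syntaxhighlight>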
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
D
{{glossary}}
{{term|Darkforest}}
{{defn|A computer go program developed by Facebook, based on {{gli|deep learning}} techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search.{{Cite arXiv |eprint=1511.06410v1 |class=cs.LG |first1=Yuandong |last1=Tian |first2=Yan |last2=Zhu |title=Better Computer Go Player with Neural Network and Long-term Prediction |year=2015}}{{Cite web |url=https://technologyreview.com/s/544181/how-facebooks-ai-researchers-built-a-game-changing-go-engine/ |title=How Facebook's AI Researchers Built a Game-Changing Go Engine |date=December 4, 2015 |website=MIT Technology Review |access-date=2016-02-03}} The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them.{{Cite web |url=https://techtimes.com/articles/128636/20160128/facebook-ai-go-player-gets-smarter-with-neural-network-and-long-term-prediction-to-master-worlds-hardest-game.htm |title=Facebook AI Go Player Gets Smarter With Neural Network And Long-Term Prediction To Master World's Hardest Game |date=2016-01-28 |website=Tech Times |access-date=2016-04-24}} With the update, the system is known as Darkfmcts3.{{Cite web |url=https://venturebeat.com/2016/01/26/facebooks-artificially-intelligent-go-player-is-getting-smarter/ |title=Facebook's artificially intelligent Go player is getting smarter |date=2016-01-27 |website=VentureBeat |access-date=2016-04-24}}}}
{{term|Dartmouth workshop}}
{{defn|The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by manySolomonoff, R.J.The Time Scale of Artificial Intelligence; Reflections on Social Effects, Human Systems Management, Vol 5 1985, Pp 149–153Moor, J., The Dartmouth College Artificial Intelligence Conference: The Next Fifty years, AI Magazine, Vol 27, No., 4, Pp. 87–9, 2006 (though not allKline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing, October–December, 2011, IEEE Computer Society) to be the seminal event for {{gli|artificial intelligence}} as a field.}}
{{term|data augmentation}}
{{defn|In data analysis, techniques used to increase the amount of training data, for example by adding slightly modified copies of existing data or newly created synthetic data. Data augmentation helps reduce {{gli|overfitting}} when training a learning {{gli|algorithm}}.}}
{{term|data fusion}}
{{defn|The process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source.{{Cite journal |last1=Haghighat |first1=Mohammad |last2=Abdel-Mottaleb |first2=Mohamed |last3=Alhalabi |first3=Wadee |year=2016 |title=Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition |url=https://zenodo.org/record/889881 |journal=IEEE Transactions on Information Forensics and Security |volume=11 |issue=9 |pages=1984–1996 |doi=10.1109/TIFS.2016.2569061|s2cid=15624506 }}}}
{{term|data integration}}
{{defn|The process of combining data residing in different sources and providing users with a unified view of them.{{Cite conference |last1=Lenzerini |first1=Maurizio |year=2002 |title=Data Integration: A Theoretical Perspective |url=https://dis.uniroma1.it/~lenzerin/homepagine/talks/TutorialPODS02.pdf |pages=233–246 |book-title=PODS 2002 |access-date=5 July 2022 |archive-date=27 October 2021 |archive-url=https://web.archive.org/web/20211027091431/https://www.dis.uniroma1.it/~lenzerin/homepagine/talks/TutorialPODS02.pdf |url-status=dead }} This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes.{{Cite news |last1=Lane |first1=Frederick |url=https://toptechnews.com/story.xhtml?story_id=01300000E3D0&full_skip=1 |title=IDC: World Created 161 Billion Gigs of Data in 2006 |year=2006}} It has become the focus of extensive theoretical work, and numerous open problems remain unsolved.}}
{{term|data mining}}
{{defn|The process of discovering patterns in large data sets involving methods at the intersection of {{gli|machine learning}}, statistics, and database systems.}}
{{term|data science}}
{{defn|An interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured,{{Cite journal |last1=Dhar |first1=V. |year=2013 |title=Data science and prediction |url=https://cacm.acm.org/magazines/2013/12/169933-data-science-and-prediction/fulltext |journal=Communications of the ACM |volume=56 |issue=12 |pages=64–73 |doi=10.1145/2500499|s2cid=6107147 }}{{Cite web |url=https://simplystatistics.org/2013/12/12/the-key-word-in-data-science-is-not-data-it-is-science/ |title=The key word in 'Data Science' is not Data, it is Science |last1=Leek |first1=Jeff |author-link=Jeffrey T. Leek |date=2013-12-12 |publisher=Simply Statistics |access-date=11 November 2018 |archive-date=2 January 2014 |archive-url=https://web.archive.org/web/20140102194117/https://simplystatistics.org/2013/12/12/the-key-word-in-data-science-is-not-data-it-is-science/ |url-status=dead }} similar to data mining. Data science is a "concept to unify statistics, data analysis, {{gli|machine learning}}, and their related methods" in order to "understand and analyze actual phenomena" with data.{{Cite book |last1=Hayashi |first1=Chikio |chapter=What is Data Science ? Fundamental Concepts and a Heuristic Example |title=Data Science, Classification, and Related Methods |date=1998-01-01 |publisher=Springer Japan |isbn=9784431702085 |editor-last=Hayashi |editor-first=Chikio |series=Studies in Classification, Data Analysis, and Knowledge Organization |pages=40–51 |language=en |doi=10.1007/978-4-431-65950-1_3 |editor-last2=Yajima |editor-first2=Keiji |editor-last3=Bock |editor-first3=Hans-Hermann |editor-last4=Ohsumi |editor-first4=Noboru |editor-last5=Tanaka |editor-first5=Yutaka |editor-last6=Baba |editor-first6=Yasumasa |chapter-url=https://springer.com/book/9784431702085}} It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science.}}
{{term|data set}}
{{ghat|Also dataset.}}
{{defn|A collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows.}}
{{term|data warehouse (DW or DWH)}}
{{ghat|Also enterprise data warehouse (EDW).}}
{{defn|A system used for reporting and data analysis.{{Cite conference |last1=Dedić |first1=Nedim |last2=Stanier |first2=Clare |year=2016 |editor-last=Hammoudi |editor-first=Slimane |editor2-last=Maciaszek |editor2-first=Leszek |editor3-last=Missikoff |editor3-first=Michele M. Missikoff |editor4-last=Camp |editor4-first=Olivier |editor5-last=Cordeiro |editor5-first=José |title=An Evaluation of the Challenges of Multilingualism in Data Warehouse Development |url=https://eprints.staffs.ac.uk/2770/ |conference=International Conference on Enterprise Information Systems, 25–28 April 2016, Rome, Italy |publisher=SciTePress |volume=1 |pages=196–206 |doi=10.5220/0005858401960206 |isbn=978-989-758-187-8 |doi-access=free |conference-url=https://eprints.staffs.ac.uk/2770/1/ICEIS_2016_Volume_1.pdf |journal=Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016)}} DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place{{Cite web |url=https://blog.rjmetrics.com/2014/12/04/10-common-mistakes-when-building-a-data-warehouse/ |title=9 Reasons Data Warehouse Projects Fail |date=4 December 2014 |publisher=blog.rjmetrics.com |access-date=2017-04-30}}}}
{{term|Datalog}}
{{defn|A declarative {{gli|logic programming}} language that syntactically is a subset of {{gli|Prolog}}. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.{{Citation |last1=Huang |last2=Green |last3=Loo |title=SIGMOD 2011 |url=https://cs.ucdavis.edu/~green/papers/sigmod906t-huang.pdf |contribution=Datalog and Emerging applications |publisher=UC Davis |access-date=5 July 2022 |archive-date=1 July 2022 |archive-url=https://web.archive.org/web/20220701172125/https://www.cs.ucdavis.edu/~green/papers/sigmod906t-huang.pdf |url-status=dead }}.}}
{{term|decision boundary}}
{{defn|In the case of {{gli|backpropagation}}-based {{gli|artificial neural network|artificial neural networks}} or {{gli|perceptron|perceptrons}}, the type of decision boundary that the network can learn is determined by the number of {{gli|hidden layer|hidden layers}} in the network. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of '''R'''<sup>''n''</sup>, as shown by the universal approximation theorem, and thus it can have an arbitrary decision boundary.}}
{{term|decision support system (DSS)}}
{{defn|An information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance, i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.}}
{{term|decision theory}}
{{ghat|Also theory of choice.}}
{{defn|The study of the reasoning underlying an agent's choices.Steele, Katie and Stefánsson, H. Orri, "Decision Theory", The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.), URL = [https://plato.stanford.edu/archives/win2015/entries/decision-theory] Decision theory can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory which analyzes how existing, possibly irrational agents actually make decisions.}}
{{term|decision tree learning}}
{{defn|Uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and {{gli|machine learning}}.}}
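For example, a decision tree classifier can be fitted with the scikit-learn library roughly as follows; the dataset and the depth limit are arbitrary choices for this sketch:
<syntaxhighlight lang="python">
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Load a small example dataset and fit a shallow tree.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3)
tree.fit(X, y)

# Each prediction follows branch tests on feature values down to a leaf.
print(tree.predict(X[:5]))
</syntaxhighlight>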
{{term|declarative programming}}
{{defn|A programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.{{Citation |last1=Lloyd |first1=J.W. |title=Practical Advantages of Declarative Programming}}}}
{{term|deductive classifier}}
{{defn|A type of {{gli|artificial intelligence}} inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology. For example, the names of classes, sub-classes, properties, and restrictions on allowable values.}}
{{term|Deep Blue}}
{{defn|A chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls.}}
{{term|deep learning}}
{{defn|A subset of {{gli|machine learning}} that focuses on utilizing {{gli|neural network|neural networks}} to perform tasks such as {{gli|classification}}, {{gli|regression}}, and {{gli|representation learning}}. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be either {{gli|supervised learning|supervised}}, {{gli|semi-supervised learning|semi-supervised}}, or {{gli|unsupervised learning|unsupervised}}.}}
{{term|DeepMind Technologies}}
{{defn|A British {{gli|artificial intelligence}} company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada,{{Cite web |url=https://deepmind.com/about/ |title=About Us {{!}} DeepMind |website=DeepMind|date=17 December 2024 }} France,{{Cite web |url=https://deepmind.com/blog/a-return-to-paris/ |title=A return to Paris {{!}} DeepMind |website=DeepMind|date=29 March 2018 }} and the United States. Acquired by Google in 2014, the company has created a {{gli|neural network}} that learns how to play video games in a fashion similar to that of humans,{{Cite web |url=https://medium.com/the-physics-arxiv-blog/the-last-ai-breakthrough-deepmind-made-before-google-bought-it-for-400m-7952031ee5e1 |title=The Last AI Breakthrough DeepMind Made Before Google Bought It |date=2014-01-29 |publisher=The Physics arXiv Blog |access-date=12 October 2014}} as well as a neural Turing machine,{{Cite arXiv |eprint=1410.5401 |class=cs.NE |first1=Alex |last1=Graves |first2=Greg |last2=Wayne |author-link=Alex Graves (computer scientist) |title=Neural Turing Machines |last3=Danihelka |first3=Ivo |year=2014}} or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[https://technologyreview.com/view/533741/best-of-2014-googles-secretive-deepmind-startup-unveils-a-neural-turing-machine/ Best of 2014: Google's Secretive DeepMind Startup Unveils a "Neural Turing Machine"] {{Webarchive|url=https://web.archive.org/web/20151204081728/http://www.technologyreview.com/view/533741/best-of-2014-googles-secretive-deepmind-startup-unveils-a-neural-turing-machine/ |date=4 December 2015 }}, MIT Technology Review{{Cite journal |last1=Graves |first1=Alex |author-link=Alex Graves (computer scientist) |last2=Wayne |first2=Greg |last3=Reynolds |first3=Malcolm |last4=Harley |first4=Tim |last5=Danihelka |first5=Ivo |last6=Grabska-Barwińska |first6=Agnieszka |last7=Colmenarejo |first7=Sergio Gómez |last8=Grefenstette |first8=Edward |last9=Ramalho |first9=Tiago |date=12 October 2016 |title=Hybrid computing using a neural network with dynamic external memory |journal=Nature |volume=538 |issue=7626 |pages=471–476 |bibcode=2016Natur.538..471G |doi=10.1038/nature20101 |issn=1476-4687 |pmid=27732574|s2cid=205251479 |url=https://ora.ox.ac.uk/objects/uuid:dd8473bd-2d70-424d-881b-86d9c9c66b51 }} The company made headlines in 2016 after its AlphaGo program beat human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film.{{Citation |last1=Kohs |first1=Greg |title=AlphaGo |date=29 September 2017 |url=https://imdb.com/title/tt6700846/ |others=Ioannis Antonoglou, Lucas Baker, Nick Bostrom |access-date=9 January 2018}} A more general program, AlphaZero, beat the most powerful programs playing Go, chess, and shogi (Japanese chess) after a few days of play against itself using {{gli|reinforcement learning}}.{{Cite arXiv |eprint=1712.01815 |class=cs.AI |first1=David |last1=Silver |first2=Thomas |last2=Hubert |author-link=David Silver (programmer) |title=Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm |date=5 December 2017 |first3=Julian |last3=Schrittwieser |first4=Ioannis |last4=Antonoglou |first5=Matthew |last5=Lai |first6=Arthur |last6=Guez |first7=Marc |last7=Lanctot |first8=Laurent |last8=Sifre |first9=Dharshan |last9=Kumaran |first10=Thore |last10=Graepel 
|first11=Timothy |last11=Lillicrap |first12=Karen |last12=Simonyan |first13=Demis |last13=Hassabis |author-link13=Demis Hassabis}}}}
{{term|default logic}}
{{defn|A non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.}}
{{anchor|DBSCAN}}{{term|Density-based spatial clustering of applications with noise (DBSCAN)}}
{{defn|A {{gli|cluster analysis|clustering}} {{gli|algorithm}} proposed by Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu in 1996.{{Cite conference
| author1-link = Martin Ester | author1-first = Martin | author1-last = Ester | author2-link = Hans-Peter Kriegel | author2-first = Hans-Peter | author2-last =Kriegel | first3 = Jörg | last3 = Sander | first4 = Xiaowei | last4 = Xu
| title = A density-based algorithm for discovering clusters in large spatial databases with noise
| url = https://cdn.aaai.org/KDD/1996/KDD96-037.pdf
| pages = 226–231
| editor1-first = Evangelos | editor1-last = Simoudis | editor2-first = Jiawei | editor2-last = Han | editor3-first = Usama M. | editor3-last = Fayyad
| conference = Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96)
| publisher = AAAI Press
| year = 1996
| isbn = 1-57735-004-9
| citeseerx = 10.1.1.121.9220
}}}}
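A minimal usage sketch with the scikit-learn implementation of the algorithm; the example points and the <code>eps</code> and <code>min_samples</code> values are arbitrary placeholders:
<syntaxhighlight lang="python">
import numpy as np
from sklearn.cluster import DBSCAN

# Two loose groups of 2-D points plus one obvious outlier.
points = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
                   [8.0, 8.2], [8.1, 7.9],
                   [25.0, 25.0]])

labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(points)
print(labels)   # label -1 marks points treated as noise
</syntaxhighlight>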
{{term|description logic (DL)}}
{{defn|A family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) {{gli|decision problem|decidable}}, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between DL expressivity and {{gli|knowledge representation and reasoning|reasoning}} complexity by supporting different sets of mathematical constructors.{{Cite book |last1=Sikos |first1=Leslie F. |url=https://springer.com/us/book/9783319540658 |title=Description Logics in Multimedia Reasoning |date=2017 |publisher=Springer International Publishing |isbn=978-3-319-54066-5 |location=Cham |doi=10.1007/978-3-319-54066-5|s2cid=3180114 }}}}
{{term|developmental robotics (DevRob)}}
{{ghat|Also epigenetic robotics.}}
{{defn|A scientific field which aims at studying the developmental mechanisms, architectures, and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines.}}
{{term|diagnosis}}
{{defn|Concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour.}}
{{term|dialogue system}}
{{ghat|Also conversational agent (CA).}}
{{defn|A computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.}}
{{term|diffusion model}}
{{defn|In {{gli|machine learning}}, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of latent variable models. They are Markov chains trained using variational inference.{{cite book |last1=Ho |first1=Jonathan |last2=Jain |first2=Ajay |last3=Abbeel |first3=Pieter |title=Denoising Diffusion Probabilistic Models |date=19 June 2020 |arxiv=2006.11239}} The goal of diffusion models is to learn the latent structure of a dataset by modeling the way in which data points diffuse through the latent space. In computer vision, this means that a neural network is trained to denoise images blurred with Gaussian noise by learning to reverse the diffusion process.{{cite arXiv |last1=Song |first1=Yang |last2=Sohl-Dickstein |first2=Jascha |last3=Kingma |first3=Diederik P. |last4=Kumar |first4=Abhishek |last5=Ermon |first5=Stefano |last6=Poole |first6=Ben |date=2021-02-10 |title=Score-Based Generative Modeling through Stochastic Differential Equations |class=cs.LG |eprint=2011.13456}}{{cite arXiv |last1=Gu |first1=Shuyang |last2=Chen |first2=Dong |last3=Bao |first3=Jianmin |last4=Wen |first4=Fang |last5=Zhang |first5=Bo |last6=Chen |first6=Dongdong |last7=Yuan |first7=Lu |last8=Guo |first8=Baining |title=Vector Quantized Diffusion Model for Text-to-Image Synthesis |date=2021 |class=cs.CV |eprint=2111.14822}} It mainly consists of three major components: the forward process, the reverse process, and the sampling procedure.{{cite arXiv |last1=Chang |first1=Ziyi |last2=Koulieris |first2=George Alex |last3=Shum |first3=Hubert P. H. |title=On the Design Fundamentals of Diffusion Models: A Survey |date=2023 |eprint=2306.04542 |class=cs.LG}} Three examples of generic diffusion modeling frameworks used in computer vision are denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations.{{cite journal |last1=Croitoru |first1=Florinel-Alin |last2=Hondru |first2=Vlad |last3=Ionescu |first3=Radu Tudor |last4= Shah |first4= Mubarak |title=Diffusion Models in Vision: A Survey |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |date=2023 |volume=45 |issue=9 |pages=10850–10869 |doi=10.1109/TPAMI.2023.3261988 |pmid=37030794 |arxiv=2209.04747|s2cid=252199918 }}}}
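In the denoising diffusion probabilistic model formulation cited above, the forward process corrupts a data point <math>x_0</math> over <math>T</math> steps by adding Gaussian noise according to a variance schedule <math>\beta_t</math>:
<math display="block">q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right),</math>
and the reverse (denoising) process is learned by a neural network that approximates <math>p_\theta(x_{t-1} \mid x_t)</math>.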
{{term|Dijkstra's algorithm}}
{{defn|An {{gli|algorithm}} for finding the shortest paths between nodes in a weighted graph, which may represent, for example, road networks.}}
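A compact sketch of the algorithm using a binary heap as the priority queue; the adjacency-dictionary graph encoding and the function name are illustrative:
<syntaxhighlight lang="python">
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` in a graph given as
    {node: [(neighbor, non_negative_weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbor, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(heap, (new_d, neighbor))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}, "a"))
</syntaxhighlight>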
{{term|dimensionality reduction}}
{{ghat|Also dimension reduction.}}
{{defn|The process of reducing the number of random variables under consideration{{Cite journal |last1=Roweis |first1=S. T. |last2=Saul |first2=L. K. |year=2000 |title=Nonlinear Dimensionality Reduction by Locally Linear Embedding |journal=Science |volume=290 |issue=5500 |pages=2323–2326 |bibcode=2000Sci...290.2323R |citeseerx=10.1.1.111.3313 |doi=10.1126/science.290.5500.2323 |pmid=11125150|s2cid=5987139}} by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.{{Cite book |last1=Pudil |first1=P. |title=Feature Extraction, Construction and Selection |url=https://archive.org/details/featureextractio00liuh |url-access=limited |last2=Novovičová |first2=J. |year=1998 |isbn=978-1-4613-7622-4 |editor-last=Liu |editor-first=Huan |pages=[https://archive.org/details/featureextractio00liuh/page/n120 101] |chapter=Novel Methods for Feature Subset Selection with Respect to Problem Knowledge |doi=10.1007/978-1-4615-5725-8_7 |editor-last2=Motoda |editor-first2=Hiroshi}}}}
{{term|discrete system}}
{{defn|Any system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.}}
{{term|distributed artificial intelligence (DAI)}}
{{ghat|Also decentralized artificial intelligence.}}
{{defn|A subfield of {{gli|artificial intelligence}} research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of {{gli|multi-agent system|multi-agent systems}}.Demazeau, Yves, and J-P. Müller, eds. Decentralized Ai. Vol. 2. Elsevier, 1990.}}
{{term|double descent}}
{{defn|A phenomenon in statistics and {{gli|machine learning}} where a model with a small number of parameters and a model with an extremely large number of parameters have a small test error, but a model whose number of parameters is about the same as the number of data points used to train the model will have a large error.{{Cite web |date=2019-12-05 |title=Deep Double Descent |url=https://openai.com/blog/deep-double-descent/ |access-date=2022-08-12 |website=OpenAI |language=en}} This phenomenon has been considered surprising, as it contradicts assumptions about {{gli|overfitting}} in classical machine learning.{{Cite arXiv |eprint=2303.14151v1 |class=cs.LG |first1=Rylan |last1=Schaeffer |first2=Mikail |last2=Khona |title=Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle |date=2023-03-24 |language=en |last3=Robertson |first3=Zachary |last4=Boopathy |first4=Akhilan |last5=Pistunova |first5=Kateryna |last6=Rocks |first6=Jason W. |last7=Fiete |first7=Ila Rani |last8=Koyejo |first8=Oluwasanmi}}}}
{{anchor|dropout}}{{term|dropout}}
{{ghat|Also dilution.}}
{{defn|A {{gli|regularization}} technique for reducing {{gli|overfitting}} in {{gli|artificial neural network|artificial neural networks}} by preventing complex co-adaptations on training data.}}
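A minimal sketch of the idea at training time (so-called inverted dropout, with an arbitrary keep probability); deep learning frameworks provide this as a built-in layer, and at test time the layer is simply left unchanged:
<syntaxhighlight lang="python">
import numpy as np

def dropout(activations, keep_prob=0.8, rng=np.random.default_rng()):
    """Randomly zero out units and rescale the survivors."""
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

hidden = np.array([0.5, 1.2, -0.3, 0.9])
print(dropout(hidden))
</syntaxhighlight>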
{{term|dynamic epistemic logic (DEL)}}
{{defn|A logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur.}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
E
{{glossary}}
{{term|eager learning}}
{{defn|A learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where {{gli|generalization}} beyond the training data is delayed until a query is made to the system.{{Cite conference |last1=Hendrickx |first1=Iris |last2=Van den Bosch, Antal |author-link2=Antal van den Bosch |date=October 2005 |title=Hybrid algorithms with Instance-Based Classification |url=https://books.google.com/books?id=GtcevX7n90wC&pg=PA158 |publisher=Springer |pages=158–169 |isbn=9783540292432 |book-title=Machine Learning: ECML2005}}}}
{{term|early stopping}}
{{defn|A {{gli|regularization}} technique often used when training a {{gli|machine learning}} model with an iterative method such as gradient descent.}}
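A schematic training loop with a patience counter; the <code>train_one_epoch</code> and <code>validation_loss</code> callables are hypothetical placeholders for whatever the surrounding training code provides:
<syntaxhighlight lang="python">
def train_with_early_stopping(model, train_one_epoch, validation_loss,
                              patience=5, max_epochs=100):
    """Stop when validation loss has not improved for `patience` epochs."""
    best_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        loss = validation_loss(model)
        if loss < best_loss:
            best_loss, epochs_without_improvement = loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break   # further training would likely overfit
    return model
</syntaxhighlight>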
{{term|Ebert test}}
{{defn|A test which gauges whether a computer-based synthesized voice{{Cite news |last1=Lee |first1=Jennifer |url=https://bits.blogs.nytimes.com/2011/03/07/roger-ebert-tests-his-vocal-cords-and-comedic-delivery/?src=me |title=Roger Ebert Tests His Vocal Cords, and Comedic Delivery |date=March 7, 2011 |work=The New York Times |access-date=2011-09-12 |quote=Now perhaps, there is the Ebert Test, a way to see if a synthesized voice can deliver humor with the timing to make an audience laugh.... He proposed the Ebert Test as a way to gauge the humanness of a synthesized voice.}} can tell a joke with sufficient skill to cause people to laugh.{{Cite news |url=https://tips-tricks.co.in/2011/03/roger-eberts-inspiring-digital.html |title=Roger Ebert's Inspiring Digital Transformation |date=March 5, 2011 |access-date=2011-09-12 |url-status=dead |archive-url=https://web.archive.org/web/20110325160035/https://tips-tricks.co.in/2011/03/roger-eberts-inspiring-digital.html |archive-date=25 March 2011 |publisher=Tech News |quote=Meanwhile, the technology that enables Ebert to "speak" continues to see improvements – for example, adding more realistic inflection for question marks and exclamation points. In a test of that, which Ebert called the "Ebert test" for computerized voices,}} It was proposed by film critic Roger Ebert at the 2011 TED conference as a challenge to software developers to have a computerized voice master the inflections, delivery, timing, and intonations of a speaking human.{{Cite news |last1=Ostrow |first1=Adam |url=https://mashable.com/2011/03/05/roger-ebert-ted-talk/ |title=Roger Ebert's Inspiring Digital Transformation |date=March 5, 2011 |access-date=2011-09-12 |publisher=Mashable Entertainment |quote=With the help of his wife, two colleagues and the Alex-equipped MacBook that he uses to generate his computerized voice, famed film critic Roger Ebert delivered the final talk at the TED conference on Friday in Long Beach, California....}} The test is similar to the Turing test proposed by Alan Turing in 1950 as a way to gauge a computer's ability to exhibit intelligent behavior by generating performance indistinguishable from a human being.{{Cite news |last1=Pasternack |first1=Alex |url=https://motherboard.tv/2011/4/18/a-macbook-may-have-given-roger-ebert-his-voice-but-an-ipod-saved-his-life-video |title=A MacBook May Have Given Roger Ebert His Voice, But An iPod Saved His Life (Video) |date=Apr 18, 2011 |access-date=2011-09-12 |url-status=dead |archive-url=https://web.archive.org/web/20110906063605/https://motherboard.tv/2011/4/18/a-macbook-may-have-given-roger-ebert-his-voice-but-an-ipod-saved-his-life-video |archive-date=6 September 2011 |publisher=Motherboard |quote=He calls it the "Ebert Test," after Turing's AI standard...}}}}
{{term|echo state network (ESN)}}
{{defn|A recurrent neural network with a sparsely connected {{gli|hidden layer}} (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can (re)produce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system.{{Cite journal |last1=Jaeger |first1=Herbert |last2=Haas |first2=Harald |year=2004 |title=Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication |url=https://columbia.edu/cu/biology/courses/w4070/Reading_List_Yuste/haas_04.pdf |journal=Science |volume=304 |issue=5667 |pages=78–80 |doi=10.1126/science.1091277 |pmid=15064413 |bibcode=2004Sci...304...78J |s2cid=2184251 |access-date=5 July 2022 |archive-date=1 September 2022 |archive-url=https://web.archive.org/web/20220901054532/http://www.columbia.edu/cu/biology/courses/w4070/Reading_List_Yuste/haas_04.pdf |url-status=dead }}Herbert Jaeger (2007) [https://scholarpedia.org/article/Echo_State_Network Echo State Network.] {{Webarchive|url=https://web.archive.org/web/20220628153329/http://www.scholarpedia.org/article/Echo_state_network |date=28 June 2022 }} Scholarpedia.}}
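Concretely, if the reservoir states collected over a training run are stacked as the columns of a matrix <math>X</math> and the desired outputs as <math>Y</math>, the output weights are commonly obtained by ridge regression (notation varies between sources):
<math display="block">W^{\text{out}} = Y X^{\mathsf{T}} \left( X X^{\mathsf{T}} + \lambda \mathbf{I} \right)^{-1},</math>
where <math>\lambda \ge 0</math> is a small regularisation constant.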
{{term|embodied agent}}
{{ghat|Also interface agent.}}
{{defn|An {{gli|intelligent agent}} that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment.{{Cite journal |last1=Serenko |first1=Alexander |last2=Bontis |first2=Nick |last3=Detlor |first3=Brian |year=2007 |title=End-user adoption of animated interface agents in everyday work applications |url=https://aserenko.com/papers/Serenko_Bontis_Detlor_end_user_adoption_agent.pdf |journal=Behaviour and Information Technology |volume=26 |issue=2 |pages=119–132 |doi=10.1080/01449290500260538|s2cid=2175427 }}}}
{{term|embodied cognitive science}}
{{defn|An interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: 1) the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity, 2) the formation of a common set of general principles of intelligent behavior, and 3) the experimental use of robotic agents in controlled environments.}}
{{term|error-driven learning}}
{{defn|A sub-area of {{gli|machine learning}} concerned with how an {{gli|intelligent agent|agent}} ought to take actions in an environment so as to minimize some error feedback. It is a type of {{gli|reinforcement learning}}.}}
{{term|ensemble learning}}
{{defn|The use of multiple {{gli|machine learning}} algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.{{cite journal
|last1=Opitz |first1=D. |last2=Maclin |first2=R. |title=Popular ensemble methods: An empirical study
|journal=Journal of Artificial Intelligence Research
|volume=11 |pages=169–198 |year=1999
|doi=10.1613/jair.614
|doi-access=free|arxiv=1106.0257}}{{cite journal
|last1=Polikar |first1=R. |title=Ensemble based systems in decision making
|journal=IEEE Circuits and Systems Magazine
|volume=6 |issue=3 |pages=21–45 |year=2006
|doi=10.1109/MCAS.2006.1688199
|s2cid=18032543
}}{{cite journal
|last1=Rokach |first1=L. |title=Ensemble-based classifiers
|journal=Artificial Intelligence Review
|volume=33
|issue=1–2 |pages=1–39 |year=2010
|doi=10.1007/s10462-009-9124-7
|s2cid=11149239
|hdl=11323/1748
|hdl-access=free
}}}}
{{anchor|epoch}}{{term|epoch}}
{{defn|In {{gli|machine learning}}, particularly in the creation of {{gli|artificial neural network|artificial neural networks}}, an epoch is training the model for one cycle through the full training dataset. Small models are typically trained for as many epochs as it takes to reach the best performance on the validation dataset. The largest models may train for only one epoch.}}
{{term|ethics of artificial intelligence}}
{{defn|The part of the ethics of technology specific to artificial intelligence.}}
{{anchor|evolutionary algorithm}}{{term|evolutionary algorithm (EA)}}
{{defn|A subset of {{gli|evolutionary computation}},{{Cite book |last1=Vikhar |first1=P. A. |title=2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC) |chapter=Evolutionary algorithms: A critical review and its future prospects |year=2016 |publisher=Jalgaon, 2016, pp. 261–265 |pages=261–265 |doi=10.1109/ICGTSPICC.2016.7955308 |isbn=978-1-5090-0467-6|s2cid=22100336 }} a generic population-based metaheuristic optimization {{gli|algorithm}}. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators.}}
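A highly simplified generational loop illustrating the selection and variation cycle described above; the bit-string representation, population size, and mutation rate are illustrative placeholders:
<syntaxhighlight lang="python">
import random

def evolve(fitness, length=10, pop_size=20, generations=50, mutation_rate=0.1):
    """Maximise `fitness` over fixed-length bit strings with a basic EA."""
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Variation: recombine random parents and mutate the offspring.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            children.append([bit ^ 1 if random.random() < mutation_rate else bit
                             for bit in child])
        population = parents + children
    return max(population, key=fitness)

print(evolve(sum))   # "one-max" toy problem: fitness is the number of 1s
</syntaxhighlight>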
{{term|evolutionary computation}}
{{defn|A family of {{gli|algorithm|algorithms}} for global optimization inspired by biological evolution, and the subfield of {{gli|artificial intelligence}} and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.}}
{{term|evolving classification function (ECF)}}
{{defn|Evolving classification functions are used for {{gli|classification|classifying}} and {{gli|cluster analysis|clustering}} in the field of {{gli|machine learning}} and {{gli|artificial intelligence}}, typically employed for data stream mining tasks in dynamic and changing environments.}}
{{term|existential risk}}
{{defn|The hypothesis that substantial progress in {{gli|artificial general intelligence}} (AGI) could someday result in human extinction or some other unrecoverable global catastrophe.{{Cite book |last1=Russell |first1=Stuart |title=Artificial Intelligence: A Modern Approach |title-link=Artificial Intelligence: A Modern Approach |last2=Norvig |first2=Peter |date=2009 |publisher=Prentice Hall |isbn=978-0-13-604259-4 |chapter=26.3: The Ethics and Risks of Developing Artificial Intelligence |author-link=Stuart J. Russell |author-link2=Peter Norvig}}{{Cite journal |last1=Bostrom |first1=Nick |author-link=Nick Bostrom |year=2002 |title=Existential risks |journal=Journal of Evolution and Technology |volume=9 |issue=1 |pages=1–31}}{{Cite news |url=https://slate.com/articles/technology/future_tense/2016/04/killer_a_i_101_a_cheat_sheet_to_the_terminology_the_ethical_debates_the.html |title=Your Artificial Intelligence Cheat Sheet |date=1 April 2016 |work=Slate |access-date=16 May 2016}}}}
{{term|expert system}}
{{defn|A computer system that emulates the decision-making ability of a human expert.{{Citation |last1=Jackson |first1=Peter |title=Introduction To Expert Systems |page=2 |year=1998 |edition=3 |publisher=Addison Wesley |isbn=978-0-201-87686-4}} Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code.{{Cite web |url=https://pcmag.com/encyclopedia_term/0,2542,t=conventional+programming&i=40325,00.asp |title=Conventional programming |work=PC Magazine |access-date=2013-09-15 |archive-date=14 October 2012 |archive-url=https://web.archive.org/web/20121014124656/https://pcmag.com/encyclopedia_term/0%2C2542%2Ct%3Dconventional+programming%26i%3D40325%2C00.asp |url-status=dead }}}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
F
{{glossary}}
{{term|fast-and-frugal trees}}
{{defn|A type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category.Martignon, Laura; Vitouch, Oliver; Takezawa, Masanori; Forster, Malcolm. [https://researchgate.net/publication/27278577_Naive_and_Yet_Enlightened_From_Natural_Frequencies_to_Fast_and_Frugal_Decision_Trees "Naive and Yet Enlightened: From Natural Frequencies to Fast and Frugal Decision Trees"], published in Thinking : Psychological perspectives on reasoning, judgement and decision making (David Hardman and Laura Macchi; editors), Chichester: John Wiley & Sons, 2003.}}
{{anchor|feature}}{{term|feature}}
{{defn|An individual measurable property or characteristic of a phenomenon.{{cite book |author=Bishop, Christopher |title=Pattern recognition and machine learning |publisher=Springer |location=Berlin |year=2006 |isbn=0-387-31073-8 }} In {{gli|computer vision}} and image processing, a feature is a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in an image (such as points, edges, or objects), or the result of a general neighborhood operation or feature detection applied to the image.}}
{{term|feature extraction}}
{{defn|In {{gli|machine learning}}, pattern recognition, and image processing, feature extraction starts from an initial set of measured data and builds derived values ({{gli|feature|features}}) intended to be informative and non-redundant, facilitating the subsequent learning and {{gli|generalization}} steps, and in some cases leading to better human interpretations.}}
{{anchor|representation learning}}
{{term|feature learning}}
{{ghat|Also representation learning.}}
{{defn|In {{gli|machine learning}}, {{gli|feature}} learning or representation learning{{Cite journal |last1=Bengio |first1=Y. |last2=Courville |first2=A. |last3=Vincent |first3=P. |year=2013 |title=Representation Learning: A Review and New Perspectives |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=35 |issue=8 |pages=1798–1828 |arxiv=1206.5538 |doi=10.1109/tpami.2013.50 |pmid=23787338|s2cid=393948 }} is a set of techniques that allows a system to automatically discover the representations needed for feature detection or {{gli|classification}} from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.}}
{{term|feature selection}}
{{defn|In {{gli|machine learning}} and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant {{gli|feature|features}} (variables, predictors) for use in model construction.}}
{{term|federated learning}}
{{defn|A {{gli|machine learning}} technique that allows for training models on multiple devices with decentralized data, thus helping preserve the privacy of individual users and their data.}}
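A toy sketch of the federated averaging idea, in which clients train locally and only model parameters (here plain NumPy arrays) are sent for central aggregation; the client updates and dataset sizes are invented for illustration:
<syntaxhighlight lang="python">
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average client parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Parameters returned by three clients after one round of local training.
updates = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
sizes = [100, 50, 25]
print(federated_average(updates, sizes))
</syntaxhighlight>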
{{term|first-order logic}}
{{ghat|Also first-order predicate calculus or predicate logic.}}
{{defn|A collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists X such that X is Socrates and X is a man", where "there exists" is a quantifier and X is a variable.Hodgson, Dr. J. P. E., [https://people.sju.edu/~jhodgson/ugai/1order.html "First Order Logic"] {{Webarchive|url=https://web.archive.org/web/20190921071136/http://people.sju.edu/~jhodgson/ugai/1order.html |date=21 September 2019 }}, Saint Joseph's University, Philadelphia, 1995. This distinguishes it from propositional logic, which does not use quantifiers or relations.Hughes, G. E., & Cresswell, M. J., A New Introduction to Modal Logic (London: Routledge, 1996), [https://books.google.com/books?id=_CB5wiBeaA4C&pg=PA161 p.161].}}
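The example sentence above can be written symbolically as
<math display="block">\exists x \,\bigl(\operatorname{Socrates}(x) \land \operatorname{Man}(x)\bigr),</math>
where <math>\exists</math> is the existential quantifier and <math>x</math> is a variable ranging over the domain of discourse.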
{{term|fluent}}
{{defn|A condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time.}}
{{term|formal language}}
{{defn|A set of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules.}}
{{term|forward chaining}}
{{ghat|Also forward reasoning.}}
{{defn|One of the two main methods of reasoning when using an inference engine; it can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business rule systems, and production rule systems. The opposite of forward chaining is {{gli|backward chaining}}. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.{{Cite book |last1=Feigenbaum |first1=Edward |url=https://archive.org/details/riseofexpertco00feig |title=The Rise of the Expert Company |publisher=Times Books |year=1988 |isbn=978-0-8129-1731-4 |page=[https://archive.org/details/riseofexpertco00feig/page/318 318] |url-access=registration}}}}
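A minimal illustrative sketch (not drawn from the cited source) of forward chaining over propositional if-then rules: rules are assumed to be given as (antecedents, consequent) pairs, and the loop keeps firing any rule whose antecedents are all known facts until nothing new can be inferred.
<syntaxhighlight lang="python">
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # Fire the rule if every antecedent is already a known fact
            # and the consequent is new information.
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)
                changed = True
    return facts

rules = [({"croaks", "eats flies"}, "frog"),
         ({"frog"}, "green")]
print(sorted(forward_chain({"croaks", "eats flies"}, rules)))
# ['croaks', 'eats flies', 'frog', 'green']
</syntaxhighlight>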
{{term|frame}}
{{defn|An artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations". Frames are the primary data structure used in artificial intelligence {{gli|frame language}}.}}
{{term|frame language}}
{{defn|A technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.}}
{{term|frame problem}}
{{defn|The problem of finding adequate collections of axioms for a viable description of a robot environment.{{Cite journal |last1=Hayes |first1=Patrick |title=The Frame Problem and Related Problems in Artificial Intelligence |url=https://aitopics.org/sites/default/files/classic/Webber-Nilsson-Readings/Rdgs-NW-Hayes-FrameProblem.pdf |publisher=University of Edinburgh |journal=Readings in Artificial Intelligence |access-date=9 March 2019 |archive-date=3 December 2013 |archive-url=https://web.archive.org/web/20131203002046/https://aitopics.org/sites/default/files/classic/Webber-Nilsson-Readings/Rdgs-NW-Hayes-FrameProblem.pdf |url-status=dead |date=1981 |pages=223–230 |doi=10.1016/B978-0-934613-03-3.50020-9|isbn=9780934613033 |s2cid=141711662 }}}}
{{term|friendly artificial intelligence}}
{{ghat|Also friendly AI or FAI.}}
{{defn|A hypothetical {{gli|artificial general intelligence}} (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensure that it is adequately constrained.}}
{{term|futures studies}}
{{defn|The study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them.{{Cite journal |last1=Sardar |first1=Z |year=2010 |title=The Namesake: Futures; futures studies; futurology; futuristic; Foresight – What's in a name? |journal=Futures |volume=42 |issue=3 |pages=177–184 |doi=10.1016/j.futures.2009.11.001}}}}
{{term|fuzzy control system}}
{{defn|A control system based on {{gli|fuzzy logic}}—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).{{Cite book |last1=Pedrycz |first1=Witold |title=Fuzzy control and fuzzy systems |publisher=Research Studies Press Ltd. |year=1993 |edition=2}}{{Cite book |last1=Hájek |first1=Petr |title=Metamathematics of fuzzy logic |publisher=Springer Science & Business Media |year=1998 |edition=4}}}}
{{term|fuzzy logic}}
{{defn|A form of many-valued logic in which the truth values of variables may be any real number between 0 (completely false) and 1 (completely true), inclusive, representing degrees of truth. It is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false, in contrast to Boolean logic, in which the truth values of variables may only take the integer values 0 or 1.}}
{{term|fuzzy rule}}
{{defn|A rule used within {{gli|fuzzy logic|fuzzy logic systems}} to infer an output based on input variables.}}
{{term|fuzzy set}}
{{defn|In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1.D. Dubois and H. Prade (1988) Fuzzy Sets and Systems. Academic Press, New York. In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.{{Cite journal |last1=Liang |first1=Lily R. |last2=Lu |first2=Shiyong |last3=Wang |first3=Xuena |last4=Lu |first4=Yi |last5=Mandal |first5=Vinay |last6=Patacsil |first6=Dorrelyn |last7=Kumar |first7=Deepak |year=2006 |title=FM-test: A fuzzy-set-theory-based approach to differential gene expression data analysis |journal=BMC Bioinformatics |volume=7 |issue=Suppl 4 |pages=S7 |doi=10.1186/1471-2105-7-S4-S7 |pmc=1780132 |pmid=17217525 |doi-access=free }}}}
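As an illustration (an assumption for this sketch, not taken from the cited sources), a fuzzy set can be represented in code by its membership function; the triangular profile below maps each input to a degree of membership in [0, 1].
<syntaxhighlight lang="python">
def triangular_membership(x, a, b, c):
    """Degree to which x belongs to a fuzzy set with a triangular profile (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# Membership of 21 °C in a hypothetical fuzzy set "warm" spanning 15–30 °C, peaking at 25 °C.
print(triangular_membership(21, 15, 25, 30))  # 0.6
</syntaxhighlight>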
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
G
{{glossary}}
{{term|game theory}}
{{defn|The study of mathematical models of strategic interaction between rational decision-makers.Myerson, Roger B. (1991). Game Theory: Analysis of Conflict, Harvard University Press, p. [https://books.google.com/books?id=E8WQFRCsNr0C&pg=PA1 1]. Chapter-preview links, pp. [https://books.google.com/books?id=E8WQFRCsNr0C&pg=PR7 vii–xi].}}
{{term|general game playing (GGP)}}
{{defn|General game playing is the design of artificial intelligence programs to be able to run and play more than one game successfully.{{cite web |last1=Pell |first1=Barney |editor1=H. van den Herik |editor2=L. Allis |title=Metagame: a new challenge for games and learning. |date=1992 |url=https://svn.sable.mcgill.ca/sable/courses/COMP763/oldpapers/pell-92-metagame.pdf |trans-title=Heuristic programming in artificial intelligence 3–the third computerolympiad |location=Ellis-Horwood |access-date=13 June 2020 |archive-date=17 February 2020 |archive-url=https://web.archive.org/web/20200217154408/https://svn.sable.mcgill.ca/sable/courses/COMP763/oldpapers/pell-92-metagame.pdf |url-status=dead }}{{cite journal |last1=Pell |first1=Barney |title=A Strategic Metagame Player for General Chess-Like Games |journal=Computational Intelligence |date=1996 |volume=12 |issue=1 |pages=177–198 |doi=10.1111/j.1467-8640.1996.tb00258.x |s2cid=996006 |language=en |issn=1467-8640}}{{cite journal |last1=Genesereth |first1=Michael |last2=Love |first2=Nathaniel |last3=Pell |first3=Barney |title=General Game Playing: Overview of the AAAI Competition |journal=AI Magazine |date=15 June 2005 |volume=26 |issue=2 |pages=62 |doi=10.1609/aimag.v26i2.1813 |language=en |issn=2371-9621}}}}
{{anchor|generalization}}{{term|generalization}}
{{defn|The concept that humans, other animals, and {{gli|artificial neural network|artificial neural networks}} use past learning in present situations of learning if the conditions in the situations are regarded as similar.{{cite book|last1=Gluck|first1=Mark A.|last2=Mercado|first2=Eduardo|last3=Myers|first3=Catherine E.|title=Learning and memory: from brain to behavior|date=2011|publisher=Worth Publishers|location=New York|isbn=9781429240147|page=209|edition=2nd}}}}
{{term|generalization error}}
{{defn|For {{gli|supervised learning}} applications in {{gli|machine learning}} and statistical learning theory, generalization errorMohri, M., Rostamizadeh A., Talwakar A., (2018) Foundations of Machine learning, 2nd ed., Boston: MIT Press (also known as the out-of-sample errorY S. Abu-Mostafa, M.Magdon-Ismail, and H.-T. Lin (2012) Learning from Data, AMLBook Press. {{ISBN|978-1600490064}} or the risk) is a measure of how accurately a learning algorithm is able to predict outcomes for previously unseen data.}}
{{term|generative adversarial network (GAN)}}
{{defn|A class of {{gli|machine learning}} systems. Two {{gli|neural network|neural networks}} contest with each other in a zero-sum game framework.}}
{{term|generative artificial intelligence}}
{{defn|Generative artificial intelligence is artificial intelligence capable of generating text, images, or other media in response to prompts.{{Cite web|url=https://www.nytimes.com/2023/01/27/technology/anthropic-ai-funding.html|title=Anthropic Said to Be Closing In on $300 Million in New A.I. Funding|last1=Griffith|first1=Erin|last2=Metz|first2=Cade|date=2023-01-27|work=The New York Times|access-date=2023-03-14}}{{cite news |last1=Lanxon |first1=Nate |last2=Bass |first2=Dina |last3=Davalos |first3=Jackie |title=A Cheat Sheet to AI Buzzwords and Their Meanings |url=https://news.bloomberglaw.com/tech-and-telecom-law/a-cheat-sheet-to-ai-buzzwords-and-their-meanings-quicktake |access-date=March 14, 2023 |newspaper=Bloomberg News |date=March 10, 2023 |location=}} Generative AI models {{gli|machine learning|learn}} the patterns and structure of their input training data and then generate new data that has similar characteristics, typically using {{gli|transformer}}-based {{gli|deep learning|deep}} {{gli|neural network|neural networks}}.{{Cite news |last=Pasick |first=Adam |date=2023-03-27 |title=Artificial Intelligence Glossary: Neural Networks and Other Terms Explained |language=en-US |work=The New York Times |url=https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html |access-date=2023-04-22 |issn=0362-4331}}{{cite web | url=https://openai.com/research/generative-models | title=Generative models | author1=Andrej Karpathy | author2=Pieter Abbeel | author3=Greg Brockman | author4=Peter Chen | author5=Vicki Cheung | author6=Yan Duan | author7=Ian Goodfellow | author8=Durk Kingma | author9=Jonathan Ho | author10=Rein Houthooft | author11=Tim Salimans | author12=John Schulman | author13=Ilya Sutskever | author14=Wojciech Zaremba | date=2016-06-16 | website=OpenAI}}}}
{{term|generative pretrained transformer (GPT)}}
{{defn|A {{gli|large language model}} based on the {{gli|transformer}} architecture that generates text. It is first pretrained to predict the next token in texts (a token is typically a word, subword, or punctuation). After pretraining, GPT models can generate human-like text by repeatedly predicting the token that they would expect to follow. GPT models are usually also fine-tuned, for example with {{gli|reinforcement learning from human feedback}} to reduce {{gli|hallucination}} or harmful behaviour, or to format the output in a conversational format.{{Cite web |last=Smith |first=Craig S. |date=March 15, 2023 |title=ChatGPT-4 Creator Ilya Sutskever on AI Hallucinations and AI Democracy |url=https://www.forbes.com/sites/craigsmith/2023/03/15/gpt-4-creator-ilya-sutskever-on-ai-hallucinations-and-ai-democracy/ |access-date=2023-12-25 |website=Forbes |language=en}}}}
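The generation loop described above can be sketched as follows; `next_token_distribution` is a hypothetical stand-in for a trained model that returns a probability for each candidate token, and greedy selection is used purely for brevity (real systems typically sample from the distribution).
<syntaxhighlight lang="python">
def generate(next_token_distribution, prompt_tokens, max_new_tokens, end_token):
    """Autoregressive text generation with a hypothetical model interface."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probabilities = next_token_distribution(tokens)      # dict: token -> probability
        next_token = max(probabilities, key=probabilities.get)  # greedy choice for simplicity
        if next_token == end_token:
            break
        tokens.append(next_token)  # feed the prediction back in and repeat
    return tokens
</syntaxhighlight>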
{{anchor|genetic algorithm}}{{term|genetic algorithm (GA)}}
{{defn|A metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.{{sfn|Mitchell|1996|p=2}}}}
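A minimal sketch of a genetic algorithm on a toy "one-max" problem (maximize the number of 1-bits in a bit string); the problem, population size, and rates are illustrative assumptions, but the loop shows the selection, crossover, and mutation operators described here and in the entries below.
<syntaxhighlight lang="python">
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        new_population = []
        for _ in range(pop_size):
            # Selection: pick the fitter of two random individuals (tournament of size 2).
            parent1 = max(random.sample(population, 2), key=fitness)
            parent2 = max(random.sample(population, 2), key=fitness)
            # Crossover: splice the two parents at a random point.
            point = random.randint(1, length - 1)
            child = parent1[:point] + parent2[point:]
            # Mutation: flip each bit with a small probability to maintain diversity.
            child = [1 - bit if random.random() < mutation_rate else bit for bit in child]
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

best = genetic_algorithm(fitness=sum)  # "one-max": fitness is the number of 1-bits
print(sum(best))
</syntaxhighlight>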
{{term|genetic operator}}
{{defn|An operator used in genetic algorithms to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful.}}
{{term|glowworm swarm optimization}}
{{defn|A {{gli|swarm intelligence}} optimization {{gli|algorithm}} based on the behaviour of glowworms (also known as fireflies or lightning bugs).}}
{{term|gradient boosting}}
{{defn|A {{gli|machine learning}} technique based on {{gli|boosting}} in a functional space, where the target is pseudo-residuals instead of residuals as in traditional boosting.}}
{{term|graph (abstract data type)}}
{{defn|In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics; specifically, the field of {{gli|graph theory}}.}}
{{term|graph (discrete mathematics)}}
{{defn|In mathematics, and more specifically in {{gli|graph theory}}, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called an arc or line).{{Cite book |last1=Trudeau |first1=Richard J. |url=https://store.doverpublications.com/0486678709.html |title=Introduction to Graph Theory |publisher=Dover Pub. |year=1993 |isbn=978-0-486-67870-2 |edition=Corrected, enlarged republication. |location=New York |pages=19 |quote=A graph is an object consisting of two sets called its vertex set and its edge set. |access-date=8 August 2012}}}}
{{term|graph database (GDB)}}
{{defn|A database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A key concept of the system is the graph (or edge or relationship), which directly relates data items in the store: a collection of nodes and edges, with the edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly, and in many cases retrieved with one operation. Graph databases hold the relationships between data as a priority. Querying relationships within a graph database is fast because they are perpetually stored within the database itself. Relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data.{{Cite journal |last1=Yoon |first1=Byoung-Ha |last2=Kim |first2=Seon-Kyu |last3=Kim |first3=Seon-Young |date=March 2017 |title=Use of Graph Database for the Integration of Heterogeneous Biological Data |journal=Genomics & Informatics |volume=15 |issue=1 |pages=19–27 |doi=10.5808/GI.2017.15.1.19 |issn=1598-866X |pmc=5389944 |pmid=28416946}}{{Cite book |last1=Bourbakis |first1=Nikolaos G. |url=https://books.google.com/books?id=mV3wxKLHlnwC&q=%22gdb%22+%22graph+database%22&pg=PA381 |title=Artificial Intelligence and Automation |publisher=World Scientific |year=1998 |isbn=9789810226374 |page=381 |access-date=2018-04-20}}}}
{{term|graph theory}}
{{defn|The study of graphs, which are mathematical structures used to model pairwise relations between objects.}}
{{term|graph traversal}}
{{ghat|Also graph search.}}
{{defn|The process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.}}
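As a concrete example, the breadth-first traversal below visits every vertex of a graph (given as an adjacency list) in order of increasing distance from the start vertex; the graph itself is an illustrative assumption.
<syntaxhighlight lang="python">
from collections import deque

def breadth_first_traversal(graph, start):
    """Visit every vertex reachable from start, nearest vertices first."""
    visited = [start]
    seen = {start}
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        for neighbour in graph[vertex]:
            if neighbour not in seen:
                seen.add(neighbour)
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(breadth_first_traversal(graph, "A"))  # ['A', 'B', 'C', 'D']
</syntaxhighlight>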
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
H
{{glossary}}
{{term|hallucination}}
{{defn|A response generated by AI that contains false or misleading information presented as fact.}}
{{term|heuristic}}
{{defn|A technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in {{gli|search algorithm|search algorithms}} at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.{{Cite book |last1=Pearl |first1=Judea |title=Heuristics: intelligent search strategies for computer problem solving |url=https://archive.org/details/intelligentsearc00jude |url-access=limited |publisher=Addison-Wesley Pub. Co., Inc., Reading, MA |year=1984 |location=United States |page=[https://archive.org/details/intelligentsearc00jude/page/n21 3] |bibcode=1985hiss.book.....P |osti=5127296}}}}
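As an illustrative sketch (not from the cited source), the Manhattan distance is a common heuristic function for grid pathfinding such as {{gli|A* search}}: it cheaply estimates the remaining cost from a node to the goal and can be used to rank which candidate to expand next.
<syntaxhighlight lang="python">
def manhattan_distance(node, goal):
    """Estimated remaining cost on a grid where only horizontal/vertical moves are allowed."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

# Rank candidate nodes by estimated distance to a goal at (5, 5).
candidates = [(0, 0), (4, 4), (2, 5)]
print(sorted(candidates, key=lambda n: manhattan_distance(n, (5, 5))))
# [(4, 4), (2, 5), (0, 0)]
</syntaxhighlight>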
{{term|hidden layer}}
{{defn|A layer of neurons in an {{gli|artificial neural network}} that is neither an input layer nor an output layer.}}
{{term|hyper-heuristic}}
{{defn|A {{gli|heuristic}} search method that seeks to automate the process of selecting, combining, generating, or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems, often by the incorporation of {{gli|machine learning}} techniques. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.E. K. Burke, E. Hart, G. Kendall, J. Newall, P. Ross, and S. Schulenburg, Hyper-heuristics: An emerging direction in modern search technology, Handbook of Metaheuristics (F. Glover and G. Kochenberger, eds.), Kluwer, 2003, pp. 457–474.P. Ross, Hyper-heuristics, Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques (E. K. Burke and G. Kendall, eds.), Springer, 2005, pp. 529–556.{{Cite journal |last1=Ozcan |first1=E. |last2=Bilgin |first2=B. |last3=Korkmaz |first3=E. E. |year=2008 |title=A Comprehensive Analysis of Hyper-heuristics |journal=Intelligent Data Analysis |volume=12 |issue=1 |pages=3–23 |doi=10.3233/ida-2008-12102}}}}
{{anchor|hyperparameter}}{{term|hyperparameter}}
{{defn|A parameter that can be set in order to define any configurable part of a {{gli|machine learning}} model's learning process.}}
{{term|hyperparameter optimization}}
{{defn|The process of choosing a set of optimal {{gli|hyperparameter|hyperparameters}} for a learning {{gli|algorithm}}.}}
{{term|hyperplane}}
{{defn|A decision boundary in {{gli|machine learning}} {{gli|classification|classifiers}} that partitions the input space into two or more sections, with each section corresponding to a unique class label.}}
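A minimal sketch: in two dimensions a hyperplane is the line w·x + b = 0, and a linear classifier labels an input according to which side of that boundary it falls on; the weights and bias below are arbitrary illustrative values.
<syntaxhighlight lang="python">
def linear_classify(x, w, b):
    """Return class 1 if x lies on the positive side of the hyperplane w·x + b = 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0

w, b = [2.0, -1.0], 0.5                     # illustrative weights and bias
print(linear_classify([1.0, 1.0], w, b))    # 1  (2 - 1 + 0.5 = 1.5 >= 0)
print(linear_classify([-1.0, 2.0], w, b))   # 0  (-2 - 2 + 0.5 = -3.5 < 0)
</syntaxhighlight>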
{{glossaryend}}
I
{{glossary}}
{{term|IEEE Computational Intelligence Society}}
{{defn|A professional society of the Institute of Electrical and Electronics Engineers (IEEE) focussing on "the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing {{gli|neural network|neural networks}}, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained".{{Cite web |url=https://cis.ieee.org/scope.html |title=IEEE CIS Scope |url-status=dead |archive-url=https://web.archive.org/web/20160604143046/https://cis.ieee.org/scope.html |archive-date=4 June 2016 |access-date=18 March 2019}}}}
{{term|incremental learning}}
{{defn|A method of {{gli|machine learning}} in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It represents a dynamic technique of {{gli|supervised learning|supervised}} and {{gli|unsupervised learning}} that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms.}}
{{term|inference engine}}
{{defn|A component of a {{gli|knowledge-based system}} that applies logical rules to the knowledge base to deduce new information.}}
{{term|information integration (II)}}
{{defn|The merging of information from heterogeneous sources with differing conceptual, contextual and typographical representations. It is used in data mining and consolidation of data from unstructured or semi-structured resources. Typically, information integration refers to textual representations of knowledge but is sometimes applied to rich-media content. Information fusion, which is a related term, involves the combination of information into a new set of information towards reducing redundancy and uncertainty.}}
{{term|Information Processing Language (IPL)}}
{{defn|A programming language designed to support programs that perform simple problem solving, with features such as lists, dynamic memory allocation, data types, recursion, functions as arguments, generators, and cooperative multitasking. IPL invented the concept of list processing, albeit in an assembly-language style.}}
{{term|intelligence amplification (IA)}}
{{ghat|Also cognitive augmentation, machine augmented intelligence, and enhanced intelligence.}}
{{defn|The effective use of information technology in augmenting human intelligence.}}
{{term|intelligence explosion}}
{{defn|A possible outcome of humanity building {{gli|artificial general intelligence}} (AGI). AGI would be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence (ASI), whose limits are unknown, at the time of the technological singularity.}}
{{anchor|intelligent agent}}{{term|intelligent agent (IA)}}
{{defn|An autonomous entity which acts upon an environment, directing its activity towards achieving goals (i.e. it is an agent), using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also {{gli|machine learning|learn}} or use knowledge to achieve their goals. They may be very simple or very complex.}}
{{term|intelligent control}}
{{defn|A class of control techniques that use various {{gli|artificial intelligence}} computing approaches like neural networks, Bayesian probability, {{gli|fuzzy logic}}, {{gli|machine learning}}, {{gli|reinforcement learning}}, evolutionary computation and genetic algorithms.{{Cite web |url=https://engineering.purdue.edu/ManLab/control/intell_control.htm |title=Control of Machining Processes – Purdue ME Manufacturing Laboratories |website=engineering.purdue.edu}}}}
{{term|intelligent personal assistant}}
{{ghat|Also virtual assistant or personal digital assistant.}}
{{defn|A software agent that can perform tasks or services for an individual based on verbal commands. Sometimes the term "chatbot" is used to refer to virtual assistants generally or specifically accessed by online chat (or in some cases online chat programs that are exclusively for entertainment purposes). Some virtual assistants are able to interpret human speech and respond via synthesized voices. Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands.{{Cite journal |last1=Hoy |first1=Matthew B. |year=2018 |title=Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants |journal=Medical Reference Services Quarterly |volume=37 |issue=1 |pages=81–88 |doi=10.1080/02763869.2018.1404391 |pmid=29327988|s2cid=30809087 }}}}
{{term|interpretation}}
{{defn|An assignment of meaning to the symbols of a {{gli|formal language}}. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics.}}
{{term|intrinsic motivation}}
{{defn|An intelligent agent is intrinsically motivated to act if the information content alone, of the experience resulting from the action, is the motivating factor. Information content in this context is measured in the information theory sense as quantifying uncertainty. A typical intrinsic motivation is to search for unusual (surprising) situations, in contrast to a typical extrinsic motivation such as the search for food. Intrinsically motivated artificial agents display behaviours akin to exploration and curiosity.{{Cite book |last1=Oudeyer |first1=Pierre-Yves |title=Proc. of the 8th Conf. on Epigenetic Robotics |last2=Kaplan |first2=Frederic |date=2008 |volume=5 |pages=29–31 |chapter=How can we define intrinsic motivation?}}}}
{{term|issue tree}}
{{ghat|Also logic tree.}}
{{defn|A graphical breakdown of a question that dissects it into its different components vertically and that progresses into details as it reads to the right.{{Cite journal |last1=Chevallier |first1=Arnaud |s2cid=157255130 |title=Strategic thinking in complex problem solving |date=2016 |journal=Oxford Scholarship Online |publisher=Oxford University Press |isbn=9780190463908 |location=Oxford; New York |doi=10.1093/acprof:oso/9780190463908.001.0001 |oclc=940455195}}{{rp|47}} Issue trees are useful in problem solving to identify the root causes of a problem as well as to identify its potential solutions. They also provide a reference point to see how each piece fits into the whole picture of a problem.{{Cite web |url=https://interactive.cabinetoffice.gov.uk/strategy/survivalguide/skills/s_issue.htm |title=Strategy survival guide: Issue trees |date=July 2004 |publisher=Government of the United Kingdom |location=London |archive-url=https://web.archive.org/web/20120217163843/https://interactive.cabinetoffice.gov.uk/strategy/survivalguide/skills/s_issue.htm |archive-date=2012-02-17 |access-date=2018-10-06 }} Also available in [http://webarchive.nationalarchives.gov.uk/20060213205515/https://strategy.gov.uk/downloads/survivalguide/downloads/ssg_v2.1.pdf PDF format].}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
J
{{glossary}}
{{term|junction tree algorithm}}
{{ghat|Also clique tree.}}
{{defn|A method used in {{gli|machine learning}} to extract marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree. The graph is called a tree because it branches into different sections of data; nodes of variables are the branches.{{cite web |url=https://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf |title=A Short Course on Graphical Models |last1=Paskin |first1=Mark |website=Stanford}}}}
{{glossaryend}}
K
{{glossary}}
{{term|kernel method}}
{{defn|In {{gli|machine learning}}, kernel methods are a class of algorithms for pattern analysis, whose best known member is the {{gli|support vector machine}} (SVM). The general task of pattern analysis is to find and study general types of relations (e.g., {{gli|cluster analysis}}, rankings, principal components, correlations, {{gli|classification|classifications}}) in datasets.}}
{{term|KL-ONE}}
{{defn|A well-known knowledge representation system in the tradition of semantic networks and frames; that is, it is a frame language. The system is an attempt to overcome semantic indistinctness in semantic network representations and to explicitly represent conceptual information as a structured inheritance network.{{Cite journal |last1=Woods |first1=W. A. |last2=Schmolze |first2=J. G. |year=1992 |title=The KL-ONE family |journal=Computers & Mathematics with Applications |volume=23 |issue=2–5 |pages=133 |doi=10.1016/0898-1221(92)90139-9 |author-link1=William Aaron Woods}}{{Cite journal |last1=Brachman |first1=R. J. |last2=Schmolze |first2=J. G. |year=1985 |title=An Overview of the KL-ONE Knowledge Representation System |url=https://dli.iiit.ac.in/vdata/IJCAI/IJCAI-83-VOL-1/PDF/072.pdf |journal=Cognitive Science |volume=9 |issue=2 |pages=171 |doi=10.1207/s15516709cog0902_1 |author-link1=Ronald J. Brachman }}{{Dead link|date=September 2023 |bot=InternetArchiveBot |fix-attempted=yes }}{{Cite book |last1=Duce |first1=D.A. |last2=Ringland |first2=G.A. |url=https://archive.org/details/approachestoknow0000unse |title=Approaches to Knowledge Representation, An Introduction |publisher=Research Studies Press, Ltd. |year=1988 |isbn=978-0-86380-064-1 |url-access=registration}}}}
{{anchor|k-nearest neighbors}}{{term|k-nearest neighbors}}
{{defn|A non-parametric {{gli|supervised learning}} method first developed by Evelyn Fix and Joseph Hodges in 1951,{{Cite report | last1=Fix | first1=Evelyn | last2= Hodges | first2=Joseph L. | title=Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties | issue=Report Number 4, Project Number 21-49-004 | year=1951 | url=https://apps.dtic.mil/dtic/tr/fulltext/u2/a800276.pdf | archive-url=https://web.archive.org/web/20200926212807/https://apps.dtic.mil/dtic/tr/fulltext/u2/a800276.pdf | url-status=live | archive-date=September 26, 2020 | publisher=USAF School of Aviation Medicine, Randolph Field, Texas}} and later expanded by Thomas Cover. It is used for {{gli|classification}} and {{gli|regression}}.}}
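A minimal sketch of k-nearest-neighbour classification on a toy two-dimensional dataset (an illustrative assumption): the query point receives the majority label among its k closest training points under Euclidean distance.
<syntaxhighlight lang="python">
import math
from collections import Counter

def knn_classify(query, training_points, k=3):
    """training_points is a list of ((x, y), label) pairs."""
    by_distance = sorted(training_points, key=lambda p: math.dist(query, p[0]))
    nearest_labels = [label for _, label in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]  # majority vote

data = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(knn_classify((1.5, 1.5), data))  # 'A'
</syntaxhighlight>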
{{term|knowledge acquisition}}
{{defn|The process used to define the rules and ontologies required for a knowledge-based system. The phrase was first used in conjunction with expert systems to describe the initial tasks associated with developing an expert system, namely finding and interviewing domain experts and capturing their knowledge via rules, objects, and frame-based ontologies.}}
{{anchor|knowledge-based system}}{{term|knowledge-based system (KBS)}}
{{defn|A computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and refers to many different kinds of systems. The one common theme that unites all knowledge-based systems is an attempt to represent knowledge explicitly and a reasoning system that allows it to derive new knowledge. Thus, a knowledge-based system has two distinguishing features: a knowledge base and an inference engine.}}
{{term|knowledge distillation}}
{{defn|The process of transferring knowledge from a large {{gli|machine learning}} model to a smaller one.}}
{{term|knowledge engineering (KE)}}
{{defn|All technical, scientific, and social aspects involved in building, maintaining, and using {{gli|knowledge-based system|knowledge-based systems}}.}}
{{term|knowledge extraction}}
{{defn|The creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data.}}
{{term|Knowledge Interchange Format (KIF)}}
{{defn|A computer language designed to enable systems to share and reuse information from knowledge-based systems. KIF is similar to frame languages such as KL-ONE and LOOM, but unlike such languages its primary role is not to serve as a framework for the expression or use of knowledge but rather for the interchange of knowledge between systems. The designers of KIF likened it to PostScript. PostScript was not designed primarily as a language to store and manipulate documents but rather as an interchange format for systems and devices to share documents. In the same way, KIF is meant to facilitate the sharing of knowledge across different systems that use different languages, formalisms, platforms, etc.}}
{{anchor|knowledge representation and reasoning}}{{term|knowledge representation and reasoning (KR² or KR&R)}}
{{defn|The field of {{gli|artificial intelligence}} dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology{{Cite book |last1=Schank |first1=Roger |title=Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures |last2=Robert Abelson |date=1977 |publisher=Lawrence Erlbaum Associates, Inc.}} about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.{{Cite news |url=https://deepminds.science/knowledge-representation-neural-networks/ |title=Knowledge Representation in Neural Networks – deepMinds |date=2018-08-16 |work=deepMinds |access-date=2018-08-16 |archive-date=17 August 2018 |archive-url=https://web.archive.org/web/20180817023355/https://deepminds.science/knowledge-representation-neural-networks/ |url-status=dead }} Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.}}
{{term|k-means clustering}}
{{defn|A method of vector quantization, originally from signal processing, that aims to partition n observations into k {{gli|cluster analysis|clusters}} in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster.}}
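A minimal sketch of the standard (Lloyd's) iteration for k-means on one-dimensional data, with illustrative starting centroids: points are repeatedly assigned to their nearest centroid, and each centroid is then moved to the mean of its assigned points.
<syntaxhighlight lang="python">
def k_means(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids

print(k_means([1.0, 1.5, 2.0, 10.0, 11.0, 12.0], centroids=[0.0, 5.0]))  # [1.5, 11.0]
</syntaxhighlight>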
{{glossaryend}}
L
{{glossary}}
{{term|language model}}
{{defn|A probabilistic model of natural language that assigns probabilities to sequences of words or tokens.}}
{{anchor|large language model}}{{term|large language model (LLM)}}
{{defn|A {{gli|language model}} with a large number of parameters (typically at least a billion) that are adjusted during training. Due to its size, it requires a lot of data and computing capability to train. Large language models are usually based on the {{gli|transformer}} architecture.{{Cite web |last=Kerner |first=Sean Michael |title=What are Large Language Models? |url=https://www.techtarget.com/whatis/definition/large-language-model-LLM |access-date=2024-01-28 |website=TechTarget |language=en}}}}
{{term|lazy learning}}
{{defn|In {{gli|machine learning}}, lazy learning is a learning method in which {{gli|generalization}} of the training data is, in theory, delayed until a query is made to the system, as opposed to in eager learning, where the system tries to generalize the training data before receiving queries.}}
{{term|Lisp (programming language) (LISP)}}
{{defn|A family of programming languages with a long history and a distinctive, fully parenthesized prefix notation.{{Cite book |last1=Reilly |first1=Edwin D. |url=https://archive.org/details/milestonesincomp0000reil |title=Milestones in computer science and information technology |publisher=Greenwood Publishing Group |year=2003 |isbn=978-1-57356-521-9 |pages=[https://archive.org/details/milestonesincomp0000reil/page/156 156]–157 |url-access=registration}}}}
{{term|logic programming}}
{{defn|A type of programming paradigm which is largely based on formal logic. Any program written in a logic {{gli|programming language}} is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include {{gli|Prolog}}, {{gli|answer set programming}} (ASP), and {{gli|Datalog}}.}}
{{anchor|long short-term memory}}{{term|long short-term memory (LSTM)}}
{{defn|An artificial {{gli|recurrent neural network}} architecture{{Cite journal |last1=Hochreiter |first1=Sepp |last2=Schmidhuber |first2=Jürgen |year=1997 |title=Long short-term memory |journal=Neural Computation |volume=9 |issue=8 |pages=1735–1780 |doi=10.1162/neco.1997.9.8.1735 |pmid=9377276|s2cid=1915014 }} used in the field of {{gli|deep learning}}. Unlike standard feedforward neural networks, LSTM has feedback connections that make it a "general purpose computer" (that is, it can compute anything that a {{gli|Turing machine}} can).{{Cite book |last1=Siegelmann |first1=Hava T. |last2=Sontag |first2=Eduardo D. |title=Proceedings of the fifth annual workshop on Computational learning theory |chapter=On the computational power of neural nets |date=1992 |work=ACM |isbn=978-0897914970 |volume=COLT '92 |pages=440–449 |doi=10.1145/130385.130432|s2cid=207165680 }} It can not only process single data points (such as images), but also entire sequences of data (such as speech or video).}}
{{glossaryend}}
M
{{glossary}}
{{anchor|machine vision}}{{term|machine vision (MV)}}
{{defn|The technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision is a term encompassing a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance.}}
{{term|Markov chain}}
{{defn|A stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.{{Cite web |url=https://en.oxforddictionaries.com/definition/us/markov_chain |archive-url=https://web.archive.org/web/20171215001435/https://en.oxforddictionaries.com/definition/us/markov_chain |url-status=dead |archive-date=15 December 2017 |title=Markov chain {{!}} Definition of Markov chain in US English by Oxford Dictionaries |website=Oxford Dictionaries {{!}} English |access-date=2017-12-14}}[https://brilliant.org/wiki/markov-chains/ Definition at Brilliant.org "Brilliant Math and Science Wiki"]. Retrieved 12 May 2019}}
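A minimal sketch of a two-state weather Markov chain with illustrative transition probabilities; each step samples the next state using only the current state, never the earlier history.
<syntaxhighlight lang="python">
import random

# Illustrative transition probabilities: P(next state | current state).
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(chain, state, steps):
    history = [state]
    for _ in range(steps):
        next_states = list(chain[state])
        weights = [chain[state][s] for s in next_states]
        state = random.choices(next_states, weights=weights)[0]  # depends only on current state
        history.append(state)
    return history

print(simulate(transitions, "sunny", 10))
</syntaxhighlight>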
{{anchor|markov decision process}}{{term|Markov decision process (MDP)}}
{{defn|A discrete time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming and {{gli|reinforcement learning}}.}}
{{term|mathematical optimization}}
{{ghat|Also mathematical programming.}}
{{defn|In mathematics, computer science, and operations research, the selection of a best element (with regard to some criterion) from some set of available alternatives."[https://glossary.computing.society.informs.org/index.php?page=nature.html The Nature of Mathematical Programming] {{webarchive|url=https://web.archive.org/web/20140305080324/https://glossary.computing.society.informs.org/index.php?page=nature.html |date=2014-03-05 }}," Mathematical Programming Glossary, INFORMS Computing Society.}}
{{anchor|machine learning}}{{term|machine learning (ML)}}
{{defn|The scientific study of {{gli|algorithm|algorithms}} and statistical models that computer systems use in order to perform a specific task effectively without using explicit instructions, relying on patterns and inference instead.}}
{{term|machine listening}}
{{ghat|Also computer audition (CA).}}
{{defn|A general field of study of {{gli|algorithm|algorithms}} and systems for audio understanding by machine.{{Cite book |last1=Wang |first1=Wenwu |url=https://igi-global.com/book/machine-audition-principles-algorithms-systems/40288 |title=Machine Audition: Principles, Algorithms and Systems |date=1 July 2010 |publisher=IGI Global |isbn=9781615209194 |via=igi-global.com}}{{Cite web |url=https://epubs.surrey.ac.uk/596085/1/Wang_Preface_MA_2010.pdf |title=Machine Audition: Principles, Algorithms and Systems}}}}
{{term|machine perception}}
{{defn|The capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them.Malcolm Tatum (October 3, 2012). "What is Machine Perception".Alexander Serov (January 29, 2013). "Subjective Reality and Strong Artificial Intelligence" (PDF).{{Cite web |url=https://ccs.fau.edu/~hahn/mpcr/ |title=Machine Perception & Cognitive Robotics Laboratory |website=ccs.fau.edu |access-date=2016-06-18}}}}
{{term|mechanism design}}
{{defn|A field in economics and game theory that takes an engineering approach to designing economic mechanisms or incentives, toward desired objectives, in strategic settings, where players act rationally. Because it starts at the end of the game, then goes backwards, it is also called reverse game theory. It has broad applications, from economics and politics (markets, auctions, voting procedures) to networked-systems (internet interdomain routing, sponsored search auctions).}}
{{term|mechatronics}}
{{ghat|Also mechatronic engineering.}}
{{defn|A multidisciplinary branch of engineering that focuses on the engineering of both electrical and mechanical systems, and also includes a combination of robotics, electronics, computer, telecommunications, systems, control, and product engineering.{{Cite web |url=https://mme.uwaterloo.ca/undergrad/mechatronics/prospective/prospective.html |title=What is Mechatronics Engineering? |website=Prospective Student Information |publisher=University of Waterloo |url-status=dead |archive-url=https://web.archive.org/web/20111006100431/https://mme.uwaterloo.ca/undergrad/mechatronics/prospective/prospective.html |archive-date=6 October 2011 |access-date=30 May 2011}}{{Cite web |url=https://mechatronics.tul.cz |title=Mechatronics (Bc., Ing., PhD.) |access-date=15 April 2011 |archive-date=15 August 2016 |archive-url=https://web.archive.org/web/20160815193349/https://mechatronics.tul.cz/ |url-status=dead }}}}
{{term|metabolic network reconstruction and simulation}}
{{defn|Allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology.{{Cite journal |last1=Franke |last2=Siezen |first2=Teusink |year=2005 |title=Reconstructing the metabolic network of a bacterium from its genome. |journal=Trends in Microbiology |volume=13 |issue=11 |pages=550–558 |doi=10.1016/j.tim.2005.09.001 |pmid=16169729}}}}
{{term|metaheuristic}}
{{defn|In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial {{gli|search algorithm}}) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity.{{Cite journal |last1=Balamurugan |first1=R. |last2=Natarajan |first2=A.M. |last3=Premalatha |first3=K. |year=2015 |title=Stellar-Mass Black Hole Optimization for Biclustering Microarray Gene Expression Data |journal=Applied Artificial Intelligence |volume=29 |issue=4 |pages=353–381 |doi=10.1080/08839514.2015.1016391|s2cid=44624424 |doi-access=free }}{{Cite journal |last1=Bianchi |first1=Leonora |last2=Dorigo |first2=Marco |last3=Maria Gambardella |first3=Luca |last4=Gutjahr |first4=Walter J. |year=2009 |title=A survey on metaheuristics for stochastic combinatorial optimization |journal=Natural Computing |volume=8 |issue=2 |pages=239–287 |doi=10.1007/s11047-008-9098-4|s2cid=9141490 |url=https://doc.rero.ch/record/319945/files/11047_2008_Article_9098.pdf }} Metaheuristics sample a set of solutions which is too large to be completely sampled.}}
{{term|model checking}}
{{defn|In computer science, model checking or property checking is, for a given model of a system, exhaustively and automatically checking whether this model meets a given specification. Typically, one has hardware or software systems in mind, whereas the specification contains safety requirements such as the absence of deadlocks and similar critical states that can cause the system to crash. Model checking is a technique for automatically verifying correctness properties of finite-state systems.}}
{{term|modus ponens}}
{{defn|In propositional logic, modus ponens is a rule of inference.Herbert B. Enderton, 2001, A Mathematical Introduction to Logic Second Edition Enderton:110, Harcourt Academic Press, Burlington MA, {{ISBN|978-0-12-238452-3}}. It can be summarized as "P implies Q and P is asserted to be true, therefore Q must be true."}}
{{term|modus tollens}}
{{defn|In propositional logic, modus tollens is a valid argument form and a rule of inference. It is an application of the general truth that if a statement is true, then so is its contrapositive. The inference rule modus tollens asserts that the inference from P implies Q to the negation of Q implies the negation of P is valid.}}
{{term|Monte Carlo tree search}}
{{defn|In computer science, Monte Carlo tree search (MCTS) is a heuristic {{gli|search algorithm}} for some kinds of decision processes.}}
{{anchor|multi-agent system}}{{term|multi-agent system (MAS)}}
{{ghat|Also self-organized system.}}
{{defn|A computerized system composed of multiple interacting {{gli|intelligent agent|intelligent agents}}. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or {{gli|reinforcement learning}}.}}
{{anchor|multilayer perceptron}}{{term|multilayer perceptron (MLP)}}
{{defn|In {{gli|deep learning}}, a multilayer perceptron (MLP) is a name for a modern feedforward {{gli|neural network}} consisting of fully connected neurons with nonlinear {{gli|activation function|activation functions}}, organized in layers, notable for being able to distinguish data that is not linearly separable.Cybenko, G. 1989. Approximation by superpositions of a sigmoidal function Mathematics of Control, Signals, and Systems, 2(4), 303–314.}}
{{term|multi-swarm optimization}}
{{defn|A variant of particle swarm optimization (PSO) based on the use of multiple sub-swarms instead of one (standard) swarm. The general approach in multi-swarm optimization is that each sub-swarm focuses on a specific region while a specific diversification method decides where and when to launch the sub-swarms. The multi-swarm framework is especially fitted for the optimization on multi-modal problems, where multiple (local) optima exist.}}
{{term|mutation}}
{{defn|A genetic operator used to maintain genetic diversity from one generation of a population of genetic algorithm chromosomes to the next. It is analogous to biological mutation. Mutation alters one or more gene values in a chromosome from its initial state. In mutation, the solution may change entirely from the previous solution. Hence GA can come to a better solution by using mutation. Mutation occurs during evolution according to a user-definable mutation probability. This probability should be set low. If it is set too high, the search will turn into a primitive random search.}}
{{term|Mycin}}
{{defn|An early {{gli|backward chaining}} expert system that used {{gli|artificial intelligence}} to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight – the name derived from the antibiotics themselves, as many antibiotics have the suffix "-mycin". The MYCIN system was also used for the diagnosis of blood clotting diseases.}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
N
{{glossary}}
{{term|naive Bayes classifier}}
{{defn|In {{gli|machine learning}}, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the {{gli|feature|features}}.}}
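A minimal sketch of a Bernoulli naive Bayes classifier over word-presence features, using add-one (Laplace) smoothing; the tiny spam/ham dataset is an illustrative assumption. The score for each class is the log prior plus a sum of per-feature log likelihoods, reflecting the naive independence assumption.
<syntaxhighlight lang="python">
import math
from collections import Counter, defaultdict

def train(documents):
    """documents: list of (set_of_words, label) pairs."""
    class_counts = Counter(label for _, label in documents)
    word_counts = defaultdict(Counter)   # per-class document frequency of each word
    vocabulary = set()
    for words, label in documents:
        word_counts[label].update(words)
        vocabulary |= words
    return class_counts, word_counts, vocabulary, len(documents)

def classify(words, model):
    class_counts, word_counts, vocabulary, total = model
    scores = {}
    for label, count in class_counts.items():
        score = math.log(count / total)                         # log prior P(class)
        for w in vocabulary:
            p = (word_counts[label][w] + 1) / (count + 2)       # smoothed P(word present | class)
            score += math.log(p if w in words else 1 - p)       # features treated as independent
        scores[label] = score
    return max(scores, key=scores.get)

docs = [({"cheap", "pills"}, "spam"), ({"cheap", "meds"}, "spam"),
        ({"meeting", "today"}, "ham"), ({"project", "meeting"}, "ham")]
model = train(docs)
print(classify({"cheap", "pills", "meds"}, model))  # 'spam'
</syntaxhighlight>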
{{term|naive semantics}}
{{defn|An approach used in computer science for representing basic knowledge about a specific domain; it has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been used to refer to the use of a limited store of generally understood knowledge about a specific domain in the world, and has been applied to fields such as the knowledge-based design of data schemas."[https://portal.acm.org/citation.cfm?id=628188 Naive Semantics to Support Automated Database Design]", IEEE Transactions on Knowledge and Data Engineering, Volume 14, issue 1 (January 2002), by V. C. Storey, R. C. Goldstein and H. Ullrich}}
{{term|name binding}}
{{defn|In programming languages, name binding is the association of entities (data and/or code) with identifiers.{{Citation |title=Using early binding and late binding in Automation |date=May 11, 2007 |url=https://support.microsoft.com/kb/245115 |publisher=Microsoft |access-date=May 11, 2009}} An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name-object bindings as a service and notation for the programmer are implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which one of the possible execution paths (temporally). Use of an identifier ''id'' in a context that establishes a binding for ''id'' is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences.}}
{{term|named-entity recognition (NER)}}
{{ghat|Also entity identification, entity chunking, and entity extraction.}}
{{defn|A subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as the person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.}}
{{term|named graph}}
{{defn|A key concept of Semantic Web architecture in which a set of Resource Description Framework statements (a graph) is identified using a URI (strictly speaking, a URIRef), allowing descriptions to be made of that set of statements, such as context, provenance information, or other such metadata. Named graphs are a simple extension of the RDF data modelhttps://w3.org/TR/PR-rdf-syntax/ "Resource Description Framework (RDF) Model and Syntax Specification" through which graphs can be created, but the model lacks an effective means of distinguishing between them once published on the Web at large.}}
{{term|natural language generation (NLG)}}
{{defn|A software process that transforms structured data into plain-English content. It can be used to produce long-form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out loud by a text-to-speech system.}}
{{anchor|natural language processing}}{{term|natural language processing (NLP)}}
{{defn|A subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.}}
{{term|natural language programming}}
{{defn|An ontology-assisted way of programming in terms of natural-language sentences, e.g. English.Miller, Lance A. "Natural language programming: Styles, strategies, and contrasts." IBM Systems Journal 20.2 (1981): 184–215.}}
{{term|network motif}}
{{defn|All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. One important local property of networks is the presence of so-called network motifs, which are defined as recurrent and statistically significant sub-graphs or patterns.}}
{{term|neural machine translation (NMT)}}
{{defn|An approach to machine translation that uses a large {{gli|artificial neural network}} to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.}}
{{anchor|artificial neural network}}
{{anchor|neural network}}
{{term|neural network}}
{{defn|A neural network can refer to either a neural circuit of biological neurons (sometimes also called a biological neural network), {{em|or}} a network of artificial neurons or nodes in the case of an artificial neural network.{{cite journal |first=J. J. |last=Hopfield |title=Neural networks and physical systems with emergent collective computational abilities |journal=Proc. Natl. Acad. Sci. U.S.A. |volume=79 |issue= 8|pages=2554–2558 |year=1982 |doi=10.1073/pnas.79.8.2554 |pmc=346238 |pmid=6953413|bibcode=1982PNAS...79.2554H |doi-access=free }} Artificial neural networks are used for solving {{gli|artificial intelligence}} (AI) problems; they model connections of biological neurons as weights between nodes. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an {{gli|activation function}} controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1.}}
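The weighted-sum-and-activation computation described above can be sketched for a single artificial neuron as follows; the weights, bias, and choice of a sigmoid activation are illustrative assumptions.
<syntaxhighlight lang="python">
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs followed by a sigmoid activation."""
    linear_combination = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-linear_combination))  # output lies between 0 and 1

# Positive weights act as excitatory connections, negative weights as inhibitory ones.
print(neuron(inputs=[0.5, 0.3], weights=[0.8, -1.2], bias=0.1))  # ~0.535
</syntaxhighlight>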
{{term|neural Turing machine (NTM)}}
{{defn|A recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of {{gli|neural network|neural networks}} with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent.{{Cite web |url=https://linkedin.com/pulse/deep-minds-interview-googles-alex-graves-koray-sophie-curtis |title=Deep Minds: An Interview with Google's Alex Graves & Koray Kavukcuoglu |access-date=May 17, 2016}} An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.{{Cite arXiv|eprint = 1410.5401|last1 = Graves|first1 = Alex|last2 = Wayne|first2 = Greg|last3 = Danihelka|first3 = Ivo|title = Neural Turing Machines|year = 2014|class = cs.NE}}}}
{{term|neuro-fuzzy}}
{{defn|Combinations of {{gli|artificial neural network|artificial neural networks}} and {{gli|fuzzy logic}}.}}
{{term|neurocybernetics}}
{{ghat|Also brain–computer interface (BCI), neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI).}}
{{defn|A direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.{{Cite journal |last1=Krucoff |first1=Max O. |last2=Rahimpour |first2=Shervin |last3=Slutzky |first3=Marc W. |last4=Edgerton |first4=V. Reggie |last5=Turner |first5=Dennis A. |date=2016-01-01 |title=Enhancing Nervous System Recovery through Neurobiologics, Neural Interface Training, and Neurorehabilitation |journal=Frontiers in Neuroscience |volume=10 |pages=584 |doi=10.3389/fnins.2016.00584 |pmc=5186786 |pmid=28082858|doi-access=free }}}}
{{term|neuromorphic engineering}}
{{ghat|Also neuromorphic computing.}}
{{defn|A concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.{{Cite journal |last1=Mead |first1=Carver |year=1990 |title=Neuromorphic electronic systems |url=https://authors.library.caltech.edu/53090/1/00058356.pdf |journal=Proceedings of the IEEE |volume=78 |issue=10 |pages=1629–1636 |citeseerx=10.1.1.161.9762 |doi=10.1109/5.58356|s2cid=1169506 }} In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors,{{Cite journal |last1=Maan |first1=A. K. |last2=Jayadevi |first2=D. A. |last3=James |first3=A. P. |date=2016-01-01 |title=A Survey of Memristive Threshold Logic Circuits |journal=IEEE Transactions on Neural Networks and Learning Systems |volume=PP |issue=99 |pages=1734–1746 |arxiv=1604.07121 |doi=10.1109/TNNLS.2016.2547842 |issn=2162-237X |pmid=27164608|s2cid=1798273 }} spintronic memories,"[https://academia.edu/37832670/A_Survey_of_Spintronic_Architectures_for_Processing-in-Memory_and_Neural_Networks A Survey of Spintronic Architectures for Processing-in-Memory and Neural Networks]", JSA, 2018 threshold switches, and transistors.{{Cite journal |last1=Zhou |first1=You |last2=Ramanathan |first2=S. |date=2015-08-01 |title=Mott Memory and Neuromorphic Devices |journal=Proceedings of the IEEE |volume=103 |issue=8 |pages=1289–1310 |doi=10.1109/JPROC.2015.2431914 |s2cid=11347598 |issn=0018-9219|url=https://zenodo.org/record/895565 }}{{Cite journal |last1=Monroe |first1=D. |year=2014 |title=Neuromorphic computing gets ready for the (really) big time |journal=Communications of the ACM |volume=57 |issue=6 |pages=13–15 |doi=10.1145/2601069|s2cid=20051102 }}{{Cite journal |last1=Zhao |first1=W. S. |last2=Agnus |first2=G. |last3=Derycke |first3=V. |last4=Filoramo |first4=A. |last5=Bourgoin |first5=J. -P. |last6=Gamrat |first6=C. |year=2010 |title=Nanotube devices based crossbar architecture: Toward neuromorphic computing |url=https://zenodo.org/record/3428659 |journal=Nanotechnology |volume=21 |issue=17 |pages=175202 |bibcode=2010Nanot..21q5202Z |doi=10.1088/0957-4484/21/17/175202 |pmid=20368686 |s2cid=16253700 |access-date=2 December 2019 |archive-date=10 April 2021 |archive-url=https://web.archive.org/web/20210410124706/https://zenodo.org/record/3428659 |url-status=dead }}{{YouTube|id=6RoiZ90mGfw|title=The Human Brain Project SP 9: Neuromorphic Computing Platform}}}}
{{term|node}}
{{defn|A basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.}}
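A brief illustrative Python sketch of a singly linked list node (the class and field names are illustrative, not standardized):
<syntaxhighlight lang="python">
class Node:
    """A linked-list node: some data plus a reference (pointer) to the next node."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

# Build a two-element list: head -> tail
tail = Node("world")
head = Node("hello", next_node=tail)
print(head.data, head.next.data)  # hello world
</syntaxhighlight>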
{{term|nondeterministic algorithm}}
{{defn|An {{gli|algorithm}} that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.}}
{{term|nouvelle AI}}
{{defn|Nouvelle AI differs from {{gli|artificial intelligence|classical AI}} by aiming to produce robots with intelligence levels similar to insects. Researchers believe that intelligence can emerge organically from simple behaviors as such intelligences interact with the "real world", rather than in the constructed worlds that symbolic AI systems typically need to have programmed into them.{{Cite web |url=https://alanturing.net/turing_archive/pages/reference%20articles/what_is_AI/What%20is%20AI11.html |title=What is Artificial Intelligence? |last1=Copeland |first1=Jack |date=May 2000 |website=AlanTuring.net |access-date=7 November 2015 |archive-date=9 November 2015 |archive-url=https://web.archive.org/web/20151109070037/http://www.alanturing.net/turing_archive/pages/reference%20articles/What_is_AI/What%20is%20AI11.html |url-status=dead }}}}
{{term|NP}}
{{defn|In {{gli|computational complexity theory}}, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time.{{Cite book |last1=Kleinberg |first1=Jon |url=https://archive.org/details/algorithmdesign0000klei |title=Algorithm Design |last2=Tardos |first2=Éva |publisher=Addison-Wesley |year=2006 |isbn=0-321-37291-3 |edition=2nd |page=[https://archive.org/details/algorithmdesign0000klei/page/464 464] |url-access=registration}} Polynomial time refers to how quickly the number of operations needed by an algorithm grows relative to the size of the problem; it is therefore a measure of an algorithm's efficiency.}}
{{term|NP-completeness}}
{{defn|In {{gli|computational complexity theory}}, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time{{Cite book |last1=Cobham |first1=Alan |title=Proc. Logic, Methodology, and Philosophy of Science II |publisher=North Holland |year=1965 |chapter=The intrinsic computational difficulty of functions |author-link=Alan Cobham}}), such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty.}}
{{term|NP-hardness}}
{{ghat|Also non-deterministic polynomial-time hardness.}}
{{defn|In {{gli|computational complexity theory}}, the defining property of a class of problems that are, informally, "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem.}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
O
{{glossary}}
{{term|Occam's razor}}
{{ghat|Also Ockham's razor or Ocham's razor.}}
{{defn|The problem-solving principle that states that when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions;{{Cite web |url=https://math.ucr.edu/home/baez/physics/General/occam.html |title=What is Occam's Razor? |website=math.ucr.edu |access-date=2019-06-01}} the principle is not meant to filter out hypotheses that make different predictions. The idea is attributed to the English Franciscan friar William of Ockham ({{c.}} 1287–1347), a scholastic philosopher and theologian.}}
{{term|offline learning}}
{{defn|A {{gli|machine learning}} training approach in which a model is trained on a fixed dataset that is not updated during the learning process.}}
{{term|online machine learning}}
{{defn|A method of {{gli|machine learning}} in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, requiring the need of out-of-core algorithms. It is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time.}}
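A minimal Python sketch of an online update rule, assuming a one-dimensional linear model trained by stochastic gradient descent on examples that arrive one at a time (the learning rate and data are illustrative):
<syntaxhighlight lang="python">
def online_sgd_step(w, b, x, y, lr=0.01):
    """One online update of a linear model y_hat = w*x + b on a single
    (x, y) example, using squared-error loss; the model never sees the
    whole dataset at once."""
    y_hat = w * x + b
    error = y_hat - y
    w -= lr * error * x   # gradient of 0.5 * error**2 with respect to w
    b -= lr * error       # gradient with respect to b
    return w, b

w, b = 0.0, 0.0
stream = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # data arriving sequentially
for x, y in stream:
    w, b = online_sgd_step(w, b, x, y)
</syntaxhighlight>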
{{term|ontology learning}}
{{ghat|Also ontology extraction, ontology generation, or ontology acquisition.}}
{{defn|The automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval.}}
{{term|OpenAI}}
{{defn|The for-profit corporation OpenAI LP, whose parent organization is the non-profit organization OpenAI Inc.,"OpenAI shifts from nonprofit to 'capped-profit' to attract capital". TechCrunch. Retrieved 2019-05-10. which conducts research in the field of {{gli|artificial intelligence}} (AI) with the stated aim of promoting and developing {{gli|friendly artificial intelligence|friendly AI}} in such a way as to benefit humanity as a whole.}}
{{term|OpenCog}}
{{defn|A project that aims to build an {{gli|open-source software|open-source}} artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent {{gli|artificial general intelligence}} (AGI) as an emergent phenomenon of the whole system.{{Cite web |url=https://cybertechnews.org/?p=915 |title=OpenCog: Open-Source Artificial General Intelligence for Virtual Worlds |website=CyberTech News |date=2009-03-06 |url-status=dead |archive-url=https://web.archive.org/web/20090306053354/https://cybertechnews.org/?p=915 |archive-date=2009-03-06 |access-date=2016-10-01}}}}
{{term|Open Mind Common Sense}}
{{defn|An artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web.}}
{{anchor|open-source software}}{{term|open-source software (OSS)}}
{{defn|A type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose.{{Cite book |last1=St. Laurent |first1=Andrew M. |url=https://books.google.com/books?id=04jG7TTLujoC&pg=PA4 |title=Understanding Open Source and Free Software Licensing |publisher=O'Reilly Media |year=2008 |isbn=9780596553951 |page=4}} Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration.{{Cite journal |last1=Levine |first1=Sheen S. |last2=Prietula |first2=Michael J. |date=2013-12-30 |title=Open Collaboration for Innovation: Principles and Performance |journal=Organization Science |volume=25 |issue=5 |pages=1414–1433 |arxiv=1406.7541 |doi=10.1287/orsc.2013.0872 |s2cid=6583883 |issn=1047-7039}}}}
{{anchor|underfitting}}{{term|overfitting}}
{{defn|"The production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".Definition of "[https://web.archive.org/web/20171107014257/https://en.oxforddictionaries.com/definition/overfitting overfitting]" at OxfordDictionaries.com: this definition is specifically for statistics. In other words, an overfitted model memorizes training data details but cannot {{gli|generalization|generalize}} to new data. Conversely, an underfitted model is too simple to capture the complexity of the training data.}}
{{glossaryend}}
P
{{glossary}}
{{term|partial order reduction}}
{{defn|A technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions, which result in the same state when executed in different orders.}}
{{term|partially observable Markov decision process (POMDP)}}
{{defn|A generalization of a {{gli|markov decision process|Markov decision process}} (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.}}
{{term|particle swarm optimization (PSO)}}
{{defn|A computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.}}
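A compact illustrative Python sketch of particle swarm optimization minimizing a simple function; the inertia and acceleration coefficients (w, c1, c2) and the search bounds are arbitrary example values:
<syntaxhighlight lang="python">
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise f by moving particles according to their current velocity,
    their personal best position, and the swarm's global best position."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best known position
    gbest = min(pbest, key=f)            # swarm's best known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

print(pso(lambda p: sum(x * x for x in p)))  # typically approaches [0, 0]
</syntaxhighlight>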
{{term|pathfinding}}
{{ghat|Also pathing.}}
{{defn|The plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes. This field of research is based heavily on {{gli|Dijkstra's algorithm}} for finding a shortest path on a weighted graph.}}
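An illustrative Python sketch of Dijkstra's algorithm on a small weighted graph represented as an adjacency list (the graph itself is a made-up example):
<syntaxhighlight lang="python">
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from `start` on a weighted graph given as
    {node: [(neighbour, edge_weight), ...]}."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
</syntaxhighlight>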
{{term|pattern recognition}}
{{defn|Concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories.Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning (PDF). Springer. p. vii. Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years.}}
{{term|perceptron}}
{{defn|An {{gli|algorithm}} for {{gli|supervised learning}} of binary {{gli|classification|classifiers}}.}}
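A minimal Python sketch of the perceptron learning rule for labels in {-1, +1}; the toy dataset and learning rate are illustrative:
<syntaxhighlight lang="python">
def train_perceptron(data, epochs=10, lr=1.0):
    """Perceptron learning rule for binary classification with labels in {-1, +1}.
    `data` is a list of (feature_vector, label) pairs."""
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:           # misclassified example
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data (logical AND, labels -1/+1)
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
print(train_perceptron(data))
</syntaxhighlight>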
{{term|predicate logic}}
{{ghat|Also first-order logic and first-order predicate calculus.}}
{{defn|A collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions of the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations;Hughes, G. E., & Cresswell, M. J., [https://books.google.com/books?id=Dsn1xWNB4MEC&q=%22first-order+logic%22 A New Introduction to Modal Logic] (London: Routledge, 1996), [https://books.google.com/books?id=_CB5wiBeaA4C&pg=PA161 p.161]. in this sense, propositional logic is the foundation of first-order logic.}}
{{term|predictive analytics}}
{{defn|A variety of statistical techniques from data mining, predictive modelling, and {{gli|machine learning}}, that analyze current and historical facts to make predictions about future or otherwise unknown events.{{Citation |last1=Nyce |first1=Charles |title=Predictive Analytics White Paper |url=https://the-digital-insurer.com/wp-content/uploads/2013/12/78-Predictive-Modeling-White-Paper.pdf |page=1 |year=2007 |publisher=American Institute for Chartered Property Casualty Underwriters/Insurance Institute of America}}{{Citation |last1=Eckerson |first1=Wayne |title=Extending the Value of Your Data Warehousing Investment |date=May 10, 2007 |url=https://tdwi.org/articles/2007/05/10/predictive-analytics.aspx?sc_lang=en |publisher=The Data Warehouse Institute}}}}
{{term|principal component analysis (PCA)}}
{{defn|A statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component, in turn, has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.}}
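An illustrative Python sketch of PCA computed with NumPy via singular value decomposition; the tiny dataset is a made-up example:
<syntaxhighlight lang="python">
import numpy as np

def pca(X, n_components):
    """Principal component analysis via singular value decomposition.
    Rows of X are observations, columns are (possibly correlated) variables."""
    X_centered = X - X.mean(axis=0)          # PCA is defined on centred data
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]           # orthogonal directions of maximal variance
    return X_centered @ components.T         # projected, uncorrelated scores

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0]])
print(pca(X, n_components=1))
</syntaxhighlight>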
{{term|principle of rationality}}
{{ghat|Also rationality principle.}}
{{defn|A principle coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework.Karl R. Popper, The Myth of the Framework, London (Routledge) 1994, chap. 8. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism.Karl R. Popper, The Poverty of Historicism, London (Routledge) 1960, chap. iv, sect. 31. According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior that he used to drive his model of situational logic.}}
{{term|probabilistic programming (PP)}}
{{defn|A programming paradigm in which probabilistic models are specified and inference for these models is performed automatically.{{Cite news |url=https://phys.org/news/2015-04-probabilistic-lines-code-thousands.html |title=Probabilistic programming does in 50 lines of code what used to take thousands |date=April 13, 2015 |work=phys.org |access-date=2015-04-13}} It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable.{{Cite web |url=https://probabilistic-programming.org/wiki/Home |title=Probabilistic Programming |website=probabilistic-programming.org |url-status=dead |archive-url=https://web.archive.org/web/20160110035042/https://probabilistic-programming.org/wiki/Home |archive-date=10 January 2016 |access-date=31 July 2019}}Pfeffer, Avrom (2014), Practical Probabilistic Programming, Manning Publications. p.28. {{ISBN|978-1 6172-9233-0}} It can be used to create systems that help make decisions in the face of uncertainty. Programming languages used for probabilistic programming are referred to as "Probabilistic programming languages" (PPLs).}}
{{anchor|production system}}{{term|production system}}
{{defn|A computer program typically used to provide some form of AI, which consists primarily of a set of rules about behavior, but also includes the mechanism necessary to follow those rules as the system responds to states of the world.}}
{{term|programming language}}
{{defn|A {{gli|formal language}}, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement {{gli|algorithm|algorithms}}.}}
{{term|Prolog}}
{{defn|A {{gli|logic programming}} language associated with artificial intelligence and computational linguistics.{{Cite book |last1=Clocksin |first1=William F. |title=Programming in Prolog |last2=Mellish |first2=Christopher S. |publisher=Springer-Verlag |year=2003 |isbn=978-3-540-00678-7 |location=Berlin; New York}}{{Cite book |last1=Bratko |first1=Ivan |title=Prolog programming for artificial intelligence |publisher=Addison Wesley |year=2012 |isbn=978-0-321-41746-6 |edition=4th |location=Harlow, England; New York}}{{Cite book |last1=Covington |first1=Michael A. |title=Natural language processing for Prolog programmers |publisher=Prentice Hall |year=1994 |isbn=978-0-13-629213-5 |location=Englewood Cliffs, N.J.}} Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.Lloyd, J. W. (1984). Foundations of logic programming. Berlin: Springer-Verlag. {{ISBN|978-3-540-13299-8}}.}}
{{term|propositional calculus}}
{{ghat|Also propositional logic, statement logic, sentential calculus, sentential logic, and zeroth-order logic.}}
{{defn|A branch of logic which deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.}}
{{anchor|proximal policy optimization}}{{term|proximal policy optimization (PPO)}}
{{defn|A {{gli|reinforcement learning}} {{gli|algorithm}} for training an {{gli|intelligent agent}}'s decision function to accomplish difficult tasks.}}
{{term|Python}}
{{defn|An interpreted, high-level, general-purpose {{gli|programming language}} created by Guido van Rossum and first released in 1991. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.Kuhlman, Dave. "A Python Book: Beginning Python, Advanced Python, and Python Exercises". Section 1.1. Archived from the original (PDF) on 23 June 2012.}}
{{term|PyTorch}}
{{defn|A {{gli|machine learning}} library based on the Torch library,{{cite news|url=https://www.infoworld.com/article/3159120/artificial-intelligence/facebook-brings-gpu-powered-machine-learning-to-python.html|title=Facebook brings GPU-powered machine learning to Python|last=Yegulalp|first=Serdar|date=19 January 2017|work=InfoWorld|access-date=11 December 2017}}{{cite web|url=https://www.oreilly.com/ideas/why-ai-and-machine-learning-researchers-are-beginning-to-embrace-pytorch|title=Why AI and machine learning researchers are beginning to embrace PyTorch|last=Lorica|first=Ben|date=3 August 2017|publisher=O'Reilly Media|access-date=11 December 2017}}{{Cite book|title=Deep Learning with Python|last=Ketkar|first=Nikhil|date=2017|publisher=Apress, Berkeley, CA|isbn=9781484227657|pages=195–208|language=en|doi=10.1007/978-1-4842-2766-4_12|chapter=Introduction to PyTorch}} used for applications such as {{gli|computer vision}} and {{gli|natural language processing}},{{Cite web|url=https://www.datacamp.com/tutorial/nlp-with-pytorch-a-comprehensive-guide|title=NLP with PyTorch: A Comprehensive Guide|author=Moez Ali|date=Jun 2023|website=datacamp.com|language=en|access-date=2024-04-01}} originally developed by Meta AI and now part of the Linux Foundation umbrella.{{Cite news|url=https://www.oreilly.com/ideas/when-two-trends-fuse-pytorch-and-recommender-systems|title=When two trends fuse: PyTorch and recommender systems|last=Patel|first=Mo|date=2017-12-07|work=O'Reilly Media|access-date=2017-12-18|language=en}}{{Cite news|url=https://techcrunch.com/2017/09/07/facebook-and-microsoft-collaborate-to-simplify-conversions-from-pytorch-to-caffe2/|title=Facebook and Microsoft collaborate to simplify conversions from PyTorch to Caffe2|last=Mannes|first=John|work=TechCrunch|access-date=2017-12-18|language=en|quote=FAIR is accustomed to working with PyTorch – a deep learning framework optimized for achieving state of the art results in research, regardless of resource constraints. Unfortunately in the real world, most of us are limited by the computational capabilities of our smartphones and computers.}}{{Cite web|url=https://venturebeat.com/2017/11/29/tech-giants-are-using-open-source-frameworks-to-dominate-the-ai-community/|title=Tech giants are using open source frameworks to dominate the AI community|last=Arakelyan|first=Sophia|date=2017-11-29|website=VentureBeat|language=en-US|access-date=2017-12-18}}{{Cite web |title=PyTorch strengthens its governance by joining the Linux Foundation |url=https://pytorch.org/blog/PyTorchfoundation/ |access-date=2022-09-13 |website=pytorch.org |language=en}}}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
Q
{{glossary}}
{{term|Q-learning}}
{{defn|A model-free {{gli|reinforcement learning}} {{gli|algorithm}} for learning the value of an action in a particular state.}}
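A minimal Python sketch of a single tabular Q-learning update; the state names, learning rate, and discount factor are illustrative assumptions:
<syntaxhighlight lang="python">
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One Q-learning update: move Q(s, a) toward the reward plus the
    discounted value of the best action in the next state (off-policy)."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_target = reward + gamma * best_next
    Q[state][action] += alpha * (td_target - Q[state][action])

# Tiny two-state example with a table of action values
Q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 0.0, "right": 0.0}}
q_update(Q, "s0", "right", reward=1.0, next_state="s1")
print(Q["s0"]["right"])  # 0.1
</syntaxhighlight>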
{{term|qualification problem}}
{{defn|In philosophy and artificial intelligence (especially {{gli|knowledge-based system|knowledge-based systems}}), the qualification problem is concerned with the impossibility of listing all of the preconditions required for a real-world action to have its intended effect.{{Cite book |last1=Reiter |first1=Raymond |title=Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems |url=https://archive.org/details/knowledgeactionl00reit_022 |url-access=limited |publisher=The MIT Press |year=2001 |isbn=9780262527002 |location=Cambridge, Massachusetts |pages=[https://archive.org/details/knowledgeactionl00reit_022/page/n40 20]–22}}{{Cite journal |last1=Thielscher |first1=Michael |date=September 2001 |title=The Qualification Problem: A solution to the problem of anomalous models |journal=Artificial Intelligence |volume=131 |issue=1–2 |pages=1–37 |doi=10.1016/S0004-3702(01)00131-X}} It might be posed as the question of how to deal with the things that prevent an agent from achieving its intended result. It is strongly connected to, and opposite the ramification side of, the frame problem.}}
{{term|quantifier}}
{{defn|In logic, quantification specifies the quantity of specimens in the domain of discourse that satisfy an open formula. The two most common quantifiers mean "for all" and "there exists". For example, in arithmetic, quantifiers allow one to say that the natural numbers go on forever, by writing that for all n (where n is a natural number), there is another number (say, the successor of n) which is one bigger than n.}}
{{term|quantum computing}}
{{defn|The use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.{{Cite book |title=Quantum Computing : Progress and Prospects (2018) |publisher=National Academies Press |year=2019 |isbn=978-0-309-47969-1 |editor-last=Grumbling |editor-first=Emily |location=Washington, DC |page=I-5 |doi=10.17226/25196 |s2cid=125635007 |oclc=1081001288 |editor-last2=Horowitz |editor-first2=Mark}}{{rp|I-5}}}}
{{term|query language}}
{{defn|Query languages or data query languages (DQLs) are computer languages used to make queries in databases and information systems. Broadly, query languages can be classified according to whether they are database query languages or information retrieval query languages. The difference is that a database query language attempts to give factual answers to factual questions, while an information retrieval query language attempts to find documents containing information that is relevant to an area of inquiry.}}
{{glossaryend}}
R
{{glossary}}
{{term|R programming language}}
{{defn|A {{gli|programming language}} and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing.{{refn | R language and environment
{{Cite web |url=https://cran.r-project.org/doc/FAQ/R-FAQ.html#What-is-R_003f |title=R FAQ |last1=Hornik |first1=Kurt |date=2017-10-04 |website=The Comprehensive R Archive Network |at=2.1 What is R? |access-date=2018-08-06}}
R Foundation
{{Cite web |url=https://cran.r-project.org/doc/FAQ/R-FAQ.html#What-is-the-R-Foundation_003f |title=R FAQ |last1=Hornik |first1=Kurt |date=2017-10-04 |website=The Comprehensive R Archive Network |at=2.13 What is the R Foundation? |access-date=2018-08-06}}
The R Core Team asks authors who use R in their data analysis to cite the software using:
R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://R-project.org/.
}} The R language is widely used among statisticians and data miners for developing statistical software{{refn | widely used
{{Cite web |last1=Fox |first1=John |last2=Andersen |first2=Robert |name-list-style=amp |date=January 2005 |title=Using the R Statistical Computing Environment to Teach Social Statistics Courses |url=https://socialsciences.mcmaster.ca/jfox/Teaching-with-R.pdf |publisher=Department of Sociology, McMaster University |access-date=2018-08-06}}
{{Cite news |last1=Vance |first1=Ashlee |author-link=Ashlee Vance |url=https://nytimes.com/2009/01/07/technology/business-computing/07program.html |title=Data Analysts Captivated by R's Power |date=2009-01-06 |work=The New York Times |access-date=2018-08-06 |quote=R is also the name of a popular programming language used by a growing number of data analysts inside corporations and academia. It is becoming their lingua franca...}}
}} and data analysis.{{Cite news |last1=Vance |first1=Ashlee |url=https://nytimes.com/2009/01/07/technology/business-computing/07program.html |title=Data Analysts Captivated by R's Power |date=2009-01-06 |work=The New York Times |access-date=2018-08-06 |quote=R is also the name of a popular programming language used by a growing number of data analysts inside corporations and academia. It is becoming their lingua franca...}}}}
{{term|radial basis function network}}
{{defn|In the field of mathematical modeling, a radial basis function network is an {{gli|artificial neural network}} that uses radial basis functions as {{gli|activation function|activation functions}}. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, {{gli|classification}}, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.{{Cite tech report |last1=Broomhead |first1=D. S. |last2=Lowe |first2=David |year=1988 |title=Radial basis functions, multi-variable functional interpolation and adaptive networks |institution=RSRE |number=4148 |url=https://apps.dtic.mil/sti/pdfs/ADA196234.pdf|archive-url=https://web.archive.org/web/20130409223044/https://dtic.mil/cgi-bin/GetTRDoc?AD=ADA196234|url-status=live|archive-date=9 April 2013}}{{Cite journal |last1=Broomhead |first1=D. S. |last2=Lowe |first2=David |year=1988 |title=Multivariable functional interpolation and adaptive networks |url=https://sci2s.ugr.es/keel/pdf/algorithm/articulo/1988-Broomhead-CS.pdf |journal=Complex Systems |volume=2 |pages=321–355}}{{Cite journal |last1=Schwenker |first1=Friedhelm |last2=Kestler |first2=Hans A. |last3=Palm |first3=Günther |year=2001 |title=Three learning phases for radial-basis-function networks |journal=Neural Networks |volume=14 |issue=4–5 |pages=439–458 |doi=10.1016/s0893-6080(01)00027-2 |pmid=11411631}}}}
{{term|random forest}}
{{ghat|Also random decision forest.}}
{{defn|An ensemble learning method for {{gli|classification}}, {{gli|regression}}, and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees.Ho, Tin Kam (1995). Random Decision Forests (PDF). Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14–16 August 1995. pp. 278–282. Archived from the original (PDF) on 17 April 2016. Retrieved 5 June 2016.{{Cite journal |last1=Ho |first1=TK |year=1998 |title=The Random Subspace Method for Constructing Decision Forests |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=20 |issue=8 |pages=832–844 |doi=10.1109/34.709601|s2cid=206420153 |url=https://repositorio.unal.edu.co/handle/unal/81834 }} Random decision forests correct for decision trees' habit of {{gli|overfitting}} to their training set.Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome(2008). The Elements of Statistical Learning (2nd ed.). Springer. {{ISBN|0-387-95284-5}}.}}
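A brief usage sketch, assuming the scikit-learn library is installed; the dataset and the choice of 100 trees are arbitrary examples rather than recommendations:
<syntaxhighlight lang="python">
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of 100 decision trees; the predicted class is the mode of the trees' votes.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out data
</syntaxhighlight>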
{{term|reasoning system}}
{{defn|In information technology, a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.}}
{{term|recurrent neural network (RNN)}}
{{defn|A class of {{gli|artificial neural network|artificial neural networks}} where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition{{Cite journal |last1=Graves |first1=A. |last2=Liwicki |first2=M. |last3=Fernandez |first3=S. |last4=Bertolami |first4=R. |last5=Bunke |first5=H. |last6=Schmidhuber |first6=J. |author-link6=Jürgen Schmidhuber |year=2009 |title=A Novel Connectionist System for Improved Unconstrained Handwriting Recognition |url=https://idsia.ch/~juergen/tpami_2008.pdf |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=31 |issue=5 |pages=855–868 |citeseerx=10.1.1.139.4502 |doi=10.1109/tpami.2008.137 |pmid=19299860|s2cid=14635907 }} or speech recognition.{{Cite web |url=https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43905.pdf |title=Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling |last1=Sak |first1=Hasim |last2=Senior |first2=Andrew |year=2014 |url-status=dead |archive-url=https://web.archive.org/web/20180424203806/https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43905.pdf |archive-date=24 April 2018 |access-date=6 August 2019 |last3=Beaufays |first3=Francoise}}{{Cite arXiv |eprint=1410.4281 |class=cs.CL |first1=Xiangang |last1=Li |first2=Xihong |last2=Wu |title=Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition |date=2014-10-15}}}}
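A minimal NumPy sketch of the forward pass of a vanilla recurrent cell, showing how the hidden state carries information across time steps; the dimensions and random weights are illustrative:
<syntaxhighlight lang="python">
import numpy as np

def rnn_forward(inputs, Wx, Wh, b, h0):
    """Forward pass of a vanilla recurrent cell: the hidden state h acts as
    memory, carrying information from earlier inputs to later time steps."""
    h = h0
    states = []
    for x in inputs:                       # inputs is a sequence of vectors
        h = np.tanh(Wx @ x + Wh @ h + b)   # new state depends on input and old state
        states.append(h)
    return states

rng = np.random.default_rng(0)
Wx, Wh, b = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)
seq = [rng.normal(size=3) for _ in range(5)]
print(rnn_forward(seq, Wx, Wh, b, h0=np.zeros(4))[-1])
</syntaxhighlight>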
{{anchor|regression}}{{term|regression analysis}}
{{defn|A set of statistical processes for estimating the relationships between a dependent variable (often called the outcome or response variable, or label in {{gli|machine learning}}) and one or more error-free independent variables (often called regressors, predictors, covariates, explanatory variables, or {{gli|feature|features}}). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion.}}
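An illustrative NumPy sketch of the most common case, ordinary least squares linear regression on a made-up one-dimensional dataset:
<syntaxhighlight lang="python">
import numpy as np

# Fit y ~ a*x + b by ordinary least squares (the most common form of regression).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])
A = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # minimises ||A @ [a, b] - y||^2
print(a, b)
</syntaxhighlight>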
{{anchor|regularization}}{{term|regularization}}
{{defn|A set of techniques such as {{gli|dropout}}, {{gli|early stopping}}, and L1 and L2 regularization used to reduce {{gli|overfitting}} when training a learning {{gli|algorithm}}.}}
{{anchor|reinforcement learning}}{{term|reinforcement learning (RL)}}
{{defn|An area of {{gli|machine learning}} concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside {{gli|supervised learning|supervised}} and {{gli|unsupervised learning}}. It differs from supervised learning in that labelled input/output pairs need not be presented, and sub-optimal actions need not be explicitly corrected. Instead the focus is finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).{{Cite journal |last1=Kaelbling |first1=Leslie P. |last2=Littman |first2=Michael L. |author-link2=Michael L. Littman |last3=Moore |first3=Andrew W. |author-link3=Andrew W. Moore |year=1996 |title=Reinforcement Learning: A Survey |url=https://cs.washington.edu/research/jair/abstracts/kaelbling96a.html |url-status=dead |journal=Journal of Artificial Intelligence Research |volume=4 |pages=237–285 |arxiv=cs/9605103 |doi=10.1613/jair.301 |s2cid=1708582 |archive-url=http://webarchive.loc.gov/all/20011120234539/https://cs.washington.edu/research/jair/abstracts/kaelbling96a.html |archive-date=20 November 2001 |author-link1=Leslie P. Kaelbling |access-date=5 July 2022 }}}}
{{anchor|reinforcement learning from human feedback}}{{term|reinforcement learning from human feedback (RLHF)}}
{{defn|A technique that involves training a "reward model" to predict how humans rate the quality of generated content, and then training a {{gli|generative artificial intelligence|generative AI}} model to satisfy this reward model via {{gli|reinforcement learning}}. It can be used, for example, to make the generative AI model more truthful or less harmful.{{Cite web |last=Patrizio |first=Andy |title=What is reinforcement learning from human feedback (RLHF)? |url=https://www.techtarget.com/whatis/definition/reinforcement-learning-from-human-feedback-RLHF |access-date=2024-01-28 |website=TechTarget |language=en}}}}
{{term|representation learning}}
{{defn|See {{gli|feature learning}}.}}
{{term|reservoir computing}}
{{defn|A framework for computation that may be viewed as an extension of {{gli|neural network|neural networks}}.Schrauwen, Benjamin, David Verstraeten, and Jan Van Campenhout. "An overview of reservoir computing: theory, applications, and implementations." Proceedings of the European Symposium on Artificial Neural Networks ESANN 2007, pp. 471–482. Typically an input signal is fed into a fixed (random) dynamical system called a reservoir and the dynamics of the reservoir map the input to a higher dimension. Then a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage and the reservoir is fixed. Liquid-state machines{{Cite journal |last1=Mass |first1=Wolfgang |author-link=Wolfgang Maas |last2=Nachtschlaeger |first2=T. |last3=Markram |first3=H. |year=2002 |title=Real-time computing without stable states: A new framework for neural computation based on perturbations |journal=Neural Computation |volume=14 |issue=11 |pages=2531–2560|doi=10.1162/089976602760407955 |pmid=12433288 |s2cid=1045112 |url=http://infoscience.epfl.ch/record/117805 }} and echo state networksJaeger, Herbert, "The echo state approach to analyzing and training recurrent neural networks." Technical Report 154 (2001), German National Research Center for Information Technology. are two major types of reservoir computing.{{cite journal | doi=10.4249/scholarpedia.2330 | doi-access=free | title=Echo state network | year=2007 | last1=Jaeger | first1=Herbert | journal=Scholarpedia | volume=2 | issue=9 | page=2330 | bibcode=2007SchpJ...2.2330J }}}}
{{term|Resource Description Framework (RDF)}}
{{defn|A family of World Wide Web Consortium (W3C) specifications{{Cite web |url=https://dblab.ntua.gr/~bikakis/XMLSemanticWebW3CTimeline.pdf |title=XML and Semantic Web W3C Standards Timeline |date=2012-02-04 |access-date=5 July 2022 |archive-date=6 July 2022 |archive-url=https://web.archive.org/web/20220706210815/http://www.dblab.ntua.gr/~bikakis/XMLSemanticWebW3CTimeline.pdf |url-status=dead }} originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax notations and data serialization formats. It is also used in knowledge management applications.}}
{{term|restricted Boltzmann machine (RBM)}}
{{defn|A generative stochastic {{gli|artificial neural network}} that can learn a probability distribution over its set of inputs.}}
{{term|Rete algorithm}}
{{defn|A pattern matching {{gli|algorithm}} for implementing rule-based systems. The algorithm was developed to efficiently apply many rules or patterns to many objects, or facts, in a knowledge base. It is used to determine which of the system's rules should fire based on its data store, its facts.}}
{{term|robotics}}
{{defn|An interdisciplinary branch of science and engineering that includes mechanical engineering, electronic engineering, information engineering, computer science, and others. Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing.}}
{{term|rule-based system}}
{{defn|In computer science, a rule-based system is used to store and manipulate knowledge to interpret information in a useful way. It is often used in artificial intelligence applications and research. Normally, the term rule-based system is applied to systems involving human-crafted or curated rule sets. Rule-based systems constructed using automatic rule inference, such as rule-based machine learning, are normally excluded from this system type.}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
S
{{glossary}}
{{term|satisfiability}}
{{defn|In mathematical logic, satisfiability and validity are elementary concepts of semantics. A formula is satisfiable if it is possible to find an interpretation (model) that makes the formula true.See, for example, Boolos and Jeffrey, 1974, chapter 11. A formula is valid if all interpretations make the formula true. The opposites of these concepts are unsatisfiability and invalidity, that is, a formula is unsatisfiable if none of the interpretations make the formula true, and invalid if some such interpretation makes the formula false. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition.}}
{{term|search algorithm}}
{{defn|Any {{gli|algorithm}} which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, either with discrete or continuous values.}}
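An illustrative Python sketch of one classic search algorithm, binary search over a sorted list:
<syntaxhighlight lang="python">
def binary_search(sorted_items, target):
    """Classic search algorithm over a sorted sequence: halve the search
    space at each step; returns the index of `target`, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
</syntaxhighlight>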
{{term|selection}}
{{defn|The stage of a {{gli|genetic algorithm}} in which individual genomes are chosen from a population for later breeding (using the crossover operator).}}
{{term|self-management}}
{{defn|The process by which computer systems manage their own operation without human intervention.}}
{{term|semantic network}}
{{ghat|Also frame network.}}
{{defn|A knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts,{{Cite encyclopedia |year=1987 |title=Semantic Networks |encyclopedia=Encyclopedia of Artificial Intelligence |url=https://jfsowa.com/pubs/semnet.htm |access-date=2008-04-29 |author-link=John F. Sowa |editor-last=Shapiro |editor-first=Stuart C |last1=Sowa |first1=John F.}} mapping or connecting semantic fields.}}
{{term|semantic reasoner}}
{{ghat|Also reasoning engine, rules engine, or simply reasoner.}}
{{defn|A piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by {{gli|forward chaining}} and {{gli|backward chaining}}.}}
{{term|semantic query}}
{{defn|A query that allows for retrieval and analytics of an associative and contextual nature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based on syntactic, semantic, and structural information contained in data. They are designed to deliver precise results (possibly the distinctive selection of one single piece of information) or to answer more fuzzy and wide-open questions through pattern matching and digital reasoning.}}
{{term|semantics}}
{{defn|In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of {{gli|programming language|programming languages}}. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved; evaluating syntactically invalid strings results in non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will be executed on a certain platform, hence creating a model of computation.}}
{{anchor|semi-supervised learning}}{{term|semi-supervised learning}}
{{ghat|Also weak supervision.}}
{{defn|A {{gli|machine learning}} training paradigm characterized by combining a small amount of human-labeled data (of the kind used in {{gli|supervised learning}}) with a large amount of unlabeled data (of the kind used in {{gli|unsupervised learning}}).}}
{{term|sensor fusion}}
{{defn|The combining of sensory data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually.}}
{{term|separation logic}}
{{defn|An extension of Hoare logic, a way of reasoning about programs. The assertion language of separation logic is a special case of the logic of bunched implications (BI).{{Cite journal |last1=O'Hearn |first1=P. W. |last2=Pym |first2=D. J. |date=June 1999 |title=The Logic of Bunched Implications |journal=Bulletin of Symbolic Logic |volume=5 |pages=215–244 |citeseerx=10.1.1.27.4742 |doi=10.2307/421090 |jstor=421090 |number=2|s2cid=2948552 }}}}
{{term|similarity learning}}
{{defn|An area of {{gli|supervised learning}} closely related to {{gli|classification}} and {{gli|regression}}, but whose goal is to learn a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.}}
{{term|simulated annealing (SA)}}
{{defn|A probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem.}}
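A minimal Python sketch of simulated annealing on a one-dimensional function; the initial temperature, cooling schedule, and objective are illustrative choices:
<syntaxhighlight lang="python">
import math
import random

def simulated_annealing(f, x0, temp=10.0, cooling=0.95, steps=1000):
    """Minimise f: occasionally accept worse candidates (with probability
    exp(-delta / temperature)) so the search can escape local optima."""
    x, best = x0, x0
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if f(x) < f(best):
                best = x
        temp *= cooling  # gradually lower the temperature
    return best

print(simulated_annealing(lambda v: (v - 3) ** 2, x0=-10.0))  # approaches 3
</syntaxhighlight>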
{{term|situated approach}}
{{defn|In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom up" by focussing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills.}}
{{term|situation calculus}}
{{defn|A logic formalism designed for representing and reasoning about dynamical domains.}}
{{term|Selective Linear Definite clause resolution}}
{{ghat|Also simply SLD resolution.}}
{{defn|The basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses.}}
{{term|software}}
{{defn|A collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media.}}
{{term|software engineering}}
{{defn|The application of engineering to the development of software in a systematic method.{{harvnb |Abran |Moore |Bourque| Dupuis |2004 |pp=1–1}}{{Cite web |url=https://computingcareers.acm.org/?page_id=12 |title=Computing Degrees & Careers |year=2007 |publisher=ACM |url-status=dead |archive-url=https://web.archive.org/web/20110617053818/https://computingcareers.acm.org/?page_id=12 |archive-date=17 June 2011 |access-date=2010-11-23}}{{Cite book |last1=Laplante |first1=Phillip |url=https://books.google.com/books?id=pFHYk0KWAEgC&q=What%20Every%20Engineer%20Should%20Know%20about%20Software%20Engineering.&pg=PA1 |title=What Every Engineer Should Know about Software Engineering |publisher=CRC |year=2007 |isbn=978-0-8493-7228-5 |location=Boca Raton |access-date=2011-01-21}}}}
{{term|spatial-temporal reasoning}}
{{defn|An area of artificial intelligence that draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretical goal, on the cognitive side, involves representing and reasoning about spatial-temporal knowledge in the mind. The applied goal, on the computing side, involves developing high-level control systems of automata for navigating and understanding time and space.}}
{{term|SPARQL}}
{{defn|An RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format.{{Cite web |url=https://eweek.com/development/sparql-will-make-the-web-shine |title=SPARQL Will Make the Web Shine |last1=Rapoza |first1=Jim |date=2 May 2006 |website=eWeek |access-date=2007-01-17}}{{Cite book |last1=Segaran |first1=Toby |title=Programming the Semantic Web |url=https://archive.org/details/programmingseman00sega_683 |url-access=limited |last2=Evans |first2=Colin |last3=Taylor |first3=Jamie |publisher=O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 |year=2009 |isbn=978-0-596-15381-6 |page=[https://archive.org/details/programmingseman00sega_683/page/n101 84]}}}}
{{term|sparse dictionary learning}}
{{ghat|Also sparse coding or SDL.}}
{{defn|A {{gli|feature learning}} method aimed at finding a sparse representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves.}}
{{term|speech recognition}}
{{defn|An interdisciplinary subfield of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition, or speech to text (STT). It incorporates knowledge and research from the fields of linguistics, computer science, and electrical engineering.}}
{{anchor|spiking neural network}}{{term|spiking neural network (SNN)}}
{{defn|An {{gli|artificial neural network}} that more closely mimics a natural neural network.{{Cite journal |last1=Maass |first1=Wolfgang |year=1997 |title=Networks of spiking neurons: The third generation of neural network models |journal=Neural Networks |volume=10 |issue=9 |pages=1659–1671 |doi=10.1016/S0893-6080(97)00011-7 |issn=0893-6080}} In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model.}}
{{term|state}}
{{defn|In information technology and computer science, a program is described as stateful if it is designed to remember preceding events or user interactions;{{Cite web |url=https://whatis.techtarget.com/definition/stateless |title=What is stateless? - Definition from WhatIs.com |website=techtarget.com}} the remembered information is called the state of the system.}}
{{anchor|classification}}{{term|statistical classification}}
{{defn|In {{gli|machine learning}} and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient (sex, blood pressure, presence or absence of certain symptoms, etc.). Classification is an example of pattern recognition.}}
{{anchor|SARSA}}{{term|state–action–reward–state–action (SARSA)}}
{{defn|A {{gli|reinforcement learning}} {{gli|algorithm}} for learning a {{gli|markov decision process|Markov decision process}} policy.}}
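A minimal Python sketch of a single SARSA update; unlike the Q-learning example above, the target uses the action actually selected in the next state (the state and action names are illustrative):
<syntaxhighlight lang="python">
def sarsa_update(Q, s, a, reward, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy SARSA update: the target uses the action a_next actually
    chosen by the current policy in the next state, rather than the maximum."""
    td_target = reward + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (td_target - Q[s][a])

Q = {"s0": {"go": 0.0}, "s1": {"go": 0.5}}
sarsa_update(Q, "s0", "go", reward=1.0, s_next="s1", a_next="go")
print(Q["s0"]["go"])  # 0.1495, i.e. 0.1 * (1 + 0.99 * 0.5)
</syntaxhighlight>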
{{anchor|statistical relational learning}}{{term|statistical relational learning (SRL)}}
{{defn|A subdiscipline of artificial intelligence and {{gli|machine learning}} that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure.Lise Getoor and Ben Taskar: [https://books.google.com/books?id=lSkIewOw2WoC Introduction to statistical relational learning], MIT Press, 2007Ryan A. Rossi, Luke K. McDowell, David W. Aha, and Jennifer Neville, "[https://jair.org/media/3659/live-3659-6589-jair.pdf Transforming Graph Data for Statistical Relational Learning.] {{Webarchive|url=https://web.archive.org/web/20180106202217/http://www.jair.org/media/3659/live-3659-6589-jair.pdf |date=6 January 2018 }}" Journal of Artificial Intelligence Research (JAIR), Volume 45 (2012), pp. 363–441. Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming.}}
{{term|stochastic optimization (SO)}}
{{defn|Any optimization method that generates and uses random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization.{{Cite book |last1=Spall |first1=J. C. |url=https://jhuapl.edu/ISSO |title=Introduction to Stochastic Search and Optimization |publisher=Wiley |year=2003 |isbn=978-0-471-33052-3}} Stochastic optimization methods generalize deterministic methods for deterministic problems.}}
{{term|stochastic semantic analysis}}
{{defn|An approach used in computer science as a semantic component of natural language understanding. Stochastic models generally use the definition of segments of words as basic semantic units for the semantic models, and in some cases involve a two layered approach.Language Understanding Using Two-Level Stochastic Models by F. Pla, et al, 2001, Springer Lecture Notes in Computer Science {{ISBN|978-3-540-42557-1}}}}
{{anchor|Stanford Research Institute Problem Solver}}{{term|Stanford Research Institute Problem Solver (STRIPS)}}
{{defn|An automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International.}}
{{anchor|subject-matter expert}}{{term|subject-matter expert (SME)}}
{{defn|A person who has accumulated great knowledge in a particular field or topic, demonstrated by the person's degree, licensure, and/or through years of professional experience with the subject.}}
{{term|superintelligence}}
{{defn|A hypothetical {{gli|intelligent agent|agent}} that possesses intelligence far surpassing that of the brightest and most gifted human minds. Superintelligence may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act within the physical world. A superintelligence may or may not be created by an {{gli|intelligence explosion}} and be associated with a {{gli|technological singularity}}.}}
{{term|supervised learning}}
{{defn|The {{gli|machine learning}} task of learning a function that maps an input to an output based on example input-output pairs.Stuart J. Russell, Peter Norvig (2010) Artificial Intelligence: A Modern Approach, Third Edition, Prentice Hall {{ISBN|9780136042594}}. It infers a function from {{vanchor|labeled training data|LABELLED_DATA}} consisting of a set of training examples.Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) Foundations of Machine Learning, The MIT Press {{ISBN|9780262018258}}. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to {{gli|generalization|generalize}} from the training data to unseen situations in a "reasonable" way (see inductive bias).}}
{{term|support vector machines}}
{{defn|In {{gli|machine learning}}, support vector machines (SVMs, also support vector networks{{Cite journal |last1=Cortes |first1=Corinna |last2=Vapnik |first2=Vladimir N |year=1995 |title=Support vector networks |journal=Machine Learning |volume=20 |issue=3 |pages=273–297 |doi=10.1007/BF00994018 |doi-access=free}}) are {{gli|supervised learning}} models with associated learning {{gli|algorithm|algorithms}} that analyze data used for {{gli|classification}} and {{gli|regression}}.}}
{{anchor|swarm intelligence}}{{term|swarm intelligence (SI)}}
{{defn|The collective behavior of decentralized, self-organized systems, either natural or artificial. The expression was introduced in the context of cellular robotic systems.{{Cite book |last1=Beni|first1=G. |last2=Wang |first2=J. |title=Proceed. NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26–30 (1989) |year=1993 |isbn=978-3-642-63461-1 |pages=703–712 |chapter=Swarm Intelligence in Cellular Robotic Systems |doi=10.1007/978-3-642-58069-7_38}}}}
{{term|symbolic artificial intelligence}}
{{defn|The term for the collection of all methods in {{gli|artificial intelligence}} research that are based on high-level "symbolic" (human-readable) representations of problems, logic, and {{gli|search algorithm|search}}.}}
{{anchor|synthetic intelligence}}{{term|synthetic intelligence (SI)}}
{{defn|An alternative term for {{gli|artificial intelligence}} which emphasizes that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence.{{sfn|Haugeland|1985|p=255}}{{sfn|Poole|Mackworth|Goebel|1998|p=1}}}}
{{term|systems neuroscience}}
{{defn|A subdiscipline of neuroscience and systems biology that studies the structure and function of neural circuits and systems. It is an umbrella term, encompassing a number of areas of study concerned with how nerve cells behave when connected together to form neural pathways, neural circuits, and larger brain networks.}}
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
T
{{glossary}}
{{term|technological singularity}}
{{ghat|Also simply the singularity.}}
{{defn|A hypothetical point in the future when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization.{{Cite news |url=https://singularitysymposium.com/definition-of-singularity.html |title=Collection of sources defining "singularity" |website=singularitysymposium.com |access-date=17 April 2019}}{{Cite book |last1=Eden |first1=Amnon H. |title=Singularity hypotheses: A Scientific and Philosophical Assessment |url=https://archive.org/details/singularityhypot00mueh |url-access=limited |last2=Moor, James H. |date=2012 |publisher=Springer |isbn=9783642325601 |location=Dordrecht |pages=[https://archive.org/details/singularityhypot00mueh/page/n9 1]–2}}Cadwalladr, Carole (2014). "[https://theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence Are the robots about to rise? Google's new director of engineering thinks so...]" The Guardian. Guardian News and Media Limited.}}
{{term|temporal difference learning}}
{{defn|A class of model-free {{gli|reinforcement learning}} methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.{{Cite book |last1=Sutton |first1=Richard |url=https://incompleteideas.net/sutton/book/the-book.html |title=Reinforcement Learning |last2=Andrew Barto |publisher=MIT Press |year=1998 |isbn=978-0-585-02445-5 |archive-url=https://web.archive.org/web/20170330005640/https://incompleteideas.net/sutton/book/the-book.html |archive-date=2017-03-30 |url-status=dead |name-list-style=amp}}}}
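A minimal illustrative sketch of the tabular TD(0) update, in which a state's value estimate is updated ("bootstrapped") from the observed reward and the current estimate of the next state's value. The toy episode, learning rate, and discount factor are assumptions made for the example.
<syntaxhighlight lang="python">
# Tabular TD(0): bootstrap value estimates from the next state's estimate.
ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor
values = {"A": 0.0, "B": 0.0, "C": 0.0}

# One sampled trajectory of (state, reward, next_state); "C" is terminal.
episode = [("A", 0.0, "B"), ("B", 1.0, "C")]

for state, reward, next_state in episode:
    td_target = reward + GAMMA * values[next_state]
    td_error = td_target - values[state]
    values[state] += ALPHA * td_error   # TD(0) update

print(values)
</syntaxhighlight>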
{{term|tensor network theory}}
{{defn|A theory of brain function (particularly that of the cerebellum) that provides a mathematical model of the transformation of sensory space-time coordinates into motor coordinates and vice versa by cerebellar neuronal networks. The theory was developed as a geometrization of brain function (especially of the central nervous system) using tensors.{{Cite journal |last1=Pellionisz |first1=A. |last2=Llinás |first2=R. |year=1980 |title=Tensorial Approach To The Geometry Of Brain Function: Cerebellar Coordination Via A Metric Tensor |url=https://academia.edu/download/31409354/pellionisz_1980_cerebellar_coordination_via_a_metric_tensor_fullpaper.pdf |journal=Neuroscience |volume=5 |issue=7 |pages=1125–1136 |doi=10.1016/0306-4522(80)90191-8 |pmid=6967569 |s2cid=17303132 }}{{dead link|date=July 2022|bot=medic}}{{cbignore|bot=medic}}{{Cite journal |last1=Pellionisz |first1=A. |last2=Llinás |first2=R. |year=1985 |title=Tensor Network Theory Of The Metaorganization Of Functional Geometries In The Central Nervous System |journal=Neuroscience |volume=16 |issue=2 |pages=245–273 |doi=10.1016/0306-4522(85)90001-6 |pmid=4080158|s2cid=10747593 }}}}
{{term|TensorFlow}}
{{defn|A free and {{gli|open-source software|open-source}} software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for {{gli|machine learning}} applications such as {{gli|neural network|neural networks}}.[https://youtube.com/watch?v=oZikw5k_2FM "TensorFlow: Open source machine learning"] "It is machine learning software being used for various kinds of perceptual and language understanding tasks" — Jeffrey Dean, minute 0:47 / 2:17 from YouTube clip}}
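A minimal, hedged sketch of TensorFlow's differentiable-programming style: a computation is built from tensor operations and the library computes gradients automatically. It assumes the `tensorflow` package (version 2.x) is installed; the function being differentiated is an assumption made for the example.
<syntaxhighlight lang="python">
# Automatic differentiation of a simple expression with TensorFlow 2.x.
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x          # y = x^2 + 2x
grad = tape.gradient(y, x)        # dy/dx = 2x + 2 = 8 at x = 3
print(float(grad))
</syntaxhighlight>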
{{term|theoretical computer science (TCS)}}
{{defn|A subset of general computer science and mathematics that focuses on more mathematical topics of computing and includes the theory of computation.}}
{{term|theory of computation}}
{{defn|In theoretical computer science and mathematics, the theory of computation is the branch that deals with how efficiently problems can be solved on a model of computation, using an {{gli|algorithm}}. The field is divided into three major branches: automata theory and languages, computability theory, and {{gli|computational complexity theory}}, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".{{Cite book |last1=Sipser |first1=Michael |title=Introduction to the Theory of Computation 3rd |publisher=Cengage Learning |year=2013 |isbn=978-1-133-18779-0 |quote=central areas of the theory of computation: automata, computability, and complexity. (Page 1) |author-link=Michael Sipser}}}}
{{term|Thompson sampling}}
{{defn|A {{gli|heuristic}} for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists in choosing the action that maximizes the expected reward with respect to a randomly drawn belief.{{Cite journal |last1=Thompson |first1=William R |year=1933 |title=On the likelihood that one unknown probability exceeds another in view of the evidence of two samples |journal=Biometrika |volume=25 |issue=3–4 |pages=285–294 |doi=10.1093/biomet/25.3-4.285}}{{Cite journal |last1=Russo |first1=Daniel J. |last2=Van Roy |first2=Benjamin |last3=Kazerouni |first3=Abbas |last4=Osband |first4=Ian |last5=Wen |first5=Zheng |year=2018 |title=A Tutorial on Thompson Sampling |journal=Foundations and Trends in Machine Learning |volume=11 |issue=1 |pages=1–96 |arxiv=1707.02038 |doi=10.1561/2200000070|s2cid=3929917 }}}}
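A minimal illustrative sketch of Thompson sampling for a Bernoulli multi-armed bandit: each arm keeps a Beta posterior over its success probability, an arm is chosen by acting greedily with respect to a random sample from those posteriors, and the chosen arm's posterior is updated with the observed reward. The true arm probabilities are assumptions made for the example.
<syntaxhighlight lang="python">
# Thompson sampling with Beta(1, 1) priors on each arm's success probability.
import random

TRUE_PROBS = [0.3, 0.5, 0.7]                  # unknown to the agent
successes = [1] * len(TRUE_PROBS)             # Beta prior parameters
failures = [1] * len(TRUE_PROBS)
rng = random.Random(0)

for _ in range(1000):
    samples = [rng.betavariate(successes[a], failures[a])
               for a in range(len(TRUE_PROBS))]
    arm = samples.index(max(samples))         # greedy w.r.t. the sampled belief
    reward = 1 if rng.random() < TRUE_PROBS[arm] else 0
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(successes, failures)   # the best arm accumulates most of the pulls
</syntaxhighlight>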
{{term|time complexity}}
{{defn|The computational complexity that describes the amount of time it takes to run an {{gli|algorithm}}. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to differ by at most a constant factor.}}
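A minimal illustrative sketch of time complexity as a count of elementary operations: linear search may inspect every one of the n elements (O(n) time), while binary search on sorted data halves the remaining range at each step (O(log n) time). The counters and data are assumptions made for the example.
<syntaxhighlight lang="python">
# Counting comparisons to illustrate O(n) versus O(log n) time complexity.
def linear_search(items, target):
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons


def binary_search(items, target):
    comparisons, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        comparisons += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, comparisons
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons


data = list(range(1024))
print(linear_search(data, 1000))   # ~1000 comparisons: grows linearly with n
print(binary_search(data, 1000))   # ~10 comparisons: grows logarithmically with n
</syntaxhighlight>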
{{term|transfer learning}}
{{defn|A {{gli|machine learning}} technique in which knowledge learned from a task is reused in order to boost performance on a related task.{{cite web |last1=West |first1=Jeremy |first2=Dan |last2=Ventura |first3=Sean |last3=Warnick |url=http://cpms.byu.edu/springresearch/abstract-entry?id=861 |title=Spring Research Presentation: A Theoretical Foundation for Inductive Transfer |publisher=Brigham Young University, College of Physical and Mathematical Sciences |year=2007 |access-date=2007-08-05 |url-status=dead |archive-url=https://web.archive.org/web/20070801120743/http://cpms.byu.edu/springresearch/abstract-entry?id=861 |archive-date=2007-08-01 }} For example, for image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks.}}
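A hedged sketch of one common form of transfer learning with Keras: a network pretrained on a source task (here, ImageNet image classification) is reused as a frozen feature extractor, and only a new output head is trained for the target task. The choice of base model, input size, class count, and the variable names `target_images` and `target_labels` are assumptions made for the example; it requires the `tensorflow` package.
<syntaxhighlight lang="python">
# Transfer learning sketch: reuse pretrained features, train a new head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                     # keep the transferred knowledge fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(2, activation="softmax"),   # new head, e.g. car vs. truck
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(target_images, target_labels, epochs=5)   # hypothetical target-task data
</syntaxhighlight>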
{{anchor|transformer}}{{term|transformer}}
{{defn|A type of {{gli|deep learning}} architecture that exploits a multi-head {{gli|attention mechanism}}. Transformers address some of the limitations of {{gli|long short-term memory}} and have become widely used in {{gli|natural language processing}}, although they can also process other types of data, such as images in the case of vision transformers.{{Cite web |last=Dickson |first=Ben |title=Machine learning: What is the transformer architecture? |url=https://bdtechtalks.com/2022/05/02/what-is-the-transformer/ |access-date=2 May 2022 |website=TechTarget |date=2 May 2022 |language=en}}}}
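A minimal illustrative sketch of the scaled dot-product attention at the core of the transformer; a multi-head attention layer runs several such computations in parallel and concatenates their outputs. The shapes and random inputs are assumptions made for the example, and NumPy is assumed to be installed.
<syntaxhighlight lang="python">
# Single-head scaled dot-product attention.
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (sequence_length, d_k) arrays. Returns attended values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted sum of values


rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))    # 4 tokens, key dimension 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)
</syntaxhighlight>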
{{term|transhumanism}}
{{ghat|Abbreviated H+ or h+.}}
{{defn|An international philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.{{Cite book |last1=Mercer |first1=Calvin |title=Religion and Transhumanism: The Unknown Future of Human Enhancement |publisher=Praeger}}{{Cite journal |last1=Bostrom |first1=Nick |author-link=Nick Bostrom |year=2005 |title=A history of transhumanist thought |url=https://nickbostrom.com/papers/history.pdf |journal=Journal of Evolution and Technology |access-date=February 21, 2006}}}}
{{term|transition system}}
{{defn|In theoretical computer science, a transition system is a concept used in the study of computation. It is used to describe the potential behavior of {{gli|discrete system|discrete systems}}. It consists of states and transitions between states, which may be labeled with labels chosen from a set; the same label may appear on more than one transition. If the label set is a singleton, the system is essentially unlabeled, and a simpler definition that omits the labels is possible.}}
{{term|tree traversal}}
{{ghat|Also tree search.}}
{{defn|A form of {{gli|graph traversal}} that visits (checks and/or updates) each node in a tree data structure exactly once. Such traversals are classified by the order in which the nodes are visited.}}
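A minimal illustrative sketch of the three classic depth-first traversal orders on a binary tree, each visiting every node exactly once. The example tree is an assumption made for the example.
<syntaxhighlight lang="python">
# Pre-order, in-order and post-order traversal of a small binary tree.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right


def preorder(node):
    return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)


def inorder(node):
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)


def postorder(node):
    return [] if node is None else postorder(node.left) + postorder(node.right) + [node.value]


root = Node("F", Node("B", Node("A"), Node("D")), Node("G"))
print(preorder(root))    # ['F', 'B', 'A', 'D', 'G']
print(inorder(root))     # ['A', 'B', 'D', 'F', 'G']
print(postorder(root))   # ['A', 'D', 'B', 'G', 'F']
</syntaxhighlight>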
{{term|true quantified Boolean formula}}
{{defn|In {{gli|computational complexity theory}}, the language TQBF is a {{gli|formal language}} consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT (Quantified {{gli|Boolean satisfiability problem|SAT}}).}}
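A minimal illustrative sketch of deciding a small quantified Boolean formula by brute force. The example formula ∀x ∃y ((x ∨ y) ∧ (¬x ∨ ¬y)) is true (for every x there is a y with the opposite truth value), so it belongs to TQBF; the encoding below is an assumption made for the example.
<syntaxhighlight lang="python">
# Brute-force evaluation of a fully quantified Boolean formula.
def evaluate(quantifiers, matrix, assignment=()):
    """`quantifiers` is a list of 'forall'/'exists'; `matrix` maps a complete
    assignment (tuple of bools, one per variable) to a bool."""
    if len(assignment) == len(quantifiers):
        return matrix(assignment)
    branches = (evaluate(quantifiers, matrix, assignment + (value,))
                for value in (False, True))
    if quantifiers[len(assignment)] == "forall":
        return all(branches)
    return any(branches)


# forall x exists y: (x OR y) AND (NOT x OR NOT y), i.e. y must differ from x.
formula = lambda a: (a[0] or a[1]) and (not a[0] or not a[1])
print(evaluate(["forall", "exists"], formula))   # True: the formula is in TQBF
</syntaxhighlight>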
{{term|Turing machine}}
{{defn|A mathematical model of computation describing an abstract machineMinsky 1967:107 "In his 1936 paper, A. M. Turing defined the class of abstract machines that now bear his name. A Turing machine is a finite-state machine associated with a special kind of environment – its tape – in which it can store (and later recover) sequences of symbols," also Stone 1972:8 where the word "machine" is in quotation marks. that manipulates symbols on a strip of tape according to a table of rules.Stone 1972:8 states "This "machine" is an abstract mathematical model", also cf. Sipser 2006:137ff that describes the "Turing machine model". Rogers 1987 (1967):13 refers to "Turing's characterization", Boolos Burgess and Jeffrey 2002:25 refers to a "specific kind of idealized machine". Despite the model's simplicity, it is capable of implementing any {{gli|algorithm}}.Sipser 2006:137 "A Turing machine can do everything that a real computer can do".}}
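A minimal illustrative sketch of a Turing machine simulator: a finite table of rules of the form (state, symbol) → (symbol to write, head move, next state) acting on a tape. The example machine, which flips every bit of its input and halts at the first blank, is an assumption made for the example.
<syntaxhighlight lang="python">
# A tiny Turing machine simulator with a sparse tape.
def run(tape, rules, state="q0", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape indexed by position
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:  # no applicable rule: the machine halts
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))


rules = {
    ("q0", "0"): ("1", "R", "q0"),   # flip 0 to 1, move right
    ("q0", "1"): ("0", "R", "q0"),   # flip 1 to 0, move right
}                                     # no rule for the blank symbol, so it halts
print(run("10110", rules))            # "01001"
</syntaxhighlight>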
{{term|Turing test}}
{{defn|A test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human, developed by Alan Turing in 1950. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.Turing originally suggested a teleprinter, one of the few text-only communication systems available in 1950. {{Harv|Turing|1950|p=433}} If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.}}
{{term|type system}}
{{defn|In {{gli|programming language|programming languages}}, a set of rules that assigns a property called type to the various constructs of a computer program, such as variables, expressions, functions, or modules.{{sfn|Pierce|2002|p=1|ps=: "A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute."}} These types formalize and enforce the otherwise implicit categories the programmer uses for algebraic data types, data structures, or other components (e.g. "string", "array of float", "function returning boolean"). The main purpose of a type system is to reduce possibilities for bugs in computer programs{{sfn|Cardelli|2004|p=1|ps=: "The fundamental purpose of a type system is to prevent the occurrence of execution errors during the running of a program."}} by defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of static and dynamic checking. Type systems have other purposes as well, such as expressing business rules, enabling certain compiler optimizations, allowing for multiple dispatch, providing a form of documentation, etc.}}
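A minimal illustrative sketch of a type system at work: the annotations below assign types to variables, parameters, and return values, and a static type checker (such as mypy) can reject inconsistent uses before the program runs, whereas Python itself would only fail at run time. Python 3.9+ and an optional static checker are assumed.
<syntaxhighlight lang="python">
# Type annotations classify program constructs; a checker enforces consistency.
def mean(values: list[float]) -> float:
    return sum(values) / len(values)


scores: list[float] = [0.5, 0.75, 1.0]
average: float = mean(scores)        # consistent: list[float] -> float
print(average)

# A static type checker would flag the next line as a type error, because a
# str is not a list[float]; uncommenting it would also fail at run time.
# bad: float = mean("not a list of floats")
</syntaxhighlight>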
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
U
{{glossary}}
{{term|unsupervised learning}}
{{defn|A type of self-organized Hebbian learning that helps find previously unknown patterns in a data set without pre-existing labels. It is also known as self-organization and allows modeling probability densities of given inputs.{{Cite book |last1=Hinton |first1=Jeffrey |title=Unsupervised Learning: Foundations of Neural Computation |last2=Sejnowski |first2=Terrence |publisher=MIT Press |year=1999 |isbn=978-0262581684}} It is one of the three basic paradigms of {{gli|machine learning}}, alongside {{gli|supervised learning|supervised}} and {{gli|reinforcement learning}}. {{gli|semi-supervised learning|Semi-supervised learning}} has also been described and is a hybridization of supervised and unsupervised techniques.}}
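A minimal illustrative sketch of unsupervised learning: k-means clustering discovers groups in unlabelled data by alternating between assigning points to their nearest centroid and recomputing the centroids. The one-dimensional toy data, k = 2, and the naive initialization are assumptions made for the example.
<syntaxhighlight lang="python">
# k-means (k = 2) on unlabelled one-dimensional data.
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centroids = [data[0], data[3]]                 # naive initialization

for _ in range(10):
    clusters = [[], []]
    for x in data:                             # assign each point to nearest centroid
        nearest = min(range(2), key=lambda k: abs(x - centroids[k]))
        clusters[nearest].append(x)
    centroids = [sum(c) / len(c) for c in clusters]   # recompute centroids

print(centroids)   # roughly [1.0, 8.07]: two clusters found without any labels
</syntaxhighlight>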
{{glossaryend}}
V
{{glossary}}
{{term|vision processing unit (VPU)}}
{{defn|A type of microprocessor designed to accelerate {{gli|machine vision}} tasks.{{Cite web |url=https://tomshardware.com/news/movidiud-myriad2-vpu-vision-processing-vr,30850.html |title=A third type of processor for AR/VR: Movidius' Myriad 2 VPU |last1=Colaner |first1=Seth |last2=Humrick |first2=Matthew |date=January 3, 2016 |website=Tom's Hardware}}{{Cite web |url=https://digit.in/general/the-rise-of-vpus-giving-eyes-to-machines-29561.html |title=The rise of VPUs: Giving Eyes to Machines |last1=Banerje |first1=Prasid |date=March 28, 2016 |website=Digit.in |access-date=5 July 2022 |archive-date=11 December 2018 |archive-url=https://web.archive.org/web/20181211221429/https://www.digit.in/general/the-rise-of-vpus-giving-eyes-to-machines-29561.html |url-status=dead }}}}
{{term|Value-alignment complete}}
{{defn|{{citation needed span|Analogous to an AI-complete problem, a value-alignment complete problem is a problem where the AI control problem needs to be fully solved to solve it.|date=January 2019}}}}
{{glossaryend}}
W
{{glossary}}
{{term|Watson}}
{{defn|A question-answering computer system capable of answering questions posed in natural language,{{Cite web |url=https://research.ibm.com/deepqa/faq.shtml |title=DeepQA Project: FAQ |website=IBM |access-date=February 11, 2011 |archive-date=5 November 2015 |archive-url=https://web.archive.org/web/20151105125722/https://www.research.ibm.com/deepqa/faq.shtml |url-status=dead }} developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci.{{Cite journal |last1=Ferrucci |first1=David |last2=Levas |first2=Anthony |last3=Bagchi |first3=Sugato |last4=Gondek |first4=David |last5=Mueller |first5=Erik T. |date=2013-06-01 |title=Watson: Beyond Jeopardy! |journal=Artificial Intelligence |volume=199 |pages=93–105 |doi=10.1016/j.artint.2012.06.009 |doi-access=free}} Watson was named after IBM's first CEO, industrialist Thomas J. Watson.{{Cite news |last1=Hale |first1=Mike |url=https://nytimes.com/2011/02/09/arts/television/09nova.html |title=Actors and Their Roles for $300, HAL? HAL! |date=February 8, 2011 |work=The New York Times |access-date=February 11, 2011}}{{Cite web |url=https://research.ibm.com/deepqa/deepqa.shtml |title=The DeepQA Project |website=IBM Research |access-date=February 18, 2011 |archive-date=21 January 2013 |archive-url=https://web.archive.org/web/20130121103239/http://www.research.ibm.com/deepqa/deepqa.shtml |url-status=dead }}}}
{{term|weak AI}}
{{ghat|Also narrow AI.}}
{{defn|{{gli|artificial intelligence|Artificial intelligence}} that is focused on one narrow task.io9.com mentions narrow AI. Published 1 April 2013. Retrieved 16 February 2014: https://io9.com/how-much-longer-before-our-first-ai-catastrophe-464043243AI researcher Ben Goertzel explains why he became interested in AGI instead of narrow AI. Published 18 Oct 2013. Retrieved 16 February 2014. https://intelligence.org/2013/10/18/ben-goertzel/TechCrunch discusses AI App building regarding Narrow AI. Published 16 Oct 2015. Retrieved 17 Oct 2015. https://techcrunch.com/2015/10/15/machine-learning-its-the-hard-problems-that-are-valuable/}}
{{term|weak supervision}}
{{defn|See {{gli|semi-supervised learning}}.}}
{{term|word embedding}}
{{defn|A representation of a word in {{gli|natural language processing}}. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning.{{cite book |last1=Jurafsky |first1=Daniel |last2=H. James |first2=Martin |title=Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition |date=2000 |publisher=Prentice Hall |location=Upper Saddle River, N.J. |isbn=978-0-13-095069-7 |url=https://web.stanford.edu/~jurafsky/slp3/}}}}
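A minimal illustrative sketch of comparing word embeddings with cosine similarity: words with similar meanings are expected to have vectors that point in similar directions. The tiny hand-made vectors are assumptions made for the example, not a trained embedding.
<syntaxhighlight lang="python">
# Cosine similarity between toy word vectors.
import math

embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.0, 0.9],
}


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))


print(cosine(embeddings["king"], embeddings["queen"]))   # high: similar meanings
print(cosine(embeddings["king"], embeddings["apple"]))   # low: dissimilar meanings
</syntaxhighlight>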
{{glossaryend}}
X
{{glossary}}
{{term|XGBoost}}
{{defn|Short for eXtreme Gradient Boosting, XGBoost{{cite web |url=https://github.com/dmlc/xgboost |title=GitHub project webpage |website=GitHub |date=June 2022 |access-date=2016-04-05 |archive-date=2021-04-01 |archive-url=https://web.archive.org/web/20210401110045/https://github.com/dmlc/xgboost |url-status=live }} is an open-source software library which provides a {{gli|regularization|regularizing}} {{gli|gradient boosting}} framework for multiple programming languages.}}
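A minimal, hedged sketch of using the XGBoost library through its scikit-learn-style Python interface; it assumes the `xgboost` and `numpy` packages are installed, and the tiny data set and parameter values are assumptions made for the example.
<syntaxhighlight lang="python">
# Gradient-boosted trees with XGBoost's scikit-learn-style API.
import numpy as np
from xgboost import XGBClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3]])
y = np.array([0, 0, 0, 1, 1, 1])

model = XGBClassifier(
    n_estimators=20,     # number of boosted trees
    max_depth=2,         # depth of each tree
    reg_lambda=1.0,      # L2 regularization on leaf weights
)
model.fit(X, y)
print(model.predict(np.array([[0, 1], [2, 2]])))   # expected: [0 1]
</syntaxhighlight>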
{{glossaryend}}
{{Compact TOC|side=yes|center=yes|top=yes|num=yes|extlinks=yes|seealso=yes|refs=yes|nobreak=yes|}}
References
{{reflist}}
Works cited
{{refbegin|2}}
- {{cite book |first1=Alain |last1=Abran |first2=James W. |last2=Moore |first3=Pierre |last3=Bourque |first4=Robert |last4=Dupuis |first5=Leonard L. |last5=Tripp |title=Guide to the Software Engineering Body of Knowledge |year=2004 |publisher=IEEE |isbn=978-0-7695-2330-9}}
- {{cite book |first=Luca |last=Cardelli |author-link=Luca Cardelli |editor=Allen B. Tucker |title=CRC Handbook of Computer Science and Engineering |edition=2nd |chapter=Type systems |year=2004 |publisher=CRC Press |chapter-url=http://lucacardelli.name/Papers/TypeSystems.pdf |isbn=978-1584883609}}
- {{cite book |last=Haugeland |first=John |author-link=John Haugeland |year=1985 |title=Artificial Intelligence: The Very Idea |publisher=MIT Press |location=Cambridge, Mass. |isbn=978-0-262-08153-5}}
- {{cite arXiv |last1=Legg |first1=Shane |last2=Hutter |first2=Marcus |date=15 June 2007 |title=A Collection of Definitions of Intelligence |class=cs.AI |eprint=0706.3639}}
- {{Cite book |last=Mitchell |first=Melanie |title=An Introduction to Genetic Algorithms |year=1996 |publisher=MIT Press |location=Cambridge, MA |isbn=9780585030944}}
- {{cite book |last=Nilsson |first=Nils |author-link=Nils Nilsson (researcher) |year=1998 |title=Artificial Intelligence: A New Synthesis |url=https://archive.org/details/artificialintell0000nils |url-access=registration |publisher=Morgan Kaufmann |isbn=978-1-55860-467-4 |access-date=18 November 2019 |archive-date=26 July 2020 |archive-url=https://web.archive.org/web/20200726131654/https://archive.org/details/artificialintell0000nils |url-status=live}}
- {{cite book |first=Benjamin C. |last=Pierce |author-link=Benjamin C. Pierce |year=2002 |title=Types and Programming Languages |publisher=MIT Press |isbn=978-0-262-16209-8}}
- {{cite book |first1=David |last1=Poole |author-link=David Poole (researcher) |first2=Alan |last2=Mackworth |author2-link=Alan Mackworth |first3=Randy |last3=Goebel |author3-link=Randy Goebel |year=1998 |title=Computational Intelligence: A Logical Approach |publisher=Oxford University Press |location=New York |isbn=978-0-19-510270-3 |url=https://archive.org/details/computationalint00pool |access-date=22 August 2020 |archive-date=26 July 2020 |archive-url=https://web.archive.org/web/20200726131436/https://archive.org/details/computationalint00pool |url-status=live}}
- {{Russell Norvig 2003}}
- {{Turing 1950}}
{{refend}}
Notes
{{reflist|group=Note}}
{{Differentiable computing}}
{{Software engineering}}
{{Computer science}}
{{Evolutionary computation}}
{{emerging technologies|topics=yes|infocom=yes}}
{{Robotics}}
{{Glossaries of computers}}
{{Glossaries of science and engineering}}