Unique negative dimension
'''Unique negative dimension''' ('''UND''') is a complexity measure for the model of learning from positive examples.
The unique negative dimension of a class <math>\mathcal{C}</math> of concepts is the size of the maximum subclass <math>\mathcal{D} \subseteq \mathcal{C}</math> such that for every concept <math>c \in \mathcal{D}</math>, the set <math>\bigcap\left(\mathcal{D} \setminus \{c\}\right) \setminus c</math> is nonempty; that is, some point lies in every concept of <math>\mathcal{D} \setminus \{c\}</math> but not in <math>c</math>.
This concept was originally proposed by M. Geréb-Graus in "Complexity of learning from one-sided examples", Technical Report TR-20-89, Harvard University Division of Engineering and Applied Science, 1989.<ref>{{cite journal |last1=Darnstädt |first1=Malte |last2=Simon |first2=Hans Ulrich |last3=Szörényi |first3=Balázs |title=Supervised learning and Co-training |journal=Theoretical Computer Science |date=30 January 2014 |volume=519 |pages=68–87 |doi=10.1016/j.tcs.2013.09.020 |doi-access=free }}</ref><ref>{{cite thesis |last1=Geréb-Graus |first1=Mihály |title=Lower bounds on parallel, distributed and automata computations |date=1989 |oclc=1243704701 |osti=5815133 |id=TR-20-89 }}</ref><ref>{{cite journal |last1=Ehrenfeucht |first1=Andrzej |last2=Haussler |first2=David |last3=Kearns |first3=Michael |last4=Valiant |first4=Leslie |title=A general lower bound on the number of examples needed for learning |journal=Information and Computation |date=1 September 1989 |volume=82 |issue=3 |pages=247–261 |doi=10.1016/0890-5401(89)90002-3 |s2cid=1925579 |doi-access=free }}</ref>
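For a small finite concept class, the definition can be checked directly by exhaustive search over subclasses. The following sketch is purely illustrative and is not drawn from the cited sources: it assumes concepts are represented as finite sets of domain points, adopts the convention that the intersection over an empty family of concepts is the whole domain, and the function name <code>unique_negative_dimension</code> and the small example class are assumptions of this illustration.

<syntaxhighlight lang="python">
from itertools import combinations

def unique_negative_dimension(concept_class):
    """Brute-force the unique negative dimension of a finite concept class.

    Concepts are frozensets of domain points. A subclass D qualifies if every
    concept c in D has a witness point that lies in all other concepts of D
    but not in c itself (a "unique negative example" for c).
    """
    concepts = list(concept_class)
    domain = set().union(*concepts) if concepts else set()

    def qualifies(subclass):
        for c in subclass:
            others = [set(d) for d in subclass if d is not c]
            # Assumed convention: the intersection over an empty family is the
            # whole domain (this only matters for singleton subclasses).
            common = set.intersection(*others) if others else set(domain)
            if not (common - c):   # no point inside all others but outside c
                return False
        return True

    # Search subclasses from largest to smallest; the first qualifying size
    # is the unique negative dimension.
    for size in range(len(concepts), 0, -1):
        if any(qualifies(D) for D in combinations(concepts, size)):
            return size
    return 0

# Hypothetical example over the domain {1, 2, 3}: each two-element concept
# omits exactly one point, and that point lies in both other concepts.
C = [frozenset({2, 3}), frozenset({1, 3}), frozenset({1, 2})]
print(unique_negative_dimension(C))   # -> 3
</syntaxhighlight>

In this example the full class itself satisfies the condition, so its unique negative dimension equals 3, the size of the class.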
== See also ==

== References ==
{{Reflist}}
{{DEFAULTSORT:Unique Negative Dimension}}
[[Category:Computational learning theory]]
{{Compu-AI-stub}}