Language identification in the limit

Language identification in the limit is a formal model for inductive inference of formal languages, mainly by computers (see machine learning and induction of regular languages). It was introduced by E. Mark Gold in a technical report{{cite report | url=https://www.rand.org/pubs/research_memoranda/RM4136.html | first=E. Mark | last=Gold | title=Language identification in the limit | institution=RAND Corporation | type=RAND Research Memorandum RM-4136-PR | year=1964 }} and a journal article{{Cite journal | last1 = Gold | first1 = E. Mark | date = May 1967 | title = Language identification in the limit | journal = Information and Control | volume = 10 | issue = 5 | pages = 447–474 | doi = 10.1016/S0019-9958(67)91165-5 | url=http://web.mit.edu/~6.863/www/spring2009/readings/gold67limit.pdf | doi-access = free }} with the same title.

In this model, a teacher provides a learner with some presentation (i.e. a sequence of strings) of some formal language. The learning is seen as an infinite process: each time the learner reads an element of the presentation, it should provide a representation (e.g. a formal grammar) for the language.

Gold defines that a learner can identify a class of languages in the limit if, given any presentation of any language in the class, the learner produces only a finite number of wrong representations and then sticks with the correct representation. However, the learner need not be able to announce its correctness, and the teacher might present a counterexample to any representation arbitrarily long afterwards.

Gold defined two types of presentations:

  • Text (positive information): an enumeration of all strings the language consists of.
  • Complete presentation (positive and negative information): an enumeration of all possible strings, each with a label indicating whether the string belongs to the language or not. (Both presentation types are sketched in code below.)
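These presentations can be pictured as streams of data. The following minimal Python sketch (the stream encodings and the example language L = a* are illustrative assumptions, not from Gold's papers) shows one possible text presentation and one possible complete presentation:

```python
from itertools import count

def text_presentation():
    """Text: enumerate all strings of L = a*; every member appears."""
    for k in count():
        yield "a" * k

def all_strings():
    """Enumerate every string over the alphabet {a, b} by length."""
    for length in count():
        for i in range(2 ** length):
            yield "".join("ab"[(i >> bit) & 1] for bit in range(length))

def complete_presentation():
    """Complete: every string over {a, b}, labelled by membership in a*."""
    for s in all_strings():
        yield s, set(s) <= {"a"}
```

A learner consumes such a stream and emits a representation after each element.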

Learnability

This model is an early attempt to formally capture the notion of learnability.

Gold's journal article (p. 457) introduces for contrast the stronger models

  • Finite identification (where the learner has to announce correctness after a finite number of steps), and
  • Fixed-time identification (where correctness has to be reached after an a priori specified number of steps).

A weaker formal model of learnability is the probably approximately correct (PAC) learning model, introduced by Leslie Valiant in 1984.

Examples

class="wikitable" style="float: right"

|+ 4. Complete presentation
by request

Teacher

! colspan="2" | Learner's

GuessQuery
0.

| || || abab

1.

| {{color|green|yes}} || abab || baba

2.

| {{color|green|yes}} || a*(ba)*b* || aa

3.

| {{color|red|no}} || (ab)*(ba)*(ab)*(ba)* || bababa

4.

| {{color|green|yes}} || (ab+ba)* || babb

5.

| {{color|red|no}} || (ab+ba)* || baaa

| ... || ...

class="wikitable" style="float: right"

|+ 3. Complete presentation
by telling

TeacherLearner
1.

| {{color|green|abab}} || abab

2.

| {{color|green|baba}} || a*(ba)*b*

3.

| {{color|red|aa}} || (ab)*(ba)*(ab)*(ba)*

4.

| {{color|green|bababa}} || (ab+ba)*

5.

| {{color|red|babb}} || (ab+ba)*

6.

| {{color|red|baaa}} || (ab+ba)*

7.

| {{color|green|ε}} || (ab+ba)*

| ... || ...

class="wikitable" style="float: right"

|+ 2. Union-guessing
 

TeacherLearner
1.

| abab || abab

2.

| ba || abab+ba

3.

| baba || abab+ba+baba

4.

| ba || abab+ba+baba

5.

| baba || abab+ba+baba

6.

| abab || abab+ba+baba

7.

| ε || abab+ba+baba

| ... || ...

class="wikitable" style="float: right"

|+ 1. Text presentation
 

TeacherLearner
1.

| {{not a typo|abab}} || {{not a typo|abab}}

2.

| {{not a typo|baba}} || {{not a typo|abab}}+{{not a typo|baba}}

3.

| {{not a typo|baabab}} || (b+ε)(ab)*

4.

| {{not a typo|baabab}} || (b+ε)(ab)*+{{not a typo|baabab}}

5.

| {{not a typo|abbaabba}} || (ab)*(ba)*(ab)*(ba)*

6.

| {{not a typo|baabbaab}} || (ab+ba)*

7.

| {{not a typo|bababa}} || (ab+ba)*

| ... || ...

{{clear}}

It is instructive to look at concrete examples (shown in the tables) of the learning sessions that the definition of identification in the limit speaks about.

  1. A fictitious session to learn a regular language L over the alphabet {a,b} from text presentation:
    In each step, the teacher gives a string belonging to L, and the learner answers a guess for L, encoded as a regular expression. (Here "A+B" contains all strings that are in A or in B; "AB" contains all concatenations of a string in A with a string in B; "A*" contains all repetitions (zero or more times) of strings in A; "ε" denotes the empty string; and "a" and "b" denote themselves. For example, the expression "(ab+ba)*" in step 7 denotes the infinite set { ε, ab, ba, abab, abba, baab, baba, ababab, ababba, ... }.) In step 3, the learner's guess is not consistent with the strings seen so far; in step 4, the teacher gives a string repeatedly. After step 6, the learner sticks to the regular expression (ab+ba)*. If this happens to be a description of the language L the teacher has in mind, the learner is said to have learned that language.
    If there existed a computer program for the learner's role that was able to successfully learn each regular language, that class of languages would be identifiable in the limit. Gold has shown that this is not the case (Theorem I.8, I.9, p. 470–471).
  2. A particular learning algorithm always guessing L to be just the union of all strings seen so far:
    If L is a finite language, the learner will eventually guess it correctly; however, it cannot tell when. Although the guess did not change during steps 3 to 6, the learner could not be sure of being correct. (A short program sketch of this learner appears after this list.)
    Gold has shown that the class of finite languages is identifiable in the limit (Theorem I.6, p. 469); however, this class is neither finitely nor fixed-time identifiable.
  3. Learning from complete presentation by telling:
    In each step, the teacher gives a string and tells whether it belongs to L ({{color|green|green}}) or not ({{color|red|red, struck-out}}). Each possible string is eventually classified in this way by the teacher.
  4. Learning from complete presentation by request:
    The learner gives a query string, the teacher tells whether it belongs to L ({{color|green|yes}}) or not ({{color|red|no}}); the learner then gives a guess for L, followed by the next query string. In this example, the learner happens to query in each step just the same string as given by the teacher in example 3.
    In general, Gold has shown that each language class identifiable in the request-presentation setting is also identifiable in the telling-presentation setting (Theorem I.3, p. 467), since the learner, instead of querying a string, just needs to wait until it is eventually given by the teacher.
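The union-guessing learner of example 2 is simple enough to state as a program. Below is a minimal sketch (names are illustrative; a guess is represented as the finite set of strings seen so far rather than as a regular expression):

```python
def union_guessing_learner(presentation):
    """After each string, guess the finite language consisting of
    exactly the strings seen so far."""
    seen = set()
    for string in presentation:
        seen.add(string)
        yield frozenset(seen)

# The session from table 2 ("" stands for the empty string ε): the guess
# stops changing after step 3, but the learner cannot announce convergence.
session = ["abab", "ba", "baba", "ba", "baba", "abab", ""]
for step, guess in enumerate(union_guessing_learner(session), start=1):
    print(step, sorted(guess))
```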

Gold's theorem

More formally,{{Cite journal |last=Johnson |first=Kent |date=October 2004 |title=Gold's Theorem and Cognitive Science |url=https://www.cambridge.org/core/journals/philosophy-of-science/article/abs/golds-theorem-and-cognitive-science/046D4E968548BBAF6059FC155AD26FDF |journal=Philosophy of Science |language=en |volume=71 |issue=4 |pages=571–592 |doi=10.1086/423752 |s2cid=5589573 |issn=0031-8248}}

  • a language L is a nonempty set, and its elements are called sentences.
  • a language family is a set of languages.
  • a language-learning environment E for a language L is a stream of sentences from L, such that each sentence in L appears at least once.
  • a language learner is a function f that sends a list of sentences to a language.
  • This is interpreted as saying that, after seeing sentences a_1, a_2..., a_n in that order, the language learner guesses that the language that produces the sentences should be f(a_1, ..., a_n).
  • Note that the learner is not obliged to be correct — it could very well guess a language that does not even contain a_1, ..., a_n.
  • a language learner f learns a language L in environment E = (a_1, a_2, ...) if the learner always guesses L after seeing enough examples from the environment; that is, there exists some N such that f(a_1, ..., a_n) = L for all n \ge N.
  • a language learner f learns a language L if it learns L in any environment E for L.
  • a language family is learnable if there exists a language learner that can learn all languages in the family.

Notes:

  • In the context of Gold's theorem, sentences need only be distinguishable. They need not be anything in particular, such as finite strings (as usual in formal linguistics).
  • Learnability is not a concept for individual languages. Any individual language L could be learned by a trivial learner that always guesses L.
  • Learnability is not a concept for individual learners. A language family is learnable iff there exists some learner that can learn the family. It does not matter how well the learner performs for learning languages outside the family.

{{Math theorem|name=Gold's theorem (1967)|note=Theorem I.8 of (Gold, 1967)|math_statement=

If a language family C contains L_1, L_2, ..., L_\infty, such that

L_1 \subsetneq L_2 \subsetneq \cdots

and L_\infty = \bigcup_{n=1}^\infty L_n, then it is not learnable.

}}

{{Math proof|title=Proof|proof=

Suppose f is a learner that can learn L_1, L_2, \ldots; we show that it cannot learn L_\infty, by constructing an environment for L_\infty that "tricks" f.

First, construct environments E_1, E_2, ... for languages L_1, L_2, ....

Next, construct environment E for L_\infty inductively as follows:

  • Present f with E_1 until it outputs L_1.
  • Switch to presenting f with the rest of E_1 interleaved with the entirety of E_2. Since L_1 \subsetneq L_2, this interleaved environment is still an environment for L_2, so f must eventually output L_2.
  • Switch to presenting the rest of E_1, the rest of E_2, and the entirety of E_3, interleaved, until f outputs L_3.
  • And so on.

By construction, the resulting environment E contains the entirety of each E_n; thus the set of sentences appearing in E is \bigcup_n L_n = L_\infty, so E is an environment for L_\infty. Since the learner's guess switches to L_n for ever-larger finite n, it changes infinitely often and never converges to L_\infty.

}}
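The construction can be made concrete. The chain L_0 \subsetneq L_1 \subsetneq \cdots with L_n = \{ a^k : k \le n \} and L_\infty = a^* satisfies the theorem's hypothesis (up to renumbering the indices). The following sketch is an illustration against one particular learner, not the general adversary of the proof: it shows the union-guessing learner from example 2 being driven through L_0, L_1, L_2, \ldots without ever converging, although a^0, a^1, a^2, \ldots is a legitimate environment for L_\infty:

```python
def union_guess(strings):
    """The union-guessing learner from example 2."""
    return frozenset(strings)

# Present a^0, a^1, a^2, ...: every string of a* appears eventually, yet
# after seeing a^0, ..., a^k the guess is exactly L_k, so the guesses grow
# forever and never settle on L_inf.
seen = []
for k in range(6):
    seen.append("a" * k)
    print(f"after a^{k}: guess is L_{k} = {sorted(union_guess(seen), key=len)}")
```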

Gold's theorem is easily bypassed if negative examples are allowed. In particular, the language family \{L_1, L_2, ..., L_\infty\} can be learned by a learner that guesses L_\infty until it has received a negative example \neg a with a \in L_{n+1} \setminus L_n for some n, and thereafter guesses L_k, where k is the smallest such n witnessed by the negative examples seen so far. If the target language is L_m, every string of L_{m+1} \setminus L_m eventually appears as a negative example, while no negative example lies in L_m, so the guesses converge to L_m.
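For the same illustrative chain L_n = \{ a^k : k \le n \}, the negative-information learner just described fits in a few lines (a sketch under these assumptions; guesses are returned as names):

```python
def negative_information_learner(labelled_presentation):
    """Consume (string, belongs) pairs; after each pair, guess "L_inf" or
    an index n standing for L_n."""
    min_negative = None  # smallest k such that "a"*k was labelled negative
    for string, belongs in labelled_presentation:
        if not belongs and set(string) <= {"a"}:
            k = len(string)  # "a"*k lies in L_k \ L_{k-1}
            min_negative = k if min_negative is None else min(min_negative, k)
        yield "L_inf" if min_negative is None else f"L_{min_negative - 1}"

# Target L_2 = {"", "a", "aa"}: the guesses are L_inf, L_3, L_3, L_2.
pairs = [("a", True), ("aaaa", False), ("aa", True), ("aaa", False)]
print(list(negative_information_learner(pairs)))
```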

Learnability characterization

Dana Angluin gave the characterizations of learnability from text (positive information) in a 1980 paper.{{cite journal| author=Dana Angluin| title=Inductive Inference of Formal Languages from Positive Data| journal=Information and Control| year=1980| volume=45| issue=2| pages=117–135| doi=10.1016/S0019-9958(80)90285-5| url=http://www-personal.umich.edu/~yinw/papers/Angluin80.pdf| doi-access=free}}

If a learner is required to be effective, then an indexed class of recursive languages is learnable in the limit if there is an effective procedure that uniformly enumerates tell-tales for each language in the class (Condition 1, p. 121 top). It is not hard to see that if an ideal learner (i.e., an arbitrary function) is allowed, then an indexed class of languages is learnable in the limit if each language in the class has a tell-tale (Condition 2, p. 123 top).
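Spelled out (a paraphrase under the indexing notation of Angluin's paper), a tell-tale for L_i is a finite set T_i \subseteq L_i such that

\forall j: \; T_i \subseteq L_j \implies L_j \not\subsetneq L_i.

Intuitively, once all of T_i has appeared in the text, the learner may guess L_i without the risk that the target is a strictly smaller language of the class that is still consistent with the data.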

Language classes learnable in the limit

class="wikitable" style="float: right"

|+ Dividing lines between identifiable and nonidentifiable language classesTable 1, p.452, in (Gold 1967)

Learnability modelClass of languages
colspan="2" | Anomalous text presentationi.e. text presentation, where the string given by the teacher is a primitive recursive function of the current step number, and the learner encodes a language guess as a program that enumerates the language
Recursively enumerable
Recursive
colspan="2" | Complete presentation
Primitive recursivei.e. the class of languages that are decidable by primitive recursive functions
Context-sensitive
Context-free
Regular
Superfinitei.e. containing all finite languages and at least one infinite one
colspan="2" | Normal text presentationi.e. text presentation, except for the anomalous text presentation setting
Finite
Singletoni.e. the class of languages consisting of a single string (they are mentioned here only as a common lower bound to finite languages and pattern languages)

The table shows which language classes are identifiable in the limit in which learning model. On the right-hand side, each language class is a superclass of all lower classes. Each learning model (i.e. type of presentation) can identify in the limit all classes below it. In particular, the class of finite languages is identifiable in the limit by text presentation (cf. Example 2 above), while the class of regular languages is not.

Pattern languages, introduced by Dana Angluin in another 1980 paper,{{cite journal| author=Dana Angluin| title=Finding Patterns Common to a Set of Strings| journal=Journal of Computer and System Sciences| year=1980| volume=21| pages=46–62| doi=10.1016/0022-0000(80)90041-0| doi-access=free}} are also identifiable by normal text presentation; they are omitted from the table, since they are above the singleton class and below the primitive recursive class, but incomparable to the classes in between (incomparable to the regular and the context-free language classes: Theorem 3.10, p. 53).{{clarify|reason=I'm not quite sure about context-sensitive and primitive recursive languages; please cross-check.|date=October 2013}}

Sufficient conditions for learnability

Condition 1 in Angluin's paper is not always easy to verify. Therefore, various sufficient conditions for the learnability of a language class have been proposed. See also Induction of regular languages for learnable subclasses of regular languages.

Finite thickness

A class of languages has finite thickness if every non-empty set of strings is contained in at most finitely many languages of the class. This is exactly Condition 3 in Angluin's paper (p. 123 mid). Angluin showed that if a class of recursive languages has finite thickness, then it is learnable in the limit (p. 123 bot, Corollary 2).
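In symbols (a restatement of the same condition), a class \mathcal{C} has finite thickness if

\forall S \neq \emptyset: \;\; |\{ L \in \mathcal{C} \mid S \subseteq L \}| < \infty,

where S ranges over sets of strings.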

A class with finite thickness certainly satisfies MEF-condition and MFF-condition; in other words, finite thickness implies M-finite thickness.{{cite book|author1=Andris Ambainis |author2=Sanjay Jain |author3=Arun Sharma | chapter=Ordinal mind change complexity of language identification| title=Computational Learning Theory| year=1997| volume=1208| pages=301–315| publisher=Springer| series=LNCS| chapter-url=http://www.comp.nus.edu.sg/~sanjay/paps/efs2.pdf}}; here: Proof of Corollary 29

Finite elasticity

A class of languages is said to have finite elasticity if for every infinite sequence of strings s_0, s_1, ... and every infinite sequence of languages in the class L_1, L_2, ..., there exists a finite number n such that s_n \notin L_n implies that L_n is inconsistent with \{s_0, ..., s_{n-1}\} (i.e., L_n fails to contain some s_i with i < n).
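Unfolding the definition (a restatement assuming the corrected formulation of Motoki, Shinohara, and Wright cited below): a class has infinite elasticity if there exist an infinite sequence of strings s_0, s_1, \ldots and an infinite sequence of languages L_1, L_2, \ldots in the class such that

\forall n \ge 1: \;\; \{ s_0, \ldots, s_{n-1} \} \subseteq L_n \;\text{ and }\; s_n \notin L_n;

finite elasticity is exactly the absence of such a pair of sequences.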

It has been shown that a class of recursively enumerable languages is learnable in the limit if it has finite elasticity.

Mind change bound

A mind change bound is a bound on the number of hypothesis changes that may occur before convergence.

Other concepts

Infinite cross property

A language L has the infinite cross property within a class of languages \mathcal{L} if there are an infinite sequence L_i of distinct languages in \mathcal{L} and a sequence of finite sets T_i such that:

  • T_1 \subset T_2 \subset \cdots,
  • T_i \subseteq L_i,
  • T_{i+1} \not\subseteq L_i, and
  • \lim_{i\to\infty} T_i = L.

Note that L is not necessarily a member of the class of languages.

It is not hard to see that if there is a language with infinite cross property within a class of languages, then that class of languages has infinite elasticity.

Relations between concepts

  • Finite thickness implies finite elasticity;{{#tag:ref|Wright, Keith (1989) "[https://www.researchgate.net/publication/221497309_Identification_of_Unions_of_Languages_Drawn_from_an_Identifiable_Class Identification of Unions of Languages Drawn from an Identifiable Class]". Proc. 2nd Workshop on Computational Learning Theory, 328–333; with correction in: Motoki, Shinohara, and Wright (1991) [https://www.researchgate.net/profile/Takeshi_Shinohara/publication/31900621_Correct_Definition_of_Finite_Elasticity/links/541763930cf2218008bee4d6.pdf?inViewer=true&disableCoverPage=true&origin=publication_detail "The correct definition of finite elasticity: corrigendum to identification of unions"], Proc. 4th Workshop on Computational Learning Theory, 375–375}} the converse is not true.
  • Finite elasticity and conservatively learnable implies the existence of a mind change bound. [http://citeseer.ist.psu.edu/context/1042497/0]
  • Finite elasticity and M-finite thickness implies the existence of a mind change bound. However, M-finite thickness alone does not imply the existence of a mind change bound; neither does the existence of a mind change bound imply M-finite thickness. [http://citeseer.ist.psu.edu/context/1042497/0]
  • Existence of a mind change bound implies learnability; the converse is not true.
  • If we allow for noncomputable learners, then finite elasticity implies the existence of a mind change bound; the converse is not true.
  • If there is no accumulation order for a class of languages, then there is a language (not necessarily in the class) that has infinite cross property within the class, which in turn implies infinite elasticity of the class.

Open questions

  • If a countable class of recursive languages has a mind change bound for noncomputable learners, does the class also have a mind change bound for computable learners, or is the class unlearnable by a computable learner?

Notes

{{reflist|group=note}}

References