Connectionist Phonotactics Learning with the Adaptive Resonance Theory

Thijs Cotteleer (Rijksuniversiteit Groningen)
Ivelin Stoianov (Rijksuniversiteit Groningen)


Natural language phonotactics describes the grammar of the words of a particular language, that is, which sound sequences constitute well-formed words. It can be modeled with different approaches, e.g., symbolic or connectionist ones.

A classical neural network (NN) model used for this purpose is the Simple Recurrent Network (e.g., Stoianov & Nerbonne, 1999). On the other hand, the Adaptive Resonance Theory (ART) developed by Stephen Grossberg is an architecture considered biologically more plausible. ART has been claimed to be useful in speech recognition, visual object recognition, and cognitive information processing. The ART neural network consists of two layers: one containing a representation of the input, and another containing the categories to which the input patterns might belong. This architecture works well with static patterns, but it cannot directly classify dynamic data (such as sequentially represented words), and therefore cannot be applied as-is to phonotactics modeling. The model thus had to be extended to handle such dynamic patterns, which was done by attaching an external memory that keeps track of past network states.
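To illustrate the idea, the sketch below implements a simplified binary ART (ART-1-style) categorizer together with one possible way of attaching an external memory: a sliding window that concatenates the encodings of the last few input symbols, so that the otherwise static network receives sequential context. The class and function names, the one-hot encoding, the window scheme, and all parameter values are illustrative assumptions, not the actual model described above.

    import numpy as np

    class ART1:
        """Simplified binary ART-1: category choice, vigilance test, fast learning."""

        def __init__(self, vigilance=0.75, alpha=0.001):
            self.rho = vigilance    # vigilance: how closely a category template must match the input
            self.alpha = alpha      # small choice parameter, breaks ties towards smaller templates
            self.templates = []     # one binary template per learned category (the second layer)

        def present(self, pattern, learn=True):
            """Present one binary pattern; return the index of the resonating category."""
            pattern = np.asarray(pattern, dtype=bool)
            # Choice function |I AND w| / (alpha + |w|) ranks the existing categories.
            scores = [np.logical_and(pattern, w).sum() / (self.alpha + w.sum())
                      for w in self.templates]
            for j in np.argsort(scores)[::-1]:
                overlap = np.logical_and(pattern, self.templates[j])
                # Vigilance test: the chosen template must cover enough of the input.
                if overlap.sum() / max(pattern.sum(), 1) >= self.rho:
                    if learn:
                        self.templates[j] = overlap   # fast learning: template shrinks to the overlap
                    return int(j)
            # No resonance: recruit a new category whose template is the input itself.
            self.templates.append(pattern.copy())
            return len(self.templates) - 1

    def windowed_inputs(word, encode, window=3):
        """External memory as a sliding window: concatenate the encodings of the
        last `window` symbols, so the static network sees sequential context."""
        symbols = ["#"] * (window - 1) + list(word)   # '#' pads positions before the word start
        for i in range(window - 1, len(symbols)):
            yield np.concatenate([encode(s) for s in symbols[i - window + 1:i + 1]])

    # Toy usage with a hypothetical one-hot symbol encoding.
    alphabet = "#abdeklt"
    def encode(symbol):
        v = np.zeros(len(alphabet), dtype=bool)
        v[alphabet.index(symbol)] = True
        return v

    net = ART1(vigilance=0.7)
    for word in ["bal", "bed", "kat"]:
        for pattern in windowed_inputs(word, encode):
            net.present(pattern)
    print(len(net.templates), "categories learned")

In this sketch the window length plays the role of the memory depth: a longer window would capture longer-range dependencies within a word, at the cost of a larger input layer and more recruited categories.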

Experiments with the new connectionist model, trained on the phonotactics of Dutch monosyllables, demonstrated that it can learn the task with precision similar to the performance reported by Stoianov & Nerbonne (1999), which confirmed our initial hypothesis that the ART model can model phonotactics. We also plan to apply it to other natural language problems.