Proof-theoretic methods in natural language processing


January/February 2002

Lecturer: Michael Moortgat
moortgat@let.uu.nl
Courseware: Richard Moot
Richard.Moot@let.uu.nl



A set of course notes with exercises is available for printing. The links to electronic reading materials will be updated soon. Links to Springer Lecture Notes publications are available from within the UU domain.

Course description

This course focuses on the computational aspects of type-logical grammars and related formalisms, and on the relevance of the proposed computational models for human language processing. In a companion course, we have presented grammatical derivations in the Gentzen sequent calculus and in Natural Deduction format. From a computational point of view, both formats are problematic. Natural Deduction is well suited for presenting proofs once they have been found, but it offers little support for proof search. The Gentzen calculus does have a natural backward-chaining proof search interpretation, but it is inefficient as a result of spurious non-determinism: there can be distinct derivations that compute one and the same meaning. Moreover, neither format addresses the left-to-right incrementality of human language processing, or the various performance phenomena related to it.
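
To make the spurious non-determinism concrete, here is a small sketch, in Haskell, of a backward-chaining sequent prover for the product-free Lambek calculus that counts Gentzen derivations. It is a toy written for this page, not one of the course tools, and all names in it are invented for the example. For the determiner-noun-verb pattern np/n, n, np\s => s it finds two distinct sequent derivations, even though both compute one and the same meaning.

module Main where

-- Formulas of the product-free Lambek calculus.
data Formula
  = At String            -- atomic type: np, n, s, ...
  | Formula :\ Formula   -- a :\ b combines with an a to its left
  | Formula :/ Formula   -- b :/ a combines with an a to its right
  deriving (Eq, Show)

-- All ways to cut a list in two (prefix, suffix).
splits :: [a] -> [([a], [a])]
splits xs = [ splitAt i xs | i <- [0 .. length xs] ]

-- All ways to single out one element: (material before, focus : material after).
pick :: [a] -> [([a], [a])]
pick xs = [ (take i xs, drop i xs) | i <- [0 .. length xs - 1] ]

-- Count the Gentzen sequent derivations of  gamma => c  by backward chaining.
derivations :: [Formula] -> Formula -> Int
derivations gamma c = axiom + right + left
  where
    atomic (At _) = True
    atomic _      = False

    -- Axiom: a single atom proves itself.
    axiom = if gamma == [c] && atomic c then 1 else 0

    -- Right rules: peel the main connective off the succedent.
    right = case c of
      b :/ a -> derivations (gamma ++ [a]) b
      a :\ b -> derivations (a : gamma) b
      At _   -> 0

    -- Left rules: pick an implication in the antecedent and split off the
    -- non-empty stretch of material that supplies its argument.
    left = sum [ n | (pre, f : post) <- pick gamma, n <- uses pre f post ]

    uses pre (b :/ a) post =
      [ derivations delta a * derivations (pre ++ [b] ++ post') c
      | (delta, post') <- splits post, not (null delta) ]
    uses pre (a :\ b) post =
      [ derivations delta a * derivations (pre' ++ [b] ++ post) c
      | (pre', delta) <- splits pre, not (null delta) ]
    uses _ _ _ = []

main :: IO ()
main = do
  let np = At "np"; n = At "n"; s = At "s"
  -- determiner + noun + intransitive verb:  np/n, n, np\s => s
  print (derivations [np :/ n, n, np :\ s] s)  -- prints 2: two derivations, one reading

Termination is guaranteed because every rule application strictly decreases the number of connectives in each premise. The doubled count for a three-word sequent already suggests how this redundancy can multiply as sentences grow longer.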

The proof-net approach of Linear Logic offers attractive features for dealing with such processing issues. Proof nets are graphs satisfying certain well-formedness conditions. You can view a proof net as a 'parallelized' derivation: nets capture the essence of a derivation without the redundant choice points. The original well-formedness conditions for proof nets check for resource multiplicity: every resource has to be used exactly once. The challenge, in adapting the nets for NLP purposes, is to introduce additional constraints that also check the relevant aspects of grammatical structure. One way of doing that is to decorate the nodes of a proof structure with terms taken from an algebra of structural labels. Alternatively, one can build the structural factors directly into the graph-theoretic characterization of the nets.
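
As a minimal illustration of such well-formedness checking, the sketch below (again in Haskell, and not the implementation behind the GRAIL courseware mentioned further down) takes a sequence of polarized literals together with a candidate axiom linking and tests two conditions. The first is resource multiplicity: every literal occurrence must be linked exactly once, to an occurrence of the same atom with the opposite polarity. The second, as one example of a structural constraint built directly into the graph, is planarity of the axiom links, the non-crossing condition familiar from Lambek-style nets. The switching conditions on the logical links are left out, the data types and function names are invented for the example, and the literal sequence in main is a toy rather than the unfolding of a particular sequent.

module Main where

import Data.List (sort, tails)

data Polarity = Pos | Neg deriving (Eq, Show)

-- Literals at the leaves of a proof structure, in left-to-right order.
type Literal = (String, Polarity)

-- A candidate axiom linking: pairs of leaf positions (0-based).
type Linking = [(Int, Int)]

-- Resource multiplicity: every leaf is linked exactly once, and every link
-- joins two occurrences of the same atom with opposite polarities.
multiplicityOK :: [Literal] -> Linking -> Bool
multiplicityOK lits links =
     sort (concatMap (\(i, j) -> [i, j]) links) == [0 .. length lits - 1]
  && all matched links
  where
    matched (i, j) =
      let (a, p) = lits !! i
          (b, q) = lits !! j
      in  a == b && p /= q

-- Planarity: no two axiom links cross when they are drawn above the literal
-- sequence; this is the word-order constraint of Lambek-style nets.
planarOK :: Linking -> Bool
planarOK links =
  null [ () | (l1, l2) <- pairs links
            , let (lo1, hi1) = ordered l1
            , let (lo2, hi2) = ordered l2
            , lo1 < lo2, lo2 < hi1, hi1 < hi2 ]
  where
    ordered (i, j) = (min i j, max i j)
    pairs xs = [ (x, y) | (x : ys) <- tails xs, y <- ys ]

main :: IO ()
main = do
  -- A toy sequence of polarized literals and two candidate axiom linkings.
  let lits    = [("np", Pos), ("np", Pos), ("np", Neg), ("np", Neg)]
      nested  = [(0, 3), (1, 2)]   -- non-crossing links: passes both checks
      crossed = [(0, 2), (1, 3)]   -- multiplicities check out, but the links cross
  print (multiplicityOK lits nested, planarOK nested)    -- (True, True)
  print (multiplicityOK lits crossed, planarOK crossed)  -- (True, False)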

In the first part of the course (Weeks 1 to 3), we study the technical aspects of proof nets. In the second part, we look at proposals for incremental proof net construction and their relevance for issues in human language processing. We also investigate the trade-off between expressivity and processing efficiency for some approximations of the full grammar logic.

Credits

To earn the credits for this course, participants have to fulfill two requirements.

Sessions

A tentative course schedule is given below, together with the reading assignments.

Week 1. Basics

Week 2. Proof nets and semantic readings

Week 3. Proof nets for structured grammatical resources

There is a mid-course take-home quiz to test your understanding of the technical material so far.

From this point on, the course can move in a number of directions, depending on the interests of the participants. Some possible topics for further exploration:

Proof nets and incremental processing

Incremental parsing for pregroup grammars

Minimalist parsing

Learning

Courseware

The proof-net facilities of GRAIL serve as courseware for this module. You need an account on salto.let.uu.nl to make use of the GUI. Here are some useful links.

Student papers
