In this paper I explore and compare two methods for solving this problem. The first applies a post-filter to an overgenerating symbolic grammar, using an n-gram language model to select the most likely output. A more successful alternative extends the work of Shaw and Hatzivassiloglou (1999) with the Memory-Based Learning techniques of, e.g., Daelemans, van den Bosch, and Weijters (1997). This approach fares better, correctly predicting the order of nearly 90% of the sequences in the test set. Given the variability inherent in the data, this is likely as close to the upper bound as any method could come.
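The first method can be sketched as follows. This is a toy illustration, not the paper's implementation: the counts, function names, and smoothing scheme are invented for the example. The idea is to overgenerate all candidate modifier orders and let a bigram language model pick the most probable one.

```python
from itertools import permutations

# Invented bigram counts standing in for corpus statistics.
bigram_counts = {
    ("big", "red"): 12,
    ("red", "big"): 1,
    ("red", "ball"): 20,
    ("big", "ball"): 15,
}
total = sum(bigram_counts.values())

def bigram_prob(w1, w2):
    # Add-one smoothing so unseen bigrams get a small nonzero score.
    return (bigram_counts.get((w1, w2), 0) + 1) / (total + 1)

def best_order(modifiers, head):
    """Overgenerate all permutations; the n-gram model post-filters."""
    def score(order):
        seq = list(order) + [head]
        p = 1.0
        for w1, w2 in zip(seq, seq[1:]):
            p *= bigram_prob(w1, w2)
        return p
    return max(permutations(modifiers), key=score)

print(best_order(["red", "big"], "ball"))  # -> ('big', 'red')
```

The memory-based alternative replaces the language model with a nearest-neighbor lookup: an unseen modifier pair is ordered by analogy to the most similar pairs stored from the training data.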