I am developing a new machine learning technique, called Weighted Probability Distribution Voting (WPDV). During learning, WPDV takes every possible combination of input features in turn. It searches the training data for all instances of each combination and calculates a probability distribution over the co-occurring output features. During classification, WPDV takes all input feature combinations that occur in the new input and adds the corresponding probability distributions, each multiplied by a weight factor which increases exponentially with the number of elements in the combination. The output feature with the highest sum is then selected.
In this paper, I describe the WPDV technique in detail and evaluate its performance on several NLP tasks.
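The learning and classification procedure described above can be sketched as follows. This is a minimal illustrative implementation, not the author's own: the exponential weight base (2.0 here), the representation of instances as (feature tuple, label) pairs, and the function names are all assumptions made for the example.

```python
from collections import Counter, defaultdict
from itertools import combinations

def train(instances):
    """Learning phase: for every possible combination of input features,
    collect a frequency distribution over co-occurring output labels.
    instances: list of (features, label) pairs, features a tuple of hashables."""
    dists = defaultdict(Counter)
    for features, label in instances:
        for r in range(1, len(features) + 1):
            for combo in combinations(sorted(features), r):
                dists[combo][label] += 1
    return dists

def classify(dists, features, base=2.0):
    """Classification phase: sum the probability distributions of all feature
    combinations present in the input, each weighted by base**size, so the
    weight grows exponentially with the number of elements in the combination.
    Returns the label with the highest weighted sum (None if nothing matched)."""
    scores = Counter()
    for r in range(1, len(features) + 1):
        for combo in combinations(sorted(features), r):
            if combo in dists:
                total = sum(dists[combo].values())
                for label, count in dists[combo].items():
                    scores[label] += (base ** r) * count / total
    return scores.most_common(1)[0][0] if scores else None
```

For instance, trained on toy instances such as `(("suffix=ing", "prev=the"), "NOUN")`, the classifier votes with the distributions of `("suffix=ing",)`, `("prev=the",)`, and the pair together, with the pair carrying the largest weight.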