Ranking Learners

The learners.ranking module contains learners meant for ranking problems. The MLProblems for these learners should be iterators over triplets (input,target,query), where input is a list of document representations and target is a list of associated relevance scores for the given query.
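
For illustration, here is a minimal sketch of one such triplet; the feature vectors, relevance scores and query string below are made up for the example:

    import numpy as np

    # One ranking example: a query together with its candidate documents.
    documents = [np.array([0.2, 1.0, 0.0]),   # feature vector of document 1
                 np.array([0.7, 0.3, 1.0]),   # feature vector of document 2
                 np.array([0.1, 0.0, 0.5])]   # feature vector of document 3
    scores = [2, 0, 1]                        # relevance score of each document for this query
    query = 'learning to rank tutorial'       # query representation

    example = (documents, scores, query)      # one (input, target, query) triplet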

The currently implemented algorithms are:

  • RankingFromClassifier: a ranking model based on a classifier.
  • RankingFromRegression: a ranking model based on a regression model.
  • ListNet: ListNet ranking model.
learners.ranking.err_and_ndcg(output, target, max_score, k=10)[source]

Computes the ERR and NDCG scores (implementation taken mostly from http://learningtorankchallenge.yahoo.com/evaluate.py.txt).
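
For reference, here is an illustrative sketch of the usual NDCG@k and ERR definitions; the module's actual implementation follows the Yahoo! script linked above and may differ in details:

    import numpy as np

    def ndcg_at_k(ranked_relevances, k=10):
        # NDCG@k for relevance scores listed in predicted rank order.
        rel = np.asarray(ranked_relevances, dtype=float)[:k]
        discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
        dcg = np.sum((2 ** rel - 1) * discounts)
        ideal = np.sort(np.asarray(ranked_relevances, dtype=float))[::-1][:k]
        idcg = np.sum((2 ** ideal - 1) * discounts)
        return dcg / idcg if idcg > 0 else 1.0

    def err(ranked_relevances, max_score):
        # Expected Reciprocal Rank for relevance scores in predicted rank order.
        stop_prob = (2 ** np.asarray(ranked_relevances, dtype=float) - 1) / 2 ** max_score
        value, keep_going = 0.0, 1.0
        for rank, p in enumerate(stop_prob, start=1):
            value += keep_going * p / rank
            keep_going *= 1 - p
        return value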

class learners.ranking.RankingFromClassifier(classifier, merge_document_and_query=default_merge, ranking_measure='expected_score')[source]

A ranking model based on a classifier.

This learner trains the given classifier to predict the target relevance score associated with each document/query pair found in the training set.

Option classifier is the classifier to train.

The classifier can be used for ranking based on three measures, specified by option ranking_measure:

  • ranking_measure='predicted_score': the predicted relevance score is used directly (the first output of the classifier);
  • ranking_measure='expected_score': the distribution over scores (the second output) is used to compute the expected score, and the ranking is obtained by sorting those expectations;
  • ranking_measure='expected_persistence': the distribution over scores is used to compute the expected persistence ((2**score-1)/max_score). Ranking according to this measure should work well for the ERR ranking error.

With ranking_measure='predicted_score', the classifier needs only one output, the predicted score. For the other two ranking measures, the classifier must also provide a distribution over the possible relevance scores as its second output.
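
As an illustration of the last two measures, here is a small sketch of how the expected score and the expected persistence could be computed from a predicted distribution over relevance scores (the numbers are made up):

    import numpy as np

    possible_scores = np.array([0, 1, 2, 3, 4])        # the 'scores' metadata
    score_dist = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # classifier's distribution over those scores
    max_score = possible_scores.max()

    expected_score = np.dot(score_dist, possible_scores)
    expected_persistence = np.dot(score_dist, (2.0 ** possible_scores - 1) / max_score)

    # Documents are then ranked by sorting the chosen measure in decreasing order.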

Option merge_document_and_query should be a callable that takes two arguments (the input document and the query) and outputs a merged representation of the pair, which will be fed to the classifier. By default, it is assumed that the document representation already contains query information, so only the input document is returned.
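
For example, a custom merge function could simply concatenate the document and query feature vectors (a hypothetical sketch, assuming both are NumPy arrays):

    import numpy as np

    def merge_document_and_query(document, query):
        # Hypothetical merge: concatenate document and query feature vectors.
        return np.concatenate([document, query])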

Required metadata:

  • 'scores'
train(trainset)[source]

Trains the classifier on the merged documents and queries. Each call to train increments self.stage by 1.

use(dataset)[source]

Outputs, for each example in the dataset, the list of positions (starting at 0) assigned to each document in the ranking induced by its predicted relevance (from most relevant to least relevant).

For example, ordering [1,3,0,2] means that the first document is the second most relevant, the second document is the fourth most relevant, the third document is the first most relevant and the fourth document is the third most relevant.

Inspired by http://learningtorankchallenge.yahoo.com/instructions.php
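
For instance, such an ordering can be obtained from predicted relevance values with a double argsort (an illustrative sketch with made-up values):

    import numpy as np

    predicted_relevance = np.array([0.7, 0.1, 0.9, 0.4])   # made-up predictions for 4 documents
    order = np.argsort(-predicted_relevance)                # document indices, most to least relevant
    positions = np.empty_like(order)
    positions[order] = np.arange(len(order))                # position of each document in the ranking
    print(positions)                                        # [1 3 0 2], as in the example above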

test(dataset)[source]

Outputs the document ordering and the associated ERR and NDCG scores.
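
A hypothetical end-to-end usage could look like the following; some_classifier, trainset and testset are assumed to already exist and to follow the interfaces described above:

    from learners.ranking import RankingFromClassifier

    learner = RankingFromClassifier(some_classifier, ranking_measure='expected_score')
    learner.train(trainset)                 # fits the classifier on merged document/query pairs
    orderings = learner.use(testset)        # one ordering per (input, target, query) example
    results = learner.test(testset)         # orderings together with the ERR and NDCG scores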

class learners.ranking.RankingFromRegression(regression, merge_document_and_query=default_merge)[source]

A ranking model based on a regression model.

This learner trains the given regression model to predict the target relevance score associated with each document/query pair found in the training set.

Option regression is the regression model to train.

Option merge_document_and_query should be a callable that takes two arguments (the input document and the query) and outputs a merged representation of the pair, which will be fed to the regression model. By default, it is assumed that the document representation already contains query information, so only the input document is returned.

Required metadata:

  • 'scores'
train(trainset)[source]

Trains the regression model on the merged documents and queries. Each call to train increments self.stage by 1.

use(dataset)[source]

Outputs, for each example in the dataset, the list of positions (starting at 0) assigned to each document in the ranking induced by its predicted relevance (from most relevant to least relevant).

For example, ordering [1,3,0,2] means that the first document is the second most relevant, the second document is the fourth most relevant, the third document is the first most relevant and the fourth document is the third most relevant.

Inspired by http://learningtorankchallenge.yahoo.com/instructions.php

test(dataset)[source]

Outputs the document ordering and the associated ERR and NDCG scores.

class learners.ranking.ListNet(n_stages, hidden_size=50, learning_rate=0.01, weight_per_query=False, alpha=1.0, merge_document_and_query=default_merge, seed=1234)[source]

ListNet ranking model.

This implementation only models the distribution of documents appearing first in the ranked list (this is the setting favored in the experiments of the original ListNet paper). ListNet is trained by minimizing the KL divergence between a target distribution derived from the document scores and ListNet’s output distribution.

Option n_stages is the number of training iterations over the training set.

Option hidden_size determines the size of the hidden layer (default = 50).

Option learning_rate is the learning rate for stochastic gradient descent training (default = 0.01).

Option weight_per_query determines whether to weight each ranking example (one for each query) by the number of documents to rank. If True, the effect is to multiply the learning rate by the number of documents for the current query. If False, no weighting is applied (default = False).

Option alpha controls the entropy of the target distribution ListNet is trying to predict: target = exp(alpha * scores)/sum(exp(alpha * scores)) (default = 1.).
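
As a sketch of the listwise objective described above (an illustration of the formulas, not the library's exact code), the target distribution and the KL divergence to ListNet's top-one output distribution can be computed as follows:

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    alpha = 1.0
    relevance_scores = np.array([2.0, 0.0, 1.0])   # made-up target scores for 3 documents
    listnet_scores = np.array([1.5, -0.2, 0.3])    # made-up ListNet outputs for the same documents

    target = softmax(alpha * relevance_scores)     # exp(alpha*scores)/sum(exp(alpha*scores))
    output = softmax(listnet_scores)               # probability of each document being ranked first

    kl = np.sum(target * np.log(target / output))  # training minimizes this KL divergence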

Option merge_document_and_query should be a callable that takes two arguments (the input document and the query) and outputs a merged representation of the pair, which will be fed to ListNet. By default, it is assumed that the document representation already contains query information, so only the input document is returned.

Option seed determines the seed of the random number generator used to initialize the model.

Required metadata:

  • 'scores'
Reference:
Learning to Rank: From Pairwise Approach to Listwise Approach
Cao, Qin, Liu, Tsai and Li. ICML 2007.