Initial Thoughts on Yahoo’s Ranking Challenge

Atbrox is a startup company providing technology and services for search and MapReduce/Hadoop. Our background is from Google, IBM and research.

Yahoo recently announced the Learning to Rank Challenge – a pretty interesting web search challenge (as the somewhat similar Netflix Prize was).

Data and Problem
Each line of the data sets contains (by my interpretation):

  1. url – implicitly encoded as the line number in the data set file
  2. relevance – a low number means high relevance and vice versa
  3. query – represented as an id
  4. features – up to several hundred

and the problem is to find a function that gives relevance numbers per URL per query id.
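The per-line format above could be read with a small parser. This is a minimal sketch assuming an SVMlight-style encoding (`<relevance> qid:<id> <feature>:<value> ...`); the exact encoding is an assumption here, not something confirmed by the challenge description:

```python
from collections import namedtuple

Example = namedtuple("Example", ["url_id", "relevance", "query_id", "features"])

def parse_line(url_id, line):
    """Parse one hypothetical data set line; url_id is the implicit line number."""
    tokens = line.split()
    relevance = int(tokens[0])
    query_id = int(tokens[1].split(":")[1])  # "qid:17" -> 17
    features = {}
    for token in tokens[2:]:                 # sparse "index:value" features
        index, value = token.split(":")
        features[int(index)] = float(value)
    return Example(url_id, relevance, query_id, features)
```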

Initial Observation
In data set 1 there are ~473k URLs and ~19k queries. At first I thought this meant there are on average 473k/19k ≈ 24 relevance numbers for each query (see the actual distribution of counts in the figure below), i.e. corresponding to search results 1 to 24, but it seems there are several URLs per unique query that have the same relevance (e.g. URLx and URLy can both have relevance 2 for queryZ). The paper Learning to Rank with Ties seems potentially relevant for dealing with this.
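The distribution of judged URLs per query can be computed in a few lines; a sketch, given only the sequence of query ids (one per data set line):

```python
from collections import Counter

def urls_per_query_histogram(query_ids):
    """query_ids: the query id on each line of the data set.
    Returns a histogram: number of judged URLs -> how many queries have that many."""
    per_query = Counter(query_ids)        # query id -> number of judged URLs
    return Counter(per_query.values())    # URL count -> number of queries
```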

Multiple URLs that share a relevance level for a unique query can perhaps be due to:

  1. similar/duplicate content between the URLs?
  2. a frequent query (due to sampling of examples?)
  3. uncertainty about which URL to select for a particular relevance and query?
  4. there is a tie, i.e. they are equally relevant
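Whichever the cause, the tied groups themselves are easy to extract for inspection; a sketch, assuming each example is reduced to a (url_id, query_id, relevance) triple:

```python
from collections import defaultdict

def tied_groups(examples):
    """examples: iterable of (url_id, query_id, relevance) triples.
    Returns {(query_id, relevance): [url_ids]} for groups where several
    URLs share the same relevance for the same query, i.e. ties."""
    groups = defaultdict(list)
    for url_id, query_id, relevance in examples:
        groups[(query_id, relevance)].append(url_id)
    return {key: urls for key, urls in groups.items() if len(urls) > 1}
```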

Potential classification approach?
From a classification perspective, there are several (perhaps naive?) approaches that could be tried out:

  1. Use relevance levels as classes (nominal regression) and use a multiclass-classifier
  2. Train a classifier as a binary competition within each query, i.e. relevance 1 against 2, 3, .., and relevance n against n+1, .. (this will probably give some sparsity problems)
  3. Binary competition across queries. This is problematic because a relevance of 4 for one query could be more relevant than a relevance of 1 for another query (and there is no easy way to determine that directly from the data). But if the observation about multiple URLs per relevance level per query (see above) is caused by uncertainty, one could perhaps use 1/(number of URLs per relevance level per query) as a weight to either:
    1. support training across queries, e.g. a URL that is alone at relevance 1 for one query is a more confident example than a URL for another query with 37 URLs at relevance 1; this could perhaps be used somehow with regression. The remaining problem is comparing across different relevance levels, e.g. is a relevance 2 for a query with 1 URL more confident than a relevance 1 for a query with 37 URLs?
    2. use a classifier that supports weighing examples, combined with approach 1 or 2.
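The within-query binary competition (approach 2 above) amounts to a pairwise transform: for each query, build feature-difference examples labeled by which URL is more relevant. A minimal sketch, assuming dense feature lists and the convention stated earlier that a lower relevance number means higher relevance; ties are skipped, though the weighting idea above could be applied to them instead:

```python
from itertools import combinations

def pairwise_examples(query_group):
    """query_group: list of (features, relevance) for the URLs of one query.
    Emits (feature_difference, label) pairs with label +1 if the first URL
    is more relevant (lower relevance number, per the convention above)."""
    pairs = []
    for (fa, ra), (fb, rb) in combinations(query_group, 2):
        if ra == rb:
            continue  # a tied pair carries no ordering signal in this sketch
        diff = [a - b for a, b in zip(fa, fb)]
        pairs.append((diff, 1 if ra < rb else -1))
    return pairs
```

The resulting pairs could then be fed to any binary classifier; the trained weight vector scores URLs for ranking.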

More about ranking with machine learning?
Check out the learning to rank bibliography.


Best regards,

Amund Tveit, co-founder of Atbrox
