Abstract
Confidence-weighted (CW) learning [6], an online learning method for linear classifiers, maintains a Gaussian distribution over weight vectors, with a covariance matrix that represents uncertainty about weights and correlations. Confidence constraints ensure that a weight vector drawn from the hypothesis distribution correctly classifies examples with a specified probability. Within this framework, we derive a new convex form of the constraint and analyze it in the mistake bound model. Empirical evaluation with both synthetic and text data shows our version of CW learning achieves lower cumulative and out-of-sample errors than commonly used first-order and second-order online methods.
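For readers who want a concrete picture of the update the abstract describes, here is a minimal NumPy sketch of a CW-style step. It enforces the confidence constraint Pr_{w ~ N(mu, Sigma)}[y w·x >= 0] >= eta, which reduces to y mu·x >= phi sqrt(x' Sigma x) with phi = Phi^{-1}(eta). The coefficient formulas (alpha, beta) follow the closed-form variance update as commonly stated for this paper; the function name, the default eta = 0.9, and the diagonal-free full-covariance form are our own choices and should be checked against the published derivation.

```python
import numpy as np
from scipy.stats import norm

def cw_update(mu, Sigma, x, y, eta=0.9):
    """One confidence-weighted update on example (x, y), with y in {-1, +1}.

    Enforces Pr_{w ~ N(mu, Sigma)}[y * w.x >= 0] >= eta, equivalently
    y * mu.x >= phi * sqrt(x' Sigma x) where phi = Phi^{-1}(eta).
    """
    phi = norm.ppf(eta)        # inverse Gaussian CDF at the confidence level
    psi = 1.0 + phi ** 2 / 2.0
    zeta = 1.0 + phi ** 2

    Sx = Sigma @ x
    v = float(x @ Sx)          # margin variance  v = x' Sigma x
    m = float(y * (mu @ x))    # mean margin      m = y * mu.x

    # Closed-form Lagrange multiplier; alpha = 0 means the constraint
    # already holds and no update is made.
    alpha = max(
        0.0,
        (-m * psi + np.sqrt(m ** 2 * phi ** 4 / 4.0 + v * phi ** 2 * zeta))
        / (v * zeta),
    )
    if alpha == 0.0:
        return mu, Sigma

    # sqrt(u) from the closed form, then the variance step size beta.
    sqrt_u = 0.5 * (-alpha * v * phi + np.sqrt(alpha ** 2 * v ** 2 * phi ** 2 + 4.0 * v))
    beta = alpha * phi / (sqrt_u + v * alpha * phi)

    mu_new = mu + alpha * y * Sx                  # shift mean toward correct label
    Sigma_new = Sigma - beta * np.outer(Sx, Sx)   # shrink variance along x
    return mu_new, Sigma_new
```

A typical online loop would initialize mu = np.zeros(d) and Sigma = np.eye(d), then call cw_update once per example; the covariance shrinks fastest along directions that have been seen often, which is the "uncertainty about weights" the abstract refers to.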
Related Publications
Random-Walk Computation of Similarities between Nodes of a Graph with Application to Collaborative Recommendation
This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on...
Item-based top-N recommendation algorithms
The explosive growth of the World Wide Web and the emergence of e-commerce have led to the development of recommender systems, a personalized information filtering technology u...
Publication Info
- Year: 2008
- Type: article
- Volume: 21
- Pages: 345-352
- Citations: 126
- Access: Closed