Abstract

Confidence-weighted (CW) learning [6], an online learning method for linear classifiers, maintains a Gaussian distribution over weight vectors, with a covariance matrix that represents uncertainty about weights and correlations. Confidence constraints ensure that a weight vector drawn from the hypothesis distribution classifies examples correctly with a specified probability. Within this framework, we derive a new convex form of the constraint and analyze it in the mistake-bound model. Empirical evaluation with both synthetic and text data shows that our version of CW learning achieves lower cumulative and out-of-sample errors than commonly used first-order and second-order online methods.
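The abstract's mechanism — a Gaussian over weights whose mean is updated toward correct classification and whose variance shrinks along observed features — can be illustrated with a small sketch. This is a simplified diagonal-covariance, variance-based CW-style update written for this page, not a verbatim reproduction of the paper's algorithm; the closed-form step size `alpha` below is a commonly quoted form of the variance-constraint solution and should be checked against the paper before reuse.

```python
import math

def cw_variance_update(mu, sigma, x, y, phi=1.0):
    """One diagonal-covariance CW-style update (illustrative sketch).

    mu, sigma: lists holding the per-feature mean and variance of the
    weight distribution; x: feature vector; y: label in {+1, -1};
    phi: confidence parameter (inverse normal CDF of the target probability).
    """
    m = y * sum(mi * xi for mi, xi in zip(mu, x))      # signed margin mean
    v = sum(si * xi * xi for si, xi in zip(sigma, x))  # margin variance
    if v == 0.0:
        return mu, sigma
    # Step size for the variance-based constraint y*(mu.x) >= phi * v
    # (hedged: a commonly quoted closed form, not taken verbatim from the paper).
    disc = (1 + 2 * phi * m) ** 2 - 8 * phi * (m - phi * v)
    alpha = max(0.0, (-(1 + 2 * phi * m) + math.sqrt(max(disc, 0.0))) / (4 * phi * v))
    if alpha == 0.0:
        return mu, sigma  # constraint already satisfied
    # Move the mean toward correct classification, scaled by per-feature variance.
    mu = [mi + alpha * y * si * xi for mi, si, xi in zip(mu, sigma, x)]
    # Shrink variance on the features seen: sigma_i^{-1} += 2*alpha*phi*x_i^2.
    sigma = [1.0 / (1.0 / si + 2 * alpha * phi * xi * xi)
             for si, xi in zip(sigma, x)]
    return mu, sigma
```

Because confident (low-variance) features receive smaller updates, the sketch exhibits the second-order behavior the abstract compares against first-order methods: rare features keep large variance and adapt quickly, frequent ones stabilize.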

Keywords

Covariance matrix, Constraint (computer-aided design), Mathematics, Confidence interval, Confidence region, Regular polygon, Covariance, Convex combination, Gaussian, Artificial intelligence, Computer science, Mistake, Applied mathematics, Convex optimization, Mathematical optimization, Algorithm, Statistics

Publication Info

Year: 2008
Type: Article
Volume: 21
Pages: 345-352
Citations: 126 (OpenAlex)
Access: Closed

Cite This

Koby Crammer, Mark Dredze, Fernando Pereira (2008). Exact Convex Confidence-Weighted Learning. 21, 345-352.