Thursday, March 27, 2008

Optimization for Machine Learning

(56 min)

S.V.N. Vishwanathan, Research Scientist, presents:

Regularized risk minimization is at the heart of many machine learning algorithms. The underlying objective function is convex but often non-smooth, and classical smooth optimization algorithms cannot handle it efficiently. In this talk we present two algorithms for minimizing convex non-smooth objective functions.
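For concreteness, the objective the abstract refers to typically has the following form (a standard formulation; the exact losses and regularizers covered in the talk may differ), combining a regularizer Ω with an empirical risk over m training examples:

```latex
J(w) \;=\; \lambda\,\Omega(w) \;+\; \frac{1}{m}\sum_{i=1}^{m} \ell(x_i, y_i, w)
```

With Ω(w) = ½‖w‖² and the hinge loss ℓ(x, y, w) = max(0, 1 − y⟨w, x⟩), this is the linear SVM objective: convex, but non-differentiable wherever a margin y⟨w, x⟩ equals exactly 1.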


  1. First, we extend the well-known BFGS quasi-Newton algorithm to handle non-smooth functions (a structural sketch follows this list);
  2. Second, we show how bundle methods can be applied in a machine learning context (see the second sketch below).

We present both theoretical and experimental justification for our algorithms.
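As a rough illustration of the first idea, here is a naive quasi-Newton loop in Python that swaps subgradients in for gradients and skips the inverse-Hessian update when the curvature condition fails. The subBFGS algorithm presented in the talk refines both the direction-finding step and the line search for non-smooth functions, so this is only a structural sketch; the name `subgradient_bfgs` and all parameters are illustrative.

```python
import numpy as np

def subgradient_bfgs(f_and_subgrad, w0, max_iter=100, tol=1e-6):
    # Naive quasi-Newton loop: a subgradient stands in for the gradient, and
    # the BFGS update is skipped when the curvature condition s.y > 0 fails.
    # The subBFGS algorithm in the talk uses a more careful direction-finding
    # step and line search; treat this only as a structural sketch.
    w = w0.astype(float).copy()
    n = w.size
    B = np.eye(n)                       # inverse-Hessian approximation
    f, g = f_and_subgrad(w)
    for _ in range(max_iter):
        d = -B @ g                      # quasi-Newton direction
        step = 1.0
        while True:                     # backtracking (Armijo-style) line search
            f_new, g_new = f_and_subgrad(w + step * d)
            if f_new <= f + 1e-4 * step * (g @ d) or step < 1e-12:
                break
            step *= 0.5
        s, yvec = step * d, g_new - g
        w, f, g = w + s, f_new, g_new
        if np.linalg.norm(s) < tol:
            break
        sy = s @ yvec
        if sy > 1e-10:                  # update only under positive curvature
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, yvec)
            B = V @ B @ V.T + rho * np.outer(s, s)
    return w
```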
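The second idea can be sketched as a BMRM-style bundle method: accumulate cutting planes (linear lower bounds on the empirical risk built from subgradients) and repeatedly minimize the regularized piecewise-linear model. The sketch below assumes an L2 regularizer and solves each subproblem with SciPy's SLSQP solver; the names `bmrm` and `risk_and_subgrad`, the toy data, and all parameters are illustrative, not from the talk.

```python
import numpy as np
from scipy.optimize import minimize

def bmrm(risk_and_subgrad, w0, lam=1.0, max_iter=50, tol=1e-4):
    # Minimize lam/2 ||w||^2 + R_emp(w), where R_emp is convex and
    # risk_and_subgrad(w) returns (R_emp(w), a subgradient of R_emp at w).
    w = w0.astype(float).copy()
    n = w.size
    planes = []                          # cutting planes (a, b): R_emp(w) >= a.w + b
    best_ub = np.inf
    for _ in range(max_iter):
        r, g = risk_and_subgrad(w)
        best_ub = min(best_ub, 0.5 * lam * w @ w + r)   # true objective at w
        planes.append((g, r - g @ w))    # plane touching R_emp at the current w
        # Subproblem: min_{w, xi} lam/2 ||w||^2 + xi  s.t. xi >= a.w + b for all planes
        def obj(z):
            return 0.5 * lam * z[:n] @ z[:n] + z[n]
        cons = [{"type": "ineq",
                 "fun": lambda z, a=a, b=b: z[n] - a @ z[:n] - b}
                for a, b in planes]
        xi0 = max(a @ w + b for a, b in planes)
        res = minimize(obj, np.append(w, xi0), method="SLSQP", constraints=cons)
        w = res.x[:n]
        if best_ub - res.fun < tol:      # gap between objective and model lower bound
            break
    return w

# Hypothetical usage: linear SVM with hinge loss on toy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=100))

def hinge_risk(w):
    margins = y * (X @ w)
    active = margins < 1                 # examples with non-zero hinge loss
    risk = np.maximum(0.0, 1.0 - margins).mean()
    sub = (-(y[active, None] * X[active]).sum(axis=0) / len(y)
           if active.any() else np.zeros_like(w))
    return risk, sub

print(bmrm(hinge_risk, np.zeros(5), lam=0.1))
```

The termination test is the usual bundle-method one: the best objective value seen so far is an upper bound on the optimum, and the minimum of the cutting-plane model is a lower bound, so the loop stops once the two agree to within the tolerance.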
