Research Scientist S.V.N. Vishwanathan presents (56 min):
Regularized risk minimization is at the heart of many machine learning algorithms. The objective function to be minimized is convex but often non-smooth, and classical smooth optimization algorithms cannot handle such functions efficiently. In this talk we present two algorithms for dealing with convex non-smooth objective functions (a concrete hinge-loss example is sketched below).
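As a concrete illustration (my own, not taken from the talk), consider the L2-regularized hinge loss, a standard instance of regularized risk minimization: it is convex but has a kink wherever a margin equals exactly one, so at such points there is no gradient, only subgradients. All names and parameters below are illustrative assumptions.

```python
# Illustrative sketch (not the speaker's code): the L2-regularized
# hinge loss is convex but non-smooth -- it is non-differentiable at
# any w where some margin y_i <w, x_i> equals exactly 1.
import numpy as np

def objective(w, X, y, lam=0.1):
    """lam/2 ||w||^2 + mean hinge loss over (X, y) with labels +/-1."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w))
    return lam / 2.0 * (w @ w) + hinge.mean()

def subgradient(w, X, y, lam=0.1):
    """One valid subgradient at w; it coincides with the gradient
    wherever no margin sits exactly on the kink."""
    active = 1.0 - y * (X @ w) > 0
    return lam * w - (y[active, None] * X[active]).sum(axis=0) / len(y)
```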
- First, we extend the well-known BFGS quasi-Newton algorithm to handle non-smooth functions;
- Second, we show how bundle methods can be applied in a machine learning context (a minimal cutting-plane sketch follows this list). We present both theoretical and experimental justification for our algorithms.
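Below is a minimal cutting-plane sketch in the spirit of bundle methods for regularized risk minimization, using the hinge-loss risk from the example above: it builds a piecewise-linear lower bound on the empirical risk from subgradients and minimizes the regularized model by solving a small dual QP at each step. Everything here (function names, the SLSQP-based QP solve, the stopping rule) is an assumption for illustration, not the speaker's implementation.

```python
# A minimal bundle-style (cutting-plane) sketch for
#   min_w  lam/2 ||w||^2 + R_emp(w),
# with R_emp the mean hinge loss. Illustrative only.
import numpy as np
from scipy.optimize import minimize

def risk_and_subgrad(w, X, y):
    """Mean hinge-loss risk and one subgradient at w."""
    margins = 1.0 - y * (X @ w)
    active = margins > 0
    risk = margins[active].sum() / len(y)
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    return risk, grad

def bundle_method(X, y, lam=0.1, max_iter=50, tol=1e-4):
    w = np.zeros(X.shape[1])
    A, b = [], []  # cuts: R_emp(v) >= a @ v + off for every v
    for _ in range(max_iter):
        risk, a = risk_and_subgrad(w, X, y)
        A.append(a)
        b.append(risk - a @ w)  # offset makes the cut tight at w
        At, bt = np.array(A), np.array(b)
        # Dual of  min_w lam/2 ||w||^2 + max_i (a_i @ w + b_i):
        # minimize ||At.T @ alpha||^2 / (2 lam) - bt @ alpha
        # over the probability simplex.
        def neg_dual(alpha):
            v = At.T @ alpha
            return v @ v / (2.0 * lam) - bt @ alpha
        k = len(b)
        res = minimize(neg_dual, np.full(k, 1.0 / k),
                       bounds=[(0.0, None)] * k,
                       constraints={"type": "eq",
                                    "fun": lambda al: al.sum() - 1.0},
                       method="SLSQP")
        w = -At.T @ res.x / lam
        # Stop when the true objective meets the model lower bound.
        model = lam / 2.0 * (w @ w) + (At @ w + bt).max()
        obj = lam / 2.0 * (w @ w) + risk_and_subgrad(w, X, y)[0]
        if obj - model < tol:
            break
    return w
```

The gap between the true objective and the model is the natural stopping criterion: each cut is a subgradient inequality, so the piecewise-linear model lower-bounds the empirical risk and a small gap certifies near-optimality.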