CNN Part 4: Parameter Update & Optimization

For the fourth session of our reading group on the Stanford Convolutional Neural Networks class, we went through the following slides:

Take-home message:

There are several ways to decay the learning rate over time, and some form of decay is recommended so that the updates stop overshooting the minimum as training progresses (see the sketch below). Momentum helps the optimizer push through saddle points and shallow local minima, although the accumulated velocity can make it overshoot. There is no principled way to find the optimal hyperparameters, but tuning them is critical for good results.
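To make the two update ideas concrete, here is a minimal NumPy sketch of SGD with classical momentum combined with a step-decay learning-rate schedule. It is an illustration only, not the course's reference code: the function names, the decay interval, and the toy objective are all assumptions chosen for the example.

```python
import numpy as np

def sgd_momentum_step(x, dx, v, learning_rate, mu=0.9):
    """One parameter update with classical momentum.

    x  : current parameters (np.ndarray)
    dx : gradient of the loss with respect to x
    v  : velocity carried over from the previous step
    mu : momentum coefficient (commonly between 0.5 and 0.99)
    """
    v = mu * v - learning_rate * dx   # accumulate a running velocity
    x = x + v                         # move the parameters along the velocity
    return x, v

def step_decay(initial_lr, epoch, decay_every=10, decay_factor=0.5):
    """Multiply the learning rate by `decay_factor` every `decay_every` epochs."""
    return initial_lr * (decay_factor ** (epoch // decay_every))

# Toy usage (hypothetical objective): minimize f(x) = 0.5 * ||x||^2,
# whose gradient is simply x, so the minimum sits at the origin.
x = np.array([5.0, -3.0])
v = np.zeros_like(x)
initial_lr = 0.1
for epoch in range(30):
    lr = step_decay(initial_lr, epoch)
    dx = x                            # gradient of the toy objective
    x, v = sgd_momentum_step(x, dx, v, lr)
print(x)  # should end up close to the minimum at the origin
```

Step decay is only one option; exponential decay and 1/t decay are common alternatives, and the decay interval and factor are themselves hyperparameters that typically have to be tuned empirically.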
