Zhiming organized a Chinese hot pot and movie night.
For the second reading group on the Stanford University Convolutional Neural Networks class, we went through the following slides:
In practice, use rectified linear or maxout activation functions, which are both piecewise linear. The bigger the network, the better, but regularization might be required.
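Both activations mentioned above are piecewise linear, which is easy to see from a minimal sketch (the shapes and names below are illustrative, not from the slides):

```python
import numpy as np

def relu(x):
    # Rectified linear unit: max(0, x), piecewise linear with a kink at 0.
    return np.maximum(0, x)

def maxout(x, W, b):
    # Maxout takes the max over k affine functions of x, so it is also
    # piecewise linear. W has shape (k, d), b has shape (k,), x has shape (d,).
    return np.max(W @ x + b)

x = np.array([-2.0, 0.5])
print(relu(x))  # [0.  0.5]
```

ReLU is in fact the special case of maxout with one affine piece fixed at zero.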
We made homemade sushi and dumplings at Charles’ place!
For this first reading group on the Stanford University Convolutional Neural Networks class, we went through the following slides:
- Image classification, data-driven approach, k-nearest neighbor
- Linear classification: SVM/Softmax
- Optimization, higher-level representations, image features (first half)
Two loss functions are commonly used: the softmax (cross-entropy) loss and the multiclass SVM (hinge) loss. Training then consists of finding the weights that minimize the chosen loss.
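As a rough sketch of the two losses for a single example (class scores and labels below are made up for illustration):

```python
import numpy as np

def svm_loss(scores, y, delta=1.0):
    # Multiclass SVM (hinge) loss: sum, over incorrect classes, of how far
    # each score comes within a margin `delta` of the correct class score.
    margins = np.maximum(0, scores - scores[y] + delta)
    margins[y] = 0  # the correct class contributes no loss
    return margins.sum()

def softmax_loss(scores, y):
    # Softmax (cross-entropy) loss: negative log probability of the true class.
    shifted = scores - scores.max()  # shift for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(probs[y])

scores = np.array([3.2, 5.1, -1.7])  # hypothetical scores for 3 classes
print(svm_loss(scores, y=0))     # 2.9
print(softmax_loss(scores, y=0))
```

Note the qualitative difference: the hinge loss is exactly zero once all margins are satisfied, while the softmax loss always keeps pushing the true-class probability toward 1.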
There were a lot of Belgian beers left, so we had to do a beer tasting part 2! Friends of the lab Charles and Louis-Philippe came along. Charles brought some that he made himself, which were surprisingly good, kudos to him!
Introduction by Zhiming:
Because we will take the CNN course from Stanford University, this reading group focuses only on shallow image representations, not on the deep learning part. The goal of this reading group is to understand the basic idea behind Bag of Words (BoW), how to incorporate spatial information into BoW, some more advanced encoding methods (VLAD, the Improved Fisher Vector), and some SVM kernels.
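The core BoW step can be summarized in a few lines: quantize each local descriptor to its nearest visual word and build a normalized histogram of word counts. A minimal sketch with toy random data (the array sizes and names are assumptions for illustration):

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    # descriptors: (n, d) local descriptors from one image (e.g. SIFT).
    # vocabulary: (k, d) visual words, typically k-means cluster centres.
    # Assign each descriptor to its nearest visual word by Euclidean distance.
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    # Count assignments per word and L1-normalize, so images with different
    # numbers of descriptors remain comparable.
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
vocab = rng.normal(size=(4, 8))    # toy vocabulary of 4 visual words
desc = rng.normal(size=(100, 8))   # toy descriptors for one image
print(bow_histogram(desc, vocab))  # 4-bin normalized histogram
```

Spatial pyramids, VLAD, and Fisher Vectors all refine this basic quantize-and-pool step.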
Continue reading Image Representation
The lab went out to dinner for Sébastien’s birthday at Guacamole y Tequila.
Introduction by Sébastien:
For this first reading group, I propose to go back to the basics. The objective is to be able to read ROC and PR plots and to interpret them correctly. There are thousands of papers on the evaluation of classifiers, and it would be impossible to cover all of that knowledge in a single reading group, so I decided to focus on two-class classifiers, putting the emphasis mostly on the meaning of the points in these spaces and on what happens to the classifiers they represent when they are combined or tuned. Curves are also briefly addressed. The readings I propose are distributed as follows. I only selected papers published after the year 2000; moreover, most of the selected papers have been cited nearly one thousand times.
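As background for reading the plots: a two-class classifier's confusion matrix maps to exactly one point in each space. A small sketch with a made-up confusion matrix:

```python
def roc_pr_point(tp, fp, fn, tn):
    # ROC space plots (false positive rate, true positive rate);
    # PR space plots (recall, precision). Note TPR and recall are the same
    # quantity, which links the two spaces.
    tpr = tp / (tp + fn)       # true positive rate == recall
    fpr = fp / (fp + tn)       # false positive rate
    precision = tp / (tp + fp)
    return (fpr, tpr), (tpr, precision)

# Hypothetical confusion matrix: 80 TP, 10 FP, 20 FN, 90 TN.
roc, pr = roc_pr_point(tp=80, fp=10, fn=20, tn=90)
print(roc)  # (0.1, 0.8)
print(pr)   # (0.8, 0.888...)
```

A threshold-based classifier swept over its threshold traces a curve of such points, which is what the ROC and PR curves in the papers show.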
Continue reading Understanding the ROC and PR spaces
We tried to introduce hockey to our international lab members. First, Pierre-Marc gave a crash course on hockey.
Continue reading Hockey Night in Canada