Fine-Tuning ImageNet model for Classification

What am I doing today? I have installed caffe and the required libraries using this really good guide. The aim of my experiment is to fine-tune the VGG-16 network for classification. The VGG-16 network I use is pre-trained on ImageNet for classification. I will exploit its caffemodel to fine-tune the weights for my own purpose. Data Preparation: getting the data prepared correctly finishes most of your work. Step 1: go to the caffe/data folder and create your own data folder there....
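As a minimal sketch of the data-preparation step described above: caffe's image-list input format expects a plain text file of "image_path label" pairs per split. The folder name `my_dataset` and the sample file names here are purely illustrative.

```python
import os

# Assumed layout: a dataset folder under caffe/data containing a
# train.txt that lists "relative/path/to/image.jpg label" pairs.
data_dir = "caffe/data/my_dataset"  # hypothetical folder name
os.makedirs(data_dir, exist_ok=True)

# Toy examples only; real entries would point at your actual images.
samples = [("images/cat_001.jpg", 0), ("images/dog_001.jpg", 1)]
with open(os.path.join(data_dir, "train.txt"), "w") as f:
    for path, label in samples:
        f.write(f"{path} {label}\n")
```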

November 16, 2016 · 4 min · Deshana

Fancy PCA (Data Augmentation) with Scikit-Image

Let’s start with the basics! We know that an integer variable is stored in 4 bytes. An integer array would be a consecutive stream of many such 4-byte chunks. A string of text would store a number of bytes proportional to its characters, perhaps with a little padding. Storage of numbers and text is understood, but how on earth would we store an image? How do we turn an image into something that can be processed and stored in memory?...
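The answer the excerpt is building toward can be sketched in one line of NumPy: an image is just an array of pixel intensities, so its memory footprint follows directly from its shape and dtype. The sizes below are illustrative, not from the post.

```python
import numpy as np

# A grayscale image is a 2-D array of pixel intensities;
# an 8-bit image uses exactly one byte per pixel.
gray = np.zeros((480, 640), dtype=np.uint8)
print(gray.nbytes)  # 480 * 640 = 307200 bytes

# A colour image adds a channel axis: height x width x 3 (RGB),
# so it takes three times the memory of the grayscale version.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
print(rgb.nbytes)  # 480 * 640 * 3 = 921600 bytes
```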

October 22, 2016 · 9 min · Deshana

Evaluation of Results using Mean Average Precision

There are several reasons why the evaluation of results on datasets like Pascal VOC and ILSVRC is hard. These are well described in the Pascal VOC 2009 challenge paper. Here are some of them: images may contain instances of multiple classes, so it is not sufficient to simply ask, “Which one of the m classes does this image belong to?” and then compare the predicted result with the actual....
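For context on the metric in the title: average precision (AP) summarises a ranked list of predictions by averaging precision at each correctly retrieved item, and mAP is the mean of AP over all classes. A minimal sketch (the 0/1 relevance list below is a made-up example, not data from the post):

```python
def average_precision(ranked_relevance):
    """AP over a ranked list of 0/1 relevance judgements,
    ordered from highest- to lowest-scoring prediction."""
    hits = 0
    precisions = []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)  # precision at this recall point
    return sum(precisions) / max(hits, 1)

# Toy example: hits at ranks 1, 3 and 4.
ap = average_precision([1, 0, 1, 1, 0])  # (1/1 + 2/3 + 3/4) / 3
```

mAP would then be the plain mean of `average_precision` over every class in the dataset.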

June 19, 2016 · 5 min · Deshana

Playing around with RCNN, a state-of-the-art visual object detection system.

I was playing around with this implementation of RCNN released in 2015 by Ross Girshick. This method is described in detail in his Faster R-CNN paper released at NIPS 2015. (I was there and this groundbreaking unfurling of CNN+RCNN was happening around me, which gives me all the more reason to be super excited!). I used the pre-trained VGG-16 net, where VGG stands for Visual Geometry Group and 16 because the network is 16-layered....

June 19, 2016 · 3 min · Deshana