Monday 20 May 2013

Comparing detector performance with different numbers of features


In my last entry I hypothesized that the under-performance of my first detector, compared to the one documented in the original paper, could be explained by the difference in the number of features used. To verify this, I trained a classifier with only 10K features and ran tests comparing both detectors. The results are shown in the graph below.

As the graph shows, the classifier trained with 10K features scores ~4 points below the one trained with 15K features at the reference value of 0.0001 FPPW. With these results in mind, it seems safe to assume that if the classifier had been trained with 30K features, as in the original paper, the detector would most likely achieve results similar to the ones documented in the publication.
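For reference, this is how I read a classifier's detection rate off its scores at a fixed FPPW. A minimal Python sketch, assuming the per-window scores on positive and negative test windows are already available as arrays (the names and signature below are mine, not from the paper):

    import numpy as np

    def detection_rate_at_fppw(scores_pos, scores_neg, target_fppw=1e-4):
        # Pick the threshold that lets through roughly target_fppw of the
        # negative (non-pedestrian) windows, then measure how many positive
        # windows still score at or above it.
        scores_neg = np.sort(np.asarray(scores_neg))[::-1]  # descending
        k = max(int(np.ceil(target_fppw * len(scores_neg))), 1)
        threshold = scores_neg[k - 1]
        return float(np.mean(np.asarray(scores_pos) >= threshold))

Calling this once per classifier, with the same negative window set, gives the points being compared above.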

Regards.

Tuesday 14 May 2013

First results

I've finally finished my first implementation of the pedestrian detection algorithm. Results for the INRIA dataset are shown in the following graph.


At the reference value of 0.0001 false positives per window (FPPW), my detector correctly labels ~79.5% of pedestrian windows. This is around 10 points below the original paper, for which I offer two explanations.

First and foremost, I used a pool of 15000 features for learning and classification, whereas the original paper used 30000.
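For concreteness, a pool like this can be generated by random sampling. The sketch below assumes rectangular features drawn over a fixed-size detection window and a set of feature channels; the window size, channel count, and feature parametrisation are assumptions for illustration, not necessarily those of the paper:

    import random

    def sample_feature_pool(n_features, n_channels=10, win_w=64, win_h=128,
                            min_size=4, seed=0):
        # Each feature is a (channel, x, y, w, h) rectangle inside the
        # detection window; all sizes here are assumed for the sketch.
        rng = random.Random(seed)
        pool = []
        for _ in range(n_features):
            w = rng.randint(min_size, win_w)
            h = rng.randint(min_size, win_h)
            x = rng.randint(0, win_w - w)
            y = rng.randint(0, win_h - h)
            pool.append((rng.randrange(n_channels), x, y, w, h))
        return pool

    pool_15k = sample_feature_pool(15000)
    pool_30k = sample_feature_pool(30000)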

Secondly, the original paper uses an optimized boosted cascade for decision-making. This type of classification not only speeds up the algorithm by several orders of magnitude, but also leads to slightly better detection performance, since it is designed to reject most false positives in the first stages of the cascade.
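To illustrate the idea (in a soft-cascade flavour, which may differ in detail from the paper's exact scheme), here is a minimal Python sketch where each stage adds the votes of a few weak learners and rejects the window early if the running score falls below that stage's threshold:

    def cascade_classify(x, stages):
        # stages: list of (weak_learners, stage_threshold) pairs; each weak
        # learner maps the window's feature vector x to a real-valued vote.
        score = 0.0
        for weak_learners, stage_threshold in stages:
            score += sum(h(x) for h in weak_learners)
            if score < stage_threshold:
                return False  # rejected early: most negatives exit here cheaply
        return True           # survived every stage: labelled as a pedestrian

Since the vast majority of scanned windows contain no pedestrian, most of them exit after only a handful of cheap weak-learner evaluations, which is where the speed-up comes from.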

Since I did not have the time to implement this type of classification myself, I resorted to extracting fewer features to speed up the algorithm, sacrificing some performance in the process.

Given these explanations, I consider the method validated, and the next step is to test it on our own, self-collected dataset.

Best regards