Videos, Slides, Films

Can Data Placement Be Effective for Neural Networks Classification Tasks? Introducing the Orthogonal Loss

Conferences
ICPR 2020 MAIN CONFERENCE PS T1.5: Pattern Recognition and Neural Network Applications (2021)
Available as
Online
Summary

Traditionally, Neural Network classification loss functions follow the same principle: minimize the distance between samples of the same class while maximizing the distance to the other classes, with no restriction on the spatial placement of the deep features (the input to the last layer). This paper addresses that issue by providing a set of loss functions that train a classifier while forcing the deep features to be projected onto a predefined orthogonal basis. Experimental results show that these 'data placement' loss functions can exceed the training accuracy obtained with the classic cross-entropy loss function.
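The paper itself does not spell out an implementation in this summary, but the core idea can be illustrated with a minimal sketch: assign each class a fixed axis of a predefined orthogonal basis (here, the standard one-hot axes) and penalize the deviation of each L2-normalized deep feature from its class axis. The function name, the choice of basis, and the squared-distance penalty are all assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def orthogonal_loss(features, labels, num_classes):
    """Hypothetical 'data placement' loss sketch: push each sample's
    deep feature toward the orthogonal basis vector of its class."""
    # L2-normalize so the loss compares directions, not magnitudes.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-12, None)
    # Predefined orthogonal basis: one standard axis per class.
    basis = np.eye(num_classes)
    targets = basis[labels]  # target direction for each sample
    # Mean squared deviation of each unit feature from its class axis.
    return float(np.mean(np.sum((unit - targets) ** 2, axis=1)))

# Toy batch: 3 samples in a 4-dim feature space, classes 0, 1, 2.
feats = np.array([[5.0, 0.0, 0.0, 0.0],   # already on class-0 axis
                  [0.0, 2.0, 0.0, 0.0],   # already on class-1 axis
                  [0.0, 0.0, 1.0, 1.0]])  # off the class-2 axis
labels = np.array([0, 1, 2])
print(orthogonal_loss(feats, labels, num_classes=4))
```

In practice such a term would be combined with (or replace) cross-entropy during training; the sketch only shows the geometric penalty itself.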
