Classification problems solved with deep neural networks (DNNs) typically rely on a closed-world paradigm and optimize over a single objective (e.g., minimization of the cross-entropy loss). This setup dismisses all kinds of supporting signals that could be used to reinforce the existence or absence of particular patterns. The increasing need for models that are interpretable by design makes the inclusion of such contextual signals a crucial necessity. To this end, we introduce the notion of Self-Supervised Autogenous Learning (SSAL). An SSAL objective is realized through one or more additional targets that are derived from the original supervised classification task, following architectural principles found in multi-task learning. SSAL branches impose low-level priors (e.g., grouping) on the optimization process. The ability to use SSAL branches during inference allows models to converge faster while focusing on a richer set of class-relevant features. We equip state-of-the-art DNNs with SSAL objectives and report consistent improvements for all of them on CIFAR-100 and ImageNet. We show that SSAL models outperform similar state-of-the-art methods focused on contextual loss functions, auxiliary branches, and hierarchical priors.
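To make the setup concrete, below is a minimal PyTorch sketch of one possible SSAL configuration, assuming a grouping-based auxiliary target in which contiguous blocks of fine labels form coarse groups. The names (SSALNet, ssal_loss), the placeholder backbone, the branch placement, and the weighting factor alpha are illustrative assumptions, not the paper's exact architecture or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SSALNet(nn.Module):
    """Backbone with a main classifier and one SSAL branch.

    The SSAL target here is a coarse grouping of the original
    classes (fine label // group_size), i.e., a target derived
    from the supervised task itself rather than extra annotation.
    """

    def __init__(self, num_classes=100, group_size=5, feat_dim=512):
        super().__init__()
        # Stand-in for a real DNN backbone (e.g., a ResNet trunk).
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU()
        )
        self.main_head = nn.Linear(feat_dim, num_classes)       # fine classes
        self.ssal_head = nn.Linear(feat_dim, num_classes // group_size)  # coarse groups
        self.group_size = group_size

    def forward(self, x):
        h = self.backbone(x)
        return self.main_head(h), self.ssal_head(h)

def ssal_loss(main_logits, ssal_logits, labels, group_size=5, alpha=0.3):
    # The SSAL target is derived from the supervised labels:
    # contiguous blocks of `group_size` fine classes form one group.
    coarse = labels // group_size
    return (F.cross_entropy(main_logits, labels)
            + alpha * F.cross_entropy(ssal_logits, coarse))

# Usage (CIFAR-100-sized inputs, assumed shapes):
model = SSALNet()
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 100, (8,))
main_logits, ssal_logits = model(x)
loss = ssal_loss(main_logits, ssal_logits, y)

Since the abstract states that SSAL branches remain usable during inference, one plausible fusion under the grouping assumption above is to add each coarse logit to the logits of its member fine classes before taking the argmax; the exact fusion rule used in the paper may differ.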
Cite as:

@InProceedings{palacio2020contextual,
  author    = {Sebastian Palacio and Philipp Engler and Joern Hees and Andreas Dengel},
  title     = {Contextual Classification Using Self-Supervised Auxiliary Models for Deep Neural Networks},
  booktitle = {International Conference on Pattern Recognition (ICPR)},
  month     = {January},
  year      = {2021}
}
This work was supported by the BMBF projects ExplAINN (01IS19074) and DeFuseNN (01IW17002), and by the NVIDIA AI Lab program.