Continuous dimensional models of human affect have been shown to identify a broader range of emotions more accurately than discrete categorical approaches, which deal only with emotion categories such as joy, sadness, and anger. Unlike the majority of existing works on dimensional models of human affect (VAD, i.e. Valence-Arousal-Dominance), which rely on training-based approaches, here we propose a novel unsupervised approach for ranking continuous emotions in images using canonical polyadic decomposition. To demonstrate the efficacy of the proposed approach, we provide theoretical and empirical evidence that our system achieves a Pearson Correlation Coefficient that outperforms the state of the art by a large margin: for valence rank estimation, it improves from 0.407 to 0.6721 in one experiment and from 0.35 to 0.7143 in another. To this end, we conduct experiments on four major emotion recognition datasets, i.e. CK+, AFEW-VA, SEMAINE, and AffectNet, and analyze the observed results. The datasets are chosen to cover images collected in controlled environments such as laboratory settings (CK+ and SEMAINE), in semi-controlled environments (AFEW-VA), and in uncontrolled, in-the-wild environments (AffectNet).
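For reference, the Pearson Correlation Coefficient cited above measures the linear agreement between predicted and ground-truth valence ranks. A minimal pure-Python sketch of this evaluation metric (the function name `pearson` and the toy inputs are illustrative, not taken from the paper's implementation):

```python
import math

def pearson(x, y):
    """Pearson Correlation Coefficient between two equal-length sequences.

    Returns a value in [-1, 1]: 1 for perfect positive linear agreement,
    -1 for perfect negative agreement, 0 for no linear correlation.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance term (numerator) and the two standard-deviation terms.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly agreeing rank predictions yield 1.0.
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # -> 1.0
# Reversed ranks yield -1.0.
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # -> -1.0
```

Under this metric, a rise from 0.407 to 0.6721 means the estimated valence ranks move substantially closer to the ground-truth ordering.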