Abstract: The ImageCLEF 2017 lifelog summarization challenge [10, 12] was established as a benchmark for summarizing egocentric lifelogging videos of daily activities, such as ‘commute to work’ or ‘cooking at home’. In this paper, we propose an iterative approach for summarizing lifelogging activities based on the task queries provided by the ImageCLEF 2017 lifelog summarization challenge. YOLOv3 object detection, TensorFlow GoogLeNet image classification, and Places365 scene classification are used to generate low-level deep-learned features from the lifelogging images. A nearest neighbor classifier then generates high-level descriptors that classify the lifelogger's activity on a per-image basis, as required by the ground-truth labels. Finally, key-frame images for each activity are selected via hierarchical clustering to create an accurate and diverse static storyboard of the summarized lifelog activities. Experimental results show that the proposed approach outperforms the highest results reported in the ImageCLEF 2017 lifelog summarization competition.
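The final step of the pipeline, selecting representative key frames per activity via hierarchical clustering, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each image is already represented by a feature vector, uses Ward-linkage agglomerative clustering from SciPy, and picks the image nearest each cluster centroid as that cluster's key frame; the function name `select_keyframes` and the choice of `n_keyframes` clusters are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_keyframes(features, n_keyframes=3):
    """Select one representative key frame per cluster.

    features: (n_images, d) array of deep features for one activity.
    Returns sorted indices of the selected key-frame images.
    """
    if len(features) <= n_keyframes:
        return list(range(len(features)))
    # Ward-linkage hierarchical clustering, cut into n_keyframes clusters
    Z = linkage(features, method="ward")
    labels = fcluster(Z, t=n_keyframes, criterion="maxclust")
    keyframes = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        # the image nearest the cluster centroid represents the cluster
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        keyframes.append(int(idx[np.argmin(dists)]))
    return sorted(keyframes)

# toy demo: three tight groups of 2-D points stand in for image features
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(loc, 0.1, size=(5, 2)) for loc in (0.0, 5.0, 10.0)])
print(select_keyframes(feats, n_keyframes=3))
```

Choosing the image closest to each cluster centroid (rather than, say, the first image in the cluster) encourages storyboards that are both representative of each sub-scene and visually diverse across clusters.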
|Comments:||Presented at BMVC 2019: Workshop on Applications of Egocentric Vision (EgoApp), Cardiff, UK.|
|Paper:||Paper (PDF): EgoApp2019_1.pdf|
|Supplementary:||Supplementary (PDF): EgoApp2019_1_Supp.pdf|