We present a novel qualitative, dynamic-length sliding-window method which enables a mobile robot to temporally segment activities taking place in live RGB-D video. We demonstrate how activities can be learned from observations by encoding qualitative spatio-temporal relationships between entities in the scene. We also show how a Nearest Neighbour model can recognise activities taking place even when they temporally co-occur. Our system is validated on a challenging dataset of daily living activities.
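To make the abstract's pipeline concrete, here is a minimal sketch, not the authors' implementation, of the two ingredients it names: qualitative spatio-temporal relations between scene entities, and Nearest Neighbour recognition over a window of frames. Everything here is assumed for illustration: the toy "near"/"far" relation, the threshold, the histogram feature, and all function names are hypothetical; the paper's dynamic-length window is simplified to a fixed window.

```python
# Illustrative sketch only -- not the paper's code. Assumes a toy
# distance-based qualitative relation ("near"/"far") between pairs of
# tracked entities, histogram features over a window of frames, and a
# 1-Nearest-Neighbour classifier. All names are hypothetical.
from collections import Counter
from itertools import combinations
import math

NEAR_THRESHOLD = 0.5  # metres; arbitrary value for illustration


def qualitative_relations(frame):
    """Map a frame {entity_name: (x, y, z)} to a set of qualitative relations."""
    rels = set()
    for (a, pa), (b, pb) in combinations(sorted(frame.items()), 2):
        rel = "near" if math.dist(pa, pb) < NEAR_THRESHOLD else "far"
        rels.add((a, b, rel))
    return rels


def window_histogram(frames):
    """Normalised histogram of qualitative relations over a window of frames."""
    counts = Counter(rel for frame in frames
                     for rel in qualitative_relations(frame))
    total = sum(counts.values()) or 1
    return {rel: n / total for rel, n in counts.items()}


def histogram_distance(h1, h2):
    """Euclidean distance between two sparse relation histograms."""
    keys = set(h1) | set(h2)
    return math.sqrt(sum((h1.get(k, 0.0) - h2.get(k, 0.0)) ** 2 for k in keys))


def nearest_neighbour_label(window, training_examples):
    """Classify a window of frames by its nearest training example.

    training_examples: list of (frames, activity_label) pairs.
    """
    query = window_histogram(window)
    return min(
        training_examples,
        key=lambda ex: histogram_distance(query, window_histogram(ex[0])),
    )[1]
```

A live system along the lines the abstract describes would slide such a window over incoming RGB-D frames, growing or shrinking its length as the set of active relations changes; the fixed-window histogram above only illustrates the feature-and-classifier side of that pipeline.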