New Problems in Active Sampling for Mobile Robotic Online Learning

Abstract

AI models deployed in real-world tasks (e.g., surveillance, implicit mapping, health care) typically need to be trained online to better model changing real-world environments, and various online training methods (e.g., domain adaptation, few-shot learning) have been proposed for refining AI models based on training input sampled from the real world. However, one part of the online training loop is rarely discussed: the sampling of training input from the real world. In this paper, we show, from the perspective of online training of AI models deployed on edge devices (e.g., robots), that several problems in the sampling of training input hinder both the effectiveness (e.g., final training accuracy) and the efficiency (e.g., accuracy gain per training epoch) of the online training process. Notably, online training relies on training input sampled consecutively from the real world and suffers from a locality problem: consecutive samples from nearby states (e.g., the position and orientation of a camera) are too similar and limit training efficiency. On the other hand, while we could sample more around inaccurate samples to improve final training accuracy, it is costly to obtain the accuracy statistics of samples through traditional means such as validation, especially for AI models deployed on edge devices. These findings aim to motivate research effort toward practical online training of AI models, so that they can achieve resilient and sustained high performance in real-world tasks.
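To make the locality problem concrete, the following is a minimal illustrative sketch (not the method proposed in the paper): a naive filter over a stream of consecutive samples that discards those whose camera state is too close to the last kept sample. All names and thresholds (PoseSample, min_translation, min_rotation) are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only: drop consecutive samples whose camera pose is too
# close to the last kept one, since such near-duplicate samples contribute
# little accuracy gain per training epoch (the "locality problem").
# All names and thresholds below are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class PoseSample:
    x: float       # camera position (meters)
    y: float
    yaw: float     # camera orientation (radians)
    frame: object  # the sampled training input, e.g., an image

def filter_local_samples(stream, min_translation=0.5, min_rotation=math.radians(15)):
    """Keep a sample only if the camera moved or rotated enough since the last kept one."""
    kept = []
    for s in stream:
        if not kept:
            kept.append(s)
            continue
        last = kept[-1]
        moved = math.hypot(s.x - last.x, s.y - last.y)
        # wrap angle difference into [-pi, pi) before comparing
        turned = abs((s.yaw - last.yaw + math.pi) % (2 * math.pi) - math.pi)
        if moved >= min_translation or turned >= min_rotation:
            kept.append(s)  # sufficiently novel state: keep for online training
    return kept
```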

Publication
In the 6th IEEE International Workshop on Advances in Artificial Intelligence & Machine Learning
Junming Wang
MPhil Student

My research interests focus on robotic vision and distributed robotic systems.