Knowledge Distillation for Robotics Time Series Classification

Authors

  • Javidan Abdullayev, IRIMAS, Université de Haute-Alsace
  • Maxime Devanne, IRIMAS, Université de Haute-Alsace
  • Jonathan Weber, IRIMAS, Université de Haute-Alsace
  • Germain Forestier, IRIMAS, Université de Haute-Alsace

DOI:

https://doi.org/10.60643/urai.v2023p3

Abstract

Recently, deep learning models have shown great success in a variety of fields, especially computer vision, speech recognition, and natural language processing. This success has motivated researchers to apply deep learning to time series analysis, especially Time Series Classification (TSC). A trend we witness in the deep learning field is that state-of-the-art models become more complex over time. It is often impractical to deploy very complex deep learning models to embedded systems (edge devices, mobile phones), robots, or a production environment due to resource constraints. In the deep learning context, knowledge distillation is a model compression technique used to transfer knowledge from a heavy (deep) model to a lightweight model. As a result, the lightweight model requires fewer resources in terms of memory and computation while delivering competitive performance compared to the heavy model. The purpose of this paper is to introduce and explore the concept of Knowledge Distillation (KD) for time series classification, with a specific focus on robotics time series classification using the state-of-the-art Inception architecture. Given that deep learning models are increasingly employed for time series classification, we believe knowledge distillation is a viable research direction for the future.
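For readers unfamiliar with KD, the sketch below shows the classic distillation objective of Hinton et al. (2015), in which the student is trained on a weighted mix of the teacher's temperature-softened output distribution and the ordinary hard-label loss. This is a minimal illustration in PyTorch, not the paper's implementation; the temperature and weighting values are assumed defaults for demonstration only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Classic KD loss: soft-target KL term + hard-label cross-entropy.

    `temperature` and `alpha` are illustrative values, not settings
    from this paper.
    """
    # Soften both output distributions with the temperature so the
    # student can learn from the teacher's relative class probabilities.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)

    # KL divergence between softened distributions; scaled by T^2 to
    # keep gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2

    # Ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In a typical training loop, the teacher (e.g., a large Inception-based TSC model) is frozen and run in evaluation mode to produce `teacher_logits`, while only the lightweight student's parameters are updated with this combined loss.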

Published

13.05.2025