Scalable and Accurate Subsequence Transform for Time Series Classification

Abstract

Time series classification using phase-independent subsequences called shapelets is one of the best approaches in the state of the art. This approach is notably appreciated for its interpretability and its fast prediction time. However, given a dataset of n time series of length at most m, learning shapelets requires a computation time of O(n²m⁴), which is too high for practical datasets. In this paper, we exploit the fact that shapelets are shared by the members of the same class to propose SAST (Scalable and Accurate Subsequence Transform), an algorithm that is interpretable, accurate and faster than the current state-of-the-art shapelet algorithm. The experiments we conducted on the UCR archive datasets show that SAST is more accurate than the state-of-the-art Shapelet Transform algorithm on many datasets, while being significantly more scalable.
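The paper itself is not reproduced on this page, so the following is only a minimal, hypothetical sketch of a subsequence transform in the spirit the abstract describes: subsequences are taken from a few reference series (e.g., one per class), each time series is mapped to its vector of minimum distances to those subsequences, and any standard classifier can then be trained on the resulting features. The function names (`subsequence_transform`, `min_distance`) and the z-normalized Euclidean distance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def znorm(x, eps=1e-8):
    """Z-normalize a 1-D array (common practice for shapelet-style distances)."""
    return (x - x.mean()) / (x.std() + eps)

def min_distance(series, subsequence):
    """Smallest Euclidean distance between the z-normalized subsequence
    and every z-normalized window of the same length in `series`."""
    l = len(subsequence)
    q = znorm(subsequence)
    best = np.inf
    for start in range(len(series) - l + 1):
        window = znorm(series[start:start + l])
        best = min(best, np.linalg.norm(window - q))
    return best

def subsequence_transform(X, references, lengths):
    """Map each series in X to its vector of minimum distances to every
    subsequence (of the given lengths) extracted from the reference series."""
    candidates = [ref[s:s + l]
                  for ref in references
                  for l in lengths
                  for s in range(len(ref) - l + 1)]
    return np.array([[min_distance(x, c) for c in candidates] for x in X])

# Illustrative usage: pick one reference series per class, transform the
# dataset, then fit any off-the-shelf classifier on the resulting features.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))          # 10 toy series of length 50
references = [X[0], X[5]]              # hypothetical per-class references
features = subsequence_transform(X, references, lengths=[10, 20])
print(features.shape)                  # (10, number of candidate subsequences)
```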

Publication
In Conférence sur l’Apprentissage automatique (CAp)

Michael F. MBOUOPDA
Machine Learning Researcher

Currently, my main research interests are anomaly detection and explainable artificial intelligence.