Abstract
The outlier detection problem for dynamic systems is formulated as a matrix decomposition problem involving a low-rank matrix and a sparse matrix, and is further recast as a semidefinite programming (SDP) problem. A fast algorithm is presented that solves the resulting problem while preserving the structure of the solution matrix, greatly reducing the computational cost compared with the standard interior-point method. The computational burden is reduced further by constructing suitable subsets of the raw data without violating the low-rank property of the matrix involved. The proposed method detects outliers exactly when the output observations contain no or little noise. For significant noise, a novel approach based on under-sampling with averaging is developed to denoise the data while retaining the saliency of the outliers; the filtered data then enables successful outlier detection with the proposed method where existing filtering methods fail. Using the 'clean' data recovered by the proposed method gives much better parameter estimates than using the raw data.
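The abstract does not give implementation details, so the following is only a minimal sketch of the underlying low-rank-plus-sparse decomposition idea: a generic convex program (principal-component-pursuit style) solved with CVXPY, not the paper's fast structure-preserving SDP algorithm, its data-subset construction, or its under-sampling-with-averaging denoising. The synthetic data, the threshold, and the weight `lam` are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Synthetic example: a low-rank "clean" data matrix corrupted by a few sparse
# outliers. (Illustrative only; in the paper the low-rank matrix is built from
# system input/output data.)
rng = np.random.default_rng(0)
m, n, r = 30, 30, 2
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low rank
S_true = np.zeros((m, n))
idx = rng.choice(m * n, size=15, replace=False)                     # sparse outliers
S_true.flat[idx] = 10 * rng.standard_normal(15)
M = L_true + S_true                                                 # observed data

# Convex decomposition: minimize nuclear norm + weighted elementwise l1 norm,
# subject to exact agreement with the observations (noise-free case).
L = cp.Variable((m, n))
S = cp.Variable((m, n))
lam = 1.0 / np.sqrt(max(m, n))   # common generic weight; not taken from the paper
prob = cp.Problem(cp.Minimize(cp.norm(L, "nuc") + lam * cp.sum(cp.abs(S))),
                  [L + S == M])
prob.solve()

detected = np.flatnonzero(np.abs(S.value) > 1e-3)  # nonzero entries of S flag outliers
print("detected outlier positions match:", set(detected) == set(idx))
```

In this reading, the recovered `L` plays the role of the 'clean' data used for parameter estimation, while the support of `S` identifies the outliers; the paper's contribution lies in solving the SDP far more cheaply than a standard interior-point solver and in handling noisy observations.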
Original language | English |
---|---|
Article number | 7110589 |
Pages (from-to) | 1202-1216 |
Number of pages | 15 |
Journal | IEEE Transactions on Cybernetics |
Volume | 46 |
Issue number | 5 |
DOIs | |
Publication status | Published - May 2016 |
Externally published | Yes |
Keywords
- Denoising
- interior-point methods
- low-rank matrix
- matrix decomposition
- outlier detection
- semidefinite programming (SDP)
- sparsity
- system identification
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering