Electrical Engineering and Systems Science > Systems and Control
[Submitted on 31 Dec 2019]
Title: Towards Improving the Performance of the RNN-based Inversion Model in Output Tracking Control
Abstract: Owing to its high modeling accuracy and large bandwidth, recurrent neural network (RNN)-based inversion model control has been proposed for output tracking. However, several issues still need to be addressed when using the RNN-based inversion model. First, with a limited number of parameters, the RNN cannot model the low-frequency dynamics accurately, so an extra linear model has been used, which can interfere with tracking control at high frequencies. Moreover, the control speed and the RNN modeling accuracy cannot be improved simultaneously, because the control sampling rate is restricted by the length of the RNN training set. This article therefore focuses on addressing these limitations of RNN-based inversion model control. Specifically, a novel modeling method is proposed to incorporate the linear model in a way that does not affect the high-frequency control performance already achieved by the RNN. Additionally, an interpolation method is proposed to double the control sampling frequency relative to the RNN training sampling frequency. The stability issues that may arise when the proposed model is used for predictive control are analyzed, together with guidelines for choosing the parameters that ensure closed-loop stability. Finally, the proposed approach is demonstrated on a commercial piezo actuator, and the experimental results show that the tracking performance can be significantly improved.
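The abstract does not detail the interpolation scheme itself; as a purely illustrative sketch (not the paper's algorithm), the general idea of running a controller at twice the original (RNN training) sampling rate by linearly interpolating between consecutive samples could look like the following. The function name `double_rate_linear` and the choice of simple linear interpolation are assumptions made only for illustration.

```python
import numpy as np

def double_rate_linear(samples):
    """Illustrative only: upsample a 1-D signal by a factor of two using
    linear interpolation, so a controller could run at twice the original
    (e.g., RNN training) sampling frequency. The final value is held once
    so the output length is exactly 2 * len(samples)."""
    samples = np.asarray(samples, dtype=float)
    # Midpoints between consecutive samples (linear interpolation).
    midpoints = 0.5 * (samples[:-1] + samples[1:])
    upsampled = np.empty(2 * samples.size, dtype=float)
    upsampled[0::2] = samples        # keep the original samples
    upsampled[1:-1:2] = midpoints    # insert interpolated values between them
    upsampled[-1] = samples[-1]      # hold the final value
    return upsampled

# Example: a signal sampled at 100 Hz mapped onto an effective 200 Hz grid.
t = np.arange(0, 1, 0.01)
y = np.sin(2 * np.pi * 5 * t)
y_fast = double_rate_linear(y)
assert y_fast.size == 2 * y.size
```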