
Rolling bearing fault diagnosis based on quantum LS-SVM

Abstract

Rolling bearings are indispensable components of the contemporary industrial system, and their working condition affects the state of the entire system. Researching and improving rolling bearing fault diagnosis technology therefore has great engineering value. However, accurate fault diagnosis of complete mechanical equipment requires a large quantity of data, and classical machine learning algorithms process such big data inefficiently and demand enormous computing resources. To solve this problem, this paper combines the HHL algorithm from quantum computing with the LS-SVM algorithm from machine learning and proposes a fault diagnosis model based on a quantum least squares support vector machine (QSVM). Based on experiments run on a simulated quantum computer, we demonstrate that fault diagnosis based on QSVM is feasible, and that it can offer a far greater advantage than the classical algorithm in the context of big data.

1 Introduction

With the development of science and technology, the modern industrial system has entered a new era of integration, precision and intelligence. These characteristics not only integrate mechanical equipment organically into a whole but also raise the production efficiency of the modern industrial system. On the other hand, as operation time increases and equipment ages, mechanical malfunctions are inevitable. The failure of any part of an industrial production line may have a great impact on the entire industrial system, bringing serious economic losses to enterprises and factories and, in serious cases, even causing major safety accidents. Rolling bearings have always been essential key parts of mechanical equipment. According to some research, about 30% of mechanical failures in rotating machinery are caused by rolling bearings [1]. Therefore, it is very important to conduct research on the fault diagnosis of rolling bearings [2].

In recent years, with the improvement of machine learning theory, more and more researchers have applied these artificial intelligence algorithms to the fault diagnosis of rolling bearings and achieved good results [3–6]. However, it should also be noted that classical machine learning algorithms have gradually reached a bottleneck in computing power when dealing with high-dimensional and massive data. Finding algorithms that process big data more efficiently will be a focus of research on rolling bearing fault diagnosis [7].

The quantum is an important concept in modern physics: it is the smallest unit that cannot be divided, so quantum characteristics appear mainly in the microscopic world. Quantum mechanics was proposed to describe the physical laws of the microscopic world, and its predictions often contradict macroscopic experience and common sense, as with quantum superposition, quantum entanglement and quantum coherence. The science and technology developed from quantum mechanics is called quantum technology. After decades of development, quantum technology has made great progress and has gradually entered interdisciplinary applied research [8]. Quantum computing is one of the important branches of quantum technology and the most promising one that can be put into practice in the foreseeable future [9]. Compared with classical computing methods, quantum computing can achieve even exponential acceleration for specific problems; as soon as its theory was put forward, it attracted the close attention of many scholars [10]. This super-strong computing power makes quantum computing one of the methods most likely to break through the existing computing bottleneck. Therefore, using quantum computing to solve the rolling bearing fault diagnosis problem in the context of big data will be one of the development directions of the future.

The HHL algorithm is a quantum algorithm for solving linear equations proposed by Harrow, Hassidim and Lloyd in 2008 [11]. Compared with classical solution methods, the HHL algorithm can achieve exponential acceleration in theory, and its proposal drove the rapid development of quantum machine learning (QML) and promoted scholars' research on quantum machine learning algorithms. Later, Childs et al. improved the HHL algorithm by using Chebyshev polynomials to represent the operator, avoiding the phase estimation step of the original algorithm and enhancing its universality. Wiebe et al. first proposed a quantum linear regression algorithm based on the HHL algorithm in 2012. Solving the least squares support vector machine with the HHL algorithm can be divided into three steps: first, the classical data are represented by qubits and stored in a quantum random access memory; then the phase estimation algorithm is used to solve for the parameters of the least squares support vector machine, and the corresponding quantum states of the parameters are obtained and applied to classify the test samples; finally, a coherent term is used to measure the final quantum state, its expectation is obtained, and the category of the test sample is judged from this expectation value.

SVM is one of the most classical algorithms in traditional machine learning. Its basic principle is to completely separate two types of data with a hyperplane. Unlike black-box algorithms such as neural networks, SVM has a complete theoretical foundation and excellent generalization performance. In recent years, many scholars have applied it to the fault diagnosis of rolling bearings and achieved good results [12–15]. The solving process of the standard SVM does not involve linear equations, but its derivative algorithm, the least squares support vector machine (LS-SVM), does [16]. For small-scale linear equations the LS-SVM model can be constructed quickly, but as the scale of the equations grows the computation becomes very expensive and may even become infeasible. Therefore, this paper combines the HHL algorithm with the LS-SVM algorithm and proposes a fault diagnosis model based on a quantum least squares support vector machine (QSVM).

Since quantum hardware with enough coherence time to demonstrate the proposed QSVM algorithm is not available at present, we verify the feasibility of QSVM by simulating it on a classical computer, which takes much more time and computing resources than the classical LS-SVM. Nevertheless, our research provides theoretical guidance and empirical results that will support further theoretical work and can better guide the development of practical applications in the near future.

In addition, the contributions of this paper are as follows:

(1) We combine the HHL algorithm with the LS-SVM algorithm to propose a fault diagnosis model based on the quantum least squares support vector machine (QSVM), which has great engineering application value.

(2) We use QSVM to perform three-class fault diagnosis on small-scale data (classical computer simulation of quantum computing is very resource-intensive), achieve 100% fault diagnosis accuracy, and show that the fault diagnosis model based on QSVM is feasible.

2 Theory of the least squares support vector machine

Suppose that the training set contains p samples, denoted as

$$\begin{aligned} \{ x_{i},y_{i} \}_{i = 1}^{p},\quad y_{i} \in \{ - 1,1\}. \end{aligned}$$
(1)

In Eq. (1), \(x_{i} \in R^{q}\) is the q-dimensional input vector, and \(y_{i}\) is the sample label, which takes the values 1 and −1. SVM tries to find an optimal hyperplane that completely separates the two types of data. The optimal partition is the one for which the sample points closest to the hyperplane lie as far away from it as possible; these points, which determine the hyperplane, are called support vectors. For each training point, its geometric distance to the hyperplane is

$$\begin{aligned} d_{i} = y_{i} \times \frac{1}{ \Vert W \Vert } ( W \times x_{i} + b ). \end{aligned}$$
(2)

Where \(d_{i}\) is the distance from the i-th training point to the hyperplane, W and b are the parameters of the hyperplane. According to the theory of SVM, we need to find the training point closest to the hyperplane.

$$\begin{aligned} d_{\min} = \min_{i = 1,\ldots,p}d_{i}. \end{aligned}$$
(3)

According to Eqs. (2) and (3), the optimization problem of SVM is transformed into

$$\begin{aligned} &\max_{W,b}\frac{d_{\min}}{ \Vert W \Vert } \\ &\quad\text{s.t. }y_{i}(W \times x_{i} + b) \ge d_{\min},i = 1,2,\ldots,p. \end{aligned}$$
(4)

To facilitate the solution, Eq. (4) can be rewritten as

$$\begin{aligned} &\min_{W,b}\frac{1}{2} \Vert W \Vert ^{2} \\ &\quad\text{s.t. }y_{i}(W \times x_{i} + b) \ge 1,\quad i = 1,2, \ldots,p. \end{aligned}$$
(5)

LS-SVM transforms the inequality constraint of Eq. (5) into an equality constraint

$$\begin{aligned} &\min_{W,b}\frac{1}{2} \Vert W \Vert ^{2} + \frac{\lambda}{2}\sum_{i = 1}^{p} e_{i}^{2} \\ &\quad\text{s.t. }y_{i}(W \times x_{i} + b) = 1 - e_{i},\quad i = 1,2,\ldots,p. \end{aligned}$$
(6)

Where \(e_{i}\) is the slack variable and λ is the regularization parameter. For nonlinear classification problems, the training sample \(x_{i}\) can be mapped from the original space to a higher-dimensional feature space by a kernel function.

Construct the Lagrange function of Eq. (6)

$$\begin{aligned} L(W,b,e,\alpha ) = \frac{1}{2} \Vert W \Vert ^{2} + \frac{\lambda}{ 2}\sum_{i = 1}^{p} e_{i}^{2} - \sum_{i = 1}^{p} \alpha _{i} \bigl[ y_{i}(W \times x_{i} + b) - 1 + e_{i} \bigr] . \end{aligned}$$
(7)

Where \(\alpha _{i}\) is the Lagrange multiplier corresponding to sample \(x_{i}\). Taking the partial derivative of the above expression with respect to each variable, setting it to zero and eliminating W and \(e_{i}\), we obtain

$$\begin{aligned} \begin{aligned} & \begin{bmatrix} 0 & 1^{T} \\ 1 & K + \lambda ^{ - 1}I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \\ &\alpha = [ \alpha _{1},\alpha _{2},\ldots,\alpha _{p} ]. \end{aligned} \end{aligned}$$
(8)

Where K is the kernel matrix of order p, and the values of α and b can be obtained by solving the linear equation.

LS-SVM needs to use all the training data, so its time complexity is polynomial in the sample number p and the feature number q, denoted \(O(\mathrm{Poly}(pq))\). When p and q are large, the computational cost is extremely high. So, we use the HHL algorithm to replace the classical method of solving the linear system.
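For reference, the classical training step that HHL is meant to replace can be sketched in a few lines of numpy; the RBF kernel choice, function names and default parameters below are our illustrative assumptions, not fixed by the paper.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-gamma * d2)

def train_ls_svm(X, y, lam=1.0, gamma=1.0):
    """Classically solve the LS-SVM linear system of Eq. (8).

    X: (p, q) training inputs, y: (p,) labels in {-1, +1}.
    Returns the bias b and the Lagrange multipliers alpha.
    """
    p = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((p + 1, p + 1))        # [[0, 1^T], [1, K + I/lambda]], as in Eq. (8)
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(p) / lam
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)       # cubic in p on a classical computer
    return sol[0], sol[1:]              # b, alpha
```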

3 HHL algorithm

3.1 The solution form of HHL algorithm

The HHL algorithm is a quantum method for solving linear equations and is the key that allows the quantum support vector machine to solve the LS-SVM linear system quickly. Firstly, the HHL algorithm describes the system of linear equations in quantum notation: assuming that A is a Hermitian operator on an N-dimensional state space and \(|b\rangle \) is a state vector of this space, solving the system of linear equations can be expressed as finding the \(|x\rangle \) that satisfies \(A|x\rangle = |b\rangle \).

The two core steps of the algorithm are sparse Hamiltonian simulation and phase estimation. When the data matrix is a sparse Hermitian matrix with a small condition number, the time complexity of the HHL algorithm for solving linear equations is \(O(\log N)\), compared with \(O(N)\) for the best known classical algorithm, so exponential acceleration is achieved. The HHL algorithm has promoted research on quantum machine learning algorithms, especially for problems that can be solved by algebraic operations on a data matrix.

In the HHL algorithm, the solution form of linear equations is expressed as

$$\begin{aligned} A| x \rangle = | b \rangle . \end{aligned}$$
(9)

Where A is an N-order Hermitian matrix, and \(|x\rangle \) and \(|b\rangle \) are column vectors in the Hilbert space.

The Hermitian matrix can be decomposed as:

$$\begin{aligned} A = \sum_{i = 0}^{N - 1} \mu _{i} \vert u_{i} \rangle \langle u_{i} \vert . \end{aligned}$$
(10)

Where \(\mu _{i}\) are the eigenvalues of A and \(|u_{i}\rangle \) is the eigenvector corresponding to \(\mu _{i}\).

Assume \(|b\rangle = [b_{0},b_{1},\dots, b_{N-1}]^{\mathrm{T}}\); HHL encodes it as

$$\begin{aligned} | b \rangle = \sum_{i = 0}^{N - 1} b_{i} | i \rangle,\qquad \sum_{i = 0}^{N - 1} b_{i}^{2} = 1 . \end{aligned}$$
(11)

Taking the \(|u_{i}\rangle \) as basis vectors, \(|b\rangle \) can also be expanded as

$$\begin{aligned} | b \rangle = \sum_{i = 0}^{N - 1} \beta _{i}| u_{i} \rangle. \end{aligned}$$
(12)

According to Eqs. (10) and (12)

$$\begin{aligned} | x \rangle = A^{ - 1}| b \rangle = \sum_{i = 0}^{N - 1} \mu _{i}^{ - 1}\beta _{i}| u_{i} \rangle. \end{aligned}$$
(13)

Where \(|x\rangle \) is the target to be solved for by HHL. The HHL algorithm relies on quantum phase estimation (QPE) and the quantum Fourier transform (QFT).
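Before introducing the quantum subroutines, the solution form of Eq. (13) can be checked numerically on a small example: decompose a Hermitian A into eigenpairs and rebuild \(|x\rangle \) from \(\mu _{i}^{-1}\beta _{i}\). A minimal numpy sketch follows; the 2×2 matrix and right-hand side are illustrative choices, not bearing data.

```python
import numpy as np

# A small Hermitian matrix and a normalized |b>, chosen only for illustration
A = np.array([[1.0, -1.0 / 3.0],
              [-1.0 / 3.0, 1.0]])
b = np.array([1.0, 0.0])
b = b / np.linalg.norm(b)        # |b> must be a unit vector, Eq. (11)

mu, U = np.linalg.eigh(A)        # eigenvalues mu_i and eigenvectors u_i, Eq. (10)
beta = U.T @ b                   # coefficients of |b> in the eigenbasis, Eq. (12)
x = U @ (beta / mu)              # |x> = sum_i mu_i^{-1} beta_i |u_i>, Eq. (13)

print(np.allclose(A @ x, b))     # True: A|x> = |b>
```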

3.2 Quantum Fourier transform

Similar to the classical Fourier transform, the QFT converts one quantum state into another; its quantum circuit is shown in Fig. 1.

Figure 1 Quantum Fourier Transform circuit

In Fig. 1, \(|x_{1}\rangle \) to \(|x_{n}\rangle \) are the basis vectors and they satisfy the conditions

$$\begin{aligned} | x \rangle = | x_{1}x_{2},\ldots,x_{n} \rangle,\quad x = 2^{n - 1}x_{1} + 2^{n - 2}x_{2} + \cdots + 2^{0}x_{n}. \end{aligned}$$
(14)

H stands for the Hadamard gate, which can be expressed as

$$\begin{aligned} H = \frac{\sqrt{2}}{2} \begin{bmatrix} 1 & 1 \\ 1 & - 1 \end{bmatrix}. \end{aligned}$$
(15)

After applying the H gate to the first qubit \(|x_{1}\rangle \), the state \(|\Psi _{1}\rangle \) is expressed as

$$\begin{aligned} | \Psi _{1} \rangle = \frac{1}{\sqrt{2}} \bigl[ \vert 0 \rangle + e^{\frac{2\pi i}{2}x_{1}} \vert 1 \rangle \bigr] \otimes | x_{2}x_{3},\ldots,x_{n} \rangle. \end{aligned}$$
(16)

\(R_{k}\) stands for controlled rotation gate

$$\begin{aligned} R_{k} = \begin{bmatrix} 1 & 0 \\ 0 & e^{\frac{2\pi i}{2^{k}}} \end{bmatrix} . \end{aligned}$$
(17)

After applying the controlled \(R_{2}\) gate, \(|\Psi _{2}\rangle \) is expressed as

$$\begin{aligned} | \Psi _{2} \rangle = \frac{1}{\sqrt{2}} \bigl[ \vert 0 \rangle + e^{\frac{2\pi i}{2}x_{1} + \frac{2\pi i}{2^{2}}x_{2}} \vert 1 \rangle \bigr] \otimes | x_{2}x_{3}, \ldots,x_{n} \rangle. \end{aligned}$$
(18)

Similarly, after the application of \(R_{3}\) to \(R_{n}\), we can get

$$\begin{aligned} | \Psi _{3} \rangle = \frac{1}{\sqrt{2}} \bigl[ \vert 0 \rangle + e^{\frac{2\pi i}{2}x_{1} + \frac{2\pi i}{2^{2}}x_{2} +\cdots + \frac{2\pi i}{2^{n}}x_{n}} \vert 1 \rangle \bigr] \otimes | x_{2}x_{3},\ldots,x_{n} \rangle . \end{aligned}$$
(19)

According to Eq. (14), \(|\Psi _{3}\rangle \) can be rewritten as

$$\begin{aligned} | \Psi _{3} \rangle = \frac{1}{\sqrt{2}} \bigl[ \vert 0 \rangle + e^{\frac{2\pi i}{2^{n}}x} \vert 1 \rangle \bigr] \otimes | x_{2}x_{3},\ldots,x_{n} \rangle. \end{aligned}$$
(20)

Repeating the above steps on the remaining qubits, we get

$$\begin{aligned} | \Psi _{4} \rangle = \frac{1}{\sqrt{2}} \bigl[ \vert 0 \rangle + e^{\frac{2\pi i}{2^{n}}x} \vert 1 \rangle \bigr] \otimes \frac{1}{\sqrt{2}} \bigl[ \vert 0 \rangle + e^{\frac{2\pi i}{2^{n - 1}}x} \vert 1 \rangle \bigr] \otimes\cdots \otimes \frac{1}{\sqrt{2}} \bigl[ \vert 0 \rangle + e^{\frac{2\pi i}{2^{1}}x} \vert 1 \rangle \bigr]. \end{aligned}$$
(21)

\(|\Psi _{4}\rangle \) can be rewritten

$$\begin{aligned} | \Psi _{4} \rangle = \frac{1}{\sqrt{2^{n}}} \sum _{k = 0}^{2^{n} - 1} e^{\frac{2\pi ik}{2^{n}}x}| k \rangle . \end{aligned}$$
(22)

Eq. (22) shows that the original quantum state \(|x\rangle \) has been transformed into a superposition over the basis states \(|k\rangle \), which completes the QFT. From the circuit, the total number of quantum gates used by the QFT is \(n +(n-1) +\cdots+1=n(n+1)/2\), so its computational complexity is \(O(n^{2})\). In the classical case, the computational complexity of the fast Fourier transform is \(O(n2^{n})\).
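For illustration, the circuit of Fig. 1 can be assembled from Hadamard and controlled-phase gates in Qiskit, the framework used in Sect. 4. This is a sketch assuming a Qiskit version that provides QuantumCircuit.cp and swap; the helper name is ours.

```python
from math import pi
from qiskit import QuantumCircuit

def qft_circuit(n):
    """Assemble the n-qubit QFT of Fig. 1 from H and controlled-R_k gates."""
    qc = QuantumCircuit(n, name="QFT")
    for j in range(n):
        qc.h(j)                                   # Hadamard on qubit j
        for k in range(2, n - j + 1):
            # controlled R_k: phase 2*pi/2^k, controlled by qubit j+k-1, Eq. (17)
            qc.cp(2 * pi / 2**k, j + k - 1, j)
    for j in range(n // 2):                       # reverse the qubit order at the end
        qc.swap(j, n - 1 - j)
    return qc

# n(n+1)/2 Hadamard and controlled-phase gates, as counted below Eq. (22)
print(qft_circuit(3).draw())
```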

3.3 Quantum phase estimation

In QPE, the eigenvalue equation is expressed as

$$\begin{aligned} A| u \rangle = e^{2\pi i\theta} | u \rangle. \end{aligned}$$
(23)

Where \(e^{2\pi i\theta}\) is the eigenvalue, and the function of QPE is to estimate θ. The quantum circuit of QPE is shown in Fig. 2.

Figure 2 Quantum Phase Estimation circuit

The first register of QPE contains t qubits, all initialized to \(|0\rangle \). The second register contains the eigenvector \(|u\rangle \) of matrix A. From Fig. 2, we can get

$$\begin{aligned} | \Omega _{1} \rangle = \bigl( H^{ \otimes t} \vert 0 \rangle ^{ \otimes t} \bigr) \otimes \vert u \rangle = \frac{1}{\sqrt{2^{t}}} \bigl( \vert 0 \rangle + \vert 1 \rangle \bigr)^{ \otimes t} \otimes | u \rangle. \end{aligned}$$
(24)

U is the controlled unitary, and its j-th power can be expressed as

$$\begin{aligned} U^{j} = \begin{bmatrix} 1 & 0 \\ 0 & A^{j} \end{bmatrix}. \end{aligned}$$
(25)

After applying U-gate, we can get

$$\begin{aligned} | \Omega _{2} \rangle = \frac{1}{\sqrt{2^{t}}} \bigl( \vert 0 \rangle + e^{2\pi i2^{t - 1}\theta} \vert 1 \rangle \bigr) \otimes\cdots \otimes \bigl( \vert 0 \rangle + e^{2\pi i2^{0}\theta} \vert 1 \rangle \bigr) \otimes | u \rangle. \end{aligned}$$
(26)

Eq. (26) can be rewritten

$$\begin{aligned} | \Omega _{2} \rangle = \frac{1}{\sqrt{2^{t}}} \sum _{k = 0}^{2^{t} - 1} e^{2\pi ik\theta} \vert k \rangle \otimes \vert u \rangle . \end{aligned}$$
(27)

Then the inverse QFT is applied

$$\begin{aligned} | \Omega _{3} \rangle = QFT^{ - 1} | \Omega _{2} \rangle = \frac{1}{2^{t}} \sum_{x,y = 0}^{2^{t} - 1} e^{\frac{ - 2\pi ixy}{2^{t}}} e^{2\pi ix\theta} | y \rangle \otimes \vert u \rangle. \end{aligned}$$
(28)

From Eq. (28), it can be seen that the probability amplitude corresponding to \(|2^{t}\theta \rangle \) is the largest; that is, after measuring \(|\Omega _{3}\rangle \), the quantum state is most likely to collapse to \(|2^{t}\theta \rangle \). Equivalently, if we measure \(|\Omega _{3}\rangle \) enough times, the outcome \(|2^{t}\theta \rangle \) occurs most frequently. Since \(2^{t}\theta /2^{t}=\theta \), we obtain θ. Note that t is the number of qubits in register 1; the larger t is, the more accurately θ is estimated. Considering that too many qubits would require a lot of computing resources on classical computers, only three qubits are used in this paper (\(t=3\)).
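A minimal QPE sketch with t = 3 counting qubits, in the spirit of Fig. 2, where the unitary is a single-qubit phase gate with known phase θ = 1/8 (our illustrative choice, not the bearing data matrix). It assumes a Qiskit version that provides qiskit.circuit.library.QFT and qiskit.quantum_info.Statevector.

```python
from math import pi
from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT
from qiskit.quantum_info import Statevector

t, theta = 3, 1 / 8                      # 3 counting qubits, true phase theta = 1/8

qc = QuantumCircuit(t + 1)
qc.x(t)                                  # prepare |u> = |1>, eigenvector of the phase gate
qc.h(range(t))                           # Hadamards on the counting register, Eq. (24)
for j in range(t):
    # controlled-U^(2^j): counting qubit j picks up phase 2*pi*theta*2^j, Eq. (26)
    qc.cp(2 * pi * theta * 2**j, j, t)
qc.append(QFT(t, inverse=True).to_gate(), range(t))   # inverse QFT, Eq. (28)

# The counting register should collapse to |2^t * theta> = |1>, i.e. bitstring '001'
probs = Statevector(qc).probabilities_dict(range(t))
print(max(probs, key=probs.get))          # expected output: '001'
```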

3.4 HHL algorithm

Taking Eq. (9) as an example, the HHL quantum circuit is shown in Fig. 3.

Figure 3 HHL quantum circuit

In Fig. 3, \(|y\rangle \) is the auxiliary qubit, and \(|x_{0}\rangle \) to \(|x_{m-1}\rangle \) are the qubits that store the eigenvalues. All of these qubits are initialized to \(|0\rangle \).

$$\begin{aligned} | \Phi _{1} \rangle = \vert yx_{0},\ldots,x_{m - 1} \rangle \otimes \vert b \rangle = \vert 00\cdots 0 \rangle \otimes \vert b \rangle . \end{aligned}$$
(29)

QPE is applied to \(|x_{0}\rangle \) through \(|x_{m-1}\rangle \) and \(|b\rangle \), where \(|b\rangle \) is expanded as in Eq. (12), giving

$$\begin{aligned} | \Phi _{2} \rangle = \vert y \rangle \otimes \sum _{i = 0}^{m - 1} \beta _{i} \vert \mu _{i} \rangle | u_{i} \rangle. \end{aligned}$$
(30)

A SWAP operation is then applied to \(|x_{0}\rangle \) through \(|x_{m-1}\rangle \); its function here is to compute the reciprocals of the eigenvalues

$$\begin{aligned} | \Phi _{3} \rangle = \vert y \rangle \otimes \sum _{i = 0}^{m - 1} \beta _{i}\bigl\vert \mu _{i}^{ - 1} \bigr\rangle | u_{i} \rangle. \end{aligned}$$
(31)

A controlled rotation is then applied; its function is to transfer the eigenvalue information from \(|\mu _{i}\rangle \) into the probability amplitudes of the auxiliary qubit.

$$\begin{aligned} | \Phi _{4} \rangle = \sum_{i = 0}^{m - 1} \biggl\{ \biggl[ \sqrt{1 - \frac{C^{2}}{\mu _{i}^{2}}} \vert 0 \rangle + \biggl( \frac{C}{\mu _{i}} \biggr) \vert 1 \rangle \biggr] \otimes \beta _{i} \bigl\vert \mu _{i}^{ - 1}\bigr\rangle \vert u_{i} \rangle \biggr\} . \end{aligned}$$
(32)

Where C is a constant and satisfies \(C \leq \min | \mu _{i} |\).

Applying QPE\(^{-1}\), whose function is to disentangle \(|x_{0}, x_{1},\ldots,x_{m-1}\rangle \) from \(|\Phi _{4}\rangle \), we get

$$\begin{aligned} | \Phi _{5} \rangle = \sum_{i = 0}^{m - 1} \biggl\{ \biggl[ \sqrt{1 - \frac{C^{2}}{\mu _{i}^{2}}} \vert 0 \rangle + \biggl( \frac{C}{\mu _{i}} \biggr) \vert 1 \rangle \biggr] \otimes \beta _{i} \vert 00\cdots 0 \rangle \vert u_{i} \rangle \biggr\} . \end{aligned}$$
(33)

The ancilla qubit is measured; if the result is 0, the computation is repeated until the result is 1. Finally, we get

$$\begin{aligned} | \Phi _{6} \rangle = \frac{C}{\sqrt{\sum_{i = 0}^{m - 1} \frac{C^{2}\beta _{i}^{2}}{\mu _{i}^{2}}}} \sum _{i = 0}^{m - 1} \mu _{i}^{ - 1}\beta _{i}| u_{i} \rangle . \end{aligned}$$
(34)

We can observe that \(|\Phi _{6}\rangle \) is proportional to \(|x\rangle \) in Eq. (13). Thus, the solution of linear equations is completed.

The mathematical derivation of the HHL algorithm is extremely complicated, so simulating the HHL algorithm on a classical computer is very difficult. A quantum computer, however, does not carry out these complex mathematical operations explicitly; it only rotates qubits in Hilbert space. We can show the superiority of the HHL algorithm by analyzing its time complexity:

The time complexity of the HHL algorithm is \(O(\log(N)s^{2}\kappa ^{2}/\varepsilon )\), where N is the order of the matrix, κ is the condition number of the linear system, s is the sparsity of the matrix, and ε is the precision of the solution. Compared with classical algorithms, HHL can theoretically achieve exponential acceleration, thus greatly improving the efficiency of LS-SVM when dealing with a huge quantity of data.

Finally, the HHL algorithm introduces errors, and their main source is the eigenvalues estimated in QPE. As mentioned in Sect. 3.3, the accuracy of the eigenvalues depends on the number of qubits, but increasing the number of qubits also increases the time complexity of the HHL algorithm; how to balance accuracy and time complexity is an area for further research on QSVM in fault diagnosis.

4 Rolling bearing fault diagnosis experiment

4.1 Data source

The experimental data selected in this paper come from the XJTU-SY Bearing Datasets [17]. The data include the outer race fault, inner race fault, cage fault and normal states of rolling bearings. A detailed introduction is given in Table 1.

Table 1 Data introduction

The computer used in the experiments is configured with an i5-9300H CPU clocked at 2.4 GHz and 16 GB of memory. The programming language is Python, the quantum programming framework is Qiskit, and the quantum simulator is the statevector simulator.

4.2 Data preprocessing

Rolling bearing fault diagnosis generally consists of two steps: feature extraction and fault identification. An appropriate and effective feature extraction method can effectively improve the accuracy of fault diagnosis.

In general, we reconstruct the original data into a signal matrix. The horizontal vibration data of the Bearing1_1 and Bearing2_1 datasets were selected in this paper (see Figs. 4 and 5).

Figure 4 Horizontal vibration data of Bearing1_1 dataset

Figure 5 Horizontal vibration data of Bearing2_1 dataset

The rotating frequency of the Bearing1_1 dataset is 35 Hz and the total sampling time is 123 minutes. The dataset contains 4 million sample points in total, and the outer race fault occurs at about 4896 seconds. (We use the Pauta criterion to determine the point in time at which the fault occurred [18].)

The rotating frequency of the Bearing2_1 dataset is 37.5 Hz and the total sampling time is 491 minutes. The dataset contains 16.08 million sample points in total, and the inner race fault occurs at about 28,038 seconds.

Then we reconstructed the above two datasets into two signal matrices. The Bearing1_1 matrix contains a total of 1953 samples, each composed of 2048 consecutively sampled points; normal samples are labeled H1 and outer race fault samples are labeled H2. In the same way, the Bearing2_1 matrix contains a total of 7851 samples, each composed of 2048 consecutively sampled points; normal samples are labeled H1 and inner race fault samples are labeled H3 (see Figs. 6 and 7).
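A sketch of this segmentation step; the split point between the normal and faulty portions below is purely illustrative (in the paper it is determined by the Pauta criterion), and the synthetic signal merely stands in for the real vibration channel.

```python
import numpy as np

def segment_signal(signal, label, length=2048):
    """Cut a 1-D vibration signal into non-overlapping samples of `length` points."""
    signal = np.asarray(signal, dtype=float)
    n = signal.size // length
    samples = signal[:n * length].reshape(n, length)
    return samples, np.full(n, label)

# Synthetic stand-in for the Bearing1_1 horizontal vibration channel
sig = np.random.randn(4_000_000)
boundary = 2_400_000                               # illustrative split point only
X_h1, y_h1 = segment_signal(sig[:boundary], "H1")  # normal samples
X_h2, y_h2 = segment_signal(sig[boundary:], "H2")  # outer race fault samples
print(X_h1.shape, X_h2.shape)                      # (n_samples, 2048) each
```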

Figure 6 The normal sample and outer race fault sample in the Bearing1_1 dataset

Figure 7 The normal sample and inner race fault sample in the Bearing2_1 dataset

After the sample division is completed, the next step is to extract data features. Commonly used feature extraction methods include time-domain, frequency-domain and time-frequency-domain methods. Considering that the rolling bearing data used in this paper were collected in a laboratory with little external interference, no complex feature extraction process is required, and only the time-domain features of the data are extracted.

Time-domain feature extraction refers to the calculation of various time-domain statistical parameters from the original vibration signal. Commonly used time-domain statistical parameters include the root mean square, crest factor, kurtosis and waveform factor. These parameters change with the running state of the rolling bearing, so analyzing them reflects the bearing's running state to a considerable extent.

Kurtosis is one of the most widely used statistical parameters in rolling bearing fault diagnosis. When a rolling bearing runs normally, the amplitude of its vibration signal approximately follows a Gaussian distribution and the kurtosis is approximately 3. When a fault occurs, the Gaussian distribution curve becomes skewed and, correspondingly, the kurtosis increases. The mathematical formula of kurtosis is

$$\begin{aligned} X_{\mathrm{kurt}} = \frac{n\sum_{i = 1}^{n} ( X_{i} - X_{\mathrm{mean}} )^{4}}{ [ \sum_{i = 1}^{n} ( X_{i} - X_{\mathrm{mean}} )^{2} ]^{2}}. \end{aligned}$$
(35)

In Eq. (35), \(X_{i}\) represents the sample, \(X_{\mathrm{mean}}\) represents the average value of the sample in a certain period, and n represents the total number of samples.

The crest factor is sensitive to faults involving surface damage and wear; its mathematical formula is

$$\begin{aligned} X_{\mathrm{crest}} = \frac{\max ( X_{1},X_{2}, \ldots,X_{n} )}{X_{\mathrm{rms}}}. \end{aligned}$$
(36)

In Eq. (36), \(X_{\mathrm{rms}}\) represents the root mean square value of the sample in a certain period of time.

Specifically, we calculate the kurtosis and crest factor of each sample and take them as the input features of QSVM. At the same time, imitating quantum computing on a classical computer consumes massive computing resources and time. Therefore, a total of 20 samples were selected from the normal, outer race fault and inner race fault classes; 70% (14 samples) were used as training data and the remaining 30% (6 samples) as test data. According to the mathematical derivation in Sect. 2, the linear system is constructed and solved by the HHL algorithm.
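A small sketch of the time-domain feature extraction of Eqs. (35) and (36), applied to each 2048-point sample; the function name is ours.

```python
import numpy as np

def time_domain_features(sample):
    """Return the (kurtosis, crest factor) pair of one vibration sample, Eqs. (35)-(36)."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    centred = x - x.mean()
    kurtosis = n * np.sum(centred ** 4) / np.sum(centred ** 2) ** 2      # Eq. (35)
    rms = np.sqrt(np.mean(x ** 2))
    crest_factor = np.max(x) / rms                                       # Eq. (36)
    return kurtosis, crest_factor

# One (kurtosis, crest factor) pair per sample forms the QSVM input feature matrix:
# features = np.array([time_domain_features(s) for s in samples])
```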

The traditional QSVM can only perform binary classification. Taking Eq. (8) as an example, after solving for α and b, the QSVM classification can be expressed as

$$\begin{aligned} f(x) = \textstyle\begin{cases} - 1, &\text{if }(k^{T} \times \alpha + b) < 0, \\ 1, &\text{if }(k^{T} \times \alpha + b) > 0. \end{cases}\displaystyle \end{aligned}$$
(37)

Where k represents the vector of kernel values between the test sample and the training samples.
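Once α and b have been read out from the HHL solution, the decision of Eq. (37) is a purely classical step; a sketch, again with an illustrative RBF kernel of our choosing.

```python
import numpy as np

def qsvm_predict(x_test, X_train, alpha, b, gamma=1.0):
    """Binary decision of Eq. (37) for one test sample, given alpha and b."""
    # k: kernel values between the test sample and every training sample (RBF kernel)
    k = np.exp(-gamma * np.sum((X_train - x_test) ** 2, axis=1))
    return 1 if k @ alpha + b > 0 else -1
```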

To classify the three fault types, we use the training data to train three QSVM classifiers: H1 vs H2 (1 and −1), H1 vs H3 (1 and −1), and H2 vs H3 (1 and −1). The test data are then input into the three classifiers, and each test sample is assigned a label according to the results, as in the voting sketch below. For example, if the outputs of the three QSVM classifiers are "1", "1" and "−1" respectively, the test sample is labeled H1, and so on.
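A sketch of this one-vs-one voting scheme; each clf_* is assumed to wrap a trained binary classifier such as qsvm_predict above, and ties (which cannot occur with consistent classifiers here) would be broken arbitrarily.

```python
def classify_three_way(x_test, clf_12, clf_13, clf_23):
    """One-vs-one voting over the pairwise classifiers H1/H2, H1/H3 and H2/H3.

    Each clf_* returns +1 for the first class of its pair and -1 for the second.
    """
    votes = {"H1": 0, "H2": 0, "H3": 0}
    votes["H1" if clf_12(x_test) > 0 else "H2"] += 1
    votes["H1" if clf_13(x_test) > 0 else "H3"] += 1
    votes["H2" if clf_23(x_test) > 0 else "H3"] += 1
    return max(votes, key=votes.get)     # e.g. outputs (+1, +1, -1) give "H1"
```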

The decision boundary obtained by QSVM from the training data is shown in Fig. 8.

Figure 8 Decision boundary divided by QSVM

Then the 6 test samples were input into the trained QSVM, and the results were compared with the classical LS-SVM (see Figs. 9 and 10).

Figure 9 QSVM classification results

Figure 10 Classical LS-SVM classification results

It can be observed from Figs. 9 and 10 that there is a certain deviation between the decision boundaries of QSVM and LS-SVM; the reasons were explained in Sect. 3. The overall deviation is small, however, so QSVM also achieves 100% fault diagnosis accuracy. This shows that the fault diagnosis model based on QSVM is feasible and can offer a far greater advantage than the classical algorithm in the context of big data.

5 Conclusion

To solve the problem of fault diagnosis in the context of big data, this paper proposes a fault diagnosis model based on QSVM. Compared with traditional algorithms, QSVM can theoretically achieve exponential acceleration and handle high-dimensional data that traditional algorithms cannot process. With the rapid development of quantum hardware, QSVM and other quantum machine learning algorithms will soon be able to run on quantum computers to demonstrate their quantum advantage. Fault diagnosis algorithms based on quantum machine learning will also be among the best choices in the context of big data and will have a profound impact on the field of fault diagnosis.

Availability of data and materials

The datasets analyzed during the current study are available at https://engineering.case.edu/bearingdatacenter and http://biaowang.tech/xjtu-sy-bearing-datasets.

References

  1. Li W, Chen J, Li J, et al. Derivative and enhanced discrete analytic wavelet algorithm for rolling bearing fault diagnosis. Microprocess Microsyst. 2021;82:103872.

  2. Cui H, Guan Y, Chen H. Rolling element fault diagnosis based on VMD and sensitivity MCKD. IEEE Access. 2021;9:120297–308.

  3. Mao W, Feng W, Liu Y, et al. A new deep auto-encoder method with fusing discriminant information for bearing fault diagnosis. Mech Syst Signal Process. 2021;150(12):107233.

  4. Viet T, Jaeyoung K, Ali KS, et al. Bearing fault diagnosis under variable speed using convolutional neural networks and the stochastic diagonal Levenberg–Marquardt algorithm. IEEE Sens J. 2017;17(12):1–16.

  5. Wan S, Zhang X. Bearing fault diagnosis based on teager energy entropy and mean-shift fuzzy C-means. Struct Health Monit. 2020;19(14):147592172091071.

  6. Wan L, Gong K, Zhang G, et al. An efficient rolling bearing fault diagnosis method based on spark and improved random forest algorithm. IEEE Access. 2021;9:37866–82.

  7. Wan L, Li H, Chen Y, et al. Rolling bearing fault prediction method based on QPSO-BP neural network and Dempster–Shafer evidence theory. Energies. 2020;13(5):1094.

  8. Acín A, Bloch I, Buhrman H, et al. The European quantum technologies roadmap. 2017.

  9. Liu J, Yuan H, Lu X, et al. Quantum Fisher information matrix and multiparameter estimation. J Phys A, Math Theor. 2019;53(2):023001.

  10. Park J, Quanz B, Wood S, et al. Practical application improvement to quantum SVM: theory to practice. 2020.

  11. Harrow AW, Hassidim A, Lloyd S. Quantum algorithm for solving linear systems of equations. Phys Rev Lett. 2009;103(15):150502.

  12. Chen F, Cheng M, Tang B, et al. Pattern recognition of a sensitive feature set based on the orthogonal neighborhood preserving embedding and adaboost_SVM algorithm for rolling bearing early fault diagnosis. Meas Sci Technol. 2020;31:105007.

  13. Li R, Ran C, Zhang B, et al. Rolling bearings fault diagnosis based on improved complete ensemble empirical mode decomposition with adaptive noise, nonlinear entropy, and ensemble SVM. Appl Sci. 2020;10(16):5542.

  14. Cui M, Wang Y, Lin X, et al. Fault diagnosis of rolling bearings based on an improved stack autoencoder and support vector machine. IEEE Sens J. 2021;21(4):4927–37.

  15. Zhang J, Zhang J, Zhong M, et al. A Goa-MSVM based strategy to achieve high fault identification accuracy for rotating machinery under different load conditions. Measurement. 2020;163:108067.

  16. Wei J, Huang H, Yao L, et al. New imbalanced fault diagnosis framework based on cluster-MWMOTE and MFO-optimized LS-SVM using limited and complex bearing data. Eng Appl Artif Intell. 2020;96:103966.

  17. Wang B, Lei Y, Li N, et al. A hybrid prognostics approach for estimating remaining useful life of rolling element bearings. IEEE Trans Reliab. 2018;69:401–12.

  18. Duan A, Guo L, Gao H, et al. Deep focus parallel convolutional neural network for imbalanced classification of machinery fault diagnostics. IEEE Trans Instrum Meas. 2020;69(11):8680–9.


Acknowledgements

We thank Shanghai University of Engineering Science, Kunfeng Quantum Technology Co., Ltd and Yiwei quantum Technology Co., Ltd for assisting our work.

Funding

This article is supported by the National Key R&D Program of China (Grant No. 2020AAA0109300) of Shanghai University of Engineering Science and Pudong New Area Science and Technology Development Fund (Grant No. PKX2020-R17) of Shanghai University of Engineering Science.

Author information


Contributions

Conceptualization, YL and ZF; methodology, YL and HX; software, LS and QS; validation, QS and XL; formal analysis, WY; investigation, ZF; resources, YL; data curation, LS and QS; writing, original draft preparation, LS and QS; writing, review and editing, WY and XL; visualization, WY and XL; supervision, WY and XL; project administration, YL and HX; funding acquisition, HX. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Yuanyuan Li.

Ethics declarations

Competing interests

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Li, Y., Song, L., Sun, Q. et al. Rolling bearing fault diagnosis based on quantum LS-SVM. EPJ Quantum Technol. 9, 18 (2022). https://doi.org/10.1140/epjqt/s40507-022-00137-y


Keywords