When historical data acquisition has been organized satisfactorily, training data can be collected and then classified by assigning labels such as “normal operation,” “error type 1,” “error type 2,” etc. — a job that is often still performed “manually.”
When it is hard to obtain a sufficient amount of relevant training data, i.e. when fault-related signals or incidents have a low frequency of occurrence, so-called synthetic data can be used. Such data can be generated using a model of the asset to be monitored. With time series data, an HMM can be used for this purpose. Training then becomes a kind of two-stage process, starting with training an HMM, which can then generate synthetic sequences to be used for training the RNN model in the second stage.
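The two-stage idea can be sketched in a few lines. The snippet below uses a hypothetical two-state Gaussian HMM (states, transition matrix, and emission parameters are illustrative assumptions, not fitted values) to generate labeled synthetic sequences that could then feed RNN training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state HMM: state 0 = "normal", state 1 = "degraded".
# In practice these parameters would be fitted on the few real fault sequences.
start_prob = np.array([0.9, 0.1])
trans = np.array([[0.95, 0.05],
                  [0.10, 0.90]])      # row i: P(next state | current state i)
means = np.array([0.0, 3.0])          # emission mean per state
stds = np.array([1.0, 1.5])           # emission std per state

def sample_sequence(length):
    """Generate one synthetic sensor sequence plus its state labels."""
    states = np.empty(length, dtype=int)
    states[0] = rng.choice(2, p=start_prob)
    for t in range(1, length):
        states[t] = rng.choice(2, p=trans[states[t - 1]])
    obs = rng.normal(means[states], stds[states])
    return obs, states

# Build a synthetic training set for the second-stage RNN.
X, y = zip(*(sample_sequence(100) for _ in range(500)))
X, y = np.stack(X), np.stack(y)
print(X.shape, y.shape)   # (500, 100) (500, 100)
```

A library such as hmmlearn offers fitted HMMs with a similar sampling interface; the pure-NumPy version above simply makes the mechanism explicit.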
However, nothing beats real data: an HMM trained on a small set of error instances cannot generate statistically meaningful variation in the training samples.
The required accuracy of AI in predictive maintenance depends on the application and the sensors used, but training should generally aim at a prediction accuracy of 90% or more. In addition, we should minimize the risk of a false positive (signaling an imminent failure while everything is fine) and, even worse, a false negative (a real failure is imminent but not detected).
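These three quantities map directly onto the standard confusion-matrix metrics. The toy labels below are made up for illustration; on a real project they would come from a labeled test set:

```python
import numpy as np

# Hypothetical model output on a labeled test set:
# 1 = "failure imminent", 0 = "normal operation".
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 0, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))   # correctly flagged failures
fp = np.sum((y_pred == 1) & (y_true == 0))   # false alarms
fn = np.sum((y_pred == 0) & (y_true == 1))   # missed failures (worst case)
tn = np.sum((y_pred == 0) & (y_true == 0))   # correctly flagged normals

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # few false positives -> high precision
recall = tp / (tp + fn)      # few false negatives -> high recall
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
# -> accuracy=0.80 precision=0.75 recall=0.75
```

Note that accuracy alone can be misleading when failures are rare; tracking precision and recall separately exposes the false-positive and false-negative risks explicitly.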
Prediction accuracy can be estimated in advance by using a subset of the training data to validate the model. When data are collected to serve as a training set, some of them can be kept aside so that the trained model can later be tested on data it has never seen. For RNNs, training is computationally very intensive, because long data sequences are required to obtain acceptably high accuracy.
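Holding data aside amounts to a simple index split. The proportions below (70/15/15) are a common convention, not a prescription from the text; indices stand in for the actual sequences:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dataset of 1000 labeled sequences.
n = 1000
idx = rng.permutation(n)   # shuffle so each split is representative

# 70% training, 15% validation (tuning during training),
# 15% test: a final accuracy estimate on data the model has never seen.
n_train, n_val = int(0.70 * n), int(0.15 * n)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

print(len(train_idx), len(val_idx), len(test_idx))   # 700 150 150
```

The key discipline is that the test indices are never touched until training and tuning are finished; otherwise the accuracy estimate is optimistic.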
Once the model has been trained, deploying it requires only little computational effort, which makes RNNs well suited for edge computing, saving time and money as well as costs for data communication, cloud computing, and storage.
Building and training an RNN model requires the joint effort of a domain expert and a data scientist. While it is the domain expert’s role to determine which predictions make physical sense, and to oversee interpretation of the model outcomes, the data scientist’s job is to decide which model to deploy and how to configure it, i.e. to determine the training procedure, the model parameters to be used, and the data preprocessing required.
So, careful model building for AI in predictive maintenance may take some time, but the effort is worthwhile: time-series analysis can reveal a wealth of information on asset condition and detect anomalies in time, so that maintenance can be provided when needed.
Do you want to start with AI in predictive maintenance and need advice on how to take the first steps? Get in touch with us.