“Predictive” means that maintenance is performed on time, based on predictions of imminent failures, before they actually occur. So, time is an important element in predictive maintenance, and hence in the AI algorithms used. In this blog, we will explore the use of time series, or data sequences, in AI.
Predictive maintenance is based on the intelligent use of data to make predictions that indicate the need for interventions. Basically, two types of data are used: images and physical quantities. Both can be captured once or repeatedly over time; in the temporal case we speak of video (for images) and time-series data (for physical quantities).
For the first type of data, image analysis or computer vision is a well-established field of deep learning, based on the use of what are known as convolutional neural networks (CNNs). CNNs are used to “classify” or “label” images; i.e. to interpret what is observed. Facial recognition is a popular application, but CNNs can also be used for recognizing products, and in particular, features that “measure” product condition and quality. If the degradation of an asset’s condition can be visualized, for example through traces of wear, then CNNs can be selected for predictive maintenance algorithms.
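To make the core operation of a CNN concrete: each layer slides small learned filters over the image and responds where the filter's pattern occurs. The numpy sketch below uses a hand-picked edge filter standing in for learned weights, responding to an abrupt intensity change such as a scratch or wear trace; all values are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most
    deep-learning frameworks): slide the kernel over the image and
    take weighted sums at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel highlights left-to-right intensity jumps,
# e.g. the boundary of a wear mark on a product surface.
image = np.zeros((5, 5))
image[:, 3:] = 1.0                      # right half of the image is bright
edge_kernel = np.array([[-1.0, 1.0]])   # responds where brightness jumps
response = conv2d(image, edge_kernel)   # peaks exactly at the edge column
```

In a real CNN, many such filters are learned from labeled images rather than chosen by hand, and their responses are stacked through several layers before classification.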
For predictive maintenance, however, the second data type, temporal sequences of physical quantities, is predominant, because maintenance is usually concerned with the timeline of processes, especially changes that occur over time.
Think here of decreasing performance, the onset of wear, material transformations, overheating, etc., for which quantities such as position, frequency, temperature, and pressure can serve as indications. Time series, or sequences, are then the logical data from which to extract predictions, by detecting fault-related patterns or anomalies within them.
A long-time popular algorithm for time-series analysis is what is known as the hidden Markov model (HMM). HMMs are widely used in, for example, speech recognition, for translating a sequence of spoken words into text. An HMM is a statistical model in which the system being modeled is assumed to be a Markov process (or chain) with unobserved (hidden) states; in the maintenance context, these states can be related to the condition and behavior of the asset concerned.
The advantage of an HMM is that it can not only analyze sequences, but also generate them. There are, however, limitations, as the general structure of the hidden Markov model must be set by the data scientist ahead of time, which implies some knowledge about the sequences one is trying to learn.
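The "analysis" direction can be sketched with the Viterbi algorithm, which recovers the most likely hidden state sequence from a sequence of observations. In the toy model below, the two hidden states ("healthy" and "worn") and all probabilities are hypothetical illustration values; observations are discretized vibration levels (0 = low, 1 = high).

```python
import numpy as np

states = ["healthy", "worn"]
start = np.array([0.9, 0.1])        # initial state probabilities
trans = np.array([[0.95, 0.05],     # healthy tends to stay healthy
                  [0.00, 1.00]])    # wear does not heal itself
emit = np.array([[0.8, 0.2],        # healthy: mostly low vibration
                 [0.3, 0.7]])       # worn: mostly high vibration

def viterbi(obs):
    """Most likely hidden state sequence for an observation sequence."""
    logp = np.log(np.maximum(start * emit[:, obs[0]], 1e-300))
    back = []
    for o in obs[1:]:
        # cand[i, j]: best log-probability of being in j, coming from i
        cand = logp[:, None] + np.log(np.maximum(trans, 1e-300))
        back.append(cand.argmax(axis=0))
        logp = cand.max(axis=0) + np.log(emit[:, o])
    path = [int(logp.argmax())]
    for ptr in reversed(back):      # follow backpointers to the start
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 1, 1, 1]))
# a run of high-vibration samples is decoded as a transition to "worn"
```

A sustained shift in the observations is thus interpreted as a change of the asset's hidden condition, which is exactly the kind of signal predictive maintenance is after.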
As an alternative, machine learning can be used to produce a model that takes sensor data as input and generates predictions without recourse to explicit physical knowledge. Machine learning comes in a variety of implementations, one of which is the so-called neural network. To some extent, a neural network mimics the workings of the human brain. It consists of a network of nodes, the artificial neurons, in which each connection between neurons is given a specific weight that has either a positive (excitatory) or a negative (inhibitory) value. For sequential data, the recurrent neural network (RNN) is the variant of choice: its feedback connections give the network a memory, so that a prediction can depend on the history of sensor samples rather than on a single snapshot. Sensor data is fed as input to the model, which then has to generate a prediction about the behavior of the machine (“normal” or “abnormal” in the simplest, binary version) as its output.
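For time-series input, a recurrent network maintains a hidden state that summarizes the sequence seen so far. The numpy sketch below shows a single forward pass producing a binary "abnormal" probability; the dimensions are hypothetical and the random weights are placeholders for what training would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3 sensor channels in, 8 hidden units,
# one output probability ("abnormal").
W_in = rng.normal(scale=0.3, size=(8, 3))   # input-to-hidden weights
W_h = rng.normal(scale=0.3, size=(8, 8))    # hidden-to-hidden (recurrent)
w_out = rng.normal(scale=0.3, size=8)       # hidden-to-output weights

def predict(sequence):
    """Run one sensor sequence (T x 3 array) through a simple recurrent
    network and return P(abnormal) for the whole sequence."""
    h = np.zeros(8)                          # hidden state carries history
    for x in sequence:
        h = np.tanh(W_in @ x + W_h @ h)      # update state with new sample
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid -> probability

p = predict(rng.normal(size=(20, 3)))        # some probability in (0, 1)
```

Training would adjust `W_in`, `W_h`, and `w_out` on labeled sequences (by backpropagation through time); practical RNNs also use gated units such as LSTM or GRU cells, omitted here for brevity.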
As always, the biggest challenge in setting up predictive maintenance is getting the data: firstly determining which data are required for making the relevant predictions, and then implementing the correct sensor solutions.
In rotating equipment, a major class of maintenance-intensive assets that includes power tools, motors, and vehicles, vibrations are highly informative phenomena, and hence their frequencies are relevant data in which anomalies can indicate faulty behavior.
To measure these vibrations, an inertial measuring unit, a combination of an accelerometer and a gyroscope, is used in many cases. But motor signals, such as currents and other quantities that can be derived directly from the asset control system, may also be suitable data candidates.
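How frequencies are extracted from such vibration measurements can be sketched with an FFT. In the numpy example below, the simulated accelerometer signal, the 50 Hz rotation component, and the 120 Hz "fault" component (e.g. a bearing defect) are all invented for illustration.

```python
import numpy as np

fs = 1000.0                        # assumed sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples

# Simulated accelerometer signal: a 50 Hz rotation component, a weaker
# hypothetical 120 Hz fault component, and some measurement noise.
signal = (np.sin(2 * np.pi * 50 * t)
          + 0.5 * np.sin(2 * np.pi * 120 * t)
          + 0.1 * np.random.default_rng(1).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)  # frequency axis in Hz

# The two strongest spectral peaks reveal the frequencies present;
# a new peak, or a shift of a known peak, can flag faulty behavior.
peaks = freqs[np.argsort(spectrum)[-2:]]
```

A monitoring system would compute such spectra on a rolling window and compare them against the spectrum of a healthy reference state.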
When data acquisition has been organized satisfactorily, training data can be collected and then classified by assigning labels to the data such as “normal operation,” “error type 1,” “error type 2,” etc.; a job that is often still performed “manually.”
When it is hard to obtain a sufficient amount of relevant training data, i.e. when fault-related signals or incidents have a low frequency of occurrence, so-called synthetic data can be used. Such data can be generated using a model of the asset to be monitored. With time series data, an HMM can be used for this purpose. Training then becomes a kind of two-stage process, starting with training an HMM, which can then generate synthetic sequences to be used for training the RNN model in the second stage.
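The generation stage follows directly from the model's transition and emission probabilities: walk the Markov chain and emit an observation at each step. Everything in the sketch below (the two states, the probabilities, the discretized vibration levels) is a hypothetical toy model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-state model: 0 = "healthy", 1 = "worn";
# observations are discretized vibration levels, 0 = low, 1 = high.
start = np.array([0.9, 0.1])
trans = np.array([[0.95, 0.05],   # healthy -> worn happens rarely
                  [0.00, 1.00]])  # worn is an absorbing state
emit = np.array([[0.8, 0.2],
                 [0.3, 0.7]])

def sample_sequence(length):
    """Generate one synthetic observation sequence plus its state labels,
    usable as labeled training data for a sequence model."""
    states, obs = [], []
    s = rng.choice(2, p=start)
    for _ in range(length):
        states.append(s)
        obs.append(rng.choice(2, p=emit[s]))  # emit an observation
        s = rng.choice(2, p=trans[s])         # step the Markov chain
    return np.array(obs), np.array(states)

# A batch of synthetic sequences, including the rare fault transitions
# that are hard to collect from the real asset.
batch = [sample_sequence(50) for _ in range(100)]
```

In practice, the HMM's probabilities would first be fitted to whatever real data is available, so that the synthetic sequences inherit realistic statistics.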
However, nothing beats real data: an HMM trained on only a small set of error instances cannot generate statistically meaningful variation in the training samples.
Depending on the application and the sensors used, training should be aimed at achieving a prediction accuracy of 90% or more. In addition to this, we should minimize the risk of a false positive, signaling an imminent failure while everything is fine, or even worse, a false negative, when a real failure is imminent but not detected.
Prediction accuracy can be estimated in advance by using a subset of the training data for validating the model. When data are collected to serve as a training set, some of these data can be kept aside, so that the trained model can later be tested on data it has never seen. For RNNs, training is computationally very intensive, because long data sequences are required to obtain an acceptably high accuracy.
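The hold-out bookkeeping, together with the counting of false positives and false negatives, can be sketched in a few lines of numpy. The data set size, the 80/20 split, and the placeholder "predictions" (which stand in for a real trained model) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical labeled data set: 500 sequences,
# label 0 = normal operation, label 1 = imminent fault.
n = 500
labels = rng.integers(0, 2, size=n)

# Keep 20% aside before training; the model never sees these examples.
idx = rng.permutation(n)
split = int(0.8 * n)
train_idx, test_idx = idx[:split], idx[split:]

# After training on train_idx, evaluate on the held-out part. Here we
# use placeholder predictions that deliberately make five mistakes,
# just to show the bookkeeping.
predictions = labels[test_idx].copy()
predictions[:5] ^= 1                       # flip five predictions

accuracy = float(np.mean(predictions == labels[test_idx]))
false_pos = int(np.sum((predictions == 1) & (labels[test_idx] == 0)))
false_neg = int(np.sum((predictions == 0) & (labels[test_idx] == 1)))
```

With five errors on 100 held-out examples, the estimated accuracy is 0.95; the split between false positives and false negatives is what matters for the maintenance decision, since a missed failure is usually far more costly than an unnecessary inspection.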
Once the model has been trained, its deployment requires only a little computational effort, which makes RNNs ideal for edge computing, avoiding the costs of data communication, cloud computing, and storage.
Building and training an RNN model requires the joint effort of a domain expert and a data scientist. While it is the domain expert’s role to determine which predictions make physical sense, and to oversee interpretation of the model outcomes, the data scientist’s job is to decide which model to deploy and how to configure it, i.e. determine the training procedure, the model parameters to be used, and the data preprocessing required.
So, careful model building for predictive maintenance may take some time, but the effort is worthwhile, as time-series analysis can reveal a wealth of information on asset condition and the need for maintenance.
Do you want to start with predictive maintenance and need advice on how to take the first steps? Get in touch with us.