The deep neural network is designed to approximate strongly nonlinear dependencies, such as the dependence of the device state on selected measured device parameters. The network parameters are estimated (trained) with the error backpropagation algorithm. Training proceeds in two phases: first the network is pre-trained layer by layer in the forward direction using autoencoders (unsupervised learning), and then it is fine-tuned in the backward direction (supervised learning), which mitigates the damping (vanishing) of the backpropagated error. An autoencoder is a multilayer perceptron with an equal number of input and output neurons and an odd number of layers, whose middle layer is referred to as the bottleneck (code) layer. The autoencoder learns an autoassociative function on a set of input patterns, i.e. the pattern presented to the input is reproduced at the output.
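The sketch below illustrates the two training phases in a simplified form: layer-wise autoencoder pre-training followed by supervised fine-tuning with backpropagation. The layer sizes, dummy data, and the PyTorch framework are illustrative assumptions only and do not reflect the actual implementation of the software.

```python
# Minimal sketch of the two-phase training described above.
# Layer sizes and data are hypothetical; PyTorch is assumed.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 16)          # measured device parameters (dummy data)
y = torch.randint(0, 3, (256,))   # device state labels (dummy data)

layer_sizes = [16, 12, 8, 4]      # encoder widths, chosen for illustration

# Phase 1: unsupervised pre-training, one autoencoder per layer.
# Each autoencoder has an equal number of input and output neurons and
# learns to reproduce its input; only the encoder half is kept.
encoders = []
inputs = X
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    enc = nn.Sequential(nn.Linear(n_in, n_out), nn.Sigmoid())
    dec = nn.Linear(n_out, n_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        recon = dec(enc(inputs))
        loss = nn.functional.mse_loss(recon, inputs)  # autoassociative target = input
        loss.backward()
        opt.step()
    encoders.append(enc)
    inputs = enc(inputs).detach()  # encoded features feed the next autoencoder

# Phase 2: supervised fine-tuning of the stacked encoders plus a classifier
# head using error backpropagation; the pre-trained weights provide a good
# starting point and so mitigate the vanishing of the backpropagated error.
model = nn.Sequential(*encoders, nn.Linear(layer_sizes[-1], 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()
```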
The software has been developed with the financial support of the Technology Agency of the Czech Republic under research project No. TK04020003.
Savings are achieved through early diagnostics of the equipment condition, which prevents subsequent losses.