The computational cost of ML models differs between the training and deployment phases. In this case, the model is expected to be updated monthly, so training cost is not a concern: training the model takes about half an hour on a single GTX 2080 GPU. The model consists of an encoder $q_\phi(z \mid x)$ and a decoder $p_\theta(x \mid z)$, parameterized by LSTM networks with parameters $\phi$ and $\theta$, respectively; each LSTM cell also carries forward the hidden state $h_{t-1}$ and cell state $c_{t-1}$ from time $t-1$. The LSTM-based VAE is trained to minimize the following loss function:

$$\mathcal{L} = \sum_{t=1}^{T} \Big( \mathbb{E}_{z_t \sim q_\phi(z_t \mid x_t)}\big[ -\log p_\theta(x_t \mid z_t) \big] + D_{\mathrm{KL}}\big( q_\phi(z_t \mid x_t) \,\|\, p(z_t) \big) \Big),$$

where the first term in the loss function is the expected negative log-likelihood, which can be replaced by the mean-square error between the inputs $x$ and the reconstructed inputs $\hat{x}$.
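To make the training objective concrete, the following is a minimal PyTorch sketch of an LSTM-based VAE trained with this loss. The layer sizes, the single-layer LSTMs, the standard-normal prior $p(z_t)$, and the names `LSTMVAE` and `vae_loss` are illustrative assumptions; the article itself only specifies the encoder $q_\phi(z \mid x)$, the decoder $p_\theta(x \mid z)$, and the reconstruction-plus-KL loss, with mean-square error standing in for the negative log-likelihood.

```python
# Sketch of an LSTM-based VAE and its loss, following the formulation above.
# Dimensions and architecture details are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMVAE(nn.Module):
    def __init__(self, input_dim, hidden_dim=64, latent_dim=16):
        super().__init__()
        # Encoder q_phi(z_t | x_t): LSTM followed by linear heads for mu and log-variance
        self.encoder_lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder p_theta(x_t | z_t): LSTM mapping latent samples back to the input space
        self.decoder_lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.fc_out = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, T, input_dim); the LSTMs carry h_{t-1} and c_{t-1} internally
        enc_out, _ = self.encoder_lstm(x)
        mu, logvar = self.fc_mu(enc_out), self.fc_logvar(enc_out)
        # Reparameterization trick: sample z_t ~ q_phi(z_t | x_t)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        dec_out, _ = self.decoder_lstm(z)
        x_hat = self.fc_out(dec_out)
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term: mean-square error replaces -log p_theta(x_t | z_t)
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # KL term: D_KL(q_phi(z_t | x_t) || p(z_t)) with a standard-normal prior, summed over t
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Example usage with made-up dimensions: 4 sequences of T = 50 steps, 8 sensor channels
model = LSTMVAE(input_dim=8)
x = torch.randn(4, 50, 8)
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
```

In this sketch the reconstruction and KL terms are summed over all time steps, mirroring the summation over $t = 1, \dots, T$ in the loss above; in practice the two terms are often reweighted, but the article does not specify such a weighting.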