AI Training Screen

Before training the AI, you can adjust the parameters that define the structure of the neural network. Each parameter is explained below:

Number of iterations: #

This determines the number of models to be created. In neural network models, the initial network weights are assigned randomly. Consequently, even with the same hyperparameters, the accuracy of the model may vary each time it’s created. By creating multiple models, you can obtain the average error of the neural network under a given set of hyperparameters. This also allows for ensemble analysis using multiple models.
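The effect described above can be sketched in plain Python (this is an illustration of the concept, not Multi-Sigma's implementation): the same tiny model is trained several times from different random initial weights, and the errors are averaged.

```python
import math
import random

def train_once(data, seed, epochs=200, lr=0.1):
    """Fit y = w*x + b by gradient descent from random initial weights; return RMSE."""
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)  # random initialization
    n = len(data)
    for _ in range(epochs):
        gw = sum(2 * ((w * x + b) - y) * x for x, y in data) / n
        gb = sum(2 * ((w * x + b) - y) for x, y in data) / n
        w -= lr * gw
        b -= lr * gb
    return math.sqrt(sum(((w * x + b) - y) ** 2 for x, y in data) / n)

# Toy data on the line y = 2x + 1
data = [(x / 10, 2 * (x / 10) + 1) for x in range(10)]

# "Number of iterations": build several models that differ only in their
# random initial weights, then report the average error.
errors = [train_once(data, seed) for seed in range(5)]
avg_rmse = sum(errors) / len(errors)
```

Each run ends at a slightly different error because only the random starting point differs; the average is a more stable estimate of the model quality for the chosen hyperparameters.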

Number of outputs: #

The number of output parameters (non-editable).

Number of epochs: #

The number of times the training data is repeatedly used for learning. A higher value generally results in lower prediction errors on the training data, but excessively high values may lead to overfitting.

Number of hidden layers: #

The number of hidden layers in the neural network.

Number of neurons in hidden layers: #

The number of neurons in the hidden layers.

Dropout of input layer: #

A regularization process that probabilistically ignores the outputs of the input layer during training.

Dropout of other layers: #

A regularization process that probabilistically ignores the outputs of layers other than the input layer during training.
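A minimal sketch of the dropout mechanism in plain Python (the "inverted dropout" formulation; illustrative only, not Multi-Sigma's implementation):

```python
import random

def dropout(values, rate, rng, training=True):
    """Inverted dropout: during training, zero each value with probability
    `rate` and scale the survivors by 1/(1 - rate) so the expected sum is
    unchanged; at inference time, the values pass through untouched."""
    if not training or rate == 0.0:
        return list(values)
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in values]

rng = random.Random(0)
layer_out = [0.5, -1.2, 0.8, 2.0]
train_out = dropout(layer_out, rate=0.25, rng=rng)                  # some values zeroed
eval_out = dropout(layer_out, rate=0.25, rng=rng, training=False)   # unchanged
```

"Dropout of input layer" applies this to the input layer's outputs, and "Dropout of other layers" applies it to the remaining layers.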

Size of training data: #

The size of the data used for AI training when constructing a machine learning model.

Size of batch: #

Number of training examples processed in each forward and backward pass during training.
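Conceptually, the training data is cut into consecutive mini-batches of this size; a sketch:

```python
def batches(data, batch_size):
    """Yield consecutive mini-batches; each one drives a single
    forward and backward pass (the last batch may be smaller)."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# 10 examples with a batch size of 4 -> batches of 4, 4, and 2
parts = list(batches(list(range(10)), batch_size=4))
```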

Number of patience: #

The maximum number of consecutive epochs with no improvement allowed before training is stopped early, which helps prevent overfitting.
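The early-stopping rule can be sketched as follows (illustrative, pure Python):

```python
def stop_epoch(val_errors, patience):
    """Return the (0-based) epoch at which training stops: after `patience`
    consecutive epochs without improvement in the validation error, or at
    the last epoch if that never happens."""
    best = float("inf")
    waited = 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, waited = err, 0   # improvement: reset the counter
        else:
            waited += 1
            if waited >= patience:
                return epoch        # patience exhausted: stop here
    return len(val_errors) - 1

# Validation error stops improving after epoch 2; with patience=2,
# training halts two epochs later.
stopped = stop_epoch([1.0, 0.8, 0.7, 0.71, 0.72, 0.73], patience=2)
```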

Number of split: #

The proportion of the data split off and used for the early-stopping judgment.

Batch normalization: #

A process that normalizes the input to each layer of the neural network.
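For a single batch of scalar activations, the normalization step looks like this (a sketch; real batch normalization also learns a per-feature scale and shift):

```python
import math

def batch_norm(batch, eps=1e-5):
    """Shift and scale a batch of activations to zero mean and unit variance.
    `eps` guards against division by zero for a constant batch."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
```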

Activation function except output layer: #

The activation function for layers other than the output layer, which converts the sum of input values to the output value for each node.

Activation function for the output layer: #

The activation function for the output layer, which converts the sum of input values to the output value for each node in the output layer.
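The sketch below shows how a node converts the weighted sum of its inputs into an output value, with a few common activation functions for illustration (the functions actually available are those offered on the Multi-Sigma screen):

```python
import math

def relu(x):
    """Rectified linear unit, a common hidden-layer activation."""
    return max(0.0, x)

def sigmoid(x):
    """Squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    """Identity, a common choice for a regression output layer."""
    return x

def node_output(inputs, weights, bias, activation):
    """Convert the sum of weighted input values (plus bias) to the node's output."""
    return activation(sum(w * x for w, x in zip(weights, inputs)) + bias)

out = node_output([1.0, 2.0], [0.5, -1.0], bias=0.5, activation=relu)
```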

Check random: #

A process that shuffles the training data randomly before AI training. This ensures that different training data are used each time, preventing bias in validation data errors when creating multiple models.
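The idea can be sketched as a fresh reshuffle before each model's train/validation split (illustrative; the 80/20 proportion here is arbitrary):

```python
import random

examples = list(range(100))        # stand-in for the training examples
random.shuffle(examples)           # new random order before each model is built
split = int(0.8 * len(examples))   # arbitrary 80/20 split for illustration
train, val = examples[:split], examples[split:]
```

Because each model sees a different validation subset, the validation error is not biased by one fixed split.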

Loss function: #

The function used to measure the error between predicted values of the model and actual values.

Optimizer: #

The optimization process that updates the weights and biases of each node to minimize the error between predicted and actual values.

Number of epochs when learning stopped: #

The number of epochs that were actually completed before learning stopped.

Cumulated distance between validation data: #

Cumulative distance between validation data points. A higher value indicates greater diversity in the validation data.

RMSE after preprocessing: #

Root Mean Square Error (RMSE) on the scale of the preprocessed data.

RMSE before preprocessing: #

RMSE when the data is reverted to its original scale before preprocessing.
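The two RMSE values differ only in the scale of the data. The sketch below assumes the target was min-max scaled to [0, 1] before training (an assumption for illustration; Multi-Sigma's actual preprocessing may differ):

```python
import math

def rmse(actual, predicted):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

lo, hi = 10.0, 50.0          # assumed original data range (hypothetical values)
def unscale(v):
    """Invert the assumed min-max scaling back to the original scale."""
    return v * (hi - lo) + lo

actual_scaled = [0.2, 0.5, 0.9]
pred_scaled = [0.25, 0.45, 0.85]

rmse_after = rmse(actual_scaled, pred_scaled)            # on the preprocessed scale
rmse_before = rmse([unscale(v) for v in actual_scaled],
                   [unscale(v) for v in pred_scaled])    # on the original scale
```

The "before preprocessing" value is the one expressed in the original units of the target, so it is usually the easier of the two to interpret.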

Relative error: #

Relative error, calculated in Multi-Sigma as:

Σ|(Actual Value – Predicted Value) / Predicted Value| / (Number of Predicted Data)
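The formula above translates directly to code:

```python
def relative_error(actual, predicted):
    """Mean of |(actual - predicted) / predicted| over all predictions."""
    return sum(abs((a - p) / p)
               for a, p in zip(actual, predicted)) / len(actual)

err = relative_error([10.0, 20.0], [8.0, 25.0])  # (0.25 + 0.2) / 2 = 0.225
```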

Correlation between prediction and actual: #

Correlation coefficient between predicted values and actual values. Values close to 1 indicate a positive correlation, close to -1 indicate a negative correlation, and close to 0 indicate no correlation.
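The coefficient reported here is presumably the standard Pearson correlation; a pure-Python sketch:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])  # close to 1: strong positive correlation
```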