1.

What Are Batch, Incremental, On-line, Off-line, Deterministic, Stochastic, Adaptive, Instantaneous, Pattern, Constructive, And Sequential Learning?

Answer:

There are many ways to categorize learning methods. The distinctions are overlapping and can be confusing, and the terminology is used very inconsistently. This answer attempts to impose some order on the chaos, probably in vain.

Batch vs. Incremental Learning (also Instantaneous, Pattern, and Epoch)

Batch learning proceeds as follows:

Initialize the weights.
Repeat the following steps:
   Process all the training data.
   Update the weights.

Incremental learning proceeds as follows:

Initialize the weights.
Repeat the following steps:
   Process one training case.
   Update the weights.
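For concreteness, the two sketches above can be written out for a one-weight linear model y = w*x trained on squared error. The data, learning rate, and epoch count below are illustrative assumptions, not part of the original answer:

```python
# Batch vs. incremental learning for a one-weight linear model y = w*x.
# The data satisfy t = 2*x exactly, so both methods should converge to w = 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
lr = 0.05  # learning rate (illustrative)

def batch_train(epochs=100):
    w = 0.0  # initialize the weights
    for _ in range(epochs):  # repeat:
        # process ALL the training data before touching w
        grad = sum((w * x - t) * x for x, t in data) / len(data)
        w -= lr * grad  # update the weights once per pass
    return w

def incremental_train(epochs=100):
    w = 0.0  # initialize the weights
    for _ in range(epochs):  # repeat:
        for x, t in data:  # process ONE training case...
            w -= lr * (w * x - t) * x  # ...and update w immediately
    return w
```

The contrast is in where the update sits: batch learning updates once per pass through the whole data set, while incremental learning updates after every single case.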

In the above sketches, the exact meaning of "Process" and "Update" depends on the particular training algorithm and can be quite complicated for methods such as Levenberg-Marquardt. Standard backprop (see "What is backprop?") is quite simple, though. Batch standard backprop (without momentum) proceeds as follows:

Initialize the weights W.
Repeat the following steps:
   Process all the training data DL to compute the gradient of the average error function AQ(DL,W).
   Update the weights by subtracting the gradient times the learning rate.
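As a minimal sketch of batch standard backprop, consider a tiny network with one tanh hidden unit, out = v * tanh(w * x). The training data DL, the learning rate, the epoch count, and the network shape are all illustrative assumptions:

```python
import math

# Batch standard backprop (no momentum) for out = v * tanh(w * x).
# DL is a small illustrative data set with roughly tanh-shaped targets.
DL = [(0.0, 0.0), (0.5, 0.42), (1.0, 0.64)]  # (input, target) pairs
lr = 0.5  # learning rate (illustrative)

def train(epochs=5000):
    w, v = 0.5, 0.5  # initialize the weights W
    for _ in range(epochs):
        gw = gv = 0.0
        # process all of DL to accumulate the gradient of AQ(DL,W),
        # here the average of (1/2)*err^2 over the training cases
        for x, t in DL:
            h = math.tanh(w * x)              # forward pass
            err = v * h - t
            gv += err * h                     # d(error)/dv
            gw += err * v * (1 - h * h) * x   # d(error)/dw via the chain rule
        gv /= len(DL)
        gw /= len(DL)
        # update W by subtracting the gradient times the learning rate
        w -= lr * gw
        v -= lr * gv
    return w, v
```

Note that the whole of DL contributes to the gradient before the single weight update at the end of each pass; the incremental variant would instead move the two subtraction lines inside the loop over DL.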



