What is "batch normalizaiton"? why using it? how does it affect prediction?


Recently, many deep architectures use "batch normalization" for training.

What is "batch normalization"? What does it do mathematically? In what way does it help the training process?

How is batch normalization used during training? Is it a special layer inserted into the model? Do I need to normalize before each layer, or only once?

Suppose I used batch normalization for training. Does this affect my test-time model? Should I replace the batch normalization with some other/equivalent layer/operation in my "deploy" network?


This question about batch normalization only covers part of what I'm asking; I was hoping for a more detailed answer. More specifically, I would like to know how training with batch normalization affects test-time prediction, i.e., the "deploy" network and the TEST phase of the net.

4 Answers

BEST ANSWER

Batch normalization is for layers that can suffer from deleterious drift. The math is simple: find the mean and variance of each component, then apply the standard transformation to convert all values to the corresponding z-scores: subtract the mean and divide by the standard deviation. This ensures that the component ranges are very similar, so that each of them has a chance to affect the training deltas (in back-prop).
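For concreteness, here is a minimal NumPy sketch of that per-feature normalization (my own illustration of the math described above, not code from any particular framework):

import numpy as np

def batch_norm_forward(x, eps=1e-5):
    """Normalize each feature of a mini-batch to zero mean and unit variance.

    x has shape (batch_size, num_features); eps guards against division by zero.
    """
    mean = x.mean(axis=0)                # per-feature mean over the batch
    var = x.var(axis=0)                  # per-feature variance over the batch
    z = (x - mean) / np.sqrt(var + eps)  # the z-scores described above
    return z, mean, var

# A batch of 4 samples with 3 features on very different scales:
x = np.array([[1.0, 100.0, 0.01],
              [2.0, 300.0, 0.03],
              [3.0, 200.0, 0.02],
              [4.0, 400.0, 0.04]])
z, mean, var = batch_norm_forward(x)
print(z.mean(axis=0))  # ~0 for every feature
print(z.std(axis=0))   # ~1 for every feature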

If you're using the network for pure testing (no further training), then simply delete these layers; they've done their job. If you're training while testing / predicting / classifying, then leave them in place; the operations won't harm your results at all, and barely slow down the forward computations.

As for Caffe specifics, there's really nothing particular to Caffe. The computation is a basic stats process, and is the same algebra for any framework. Granted, there will be some optimizations for hardware that supports vector and matrix math, but those consist of simply taking advantage of the chip's built-in operations.


RESPONSE TO COMMENT

If you can afford a little extra training time, yes, you'd want to normalize at every layer. In practice, inserting them less frequently -- say, after every one to three Inception modules -- will work just fine.

You can ignore these in deployment because they've already done their job: when there's no back-propagation, there's no drift of weights. Also, when the model handles only one instance per batch, the z-score is always 0: every input is exactly equal to the mean of the batch (since it is the entire batch). See the tiny sketch below.
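To see the single-instance point concretely (assuming plain per-feature normalization with a small epsilon):

import numpy as np

# A "batch" containing exactly one instance: the batch mean equals the input
# itself, so (x - mean) is zero and every z-score comes out as 0.
x = np.array([[2.7, -1.3, 40.0]])   # batch_size = 1, three features
z = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)
print(z)                            # [[0. 0. 0.]]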

ANSWER

As a complement to Prune's answer: during testing, the batch normalization layer uses the average mean/variance/scale/shift values accumulated over different training iterations to normalize its input (subtract the mean and divide by the standard deviation).
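In other words, test-time normalization uses stored statistics rather than the current batch's; a rough sketch (the names running_mean, running_var, gamma and beta are mine, not Caffe's):

import numpy as np

def batch_norm_inference(x, running_mean, running_var, gamma, beta, eps=1e-5):
    """Test-time batch norm: normalize with the statistics accumulated during
    training (not the current batch's), then apply the learned scale and shift."""
    x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    return gamma * x_hat + beta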

The original Google batch normalization paper only says that a moving average should be used, without giving a more thorough explanation. Both Caffe and TensorFlow use an exponential moving average.

In my experience, a simple moving average usually works better than an exponential moving average as far as validation accuracy goes (though this may need more experiments). For a comparison, you can refer to here (I tried both moving-average implementations in channel_wise_bn_layer and compared them with the batch norm layer in BVLC/caffe).
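For reference, the two update rules being compared look roughly like this (a sketch with my own variable names, not the actual channel_wise_bn_layer code):

def update_simple_moving_average(running, batch_stat, num_batches_seen):
    # Cumulative (simple) moving average: every batch seen so far is weighted equally.
    return (running * num_batches_seen + batch_stat) / (num_batches_seen + 1)

def update_exponential_moving_average(running, batch_stat, momentum=0.99):
    # Exponential moving average: recent batches are weighted more heavily.
    return momentum * running + (1.0 - momentum) * batch_stat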

ANSWER

For what it's worth, this link has an example of using "BatchNorm" layers in a cifar10 classification net.

Specifically, it splits the layer between TRAIN and TEST phases:

layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "pool1"
  top: "bn1"
  batch_norm_param {
    # During training, normalize with statistics computed from the current mini-batch.
    use_global_stats: false
  }
  # The layer's three internal blobs (mean, variance, moving-average factor)
  # are accumulated by the layer itself rather than learned by the solver,
  # so their learning rates are set to zero.
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TRAIN
  }
}
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "pool1"
  top: "bn1"
  batch_norm_param {
    # At test time, normalize with the accumulated (global) statistics instead.
    use_global_stats: true
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  include {
    phase: TEST
  }
}
ANSWER

Batch normalization solves a problem called "internal covariate shift". To understand why it helps, you’ll need to first understand what covariate shift actually is.

“Covariates” is just another name for the input “features”, often written as X. Covariate shift means that the distribution of the features is different in different parts of the training/test data, breaking the i.i.d. assumption used across most of ML. This problem occurs frequently in medical data (where you have training samples from one age group but want to classify something coming from another age group) or in finance (due to changing market conditions).

"Internal covariate shift" refers to covariate shift occurring within a neural network, i.e. going from (say) layer 2 to layer 3. This happens because, as the network learns and the weights are updated, the distribution of outputs of a specific layer in the network changes. This forces the higher layers to adapt to that drift, which slows down learning.

BN helps by making the data flowing between intermediate layers of the network look like whitened data, which means you can use a higher learning rate. Since BN has a regularizing effect, it also means you can often remove dropout (which is helpful because dropout usually slows down training).
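Putting it together, the training-time transform from the BN paper standardizes each feature of the batch and then applies a learned scale (gamma) and shift (beta); a minimal sketch (per-feature standardization rather than full whitening):

import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Training-time BN: standardize the batch so the data fed to the next
    layer has roughly zero mean and unit variance per feature, then let the
    network rescale it via the learned gamma and beta."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)   # standardized activations
    return gamma * x_hat + beta               # learned scale and shift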