Cross Entropy
An Information Theory Perspective
Cross entropy is closely related to entropy. If we think of entropy as the lowest achievable expected encoding length for a single distribution, cross entropy involves two different probability distributions. The optimal encoding schemes that achieve the lowest expected encoding length for the two distributions are, in general, different. If we use the encoding scheme optimized for one distribution while the random variable actually follows the other, cross entropy measures the expected encoding length in that case.
If the random variable follows distribution p but the encoding is optimized for distribution q, the cross entropy can be written as follows.
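For a discrete random variable with outcomes $x$, this is

$$ H(p, q) = -\sum_{x} p(x)\,\log q(x). $$

It reduces to the entropy $H(p)$ when $q = p$ and is never smaller than $H(p)$, which reflects the extra cost of using a code optimized for the wrong distribution.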
Loss Function
Cross entropy is commonly used as a loss function for classification tasks, and minimizing the cross entropy is equivalent to maximizing the likelihood. For binary classification, we can treat the target y of each training sample as a draw from its own Bernoulli distribution, so the likelihood of a single sample can be written down directly, as shown below.
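Writing $p$ for the probability that $y = 1$ (in the classification setting, the model's predicted probability), the Bernoulli likelihood of one sample is

$$ P(y) = p^{\,y}\,(1 - p)^{\,1 - y}. $$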
The expression reduces to p when y = 1 and to 1 − p when y = 0. The overall likelihood of all the samples is the product of the individual likelihoods, as follows.
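Assuming $N$ independent samples, with target $y_i$ and predicted probability $p_i$ for the $i$-th sample, the overall likelihood is

$$ L = \prod_{i=1}^{N} p_i^{\,y_i}\,(1 - p_i)^{\,1 - y_i}. $$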
Therefore, the loss function is obtained from maximizing this likelihood.
Maximizing the likelihood is the same as maximizing its averaged log form, which is also called the (average) log-likelihood.
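With the same notation, the averaged log form is

$$ \frac{1}{N}\log L = \frac{1}{N}\sum_{i=1}^{N}\Big[\, y_i \log p_i + (1 - y_i)\log(1 - p_i) \,\Big]. $$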
Maximizing the average log-likelihood is, in turn, the same as minimizing its negative, which we take as the loss.
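That is, the loss is the averaged negative log-likelihood:

$$ \text{Loss} = -\frac{1}{N}\sum_{i=1}^{N}\Big[\, y_i \log p_i + (1 - y_i)\log(1 - p_i) \,\Big]. $$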
This is exactly the form of the cross entropy: for each sample, the target distribution $(y_i, 1 - y_i)$ plays the role of p and the predicted distribution $(p_i, 1 - p_i)$ plays the role of q, so the classification loss is simply the cross entropy averaged over the training samples.
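To make the correspondence concrete, here is a minimal NumPy sketch of the binary cross entropy loss in exactly this averaged negative log-likelihood form; the function name, the clipping constant, and the example numbers are choices made for this illustration.

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Average negative log-likelihood of Bernoulli targets.

    y_true: array of 0/1 targets.
    p_pred: array of predicted probabilities for y = 1.
    eps:    small constant that keeps log() away from zero.
    """
    p = np.clip(p_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

# Example: three samples with targets 1, 0, 1 and predicted probabilities for y = 1.
y = np.array([1.0, 0.0, 1.0])
p = np.array([0.9, 0.2, 0.6])
print(binary_cross_entropy(y, p))  # roughly 0.28
```

Confident predictions on the correct class (like the first sample) contribute little to the loss, while confident mistakes are penalized heavily, which matches the encoding-length intuition from the information theory view.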