Entropy measures the uncertainty, or randomness, in the data: the more mixed the class labels are, the higher the entropy. Information gain uses entropy to make decisions. The lower the entropy after a split, the more information that split provides.
Information gain is used in decision trees and random forests to pick the best split. The higher the information gain, the better the split, which is the same as saying the split produces children with lower entropy.
Information gain is calculated by comparing the entropy of a dataset before and after a split: it is the parent's entropy minus the weighted average entropy of the child nodes.
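The before/after comparison can be sketched in a few lines of Python. This is a minimal illustration (the function names `entropy` and `information_gain` are my own, not from any particular library):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy: -sum(p * log2(p)) over the class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    # Entropy before the split minus the weighted entropy after it
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A perfectly mixed parent split into two pure children gains 1 bit
parent = np.array([0, 0, 1, 1])
print(entropy(parent))                                    # 1.0
print(information_gain(parent, parent[:2], parent[2:]))   # 1.0
```

A split that leaves the children just as mixed as the parent would score a gain of 0, so the tree prefers splits whose children are as pure as possible.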
In short, entropy is the measure of uncertainty in the data, and the goal is to reduce entropy, which is the same as maximizing information gain. The feature whose split yields the highest information gain is considered the most important by the algorithm and is chosen first when building the tree.
So by using information gain you are, in effect, using entropy.