Introduction - If you have any usage issues, please Google them yourself
Deep neural networks with a large number of parameters are prone to overfitting, and combining the predictions of many such large networks at test time is too expensive to be a practical remedy.
Dropout addresses this problem well. It improves the performance of neural networks by preventing feature detectors from co-adapting. The key step of the method is to randomly drop units, together with the network weights connected to them, during training. Training with Dropout therefore amounts to training many thinned sub-networks. In the test phase, a single network trained with Dropout approximates averaging the predictions of these thinned networks and predicts the output more accurately. The method effectively reduces overfitting and gives a clearer improvement than other regularization methods.
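The following is a minimal sketch of how dropping units works on one layer's activations, assuming NumPy and the common "inverted dropout" formulation; the function and parameter names (dropout_forward, keep_prob) are illustrative and not part of the original code.

```python
# Minimal sketch of inverted dropout on one layer's activations (assumes NumPy).
import numpy as np

def dropout_forward(activations, keep_prob=0.5, train=True):
    """Randomly zero units during training; leave activations unchanged at test time."""
    if not train:
        # At test time the full network is used; with inverted dropout
        # no extra rescaling of the weights is needed.
        return activations
    # Sample a binary mask: each unit is kept with probability keep_prob.
    mask = (np.random.rand(*activations.shape) < keep_prob)
    # Scale the surviving units so the expected activation matches test time.
    return activations * mask / keep_prob

# Example: drop roughly half of the hidden units in one training pass.
hidden = np.random.randn(4, 8)   # a batch of 4 examples, 8 hidden units
dropped = dropout_forward(hidden, keep_prob=0.5, train=True)
```

Scaling by 1/keep_prob during training (rather than scaling the weights at test time) is a common implementation choice that keeps the test-time forward pass unchanged.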
In this project, a simple experiment is used to compare the performance of a network trained with and without the Dropout method.
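A hedged sketch of that kind of comparison is shown below, assuming PyTorch is available; the layer sizes, dropout rate, and the helper make_mlp are illustrative and not taken from the original experiment.

```python
# Build two identical MLPs, one with a Dropout layer and one without,
# so their test accuracy can be compared after identical training.
import torch.nn as nn

def make_mlp(use_dropout: bool, p: float = 0.5) -> nn.Sequential:
    layers = [nn.Flatten(), nn.Linear(784, 256), nn.ReLU()]
    if use_dropout:
        layers.append(nn.Dropout(p))   # active in train(), a no-op in eval()
    layers.append(nn.Linear(256, 10))
    return nn.Sequential(*layers)

baseline = make_mlp(use_dropout=False)    # prone to overfitting
regularized = make_mlp(use_dropout=True)  # same architecture plus Dropout
# Train both models with the same data and optimizer, then compare test accuracy.
```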