One popular learning algorithm for feedforward neural networks is the back-propagation (BP) algorithm, which involves three parameters: the learning rate (η), the momentum factor (α), and the steepness parameter (λ). The appropriate selection of these parameters has a large effect on the convergence of the algorithm, and many techniques that adaptively adjust them have been developed to increase the speed of convergence. In this paper, we present several classes of learning-automata-based solutions to the problem of adapting the BP algorithm parameters. By interconnecting learning automata with the feedforward neural network, we use learning automata schemes to adjust the parameters η, α, and λ based on observations of the random responses of the network. One important aspect of the proposed scheme is its ability to escape from local minima with high probability during the training period. The feasibility of the proposed methods is shown through simulations on several problems.
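For context, the three parameters enter the standard BP update in well-known ways: λ scales the slope of the sigmoid activation, while η and α appear in the momentum-based weight update. The sketch below illustrates these roles under generic assumptions (the function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def sigmoid(x, lam=1.0):
    # Steepness parameter lam (lambda) scales the slope of the activation:
    # larger lam gives a sharper transition around x = 0.
    return 1.0 / (1.0 + np.exp(-lam * x))

def bp_momentum_update(w, grad, velocity, eta=0.1, alpha=0.9):
    # Classic BP update with learning rate eta and momentum factor alpha:
    #   v <- alpha * v - eta * grad
    #   w <- w + v
    velocity = alpha * velocity - eta * grad
    return w + velocity, velocity

# Tiny demo: one update step with zero initial momentum, so the step
# reduces to plain gradient descent, w - eta * grad.
w = np.array([0.5, -0.3])
grad = np.array([0.2, -0.1])
w_new, v = bp_momentum_update(w, grad, np.zeros_like(w), eta=0.1, alpha=0.9)
print(w_new)  # array([ 0.48, -0.29])
```

On subsequent steps the velocity term accumulates, so α smooths the trajectory across iterations; the learning-automata schemes in the paper adapt η, α, and λ online rather than fixing them.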