Back Propagation


Back Propagation is a machine-learning algorithm that trains the weights of the connecting nodes of a layered neural network. The application consists of two phases: the Forward Phase, in which the activations are propagated from the input layer to the output layer, and the Backward Phase, in which the error between the observed and desired values at the output layer is propagated backwards to adjust the weights and bias values. Within each layer, all of the nodes can be processed in parallel.
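As a concrete reference for the two phases, here is a minimal sequential sketch, assuming sigmoid units and a squared-error objective. The function and variable names (layer_forward, adjust_weights, w, eta) and the flat weight layout with the bias in row 0 are illustrative assumptions, not taken from the Rodinia sources.

<pre>
#include <math.h>

/* Sigmoid activation used by both phases. */
static float squash(float x) { return 1.0f / (1.0f + expf(-x)); }

/* Forward phase for one layer: node i of the output layer sums its bias
 * (stored in w[i]) plus the weighted inputs, then applies the sigmoid.
 * w is laid out as (n_in + 1) rows of n_out columns, bias row first. */
void layer_forward(const float *in, float *out, const float *w,
                   int n_in, int n_out)
{
    for (int i = 0; i < n_out; i++) {
        float sum = w[i];                         /* bias weight */
        for (int j = 0; j < n_in; j++)
            sum += w[(j + 1) * n_out + i] * in[j];
        out[i] = squash(sum);
    }
}

/* Backward phase, output layer: for sigmoid units and squared error,
 * delta_i = o_i * (1 - o_i) * (t_i - o_i). */
void output_error(const float *out, const float *target, float *delta, int n)
{
    for (int i = 0; i < n; i++)
        delta[i] = out[i] * (1.0f - out[i]) * (target[i] - out[i]);
}

/* Weight/bias update for one layer: w += eta * delta * input
 * (momentum omitted to keep the sketch short). */
void adjust_weights(const float *delta, const float *in, float *w,
                    int n_in, int n_out, float eta)
{
    for (int i = 0; i < n_out; i++) {
        w[i] += eta * delta[i];                   /* bias update */
        for (int j = 0; j < n_in; j++)
            w[(j + 1) * n_out + i] += eta * delta[i] * in[j];
    }
}
</pre>

The outer loop over output nodes in each function carries no cross-iteration dependence, which is exactly the per-layer parallelism that the GPU versions exploit.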

Our code implementation is an excerpt from the backpropagation code described at this [http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/faces.html link] (Machine Learning, Tom Mitchell, McGraw Hill, 1997), and implements CUDA and OpenCL versions of the bpnn_train kernel.
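To illustrate how the forward phase maps onto the GPU, here is a hedged CUDA sketch in which one thread computes one output node. The kernel name, launch configuration, and weight layout are assumptions for illustration and do not reproduce the actual Rodinia bpnn_train kernels, which are more heavily optimized.

<pre>
#include <math.h>

/* One thread per output node: thread i computes the activation of node i.
 * Weight layout matches the sketch above: bias in w[i], input j's weight
 * in w[(j + 1) * n_out + i].  All names here are illustrative. */
__global__ void layer_forward_kernel(const float *in, float *out,
                                     const float *w, int n_in, int n_out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_out)
        return;
    float sum = w[i];                             /* bias weight */
    for (int j = 0; j < n_in; j++)
        sum += w[(j + 1) * n_out + i] * in[j];
    out[i] = 1.0f / (1.0f + expf(-sum));          /* sigmoid */
}

/* Example launch: enough 256-thread blocks to cover every output node,
 * with in, out, and w already copied to device memory:
 *
 *   layer_forward_kernel<<<(n_out + 255) / 256, 256>>>(d_in, d_out, d_w,
 *                                                      n_in, n_out);
 */
</pre>

Because each node's sum is independent, no synchronization is needed within the kernel; a backward-phase kernel can parallelize the weight updates the same way, since each output node writes a disjoint column of the weight matrix.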