This paper presents an FPGA implementation of learning hardware for a neural network. The proposed learning hardware is designed using CMOS invertible logic, which realizes probabilistic bidirectional (forward and backward) operations with basic CMOS logic gates. The backward operation based on CMOS invertible logic makes hardware-based learning possible because no loss function is required. As a simple case study, the proposed learning hardware trains a 25-input binarized perceptron on a simplified MNIST data set. Our FPGA implementation on a Digilent Genesys 2 board achieves roughly 100x faster operation than a traditional learning algorithm running in software, while maintaining the same recognition accuracy of 99%.
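For intuition only, the forward operation of a 25-input binarized perceptron of the kind mentioned above can be sketched in software. This is not the paper's invertible-logic hardware method; it is a minimal illustration assuming bipolar (+1/-1) weights and inputs (e.g., a 5x5-downsampled digit image) and a sign activation, with randomly generated placeholder values rather than trained ones.

```python
import numpy as np

# Hypothetical binarized weights and a hypothetical 5x5 bipolar input;
# these stand in for trained values and real MNIST data.
rng = np.random.default_rng(0)
w = rng.choice([-1, 1], size=25)
x = rng.choice([-1, 1], size=25)

def binarized_perceptron(x, w):
    """Forward pass: sign of the bipolar dot product, returning +1 or -1."""
    s = int(np.dot(x, w))
    return 1 if s >= 0 else -1

y = binarized_perceptron(x, w)
print(y)
```

In the paper's approach, the corresponding backward operation is realized probabilistically by the invertible-logic circuit itself, which is what removes the need for an explicit loss function during training.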