Word2vec is a word embedding method that converts words into vectors such that semantically and syntactically related words are close to each other in the vector space. FPGAs can be used to design low-power accelerators for Word2vec. FPGAs rely on highly parallel computation, which requires parallel data access. Since FPGAs generally have a smaller external memory bandwidth than CPUs and GPUs, the processing speed is often restricted by data access. We evaluate the trade-off between bandwidth and accuracy using different fixed-point formats, and propose a memory-bandwidth-efficient FPGA accelerator that uses 19-bit fixed-point data. We have implemented the proposed accelerator on an Intel Arria 10 FPGA using OpenCL, and achieved up to a 28% bandwidth reduction without any degradation in computation accuracy. Since the reduced bandwidth allows us to access more data without a data-access bottleneck, the processing speed can be increased by raising the degree of parallelism.
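As a minimal illustration of the kind of conversion involved, the sketch below quantizes float values to a 19-bit fixed-point representation. The abstract does not specify the bit allocation, so this assumes a hypothetical split of 1 sign bit, 2 integer bits, and 16 fractional bits; the accelerator's actual format may differ.

```python
# Hedged sketch: 19-bit fixed-point quantization (hypothetical Q3.16
# layout: 1 sign bit, 2 integer bits, 16 fractional bits). The paper's
# exact format is not stated in the abstract.

FRAC_BITS = 16
TOTAL_BITS = 19
QMIN = -(1 << (TOTAL_BITS - 1))      # smallest representable code
QMAX = (1 << (TOTAL_BITS - 1)) - 1   # largest representable code

def to_fixed(x: float) -> int:
    """Round to the nearest 19-bit code, saturating at the range limits."""
    q = round(x * (1 << FRAC_BITS))
    return max(QMIN, min(QMAX, q))

def to_float(q: int) -> float:
    """Recover the approximate real value from a fixed-point code."""
    return q / (1 << FRAC_BITS)

# Round-trip error is bounded by half an LSB (2**-17 for this layout)
x = 0.337
assert abs(to_float(to_fixed(x)) - x) <= 2 ** -(FRAC_BITS + 1)
```

Storing embedding values in 19 bits instead of 32-bit floats shrinks each memory transfer, which is the source of the bandwidth savings the abstract describes.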