Today, CUDA is the de facto standard programming framework for exploiting the computational power of graphics processing units (GPUs) to accelerate a wide range of applications. For efficient use of a large GPU-accelerated system, one important mechanism is checkpoint-restart, which can be used not only to improve fault tolerance but also to optimize node/slot allocation by suspending a job on one node and migrating it to another. Although several checkpoint-restart implementations have been developed so far, they either do not support CUDA applications or impose severe limitations on CUDA support. Hence, we present a checkpoint-restart library for CUDA that releases all CUDA resources before checkpointing and restores them immediately afterward. Each memory chunk must be restored at its original address; to this end, we propose a novel technique that replays memory-related API calls. The library supports both the CUDA runtime API and the CUDA driver API. Moreover, the library is transparent to applications: they need not be recompiled for checkpointing. This paper demonstrates that the proposed library can checkpoint and restart various applications with acceptable overhead, and that it also works for MPI applications such as HPL.