Vector quantization (VQ) is a well-known method for image compression, but its encoding process is computationally very heavy. To speed up VQ encoding, it is most important to avoid as many unnecessary Euclidean distance computations (k-dimensional) as possible by first applying lighter difference checks (requiring no multiplication operations) that use simpler, low-dimensional features while the search is going on. The sum (1-D) and the partial sums (2-D) are proposed as appropriate features in this paper because they are the two simplest features of a vector. The Manhattan distance (k-dimensional but multiplication-free) is then used as a finer difference check, which has the benefit of requiring no extra memory for the codewords at all. The sum difference, the partial sum difference, and the Manhattan distance are computed as multiple estimations of the Euclidean distance, and they are connected to one another by the Cauchy-Schwarz inequality so as to reject many unlikely codewords. For typical standard images with very different details (Lena, F-16, Pepper, and Baboon), the final must-do Euclidean distance computations of the proposed method are reduced to a great extent compared to full search (FS), while the PSNR is kept undegraded.
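The cascaded rejection described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes the two partial sums split the vector into halves, and it uses the Cauchy-Schwarz chain |ΔS| ≤ |ΔS1| + |ΔS2| ≤ L1(x, c) ≤ √k · L2(x, c), so any codeword whose cheap bound reaches √k times the current best Euclidean distance can be rejected before the full k-dimensional computation.

```python
import numpy as np

def nearest_codeword(x, codebook):
    """Cascaded-rejection nearest-codeword search (illustrative sketch).

    Tests are applied from cheapest to most expensive:
      1-D sum test -> 2-D partial-sum test -> Manhattan test -> Euclidean.
    Each quantity lower-bounds sqrt(k) * (Euclidean distance), so a codeword
    is rejected as soon as any test reaches sqrt(k) * best_d.
    """
    k = x.shape[0]
    h = k // 2                      # assumed split point for partial sums
    sqrt_k = np.sqrt(k)
    sx, s1x, s2x = x.sum(), x[:h].sum(), x[h:].sum()

    best_i = 0
    best_d = np.linalg.norm(x - codebook[0])
    for i in range(1, len(codebook)):
        c = codebook[i]
        bound = sqrt_k * best_d
        if abs(sx - c.sum()) >= bound:            # 1-D sum difference test
            continue
        ds = abs(s1x - c[:h].sum()) + abs(s2x - c[h:].sum())
        if ds >= bound:                           # 2-D partial-sum test
            continue
        if np.abs(x - c).sum() >= bound:          # Manhattan (L1) test
            continue
        d = np.linalg.norm(x - c)                 # must-do Euclidean distance
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

In a real encoder the codeword sums and partial sums would be precomputed once per codebook rather than recomputed per input block; they are recomputed here only to keep the sketch self-contained.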
- Euclidean distance estimation
- Fast encoding
- Vector quantization
ASJC Scopus subject areas
- Theoretical Computer Science
- Computational Theory and Mathematics
- Artificial Intelligence