Abstract
Vector quantization (VQ) is a well-known method for image compression, but its encoding process is computationally very heavy. To speed up VQ encoding, it is most important to avoid unnecessary Euclidean distance computations (k-dimensional) as much as possible by means of lighter, multiplication-free difference checks that use simpler, lower-dimensional features while the search proceeds. The sum (1-D) and partial sums (2-D) are proposed as the appropriate features in this paper because they are the two simplest features of a vector. Then, the Manhattan distance (multiplication-free, but still a k-dimensional computation) is used as a finer difference check, which has the benefit of requiring no extra memory for the codewords at all. The sum difference, partial sum difference, and Manhattan distance are computed as multiple estimations of the Euclidean distance, and they are connected to each other by the Cauchy-Schwarz inequality so as to reject a large number of unlikely codewords. For typical standard images with very different details (Lena, F-16, Pepper, and Baboon), the final must-do Euclidean distance computations using the proposed method can be reduced to a great extent compared to full search (FS) while keeping the PSNR undegraded.
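As a rough illustration of the rejection cascade described above, the sketch below checks the sum, partial-sum, and Manhattan lower bounds on the Euclidean distance (each obtained from the Cauchy-Schwarz inequality) before paying for the full k-dimensional computation. It is a minimal sketch only: the NumPy usage, function and variable names, 16-dimensional blocks, and the codebook setup are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a multi-step codeword rejection search.
# Lower bounds used (Cauchy-Schwarz, k-dimensional vectors, d = Euclidean distance):
#   (sum(x) - sum(c))^2 / k        <= d^2      (1-D sum feature)
#   sum of squared half-sum diffs / (k/2) <= d^2   (2-D partial-sum feature)
#   L1(x, c)^2 / k                 <= d^2      (Manhattan distance, no extra storage)
import numpy as np

def encode_vector(x, codebook, sums, psums):
    """Return the index of the nearest codeword to x, skipping codewords whose
    cheap lower bounds already exceed the current best squared distance."""
    k = x.size
    half = k // 2                                      # assumes k is even
    sx = x.sum()                                       # 1-D feature: sum
    px = np.array([x[:half].sum(), x[half:].sum()])    # 2-D feature: partial sums

    best_idx, best_d2 = -1, np.inf
    for i, c in enumerate(codebook):
        # Test 1: sum difference (cheapest).
        if (sx - sums[i]) ** 2 / k >= best_d2:
            continue
        # Test 2: partial-sum difference (tighter 2-D bound).
        if ((px - psums[i]) ** 2).sum() / half >= best_d2:
            continue
        # Test 3: Manhattan distance (k-D but multiplication-free in principle).
        l1 = np.abs(x - c).sum()
        if l1 * l1 / k >= best_d2:
            continue
        # Only now compute the full squared Euclidean distance.
        d2 = ((x - c) ** 2).sum()
        if d2 < best_d2:
            best_idx, best_d2 = i, d2
    return best_idx

# Example usage: 4x4 image blocks (k = 16), a random 256-codeword codebook,
# with codeword sums and partial sums precomputed once.
rng = np.random.default_rng(0)
codebook = rng.uniform(0, 255, size=(256, 16))
sums = codebook.sum(axis=1)
psums = np.stack([codebook[:, :8].sum(axis=1), codebook[:, 8:].sum(axis=1)], axis=1)
block = rng.uniform(0, 255, size=16)
print(encode_vector(block, codebook, sums, psums))
```

In this kind of cascade, only the codewords that survive all three inexpensive tests incur the full Euclidean distance computation, which is how the number of must-do distance calculations is kept far below that of full search.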
| Original language | English |
|---|---|
| Pages (from-to) | 167-174 |
| Number of pages | 8 |
| Journal | Intelligent Automation and Soft Computing |
| Volume | 10 |
| Issue number | 2 |
| Publication status | Published - 2004 Jan |
Keywords
- Euclidean distance estimation
- Fast encoding
- Feature
- Vector quantization
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Computational Theory and Mathematics
- Artificial Intelligence