Learning Entropy (LE) was initially introduced as a measure of sample-point novelty, quantified by the unusually large learning effort of an online learning system. The key concept is that LE is based on pre-training followed by further online learning, and the novelty measure is not necessarily correlated with the prediction error. Most recently, the idea of LE was revised as a novel non-probabilistic, i.e., machine-learning-based, information measure. This measure is high when a learning system is unfamiliar with a given data point, so the learning activity required to learn novel data points is unusual (regardless of the prediction error), i.e., the learning increments display unusual patterns during adaptation. In this paper, we propose the concepts of the learning state and the learning state space so that LE can be approximated via neighbourhood analysis in the learning space. Further, two novel clustering-based techniques for approximating sample-point LE are proposed. The first is based on the sum of the K nearest neighbour distances; the second is based on a multiscale neighbourhood cumulative sum. We also preprocess the learning space with dimensionality reduction, which is promising for the study of LE with neural networks and potentially with deep neural networks. The novelty-detection performance of the clustering-based sample-point LE with dimensionality reduction is compared to the original LE algorithms, and its potential is discussed.
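The first technique mentioned above, scoring each sample by the sum of its K nearest neighbour distances in the learning space, can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: the function name `knn_le_scores` and the assumption that each row of `increments` is a per-sample vector of weight-update increments are hypothetical.

```python
import numpy as np

def knn_le_scores(increments, k=5):
    """Approximate sample-point novelty in the learning space.

    Each row of `increments` is assumed to be the vector of learning
    (weight-update) increments for one sample. The novelty score of a
    sample is the sum of Euclidean distances to its k nearest
    neighbours among all increment vectors; unusual learning activity
    yields large scores.
    """
    X = np.asarray(increments, dtype=float)
    n = X.shape[0]
    scores = np.empty(n)
    for i in range(n):
        # Distances from sample i to all increment vectors.
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the sample itself
        # Sum of the k smallest distances = neighbourhood sparsity.
        scores[i] = np.sort(d)[:k].sum()
    return scores
```

A sample whose learning increments lie far from all others (unusual adaptation activity) receives a large score, mirroring the idea that LE flags unusual learning effort rather than prediction error.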