TY - GEN
T1 - A cache-aware thread scheduling policy for multi-core processors
AU - Sato, Masayuki
AU - Kotera, Isao
AU - Egawa, Ryusuke
AU - Takizawa, Hiroyuki
AU - Kobayashi, Hiroaki
PY - 2009/12/1
Y1 - 2009/12/1
N2 - A modern high-performance multi-core processor has large shared cache memories. However, simultaneously running threads do not always require the entire capacity of the shared caches. Moreover, some threads cause severe performance degradation through inter-thread cache conflicts and a shortage of capacity on the shared cache. To achieve high-performance processing on multi-core processors, effective usage of shared cache memories plays an important role. In this paper, we propose a cache-aware thread scheduling policy for multi-core processors with multiple shared cache memories. Total processor performance becomes more sensitive to cache capacity shortage as the threads sharing one cache request larger capacities. The proposed policy prevents multiple threads that each request a large cache capacity from sharing one cache. As a result, the policy can prevent inter-thread resource conflicts and hence severe performance degradation. Experimental results clearly demonstrate that the policy assists the cache partitioning mechanisms and avoids unfair performance degradation among threads. Thread scheduling based on the proposed policy can improve performance by up to 10%, and by 5% on average, compared with thread scheduling without the proposed policy.
KW - Dynamic cache partitioning
KW - Multi-core processors
KW - Parallel computing systems
KW - Thread scheduling
UR - http://www.scopus.com/inward/record.url?scp=74549184043&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=74549184043&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:74549184043
SN - 9780889867840
T3 - Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Networks, PDCN 2009
SP - 109
EP - 114
BT - Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Networks, PDCN 2009
T2 - IASTED International Conference on Parallel and Distributed Computing and Networks, PDCN 2009
Y2 - 16 February 2009 through 18 February 2009
ER -