Guiding the Optimization of Parallel Codes on Multicores Using an Analytical Cache Model
Use this link to cite
http://hdl.handle.net/2183/34393
Collections
- Investigación (FIC)
Metadata
Title
Guiding the Optimization of Parallel Codes on Multicores Using an Analytical Cache Model
Date
2018
Citation
Andrade, D., Fraguela, B.B., Doallo, R. (2018). Guiding the Optimization of Parallel Codes on Multicores Using an Analytical Cache Model. In: Shi, Y., et al. Computational Science – ICCS 2018. ICCS 2018. Lecture Notes in Computer Science(), vol 10862. Springer, Cham. https://doi.org/10.1007/978-3-319-93713-7_32
Is version of
https://doi.org/10.1007/978-3-319-93713-7_32
Abstract
[Abstract]:
Cache performance is particularly hard to predict in modern multicore processors as several threads can be concurrently in execution, and private cache levels are combined with shared ones. This paper presents an analytical model able to evaluate the cache performance of the whole cache hierarchy for parallel applications in less than one second taking as input their source code and the cache configuration. While the model does not tackle some advanced hardware features, it can help optimizers to make reasonably good decisions in a very short time. This is supported by an evaluation based on two modern architectures and three different case studies, in which the model predictions differ on average just 5.05% from the results of a detailed hardware simulator and correctly guide different optimization decisions.
Keywords
Analytical Cache Model
Multicore processors
Cache performance
Optimization
Description
Final accepted version of: https://doi.org/10.1007/978-3-319-93713-7_32. This is a post-peer-review, pre-copyedit version of an article published in Lecture Notes in Computer Science (ICCS 2018 proceedings). The final authenticated version is available online at: http://dx.doi.org/10.1007/978-3-319-93713-7_32
Editor version
Rights
All rights reserved.