Data cache organization for accurate timing analysis

Martin Schoeberl, Benedikt Huber, Wolfgang Puffitsch

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Caches are essential to bridge the gap between the high-latency main memory and the fast processor pipeline. Standard processor architectures implement two first-level caches to avoid a structural hazard in the pipeline: an instruction cache and a data cache. For tight worst-case execution time bounds it is important to classify memory accesses as either cache hit or cache miss. The addresses of instruction fetches are known statically, so static cache hit/miss classification is possible for the instruction cache. Accesses to data held in the data cache are harder to predict statically. Several different data areas, such as stack, global data, and heap-allocated data, share the same cache. Some addresses are known statically, while others are known only at runtime. With a standard cache organization, all these different data areas must be considered by worst-case execution time analysis. In this paper we propose to split the data cache for the different data areas, so that data cache analysis can be performed individually for each area. An access to an unknown address in the heap then does not destroy the abstract cache state for the other data areas. Furthermore, we propose to use a small, highly associative cache for the heap area. We designed and implemented a static analysis for this cache, and integrated it into a worst-case execution time analysis tool.
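The key idea of the split organization can be sketched in a few lines. The following is an illustrative model, not code from the paper: a tiny fully associative LRU cache (the `LruCache` class and its parameters are hypothetical names chosen for this sketch), as might stand in for the proposed small, highly associative heap cache. Because each data area gets its own cache instance, an unpredictable heap access disturbs only the heap cache's state, never the stack or static-data caches.

```python
# Illustrative sketch (assumption: LRU replacement, one cache per data area).
from collections import OrderedDict

class LruCache:
    """Fully associative cache with LRU replacement."""
    def __init__(self, lines: int):
        self.lines = lines
        self.tags = OrderedDict()  # insertion order tracks recency

    def access(self, addr: int) -> str:
        """Classify an access as 'hit' or 'miss' and update LRU state."""
        if addr in self.tags:
            self.tags.move_to_end(addr)    # mark as most recently used
            return "hit"
        if len(self.tags) >= self.lines:
            self.tags.popitem(last=False)  # evict least recently used line
        self.tags[addr] = True
        return "miss"

# Separate caches per data area keep their (abstract) states independent.
heap_cache = LruCache(lines=8)
stack_cache = LruCache(lines=8)

stack_cache.access(0x100)    # miss: first access to this stack line
heap_cache.access(0xBEEF)    # a heap access touches only the heap cache
assert stack_cache.access(0x100) == "hit"  # stack state was not disturbed
```

In a unified cache, the heap access above could evict the stack line, forcing a WCET analysis to conservatively discard hit classifications; with split caches the stack access stays a guaranteed hit.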
Original language: English
Journal: Real-Time Systems
Volume: 49
Issue number: 1
Pages (from-to): 1-28
ISSN: 0922-6443
DOIs
Publication status: Published - 2013

Bibliographical note

The original publication is available at www.springerlink.com.

Keywords

  • WCET analysis
  • Data caches
  • Time-predictable computer architecture
