Analysis of preemption costs for the stack cache

Amine Naji*, Sahar Abbaspour, Florian Brandner, Mathieu Jan

*Corresponding author for this work

    Research output: Contribution to journal › Journal article › Research › peer-review


    Abstract

    The design of tailored hardware has proven a successful strategy to reduce the timing analysis overhead for (hard) real-time systems. The stack cache is an example of such a design that was shown to provide good average-case performance while remaining easy to analyze. So far, however, the analysis of the stack cache was limited to individual tasks, ignoring aspects related to multitasking. A major drawback of the original stack cache design is that, due to its simplicity, it cannot hold the data of multiple tasks at the same time. Consequently, the entire cache content needs to be saved and restored when a task is preempted. We propose (a) an analysis exploiting the simplicity of the stack cache to bound the overhead induced by task preemption, (b) preemption mechanisms for the stack cache exploiting this analysis, and, finally, (c) an extension of the design that allows the overhead to be (partially) hidden by virtualizing stack caches.
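    To illustrate the preemption-cost idea described in the abstract, the following is a minimal sketch (not the paper's actual design) of a stack cache model in which only the currently occupied portion of the cache must be spilled to main memory on preemption and filled again on resumption. All names, the cache size, and the occupancy field are hypothetical; the point is that the transfer cost is proportional to the occupancy, a quantity that a static analysis can bound per program point.

    ```c
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define STACK_CACHE_SIZE 256   /* hypothetical cache capacity in bytes */

    /* Hypothetical model: the stack cache holds the top of the running
     * task's stack; `occupied` tracks how many bytes are currently valid. */
    typedef struct {
        uint8_t data[STACK_CACHE_SIZE];
        size_t occupied;            /* bytes held for the running task */
    } stack_cache_t;

    /* On preemption, save only the occupied bytes to the preempted task's
     * backing store in main memory. The returned byte count models the
     * context-saving part of the preemption overhead. */
    size_t save_on_preemption(const stack_cache_t *sc, uint8_t *backing_store) {
        memcpy(backing_store, sc->data, sc->occupied);
        return sc->occupied;
    }

    /* On resumption, refill the cache from the backing store. */
    size_t restore_on_resume(stack_cache_t *sc, const uint8_t *backing_store,
                             size_t occupied) {
        memcpy(sc->data, backing_store, occupied);
        sc->occupied = occupied;
        return occupied;
    }

    int main(void) {
        stack_cache_t sc = { .occupied = 64 };
        uint8_t store[STACK_CACHE_SIZE];
        size_t cost = save_on_preemption(&sc, store);
        printf("spill cost: %zu bytes\n", cost);  /* prints "spill cost: 64 bytes" */
        return 0;
    }
    ```

    In this simplified model, a bound on `occupied` at each potential preemption point directly yields a bound on the cache-related preemption delay, which is the kind of bound the proposed analysis derives.
    
    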
    Original language: English
    Journal: Real-Time Systems
    Volume: 54
    Issue number: 3
    Pages (from-to): 700–744
    ISSN: 0922-6443
    DOI: 10.1007/s11241-018-9298-7
    Publication status: Published - 2018

    Keywords

    • Cache-related preemption delays
    • Program analysis
    • Real-time systems
    • Stack cache

    Cite this

    Naji, A., Abbaspour, S., Brandner, F., & Jan, M. (2018). Analysis of preemption costs for the stack cache. Real-Time Systems, 54(3), 700–744. https://doi.org/10.1007/s11241-018-9298-7