- research-article, November 2024
Cache-aware Task Decomposition for Efficient Intermittent Computing Systems
DAC '24: Proceedings of the 61st ACM/IEEE Design Automation Conference, Article No.: 167, Pages 1–6. https://doi.org/10.1145/3649329.3657382
Energy harvesting offers a scalable and cost-effective power solution for IoT devices, but it introduces the challenge of frequent and unpredictable power failures due to the unstable environment. To address this, intermittent computing has been proposed, ...
- research-article, June 2023
Reducing Loss of Service for Mixed-Criticality Systems through Cache- and Stress-Aware Scheduling
RTNS '23: Proceedings of the 31st International Conference on Real-Time Networks and Systems, Pages 188–199. https://doi.org/10.1145/3575757.3593654
Hardware resources found in modern processor architecture, such as the memory hierarchy, can improve the performance of a task by anticipating its needs based on its execution history and behaviour. Interleaved jobs, belonging to other tasks with ...
- research-article, May 2022
Improving CRPD analysis for EDF scheduling: trading speed for precision
SAC '22: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing, Pages 481–490. https://doi.org/10.1145/3477314.3507027
Cache Related Preemption Delay (CRPD) analysis is a methodology for bounding the cost of cache reloads due to preemptions. Many techniques have been proposed to estimate upper bounds to the CRPD for Fixed Priority (FP) and Earliest Deadline First (EDF) ...
- research-article, November 2019
On the Complexity of Cache Analysis for Different Replacement Policies
Journal of the ACM (JACM), Volume 66, Issue 6, Article No.: 41, Pages 1–22. https://doi.org/10.1145/3366018
Modern processors use cache memory: a memory access that "hits" the cache returns early, while a "miss" takes more time. Given a memory access in a program, cache analysis consists in deciding whether this access is always a hit, always a miss, or is a ...
- research-article, October 2016
Cache-related preemption delay analysis for multi-level inclusive caches
EMSOFT '16: Proceedings of the 13th International Conference on Embedded Software, Article No.: 16, Pages 1–10. https://doi.org/10.1145/2968478.2968481
Cache-related preemption delay (CRPD) analysis is crucial when designing embedded control systems that employ preemptive scheduling. CRPD analysis for single-level caches has been studied extensively based on useful cache blocks (UCBs). As high-...
- research-article, May 2013
Precise timing analysis for direct-mapped caches
DAC '13: Proceedings of the 50th Annual Design Automation Conference, Article No.: 148, Pages 1–10. https://doi.org/10.1145/2463209.2488917
Safety-critical systems require guarantees on their worst-case execution times. This requires modelling of speculative hardware features such as caches that are tailored to improve the average-case performance, while ignoring the worst case, which ...
- Article, November 2012
Impact of extending side channel attack on cipher variants: a case study with the HC series of stream ciphers
SPACE'12: Proceedings of the Second international conference on Security, Privacy, and Applied Cryptography Engineering, Pages 32–44. https://doi.org/10.1007/978-3-642-34416-9_3
Side channel attacks are extremely implementation specific. An attack is tailor-made for a specific cipher algorithm implemented in a specific model. A natural question is: what is the effect of a side channel technique on a variant of the cipher ...
- Article, April 2012
WCET Analysis with MRU Caches: Challenging LRU for Predictability
RTAS '12: Proceedings of the 2012 IEEE 18th Real Time and Embedded Technology and Applications Symposium, Pages 55–64. https://doi.org/10.1109/RTAS.2012.31
Most previous work in cache analysis for WCET estimation assumes a particular replacement policy called LRU. In contrast, much less work has been done for non-LRU policies, since they are generally considered to be very "unpredictable". However, most ...
- Article, September 2007
Optimal task placement to improve cache performance
EMSOFT '07: Proceedings of the 7th ACM & IEEE international conference on Embedded software, Pages 259–268. https://doi.org/10.1145/1289927.1289968
Most recent embedded systems use caches to improve their average performance. Current timing analyses are able to compute safe timing guarantees for these systems, if tasks are running to completion. If preemptive scheduling is enabled, the previously ...
- Article, June 2007
Modeling the function cache for worst-case execution time analysis
DAC '07: Proceedings of the 44th annual Design Automation Conference, Pages 471–476. https://doi.org/10.1145/1278480.1278603
Static worst-case execution time (WCET) analysis is done by modeling the hardware behavior. In this paper we describe a WCET analysis technique to analyze systems with function caches, a special kind of instruction cache that caches whole functions ...
- article, April 2007
METRIC: Memory tracing via dynamic binary rewriting to identify cache inefficiencies
ACM Transactions on Programming Languages and Systems (TOPLAS), Volume 29, Issue 2, Article No.: 12. https://doi.org/10.1145/1216374.1216380
With the diverging improvements in CPU speeds and memory access latencies, detecting and removing memory access bottlenecks becomes increasingly important. In this work we present METRIC, a software framework for isolating and understanding such ...
- article, December 2006
Analysis of cache-coherence bottlenecks with hybrid hardware/software techniques
ACM Transactions on Architecture and Code Optimization (TACO), Volume 3, Issue 4, Pages 390–423. https://doi.org/10.1145/1187976.1187978
Application performance on high-performance shared-memory systems is often limited by sharing patterns resulting in cache-coherence bottlenecks. Current approaches to identify coherence bottlenecks incur considerable run-time overhead and do not scale. ...
- Article, August 2006
Extended hidden number problem and its cryptanalytic applications
SAC'06: Proceedings of the 13th international conference on Selected areas in cryptography, Pages 114–133
Since its formulation in 1996, the Hidden Number Problem (HNP) plays an important role in both cryptography and cryptanalysis. It has a strong connection with proving security of Diffie-Hellman and related schemes as well as breaking certain ...
- Article, June 2005
A hybrid hardware/software approach to efficiently determine cache coherence bottlenecks
ICS '05: Proceedings of the 19th annual international conference on Supercomputing, Pages 21–30. https://doi.org/10.1145/1088149.1088153
High-end computing increasingly relies on shared-memory multiprocessors (SMPs), such as clusters of SMPs, nodes of chip-multiprocessors (CMP) or large-scale single-system image (SSI) SMPs. In such systems, performance is often affected by the sharing ...
- Article, June 2004
Detailed cache coherence characterization for OpenMP benchmarks
ICS '04: Proceedings of the 18th annual international conference on Supercomputing, Pages 287–297. https://doi.org/10.1145/1006209.1006250
Past work on studying cache coherence in shared-memory symmetric multiprocessors (SMPs) concentrates on studying aggregate events, often from an architecture point of view. However, this approach provides insufficient information about the exact sources ...
- Article, June 2002
Associative caches in formal software timing analysis
DAC '02: Proceedings of the 39th annual Design Automation Conference, Pages 622–627. https://doi.org/10.1145/513918.514076
Precise cache analysis is crucial to formally determine program running time. As cache simulation is unsafe with respect to the conservative running time bounds for real-time systems, current cache analysis techniques combine basic block level cache ...
- Article, November 2000
Data flow based cache prediction using local simulation
HLDVT '00: Proceedings of the IEEE International High-Level Validation and Test Workshop (HLDVT'00), Page 155
Accurate cache modeling and analysis are crucial to formally determine program execution time. Current cache analysis techniques combine basic block level cache modeling with explicit or implicit program path analysis. We show how to extend program and ...
- Article, December 1999
A Method to Improve the Estimated Worst-Case Performance of Data Caching
This paper presents a method for tight prediction of worst-case performance of data caches in high-performance real-time systems. Our approach is to distinguish between data structures that exhibit a predictable versus unpredictable cache behavior. ...