IEEE Computer Architecture Letters, Volume 21
Volume 21, Number 1, January - June 2022
- Xinfeng Xie, Peng Gu, Jiayi Huang, Yufei Ding, Yuan Xie: MPU-Sim: A Simulator for In-DRAM Near-Bank Processing Architectures. 1-4
- Mo Zou, Mingzhe Zhang, Rujia Wang, Xian-He Sun, Xiaochun Ye, Dongrui Fan, Zhimin Tang: Accelerating Graph Processing With Lightweight Learning-Based Data Reordering. 5-8
- Kristin Barber, Moein Ghaniyoun, Yinqian Zhang, Radu Teodorescu: A Pre-Silicon Approach to Discovering Microarchitectural Vulnerabilities in Security Critical Applications. 9-12
- Dusol Lee, Duwon Hong, Wonil Choi, Jihong Kim: MQSim-E: An Enterprise SSD Simulator. 13-16
- Benjamin J. Lucas, Ali Alwan, Marion Murzello, Yazheng Tu, Pengzhou He, Andrew J. Schwartz, David Guevara, Ujjwal Guin, Kyle Juretus, Jiafeng Xie: Lightweight Hardware Implementation of Binary Ring-LWE PQC Accelerator. 17-20
- Haiyang Lin, Mingyu Yan, Xiaocheng Yang, Mo Zou, Wenming Li, Xiaochun Ye, Dongrui Fan: Characterizing and Understanding Distributed GNN Training on GPUs. 21-24
- Hamin Jang, Taehun Kang, Joonsung Kim, Jaeyong Cho, Jae-Eon Jo, Seungwook Lee, Wooseok Chang, Jangwoo Kim, Hanhwi Jang: LSim: Fine-Grained Simulation Framework for Large-Scale Performance Evaluation. 25-28
- Hang Xiao, Haobo Xu, Ying Wang, Yujie Wang, Yinhe Han: LINAC: A Spatially Linear Accelerator for Convolutional Neural Networks. 29-32
Volume 21, Number 2, July - December 2022
- Yongwon Shin, Juseong Park, Jeongmin Hong, Hyojin Sung: Runtime Support for Accelerating CNN Models on Digital DRAM Processing-in-Memory Hardware. 33-36
- Hoyong Jin, Donghun Jeong, Taewon Park, Jong Hwan Ko, Jungrae Kim: Multi-Prediction Compression: An Efficient and Scalable Memory Compression Framework for GP-GPU. 37-40
- Argyris Kokkinis, Dionysios Diamantopoulos, Kostas Siozios: Dynamic Optimization of On-Chip Memories for HLS Targeting Many-Accelerator Platforms. 41-44
- Sungmin Yun, Byeongho Kim, Jaehyun Park, Hwayong Nam, Jung Ho Ahn, Eojin Lee: GraNDe: Near-Data Processing Architecture With Adaptive Matrix Mapping for Graph Convolutional Networks. 45-48
- Rui Ma, Evangelos Georganas, Alexander Heinecke, Sergey Gribok, Andrew Boutros, Eriko Nurvitadhi: FPGA-Based AI Smart NICs for Scalable Distributed AI Training Systems. 49-52
- Fazal Hameed, Asif Ali Khan, Sébastien Ollivier, Alex K. Jones, Jerónimo Castrillón: DNA Pre-Alignment Filter Using Processing Near Racetrack Memory. 53-56
- Ling Yang, Libo Huang, Run Yan, Nong Xiao, Sheng Ma, Li Shen, Weixia Xu: Stride Equality Prediction for Value Speculation. 57-60
- Jeongmin Hong, Sungjun Cho, Gwangsun Kim: Overcoming Memory Capacity Wall of GPUs With Heterogeneous Memory Stack. 61-64
- Luca Piccolboni, Davide Giri, Luca P. Carloni: Accelerators & Security: The Socket Approach. 65-68
- Mingyu Yan, Mo Zou, Xiaocheng Yang, Wenming Li, Xiaochun Ye, Dongrui Fan, Yuan Xie: Characterizing and Understanding HGNNs on GPUs. 69-72
- Cecil Accetti, Rendong Ying, Peilin Liu: Structured Combinators for Efficient Graph Reduction. 73-76
- Yu Omori, Keiji Kimura: Open-Source Hardware Memory Protection Engine Integrated With NVMM Simulator. 77-80
- Minjae Kim, Bryan S. Kim, Eunji Lee, Sungjin Lee: A Case Study of a DRAM-NVM Hybrid Memory Allocator for Key-Value Stores. 81-84
- Zhengrong Wang, Christopher Liu, Tony Nowatzki: Infinity Stream: Enabling Transparent and Automated In-Memory Computing. 85-88
- Lingxi Wu, Rasool Sharifi, Ashish Venkat, Kevin Skadron: DRAM-CAM: General-Purpose Bit-Serial Exact Pattern Matching. 89-92
- Salonik Resch, Ulya R. Karpuzcu: On Variable Strength Quantum ECC. 93-96
- Peter Salvesen, Magnus Jahre: LMT: Accurate and Resource-Scalable Slowdown Prediction. 97-100
- Gyeongcheol Shin, Junsoo Kim, Joo-Young Kim: OpenMDS: An Open-Source Shell Generation Framework for High-Performance Design on Xilinx Multi-Die FPGAs. 101-104
- Majid Jalili, Mattan Erez: Managing Prefetchers With Deep Reinforcement Learning. 105-108
- Marzieh Lenjani, Alif Ahmed, Kevin Skadron: Pulley: An Algorithm/Hardware Co-Optimization for In-Memory Sorting. 109-112
- Yongye Zhu, Shijia Wei, Mohit Tiwari: Revisiting Browser Performance Benchmarking From an Architectural Perspective. 113-116
- Donghyun Gouk, Seungkwan Kang, Miryeong Kwon, Junhyeok Jang, Hyunkyu Choi, Sangwon Lee, Myoungsoo Jung: PreGNN: Hardware Acceleration to Take Preprocessing Off the Critical Path in Graph Neural Networks. 117-120
- Yinshen Wang, Wenming Li, Tianyu Liu, Liangjiang Zhou, Bingnan Wang, Zhihua Fan, Xiaochun Ye, Dongrui Fan, Chibiao Ding: Characterization and Implementation of Radar System Applications on a Reconfigurable Dataflow Architecture. 121-124
- Xiaofeng Hou, Cheng Xu, Jiacheng Liu, Xuehan Tang, Lingyu Sun, Chao Li, Kwang-Ting Cheng: Characterizing and Understanding End-to-End Multi-Modal Neural Networks on GPUs. 125-128
- Jared Nye, Omer Khan: SSE: Security Service Engines to Accelerate Enclave Performance in Secure Multicore Processors. 129-132
- Gino A. Chacon, Charles Williams, Johann Knechtel, Ozgur Sinanoglu, Paul V. Gratz: Hardware Trojan Threats to Cache Coherence in Modern 2.5D Chiplet Systems. 133-136
- Lieven Eeckhout: A First-Order Model to Assess Computer Architecture Sustainability. 137-140
- Ranyang Zhou, Sepehr Tabrizchi, Arman Roohi, Shaahin Angizi: LT-PIM: An LUT-Based Processing-in-DRAM Architecture With RowHammer Self-Tracking. 141-144
- Jongwon Park, Jinkyu Jeong: Speculative Multi-Level Access in LSM Tree-Based KV Store. 145-148
- Marjan Fariborz, Mahyar Samani, Terry O'Neill, Jason Lowe-Power, S. J. Ben Yoo, Venkatesh Akella: A Model for Scalable and Balanced Accelerators for Graph Processing. 149-152
- Jianming Huang, Yu Hua: Ensuring Data Confidentiality in eADR-Based NVM Systems. 153-156
- Sejin Kim, Jungwoo Kim, Yongjoo Jang, Jaeha Kung, Sungjin Lee: SEMS: Scalable Embedding Memory System for Accelerating Embedding-Based DNNs. 157-160