IEEE Computer Architecture Letters, Volume 23
Volume 23, Number 1, January - June 2024
- João Vieira, Nuno Roma, Gabriel Falcão, Pedro Tomás:
gem5-accel: A Pre-RTL Simulation Toolchain for Accelerator Architecture Validation. 1-4
- Atiyeh Gheibi-Fetrat, Negar Akbarzadeh, Shaahin Hessabi, Hamid Sarbazi-Azad:
Tulip: Turn-Free Low-Power Network-on-Chip. 5-8
- Yosuke Ueno, Yuna Tomida, Teruo Tanimoto, Masamitsu Tanaka, Yutaka Tabuchi, Koji Inoue, Hiroshi Nakamura:
Inter-Temperature Bandwidth Reduction in Cryogenic QAOA Machines. 9-12
- Hyeseong Kim, Yunjae Lee, Minsoo Rhu:
FPGA-Accelerated Data Preprocessing for Personalized Recommendation Systems. 13-16
- Christodoulos Peltekis, Vasileios Titopoulos, Chrysostomos Nicopoulos, Giorgos Dimitrakopoulos:
DeMM: A Decoupled Matrix Multiplication Engine Supporting Relaxed Structured Sparsity. 17-20
- Caden Corontzos, Eitan Frachtenberg:
Direct-Coding DNA With Multilevel Parallelism. 21-24
- Ramin Ayanzadeh, Moinuddin K. Qureshi:
Enhancing the Reach and Reliability of Quantum Annealers by Pruning Longer Chains. 25-28
- Courtney Golden, Dan Ilan, Caroline Huang, Niansong Zhang, Zhiru Zhang, Christopher Batten:
Supporting a Virtual Vector Instruction Set on a Commercial Compute-in-SRAM Accelerator. 29-32
- Samuel Thomas, Kidus Workneh, Ange-Thierry Ishimwe, Zack McKevitt, Phaedra S. Curlin, R. Iris Bahar, Joseph Izraelevitz, Tamara Lehman:
Baobab Merkle Tree for Efficient Secure Memory. 33-36
- Minsik Cho, Keivan Alizadeh-Vahid, Qichen Fu, Saurabh Adya, Carlo C. del Mundo, Mohammad Rastegari, Devang Naik, Peter Zatloukal:
eDKM: An Efficient and Accurate Train-Time Weight Clustering for Large Language Models. 37-40
- Yanggon Kim, Yunki Han, Jaekang Shin, Junkyum Kim, Lee-Sup Kim:
Accelerating Deep Reinforcement Learning via Phase-Level Parallelism for Robotics Applications. 41-44
- Yuxin Yang, Xiaoming Chen, Yinhe Han:
JANM-IK: Jacobian Argumented Nelder-Mead Algorithm for Inverse Kinematics and its Hardware Acceleration. 45-48
- Mohammad Hafezan, Ehsan Atoofian:
Improving Energy-Efficiency of Capsule Networks on Modern GPUs. 49-52
- Mahita Nagabhiru, Gregory T. Byrd:
Achieving Forward Progress Guarantee in Small Hardware Transactions. 53-56
- Rui Ma, Jia-Ching Hsu, Ali Mansoorshahi, Joseph Garvey, Michael Kinsner, Deshanand P. Singh, Derek Chiou:
Primate: A Framework to Automatically Generate Soft Processors for Network Applications. 57-60
- Loïc France, Florent Bruguier, David Novo, Maria Mushtaq, Pascal Benoit:
Reducing the Silicon Area Overhead of Counter-Based Rowhammer Mitigations. 61-64
- Leonid Yavits:
DRAMA: Commodity DRAM Based Content Addressable Memory. 65-68
- Deepanjali Mishra, Konstantinos Kanellopoulos, Ashish Panwar, Akshitha Sriraman, Vivek Seshadri, Onur Mutlu, Todd C. Mowry:
Address Scaling: Architectural Support for Fine-Grained Thread-Safe Metadata Management. 69-72
- Changmin Shin, Taehee Kwon, Jaeyong Song, Jae Hyung Ju, Frank Liu, YeonKyu Choi, Jinho Lee:
A Case for In-Memory Random Scatter-Gather for Fast Graph Processing. 73-77
- Lieven Eeckhout:
R.I.P. Geomean Speedup Use Equal-Work (Or Equal-Time) Harmonic Mean Speedup Instead. 78-82
- Zuher Jahshan, Leonid Yavits:
MajorK: Majority Based kmer Matching in Commodity DRAM. 83-86
- Shiyan Yi, Yudi Qiu, Lingfei Lu, Guohao Xu, Yong Gong, Xiaoyang Zeng, Yibo Fan:
GATe: Streamlining Memory Access and Communication to Accelerate Graph Attention Network With Near-Memory Processing. 87-90
- Mrinmay Sasmal, Tresa Joseph, T. S. Bindiya:
Approximate Multiplier Design With LFSR-Based Stochastic Sequence Generators for Edge AI. 91-94
- Varun Gohil, Sundar Dev, Gaurang Upasani, David Lo, Parthasarathy Ranganathan, Christina Delimitrou:
The Importance of Generalizability in Machine Learning for Systems. 95-98
- Nikhil Agarwal, Mitchell Fream, Souradip Ghosh, Brian C. Schwedock, Nathan Beckmann:
UDIR: Towards a Unified Compiler Framework for Reconfigurable Dataflow Architectures. 99-103
- Kyriaki Tsantikidou, Nicolas Sklavos:
An Area Efficient Architecture of a Novel Chaotic System for High Randomness Security in e-Health. 104-107
- Yongmo Park, Subhankar Pal, Aporva Amarnath, Karthik Swaminathan, Wei D. Lu, Alper Buyuktosunoglu, Pradip Bose:
Dramaton: A Near-DRAM Accelerator for Large Number Theoretic Transforms. 108-111
- Haocong Luo, Yahya Can Tugrul, F. Nisa Bostanci, Ataberk Olgun, Abdullah Giray Yaglikçi, Onur Mutlu:
Ramulator 2.0: A Modern, Modular, and Extensible DRAM Simulator. 112-116
- Hyungyo Kim, Gaohan Ye, Nachuan Wang, Amir Yazdanbakhsh, Nam Sung Kim:
Exploiting Intel Advanced Matrix Extensions (AMX) for Large Language Model Inference. 117-120
- Tianzheng Li, Enfang Cui, Yuting Wu, Qian Wei, Yue Gao:
TeleVM: A Lightweight Virtual Machine for RISC-V Architecture. 121-124
- Yingjie Qi, Jianlei Yang, Ao Zhou, Tong Qiao, Chunming Hu:
Architectural Implications of GNN Aggregation Programming Abstractions. 125-128
- Asif Ali Khan, Fazal Hameed, Taha Shahroodi, Alex K. Jones, Jerónimo Castrillón:
Efficient Memory Layout for Pre-Alignment Filtering of Long DNA Reads Using Racetrack Memory. 129-132
- Saurav Maji, Kyungmi Lee, Anantha P. Chandrakasan:
SparseLeakyNets: Classification Prediction Attack Over Sparsity-Aware Embedded Neural Networks Using Timing Side-Channel Information. 133-136
- Seyyed Hossein Seyyedaghaei Rezaei, Parham Zilouchian Moghaddam, Mehdi Modarressi:
Smart Memory: Deep Learning Acceleration in 3D-Stacked Memories. 137-141
Volume 23, Number 2, July - December 2024
- Hossein Katebi, Navidreza Asadi, Maziar Goudarzi:
FullPack: Full Vector Utilization for Sub-Byte Quantized Matrix-Vector Multiplication on General Purpose CPUs. 142-145
- Erika S. Alcorta, Mahesh Madhav, Richard Afoakwa, Scott Tetrick, Neeraja J. Yadwadkar, Andreas Gerstlauer:
Characterizing Machine Learning-Based Runtime Prefetcher Selection. 146-149
- Andreas Kosmas Kakolyris, Dimosthenis Masouros, Sotirios Xydis, Dimitrios Soudris:
SLO-Aware GPU DVFS for Energy-Efficient LLM Inference Serving. 150-153
- Dongho Yoon, Taehun Kim, Jae W. Lee, Minsoo Rhu:
A Quantitative Analysis of State Space Model-Based Large Language Model: Study of Hungry Hungry Hippos. 154-157
- Mohammadamin Ajdari, Behrang Montazerzohour, Kimia Abdi, Hossein Asadi:
Empirical Architectural Analysis on Performance Scalability of Petascale All-Flash Storage Systems. 158-161
- Ali Mohammadpur-Fard, Sina Darabi, Hajar Falahati, Negin Mahani, Hamid Sarbazi-Azad:
Exploiting Direct Memory Operands in GPU Instructions. 162-165