DOI: 10.1145/3289602.3293989
Poster
Public Access

An Energy-Efficient FPGA Implementation of an LSTM Network Using Approximate Computing

Published: 20 February 2019

Abstract

The Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) is known for its capability in modeling temporal aspects of data and has been shown to produce promising results in sequence learning tasks such as language modeling. However, due to the large number of model parameters and compute-intensive operations, existing FPGA implementations of LSTM cells are not sufficiently energy-efficient, as they require large area and exhibit high power consumption. This work describes a substantially different hardware implementation of an LSTM that includes several architectural innovations to achieve high throughput and energy efficiency. The paper includes an extensive exploration of the design trade-offs and demonstrates the advantages for one common application: language modeling. Implementation of the design on a Xilinx Zynq XC7Z030 FPGA for language modeling shows significant improvements in throughput and energy efficiency compared to state-of-the-art designs. The proposed LSTM hardware architecture is also applicable to other applications that use LSTM as part of the neural network model (e.g., CNN-RNN models) or in whole (e.g., RNN models).
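For context, the computation such an accelerator targets is the standard LSTM cell: four gate matrix-vector products followed by element-wise updates of the cell and hidden states. The sketch below (plain NumPy, with illustrative names such as lstm_cell, W, U, and b that do not come from the paper) shows only these standard equations; the paper's approximate-computing and architectural techniques are not detailed in the abstract and are not represented here.

```python
# Minimal sketch of the standard LSTM cell equations that an FPGA LSTM
# accelerator implements. This is NOT the paper's architecture; names and
# shapes are illustrative only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x_t    : input vector at time t, shape (n_in,)
    h_prev : previous hidden state, shape (n_hidden,)
    c_prev : previous cell state,   shape (n_hidden,)
    W, U   : input/recurrent weights for the 4 gates,
             shapes (4*n_hidden, n_in) and (4*n_hidden, n_hidden)
    b      : bias, shape (4*n_hidden,)
    """
    n = h_prev.shape[0]
    # The compute-intensive part: matrix-vector products for all four gates.
    z = W @ x_t + U @ h_prev + b
    i = sigmoid(z[0*n:1*n])        # input gate
    f = sigmoid(z[1*n:2*n])        # forget gate
    o = sigmoid(z[2*n:3*n])        # output gate
    g = np.tanh(z[3*n:4*n])        # candidate cell update
    c_t = f * c_prev + i * g       # new cell state
    h_t = o * np.tanh(c_t)         # new hidden state
    return h_t, c_t

# Example: one step with random data (n_in=8, n_hidden=16).
rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16
W = rng.standard_normal((4 * n_hidden, n_in))
U = rng.standard_normal((4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h, c = lstm_cell(rng.standard_normal(n_in),
                 np.zeros(n_hidden), np.zeros(n_hidden), W, U, b)
```

The matrix-vector products dominate both area and power in hardware, which is why quantization and other approximate-computing techniques are commonly applied to them; the specific approximations used in this paper are described in the full text, not the abstract.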


    Published In

    FPGA '19: Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
    February 2019
    360 pages
    ISBN:9781450361378
    DOI:10.1145/3289602
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 20 February 2019

    Author Tags

    1. fpga
    2. hardware acceleration
    3. long short-term memory
    4. recurrent neural network

    Qualifiers

    • Poster

    Conference

    FPGA '19

    Acceptance Rates

    Overall Acceptance Rate 125 of 627 submissions, 20%
