DOI: 10.1145/3331553.3342617

Compositional deep learning in Futhark

Published: 18 August 2019

Abstract

We present a design pattern for composing deep learning networks in a typed, higher-order fashion. The exposed library functions are generically typed, and the composition structure allows networks to be trained (using back-propagation) and trained networks to be used for predicting new results (using forward-propagation). Individual layers in a network can take different forms, ranging from dense sigmoid layers to convolutional layers. The paper discusses different typing techniques aimed at enforcing proper use and composition of networks. The approach is implemented in Futhark, a data-parallel functional language and compiler targeting GPU architectures, and we demonstrate that Futhark's elimination of higher-order functions and modules leads to efficient generated code.
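To make the compositional pattern concrete, the following is a minimal Futhark sketch of the idea. It is an illustrative assumption, not the paper's actual library API: the names layer, compose, dense, and the record layout are invented here, and only forward-propagation is shown. A layer couples its parameters with a forward function, and composition pairs up the parameters while threading the first layer's output into the second:

-- A layer couples parameters 'p with a forward function from 'a to 'b.
-- The '^' marks the type as lifted, since it contains a function.
type^ layer 'p 'a 'b = { params: p, forward: p -> a -> b }

-- Compose two layers: the parameters pair up, and forward-propagation
-- feeds the first layer's output to the second.
def compose 'p1 'p2 'a 'b 'c
            (l1: layer p1 a b) (l2: layer p2 b c)
            : layer (p1, p2) a c =
  { params = (l1.params, l2.params)
  , forward = \(q1, q2) x -> l2.forward q2 (l1.forward q1 x)
  }

-- One possible layer form: a dense layer with a sigmoid activation.
def sigmoid (x: f32) : f32 = 1f32 / (1f32 + f32.exp (-x))

def matvec [n][m] (w: [n][m]f32) (x: [m]f32) : [n]f32 =
  map (\row -> f32.sum (map2 (*) row x)) w

def dense [n][m] (w: [n][m]f32) (b: [n]f32)
          : layer ([n][m]f32, [n]f32) ([m]f32) ([n]f32) =
  { params = (w, b)
  , forward = \(w', b') x -> map sigmoid (map2 (+) (matvec w' x) b')
  }

Given hypothetical weights w1, b1, w2, b2, a two-layer network is then net = compose (dense w1 b1) (dense w2 b2), and prediction is net.forward net.params input. The library described in the paper additionally threads backward functions through the composition so that the same network value supports training by back-propagation.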


Cited By

  • (2023) U-Net CNN in APL: Exploring Zero-Framework, Zero-Library Machine Learning. Proceedings of the 9th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming, 22-35. DOI: 10.1145/3589246.3595371. Online publication date: 6-Jun-2023.
  • (2021) Array languages make neural networks fast. Proceedings of the 7th ACM SIGPLAN International Workshop on Libraries, Languages and Compilers for Array Programming, 39-50. DOI: 10.1145/3460944.3464312. Online publication date: 17-Jun-2021.

Published In

FHPNC 2019: Proceedings of the 8th ACM SIGPLAN International Workshop on Functional High-Performance and Numerical Computing
August 2019
59 pages
ISBN:9781450368148
DOI:10.1145/3331553

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. data-parallelism
  2. deep learning
  3. functional languages

Qualifiers

  • Research-article

Conference

ICFP '19

Acceptance Rates

Overall Acceptance Rate: 18 of 25 submissions (72%)
