
Brain2Image: Converting Brain Signals into Images

Published: 23 October 2017

Abstract

Reading the human mind has been a hot topic for decades, and recent research in neuroscience has found evidence that it is possible to decode, from neuroimaging data, how the human brain works. At the same time, the recent rediscovery of deep learning, combined with the scientific community's strong interest in generative methods, has enabled the generation of realistic images by learning a data distribution from noise. The quality of generated images increases when the input data conveys information on the visual content of images. Leveraging these recent trends, in this paper we present an approach for generating images using visually evoked brain signals recorded through an electroencephalograph (EEG). More specifically, we recorded EEG data from several subjects while they observed images on a screen and tried to regenerate the seen images. To achieve this goal, we developed a deep-learning framework consisting of an LSTM stacked with a generative method, which learns a more compact and noise-free representation of EEG data and employs it to generate the visual stimuli that evoke specific brain responses.
Our Brain2Image approach was trained and tested using EEG data from six subjects while they were looking at images from 40 ImageNet classes. As generative models, we compared variational autoencoders (VAE) and generative adversarial networks (GAN). The results show that our approach is indeed able to generate images drawn from the same distribution as the shown images. Furthermore, GAN, despite generating less realistic images, shows better performance than VAE, especially with regard to sharpness. The obtained performance suggests that EEG contains patterns related to visual content and that such patterns can be used to effectively generate images that are semantically coherent with the evoking visual stimuli.
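The pipeline described above (an LSTM encoder that compresses a multi-channel EEG sequence into a compact latent representation, which then conditions a generator producing an image) can be sketched as follows. This is a minimal illustrative sketch only: the layer sizes, the plain-NumPy LSTM cell, and the toy linear "generator" are all assumptions for exposition, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_encode(x, Wx, Wh, b):
    """Run a single-layer LSTM over x of shape (T, D); return the final
    hidden state, used here as the compact EEG representation."""
    H = Wh.shape[0]
    h = np.zeros(H)
    c = np.zeros(H)
    for t in range(x.shape[0]):
        z = Wx @ x[t] + Wh.T @ h + b           # all four gates at once, (4H,)
        i, f, g, o = np.split(z, 4)
        i = 1.0 / (1.0 + np.exp(-i))           # input gate
        f = 1.0 / (1.0 + np.exp(-f))           # forget gate
        o = 1.0 / (1.0 + np.exp(-o))           # output gate
        c = f * c + i * np.tanh(g)             # cell state update
        h = o * np.tanh(c)                     # hidden state
    return h

def generate(z, Wg, bg, img_side=8):
    """Toy stand-in for the generative model (VAE decoder / GAN generator):
    map the latent vector to an image with pixel values in [0, 1]."""
    img = 1.0 / (1.0 + np.exp(-(Wg @ z + bg)))
    return img.reshape(img_side, img_side)

# Illustrative dimensions: T time steps, D EEG channels, H latent units,
# S x S output image.
T, D, H, S = 128, 32, 16, 8
eeg = rng.standard_normal((T, D))              # one recorded EEG segment
Wx = rng.standard_normal((4 * H, D)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b = np.zeros(4 * H)
Wg = rng.standard_normal((S * S, H)) * 0.1
bg = np.zeros(S * S)

latent = lstm_encode(eeg, Wx, Wh, b)           # compact, denoised EEG code
image = generate(latent, Wg, bg, S)            # image conditioned on EEG
print(latent.shape, image.shape)               # (16,) (8, 8)
```

In the paper's actual framework the generator is a trained VAE or GAN conditioned on the learned EEG features; the sketch only shows how the two stages compose, with the EEG encoder's output serving as the generator's conditioning input.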


Published In

MM '17: Proceedings of the 25th ACM international conference on Multimedia
October 2017
2028 pages
ISBN: 9781450349062
DOI: 10.1145/3123266
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. eeg
  2. image generation
  3. variational autoencoder

Qualifiers

  • Research-article

Conference

MM '17
Sponsor:
MM '17: ACM Multimedia Conference
October 23 - 27, 2017
Mountain View, California, USA

Acceptance Rates

MM '17 Paper Acceptance Rate: 189 of 684 submissions, 28%
Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%
