DOI: 10.1145/3209889.3209890
Short paper · Public Access

Learning State Representations for Query Optimization with Deep Reinforcement Learning

Published: 15 June 2018

Abstract

We explore the idea of using deep reinforcement learning for query optimization. The approach is to build queries incrementally by encoding properties of subqueries with a learned representation. In this paper, we focus specifically on the state representation problem and the formation of the state transition function. We show preliminary results and discuss how the learned state representation can be used to improve reinforcement-learning-based query optimization.
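
The abstract's core idea can be illustrated with a small, hedged sketch. The Python code below is a minimal illustration, not the paper's implementation: each base relation is featurized into a vector, a small learned transition network folds the next operation into the current subquery state, and a cost head over that state provides the signal a reinforcement learning agent could use to score candidate actions such as the next join. The `encode_relation` featurization, the `StateTransition` class, and the random placeholder weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_relation(num_rows, selectivity, num_cols, max_rows=1e8, max_cols=64):
    """Hypothetical featurization of a base relation with a selection predicate."""
    return np.array([
        np.log1p(num_rows) / np.log1p(max_rows),  # normalized log cardinality
        selectivity,                              # estimated predicate selectivity
        num_cols / max_cols,                      # normalized tuple width
    ])

class StateTransition:
    """Tiny MLP implementing h_{t+1} = f(h_t, a_t).

    In a real system the weights would be trained (e.g. to predict subquery
    cardinalities); here they are random placeholders so the sketch runs."""

    def __init__(self, state_dim=8, action_dim=3, hidden=16):
        self.W1 = rng.normal(0.0, 0.1, (state_dim + action_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, state_dim))
        self.Wc = rng.normal(0.0, 0.1, (state_dim, 1))  # cost / cardinality head
        self.state_dim = state_dim

    def initial_state(self, relation_features):
        h = np.zeros(self.state_dim)
        h[:len(relation_features)] = relation_features
        return h

    def step(self, state, action_features):
        x = np.concatenate([state, action_features])
        hidden = np.tanh(x @ self.W1)
        return np.tanh(hidden @ self.W2)   # representation of the larger subquery

    def predicted_cost(self, state):
        return (state @ self.Wc).item()    # score an agent could use as (negative) reward

# Build a 3-way join incrementally: start from one relation, fold in the others.
nnst = StateTransition()
state = nnst.initial_state(encode_relation(1_000_000, 0.1, 10))
for rel in (encode_relation(50_000, 0.5, 4), encode_relation(2_000, 1.0, 6)):
    state = nnst.step(state, rel)          # state now summarizes the joined subquery
print(nnst.predicted_cost(state))
```

In this framing, the vector `state` plays the role of the learned state in the reinforcement learning formulation: the agent chooses which relation or predicate to fold in next, the transition network produces the new state, and the cost prediction on that state stands in for the environment's feedback.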

Published In

DEEM'18: Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning
June 2018, 63 pages
ISBN: 9781450358286
DOI: 10.1145/3209889

Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

  • Short paper
  • Research
  • Refereed limited

Conference

SIGMOD/PODS '18

Acceptance Rates

  • DEEM'18 paper acceptance rate: 10 of 16 submissions, 63%
  • Overall acceptance rate: 44 of 67 submissions, 66%
