Data Models, Query Execution, and Storage for Traditional & Immersive Video Data Management

Authors

Haynes, Brandon

Abstract

The proliferation of cameras deployed throughout our world is enabling and accelerating exciting new applications such as virtual and augmented reality (VR and AR), autonomous driving, drone analytics, and smart infrastructure. However, these cameras collectively produce staggering quantities of video data. VR spherical (360°) video is up to 20x larger in size than its 2D counterparts. Closed-circuit television camera networks, consisting of tens of thousands of cameras, generate petabytes of video data per day. A single autonomous vehicle can generate tens of terabytes of video data per hour. Due to these massive data sizes and the complexity involved in reasoning about large numbers of cameras, developing applications that use real-world video data remains challenging. Developers must be cognizant of the low-level storage intricacies of video formats and compression. They need expertise in device-specific programming (e.g., GPUs), and, to maximize performance, they must be able to balance execution across heterogeneous, possibly distributed hardware. In this thesis, we describe several video data management systems designed to simplify application development, optimize execution, evaluate performance, and advance the state of the art in video data management. The first system, LightDB, presents a simple, declarative interface for VR and AR video application development. It implements powerful query optimization techniques, an efficient storage manager, and a suite of novel physical optimizations. To further improve the performance of video applications, we next introduce a new video file system (VFS), which can serve as a storage manager for video data management systems (such as LightDB and others) or can be used as a standalone system. It is designed to decouple video application design from a video's underlying physical layout and compressed format.
Finally, analogous to standardized benchmarks for other areas of data management research, we develop a new benchmark--Visual Road--aimed specifically at evaluating the performance and scalability of video-oriented data management systems and frameworks. By exposing declarative interfaces, LightDB and VFS automatically produce efficient execution strategies that include leveraging heterogeneous hardware, operating directly on the compressed representation of video data, and improving video storage performance. Visual Road reproducibly and objectively measures how well a video system or framework executes a battery of microbenchmarks and real-world video-oriented workloads. Collectively, through these systems, we show that applying fundamental data management principles to this space vastly improves runtime performance (by up to 500x), improves storage performance (up to a 45% decrease in file sizes), and greatly reduces application development complexity (cutting lines of code by up to 90%).

Description

Thesis (Ph.D.)--University of Washington, 2020

Keywords

Augmented reality, Query optimization, Video analytics, Video data management, Video storage, Virtual reality, Computer science
