MASH: Adaptive Streaming of Multiview Videos over HTTP

From NSL



People

Overview

Multiview videos offer an unprecedented viewing experience by allowing users to explore scenes from different angles and perspectives. As a result, such videos have been gaining substantial interest from major content providers such as Google and Facebook. Adaptive streaming of multiview videos is, however, challenging because of Internet dynamics and the diversity of users' interests and network conditions. To address this challenge, we propose a novel rate adaptation algorithm for multiview videos, called MASH. Streaming multiview videos is more user-centric than streaming single-view videos, because it heavily depends on how users interact with the different views. To efficiently support this interactivity, MASH constructs probabilistic view switching models that capture the switching behavior of the user in the current session, as well as the aggregate switching behavior across all previous sessions of the same video. MASH then utilizes these models to dynamically assign relative importance to different views. Furthermore, MASH uses a new buffer-based approach to request video segments of various views at different qualities, such that the quality of the streamed videos is maximized while network bandwidth is not wasted. We have implemented a multiview video player and integrated MASH into it. We compare MASH against the state-of-the-art algorithm used by YouTube for streaming multiview videos. Our experimental results show that MASH produces much higher and smoother quality than the YouTube algorithm, while being more efficient in using the network bandwidth. In addition, we conduct large-scale experiments with up to 100 concurrent multiview streaming sessions, and we show that MASH maintains fairness across competing sessions and does not overload the streaming server.

Details

Figure 1 shows a high-level overview of MASH, which runs at the client side. MASH combines the outputs of the global and local view switching models to produce a relative importance factor \beta_i for each view V_i. MASH also constructs a buffer-rate function f_i for each view V_i, which maps the current buffer occupancy to the segment quality to be requested. The buffer-rate functions are dynamically updated during the session whenever a view switch happens. MASH strives to produce smooth, high-quality playback for all views, while not wasting bandwidth, by carefully prefetching views that will likely be watched.


Fig. 1: High-level overview of MASH.


View Switching Models

MASH combines the outputs of two stochastic models (local and global) to estimate the likelihood of different views being watched. We define each view switching model as a discrete-time Markov chain (DTMC) with N states, where N is the number of views. View switching is allowed at discrete time steps of length \Delta, which reflects the physical constraint on how quickly the user can interact with the video.

Local Model: This model captures the user's activity during the current streaming session, and it evolves with time. That is, the model is dynamic and is updated with every view switching event in the session. The model maintains a count matrix M(t) of size N \times N, where M_{ij}(t) is proportional to the number of times the user switched from view V_i to V_j from the beginning of the session up to time t. The count matrix M(t) is initialized to all ones. Whenever a view switch occurs, the corresponding element in M(t) is incremented. The count matrix is used to compute the probability transition matrix of the local model, L(t).
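The local model update described above can be sketched as follows. The count matrix, the increment on each switch, and the row-normalization step mirror the description; the function names are illustrative, not from our implementation:

```python
import numpy as np

def make_local_model(num_views):
    """Count matrix M(t), initialized to all ones (a uniform prior)."""
    return np.ones((num_views, num_views))

def record_switch(M, i, j):
    """Increment the count for a switch from view V_i to view V_j."""
    M[i, j] += 1

def transition_matrix(M):
    """Row-normalize the counts to obtain the DTMC transition matrix L(t)."""
    return M / M.sum(axis=1, keepdims=True)

# Example: 4 views; the user switches 0 -> 1 twice and 0 -> 2 once.
M = make_local_model(4)
record_switch(M, 0, 1)
record_switch(M, 0, 1)
record_switch(M, 0, 2)
L = transition_matrix(M)
# Row 0 counts are [1, 3, 2, 1], so L[0] = [1/7, 3/7, 2/7, 1/7]
```

Initializing the counts to all ones avoids zero transition probabilities before the user has switched views.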

Global Model: This model aggregates users' activities across all streaming sessions served by the server so far. At the beginning of a streaming session, the client downloads the global model parameters from the server. We use G to denote the transition matrix of the global model, where G_{ij} = p(V_j | V_i) is the probability of switching to V_j given V_i. If this is the first streaming session, G_{ij} is initialized to 1/N for every i and j.

Combined Model: The local and global models complement each other in predicting the (complex) switching behavior of users while watching multiview videos. For example, in some streaming sessions the user's activity may deviate significantly from the global model's expectations, because the user is exploring the video from different viewing angles than most previous users did. Or the multiview video may be new, so the global model has not yet captured the expected view switching pattern. On the other hand, the local model may not be very helpful when the user has not made enough view switches yet, e.g., at the beginning of a streaming session. We therefore compute an importance factor \beta_i for each view V_i by linearly combining G and L(t) using a weight factor \alpha_i, which is carefully computed to dynamically adjust the relative weights of the global and local models.
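The combination step can be sketched as follows. This is a simplified illustration: here \alpha is a given constant and the factors are max-normalized so the most likely view gets \beta = 1, whereas the paper computes \alpha dynamically and derives \beta_i precisely:

```python
import numpy as np

def importance_factors(G, L, alpha, current_view):
    """Combine the global (G) and local (L) transition matrices with weight
    alpha, then derive an importance factor beta_i for each view from the
    switching probabilities out of the currently watched view.
    Treating alpha as a constant and max-normalizing are simplifications."""
    combined = alpha * L[current_view] + (1 - alpha) * G[current_view]
    beta = combined / combined.max()   # most likely next view gets beta = 1
    beta[current_view] = 1.0           # the active view always has beta = 1
    return beta

# Example: 4 views, uniform global model (first session), local model
# favoring view 1, as produced by the local-model sketch above.
G = np.full((4, 4), 0.25)
L = np.array([[1/7, 3/7, 2/7, 1/7]] * 4)
beta = importance_factors(G, L, alpha=0.5, current_view=0)
# beta[1] = 1 (most likely switch target), beta[2] > beta[3]
```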

MASH: The Proposed Algorithm

MASH is a buffer-based rate adaptation algorithm for multiview videos, which means it determines the requested segment quality based on the buffer occupancy level, and it does not need to estimate the network capacity.

Rate adaptation for multiview videos is far more complex than for single-view videos, as it needs to handle many views of different importance, while neither wasting network bandwidth nor causing many playback stalls for re-buffering. To handle this complexity, we propose employing a family of buffer-rate functions that capture the relative importance of the active and inactive views, and how this relative importance changes dynamically during the streaming session. Specifically, we define a function f_i(B_i(t)) for each view V_i, which maps the buffer level B_i(t) of that view to a target quality Q_i(t) based on its importance factor \beta_i at time t. We use \beta_i to limit the maximum buffer occupancy level for view V_i as B_{max,i} = \beta_i \times B_{max}. Since we set \beta_i = 1 for the active view, the algorithm can request its segments up to the maximum quality Q_{max,i}. For inactive views, MASH can request segments up to a fraction of their maximum qualities. Figure 2 illustrates the buffer-rate functions for two views V_i and V_j, where V_i is the active view, so B_{max,i} = B_{max}. The figure shows when requests stop for both V_i and V_j, and the maximum bitrate difference that reflects the importance of each view.
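A minimal sketch of one buffer-rate function follows. The per-view cap B_{max,i} = \beta_i \times B_{max} is from the text; the linear buffer-to-quality mapping, the \beta-scaled bitrate cap for inactive views, and the buffer size of 40 seconds are simplifying assumptions (the paper derives the exact shape of f_i):

```python
def target_quality(buffer_level, beta, qualities, b_max):
    """Sketch of a buffer-rate function f_i for one view.
    qualities: ascending list of available bitrates (Mbps).
    Returns the bitrate to request, or None once the per-view buffer cap
    B_max,i = beta * B_max is reached (stop requesting for this view)."""
    b_max_i = beta * b_max
    if buffer_level >= b_max_i:
        return None
    # importance also caps the highest bitrate an inactive view may reach
    allowed = [q for q in qualities if q <= beta * qualities[-1]]
    if not allowed:
        allowed = [qualities[0]]
    # linear mapping from buffer fill level to an index in the allowed set
    idx = int(buffer_level / b_max_i * len(allowed))
    return allowed[min(idx, len(allowed) - 1)]

Q = [0.5, 1.0, 1.6, 2.8]   # quality levels of our test video (Mbps)
B_MAX = 40.0               # hypothetical buffer cap in seconds
active = target_quality(30.0, beta=1.0, qualities=Q, b_max=B_MAX)    # 2.8
inactive = target_quality(10.0, beta=0.5, qualities=Q, b_max=B_MAX)  # 1.0
```

Note how the inactive view both stops requesting earlier (at a smaller buffer cap) and tops out at a lower bitrate, mirroring the two effects shown in Figure 2.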


Fig. 2: Proposed buffer-rate functions of views V_i (active) and V_j (inactive).


Note: We show that the global and local view switching models converge to their corresponding stationary distributions, and we derive the number of steps needed to converge (the details are in the paper).
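The convergence claim can be illustrated numerically with power iteration on a DTMC transition matrix. This is only an illustration of the phenomenon; the paper establishes the convergence and step bound analytically:

```python
import numpy as np

def stationary_distribution(P, tol=1e-10, max_steps=10_000):
    """Iterate pi <- pi @ P from a uniform start until the change falls
    below tol; returns (pi, steps taken)."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for step in range(1, max_steps + 1):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt, step
        pi = nxt
    return pi, max_steps

# A 2-view chain whose stationary distribution is (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi, steps = stationary_distribution(P)
```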

Evaluation

We assess the performance of MASH through actual implementation and comparisons against the state-of-the-art algorithm in the industry (used by YouTube). We also analyze the fairness of our algorithm across concurrent multiview video sessions and compare its performance against two variations of current rate-adaptation algorithms that could be used to support multiview videos.

We have implemented a complete multiview video player in about 4,000 lines of Java code. It consists of an HTTP client, decoders, a renderer, and a rate adapter. Each view has its own decoder and frame buffer. The rate adapter decides which segments to request and at which quality levels; the segments are then fetched from the HTTP server. Once a segment is fetched, it is decoded into frames and stored in the corresponding frame buffer. The renderer has references to all frame buffers, and it renders the currently active view.
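The per-view pipeline described above can be sketched as follows. This is a hypothetical skeleton (our player is in Java, and decoding is stubbed out here); it only shows the structure: one decoder and frame buffer per view, with the renderer reading from the active view's buffer:

```python
from collections import deque

class ViewPipeline:
    """One pipeline per view: fetched segments are decoded into frames
    and stored in this view's frame buffer. Decoding is stubbed out."""
    def __init__(self, view_id):
        self.view_id = view_id
        self.frame_buffer = deque()

    def on_segment_fetched(self, segment):
        for frame in segment:          # "decode" the segment into frames
            self.frame_buffer.append(frame)

class Renderer:
    """Holds references to all frame buffers; renders the active view."""
    def __init__(self, pipelines):
        self.pipelines = pipelines
        self.active = 0                # index of the currently watched view

    def render_next(self):
        buf = self.pipelines[self.active].frame_buffer
        return buf.popleft() if buf else None   # None -> playback stall

pipelines = [ViewPipeline(i) for i in range(4)]
renderer = Renderer(pipelines)
pipelines[0].on_segment_fetched(["f0", "f1"])
first = renderer.render_next()   # "f0"
```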

Our testbed consists of multiple virtual machines (VMs) running on the Amazon cloud. We chose high-end VMs with 1 Gbps links, so that the shared cloud environment does not interfere much with our network setup. When we compare against YouTube, the HTTP server is YouTube itself, and we use the YouTube multiview player embedded in the Google Chrome web browser; in other experiments, we use nginx as our HTTP server, and users run our multiview player with different rate adaptation algorithms. The bandwidth and latency of the network links connecting the VMs with the server are controlled using the Linux Traffic Control tool \texttt{tc}. We experiment with multiple network conditions to stress our algorithm. We use a multiview video released by YouTube: a concert with four views shot by four cameras, covering the singer, band, stage, and fans. The user is allowed to switch among the four views at any time. The video is about 350 seconds long, and it has four quality levels Q = \{0.5, 1, 1.6, 2.8\} Mbps. YouTube did not release other multiview videos, and we cannot import multiview videos into YouTube because of the proprietary nature of its multiview player.

MASH vs YouTube Multiview Rate Adaptation

Figures: Rendering Quality (higher is better); Rate of Buffering (lower is better); Prefetching Efficiency (higher is better).


Our experiments showed that MASH can produce much higher (up to 3X) and smoother quality than YouTube. They also show that, unlike YouTube, MASH does not suffer from any playback interruptions, even in the presence of frequent user activities and dynamic bandwidth changes. Moreover, MASH is more efficient in using the network bandwidth, with a prefetching efficiency up to 2X higher than that of YouTube.


Fairness and Comparisons Against Other Algorithms

Figures: Server Load (lower is better); Rendering Quality (higher is better); Prefetching Efficiency (higher is better).

Our experiments showed that MASH achieves fairness across multiple competing multiview streaming sessions (Jain's fairness index = 0.93), and it does not overload the streaming server. Moreover, MASH outperforms other rate adaptation algorithms derived from current adaptation algorithms for single-view videos.

Publications

  • K. Diab and M. Hefeeda, MASH: A Rate Adaptation Algorithm for Multiview Video Streaming over HTTP, In Proc. of IEEE INFOCOM 2017, Atlanta, GA, May 2017. (Acceptance: 21%)