SENSEI: Aligning Video Streaming Quality with Dynamic User Sensitivity

https://people.cs.uchicago.edu/~junchenj/docs/sensei.pdf

Talk

  • Network bandwidth is insufficient for desirable QoE

  • Goal: better QoE for more users given limited bandwidth

  • Conventional wisdom

    • Treat video chunks equally when the player chooses bitrates for chunks

    • Key insight: users have different quality sensitivity across chunks

  • Users also have different tolerance to rebuffering events

  • Quality sensitivity varies with video content! (users pay different degrees of attention to different parts of a video)

  • Roadmap

    • Demonstrate high variability of quality sensitivity in real videos

      • QoE drop can vary by > 110% for 50% of videos! (see the sketch below)

      • Opportunity: large variability enables us to trade off insensitive chunks for sensitive ones

    • Quantify this quality sensitivity reliably

    • Leverage this quality sensitivity to improve adaptive video streaming

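A rough way to make the "> 110%" figure concrete (my reading of the metric, not necessarily the paper's exact definition): inject the same low-quality event into each chunk of a video in turn, record the resulting QoE drop for each injection, and compare the largest and smallest drops within that video.

```python
# Sketch: quantifying how much the QoE drop varies across chunks of one video.
# qoe_drops[k] = QoE(clean video) - QoE(video with a low-quality event injected at chunk k),
# e.g. measured from crowdsourced ratings; the spread metric below is illustrative.

def qoe_drop_variability(qoe_drops: list) -> float:
    """Relative spread of per-chunk QoE drops: (max - min) / min."""
    lo, hi = min(qoe_drops), max(qoe_drops)
    return (hi - lo) / lo if lo > 0 else float("inf")

drops = [0.8, 1.1, 1.9, 0.9]        # hypothetical per-chunk QoE drops for one video
print(qoe_drop_variability(drops))  # > 1.1 corresponds to "varies by more than 110%"
```
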
  • Incorporating quality sensitivity into a QoE model

    • Traditional QoE model: sum of per-chunk QoE estimates

    • In SENSEI: reweight the chunks by their quality sensitivity in the QoE model (see the sketch after this list)

    • How to capture content-dependent quality sensitivity?

      • Strawman: directly use video saliency models

        • Pixel-motion-based models, e.g., AMVM

        • Interestingness score models, e.g., Video2Gif, DSN

        • However, what these saliency models are designed to capture does not align with quality sensitivity

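A minimal sketch of the reweighting idea, assuming a standard linear QoE model (illustrative coefficients and field names, not SENSEI's exact formulation): the traditional model sums per-chunk terms for quality, rebuffering, and quality switches; the sensitivity-aware variant multiplies each chunk's term by a per-chunk weight obtained from crowdsourcing.

```python
# Sketch: traditional vs. sensitivity-weighted QoE (coefficients are illustrative).
from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    bitrate: float       # quality of the chunk (e.g., Mbps or an SSIM-derived score)
    rebuffer: float      # rebuffering time incurred before this chunk, in seconds
    prev_bitrate: float  # quality of the previous chunk, for the smoothness penalty

def chunk_qoe(c: Chunk, beta: float = 4.3, gamma: float = 1.0) -> float:
    """Per-chunk QoE term: quality minus rebuffering and quality-switch penalties."""
    return c.bitrate - beta * c.rebuffer - gamma * abs(c.bitrate - c.prev_bitrate)

def traditional_qoe(chunks: List[Chunk]) -> float:
    """Conventional model: every chunk counts equally."""
    return sum(chunk_qoe(c) for c in chunks)

def sensitivity_weighted_qoe(chunks: List[Chunk], weights: List[float]) -> float:
    """SENSEI-style model: scale each chunk's term by its crowdsourced sensitivity weight."""
    return sum(w * chunk_qoe(c) for c, w in zip(chunks, weights))
```

With all weights equal the two models coincide; the gain comes entirely from per-video weights that emphasize the chunks viewers actually care about.
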
  • Idea: directly ask for quality sensitivity by crowdsourcing

    • Pros

      • Directly link video quality to QoE

      • Worth the cost for popular on-demand videos

    • Cons

      • High cost to evaluate every chunk and every type of low-quality event

        • Idea: coarse quality sensitivity

          • Group chunks that might have similar quality sensitivity

          • Zoom in on the representative chunks in each group

        • Two-step scheduling

          • Step 1: identify chunks that share weights

          • Step 2: zoom in on the representative chunks to get their weights (see the sketch after this list)

      • Response reliability affects the accuracy of the QoE model

        • Challenge: crowdsourcing workers might provide random responses

        • Quality control scheme:

          • Engagement test, control questions, randomized video order, use Master Turkers

          • More reliable responses yield a more accurate QoE model

      • Does not support live video streaming (per-video crowdsourcing must finish before playback)

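A rough sketch of the two-step scheduling idea under my own simplifications (the grouping feature, tolerance, and helper names are hypothetical): step 1 buckets chunks whose content looks similar so they can share one weight; step 2 runs the expensive crowdsourcing experiment only on one representative chunk per bucket.

```python
# Sketch: two-step scheduling to reduce crowdsourcing cost (simplified, hypothetical helpers).
from collections import defaultdict

def group_chunks(features, tolerance=0.1):
    """Step 1: bucket chunks whose (scalar) content features are close;
    chunks in the same bucket are assumed to share one sensitivity weight."""
    groups = defaultdict(list)
    for idx, f in enumerate(features):
        groups[round(f / tolerance)].append(idx)
    return list(groups.values())

def assign_weights(groups, crowdsource_weight):
    """Step 2: crowdsource only a representative chunk per group and reuse its weight."""
    weights = {}
    for members in groups:
        rep = members[len(members) // 2]   # pick a representative chunk, e.g. the middle one
        w = crowdsource_weight(rep)        # expensive call: run the crowdsourcing experiment
        for idx in members:
            weights[idx] = w
    return weights
```
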
  • Protect quality sensitive video chunks

    • New action: lower the quality of insensitive chunks to free up bandwidth for higher quality on sensitive chunks (toy sketch below)

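One way to picture the new action, as a toy greedy allocator rather than SENSEI's actual ABR logic (the bitrate ladder and budget model are invented for illustration): start every chunk at the lowest rung, then spend the remaining budget upgrading the most sensitive chunks first, so insensitive chunks naturally stay low.

```python
# Toy sketch: trade bits from insensitive chunks to sensitive ones under a fixed budget.
def allocate_bitrates(weights, ladder=(0.3, 0.75, 1.2, 2.85), budget=None):
    """Greedy allocation: every chunk starts at the lowest rung; upgrades go to the
    most sensitive chunks first until the (simplified, size ~ bitrate) budget is spent."""
    n = len(weights)
    if budget is None:
        budget = n * ladder[len(ladder) // 2]          # roughly a mid-ladder average
    levels = [0] * n
    spent = n * ladder[0]
    for k in sorted(range(n), key=lambda i: -weights[i]):  # most sensitive chunks first
        while levels[k] + 1 < len(ladder):
            extra = ladder[levels[k] + 1] - ladder[levels[k]]
            if spent + extra > budget:
                break
            levels[k] += 1
            spent += extra
    return [ladder[lvl] for lvl in levels]

# Example: the most sensitive chunk (weight 1.8) is pushed to the top rung first.
print(allocate_bitrates([0.2, 1.0, 1.8, 0.4]))
```
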
  • Evaluation

    • Dataset: 16 videos

    • Baseline ABR algorithms: Fugu, Pensieve, BBA (buffer-based)

    • SENSEI achieves higher QoE

    • SENSEI can save bandwidth

  • Summary

    • Observation: for viewers, quality sensitivity varies as video content changes

    • Key idea: embrace variability of quality sensitivity by sensitivity weights obtained via per-video crowdsourcing

    • SENSEI improves video QoE by 15.1% or saves bandwidth by 26.8% on average, at a cost of $31.4 per minute of video

Some take-aways and questions:

  • The key insight is the content-dependent dynamic quality sensitivity

  • Approach: separate crowdsourcing experiment for each video to derive the quality sensitivity of users at different parts of the video

    • Dynamically align higher (lower) quality with higher (lower) sensitivity periods

  • Compare to prior works

    • Recent adaptive bitrate (ABR) algorithms: near-optimal balance between bitrate and rebuffering events

    • Recent video codecs: improve encoding efficiency but require more compute power

    • New trends: better tradeoffs between bandwidth usage and user-perceived QoE
