Special Issue — Best Papers from 2016 IEEE International Symposium on Multimedia (ISM 2016) — Part 1; Guest Editors: G. Zhang and Phillip C.-Y. Sheu

User-Generated Video Composition Based on Device Context Measurements

    https://doi.org/10.1142/S1793351X17400049 — Cited by: 1 (Source: Crossref)

    Instant sharing of user-generated video recordings has become a widely used service on platforms such as YouNow, Facebook Live, or Ustream. Yet providing such services with a high QoE for viewers remains challenging, given that mobile upload speeds and capacities are limited, and the recording quality on mobile devices greatly depends on the users’ capabilities. One proposed solution to address these issues is video composition, which switches between multiple recorded video streams, selecting the best source at any given time, to compose a live video with a better overall quality for the viewers. Previous approaches have required an in-depth visual analysis of the video streams, which usually limited the scalability of these systems. In contrast, our work realizes the stream selection solely on context information, using video- and service-quality aspects derived from sensor and network measurements.
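The context-based selection described above could be sketched as a scoring function over per-stream measurements. The following is a minimal illustration, not the paper's actual method: the field names, weights, and normalization are all assumptions, standing in for whatever sensor (e.g. accelerometer-based shake estimates) and network metrics the monitoring service collects.

```python
from dataclasses import dataclass

@dataclass
class StreamContext:
    """Context measurements for one user-generated stream (fields are illustrative)."""
    stream_id: str
    shake_level: float       # 0.0 (steady) .. 1.0 (heavy shaking), e.g. from accelerometer
    upload_kbps: float       # measured uplink throughput
    resolution_score: float  # 0.0 .. 1.0, normalized recording resolution

def quality_score(ctx: StreamContext,
                  w_shake: float = 0.5,
                  w_net: float = 0.3,
                  w_res: float = 0.2,
                  max_kbps: float = 4000.0) -> float:
    """Combine device- and network-context measurements into one score.

    Weights and normalization are hypothetical: the paper selects streams
    from context information instead of visual analysis, but does not
    prescribe this exact formula.
    """
    net = min(ctx.upload_kbps / max_kbps, 1.0)
    return w_shake * (1.0 - ctx.shake_level) + w_net * net + w_res * ctx.resolution_score

def select_stream(streams: list[StreamContext]) -> StreamContext:
    """Pick the stream with the best context-based quality score."""
    return max(streams, key=quality_score)

candidates = [
    StreamContext("cam_a", shake_level=0.8, upload_kbps=3500, resolution_score=0.9),
    StreamContext("cam_b", shake_level=0.1, upload_kbps=2000, resolution_score=0.7),
]
best = select_stream(candidates)  # steadier cam_b wins despite lower bitrate
```

Because no video frames are decoded, such a selection scales with the number of streams rather than with their pixel content, which is the scalability advantage the abstract claims over visual-analysis approaches.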

    The implemented monitoring service for a context-aware upload of video streams is evaluated under different network conditions and diverse user behavior, including camera shaking and user mobility. We have evaluated the system’s performance in two studies. First, in a user study, we show that our proposed system achieves both a more efficient video upload and a better QoE for viewers. Second, by examining the overall delay of switching between streams based on sensor readings, we show that a composition view change can be achieved in approximately four seconds.