Stream encoder identification in green video context

Mohamed Allouche, Elliot Cole, Mateo Zoughebi, Carl De Sousa Trias, Mihai Mitrea

Research output: Contribution to journal › Conference article › peer-review

Abstract

Video streaming accounts for more than 80% of the carbon emissions generated by worldwide digital technology consumption, which in turn accounts for 5% of global carbon emissions. Hence, green video encoding emerges as a research field devoted to reducing the size of video streams and the complexity of encoding/decoding operations while preserving a pre-established visual quality. With the specific aim of tracking green-encoded video streams, the present paper studies the possibility of identifying the last video encoder applied in multiple re-encoding distribution scenarios. To this end, classification solutions backboned by the VGG, ResNet and MobileNet families are considered to discriminate among MPEG-4 AVC stream syntax elements, such as luma/chroma coefficients or intra prediction modes. The video content sums up to 2 hours and is structured in two databases. Three encoders are alternatively studied, namely a proprietary green-encoder solution and the by-default encoders available on a large video sharing platform and on a popular social media platform, respectively. The quantitative results show classification accuracy ranging from 75% to 100%, depending on the specific architecture, sub-set of classified elements, and dataset.

Original language: English
Article number: IPAS-234
Journal: IS&T International Symposium on Electronic Imaging Science and Technology
Volume: 37
Issue number: 10
DOIs
Publication status: Published - 1 Jan 2025
Event: IS&T International Symposium on Electronic Imaging 2025: 23rd Image Processing: Algorithms and Systems, IPAS 2025 - Burlingame, United States
Duration: 2 Feb 2025 – 6 Feb 2025
