ADAPTIVE VIDEO ENHANCEMENT BASED ON BLIND DEGRADATION ESTIMATION
DOI: https://doi.org/10.31891/csit-2025-2-21

Keywords: video enhancement, blind restoration, degradation estimation, conditional processing, deep learning, super-resolution, temporal consistency, perceptual quality.

Abstract
Video enhancement aims to restore high-quality video from degraded inputs affected by noise, blur, compression artifacts, or resolution loss. Most existing models assume a fixed degradation during training, limiting their robustness to real-world scenarios with unknown and varying distortions. In this paper, we propose a quality-aware video enhancement framework that explicitly estimates the input degradation level and conditions the restoration process accordingly.
Our method consists of a lightweight degradation level estimation module that predicts a quality score for each frame, and a conditional enhancement network that dynamically adjusts restoration strength based on the estimated degradation. Unlike static models trained for a single degradation type, our system adapts to diverse distortions, applying appropriate enhancement strategies for different quality levels.
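The two-stage idea above (per-frame quality scoring, then restoration strength conditioned on that score) can be sketched in plain NumPy. The abstract does not disclose the actual estimator or network, so the functions below are illustrative stand-ins: `estimate_degradation` uses Laplacian variance as a toy blur-oriented sharpness proxy, and `conditional_enhance` replaces the enhancement network with a strength-weighted blend.

```python
import numpy as np

def estimate_degradation(frame):
    """Toy per-frame quality score in [0, 1]: high Laplacian variance
    suggests sharp content, low variance suggests blur. (Illustrative
    proxy only; the paper's module is a learned estimator.)"""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4.0 * frame)
    sharpness = lap.var()
    return float(np.clip(sharpness / (sharpness + 1.0), 0.0, 1.0))

def conditional_enhance(frame, quality):
    """Modulate restoration strength by the estimated quality:
    cleaner frames (quality near 1) are left almost untouched."""
    strength = 1.0 - quality  # more degraded -> stronger processing
    # Stand-in for the enhancement network: blend with a 3x3 box blur.
    smoothed = sum(np.roll(np.roll(frame, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return (1.0 - strength) * frame + strength * smoothed
```

A static model would apply the same operator everywhere; here the per-frame score gates the processing, which is the core of the adaptivity claim.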
Extensive experiments on standard datasets such as Vimeo-90K and REDS demonstrate that our method consistently outperforms strong baselines, including BasicVSR, EDVR, and others, particularly under blind degradations. The proposed framework improves PSNR, SSIM, and LPIPS scores, while maintaining temporal consistency and introducing only minor computational overhead. These results highlight the potential of explicit quality estimation for achieving robust and perceptually faithful video restoration across varying real-world conditions.
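Of the three reported metrics, PSNR is the simplest to state exactly (SSIM and LPIPS require windowed statistics and a pretrained network, respectively). A minimal reference implementation for images scaled to [0, 1]:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    restored frame, both with pixel values in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that higher PSNR and SSIM indicate better fidelity, while for LPIPS (a learned perceptual distance) lower values are better.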
Copyright (c) 2025 Mykola Maksymiv, Taras Rak

This work is licensed under a Creative Commons Attribution 4.0 International License.