TY - JOUR
T1 - Transformer-based image and video inpainting: current challenges and future directions
AU - Elharrouss, Omar
AU - Damseh, Rafat
AU - Belkacem, Abdelkader Nasreddine
AU - Badidi, Elarbi
AU - Lakas, Abderrahmane
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2025/4
Y1 - 2025/4
N2 - Image inpainting is currently a hot topic in computer vision, offering a viable solution for applications such as photographic restoration, video editing, and medical imaging. Advances in deep learning, notably convolutional neural networks (CNNs) and generative adversarial networks (GANs), have significantly enhanced inpainting by improving the ability to fill missing or damaged regions of an image or video with contextually appropriate details. These advances have also improved efficiency, information preservation, and the realism of generated textures and structures. More recently, Vision Transformers (ViTs) have been exploited and offer further improvements to image and video inpainting. Transformer-based architectures, originally designed for natural language processing, have been adapted to computer vision tasks. These methods rely on self-attention mechanisms that excel at capturing long-range dependencies within data, making them particularly effective for tasks that require a comprehensive understanding of the global context of an image or video. In this paper, we provide a comprehensive review of current image/video inpainting approaches, with a specific focus on Vision Transformer (ViT) techniques, with the goal of highlighting the significant improvements achieved and providing a guideline for new researchers in the field of image/video inpainting using vision transformers. We categorize the transformer-based techniques by architectural configuration, type of damage, and performance metrics. Furthermore, we present an organized synthesis of current challenges and suggest directions for future research in image and video inpainting.
KW - Image inpainting
KW - Video inpainting
KW - Vision transformers
UR - http://www.scopus.com/inward/record.url?scp=85218261640&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85218261640&partnerID=8YFLogxK
U2 - 10.1007/s10462-024-11075-9
DO - 10.1007/s10462-024-11075-9
M3 - Article
AN - SCOPUS:85218261640
SN - 0269-2821
VL - 58
JO - Artificial Intelligence Review
JF - Artificial Intelligence Review
IS - 4
M1 - 124
ER -