
Specific Area Style Transfer on Real-Time Video
Geun Tae Kim1, Hyunmin Kim2, Hyung-Hwa Ko3

1Geun Tae Kim, ABH Inc, Ulsan, Republic of Korea.
2Hyunmin Kim, Telecons Inc, Seoul, Republic of Korea.
3Hyung-Hwa Ko, Dept. of Electronics and Communications Eng, Kwangwoon Univ, Seoul, Korea.

Manuscript received on February 06, 2021. | Revised Manuscript received on March 14, 2021. | Manuscript published on March 30, 2021. | PP: 50-56 | Volume-10 Issue-5, March 2021 | Retrieval Number: 100.1/ijitee.E86890310521 | DOI: 10.35940/ijitee.E8689.0310521
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: As deep learning applications in object recognition, object detection, segmentation, and image generation are increasingly needed, related research has been actively conducted. In this paper, a method that combines segmentation and style transfer to produce the desired style in a desired area of real-time video is proposed. Two deep neural networks were used to get as close to real-time operation as possible, given the trade-off between speed and accuracy. A modified BiSeNet for segmentation and CycleGAN for style transfer were run on a desktop PC equipped with two RTX 2080 Ti GPU boards. This enables SD video to be processed at a near real-time rate. Good results in subjective quality were obtained by segmenting the road area in city-street video and converting it to a grass style at no less than 6 fps.
Keywords: Deep Learning, GAN, Semantic Segmentation, Style Transfer.
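
The abstract describes a two-network pipeline: a segmentation model selects the target region (the road) and a CycleGAN generator stylizes the frame, after which the two outputs are composited so only the selected region changes. The following is a minimal sketch of that compositing step under assumed interfaces; `seg_model`, `style_generator`, and the `ROAD_CLASS` index are placeholders, not the authors' released code.

```python
import torch
import torch.nn.functional as F

ROAD_CLASS = 0  # assumed index of the "road" class in the segmentation output

def stylize_region(frame, seg_model, style_generator):
    """Apply style transfer only inside the segmented region.

    frame: float tensor of shape (1, 3, H, W), values in [0, 1].
    seg_model: returns per-pixel class logits of shape (1, C, H, W).
    style_generator: returns a stylized frame of shape (1, 3, H, W).
    """
    with torch.no_grad():
        logits = seg_model(frame)
        mask = (logits.argmax(dim=1, keepdim=True) == ROAD_CLASS).float()
        styled = style_generator(frame)
        # Align sizes if the generator rescales its input.
        if styled.shape[-2:] != frame.shape[-2:]:
            styled = F.interpolate(styled, size=frame.shape[-2:],
                                   mode="bilinear", align_corners=False)
    # Keep original pixels outside the mask, stylized pixels inside it.
    return mask * styled + (1.0 - mask) * frame
```

In a real-time setting, the two networks could run on separate GPUs (as in the paper's dual RTX 2080 Ti setup) so segmentation and style transfer of a frame proceed in parallel before the inexpensive masking step above.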