Deep Edge-Aware Interactive Colorization
against Color-Bleeding Effects

ICCV (Oral presentation)

Eungyeup Kim*
eykim94@kaist.ac.kr
KAIST
Sanghyeon Lee*
shlee6825@kaist.ac.kr
KAIST
Jeonghoon Park*
jeonghoon_park@kaist.ac.kr
KAIST
Somi Choi
smchoi257@kaist.ac.kr
KAIST
Choonghyun Seo
choonghyun.seo@webtoonscorp.com
NAVER WEBTOON Corp.
Jaegul Choo
jchoo@kaist.ac.kr
KAIST
Figure: Qualitative results of edge enhancement given user scribbles in the proposed framework.

Abstract

Deep image colorization networks often suffer from the color-bleeding artifact, in which colors spread across the boundaries between adjacent objects. These artifacts degrade the realism of generated outputs and limit the applicability of colorization models in practical settings. Although previous approaches have tackled this problem automatically, they often produce imperfect outputs because their enhancements work only in limited cases, such as when the input image has high grayscale contrast along the boundary. Leveraging user interaction is a promising alternative, since it can guide edge correction to exactly the regions the user wants to fix. In this paper, we propose a novel edge-enhancing framework for the regions of interest, utilizing user scribbles that indicate where to enhance. Our method requires minimal user effort to obtain satisfactory enhancements. Experimental results on various datasets demonstrate that our interactive approach outperforms existing baselines in reducing color-bleeding artifacts.
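
The sketch below is only a minimal illustration of the general idea described in the abstract, not the actual network or training procedure from the paper: user scribbles are dilated into a region-of-interest mask, and a simple gradient-magnitude edge map is amplified inside that mask so it could serve as an edge-aware weight when colors are propagated. The helper names (scribble_roi_mask, enhanced_edge_map) and all parameters are hypothetical.

import numpy as np

def scribble_roi_mask(scribble, dilation=5):
    # Turn sparse user scribbles (nonzero pixels) into a region-of-interest mask
    # by dilating them so the enhancement covers a small neighborhood around each stroke.
    mask = (scribble > 0).astype(np.float32)
    out = np.zeros_like(mask)
    for dy in range(-dilation, dilation + 1):
        for dx in range(-dilation, dilation + 1):
            shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            out = np.maximum(out, shifted)
    return out

def enhanced_edge_map(gray, roi, gain=3.0):
    # Compute a gradient-magnitude edge map and boost it only inside the ROI.
    # The boosted map could then act as an edge-aware weight that discourages
    # color from leaking across the indicated boundary.
    gy, gx = np.gradient(gray.astype(np.float32))
    edges = np.sqrt(gx ** 2 + gy ** 2)
    edges /= edges.max() + 1e-8
    return np.clip(edges * (1.0 + (gain - 1.0) * roi), 0.0, 1.0)

if __name__ == "__main__":
    gray = np.random.rand(64, 64)                    # stand-in for a grayscale input
    scribble = np.zeros((64, 64))
    scribble[30, 10:50] = 1.0                        # a hypothetical user stroke along a boundary
    roi = scribble_roi_mask(scribble)
    weights = enhanced_edge_map(gray, roi)
    print(weights.shape, float(weights.min()), float(weights.max()))

In this toy version, the only user effort is drawing a short stroke near the leaking boundary; everything else is derived automatically from the grayscale input.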


Paper

[arXiv]

Eungyeup Kim, Sanghyeon Lee, Jeonghoon Park, Somi Choi, Choonghyun Seo, and Jaegul Choo. "Deep Edge-Aware Interactive Colorization against Color-Bleeding Effects"


Method overview

Figure: An overview of our proposed method.

Additional Results

Figure: Qualitative results of edge-enhanced colorization from our method applied to Zhang et al.
Figure: Qualitative results of edge-enhanced colorization from our method applied to Su et al.
Figure: Qualitative comparisons of our method against existing colorization baselines.