<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="description"
content="Video Colorization with Pre-trained Text-to-Image Diffusion Models">
<meta name="keywords" content="Colorization, Video Colorization, Stable Diffusion">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>Video Colorization with Pre-trained Text-to-Image Diffusion Models</title>
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-9Y37TBFM0D"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-9Y37TBFM0D');
</script>
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="./static/css/bulma.min.css">
<link rel="stylesheet" href="./static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="./static/css/bulma-slider.min.css">
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="./static/css/index.css">
<link rel="icon" type="image/png" href="./static/images/favicon.png">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
<script src="./static/js/bulma-carousel.min.js"></script>
<script src="./static/js/bulma-slider.min.js"></script>
<script src="./static/js/index.js"></script>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<base target="_blank">
</head>
<body>
<nav class="navbar" role="navigation" aria-label="main navigation">
<div class="navbar-brand">
<a role="button" class="navbar-burger" aria-label="menu" aria-expanded="false">
<span aria-hidden="true"></span>
<span aria-hidden="true"></span>
<span aria-hidden="true"></span>
</a>
</div>
</nav>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title">Video Colorization with Pre-trained Text-to-Image Diffusion Models</h1>
<div class="is-size-5 publication-authors">
<span class="author-block">
Anonymous Authors
</span>
</div>
<div class="is-size-5 publication-authors">
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- PDF Link. -->
<span class="link-block">
<a href="https://arxiv.org/abs/2306.01732"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>arXiv</span>
</a>
</span>
<!-- <span class="link-block">
<a href=""
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-desktop"></i>
</span>
<span>Demo (coming soon)</span>
</a>
</span> -->
<!-- Code Link. -->
<span class="link-block">
<a href="https://github.com/ColorDiffuser/ColorDiffuser"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Code (coming soon)</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="hero teaser">
<div class="container is-max-desktop">
<div class="hero-body">
<video id="teaser" autoplay muted loop playsinline preload height="100%">
<source src="./static/videos/teaser-h265.mp4"
type='video/mp4; codecs="hev1.1.6.L93.B0"'>
<source src="./static/videos/teaser-h264.mp4"
type="video/mp4">
<source src="./static/videos/teaser.webm"
type="video/webm">
</video>
<!-- <h2 class="subtitle has-text-centered"> -->
<!-- </h2> -->
</div>
</div>
</section>
<section class="section hero is-light">
<div class="container is-max-desktop">
<!-- Abstract. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
            Video colorization is a challenging task that involves inferring plausible and temporally consistent colors for grayscale frames. In this paper, we present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization. With the proposed adapter-based approach, we repurpose the pre-trained text-to-image model to accept grayscale video frames, together with an optional text description, as input for video colorization. To enhance the temporal coherence and maintain the vividness of colorization across frames, we propose two novel techniques: the <i>Color Propagation Attention</i> and <i>Alternated Sampling Strategy</i>. Color Propagation Attention enables the model to refine its colorization decision based on a reference latent frame, while the Alternated Sampling Strategy captures spatiotemporal dependencies by alternately using the next and previous adjacent latent frames as the reference during the generative diffusion sampling steps. This encourages bidirectional color information propagation between adjacent video frames, leading to improved color consistency across frames. We conduct extensive experiments on benchmark datasets, and the results demonstrate the effectiveness of our proposed framework. The evaluations show that ColorDiffuser achieves state-of-the-art performance in video colorization, surpassing existing methods in terms of color fidelity, temporal consistency, and visual quality.
</p>
</div>
</div>
</div>
<!--/ Abstract. -->
<!-- Paper video. -->
<!-- <div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Video</h2>
<div class="publication-video">
<iframe src=""
frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
</div>
</div>
</div> -->
<!--/ Paper video. -->
</div>
</section>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<h2 class="title is-3">Colorization Results</h2>
      <h3 class="title is-4" id="video-hint" hidden>Playing the lower-quality WebM version because your browser does not support H.265</h3>
<div id="results-carousel" class="carousel results-carousel is-vcentered">
<div class="item">
<video poster="" autoplay controls muted loop height="100%">
<source src="./static/videos/results/Vineyard.mp4" type='video/mp4; codecs="hev1.1.6.L93.B0"'>
<source src="./static/videos/results/Vineyard.webm" type="video/webm">
Your browser does not support H.265 video.
</video>
</div>
<div class="item">
<video poster="" autoplay controls muted loop height="100%">
<source src="./static/videos/results/gold-fish.mp4" type='video/mp4; codecs="hev1.1.6.L93.B0"'>
<source src="./static/videos/results/gold-fish.webm" type="video/webm">
Your browser does not support H.265 video.
</video>
</div>
<div class="item">
<video poster="" autoplay controls muted loop height="100%">
<source src="./static/videos/results/TimeSquareTraffic.mp4" type='video/mp4; codecs="hev1.1.6.L93.B0"'>
<source src="./static/videos/results/TimeSquareTraffic.webm" type="video/webm">
Your browser does not support H.265 video.
</video>
</div>
<div class="item">
<video poster="" autoplay controls muted loop height="100%">
<source src="./static/videos/results/Cycling.mp4" type='video/mp4; codecs="hev1.1.6.L93.B0"'>
<source src="./static/videos/results/Cycling.webm" type="video/webm">
Your browser does not support H.265 video.
</video>
</div>
<div class="item">
<video poster="" autoplay controls muted loop height="100%">
<source src="./static/videos/results/goat.mp4" type='video/mp4; codecs="hev1.1.6.L93.B0"'>
<source src="./static/videos/results/goat.webm" type="video/webm">
Your browser does not support H.265 video.
</video>
</div>
<div class="item">
<video poster="" autoplay controls muted loop height="100%">
<source src="./static/videos/results/Koala.mp4" type='video/mp4; codecs="hev1.1.6.L93.B0"'>
<source src="./static/videos/results/Koala.webm" type="video/webm">
Your browser does not support H.265 video.
</video>
</div>
</div>
</div>
</div>
</section>
<section class="section">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column is-full-width">
<h2 class="title is-3">Method</h2>
<div class="content has-text-justified">
<p>
          <img src="./static/images/overview.png" alt="Overview of the ColorDiffuser framework">
</p>
          <ul>
            <li>We extend the pre-trained Stable Diffusion model to a reference-based frame colorization model. With the adapter-based mechanism, we obtain a conditional text-to-image latent diffusion model that leverages the power of the pre-trained Stable Diffusion model to render colors in the latent space \(z_c\), according to the visual semantics of the grayscale input \(g=\mathcal{E_g}(I_g)\), the text input, and the reference color latent \(z_{\texttt{ref},t}\).
            </li>
            <li>During inference, for each frame in the input grayscale video, we perform a parallel sampling process. Each sampling step for a particular frame is conditioned on the latent information from the previous sampling step of an adjacent frame. Essentially, the <i>Color Propagation Attention</i> and the <i>Alternated Sampling Strategy</i> coordinate the reverse diffusion process and enable bidirectional propagation of color information between adjacent frames, ensuring consistency in colorization over time.
            </li>
          </ul>
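<p>
The alternated reference scheme can be sketched as follows. This is a minimal Python sketch with hypothetical helper names (<code>denoise_step</code> stands in for one conditional reverse-diffusion update); it is not the authors' implementation, only an illustration of how the reference direction flips between steps so color information propagates both forward and backward across frames:
</p>

```python
def denoise_step(z, z_ref, t):
    # Placeholder for one conditional reverse-diffusion update of latent z
    # given the reference latent z_ref at timestep t. Here we simply blend
    # the two latents to keep the sketch runnable.
    return [0.9 * x + 0.1 * r for x, r in zip(z, z_ref)]


def alternated_sampling(latents, num_steps):
    """Denoise all frame latents in parallel; the reference neighbor
    alternates between the next and the previous frame at each step."""
    n = len(latents)
    for t in reversed(range(num_steps)):
        # Even steps reference the next frame, odd steps the previous one,
        # so color information flows in both directions over the steps.
        offset = 1 if t % 2 == 0 else -1
        refs = [latents[min(max(i + offset, 0), n - 1)] for i in range(n)]
        latents = [denoise_step(z, r, t) for z, r in zip(latents, refs)]
    return latents
```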
</div>
</div>
</div>
</div>
</section>
<!-- <section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>
</code></pre>
</div>
</section> -->
<footer class="footer">
<div class="container">
<!-- <div class="content has-text-centered">
<a class="icon-link"
href="">
<i class="fas fa-file-pdf"></i>
</a>
<a class="icon-link" href="" class="external-link" disabled>
<i class="fab fa-github"></i>
</a>
</div> -->
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
This website is licensed under a <a rel="license"
href="http://creativecommons.org/licenses/by-sa/4.0/">Creative
            Commons Attribution-ShareAlike 4.0 International License</a>. The source code is based on the <a
            href="https://github.com/nerfies/nerfies.github.io">Nerfies</a> project page.
</p>
</div>
</div>
</div>
</div>
</footer>
</body>
</html>