How to Colorize Pokemon Line Art with a GAN
阿丽66
Posted on 2023-1-14 11:50:51
In a previous demo, we used a conditional GAN to generate handwritten digit images. So besides generating digit images, what else can we do with neural networks?
In this case study, we use a neural network to colorize Pokemon line art.
Step 1: Import the required libraries
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
import numpy as np
import pandas as pd
import os
import time
import matplotlib.pyplot as plt
from IPython.display import clear_output
Training the colorization model needs a fairly large amount of GPU memory. To make sure the model runs smoothly on an RTX 2070, we cap GPU memory usage at 90% to avoid out-of-memory errors.
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
session = tf.compat.v1.Session(config=config)
Define the constants we will use.
BUFFER_SIZE = 400
BATCH_SIZE = 1
IMG_WIDTH = 256
IMG_HEIGHT = 256
PATH = 'dataset/'
OUTPUT_CHANNELS = 3
LAMBDA = 100
EPOCHS = 10
Step 2: Define helper functions
The image-loading function reads an image through TensorFlow's IO interface and puts it into a tensor for later use.
def load(image_file):
    image = tf.io.read_file(image_file)
    image = tf.image.decode_jpeg(image)
    # each file holds two pictures side by side:
    # the left half is the line art (input), the right half is the colored target
    w = tf.shape(image)[1]
    w = w // 2
    input_image = image[:, :w, :]
    real_image = image[:, w:, :]
    input_image = tf.cast(input_image, tf.float32)
    real_image = tf.cast(real_image, tf.float32)
    return input_image, real_image
A function to convert a tensor into a NumPy array
During training, we will visualize some results and intermediate images. TensorFlow tensors cannot be used directly in matplotlib, so we need a function that converts a tensor into a NumPy array.
def tensor_to_array(tensor1):
    return tensor1.numpy()
Step 3: Visualize the data
Let's first look at what the training data looks like.
Each data image is split into two parts: the left half is the line art, which we use as the input, and the right half is the colored image, which we use as the training target.
We use the load function defined above to load one image and take a look:
input, real = load(PATH+'train/114.jpg')
plt.figure()
plt.imshow(tensor_to_array(input)/255.0)
plt.figure()
plt.imshow(tensor_to_array(real)/255.0)
Step 4: Data augmentation
Since we don't have much training data, we use data augmentation to increase the number of samples, so that even a small dataset can achieve good results.
We apply the following augmentation scheme (the normalization helper is sketched after the cropping code below):
- Resizing: scale the input images to a specified size
- Random cropping
- Normalization
- Random left-right flipping
def resize(input_image, real_image, height, width):
    input_image = tf.image.resize(input_image, [height, width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    real_image = tf.image.resize(real_image, [height, width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
    return input_image, real_image
def random_crop(input_image, real_image):
    stacked_image = tf.stack([input_image, real_image], axis=0)
    cropped_image = tf.image.random_crop(stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3])
    return cropped_image[0], cropped_image[1]
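The scheme above also lists normalization, but the original post never defines the normalize helper that load_image_train and load_image_test call later. Here is a minimal sketch, assuming the standard pix2pix convention of scaling pixel values to [-1, 1] to match the generator's tanh output:

def normalize(input_image, real_image):
    # assumed implementation: map [0, 255] pixel values to [-1, 1]
    input_image = (input_image / 127.5) - 1
    real_image = (real_image / 127.5) - 1
    return input_image, real_image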
We combine the augmentations above into a single function, in which the left-right flip is applied randomly:
@tf.function()
def random_jitter(input_image, real_image):
    # upscale to 286x286, then randomly crop back to 256x256
    input_image, real_image = resize(input_image, real_image, 286, 286)
    input_image, real_image = random_crop(input_image, real_image)
    if tf.random.uniform(()) > 0.5:
        input_image = tf.image.flip_left_right(input_image)
        real_image = tf.image.flip_left_right(real_image)
    return input_image, real_image
The effect of data augmentation:
plt.figure(figsize=(6, 6))
for i in range(4):
    input_image, real_image = random_jitter(input, real)
    plt.subplot(2, 2, i+1)
    plt.imshow(tensor_to_array(input_image)/255.0)
    plt.axis('off')
plt.show()
Step 5: Prepare the training data
Define the loading functions for the training and test data:
def load_image_train(image_file):
    input_image, real_image = load(image_file)
    input_image, real_image = random_jitter(input_image, real_image)
    input_image, real_image = normalize(input_image, real_image)
    return input_image, real_image

def load_image_test(image_file):
    input_image, real_image = load(image_file)
    input_image, real_image = resize(input_image, real_image, IMG_HEIGHT, IMG_WIDTH)
    input_image, real_image = normalize(input_image, real_image)
    return input_image, real_image
Use TensorFlow's Dataset API to load the training and test data, and define the dataset objects:
train_dataset = tf.data.Dataset.list_files(PATH+'train/*.jpg')
train_dataset = train_dataset.map(load_image_train, num_parallel_calls=tf.data.experimental.AUTOTUNE)
train_dataset = train_dataset.cache().shuffle(BUFFER_SIZE)
train_dataset = train_dataset.batch(BATCH_SIZE)

test_dataset = tf.data.Dataset.list_files(PATH+'test/*.jpg')
test_dataset = test_dataset.map(load_image_test)
test_dataset = test_dataset.batch(BATCH_SIZE)
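As a quick sanity check (not part of the original post), we can pull one batch and confirm the shape and the [-1, 1] value range produced by the normalize helper sketched above:

for example_input, example_real in train_dataset.take(1):
    # expected: (1, 256, 256, 3), with values roughly in [-1, 1]
    print(example_input.shape, tf.reduce_min(example_input).numpy(), tf.reduce_max(example_input).numpy())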
Step 6: Define the model
For Pokemon colorization we train a GAN. Compared with the previous conditional GAN for generating handwritten digits, this GAN model is considerably more complex.
Let's first look at the overall structure of the generator and the discriminator.
The generator
The generator uses the basic U-Net architecture. Each block in the encoding stage is convolution -> batch normalization -> LeakyReLU; each block in the decoding stage is transposed convolution -> batch normalization -> Dropout or ReLU, where the first three decoder blocks use Dropout and the rest use ReLU. The output of each encoder block is also connected to the corresponding decoder block; see U-Net's skip connections for details.
Define the encoder block:
def downsample(filters, size, apply_batchnorm=True):
    initializer = tf.random_normal_initializer(0., 0.02)
    result = tf.keras.Sequential()
    result.add(tf.keras.layers.Conv2D(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False))
    if apply_batchnorm:
        result.add(tf.keras.layers.BatchNormalization())
    result.add(tf.keras.layers.LeakyReLU())
    return result

down_model = downsample(3, 4)
Define the decoder block:
def upsample(filters, size, apply_dropout=False):
    initializer = tf.random_normal_initializer(0., 0.02)
    result = tf.keras.Sequential()
    result.add(tf.keras.layers.Conv2DTranspose(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False))
    result.add(tf.keras.layers.BatchNormalization())
    if apply_dropout:
        result.add(tf.keras.layers.Dropout(0.5))
    result.add(tf.keras.layers.ReLU())
    return result

up_model = upsample(3, 4)
Define the generator model:
def Generator():
    down_stack = [
        downsample(64, 4, apply_batchnorm=False),  # (bs, 128, 128, 64)
        downsample(128, 4),  # (bs, 64, 64, 128)
        downsample(256, 4),  # (bs, 32, 32, 256)
        downsample(512, 4),  # (bs, 16, 16, 512)
        downsample(512, 4),  # (bs, 8, 8, 512)
        downsample(512, 4),  # (bs, 4, 4, 512)
        downsample(512, 4),  # (bs, 2, 2, 512)
        downsample(512, 4),  # (bs, 1, 1, 512)
    ]
    up_stack = [
        upsample(512, 4, apply_dropout=True),  # (bs, 2, 2, 1024)
        upsample(512, 4, apply_dropout=True),  # (bs, 4, 4, 1024)
        upsample(512, 4, apply_dropout=True),  # (bs, 8, 8, 1024)
        upsample(512, 4),  # (bs, 16, 16, 1024)
        upsample(256, 4),  # (bs, 32, 32, 512)
        upsample(128, 4),  # (bs, 64, 64, 256)
        upsample(64, 4),  # (bs, 128, 128, 128)
    ]
    initializer = tf.random_normal_initializer(0., 0.02)
    last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,
                                           strides=2,
                                           padding='same',
                                           kernel_initializer=initializer,
                                           activation='tanh')  # (bs, 256, 256, 3)
    concat = tf.keras.layers.Concatenate()
    inputs = tf.keras.layers.Input(shape=[None, None, 3])
    x = inputs

    # encode, keeping each block's output for the skip connections
    skips = []
    for down in down_stack:
        x = down(x)
        skips.append(x)
    skips = reversed(skips[:-1])

    # decode, concatenating the matching encoder output at each step
    for up, skip in zip(up_stack, skips):
        x = up(x)
        x = concat([x, skip])
    x = last(x)
    return tf.keras.Model(inputs=inputs, outputs=x)

generator = Generator()
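As a quick shape check (not in the original post), feeding a random 256x256 tensor through the untrained generator should produce an output of the same spatial size:

sample = tf.random.normal([1, 256, 256, 3])
print(generator(sample, training=False).shape)  # expected: (1, 256, 256, 3)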
The discriminator
For the discriminator we use PatchGAN, also known as a Markovian discriminator. Many traditional CNN-based classifiers add a fully connected layer at the end and output the classification result. PatchGAN is different: it is built entirely from convolutional layers, and its final output is an NxN matrix; the mean of that matrix is taken as the real/fake output. Intuitively, each element of the output matrix corresponds to a receptive field over the original image, that is, to a patch of the input, which is why this kind of GAN is called a PatchGAN.
Each block of the PatchGAN consists of convolution -> batch normalization -> LeakyReLU.
In our model, the final output has dimensions (batch size, 30, 30, 1), where 1 is the number of channels.
Each element of the 30x30 output corresponds to a 70x70 region of the input image. For the detailed structure, refer to the pix2pix paper.
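The 70x70 figure can be verified with a small receptive-field calculation (not in the original post), assuming the five conv layers used below: three 4x4/stride-2 downsample blocks followed by two 4x4/stride-1 convolutions:

def receptive_field(layers):
    # layers: (kernel, stride) pairs in forward order;
    # each layer grows the receptive field by (k - 1) * jump,
    # where jump is the cumulative stride so far
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

print(receptive_field([(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]))  # 70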
def Discriminator():
    initializer = tf.random_normal_initializer(0., 0.02)
    inp = tf.keras.layers.Input(shape=[None, None, 3], name='input_image')
    tar = tf.keras.layers.Input(shape=[None, None, 3], name='target_image')

    # (batch size, 256, 256, channels*2)
    x = tf.keras.layers.concatenate([inp, tar])
    # (batch size, 128, 128, 64)
    down1 = downsample(64, 4, False)(x)
    # (batch size, 64, 64, 128)
    down2 = downsample(128, 4)(down1)
    # (batch size, 32, 32, 256)
    down3 = downsample(256, 4)(down2)

    # (batch size, 34, 34, 256)
    zero_pad1 = tf.keras.layers.ZeroPadding2D()(down3)
    # (batch size, 31, 31, 512)
    conv = tf.keras.layers.Conv2D(512, 4, strides=1, kernel_initializer=initializer, use_bias=False)(zero_pad1)
    batchnorm1 = tf.keras.layers.BatchNormalization()(conv)
    leaky_relu = tf.keras.layers.LeakyReLU()(batchnorm1)
    # (batch size, 33, 33, 512)
    zero_pad2 = tf.keras.layers.ZeroPadding2D()(leaky_relu)
    # (batch size, 30, 30, 1)
    last = tf.keras.layers.Conv2D(1, 4, strides=1, kernel_initializer=initializer)(zero_pad2)
    return tf.keras.Model(inputs=[inp, tar], outputs=last)

discriminator = Discriminator()
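Again as a sanity check (not in the original post), the discriminator applied to a random input/target pair should output the (1, 30, 30, 1) patch grid described above:

inp_sample = tf.random.normal([1, 256, 256, 3])
tar_sample = tf.random.normal([1, 256, 256, 3])
print(discriminator([inp_sample, tar_sample], training=False).shape)  # expected: (1, 30, 30, 1)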
Step 7: Define the loss functions and optimizers
The discriminator's final layer outputs raw logits (it has no activation), so we use binary cross-entropy with from_logits=True.

loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)

The discriminator loss has two terms: real images should be classified as real (a grid of ones) and generated images as fake (a grid of zeros):
def discriminator_loss(disc_real_output, disc_generated_output):
    real_loss = loss_object(tf.ones_like(disc_real_output), disc_real_output)
    generated_loss = loss_object(tf.zeros_like(disc_generated_output), disc_generated_output)
    total_disc_loss = real_loss + generated_loss
    return total_disc_loss

The generator loss adds an L1 term, weighted by LAMBDA, that pulls the generated image toward the target:

def generator_loss(disc_generated_output, gen_output, target):
    gan_loss = loss_object(tf.ones_like(disc_generated_output), disc_generated_output)
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    total_gen_loss = gan_loss + (LAMBDA * l1_loss)
    return total_gen_loss
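A hypothetical smoke test (not in the original post): with from_logits=True, a confidently correct discriminator should see near-zero loss, while completely uncertain logits (all zeros) give 2 * ln 2 ≈ 1.386:

# strongly "real" logits for real images, strongly "fake" logits for generated ones
print(discriminator_loss(tf.ones([1, 30, 30, 1]) * 10.0, tf.ones([1, 30, 30, 1]) * -10.0).numpy())  # ≈ 0
print(discriminator_loss(tf.zeros([1, 30, 30, 1]), tf.zeros([1, 30, 30, 1])).numpy())  # ≈ 1.386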
generator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
Step 8: Define checkpointing
Since training takes a long time, we save the intermediate training state so that we can load it later and continue training.
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
                                 discriminator_optimizer=discriminator_optimizer,
                                 generator=generator,
                                 discriminator=discriminator)
If we saved results from a previous run, we load them, then use the restored model to generate outputs on our test data.
def generate_images(model, test_input, tar):
    prediction = model(test_input, training=True)
    plt.figure(figsize=(15, 15))
    display_list = [test_input[0], tar[0], prediction[0]]
    title = ['Input', 'Target', 'Predicted']
    for i in range(3):
        plt.subplot(1, 3, i+1)
        plt.title(title[i])
        # map pixel values back from [-1, 1] to [0, 1] for display
        plt.imshow(tensor_to_array(display_list[i]) * 0.5 + 0.5)
        plt.axis('off')
    plt.show()
ckpt_manager = tf.train.CheckpointManager(checkpoint, "./", max_to_keep=2)
if ckpt_manager.latest_checkpoint:
    checkpoint.restore(ckpt_manager.latest_checkpoint)

for inp, tar in test_dataset.take(20):
    generate_images(generator, inp, tar)
Step 9: Training
During training, we output the first test image after each epoch so you can see how the prediction changes, and enjoy watching the progress.
We save a checkpoint every 20 epochs.
@tf.function
def train_step(input_image, target):
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        gen_output = generator(input_image, training=True)
        disc_real_output = discriminator([input_image, target], training=True)
        disc_generated_output = discriminator([input_image, gen_output], training=True)
        gen_loss = generator_loss(disc_generated_output, gen_output, target)
        disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
    generator_gradients = gen_tape.gradient(gen_loss,
                                            generator.trainable_variables)
    discriminator_gradients = disc_tape.gradient(disc_loss,
                                                 discriminator.trainable_variables)
    generator_optimizer.apply_gradients(zip(generator_gradients,
                                            generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(discriminator_gradients,
                                                discriminator.trainable_variables))
def fit(train_ds, epochs, test_ds):
    for epoch in range(epochs):
        start = time.time()
        for input_image, target in train_ds:
            train_step(input_image, target)
        clear_output(wait=True)
        for example_input, example_target in test_ds.take(1):
            generate_images(generator, example_input, example_target)
        if (epoch + 1) % 20 == 0:
            ckpt_save_path = ckpt_manager.save()
            print('Saved checkpoint for epoch {} at {}\n'.format(epoch + 1, ckpt_save_path))
        print('Time taken for epoch {}: {:.2f} sec\n'.format(epoch + 1, time.time() - start))

fit(train_dataset, EPOCHS, test_dataset)
Time taken for epoch 8: 51.33 sec
Step 10: Colorize the test data and look at the results
for input, target in test_dataset.take(20):
    generate_images(generator, input, target)
矩池云 (Matpool) has now released a "Pokemon colorization" image; if you're interested, you can try it through the "Jupyter 教程 Demo" image on the Matpool website.