Using Keras from TensorFlow, write code for the image data stored in two folders: 1. write a data-loading function; 2. split the dataset according to the task requirements; 3. apply data augmentation; 4. build a deep learning model and loss function; 5. write the training code and train the model; 6. use Matplotlib to visualize the training process (loss, accuracy, recall, ROC curve, etc.); 7. save the trained model.
Sure, here is a concrete implementation based on Keras from TensorFlow:
1. Write the data-loading function
We can use ImageDataGenerator from Keras to read the dataset:
```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    rescale=1./255,        # scale pixel values to [0, 1]
    shear_range=0.2,       # random shear
    zoom_range=0.2,        # random zoom
    horizontal_flip=True)  # random horizontal flip

test_datagen = ImageDataGenerator(rescale=1./255)

# The data/train and data/validation folders are created in step 2 below
train_generator = train_datagen.flow_from_directory(
    'data/train/',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    'data/validation/',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')
```
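A quick optional check that the generators picked up the right directory structure is to print the class mapping and the shape of one batch:
```python
# Class-to-index mapping inferred from the sub-folder names, e.g. {'cats': 0, 'dogs': 1}
print(train_generator.class_indices)

# Fetch one batch to confirm image size and label shape
x_batch, y_batch = next(train_generator)
print(x_batch.shape, y_batch.shape)  # expected: (32, 150, 150, 3) (32,)
```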
2. Split the dataset according to the task requirements
We can split the dataset into training, validation, and test sets (the snippet assumes the raw files follow the Kaggle cats-vs-dogs naming scheme: cat.0.jpg, dog.0.jpg, ...):
```python
import os, shutil

original_dataset_dir = 'original/'  # folder holding the raw cat/dog images
base_dir = 'data/'

# Create train/validation/test directories, each with a cats/ and dogs/ subfolder
splits = {'train': range(0, 1000),
          'validation': range(1000, 1500),
          'test': range(1500, 2000)}
for split in splits:
    for cls in ('cats', 'dogs'):
        os.makedirs(os.path.join(base_dir, split, cls), exist_ok=True)

# Copy each image into the split/class folder determined by its index
for split, index_range in splits.items():
    for prefix, cls in (('cat', 'cats'), ('dog', 'dogs')):
        for i in index_range:
            fname = '{}.{}.jpg'.format(prefix, i)
            src = os.path.join(original_dataset_dir, fname)
            dst = os.path.join(base_dir, split, cls, fname)
            shutil.copyfile(src, dst)

# Keep a few directory handles that later steps refer to
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
train_cats_dir = os.path.join(train_dir, 'cats')
```
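As an optional sanity check, you can count the copied files in each split to confirm the 1000/500/500 split per class:
```python
# Print the number of images in every split/class folder
for split in ('train', 'validation', 'test'):
    for cls in ('cats', 'dogs'):
        d = os.path.join(base_dir, split, cls)
        print(split, cls, len(os.listdir(d)))
```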
3. Data augmentation
We can use ImageDataGenerator for data augmentation. The train_datagen defined in step 1 already applies shear, zoom, and flips during training; the snippet below previews a richer set of augmentations on a single training image:
```python
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing import image

datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

# Pick one training image and show four randomly augmented versions of it
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
img_path = fnames[3]
img = image.load_img(img_path, target_size=(150, 150))
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)

i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
```
4. Build the deep learning model and loss function
We can use the Keras Sequential API to build the network:
```python
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras import optimizers

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))  # sigmoid output for binary classification
model.summary()

# Binary cross-entropy loss matches the sigmoid output and binary labels
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['acc'])
```
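Because step 6 also asks for recall and an ROC curve, one option (an addition to the original compile call, assuming a recent TensorFlow version) is to track them as training metrics, so their per-epoch curves show up in history.history under 'recall' and 'auc':
```python
import tensorflow as tf

# Same loss/optimizer as above, with recall and ROC-AUC tracked every epoch
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['acc',
                       tf.keras.metrics.Recall(name='recall'),
                       tf.keras.metrics.AUC(name='auc')])
```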
5. Write the training code and train the model
We can train the model with model.fit, which accepts the generators directly (fit_generator is deprecated in recent TensorFlow versions):
```python
# Steps per epoch are inferred from the generator lengths
history = model.fit(
    train_generator,
    epochs=100,
    validation_data=validation_generator)
```
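Optionally, callbacks can be passed to fit to keep the best weights and stop training when the validation loss stops improving; this is an extra suggestion rather than part of the original recipe, and the checkpoint filename is a placeholder:
```python
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Stop if val_loss has not improved for 10 epochs, and roll back to the best weights
    EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True),
    # Save the best-performing model seen so far
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]

history = model.fit(
    train_generator,
    epochs=100,
    validation_data=validation_generator,
    callbacks=callbacks)
```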
6. Visualize the training process with Matplotlib
We can use Matplotlib to plot how the training and validation accuracy and loss evolve:
```python
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
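For the recall and ROC-curve part of step 6, one approach (a sketch that assumes scikit-learn is installed and evaluates the trained model on the validation generator) is:
```python
import numpy as np
from sklearn.metrics import recall_score, roc_curve, auc

# Run one pass over the validation generator, collecting labels and predicted scores
y_true, y_score = [], []
for _ in range(len(validation_generator)):
    x_batch, y_batch = next(validation_generator)
    y_true.extend(y_batch)
    y_score.extend(model.predict(x_batch).ravel())
y_true = np.array(y_true)
y_score = np.array(y_score)

# Recall at the default 0.5 decision threshold
print('Recall:', recall_score(y_true, y_score > 0.5))

# ROC curve and the area under it
fpr, tpr, _ = roc_curve(y_true, y_score)
plt.figure()
plt.plot(fpr, tpr, label='ROC curve (AUC = {:.3f})'.format(auc(fpr, tpr)))
plt.plot([0, 1], [0, 1], linestyle='--')  # chance-level reference line
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve on the validation set')
plt.legend()
plt.show()
```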
7. Save the trained model
We can use model.save to write the trained model to disk:
```python
model.save('cats_and_dogs_small_1.h5')
```
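To use the model later, it can be loaded back with load_model; the filename matches the one saved above:
```python
from tensorflow.keras.models import load_model

# Restore the trained model from disk and confirm its architecture
restored_model = load_model('cats_and_dogs_small_1.h5')
restored_model.summary()
```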