Write weed recognition code with TensorFlow
Here is an example of weed recognition code based on TensorFlow:
```python
import tensorflow as tf
import numpy as np
import os

# Image size and batch size
IMAGE_SIZE = 128
BATCH_SIZE = 32

# Dataset paths
TRAIN_PATH = 'train'
TEST_PATH = 'test'

# Label list
LABELS = ['dandelion', 'grass', 'lawn', 'weed']

# Map each label to an integer index
label_dict = {label: i for i, label in enumerate(LABELS)}

# Read and preprocess a single image
def read_image(file_path):
    image = tf.io.read_file(file_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
    image = tf.cast(image, tf.float32)
    image /= 255.0
    return image

# Load a dataset from a directory laid out as <data_path>/<label>/*.jpg
def load_dataset(data_path):
    images = []
    labels = []
    for label in LABELS:
        path = os.path.join(data_path, label)
        files = os.listdir(path)
        for file in files:
            if file.endswith('.jpg'):
                image_path = os.path.join(path, file)
                image = read_image(image_path)
                images.append(image)
                labels.append(label_dict[label])
    return tf.data.Dataset.from_tensor_slices((images, labels))

# Load the training and test sets
train_dataset = load_dataset(TRAIN_PATH).shuffle(10000).batch(BATCH_SIZE)
test_dataset = load_dataset(TEST_PATH).batch(BATCH_SIZE)

# Define the model
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(len(LABELS))  # outputs logits; softmax is applied inside the loss
])

# Loss function and optimizer
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Evaluation metrics
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')

# One training step
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = loss_fn(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)

# One evaluation step
@tf.function
def test_step(images, labels):
    predictions = model(images, training=False)
    t_loss = loss_fn(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)

# Training loop
epochs = 10
for epoch in range(epochs):
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()
    for images, labels in train_dataset:
        train_step(images, labels)
    for images, labels in test_dataset:
        test_step(images, labels)
    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))
```
This code uses a simple convolutional neural network for the weed recognition task. First, a `read_image` function reads each image file and converts it into a `tf.Tensor`. Then, a `load_dataset` function loads the images and labels into a `tf.data.Dataset`. Next, a small convolutional network is built with the `tf.keras` Sequential API. Finally, training and test step functions are defined, and the training and test loss and accuracy are printed after every epoch.
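Once training finishes, the model can be saved and applied to new images. Below is a minimal inference sketch that reuses the `read_image` helper and `LABELS` list defined above; the file names `weed_model.keras` and `sample.jpg` are placeholders, not part of the original code.

```python
# Minimal inference sketch (assumes the training script above ran in the same session).
# 'weed_model.keras' and 'sample.jpg' are hypothetical paths.
model.save('weed_model.keras')                       # persist the trained model
loaded = tf.keras.models.load_model('weed_model.keras')

image = read_image('sample.jpg')                     # preprocess exactly like the training data
logits = loaded(tf.expand_dims(image, 0), training=False)
probs = tf.nn.softmax(logits, axis=-1)               # convert logits to class probabilities
pred = int(tf.argmax(probs, axis=-1)[0])
print('Predicted class:', LABELS[pred], 'probability:', float(probs[0, pred]))
```

Because the final Dense layer outputs raw logits (the loss uses `from_logits=True`), the softmax is applied explicitly at prediction time to obtain probabilities.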