Journal of Computer and Communications, 2019, 7, 53-64
http://www.scirp.org/journal/jcc
ISSN Online: 2327-5227
ISSN Print: 2327-5219
DOI: 10.4236/jcc.2019.77006
Semantic Segmentation Based Remote Sensing Data Fusion on Crops Detection

Jose Pena 1,2, Yumin Tan 1, Wuttichai Boonpook 1

1 School of Transportation Science and Engineering, Beihang University, Beijing, China
2 Venezuela Space Agency (ABAE), Caracas, Venezuela
Abstract
Data fusion is usually an important process in multi-sensor remotely sensed imagery integration environments, with the aim of enriching features lacking in the sensors involved in the fusion process. This technique has attracted much research interest, especially in the field of agriculture. On the other hand, deep learning (DL) based semantic segmentation shows high performance in remote sensing classification, and it requires large datasets for supervised learning. In this paper, a method of fusing multi-source remote sensing images with convolutional neural networks (CNN) for semantic segmentation is proposed and applied to identify crops. Venezuelan Remote Sensing Satellite-2 (VRSS-2) and high-resolution Google Earth (GE) imagery have been used, and more than 1000 sample sets have been collected for the supervised learning process. The experiment results show that crop extraction with an average overall accuracy of more than 93% has been obtained, which demonstrates that data fusion combined with DL is highly feasible for crop extraction from satellite images and GE imagery, and shows that deep learning techniques can serve as an invaluable tool for larger remote sensing data fusion frameworks, specifically for applications in precision farming.
Keywords
Data Fusion, Crops Detection, Semantic Segmentation, VRSS-2
1. Introduction
At present RS technology has received great attention in the agriculture com-
munity due to its ability to provide periodic and regional information for crop
monitoring and thematic mapping [1] [2]. Modern RS to identify any features
How to cite this paper: Pena, J., Tan, Y.M. and Boonpook, W. (2019) Semantic Segmentation Based Remote Sensing Data Fusion on Crops Detection. Journal of Computer and Communications, 7, 53-64. https://doi.org/10.4236/jcc.2019.77006

Received: May 20, 2019
Accepted: July 7, 2019
Published: July 10, 2019

on the surface no longer means processing a single-source, single-date image; it has shifted to multi-source fusion of multi-temporal images. Several spectral indices have been proven to be valuable tools in describing crop spatial variability. In this context, images of high spatial and spectral resolution have already proved their potential and effectiveness in crop detection. However, identifying the types of crops with multispectral imagery makes RS more challenging. The main challenge of satellite-based remote sensing applications in agriculture at present is that there are no sensors that combine very high spatial resolution (below 50 centimeters) with good temporal and spectral resolution at the same time.
Indeed, novel approaches and algorithms using Unmanned Aerial Vehicle (UAV) or satellite-based multispectral imaging have been developed for vegetation classification [3]. However, UAV images and images from other platforms such as GeoEye-1, WorldView-4 and KompSat 3a can be difficult to acquire, considering their high cost and their availability only over specific small regions. Google Earth (GE) provides an open data source with very high spatial resolution, which represents a very good alternative for crop detection. Very few studies have used GE images as the direct data source for land use/cover mapping [4]. Numerous DL-based methods have been proposed recently for agricultural applications over specific RS data, especially focusing on high-resolution and hyperspectral images [5], plant phenotyping [6], weed scouting [7] and early disease detection [8]. However, some recent approaches that directly adopt deep architectures designed to identify other aspects of vegetation or diseased plants have produced results that, although very encouraging, appeared coarse [9].
In this research, we identify several types of crops that have very different shapes, sizes, and color intensities, and whose surrounding plants and background soil strongly differ across regions. In addition, data fusion of RGB images (with high spatial resolution) obtained from Google platforms and multispectral satellite imagery obtained from the Venezuelan Remote Sensing Satellite-2 (VRSS-2) is done through the Gram-Schmidt (GS) pan-sharpening method. Fused images and vegetation indices (VIs) were used as input to the following SegNet-based semantic segmentation. Our main contributions can be summarized as follows: this is probably the first attempt to explore the combination of VRSS-2 and GE imagery through a data fusion process for crop detection; a SegNet-based semantic segmentation model is proposed for crop type detection, capable of adapting to fused data sets, and the results prove that this approach provides better performance than traditional classification methods; a self-designed preparation of data sets and a semantic segmentation network have been employed to provide a per-pixel labelling of the input data; finally, two different data sets from VRSS-2 and GE, both obtained absolutely free of cost, have been employed with several pre-processing and post-processing strategies, designed and combined with the SegNet architecture, which has increased the overall accuracy.
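The Gram-Schmidt pan-sharpening idea mentioned above can be illustrated with a simplified sketch: simulate a low-resolution panchromatic band as the mean of the multispectral bands, histogram-match the real pan band to it, and inject the resulting spatial detail into each band weighted by its covariance with the simulated pan. This is a minimal illustration, not the exact GS variant used in the paper (or in commercial software such as ENVI); all function and parameter names here are our own.

```python
import numpy as np

def gram_schmidt_pansharpen(ms, pan):
    """Simplified Gram-Schmidt-style pan-sharpening sketch.

    ms  : (bands, H, W) multispectral array, already resampled to the pan grid
    pan : (H, W) panchromatic band
    """
    bands = ms.shape[0]
    # Step 1: simulate a low-resolution pan band as the mean of the MS bands
    sim_pan = ms.mean(axis=0)
    # Step 2: per-band injection gains = cov(band, sim_pan) / var(sim_pan)
    flat = ms.reshape(bands, -1)
    sp = sim_pan.ravel()
    gains = np.array([np.cov(b, sp)[0, 1] / sp.var() for b in flat])
    # Step 3: histogram-match the real pan to the simulated one, then
    # inject the spatial detail (matched pan minus simulated pan)
    pan_adj = (pan - pan.mean()) * (sim_pan.std() / pan.std()) + sim_pan.mean()
    detail = pan_adj - sim_pan
    return ms + gains[:, None, None] * detail[None, :, :]
```

If the pan band carries no extra detail (i.e. it equals the simulated pan), the output reduces to the input multispectral stack, which is a useful sanity check for any injection-based fusion scheme.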

2. Materials and Methods
2.1. Study Area
The study area is located in the north-central region of Venezuela, Aragua State, Palo Negro Sector. The most important agricultural production is concentrated in this area, and the main crops produced are banana, pasture, papaya and coco. Banana and pasture production have greater importance in the study area because they represent 65% of the economy of that region of Venezuela. In recent years their production, and thereby the source of employment, has declined considerably. Reasonably, the state has taken steps to identify and quantify the possible reasons and overcome the problems. 'Bare land' comes into this issue as one of the solutions to increase production, using those lands which are plentiful. In this study, different training zones and a testing zone are used.
2.2. Data Sets Construction
The design of the training dataset is key to the performance of a good CNN classification model, and the construction of the datasets is described below. The three datasets used in this research contain the RGB image set from the VRSS-2 image, Google Earth mapping, and data fusion images composed of the multispectral bands including the RGB bands, near-infrared (NIR), and the normalized difference vegetation index (NDVI).
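Building a fused input stack of the kind described above (RGB + NIR + NDVI) can be sketched as follows. The channel ordering and the small epsilon guard against division by zero are our own choices, not details taken from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    # Normalized Difference Vegetation Index, in [-1, 1];
    # eps avoids division by zero over water or shadow pixels
    return (nir - red) / (nir + red + eps)

def build_input_stack(r, g, b, nir):
    # Stack RGB, NIR and NDVI into a 5-channel (C, H, W) array,
    # the kind of multi-band input a segmentation network consumes
    return np.stack([r, g, b, nir, ndvi(nir, r)], axis=0)
```

For healthy vegetation, NIR reflectance is high and red reflectance low, so the NDVI channel adds a strong vegetation signal on top of the raw bands.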
2.2.1. VRSS-2 Image
VRSS-2 was launched on October 09, 2017, and is owned by the Bolivarian Agency for Space Activities (ABAE). It carries two different cameras, a High Resolution Camera (panchromatic and multispectral sensors) and an Infrared Camera. VRSS-2 data has a total of 10 bands: a panchromatic band (band 1) with 1 m spatial resolution, and nine multispectral bands (bands 2 - 10) with spatial resolutions of 3 m (bands 2 - 5), 30 m (bands 6 - 8) and 60 m (bands 9 - 10) respectively. However, in this research, only five bands are selected (bands 1 - 5).
The radiometric calibration procedure is first applied to the selected VRSS-2 images to generate consistent output images. To obtain high-quality fusion data, it is important to correct the data for various lighting conditions such as overcast skies and partial cloud coverage. To correct for this aspect, we utilize sunlight sensors measuring the sun's orientation and irradiance, as shown in Figure 1.
The obtained data are stored as quantized and calibrated Digital Numbers (DN). The DN are converted to surface reflectance values using Equation (1), with coefficients provided in the metadata file and by ABAE.
ρ_TOA = (π × L × d²) / (E_sun × cos θ_s)    (1)

where ρ_TOA is surface reflectance of the earth at the top of the atmosphere, L is apparent radiance at the top of the atmosphere in Watt/m²/stereo-radian/
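The DN-to-reflectance conversion of Equation (1) can be sketched as below, assuming a linear DN-to-radiance calibration (gain and offset from the image metadata); the parameter names are illustrative, not the actual VRSS-2 metadata field names.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, d, theta_s_deg):
    """Convert Digital Numbers to TOA reflectance per Equation (1).

    gain, offset : radiometric calibration coefficients (from metadata, assumed linear)
    esun         : band-averaged solar irradiance E_sun
    d            : Earth-Sun distance in astronomical units
    theta_s_deg  : solar zenith angle theta_s, in degrees
    """
    radiance = gain * dn + offset                      # DN -> at-sensor radiance L
    theta = np.deg2rad(theta_s_deg)
    # Equation (1): rho_TOA = pi * L * d^2 / (E_sun * cos(theta_s))
    return np.pi * radiance * d**2 / (esun * np.cos(theta))
```

Applying this per band, with each band's own gain, offset and E_sun, yields the consistent reflectance images used as input to the fusion step.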