Measuring canopy height with a UAV-mounted Airphen multispectral camera
Published: 2017-06-14 09:53:38
Deriving canopy height from drone observations: Overview of the expected accuracy and main influential factors
S. Madec1, F. Baret1, G. Collombeau1, S. Thomas2, A. Comar3, M. Hemmerlé3, B. de Solan2, D. Dutartre4
1INRA, UMR EMMAH, Avignon, France, email: simon.madec@paca.inra.fr
2ARVALIS Institut du Végétal, Avignon, France
3HIPHEN, Avignon, France
4ITB, Avignon, France
Drones allow collecting images with large overlaps so that the same point on the ground can be seen in several images and thus from different angles. Photogrammetric techniques are then applied to exploit this property and derive a dense 3D point cloud from which the macro-structure (i.e., the convex hull of the vegetation) and the corresponding canopy height can be computed. This canopy characteristic is very appealing in the context of high-throughput plant phenotyping under field conditions. Although this technique is becoming relatively common, the expected accuracy and the main influential factors are not precisely known. The objective of this study was to quantify the accuracy with which canopy height can be estimated and to identify the main factors that have to be accounted for to achieve the best performance. A series of field phenotyping trials was conducted on sugar beet, wheat and maize in 2015 and 2016. A hexacopter was flown carrying either a Sony Alpha high-resolution camera (24 Mpixels) or an AIRPHEN multispectral camera (1.3 Mpixels). Ground measurements of canopy height were concurrently completed using either visual notations or values extracted from LIDAR observations. The images from the cameras aboard the drone were processed using Agisoft PhotoScan software, which aligns the images and generates a dense 3D point cloud from which the vertical distribution of the points can be computed. A percentile of this distribution is used to determine the corresponding canopy height.

Image-based canopy height was in good agreement with visual notations of canopy height, with an uncertainty of around 7 cm (sugar beet) to 15 cm (maize). More detailed results gathered on wheat using LIDAR showed that most of these uncertainties are attributable to the precision of the visual notations. Several factors potentially affecting the accuracy of canopy height estimation from UAV observations were analyzed.
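The height-extraction step described above, taking an upper percentile of the vertical distribution of the dense 3D points, can be sketched as follows. This is a minimal illustration, not the authors' code: the function name and the 99.5th-percentile default are assumptions, and the abstract does not report which percentile was used.

```python
import numpy as np

def canopy_height(points, soil_altitude, percentile=99.5):
    """Estimate canopy height from a dense 3D point cloud.

    points: (N, 3) array of x, y, z coordinates from photogrammetry.
    soil_altitude: soil reference altitude on the same vertical datum as z.
    percentile: upper percentile of the height distribution taken as the
        canopy top (99.5 is an illustrative choice, not the study's value).
    """
    heights = points[:, 2] - soil_altitude
    heights = heights[heights >= 0]  # discard points below the soil reference
    return float(np.percentile(heights, percentile))
```

Using a high percentile rather than the maximum makes the estimate robust to the few spurious points that photogrammetric matching typically leaves above the true canopy top.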
One of the most important factors is the way the soil reference altitude is computed. In dense canopies where the soil can hardly be seen, it is advisable to use a previous reference 3D point cloud corresponding to bare soil. Other factors include the ground resolution relative to the size of the vegetation elements, the number of images from which each point on the ground is seen, the field of view and the range of directions available. The influence of the parameters used in Agisoft PhotoScan for generating the 3D point cloud was also evaluated. Finally, an optimal procedure is presented that should ensure accurate estimates of canopy height that can be used for genotype characterization in phenotyping experiments.
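The bare-soil reference idea can be sketched as follows: grid a bare-soil point cloud acquired before canopy closure into a per-cell soil altitude, then reference canopy points to the soil altitude of their cell. Both helpers are hypothetical; the abstract does not specify the gridding method or cell size.

```python
import numpy as np

def soil_reference_grid(bare_soil_points, cell=0.5):
    """Build a per-cell soil-altitude lookup from a bare-soil point cloud.

    bare_soil_points: (N, 3) array from a flight over bare soil.
    cell: grid cell size in metres (illustrative default).
    Returns a dict mapping (ix, iy) cell indices to the median soil altitude.
    """
    ix = np.floor(bare_soil_points[:, 0] / cell).astype(int)
    iy = np.floor(bare_soil_points[:, 1] / cell).astype(int)
    ref = {}
    for key in set(zip(ix.tolist(), iy.tolist())):
        mask = (ix == key[0]) & (iy == key[1])
        ref[key] = float(np.median(bare_soil_points[mask, 2]))
    return ref

def height_above_soil(point, ref, cell=0.5):
    """Height of one canopy point above the soil altitude of its grid cell."""
    key = (int(np.floor(point[0] / cell)), int(np.floor(point[1] / cell)))
    return point[2] - ref[key]
```

The median per cell keeps the reference robust to residual vegetation or noise in the bare-soil cloud; a production pipeline would typically interpolate a continuous terrain model instead of a cell lookup.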