Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features
石莹
2021-08-08
The goal of this work is to obtain accurate rotation and translation parameters between the point clouds and the panoramic images. The method consists of three main steps (Fig. 1): (1) estimating accurate rotation parameters via GPS/IMU-aided SfM bundle adjustment; (2) extracting vehicles from the point clouds and the panoramic images; (3) estimating the translation parameters by maximizing the overlap of corresponding primitive pairs.
Fig. 1 Registration workflow for laser point clouds and panoramic images
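Step (3) above, refining the camera-to-scanner translation by maximizing the overlap of corresponding primitive pairs, can be sketched as a plain particle swarm loop. This is an illustrative sketch, not the authors' implementation: the `overlap_score` callback, the search bounds, and the PSO hyperparameters are all assumptions.

```python
import random

def pso_refine_translation(overlap_score, bounds, n_particles=30, iters=50):
    """Minimal global-best PSO sketch: search the translation
    t = (tx, ty, tz) between camera and scanner that maximizes the total
    overlap of corresponding primitive pairs.  `overlap_score(t)` must
    return the overlap measure for a candidate translation."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [overlap_score(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # assumed PSO weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # step, clamped to the search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = overlap_score(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the real pipeline `overlap_score` would re-project the point-cloud primitives under the candidate translation and measure their overlap with the image primitives; here any scalar objective can stand in for it.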
Vehicles are detected with Faster-RCNN. Green rectangles mark regions where vehicles are detected, and blue dashed boxes are the result of fusing overlapping bounding boxes; each fused blue rectangle is then used as a single registration primitive (Fig. 2).
Fig. 2 Vehicle extraction from panoramic images
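The fusion of overlapping detection boxes into one registration primitive can be sketched as a greedy union of intersecting rectangles. This is an illustrative sketch with hypothetical box coordinates; the paper does not spell out the exact merging rule.

```python
def boxes_overlap(a, b):
    """Axis-aligned boxes as (x1, y1, x2, y2); True if they intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def merge_overlapping_boxes(boxes):
    """Greedily fuse overlapping detection boxes into their common
    bounding rectangle, so each cluster of detections yields a single
    registration primitive (the blue dashed boxes in Fig. 2)."""
    merged = []
    for box in boxes:
        box = list(box)
        changed = True
        while changed:          # keep absorbing until box touches nothing
            changed = False
            for other in merged:
                if boxes_overlap(box, other):
                    merged.remove(other)
                    box = [min(box[0], other[0]), min(box[1], other[1]),
                           max(box[2], other[2]), max(box[3], other[3])]
                    changed = True
                    break
        merged.append(tuple(box))
    return merged

# Two overlapping detections fuse into one primitive; the third stays separate.
print(merge_overlapping_boxes([(10, 10, 50, 40), (40, 20, 80, 60), (200, 10, 240, 40)]))
```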
To find the corresponding vehicles in the point cloud, the points are projected onto the panoramic image using the initial EoPs, and the points falling inside a detected bounding box are taken as candidate vehicle points (Fig. 3).
Fig. 3 Vehicle extraction from point clouds
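The projection of point-cloud points into the equirectangular panorama, and the selection of candidate vehicle points inside a detection box, can be sketched as below. The axis convention and both helper names are assumptions for illustration; the points are taken to be already transformed into the camera frame with the current EoPs.

```python
import numpy as np

def project_to_panorama(points_cam, width, height):
    """Project 3D points (already in the panoramic camera frame) onto an
    equirectangular image of size width x height.  Assumed convention:
    x right, y forward, z up; longitude from atan2, latitude from z.
    Returns integer pixel coordinates (col, row), one row per point."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    lon = np.arctan2(x, y)             # 0 = straight ahead
    lat = np.arcsin(z / r)             # [-pi/2, pi/2]
    col = (lon / (2 * np.pi) + 0.5) * width
    row = (0.5 - lat / np.pi) * height
    return np.stack([col, row], axis=1).astype(int)

def points_in_box(pixels, box):
    """Indices of projected points inside a detection box (x1, y1, x2, y2);
    these are the candidate vehicle points."""
    x1, y1, x2, y2 = box
    inside = ((pixels[:, 0] >= x1) & (pixels[:, 0] <= x2) &
              (pixels[:, 1] >= y1) & (pixels[:, 1] <= y2))
    return np.nonzero(inside)[0]
```

For example, a point one metre straight ahead of the camera projects to the image centre, and falls inside any box covering that pixel.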
Fig. 4 Vehicle extraction results in the point cloud
Fig. 5 Extraction of primitive pairs
(a) Registration with the original EoPs; (b) initial result of the proposed method; (c) fine registration result of the proposed method
Fig. 6 Comparison of registration results
Abstract
Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration lead to failure of registration between a panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS- and IMU-aided structure from motion (SfM); the initial EoPs of the panoramic images are obtained at the same time. Secondly, vehicles in the panoramic images are extracted by Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in the point clouds according to the initial EoPs. Finally, the translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs using Particle Swarm Optimization (PSO), resulting in a finer registration between the panoramic image sequence and the point clouds. Experiments on two challenging urban scenes were conducted to assess the proposed method; the final registration errors of both scenes were less than three pixels, which demonstrates a high level of automation, robustness, and accuracy.