Indoor Benchmark of 3D LiDAR SLAM at iilab – Industry and Innovation Laboratory

To be available soon.

This dataset is an innovative indoor benchmark designed to advance research in 3D LiDAR-based SLAM algorithms. Its goal is to provide a robust and diverse foundation for evaluating and improving SLAM techniques in complex indoor environments. The dataset combines synchronized data from multiple sensors (four different 3D LiDAR systems, a 2D LiDAR, an IMU, and wheel odometry) with high-precision ground truth captured by a motion capture system.

Data Collection Method

The data was recorded using the "rosbag record" command of the ROS framework (https://ros.org), followed by post-processing steps using Python scripts.

For the sensor data, the bag files were recorded on the LattePanda 3 Delta onboard computer of the mobile robot platform. After the recording, a post-processing step was performed to correct the timestamps of the messages from the external IMU (Xsens MTi-630).
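For illustration, such a correction can be applied offline with the rosbag Python API. The sketch below is a minimal example, not the exact script used for this dataset: the topic name and the correction rule (overwriting each IMU header stamp with the bag receipt time) are assumptions.

    import rosbag

    IMU_TOPIC = "/imu/data"  # assumed topic name for the Xsens MTi-630

    # Rewrite the bag, fixing the IMU message timestamps.
    # Assumption: the header stamp is replaced by the bag receipt time;
    # the actual correction applied to this dataset may differ.
    with rosbag.Bag("sequence_raw.bag") as inbag, \
         rosbag.Bag("sequence_fixed.bag", "w") as outbag:
        for topic, msg, t in inbag.read_messages():
            if topic == IMU_TOPIC:
                msg.header.stamp = t
            outbag.write(topic, msg, t)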

The ground-truth data was also first recorded in rosbag files, on a secondary computer running the NatNet 4 ROS driver (https://github.com/L2S-lab/natnet_ros_cpp) on Ubuntu. This secondary computer was connected via Ethernet to a main computer running Motive (https://optitrack.com/software/motive) on Windows (required to process the data from the OptiTrack cameras). The bag files were then processed with the EVO open-source Python library (https://github.com/MichaelGrupp/evo) to convert them to the TUM file format and to adjust the initial position offset (relevant for benchmarking the odometry estimation of SLAM algorithms).
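The initial offset adjustment amounts to expressing every pose relative to the first one. A minimal sketch of that transform, written with numpy/scipy rather than EVO's internals (file names are placeholders), could look like this:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def rebase_tum(in_path, out_path):
        # TUM format: timestamp x y z qx qy qz qw (one pose per line)
        data = np.loadtxt(in_path)
        t0 = data[0, 1:4]
        q0_inv = R.from_quat(data[0, 4:8]).inv()
        out = data.copy()
        # Express each pose in the frame of the first pose
        out[:, 1:4] = q0_inv.apply(data[:, 1:4] - t0)
        out[:, 4:8] = (q0_inv * R.from_quat(data[:, 4:8])).as_quat()
        np.savetxt(out_path, out, fmt="%.9f")

    rebase_tum("ground_truth_raw.txt", "ground_truth_tum.txt")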

Additionally, since time synchronization of the data is important, the ground-truth data and sensor data are synchronized by running the NTP protocol between the mobile robot platform and the secondary computer.
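As an illustrative sanity check (not part of the dataset pipeline), the residual clock offset of either machine against an NTP server can be queried with the ntplib Python package; the server address below is a placeholder.

    import ntplib

    # Query the clock offset of this machine relative to an NTP server.
    # "ntp.example.org" is a placeholder; in the setup described above, the
    # robot platform and the secondary computer sync to a common reference.
    client = ntplib.NTPClient()
    response = client.request("ntp.example.org", version=3)
    print(f"clock offset: {response.offset * 1000:.3f} ms")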

Type of Instrument
Mobile Robot Platform: Modified Hangfa Discovery Q2 for Multimodal Perception. R.B. Sousa, H.M. Sobreira, J.G. Martins, P.G. Costa, M.F. Silva, and A.P. Moreira. "Integrating Multimodal Perception into Ground Mobile Robots" [Manuscript submitted for publication @ ICARSC 2025]

Sensor data:
- 3D LiDARs: Livox Mid-360, RoboSense RS-HELIOS-5515, Ouster OS1-64 RevC, Velodyne VLP-16
- 2D LiDAR: Hokuyo UST-10LX-H01
- IMU: Xsens MTi-630
- Wheel encoders: OEM Faulhaber 2342 motors, 64:1 gear ratio, 12 Counts Per Revolution (CPR), 5 V two-channel encoder at the motor shaft

Ground truth data:
- OptiTrack motion capture system with 24 PrimeX 22 cameras @ Room A, Floor 0, iilab

Data and Resources

Additional Information

Field
Author Jorge Diogo Ribeiro, Ricardo Barbosa Sousa, João Graça Martins, André Silva Aguiar, Filipe Neves dos Santos
Last Updated February 27, 2025, 10:43 (UTC)
Created February 27, 2025, 10:02 (UTC)
Format The files come in different formats. The sensor data is provided in ROS1 bag files, a standardized format widely used in the robotics domain. Each ROS1 bag file in the dataset contains all the sensor data for a given sequence (the time interval in which the robot performs a defined trajectory), organized as messages in different topics (one topic per sensor). A similar dataset is available for reference (https://github.com/TIERS/tiers-lidars-dataset). The ground truth data is provided as .txt files in the TUM format (8 columns: 1 for the timestamp in seconds, 3 for the xyz position, and 4 for the rotation as a quaternion); an illustrative excerpt is shown after this list. This is also a standard format for benchmarking the odometry estimation of SLAM algorithms. The calibration files are provided in YAML format, which is also a standard format used by several other datasets (e.g. https://rvp-group.net/slam-dataset.html).
Language English
Project GreenAuto: Green innovation for the Automotive Industry
Relation Article and code to be available soon.
Size Approximately 350 GB
Software Motive (OptiTrack Motion Capture System), Robot Operating System (ROS)
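For illustration, the first lines of a ground-truth file in the TUM format could look as follows (values are made up for the example):

    # timestamp [s]    x [m]   y [m]   z [m]   qx      qy      qz      qw
    1740650000.100     0.000   0.000   0.000   0.000   0.000   0.000   1.000
    1740650000.200     0.012   0.003   0.000   0.000   0.000   0.009   1.000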