Robot localization with navigation stack


I’ve been working on a robot for some months now. The goal is to have the robot navigate indoors.
I currently have a 360° laser, an RPLIDAR A1 (using the rf2o algorithm to get odometry), paired with data coming from an IMU (Xsens MTi-7).
I fuse both sensors in robot_localization, but the odometry I get is really… bad. Even after tuning the different parameters, I always end up with a worse result than with the odometry from the rf2o algorithm alone (but its frequency is too low: 7 Hz, so the navstack can’t use it).
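For context, here is a minimal sketch of the kind of EKF config I've been iterating on. A common recommendation for this package is to fuse velocities from laser odometry rather than absolute poses, so the two sources don't fight each other. The topic names and the exact fused fields below are assumptions for illustration, not my actual setup:

```yaml
# Sketch of an ekf_localization_node config (ROS 1 style).
# Topic names and fused fields are illustrative assumptions.
frequency: 30            # output rate can exceed the 7 Hz rf2o input
two_d_mode: true         # planar indoor robot

odom_frame: odom
base_link_frame: base_link
world_frame: odom

# Laser odometry from rf2o: fuse x/y velocity and yaw velocity
# instead of absolute pose (order: x y z, r p y, vx vy vz,
# vroll vpitch vyaw, ax ay az).
odom0: /odom_rf2o
odom0_config: [false, false, false,
               false, false, false,
               true,  true,  false,
               false, false, true,
               false, false, false]

# IMU: fuse yaw velocity and forward acceleration.
imu0: /imu/data
imu0_config: [false, false, false,
              false, false, false,
              false, false, false,
              false, false, true,
              true,  false, false]
imu0_remove_gravitational_acceleration: true
```

Getting the sensor covariances right (rf2o and IMU driver outputs) seems to matter at least as much as which fields are fused.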

So my question is: is it really possible to get something good out of robot_localization on a real robot?
The incoming data being fused is more accurate than the output I am getting… The result is a shaky robot in RViz, and the navigation stack keeps going into recovery mode. I think the localization itself is OK, since the robot is never lost in its environment, but I guess the navstack needs a more accurate pose to work properly.

If I use an Intel T265 camera, I get an accurate pose and everything works properly, but since I don’t have any wheel odometry, it sadly isn’t a reliable odometry source. I only mention this to show that the rest of the chain works perfectly and that my only problem is definitely robot_localization.

If there is any good, accurate video about this package, I would be glad to hear about it, because I haven’t found one and I’m a bit desperate: I’ve read the entire package documentation and still couldn’t get anything usable.

Thanks for reading!


So I would recommend you take a look at these courses:
Fuse Sensors:

And this Navigation Learning Path:


Thanks a lot for the answer; it seems to fulfill my needs. I just hope they explain everything in depth and don’t just show some easy setups.

I’ll report back on this topic later about how the course helped me.