Localization drift on soft surfaces: Software issue?

konvergentan1987 Member
edited October 20 in General

The robot has issues navigating/localizing itself on some soft surfaces. I have a carpet (2.2 x 1.8 m, about 11 mm thick) and the robot starts to drift (the location where it thinks it is doesn't match its real-world location) as soon as it starts cleaning it. The localization drift increases over time, and after 10 minutes it is roughly 2 m (see attached pic). If I remove the carpet, the robot navigates perfectly, so the problem must lie with the carpet. This behavior is observable on firmware versions 40-41.14 (I didn't try earlier versions).

Observing the robot's behavior and reading similar threads on this forum, I have come to believe that the issue is a software one rather than a hardware one. Namely, the robot obviously uses the position encoders in the wheels (motors) to determine how far it has moved. However, this data is not reliable if the surface is soft (one full revolution of a wheel does not move the robot by the wheel's circumference!). The robot could easily correct this noisy information if it fused the encoder data with the data it gets from the two distance sensors (lasers?!?) and the camera (i.e., fell back on visual odometry). If the localization software, which probably runs some kind of basic SLAM (simultaneous localization and mapping), relied more on visual odometry and less on the motor encoders in this specific scenario, the issue could be resolved (the robot recognizes that it is on a carpet-like surface anyway, so it could adjust which sensors it "believes" more on the fly).
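To illustrate the kind of fusion I have in mind (purely my own sketch with made-up weights and function names, not a claim about how the Pure i9 firmware actually works): once the robot detects a carpet-like surface, it could simply trust the wheel-encoder displacement less and the visual-odometry displacement more.

```python
# Hypothetical sketch: blend wheel-encoder and visual-odometry displacement
# estimates, trusting the encoders less on soft/slippery surfaces.
# This is NOT the Pure i9 firmware, just an illustration of the idea.

def fuse_displacement(encoder_dx, visual_dx, on_carpet):
    """Return a fused forward-displacement estimate (meters).

    encoder_dx: displacement derived from the wheel encoders
    visual_dx:  displacement derived from visual odometry
    on_carpet:  True if the robot thinks it is on a carpet-like surface
    """
    # Made-up weights: on hard floor trust the encoders, on carpet trust
    # visual odometry more because the wheels slip.
    w_encoder = 0.2 if on_carpet else 0.8
    w_visual = 1.0 - w_encoder
    return w_encoder * encoder_dx + w_visual * visual_dx


# Example: one wheel revolution "says" 0.15 m, but visual odometry only saw 0.11 m.
print(fuse_displacement(0.15, 0.11, on_carpet=True))   # ~0.118 m
print(fuse_displacement(0.15, 0.11, on_carpet=False))  # ~0.142 m
```

In a real system this weighting would of course live inside the SLAM filter (e.g., by inflating the encoder measurement covariance on carpet), but the principle is the same.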

However, it seems to blindly "believe" the motor encoders, which, on large carpeted areas, leads to the accumulation of drift. An even stranger software design decision is that the developers apparently opted against letting the robot correct its positional drift at all. Once it gets lost, it's game over: it just roams around cluelessly. This is highly unusual in navigation robotics, and I must say it is a problem that was solved at least 15-20 years ago in this field of research. The state-of-the-art approach would be to continuously fuse the data from several sensors (encoders, inertial sensors, cameras, distance sensors - this robot has them all) and perform (or correct) the localization based on these. Each of these sensors is noisy (inaccurate), but combined they are more than capable of correcting the drift that accumulates over time.

Therefore, once it drifts away, instead of roaming cluelessly and bumping into walls/furniture "which should not be there", the robot should engage in active recalibration. It should (re)scan the area (oddly enough, it does this to some extent, since it rotates around itself - but it seems to ignore this data) and compare its features (geometrical or otherwise) with those saved in the cloud (e.g., the map of the apartment). The robot does seem to realize that it is lost; however, it then tries to navigate back to the charging station simply by following the edges (walls?) of the apartment in the "hope" that it will eventually stumble upon the station. The currently implemented behavior can therefore be characterized as a suboptimal brute-force approach (search blindly until you hit it). Moreover, this very rudimentary strategy suggests that the robot doesn't actually compare features between the cloud and the real-time sensor feed (or does so only in some cases). Maybe it doesn't even store any geometrical or visual landmarks (beyond basic ones) and instead uses the cloud only to store a simple 2D navigation map - it's hard to tell.
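To make the "active recalibration" idea concrete, here is a minimal sketch (again my own illustration with invented thresholds, not the actual firmware) of the kind of consistency check I mean: compare what the distance sensors currently see with what the stored map predicts for the believed pose, and if too many readings disagree, declare the robot lost and trigger a re-scan instead of carrying on blindly.

```python
# Hypothetical "am I lost?" check: compare measured ranges against the ranges
# the stored map predicts for the believed pose. If the mismatch is large, the
# robot should stop trusting its pose and start an active relocalization.
# All thresholds are invented for illustration.

def localization_consistent(measured_ranges, predicted_ranges,
                            tolerance_m=0.10, max_bad_fraction=0.3):
    """measured_ranges / predicted_ranges: ranges (meters) for the same set of
    beam directions. Returns True if the believed pose still looks plausible."""
    bad = sum(1 for m, p in zip(measured_ranges, predicted_ranges)
              if abs(m - p) > tolerance_m)
    return bad / len(measured_ranges) <= max_bad_fraction


# After ~2 m of drift, most beams disagree with the map:
measured = [0.6, 0.7, 1.9, 0.5, 1.2]
predicted = [2.5, 2.6, 1.8, 2.4, 3.0]
if not localization_consistent(measured, predicted):
    print("Pose inconsistent with map -> enter relocalization mode")
```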

In conclusion, I hope my examples were comprehensive enough to show that the current SLAM implementation lags somewhat behind state-of-the-art approaches. I hope the developers will resolve these localization issues in future firmware releases. I also hope that Electrolux will have more understanding for this issue - AEG Germany didn't bother at all to understand that it's a software and not a hardware problem (they simply sent me a new device :D). Additionally, if the developers find it necessary/helpful, I can send them the debug data from the problematic area with the carpet.


Comments

  • raahlb Member, Moderator mod

    Hi,
    One of the problems is the way the laser sensor works - it takes a long time, i.e. several seconds, to actually get a full reading of a wall. That is why it is so sensitive to errors in the odometry (a rough back-of-the-envelope illustration follows at the end of this comment). We could design a completely different cleaning behaviour for use on slippery surfaces, but that is of course neither fool-proof nor trivial.

    Slippery surfaces are a big problem for us, and improving dead reckoning on them is something that has been a main goal when we're looking at making a next generation.

    Here you can find a video on the laser sensor:
    https://community.purei9.com/discussion/3/video-5min-presentation-about-the-3d-vision-sensor
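
    To put rough numbers on that odometry sensitivity (the figures here are assumptions for illustration, not actual specs of the robot):

    ```python
    # Back-of-the-envelope: error baked into one laser sweep if the wheels slip.
    # All numbers are illustrative assumptions, not real robot parameters.

    scan_duration_s = 5.0     # "several seconds" for one full wall reading
    encoder_speed_mps = 0.25  # forward speed as reported by the wheel encoders
    slip_fraction = 0.10      # 10% of the wheel rotation is lost to slip on carpet

    distance_believed = encoder_speed_mps * scan_duration_s
    distance_actual = distance_believed * (1.0 - slip_fraction)
    error_m = distance_believed - distance_actual

    print(f"Error accumulated during one sweep: {error_m * 100:.1f} cm")  # 12.5 cm
    # Every point of that wall reading is registered from a slightly different,
    # wrongly estimated position, which is why slow readings amplify odometry error.
    ```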

  • Hi, thanks for the additional explanation and the video!

    Yes, I agree, a long delay in processing/reading the laser data might be problematic and indeed too slow to correct for the drift on a slippery surface in real-time.

    What I still don't understand, however, is the implementation (or rather the lack of one) of drift correction, raised in the second part of my original comment. To clarify: assume that the robot (which has already compiled & locked the map of the apartment) got lost on the carpet and then actually gets off it (it never visits the problematic area again, as in my case). Assume additionally that it needs to clean another room. The robot therefore has to plan how to get from A to B. However, since location A is wrong, the planned route will not correspond to the real-world route it should actually take. Now notice that its movement from A to B is actually drift-free; the dead reckoning is precise since the robot is no longer on the carpet. The error now comes from a different source: it **thinks and keeps thinking** it is at location A when it is really at location A'. And it keeps thinking this despite all the cues telling it otherwise.

    So my question is: why doesn't it recalibrate (reset) its location based on the sensor data and the cloud map of the apartment? Why does it instead try, over and over (until it depletes the battery), to execute the same strategy for getting from A to B? This is what usually makes it hit the same obstacle again and again (because it thinks the obstacle should not be there). It seems to me that the visual data feed the robot provides should be enough for it to understand that it is not where it thinks it is, "simply" by comparing the features extracted from the current video feed with the features in the map stored in the cloud. For this operation the robot could also take its time: spin around a few times or make movements in different directions. Put differently, the robot should test (i.e., go into a kind of "explore the environment" mode) whether where it thinks it is is actually coherent both with its real-world "experience" and with what it has learned so far (i.e., the memorized cloud map). This would allow it to "jump" (correct) its current tracked position even by several meters, which could eventually resolve the large drift that accumulates over time (see the toy sketch after this comment). The current behavior is not like this. Once the robot drifts, it keeps this drift error constant (it basically becomes a positional offset), even if the reasons for the drift are gone (e.g., it is no longer on the slippery surface), practically forever (i.e., until it is carried back to the charging station) …

    Cheers,
    Marko
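
    P.S. To make the "jump" I am describing concrete, here is a toy sketch (my own illustration with a made-up map format, not a description of your firmware): search over candidate position offsets, score each by how well the current obstacle observations line up with the stored map, and shift the believed pose by the best-scoring offset.

    ```python
    # Toy relocalization: find the translation offset that best aligns currently
    # observed obstacle points with the stored map, then "jump" the believed pose.
    # Map format, grid step and all numbers are made up for illustration.

    def best_offset(observed_points, map_cells, search_radius_m=3.0, step_m=0.1):
        """observed_points: (x, y) obstacle points in the robot's *believed* frame.
        map_cells: set of (x, y) occupied cells, rounded to the grid step.
        Returns the (dx, dy) that puts the most observed points onto map cells."""
        def snap(v):
            return round(round(v / step_m) * step_m, 3)

        best, best_score = (0.0, 0.0), -1
        steps = int(round(search_radius_m / step_m))
        for i in range(-steps, steps + 1):
            for j in range(-steps, steps + 1):
                dx, dy = i * step_m, j * step_m
                score = sum((snap(x + dx), snap(y + dy)) in map_cells
                            for x, y in observed_points)
                if score > best_score:
                    best, best_score = (dx, dy), score
        return best


    # Example map: one wall along y = 2.0 and one along x = 0.0.
    wall_y = {(round(i * 0.1, 3), 2.0) for i in range(31)}
    wall_x = {(0.0, round(j * 0.1, 3)) for j in range(21)}
    apartment_map = wall_y | wall_x

    # Due to drift, the robot currently "sees" those walls in the wrong place:
    seen = [(0.6, 0.5), (1.1, 0.5), (1.6, 0.5),   # actually the y = 2.0 wall
            (0.4, 0.1), (0.4, 0.4)]               # actually the x = 0.0 wall
    print(best_offset(seen, apartment_map))        # -> (-0.4, 1.5): jump the pose
    ```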

  • raahlb Member, Moderator mod
    edited October 22

    We do actually have an "explore the environment" mode. It can kick in if the environment is off compared to expectations, or if the robot cannot find the charger when it's heading home. Also, as it takes such a long time to get a good reading of the room, it takes a while for the robot to decide that its localization is off. We're wary of activating it, though.
    The range of the laser sensor is also fairly short - we trust it up to about 2 m, as beyond that temperature changes can play havoc with the data (lens distortion). (A small range-gating sketch follows at the end of this comment.)

    But yes, we could improve the position recovery mode - it's something we've been considering when planning our future work.

    If the robot is hitting an obstacle over and over again, I think it's more about the robot having a hard time detecting it. If its position is off, it will usually go back and forth along a wall (trying to take a path through it). Though, it's not that our algorithm for trying different paths is perfect.
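
    To illustrate that ~2 m trust limit (the numbers and the linear fade-out here are simplified for illustration, not our actual code):

    ```python
    # Sketch of range gating: fully trust laser returns below ~2 m, fade trust
    # out beyond that, since long ranges may be corrupted by temperature-induced
    # lens distortion. The numbers and weighting scheme are assumptions.

    TRUSTED_RANGE_M = 2.0
    MAX_RANGE_M = 4.0

    def measurement_weight(range_m):
        """Weight in [0, 1] for a single laser return."""
        if range_m <= TRUSTED_RANGE_M:
            return 1.0
        if range_m >= MAX_RANGE_M:
            return 0.0
        # Linearly fade out trust between 2 m and 4 m.
        return 1.0 - (range_m - TRUSTED_RANGE_M) / (MAX_RANGE_M - TRUSTED_RANGE_M)

    for r in (0.8, 1.9, 2.5, 3.5):
        print(r, measurement_weight(r))   # 1.0, 1.0, 0.75, 0.25
    ```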

  • Hi!

    Interesting - I never considered that temperature could introduce noise in the data by physically distorting the lens. But considering that the robot heats up during operation, it's also, to some extent, to be expected. Anyway, it is definitely a complex system with many things to consider.

    Yeah, I have the impression that the robot is extremely conservative about reaching the "I am lost" state. Actually, I must say I have observed it maybe only once or twice, and neither time did its attempts result in a correction of the positional offset. On the other hand, in those few cases the robot at least stopped trying to pursue the same goal over and over and adopted a visibly different (explorative) strategy. I think that being too conservative has another downside: the robot depletes its battery before it even reaches the state in which it thinks it is lost.

    Regarding path finding - my impression is that it improved in the latest firmware release. The robot plans better how to get from A to B and is usually more efficient in doing so (as long as it doesn't drift). It also visibly plans better how to clean areas that were previously skipped.

    Well, keep up the good work. I hope you will be able to update the FW in the next iterations to make the SLAM algorithm more robust. Maybe you could consider releasing beta firmware to those who apply for it. This would probably greatly increase the amount of feedback you can collect and let you see how the changes you make affect the robot's behavior.

  • raahlb Member, Moderator mod
    edited October 23

    A problem with the recovery mode is that it might already have added new features to the map at the offset positions, so when it tries to do a "soft recover" (turning in place) it re-observes these new, duplicate features and thinks it's properly localized (one possible way around this is sketched at the end of this comment). We're trialing some new functionality that could help with this right now, but it's too early to tell how well it will work, and we're not sure if it will make it into the next release.

    Beta releases would have been fun and helpful, but sadly there would be too much administration and other issues, so for now we're sticking to in-house beta testing.

    Thanks for the detailed feedback!
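
    To sketch one possible way around the duplicate-feature problem (simplified, with an invented confidence signal and thresholds - not a description of what our firmware does): buffer features observed while localization confidence is low and only commit them to the persistent map once the pose has been corrected.

    ```python
    # Sketch: hold back new map features while localization confidence is low,
    # so a "soft recover" (turning in place) doesn't re-observe features that
    # were added at offset positions. The confidence signal and thresholds are
    # invented for illustration.

    class FeatureBuffer:
        def __init__(self, min_confidence=0.8):
            self.min_confidence = min_confidence
            self.pending = []      # features observed while confidence was low
            self.committed = []    # the persistent map

        def observe(self, feature, confidence):
            if confidence >= self.min_confidence:
                self.committed.append(feature)
            else:
                # Keep it aside; don't let a possibly mislocalized feature
                # "confirm" the wrong pose later.
                self.pending.append(feature)

        def resolve(self, relocalization_offset):
            """After a successful relocalization, correct the buffered
            features by the found offset and commit them."""
            dx, dy = relocalization_offset
            self.committed.extend((x + dx, y + dy) for x, y in self.pending)
            self.pending.clear()


    buf = FeatureBuffer()
    buf.observe((1.0, 2.0), confidence=0.9)   # committed immediately
    buf.observe((3.0, 0.5), confidence=0.4)   # buffered: the pose is suspect
    buf.resolve((-0.4, 1.5))                  # corrected and committed
    print(buf.committed)                       # [(1.0, 2.0), (2.6, 2.0)]
    ```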

  • apj Member ✭✭

    @raahlb said:

    Slippery surfaces are a big problem for us, and improving dead reckoning on them is something that has been a main goal when we're looking at making a next generation.

    I'm not a patent lawyer, so I don't know whether they have a patent on this:
    I think iRobot handles this issue with the central black-and-white support ball.
    It has a known circumference, and the change from black to white gives an idea of motion; maybe they even check the angle of the "wheel" and thus get an idea of how, and whether, the robot is moving (a rough sketch of the idea follows this comment).
    By the way: changing the abysmally small support rollers to something bigger/softer on the Pure i9.x would be a vast product improvement and would hopefully reduce the driving noise, thereby improving relations with the neighbours living downstairs!
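
    A rough sketch of the castor-ball idea (I don't know how iRobot actually implements it, and the numbers here are invented): because the ball is not driven, it only turns when the robot really moves, so counting black/white transitions against its known circumference gives a distance estimate that is largely immune to driven-wheel slip.

    ```python
    # Back-of-the-envelope: distance estimate from a non-driven support ball
    # with alternating black/white segments. Invented numbers; not how any
    # specific robot actually does it.

    import math

    BALL_DIAMETER_M = 0.02    # assumed 2 cm support ball
    SEGMENTS = 8              # black/white segments around the circumference
    DIST_PER_TRANSITION_M = math.pi * BALL_DIAMETER_M / SEGMENTS

    def distance_from_transitions(transition_count):
        """Distance travelled (m) implied by the counted colour transitions."""
        return transition_count * DIST_PER_TRANSITION_M

    # The ball only rotates when the robot actually moves, so this estimate is
    # largely unaffected by driven-wheel slip on carpet.
    print(distance_from_transitions(130))   # ~1.02 m
    ```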

  • raahlb Member, Moderator mod

    Yes, we do use the wheels to estimate our motion. However, they slip on carpets; the robot can even slide sideways slightly. Hm, yeah, a non-driven wheel might be better - it probably wouldn't slip as much. The angle would be nice too, as it would also detect sideways motion.

    Ah, I think the plan is to do something about the support rollers, but not sure about the details.
