Kiefer Co

COMP 4180 - Intelligent Mobile Robotics Project

Sean and Kiefer's SLAM implementation!

Check out how Darwin (ROBOTIS-OP2) navigates the great maze of red solo cups.

See the first video for a peek into Darwin's mind and our localization strategy!

Based on several hours' work with metre sticks and some trigonometry, we mapped out relationships between what the eyes see (via OpenCV) and where cups might be.
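A rough sketch of what that calibration lookup could look like: a table of measured pixel rows vs. ground distances, interpolated at runtime. The numbers below are made up for illustration, not our real measurements.

```python
import numpy as np

# Hypothetical calibration table (illustrative values only): the pixel row of
# a cup's base in the camera image vs. the measured ground distance in metres.
# Closer cups sit lower in the frame, so rows decrease as distance grows.
PIXEL_ROWS = np.array([220.0, 180.0, 150.0, 130.0, 118.0])
DISTANCES_M = np.array([0.25, 0.50, 0.75, 1.00, 1.25])

def pixel_row_to_distance(row: float) -> float:
    """Interpolate a cup's ground distance from the pixel row of its base."""
    # np.interp needs increasing x values, so flip the decreasing pixel rows.
    return float(np.interp(row, PIXEL_ROWS[::-1], DISTANCES_M[::-1]))
```

In practice you would refit this table whenever the camera tilt changes, since the row-to-distance relationship depends on head pose.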

We create a cloud of estimates (shown in cyan with estimated field of view) that diverges over time with drift, and then weight each estimate by how closely its predicted view matches the current visual data.
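The weighting step might look something like this sketch: each pose hypothesis predicts a distance to a known cup, and hypotheses whose prediction matches the camera's measurement get high weight (a Gaussian likelihood here; the function name and `sigma` value are assumptions, not our actual code).

```python
import math

def weight_particles(particles, observed_dist, cup_pos, sigma=0.1):
    """Weight each (x, y, theta) pose hypothesis by how well the distance it
    predicts to a known cup matches the distance the camera measured."""
    weights = []
    for (x, y, theta) in particles:
        expected = math.hypot(cup_pos[0] - x, cup_pos[1] - y)
        err = expected - observed_dist
        # Gaussian likelihood: small prediction error -> weight near 1.
        weights.append(math.exp(-err * err / (2 * sigma * sigma)))
    total = sum(weights)
    return [w / total for w in weights]
```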

Good estimates live and spawn children, while bad ones are culled. We then average the cloud to get our best estimate, highlighted in dark blue.
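The cull-and-spawn step is standard weighted resampling; a minimal sketch (assuming the weights are already normalized, and averaging only x and y since headings would need a circular mean):

```python
import random

def resample_and_estimate(particles, weights):
    """Cull low-weight hypotheses and spawn copies of high-weight ones by
    drawing a new cloud in proportion to weight, then return the
    weighted-mean (x, y) as the best estimate."""
    n = len(particles)
    # High-weight particles are drawn (and duplicated) often; low-weight
    # particles are usually never drawn, i.e. culled.
    children = random.choices(particles, weights=weights, k=n)
    best_x = sum(w * p[0] for p, w in zip(particles, weights))
    best_y = sum(w * p[1] for p, w in zip(particles, weights))
    return children, (best_x, best_y)
```

After resampling, the per-particle motion noise (added on the next command) re-spreads the duplicated children so the cloud doesn't collapse to a single point.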

Once we have our map and location, we navigate by executing pre-recorded commands, such as turning or moving forward, and update all estimates accordingly!
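A sketch of that motion update, assuming each command is roughly "move this far, turn this much" with per-particle noise so the cloud drifts apart between observations (the noise magnitudes are invented for illustration):

```python
import math
import random

def apply_command(particles, forward=0.0, turn=0.0,
                  dist_noise=0.02, turn_noise=0.05):
    """Apply a pre-recorded command (metres forward, radians of turn) to
    every (x, y, theta) pose hypothesis, adding independent noise per
    particle so the estimates diverge with drift."""
    moved = []
    for (x, y, theta) in particles:
        theta = theta + turn + random.gauss(0.0, turn_noise)
        d = forward + random.gauss(0.0, dist_noise)
        moved.append((x + d * math.cos(theta), y + d * math.sin(theta), theta))
    return moved
```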

As we move, we also adjust the cups' imagined positions under the assumption that our "best guess" pose is correct, matching on-screen cups to cups in memory by distance (or creating new memory objects if we think a cup hasn't been seen before).
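That matching step is nearest-neighbour data association; a minimal sketch, with a made-up `max_match_dist` threshold for deciding when a cup is new:

```python
import math

def associate(observed, mapped, max_match_dist=0.3):
    """Match each on-screen cup (x, y) to the index of the nearest cup in
    memory; cups with no memory cup within max_match_dist are treated as
    cups we haven't seen before."""
    matches, new_cups = [], []
    for obs in observed:
        best, best_d = None, max_match_dist
        for i, m in enumerate(mapped):
            d = math.hypot(obs[0] - m[0], obs[1] - m[1])
            if d < best_d:
                best, best_d = i, d
        if best is None:
            new_cups.append(obs)
        else:
            matches.append((obs, best))
    return matches, new_cups
```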

We track the yellow tape that demarcates the field's borders with its own detection logic, but the general approach is the same. Darwin successfully traversed the cups and we passed!

–Kiefer