This was last week’s project: Building a Kalman filter-based IMU.
IMUs (inertial measurement units) are clever little devices which try to estimate an object’s absolute orientation (and sometimes its position) by examining the forces on the object.
Hobby drone and computer input IMUs generally look at acceleration data (which informs where “down” is), compass data (which points toward north in 3D space), and rate gyro data (which tells the axis and speed of spin). “Down” and “north” in combination can give a pretty accurate constraint on orientation, but unfortunately if there are any lateral forces (wind; turning), they will get mixed in with “down” and distort the estimate. Kalman filters use matrix math to make good use of the gyro data to correct for this. However, a constantly-accelerating drone could still be fooled about where down is.
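To make the "lateral forces distort down" point concrete, here is a toy sketch (my own illustration, not code from this project) of the standard accelerometer tilt formulas, showing how a sustained forward acceleration shows up as a phantom pitch even though the body is level:

```python
import numpy as np

def tilt_from_accel(a):
    """Pitch and roll (radians), assuming the accelerometer sees only gravity."""
    ax, ay, az = a
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    roll = np.arctan2(ay, az)
    return pitch, roll

g = 9.81
level = tilt_from_accel([0.0, 0.0, g])    # at rest: pitch and roll are both zero
fooled = tilt_from_accel([2.0, 0.0, g])   # forward thrust: a nonzero "pitch"
```

The second call returns a clearly nonzero pitch: the accelerometer alone cannot distinguish gravity from any other sustained force, which is exactly the ambiguity the filter has to resolve.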
Here I’ve tried to find out whether we can model the drone’s translation and take it into account when estimating the orientation. It turns out that even relatively poor and infrequent velocity data can constrain acceleration (and thus “down”) quite well. The difference in the quality of the estimate is plainly visible.
This is all done by mathematically simulating a 3D moving object using ordinary dynamics and battering it with Gaussian random forces, and then predicting what data a noisy sensor might return. A predictor algorithm using a Kalman filter (which has no knowledge about the original state) attempts to recover the true state to the best of its ability. The truth is rendered white here, and the estimate in red.
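The simulate-then-estimate loop described above can be sketched in miniature. This is a plain 1D linear Kalman filter with invented noise parameters (my own sketch, not the article’s full 3D/quaternion filter): the truth is battered by Gaussian random forces, and the estimator sees only noisy measurements of it.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # the sensor sees position only
Q = np.diag([1e-5, 1e-3])               # process noise: "random forces"
R = np.array([[0.05]])                  # sensor noise

# Simulate the truth, battered by Gaussian random forces.
truth = np.zeros((500, 2))
for k in range(1, 500):
    truth[k] = F @ truth[k - 1] + rng.multivariate_normal([0, 0], Q)

# The estimator never sees the true state, only noisy measurements.
x, P = np.zeros(2), np.eye(2)
errs = []
for k in range(500):
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    z = H @ truth[k] + rng.normal(0.0, np.sqrt(R[0, 0]))
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)                             # update
    P = (np.eye(2) - K @ H) @ P
    errs.append(abs(x[0] - truth[k, 0]))
```

Once it settles, the estimate tracks the truth noticeably better than the raw measurements do, which is the same truth-versus-estimate comparison rendered in white and red in the video.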
At the end you can see the same algorithm running on actual sensors. The real thing doesn’t use GPS yet, but the prediction is still pretty decent! (There is not that much sustained acceleration to throw it off in this video).
I’ve done some other cool things with the code which perhaps I’ll write up in the future: among them are a process noise model based on Gaussian random walks, and a nice extension of it to quaternions (which has a dramatic impact on the quality of the estimate!). I also make use of automatic differentiation, which makes a linearized extended Kalman filter (EKF) particularly easy and robust to implement.
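As one illustration of the automatic-differentiation point (a toy sketch of mine, not this project’s implementation): an EKF needs the Jacobian of its nonlinear dynamics at every step, and forward-mode autodiff with dual numbers produces it exactly, with no hand-derived derivatives.

```python
import math
import numpy as np

class Dual:
    """A value plus its gradient vector; arithmetic obeys the chain rule."""
    def __init__(self, val, grad):
        self.val, self.grad = val, np.asarray(grad, dtype=float)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o, np.zeros_like(self.grad))
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.grad + o.grad)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.val * o.grad + o.val * self.grad)
    __rmul__ = __mul__

def dsin(d):
    """sin for dual numbers: d/dx sin(x) = cos(x)."""
    return Dual(math.sin(d.val), math.cos(d.val) * d.grad)

def jacobian(f, x):
    """Exact Jacobian of f at x via one forward-mode pass per input."""
    n = len(x)
    seeds = [Dual(x[i], np.eye(n)[i]) for i in range(n)]
    return np.array([y.grad for y in f(seeds)])

# Example: pendulum-like dynamics for an EKF state [theta, omega].
dt = 0.01
def f(s):
    theta, omega = s
    return [theta + dt * omega, omega + dt * (-9.81) * dsin(theta)]

J = jacobian(f, [0.3, 1.0])   # the linearization an EKF would use as F_k
```

The payoff is robustness: change the dynamics function and the linearization updates itself, so there is no hand-derived Jacobian to get out of sync.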
Nice job, excellent work! Do you have a place where I can see the configs (schematics and models of the ICs?) and code?
I really, really appreciate your work, sir! It’s brilliant, neat, and easy to follow. It really helps me understand the Kalman filter, the topic I’m currently focusing on. Thanks a lot; please keep up your great work!
Thank you for these great articles on Kalman filters. I am working on a school project and I am very interested to see the actual code. Did you post it anywhere?
Thank you.
We really enjoyed your teaching style. Please keep it up!
Hello,
I just saw your video! Awesome!!!! Could you show me how to build the design I saw in your video?
I think this is what I need:
1) Accelerometer
2) Rate gyro
3) 3D compass
Or even better, could I pay you to build me one and have it shipped to me? I live in Texas. Thank you for taking the time to consider my request.
Dr Bonin
Excellent work. Did you share your Arduino code?
I am envious of your deep understanding of this maths.
Cheers
Hi Tim,
Inspiring stuff! Thanks for sharing. Your article popped a few questions:
GPS will give you ground speed and course only, so velocity is a horizontal vector. There is no way to correct vertical velocity using GPS, or is there? I reckon you only need the ground course to correct yaw. You cannot correct pitch and roll using GPS, or can you?
GPS refreshes at roughly 1 Hz. The average IMU will refresh at, say, 100 Hz. How do you use a Kalman filter when your GPS measurements are available only once in every 100 steps? Are you doing the correction step only once every 100 or so steps? How do you keep your P covariance from getting bloated if corrections are performed so infrequently?
1) Ground-projected velocity is not a limitation of GPS that I am aware of; my sensor will return three coordinates of velocity in either Earth-centered Earth-fixed XYZ or east-north-up coordinates. It is true that vertical positioning is much less accurate due to the effect of the horizon, but as I understand it the velocity is computed from (or augmented by) Doppler effects, so should be considerably more accurate than taking a position delta.
2) I simply construct the sensor matrix from the sensor readings that are available on the current frame. On frames where GPS data is available, the sensor matrix will be much bigger and contain more readings. On those frames, the uncertainty is narrowed considerably. Intermediate frames will have a smaller sensor matrix, as though the device simply doesn’t have a GPS. Those intermediate frames will tend to drift/“bleed” again in between, just as if we had a pure dead-reckoning IMU, until the next GPS reading comes in. This scheme is of course still strictly better (by a long shot, as you can see) than no GPS at all!
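That per-frame scheme can be sketched roughly like this (a toy 1D example with invented noise parameters, not the actual filter): the measurement matrix H and noise R are rebuilt every frame from whichever sensors reported, and on the rare “GPS” frames an extra velocity row is stacked in, collapsing the covariance.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update; H and R vary per frame."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

rng = np.random.default_rng(1)
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([1e-6, 1e-2])
H_fast = np.array([[1.0, 0.0]])   # every frame: fast but noisy position
R_fast = np.array([[0.5]])
H_gps = np.array([[0.0, 1.0]])    # once per 100 frames: accurate velocity
R_gps_var = 0.01

x_true = np.array([0.0, 1.0])
x, P = np.zeros(2), np.eye(2)
traces = []
for k in range(201):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    x, P = F @ x, F @ P @ F.T + Q
    if k > 0 and k % 100 == 0:
        # "GPS" frame: stack the extra velocity row into H and R.
        H = np.vstack([H_fast, H_gps])
        R = np.diag([R_fast[0, 0], R_gps_var])
        z = H @ x_true + rng.multivariate_normal([0, 0], R)
    else:
        H, R = H_fast, R_fast
        z = H @ x_true + rng.multivariate_normal([0], R)
    x, P = kf_update(x, P, z, H, R)
    traces.append(np.trace(P))
```

Plotting `traces` shows exactly the behavior described in the reply: the covariance drops sharply on each GPS frame and then bleeds back up during the dead-reckoning frames in between, so P never gets unboundedly bloated.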
Awesome! Great work! Could you please explain how you visualized the motion, I mean the simulation?
The rendering is custom OpenGL code written in C++.
This is amazing!
Excellent work, thank you very much! Your posts were the gateway drug that led me into some heavy filter applications for my physics PhD :D Do you by any chance have a sample of the code that you’re using for this estimation? Thanks a lot!