Kendra is a Business Development Manager at MicroStrain by HBK, with over 20 years of experience spanning mechanical engineering, product development, and business leadership in autonomous systems.
A Stanford University–trained Mechanical Engineer, she specializes in inertial sensing and navigation technologies, supporting OEMs in the integration of high‑performance solutions for GNSS‑challenged environments.
Kendra works closely with customers to optimize system performance, reliability, and time‑to‑market, bridging advanced sensor technology with real‑world operational needs.
Autonomous systems are increasingly being deployed in environments where satellite navigation can’t be relied on: dense urban corridors, underground tunnels, contested airspace, and industrial sites with significant radio frequency (RF) interference. In these conditions, global navigation satellite system (GNSS) signals can degrade, disappear entirely, or be deliberately manipulated. The practical question facing engineers isn’t how accurate their GNSS is under ideal conditions. It’s what their system does when GNSS stops working.
To dig into how modern sensor fusion architectures manage that problem, we spoke with Kendra Gallup, Business Development Manager, Inertial Systems at MicroStrain by HBK. Kendra works daily with original equipment manufacturers (OEMs) integrating inertial navigation into unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and robotic platforms.
Everyone talks about GNSS-denied navigation. What does “trusted localisation” mean in practice?
At its core, it means your system keeps behaving correctly, both physically and mathematically, even when you can’t rely on GNSS.
In a controlled demo environment, GNSS is usually rock solid. In the real world, it’s not. Urban canyons create multipath errors. Tunnels block signals. RF jamming disrupts reception, and spoofing attacks can inject false position data.
When any of those things happen, your navigation stack either degrades gracefully, or it falls apart. What determines which one happens is the quality of your inertial backbone. A well-designed inertial navigation system like the 3DM-CV7-INS gives you continuous motion propagation built on careful thermal calibration and a synchronised multi-IMU architecture. It doesn’t depend on satellites. It depends on physics.
External sensors then come in as aiding inputs. They correct for drift, but they don’t define the motion estimate. That’s the fundamental difference between a system that’s GNSS-reliant and one that treats inertial as the primary reference.
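To make that distinction concrete, here is a minimal sketch of the inertial-primary pattern. The names (InsCore, apply_aiding) and the constant-gain correction are illustrative assumptions, not any vendor API:

```python
import numpy as np

class InsCore:
    """Minimal inertial-primary loop: the IMU propagates the state every
    cycle; aiding sources only correct it when they happen to be available."""

    def __init__(self):
        self.pos = np.zeros(3)  # position estimate (m)
        self.vel = np.zeros(3)  # velocity estimate (m/s)

    def propagate(self, accel, dt):
        # Dead reckoning runs unconditionally: it depends on measured
        # physics, not on any external signal being present.
        self.pos += self.vel * dt + 0.5 * accel * dt ** 2
        self.vel += accel * dt

    def apply_aiding(self, measured_pos, gain=0.2):
        # Aiding (GNSS, LiDAR, VIO, ...) nudges the estimate toward the
        # measurement. It corrects drift; it never replaces propagation.
        self.pos += gain * (measured_pos - self.pos)
```

A real system replaces the fixed gain with a Kalman gain, but the structure is the point: remove every aiding call and the loop still runs.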
What types of external sensors are engineers using today?
The ecosystem has grown quite a bit. Depending on the platform, we commonly see visual odometry and simultaneous localisation and mapping (SLAM), light detection and ranging (LiDAR) odometry, wheel encoders on ground vehicles, barometric altitude sensors, magnetometers, Doppler velocity sensors, and high-performance external GNSS receivers used selectively.
Modern inertial navigation systems, including the CV7 platform, are designed to accept external position, velocity, and heading inputs from any of these sources.
Each one brings something specific to the table. LiDAR is geometrically powerful; cameras give you rich relative motion data in structured environments; wheel encoders are excellent short-term stabilisers on ground vehicles; Doppler sensors improve your velocity estimate over ground; and barometers help anchor your altitude estimate.
The key isn’t choosing one of these. It’s fusing them in a way that plays to each sensor’s strengths. When you do that well, you get a meaningful reduction in drift and much better endurance through GNSS outages.
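One way to make "plays to each sensor's strengths" concrete is to write down which states each aiding source actually observes; in a Kalman filter, that table becomes the measurement model. The entries below are ballpark characterisations for illustration, not vendor specifications:

```python
# Which navigation states each aiding source observes (illustrative only).
AIDING_SOURCES = {
    "lidar_odometry":  {"observes": ("position", "heading"),   "strength": "geometric structure"},
    "visual_odometry": {"observes": ("delta_pose",),           "strength": "relative motion in textured scenes"},
    "wheel_encoder":   {"observes": ("body_velocity",),        "strength": "short-term stability on the ground"},
    "doppler":         {"observes": ("velocity_over_ground",), "strength": "direct velocity measurement"},
    "barometer":       {"observes": ("altitude",),             "strength": "vertical anchor"},
    "gnss":            {"observes": ("position", "velocity"),  "strength": "absolute reference, when trustworthy"},
}
```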
How are these signals integrated?
In most embedded platforms, we use a loosely coupled architecture. External systems, whether a LiDAR SLAM engine or a stand-alone GNSS receiver, compute their own solutions independently. Those outputs are then injected into the inertial extended Kalman filter (EKF). The advantage of this approach is that it’s modular and relatively easy to extend.
But the real engineering work happens in the measurement weighting, covariance tuning, and innovation consistency checks. Not all measurements are equally trustworthy. A LiDAR solution inside a well-structured warehouse is highly reliable. In a dusty tunnel, that same sensor might be giving you garbage. The fusion engine needs to constantly evaluate how much weight to give each input based on the current environment.
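As a sketch of what those checks look like in practice, here is a single loosely coupled EKF update with chi-square innovation gating. The function name and the gate threshold are assumptions for illustration, not MicroStrain firmware:

```python
import numpy as np

def aided_update(x, P, z, H, R, gate=9.21):
    """One EKF measurement update with chi-square innovation gating.

    x, P : state estimate and covariance from inertial propagation
    z    : external measurement (e.g. a LiDAR SLAM position)
    H, R : measurement model and its (environment-dependent) noise
    gate : chi-square threshold, e.g. 9.21 ~ 99% for 2 degrees of freedom
    """
    y = z - H @ x                            # innovation (residual)
    S = H @ P @ H.T + R                      # innovation covariance
    d2 = float(y.T @ np.linalg.inv(S) @ y)   # squared Mahalanobis distance
    if d2 > gate:
        return x, P                          # inconsistent: reject measurement
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain: the input's "weight"
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Inflating R for a source that is known to be struggling, say LiDAR in dust, automatically shrinks the Kalman gain, which is exactly the "how much weight to give each input" decision.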
Here’s something that’s often underappreciated: the stronger your inertial core, which comes from good calibration and inertial measurement unit (IMU) array noise reduction, the more slowly your dead reckoning drift grows between aiding updates. That gives the entire system more room to breathe when aiding sources become unreliable. Better inertial performance translates directly into better resilience.
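A back-of-envelope calculation shows why. Position error from a residual accelerometer bias b grows roughly as ½·b·t² between aiding updates; the bias values below are illustrative, not CV7 specifications:

```python
# Dead-reckoning drift from a residual accelerometer bias: err = 0.5 * b * t^2.
g = 9.81
for bias_mg in (1.0, 0.1):                 # residual bias in milli-g
    b = bias_mg * 1e-3 * g                 # convert to m/s^2
    for t in (10, 60):                     # seconds without aiding
        err = 0.5 * b * t ** 2
        print(f"{bias_mg} mg bias, {t:3d} s gap -> ~{err:6.2f} m drift")
# A 10x lower bias gives 10x less drift over the same gap, or roughly
# 3x longer gaps (sqrt(10)) for the same drift budget.
```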
Timing rarely gets headlines. How critical is synchronisation actually?
It’s critical, and it’s consistently underestimated.
If a vehicle is moving at 15 metres per second, even a 1 millisecond timing error shifts every measurement by 1.5 centimetres, and those offsets compound quickly when you’re aligning LiDAR point clouds with inertial data.
That’s why high-performance inertial systems provide microsecond-level timestamp accuracy, pulse-per-second (PPS) synchronisation, configurable general-purpose input/output (GPIO) event triggering, and extremely low motion-to-message latency.
In real deployments, I’ve seen timing issues cause more system degradation than raw sensor noise. If your sensors can’t agree on when something happened, fusion becomes unstable. You can have excellent sensors and still get poor results if the timing is off. It’s not an optional detail.
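The arithmetic is unforgiving. A sketch, with illustrative numbers:

```python
def timing_misalignment(speed_mps, offset_s):
    """Position error introduced by a timestamp offset at a given speed."""
    return speed_mps * offset_s

print(timing_misalignment(15.0, 1e-3))   # 15 m/s, 1 ms  -> 0.015 m (1.5 cm)
print(timing_misalignment(15.0, 1e-6))   # 15 m/s, 1 us  -> 1.5e-5 m (15 um)

def interpolate_position(t, t0, p0, t1, p1):
    """Linearly interpolate an aiding pose stream onto the IMU timestamp t."""
    a = (t - t0) / (t1 - t0)
    return [(1 - a) * x0 + a * x1 for x0, x1 in zip(p0, p1)]
```

The second function is the other half of the problem: even perfectly stamped measurements still have to be interpolated onto a common clock before fusion.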
The stronger your inertial core, the more effectively your system can handle degraded or denied GNSS conditions. External aiding doesn’t define motion; it refines it. True robustness comes from an architecture where inertial navigation is the primary reference, and all other sensors act as intelligent corrections.
What happens when GNSS starts degrading?
A well-built system doesn’t treat GNSS loss as a binary on/off event. It manages the transition through four distinct phases: GNSS available, GNSS degraded, GNSS denied, and reacquisition.
When GNSS starts degrading, your uncertainty estimate increases. The filter gradually shifts more weight toward inertial propagation combined with whatever else is available, whether that’s visual-inertial odometry (VIO), LiDAR, wheel encoders, Doppler, or barometric constraints.
In full denial, like an RF-jammed tunnel, the system is running entirely on dead reckoning. Drift does grow. But with good bias stability and low noise from thorough thermal calibration, that drift growth is slow enough to stay manageable for many mission profiles.
The trickier problem is reacquisition. When GNSS comes back, the correction has to be smooth. Any jump or reset can destabilise your control loops and disrupt the mission. That smooth handoff is just as important as handling the initial denial gracefully.
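A sketch of that phased behaviour follows. The 2-metre cut-off and the noise multipliers are hypothetical thresholds for illustration, not product behaviour:

```python
from enum import Enum, auto

class GnssPhase(Enum):
    AVAILABLE = auto()
    DEGRADED = auto()
    DENIED = auto()
    REACQUIRING = auto()

def classify(pos_std_m, prev):
    """pos_std_m: reported 1-sigma horizontal accuracy in metres (None = no fix)."""
    if pos_std_m is None:
        return GnssPhase.DENIED
    if prev is GnssPhase.DENIED:
        return GnssPhase.REACQUIRING        # don't trust the first fixes fully
    return GnssPhase.AVAILABLE if pos_std_m < 2.0 else GnssPhase.DEGRADED

def gnss_noise_scale(phase):
    # Multiplier applied to the GNSS measurement covariance R in the filter:
    # larger R means less weight, so the handoff is gradual in both directions.
    return {GnssPhase.AVAILABLE: 1.0,
            GnssPhase.DEGRADED: 10.0,
            GnssPhase.REACQUIRING: 25.0,
            GnssPhase.DENIED: float("inf")}[phase]
```

Because reacquired fixes re-enter with heavy de-weighting, the filter pulls the solution back gradually instead of stepping it, which is the smooth handoff described above.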
What about spoofing specifically?
Spoofing is particularly dangerous because the signal looks valid. Your receiver thinks it has a good fix.
The defence is consistency checking. If a GNSS update suddenly reports a position that doesn’t agree with what your inertial propagation and velocity constraints predict, the innovation residual will exceed the expected covariance bounds. A robust fusion engine, like the one embedded in CV7-based systems, can down-weight or completely reject that measurement.
Inertial continuity becomes your reference for truth. In a contested environment, that’s what protects the mission from false position injection.
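A minimal version of that consistency check might look like the following; the 5-sigma threshold and the function names are assumptions for illustration:

```python
import math

def spoof_suspect(gnss_pos, ins_pos, ins_sigma_m, gnss_sigma_m, n_sigma=5.0):
    """Flag a GNSS fix whose jump from the inertially propagated position
    exceeds n-sigma of the combined uncertainty."""
    jump = math.dist(gnss_pos, ins_pos)
    bound = n_sigma * math.hypot(ins_sigma_m, gnss_sigma_m)
    return jump > bound

# A fix 40 m from where dead reckoning says we are, with ~2 m of combined
# 1-sigma uncertainty, fails the check and never enters the filter.
print(spoof_suspect((140.0, 30.0), (100.0, 30.0), 1.5, 1.2))  # True
```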
Can you share a concrete deployment example?
We worked with a UGV operating inside an RF-jammed tunnel. The architecture combined a CV7-based inertial navigation core, wheel encoders, LiDAR SLAM, and a trusted multiband GNSS receiver for open-sky operation.
When the vehicle entered the tunnel, GNSS covariance increased and eventually dropped out completely. The system transitioned to inertial plus LiDAR plus encoder fusion without any intervention. Drift stayed bounded throughout.
Near the tunnel exit, there were intermittent spoofing attempts generating inconsistent GNSS updates. Those were automatically down-weighted by the fusion engine. When the vehicle came back into open sky and GNSS reacquired cleanly, the reintegration was smooth. No reset, no discontinuity in the position estimate. That’s the behaviour you need in an actual deployment. The vehicle never lost localisation.
How would you summarise the value of external aiding injection?
External aiding isn’t about piling on sensors. It’s about giving your navigation system more ways to stay accurate when any single source becomes unreliable.
A navigation architecture built on precision thermal calibration, multi-IMU noise reduction, accurate time synchronisation, and flexible injection of external position, velocity, and heading inputs gives OEMs the tools to reduce drift significantly, extend their operating envelope in GNSS-denied conditions, protect against spoofing, and maintain smooth position estimates throughout a mission.
For any platform operating in the real world, building that kind of robustness into the core architecture from the start is a much better approach than trying to patch it in later.
This guide shows how modern UAV systems maintain precise positioning in GNSS-denied and degraded environments. Discover how inertial sensing, sensor fusion, and robust navigation architectures keep systems predictable even when satellite signals fail.
Download the guide to understand how to design navigation systems that stay accurate when GNSS doesn’t.
Loosely coupled integration fuses processed outputs, such as position and velocity, into the inertial filter. Tightly coupled integration injects raw measurements like GNSS pseudoranges directly. Loosely coupled architectures are generally preferred in embedded OEM systems for their modularity and robustness.