We're releasing an S100 firmware update with improved stand-alone accuracy. It may be useful for customers using S100 in places without RTK correction service who require better than 1-meter CEP stand-alone accuracy.
Below are a 24-hour scatter plot and a horizontal-error-magnitude time plot without RTK correction, tested in Taiwan, a location with higher ionospheric error than the US testing described later. The left-hand side is S100 using the new firmware, showing 0.6m CEP or 0.7m RMS accuracy. The right-hand side is another $2000+ receiver. The center of each scatter plot is the RTK-determined position; the square dot is the averaged position.
Unlike RTK receivers, where every RTK fix has centimeter-level accuracy, a sub-meter receiver's accuracy is statistical: each result may be better or worse than the listed spec number.
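To make the statistical accuracy figures concrete, below is a minimal sketch of how CEP, RMS, and 2DRMS can be computed from a log of per-epoch horizontal errors measured against an RTK-determined truth position. The function name and the data layout (separate east/north error lists in meters) are illustrative assumptions, not part of any S100 tool.

```python
import math

def accuracy_stats(east_err, north_err):
    """Common horizontal accuracy statistics from per-epoch errors
    (meters) relative to an RTK-determined truth position.

    CEP   : radius containing 50% of fixes
    RMS   : root-mean-square radial error (~65% for circular scatter)
    2DRMS : twice the RMS radial error (~95%)
    """
    r = [math.hypot(e, n) for e, n in zip(east_err, north_err)]
    r_sorted = sorted(r)
    cep = r_sorted[int(0.5 * len(r_sorted))]          # 50th-percentile radius
    drms = math.sqrt(sum(x * x for x in r) / len(r))  # RMS radial error
    return {"CEP": cep, "RMS": drms, "2DRMS": 2 * drms}
```

Note that CEP, RMS, and 2DRMS for the same receiver are different numbers at different confidence levels, which is why spec sheets quoting different measures cannot be compared directly.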
Take, for example, a sub-meter receiver using the P206 board, which has a listed accuracy spec of 0.3m RMS (67%) and 0.6m 2DRMS (95%) with SBAS (WAAS). Below are results from the USDA Forest Service NTDP GPS Receiver Accuracy Reports.
It's surprising that the accuracy of the 5-point or 60-point averaged result (0.71m or 0.84m) is worse than that of a single 1-point result (0.62m): a 15% or 35% accuracy degradation after 5-point or 60-point averaging. The more one tries to average out the error, the worse the accuracy gets. What's going on?
It's also puzzling that the accuracy of the 60-point averaged result (0.84m) falls short of the 0.6m 2DRMS (95%) spec and is far from the 0.3m RMS (67%) listed spec: a 180% or 40% deviation from the 0.3m or 0.6m accuracy that a casual user would expect from such specs. What's happening?
The figure below, taken from a segment of the above S100 24-hour test result, may help shed some light on this behavior.
1. If a single-point test is conducted at time A, it would show 0.6m accuracy.
2. If a 5-point-average test is conducted at time B, it would show 0.71m accuracy.
3. If a 60-point-average test is conducted at time C, it would show 0.84m accuracy.
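The behavior above can be illustrated with a toy simulation (synthetic numbers chosen for illustration, not S100 data): when the dominant error is a slowly drifting bias, such as ionospheric or multipath error correlated over minutes, averaging N points inside a bad window barely helps, because the bias is common to all N points and only the small fast noise averages down.

```python
import math
import random

random.seed(1)

# Synthetic horizontal error: a slowly drifting bias (correlated over
# roughly an hour) plus small independent noise at each 1 Hz epoch.
# bias_amp and noise_sigma are illustrative values, not measured ones.
def simulate_error(n_epochs, bias_period=3600.0, bias_amp=0.8, noise_sigma=0.1):
    errs = []
    for t in range(n_epochs):
        bias = bias_amp * math.sin(2 * math.pi * t / bias_period)
        errs.append(bias + random.gauss(0.0, noise_sigma))
    return errs

errs = simulate_error(7200)  # two hours at 1 Hz

# A 60-point average taken near the bias peak stays close to the bias
# amplitude: only the fast noise shrinks by ~1/sqrt(60), not the bias.
window = errs[900:960]
avg_60 = abs(sum(window) / len(window))

# A single epoch taken where the bias happens to be near zero.
single_good = abs(errs[0])

print(f"60-point average in bad window: {avg_60:.2f} m")
print(f"single point in good window:   {single_good:.2f} m")
```

This is why a 60-point averaged test at one time of day can legitimately score worse than a single-point test at another, exactly as in the USDA data above.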
Because sub-meter receiver accuracy is statistical in nature, short-term accuracy test results can be highly time-dependent, and misassessment of accuracy performance may result.
Without checking a sub-meter receiver's accuracy behavior against an RTK receiver, and instead taking the listed spec for granted, one might believe one is getting 0.3m accuracy while actually getting 0.84m or worse in a scenario like the above.
Similarly, without checking the Estimated Horizontal Error output of a sub-meter receiver against an RTK receiver, one might fall into the same misconception of taking the error estimate for granted as the true position deviation, which could lead to unexpectedly inaccurate results.
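One simple way to sanity-check a receiver's Estimated Horizontal Error is to log it alongside the true error measured against an RTK reference, then count how often the truth actually falls within the estimate. The function and the sample numbers below are hypothetical, purely to show the idea.

```python
def ehe_coverage(true_err, reported_ehe):
    """Fraction of epochs where the true horizontal error (vs. an RTK
    reference) falls within the receiver's reported Estimated Horizontal
    Error. For a trustworthy estimate this fraction should match the
    estimate's stated confidence level (e.g. ~0.67 for a 1-sigma value).
    """
    inside = sum(1 for t, e in zip(true_err, reported_ehe) if t <= e)
    return inside / len(true_err)

# Hypothetical logged values in meters, for illustration only.
true_err     = [0.2, 0.5, 0.9, 0.3, 1.1]
reported_ehe = [0.3, 0.3, 0.3, 0.3, 0.3]
print(ehe_coverage(true_err, reported_ehe))
```

A coverage fraction well below the stated confidence level would mean the receiver's error estimate is optimistic and should not be read as the true position deviation.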
All of this makes choosing a better-performing sub-meter receiver not as easy as simply comparing product spec numbers.
Given this understanding of sub-meter receiver behavior, and cross-referencing the USDA Forest Service test data, our S100 may be a good candidate for sub-meter applications when the centimeter-level-accuracy RTK feature cannot be used.