
Surveying Instruments – Building Trust Through Verification

Surveying instruments have evolved to provide reliable and repeatable precision and accuracy, but you cannot afford to forgo putting this to the test. That includes your GNSS gear.

In all manner of surveying, practitioners are wary of simply accepting someone else’s work. It is like a “prime directive”: independent verification. Likewise, when it comes to your instruments, independent verification and calibration are not only good practice but may also be a requirement.

Calibration of instruments by independent means is as old as surveying. The 18th-century French/Spanish “Geodesic Mission to the Equator” was undertaken to provide the key measurements needed to resolve contentious debates on the shape of the Earth. The mission painstakingly measured two degrees of latitude, on the ground and through triangulation. The 12km baseline was measured with wooden rods called a “toise”, and these were checked throughout the course of the survey with a master iron toise brought from France. As the wooden rods would wear, new ones could be fabricated to match the master. They also laid out huge, graduated arcs on the ground to constantly check their Quadrants (angle measuring instruments).

Checking a GNSS rover on a federal calibration baseline.

During the 1970s, laser electronic distance measurement (EDM) came into common use in surveying, first as standalone instruments and later integrated into total stations. At the time it was not uncommon for surveyors to chain some distances to prove the technology to themselves. This era also saw the expansion of public calibration baselines (CBL) as an aid in testing and calibrating such instruments. Many countries, states, provinces, departments of transportation, and private entities required periodic checks on a CBL, or annual or semi-annual submission of the instrument to qualified labs or service centers for calibration. CBL calibrations may be required in some situations to provide legal traceability of measurements.

Considering how well the instruments of today can achieve and maintain their specified precision, the idea of chaining-to-check seems like overkill and amounts to checking against a less precise method. Visits to a CBL and/or sending in an instrument for periodic calibration are, or should be, an essential part of the due diligence the operator of a total station must exercise. The requirements to do so are not as ubiquitous as they once were, nor are the mandated frequencies or CBL visits, but the resources are there, and the tests are relatively easy and quick to perform.

National Geodetic Survey certification of a calibration baseline (CBL) – source NMSU

For the testing of total stations, both for the EDM and for angular precision, such external resources are readily available, and they are truly a form of “control” (in terms of the scientific method). It is true that the distances published for a CBL were established with another EDM, but the process is quite a bit more stringent than what you do with your own EDM. For instance, the National Geodetic Survey (NGS) of the U.S., like its counterparts worldwide, requires teams to perform measurements spanning a week or more, using one of their specially calibrated instruments, precisely recording meteorological data, and submitting observations for rigorous processing. Though any measurement method will hold some amount of error, albeit extremely small in this case, it is safe to say that a CBL can be trusted as external control for purposes of instrument tests and calibration.

Electro-optical instruments, as far as testing and calibrating go, are relatively (no pun intended) straightforward. Even the nature of how total stations are often used, as in closed traverses, can reveal when something might be awry. Testing scanners is a whole different subject, though methods have been developed for that as well.

GNSS Testing Challenges

The situation is not so straightforward for satellite-navigation-based methods and instruments. Ironically, it has become more complicated with the widespread addition of multi-constellation capabilities in most surveying GNSS instruments. Users have found that, with the addition of more satellites and modernized signals, their new wave of GNSS rovers can fix in places they could not have with legacy gear—and this can and has resulted in some amount of overconfidence in results. The rovers can suddenly fix in sky-view-challenged places like urban canyons, close to structures, under various densities of tree canopy, and in high-multipath environments — exciting developments, but not a cure-all.

There is no magic bullet, and no one has a secret sauce that overcomes the basic physics of how GNSS works and the nature of error in challenging environments. Test-driving multiple makes/models is sure to reveal whether touted advantages actually make any significant difference. If anyone asserts 100% confidence, they either know little about surveying measurements and their analysis, or might be doing a little subtle gaslighting for marketing purposes. If you or your crews are unsure about the nature of error analysis, here is some suggested reading.

The jump from an older rover (e.g., one that could use only one or two constellations) to the latest wave, with multi-constellation support and improved RTK engines, can be quite an exciting shock. It is almost liberating to experience the speed and precision of a new rover in many more places. Yes, they can fix and give you great-looking results, but should you blindly trust this without doing some tests? Fortunately, most surveyors are aware of the risks of not verifying, and seek to test in various environments to build confidence in their new gear. There are many different kinds of tests, often the very same tests surveyors did when they got their first rovers long ago.

Testing can get quite elaborate. Manufacturers have test courses and baselines set up, and even do tests inside anechoic chambers with recorded observations (to make truly apples-to-apples comparisons). Geodetic authorities have test courses and antenna calibration baselines. Academic and commercial labs may do tests that none of us end users typically would, such as those for ISO Standard 17123-8.

Academic study of real-time kinematic solutions using multiple sets of equipment. Source – Newcastle U.

With the exception of geodetic surveying, nearly all surveying activities are relative in nature, and terrestrial surveying instruments (e.g., total stations and levels) are very well suited for this. Be it the courses of a boundary, the topo for a design project, or layout for construction, you are tying back to some form of control in a relative manner. Certainly, a GNSS base set up on a known point a very short distance away emulates this, and can be processed as such, but it is not quite the same as a physical on-the-ground measurement. It can yield very high precision, with multiple values tightly grouped (though not as tight as a total station shot), but the specter of inconsistencies in accuracy hangs over it. This is the very same reason why, even with the new rovers, surveyors avoid using GNSS for certain types of work. For instance, on a short leg of a boundary, the lower precision/accuracy of the results for both ends of the leg, combined, could yield an unacceptable inverse (see the sketch below). The improvements to GNSS have tempered this somewhat, but then again, you really need to test to see how good the new gear really is.
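As a rough illustration of why a short leg is a concern, here is a minimal sketch of the error propagation involved; the 2 cm horizontal uncertainty per point is an assumed, illustrative figure, not a specification for any particular rover.

```python
import math

def inverse_sigma(dist_m, sigma1_m, sigma2_m):
    """Propagate the horizontal uncertainty of two independently observed
    endpoints into the uncertainty of the line (inverse) between them."""
    sigma_line = math.sqrt(sigma1_m**2 + sigma2_m**2)
    ratio = dist_m / sigma_line          # relative precision of the leg
    return sigma_line, ratio

# A 30 m boundary leg with both ends shot by RTK at ~0.02 m (1 sigma) each
sigma_line, ratio = inverse_sigma(30.0, 0.02, 0.02)
print(f"leg sigma: {sigma_line:.3f} m, relative precision about 1:{ratio:,.0f}")
# -> roughly 0.028 m, about 1:1,060 -- far looser than a total station shot
```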

Checking in to Published Marks

This has always been a standard operating procedure for many surveys. If you have geodetic control nearby with values published in a reference frame you can match with your rover output, you can do some quick comparisons. The caveat is that not all published control is created or perpetuated equally. In many parts of the world, the stewards of geodetic frameworks and infrastructure are backing away from passive control; it is expensive to establish and maintain. And passive control could potentially be obsolete as soon as you drive away from setting it. In contrast, active control, like continuously operating reference stations (CORS), is constantly evaluating its positions. You might find a passive benchmark or horizontal control mark that was set by a federal agency with First Order methods, but the last time it was physically observed (to update published values) may have been half a century ago. Has there been subsidence? What has tectonic velocity done to the values? Some people mistake the convention used to report a position, or the name of the agency that established it, for some kind of guarantee of quality. A “State Plane” coordinate published in, say, 1990 could be centimeters to decimeters different by now.
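To put rough numbers on that, here is a back-of-the-envelope sketch; the velocity values are assumed for illustration, and for real work you would use the modeled velocities for your area (for example, from a tool such as the NGS HTDP utility).

```python
# Rough horizontal drift of a passive mark from secular (tectonic) velocity
# alone.  The velocities are assumed, illustrative values -- look up modeled
# velocities for your area before relying on numbers like these.
years = 34                 # e.g., published in 1990, checked in the mid-2020s
v_north = 0.005            # m/yr
v_east = -0.012            # m/yr (i.e., 12 mm/yr westward)
dn = v_north * years
de = v_east * years
horiz = (dn**2 + de**2) ** 0.5
print(f"~{horiz:.2f} m of horizontal drift")   # ~0.44 m in this example
```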

Those caveats aside, if you find a mark or marks that you determine represent current geodetic values, you can use them to check repeatability over time, and how well the rover resolves to a geodetic reference framework. To simply check repeatability, many survey firms just check in to recent project control or set a point or points at their office. These local points double as a way to check that the gear is working properly before heading out to the field.
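A quick repeatability/accuracy check against a published or project mark can be as simple as the following sketch; the coordinates are made-up sample values in whatever projected system your published values use.

```python
import math
import statistics

# Published coordinates of the check mark and repeated rover shots on it
# (assumed sample values, metres, in the same projected system).
published_e, published_n = 402134.218, 1523987.654
shots = [
    (402134.225, 1523987.660),
    (402134.212, 1523987.649),
    (402134.230, 1523987.671),
    (402134.219, 1523987.658),
]
offsets = [math.hypot(e - published_e, n - published_n) for e, n in shots]
print(f"mean horizontal offset: {statistics.mean(offsets)*1000:.1f} mm")
print(f"spread (st. dev.):      {statistics.pstdev(offsets)*1000:.1f} mm")
```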

Considering different makes and models of rovers? A simple test crossbar is easy to fabricate, with brass bolts of the appropriate thread. You can swap rover positions on the crossbar and/or rotate it.

Inverses

One simple test that many surveyors do with their GNSS gear is to set up a baseline with their total station and compare it to the inverse of GNSS observations. This is where a visit to an established CBL can be worthwhile, as there are often multiple marks set at different distances along the baseline. And if a CBL has some marks in or adjacent to tree canopy, you get to see how that will affect your results.
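Here is a minimal sketch of the comparison itself, assuming you have the published (or total-station-measured) distance and GNSS-derived coordinates for each mark; grid-to-ground scale and height reductions are ignored, and the numbers are illustrative only.

```python
import math

def compare_inverse(published_m, p1, p2):
    """Compare a published (or total-station) baseline distance with the
    inverse computed from two GNSS-derived local coordinates (E, N in m)."""
    gnss_inv = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    diff = gnss_inv - published_m
    return gnss_inv, diff, diff / published_m * 1e6   # difference in ppm

# A 150 m CBL segment vs. RTK observations on each mark (illustrative values)
inv, diff, ppm = compare_inverse(150.0031,
                                 (1000.000, 5000.000),
                                 (1149.995, 5000.012))
print(f"GNSS inverse {inv:.4f} m, difference {diff*1000:+.1f} mm ({ppm:+.0f} ppm)")
```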

Some CBLs, in addition to the extremely precise published distances, may also have geodetic values and elevations for each mark. You can check more than just the inverses in these cases. There are often marks set off the baseline for angle checks. You can set more marks off the baseline (if it is not prohibited), including marks under various tree canopy densities or near buildings, to test multipath performance.

Checking with Post-processed GNSS

For wide-open-sky test points, this is a great option. For GNSS-challenged locations, there are caveats.

If you do a long-session static campaign and post-process, this can yield a much higher-precision geodetic position than your real-time GNSS can—except when it might not (more on that later). Constraining to the active control (i.e., continuously operating reference stations, or CORS) is common practice and provides the link to a geodetic reference framework for your rover and its field software to resolve to. This process can come with a large time/cost burden but could be worth it to establish check marks or baselines. But be aware of the velocity in your area; you may need to update periodically.

There are automated post-processing services, like the Online Positioning User Service (OPUS) of the NGS in the U.S., PPP-based services like CSRS-PPP from Natural Resources Canada, and many others both public and commercial. These help, but you need to look at the details; you may see peak-to-peak values that exceed what you would expect from your own RTK setup. The challenge for some CORS-based automated post-processing services is that the CORS can be far apart and may use only one constellation. In contrast, many local/state/regional real-time GNSS networks (RTN) have higher station densities, and many have implemented four constellations. Those RTN that have automated post-processing services can deliver comparable (and often better) results than the wider-area services. For instance, the online service of a local RTN I tried can match a 5-hour OPUS solution with a ten-minute observation, and with 5 minutes in many cases.
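For reference, “peak-to-peak” in an OPUS report is simply the spread (maximum minus minimum) among the per-baseline solutions for each coordinate component, and you can apply the same idea to your own repeated submissions; the heights below are assumed sample values.

```python
# Spread of repeated post-processed solutions for the same mark; assumed
# sample ellipsoid heights (m) from several submissions/sessions.
heights = [105.432, 105.418, 105.441, 105.425]
print(f"peak-to-peak: {(max(heights) - min(heights))*1000:.0f} mm")   # 23 mm
```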

The Canopy Conundrum

The Galileo and BeiDou constellations reached what is considered full complement in 2020. This is truly when the full benefit of these new, modernized constellations could be experienced by anyone with a rover that supports some or all of the new signals. Adding the new constellations and signals was no easy feat for manufacturers, and many began the process years ago. It is also a moving target; for instance, the interface control document (ICD) for BeiDou-3 was only released in 2019, and the race to implement has been a scramble. Putting so many more satellites and signals into an RTK engine has meant many manufacturers designing new hardware, with substantial increases in processing power, to accommodate new ways to mix and match signals.

The end result, though, is that very recently developed GNSS rovers, which I like to call the “fourth wave,” deliver a rather striking user experience—almost scary good. One of the key differences users notice is the ability to get fixes under certain densities of tree canopy. The manufacturers have played this up to varying degrees. Most do not wish to overplay it and responsibly urge caution, but others (and it seems the late adopters especially) really play it up. The posturing over “best-in-canopy” has been going on for many years and is sometimes both comical and insufferable. The overall benefits of the new wave represent a new chapter for end users, yet one that brings a more pronounced hazard: overconfidence.

Simply getting a fix under dense canopy does not guarantee precision, repeatability, or accuracy. In this instance, though, the rover was set up over a point checked with a total station. Repeated testing in varied conditions can help you build confidence in the performance of your gear, and recognize the thresholds where you would choose a different method.

While most surveyors are fully aware of the nature of error in GNSS and the general principles of error propagation, the sudden ability to fix in canopy sets some users up to become overconfident in results. And this does not help the perception, held by some long-time practitioners, that certain users are simply “button pushers.”

Manufacturers have evolved survey rovers to include many features that help in evaluating results, as well as alternate positioning options. No-calibration tilt lets you lean out towards open sky, there are image-based offset point options on some rovers, and you can send data directly to online post-processing from the field, be that PPP or automated baseline processing. And with new rovers it is much quicker and easier to get offset points out in the open with confidence, to check with other instruments.

All of these developments can be valuable, though using GNSS-in-canopy to check GNSS-in-canopy is questionable. No matter how many different GNSS-based solutions you do in a particular spot, the sources of error (from the limited sky view and multipath) can be the same for each method; they are not completely independent of each other. You could do half a dozen different types of processing: PPK, PPP, rapid-static, collect and process at a high rate, etc.—and none of them erases the sources of error. There are very practical limitations on how well a rover can determine the quality of its own solutions. The quality could be excellent, but rather than get tied up in all kinds of hypotheticals (or marketing blather), you can test to find out just how well the gear does in different situations (see the sketch below).
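To see why stacking correlated solutions buys less than you might hope, consider the mean of N solutions that share a common error component: its uncertainty shrinks far more slowly than the familiar one-over-root-N. A minimal sketch with assumed numbers:

```python
# Averaging N repeat solutions only beats down the *independent* part of the
# error.  With correlation rho between solutions (shared sky-view/multipath
# error), the variance of the mean is sigma**2 * (rho + (1 - rho) / N).
sigma = 0.03                   # assumed 3 cm per-solution sigma under canopy
for rho in (0.0, 0.5, 0.9):
    for n in (1, 4, 10):
        sigma_mean = (sigma**2 * (rho + (1 - rho) / n)) ** 0.5
        print(f"rho={rho:.1f}  N={n:2d}  sigma of mean = {sigma_mean*100:.1f} cm")
# With rho = 0.9, even ten "different" solutions still leave ~2.9 cm -- the
# shared error does not average away.
```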

Trust but Verify

Back to one of the “prime directives” of surveying mentioned above: you do not blindly accept another surveyor’s results; you do a certain amount of independent verification, and you often do so by walking in the footsteps of the original surveyor. Modern GNSS rovers can be likened to a digital surveyor that can do in minutes what a whole crew might have taken days to do a century (or even decades) ago. That’s amazing (or frightening), but how do you build confidence in its skills? Do some testing.

Your new rover is a technological wonder, and it is no surprise you are impressed with it. Chances are that when you do some tests, it will prove to be everything you hoped it would be, and most warnings could be moot. I hope so, and in my own tests I’m finding great, repeatable results—verified with total stations. Yours should be as well… but you’ll never know until you put it to the test.

Related Articles

Accuracy and precision [Part I]: Are you being rigorous with your data?
