
Geospatial Technology Guiding the Development of Autonomous Vehicles

Driverless cars, also known as autonomous vehicles or self-driving cars, are vehicles equipped with sensors, cameras, and other technologies that enable them to navigate and operate without human input or control. These technologies rely on a range of geospatial data and information, including maps, satellite imagery, specialized sensors, and location-based services, to give the vehicle the information it needs to make decisions and navigate safely.

We will focus on some of the key attributes of self-driving cars and how geospatial technologies enable them to work.

Knowing where they are

Since the first ocean-going civilizations set out to explore the world in prehistory, there has been one big problem: how could they possibly know where they were on the planet’s surface? They could have used the stars, stick charts, the mariner’s astrolabe, or the magnetic compass. But all of these tools had flaws, and all of them depended on the user’s proficiency. We needed something that would take the guesswork out of navigation. Enter the Global Positioning System (GPS).

The development of GPS technology began in the Cold War-era space race as scientists noticed what came to be known as the Doppler Effect while observing radio signals from the Sputnik satellite. Through observing these signals they realized they could be used to pinpoint objects and their locations on the surface of the earth. In 1958 ARPA began work on Transit, the world’s first global satellite navigation system.

An image of the original global satellite navigation system, the Transit satellite developed under ARPA in the 1950s.

GPS as we know it today has come a long way: from its infancy in 1978, when it was solely a military system; to the launch of the final satellite for full global coverage in 1995; to 2000, when Selective Availability was turned off and public use became widespread; and on to the upgraded versions, such as differential GPS, that we use today to track almost everything, everywhere.

A great video on how GPS works. (From VirtualBrain on YouTube)

We now have near-pinpoint accuracy for every GPS-enabled object on Earth, and there are a lot of them. From the original GPS units we have branched out to use GPS in watches, smartphones, and even golf carts, all of which have their uses. The increasing accuracy of GPS has enabled a great deal of innovation, and while it has contributed significantly to our quest for the self-driving car, it is only one piece of the puzzle.
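The core idea behind satellite positioning can be illustrated with a toy 2D trilateration: given known anchor (satellite) positions and measured ranges, solve for the receiver's position. This is a minimal sketch under simplifying assumptions, not a real GPS solver (which works in 3D and must also solve for receiver clock bias); the function name and data are illustrative.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Solve for a 2D position from anchor coordinates and ranges."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Subtract the first circle equation from the rest to eliminate
    # the quadratic terms, leaving a linear system in (x, y).
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Receiver at (3, 4); ranges from three anchors are exact here.
anchors = [(0, 0), (10, 0), (0, 10)]
true = np.array([3.0, 4.0])
ranges = [np.linalg.norm(true - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))  # ≈ [3. 4.]
```

With more anchors than unknowns, the same least-squares call averages out measurement noise, which is one reason more visible satellites means a better fix.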

Knowing what’s around them

An autonomous car knowing where it is (roughly) on Earth is a good start. But until accuracy is exact, and the world stops moving around us, GPS alone will never be enough for these vehicles. Without a driver to guide it through a dynamic, unpredictable environment, a car needs eyes of its own.

These eyes come in the form of a plethora of different sensors, such as LiDAR, radar, and traditional cameras, that are used to collect data about the vehicle’s surroundings and provide the vehicle with a detailed, real-time view of its environment.

LiDAR – Light Detection and Ranging – is a sensor that sends out rapid pulses of laser light and uses their reflections to paint a picture of the world around the sensor. Bats are well known for using echolocation to navigate their environment, and LiDAR acts in much the same way, only using light waves as opposed to sound waves. By continuously pulsing light outward, the LiDAR system feeds a rapid, real-time picture of the environment to the system’s algorithms.
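The ranging step above can be sketched in a few lines: the pulse travels to the target and back, so range is half the round-trip time multiplied by the speed of light, and each (range, scan angle) return becomes a point in the sensor frame. Function names here are illustrative, not any vendor's API.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_to_range(round_trip_s):
    """Range from a pulse's round-trip (time-of-flight) duration."""
    return C * round_trip_s / 2  # divide by 2: out and back

def polar_to_xy(rng_m, angle_rad):
    """Convert a (range, horizontal angle) return to a 2D point --
    the basic step in assembling a point cloud of the surroundings."""
    return (rng_m * math.cos(angle_rad), rng_m * math.sin(angle_rad))

# A return arriving 200 ns after emission is about 30 m away.
r = tof_to_range(200e-9)
print(round(r, 2))  # 29.98
```

Real scanners repeat this hundreds of thousands of times per second across many laser channels, which is how the dense real-time picture described above is built.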

How the Google self-driving team teaches their car to navigate cities. (YouTube)

Radar works on much the same principle as LiDAR, and helps the autonomous vehicle build a better picture of its surroundings. However, it uses radio waves instead of light, and the frequency of those waves can be fine-tuned, letting cars operate at millimetre-scale wavelengths with high precision. Because light is prone to unpredictable refraction and reflection, radar is also more dependable in poor weather conditions, allowing the car to see farther and more reliably than LiDAR, even identifying hazards that may be invisible to the human eye. Radar and LiDAR work together to paint a complete 3D image of a vehicle’s dynamic surroundings in any road conditions.

While LiDAR and radar can cooperate to create an incredible 3D image of the car’s surroundings, nothing quite beats a traditional camera for the context and clarity its imagery provides. Using various lenses to give a complete 360-degree view around the vehicle, cameras can also provide long- and short-range views to be integrated into the systems and algorithms. This is ideal for obstacles and context, such as reading road signs. As clear as their pictures are, though, cameras cannot supply distances and velocities to the algorithms running the car, and are therefore well complemented by LiDAR and radar, as discussed above.
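One simple way to picture how these strengths combine is to match camera detections (which carry labels) with radar returns (which carry range and relative speed) by bearing. This is a deliberately simplified sketch; the field names and threshold are illustrative assumptions, not a real fusion pipeline.

```python
def fuse(camera_dets, radar_returns, max_bearing_diff=2.0):
    """Attach range/speed from the nearest-bearing radar return
    to each labelled camera detection (bearings in degrees)."""
    fused = []
    for det in camera_dets:
        best = min(radar_returns,
                   key=lambda r: abs(r["bearing"] - det["bearing"]))
        if abs(best["bearing"] - det["bearing"]) <= max_bearing_diff:
            fused.append({**det, "range_m": best["range_m"],
                          "speed_mps": best["speed_mps"]})
    return fused

cams = [{"label": "stop sign", "bearing": 10.0}]
radar = [{"bearing": 9.5, "range_m": 42.0, "speed_mps": 0.0},
         {"bearing": -30.0, "range_m": 15.0, "speed_mps": -3.0}]
print(fuse(cams, radar))
```

The output object knows both *what* it is seeing (from the camera) and *how far away and how fast* it is (from the radar), which neither sensor could report alone.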

Traffic Sign Recognition (TSR) is offered by some car manufacturers using their onboard cameras.

Knowing where they need to go

Having tackled both geolocation of the vehicle with GPS and environmental hazard navigation using the variety of sensors, a discussion of geospatial impacts on autonomous vehicles would not be complete without maps. Getting from A to B is literally the entire point of cars (sorry, car enthusiasts), so a feed of high-quality navigation data is of the utmost importance. Once a car knows where it is and knows its surroundings, it needs to know where to go and how to get there.

Navigation apps such as Google Maps, Apple Maps, and Waze are popular amongst drivers for helping us get from A to B as fast as possible (after all, who likes sitting in traffic?). But they also go a step further when it comes to autonomous cars, adding context and information that we all take for granted. Rules of the road that are second nature to us living drivers may not be so clear to autonomous systems, and are therefore reinforced by feeding this navigational data into the stream of inputs.

Path of travel and 3D object detection system seen in an autonomous vehicle. (Incheon National University)

These include rules such as one-way streets, left- and right-turn prohibitions, time-sensitive rule enforcement, and more. By having these rules and conditions programmed into navigation apps, we can add much-needed context for the autonomous pilots to consider, keeping our roads safe and moving efficiently (ish?).
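In a routing graph, rules like these can be carried as edge attributes that a legality check consults before path search. The schema below is an illustrative assumption, not any particular map provider's format.

```python
def edge_allowed(edge, hour):
    """Return True if this road segment may legally be used at the
    given hour (24h clock)."""
    if edge.get("one_way_against"):       # travelling the wrong way
        return False
    window = edge.get("closed_hours")     # e.g. no entry 7-9 am
    if window and window[0] <= hour < window[1]:
        return False
    return True

edges = [
    {"name": "Main St", "one_way_against": False},
    {"name": "Elm St (wrong way)", "one_way_against": True},
    {"name": "School Zone", "one_way_against": False,
     "closed_hours": (7, 9)},
]
legal_at_8am = [e["name"] for e in edges if edge_allowed(e, hour=8)]
print(legal_at_8am)  # ['Main St']
```

A route planner would simply skip disallowed edges during search, so a prohibited turn or a time-restricted street never even appears as an option.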

Aside from the obvious rules of the road, some of the innovative features of these navigation apps are also worth mentioning:

  • Selecting the most fuel-efficient route, enabled in Google Maps, which contributes to a lower carbon footprint and saves money for businesses that hope to run fleets of autonomous vehicles.
  • Updating with live traffic conditions and offering alternative routes as blockages occur ahead.
  • Downloading offline maps so that vehicles can navigate safely and independently regardless (and in case) of network connectivity issues.
  • Making time-based navigation decisions, so that vehicles required to arrive at a certain time can depart with enough margin based on average route conditions.
  • Updating and searching for stops along the way, for logistics solutions and on-the-fly navigation changes.
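Features like fuel-efficient routing ultimately come down to scoring candidate routes with a weighted cost. The sketch below trades travel time against fuel use; the weighting scheme and data are illustrative assumptions, not how any particular app scores routes.

```python
def best_route(routes, fuel_weight=0.5):
    """Pick the route minimizing a blend of time and fuel cost.
    fuel_weight=0 optimizes purely for time, 1 purely for fuel
    (fuel scaled by 10 to put litres on a minutes-like scale)."""
    return min(routes,
               key=lambda r: (1 - fuel_weight) * r["minutes"]
                             + fuel_weight * r["litres"] * 10)

routes = [
    {"name": "highway", "minutes": 25, "litres": 3.0},
    {"name": "surface streets", "minutes": 32, "litres": 2.1},
]
print(best_route(routes, fuel_weight=0.0)["name"])  # highway
print(best_route(routes, fuel_weight=1.0)["name"])  # surface streets
```

Live traffic updates would simply refresh the `minutes` estimates and re-run the same comparison, which is how an alternative route can win mid-trip.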

What’s next?

Self-driving cars have come a long way aided by innovative developments in geospatial technology and engineering. But there is still a long way they can go. As we continue to see improvements and new implementations of the technology there is no doubt they will continue to improve. How long until they become ever-present in our society? What other technologies will aid in their development? Let us know your thoughts on the geospatial impacts on self-driving cars. We look forward to the discussion.


