A Conversation With The Reality Capture Guy
Over a very chatty lunch with Paul Burrows, known globally in geo-social media as ‘The Reality Capture Guy’, many subjects were covered: trends in reality capture, sensor integration, AI, automation, life, the universe, and everything…
Paul Burrows, Senior Technical Sales Manager Europe – Reality Cloud Studio, powered by HxDR, Leica Geosystems (part of Hexagon), and Gavin Schrock, land surveyor, geo-geek, and consulting editor for GoGeomatics.
Schrock: “When I mentioned to folks that I was meeting you while in London, they offered many discussion topics. Chiefly, though, folks were interested in hearing about trends in sensor technologies. We’ve just experienced the most impactful decade for developing such technologies for the geospatial sector—things we could not imagine would become a reality in such a short amount of time. However, I suspect there is more than one trend interwoven into what we might shorthand as ‘sensor technologies’.”
Burrows: “There are a lot of new sensors, certainly from us, and different companies. We keep evolving—more, smaller, and smarter sensors. But the key thing is that the software must move at the same pace. When we talk to a customer nowadays, they place 25% of the importance on the hardware and 75% on software workflows. It has to be that way; none of the magic will happen without software that is smart enough to stay a step ahead of the flood of data that reality capture sensors can produce.”
Schrock: “I see examples of where some outfit develops a sensor in search of a problem, but also in search of software that can make it useful. I’ve also seen examples of where the software came first. Is this a common thrust, with the needs driving software development also driving sensor development?”
Burrows: “We had some customer meetings, for example, this past year, where several customers identified a gap, or saw a need, that maybe we hadn’t quite got the product to fill the niche right now. So, they went off and developed their own platform. Whether it was using off-the-shelf code or using open-source viewers and all these bits and pieces, they eventually realised that there was a limitation, not only in the technology but also in their development skills. Plus, when it comes to sensors, those have become so complex and sophisticated that a full R&D team and manufacturing capacity are needed. That’s not to say there aren’t some amazing self-developed solutions out there, but there are practical limitations.
“Then they come back to us and say: ‘Okay, we need a Hexagon solution now, we need it to be tied to our sensors, and we need it to work’. We do see this big shift now.”
Schrock: “It’s great to see R&D being driven not only by customer feedback and market trends but, in some instances, customer-driven innovation. When people recognise a need, they act. Right now, there is a lot of focus, and rightfully so, on data management, and collaborative environments, including sensor integration. This makes sense as reality capture produces massive amounts of data, so much so that legacy processing and data handling methods and tools can never keep up. Has the cloud held its promise in helping solve these challenges?”
Burrows: “We used to talk about quite a long transition from desktop to cloud, and I don’t think it’s so much of a transition. It’s more a symbiosis—how things work together. We always talk about the key workflow environments: field, desktop, cloud, and the solutions that then work across those technologies. It has evolved rapidly into this symbiosis across sensors and systems. Some steps can be performed in multiple places; you can choose which best suits your workflows instead of how legacy processes and compartmentalisation often dictate workflows.
“For example, you can register scans in the field, you can register on the desktop, and now you can register, to some degree, in the cloud. They’re all, at the moment, slightly different, but the same algorithms are used in the background. What we’re seeing now is more weight on the cloud aspect, whether that’s storage or collaboration, and we’re processing more with Reality Cloud Studio. We’re seeing more conversations around the cloud compared to where we were two or three years ago.”
Schrock: “The more I’m exposed to Reality Cloud Studio, the more I see that it might be able to heal some of the wounds I have from working with legacy, compartmentalised software and processes. I’ll be doing a write-up on that soon and will pick your brain for certain details. But I digress…
“A decade, or even a few years ago, fewer applications lent themselves well to the cloud; for instance, infrastructure life cycle elements with substantial data capture/processing functions. The situation, though, has changed rapidly. Provided there is always an option to work disconnected when needed, I would expect to see more users integrating the cloud into workflows.
“The cloud is enabling a broader range of end uses and users. What are some examples of these new types of users, and how do they differ from the more traditional end users?”
Burrows: “Reality capture is indeed a very much bigger space now. One way I describe respective user communities is ‘track one’ and ‘track two’ type customers. Not that one is more vital or sophisticated than the other, just different end-uses and, in some cases, specific responsibilities, as with surveyors.
“For instance, we’re seeing more reality capture in the media and entertainment space, and more in the public safety space. People are becoming aware that reality capture technology is far more democratised and accessible than it ever was.”
Schrock: “I like to refer to some as ‘prosumers’ (a term I did not invent, but I like it). Prosumer, to me, means you’ve got consumer-level accessibility to the technology, ease of use, yet verifiable quality and precision of what you’re going to be getting. And there are professionals doing something with it. There might be specific demands on users that might be characterised as track one, like surveyors. They need to meet the legal requirements of boundary and cadastral work, but there can be serious expectations for track two users as well. For instance, public safety doing forensic work, and filmmakers with megabuck projects on the line.
“Interestingly, this explosion of reality capture found some of its earliest end uses in the field of archaeology. Many of the first users of laser scanners were archaeologists, and the industry has many from that background in its ranks. Leica Geosystems has some deep roots in the field of archaeology, through connections to Cyra, which developed one of the first viable laser scanners under the leadership of Ben Kacyra, founding director of the historical preservation organisation CyArk. Justin Barton, director of product management software for Leica Geosystems reality capture, also hailed from CyArk. I understand that you have some roots in archaeology as well.”
Burrows: “I did a year of computer animation, but I didn’t like the programming aspect or the maths. I was very much more into the artistic side of it and wanted to pursue this from a creative perspective. I would not necessarily need a degree to pursue this, but I wanted to carry on doing a degree. I decided to do archaeology. It was really interesting, because there was this kind of fusion off the back end of knowledge coming out of a classical degree like archaeology and then my deeper knowledge around spatial technology.
“I ended up working at the Institute of Archaeology and Antiquity at the University of Birmingham—and they had laser scanners. I just started playing with them. However, I was soon tasked with building a business around the scanners they had bought. Then, one of the guys from Leica Geosystems in the UK came along and offered me a good job. Yes, so many people in the industry side of laser scanning came out of archaeology, it was one of the first places where scanners showed up, which is ironic as there was not much money in that field. However, the use of scanners by archaeologists for preservation is of course essential.
“Then people flipped the idea and thought: ‘This could be used for process, plant, manufacturing, surveying, engineering, construction, and more’.”
Schrock: “You’ve developed quite a following online, not just in the ‘Leica Geosystems-sphere’, but in the broader ‘geo-space’. A GIS acquaintance asked me to take a selfie with you. I’ve been following your posts for years, to see what’s new in reality capture, especially as you promote examples of what people are doing with the technology.
“I’m so happy to see that we are past the early days of scanning and the ‘hey, look what we scanned’ novelty projects and articles, where folks were not necessarily focusing on real-world applications. Now, there is serious focus for all the sectors we discussed earlier.
“How did you get involved in this kind of outreach, and how does that mesh in with your other duties?”
Burrows: “It’s kind of a byproduct of everything. Having that social media presence and connectivity to your market is nice because there’s also an element of the feedback loop. And it’s much more accelerated than traditional feedback mechanisms.
“When you start talking to people, it’s the preferred way they like to learn. You go and ask them, and they say: ‘I want a video’. Among the most popular are videos like a very basic guide to Leica CloudWorx. We host many videos on our official sites, but many people want to be able to find them simply through YouTube searches.
“My primary focus presently is on Reality Cloud Studio, which recently launched into the market. It’s all about expanding the business very, very quickly. If you’re in the U.S., Canada, or Europe right now, if you buy any laser scanning hardware, you get Reality Cloud Studio included as part of the package. So, we’ve very quickly got thousands of users being added to the platform.
“So that’s one part of my job, to make sure that everything is working and everyone’s on-boarded, that training and other materials like FAQs are available. This is an example of where my social media presence has proved invaluable.
“I collaborate closely with our reality capture software teams; so, I interface with all the product management teams and I’m probably one of the people in those groups that is closest to the customer base, and that comes through social media.
“That really benefits us. If I start reading threads on the Laser Scanning Forum about a certain request, then that will eventually filter back down into the product management group. And we’ll try to make sure that we either log it or develop it and push it out to market. This works well because we’re lucky to have a very competent product management team.”
Schrock: “As a geospatial writer, someone called me a ‘geek translator’. In your work, I see elements of being a kind of docent, evangelist, and guide to help others explore technologies, subjects, and concepts.”
Burrows: “I try to take that very complex stuff and make it digestible. Whether it’s a webinar, step-by-step guides, or just talking to people on LinkedIn. The technology can be quite daunting, and the job is to understand it on behalf of many people. So, how do we turn that into a technology that, for our audience, might become a commodity, used on a day-to-day basis, and truly delivers increased productivity? And how can it make them money?
“For me, that’s an important part of what I do, in terms of like you said, the evangelising aspect. Because you can have the most important, incredible technology, but if you can’t simplify it, and you can’t say it in broadly digestible terms, then you’re not going to sell it – simple as that!”
Schrock: “The whole ‘web-o-sphere’ has made it possible to find educational content through so many channels. I’m not saying everyone should be autodidacts and eschew formal training or schooling—there is no substitute for that in many instances. However, there are aspects to geospatial, like reality capture, that evolve so fast that they may not be reflected in formal courses for some time. Going online may be the only way you can learn about it in the moment.
“That works, as long as people jumping into such technologies have at least some foundation in the subject. You mentioned that your background in animation provided a foundation. I was a film major, I surveyed to pay for school, then found it to be a more promising career path. But I did learn about a few things in the course of those studies that helped me understand evolving surveying tech, like computing and digital imaging. What aspects of your education did you find helped the most?”
Burrows: “Mostly, it’s having a base understanding of just how things operate and how projects, systems, and processes work. We also did a module on the principles of computer systems, and it’s something that I never really thought that I would need to know. But now, when you’re thinking about GPUs, RAM requirements, etc., it’s very useful to have that kind of high-level information so you know what people are talking about. I can talk to very technical software people and understand what they’re saying without having to go and read up on it afterward.”
Schrock: “People scoff at the teaching of the gaming side of computing, yet gaming tech is a driving force in nearly every high-tech industry and drives a lot of R&D in processing and visualisation.
“Circling back to the various end-user communities and how reality capture has changed these. I see parallels in the way gaming tech has influenced the IT world, and how reality capture has reshaped professions like surveying. Presently, there is a lot more demand for reality capture and construction-related surveying tasks than for the traditional core of, say, cadastral work. I even hear a lot of young surveyors say that they prefer to label themselves as ‘reality capture specialists’.”
Burrows: “I think it’s really interesting because I feel like a lot of surveyors feel that their specialism has been de-skilled. That’s maybe not the right word, but there are a lot of people out there who are just laser scanner operators. Perhaps they’re not actually surveyors. If you were to ask them to set up a control network to attach this scan data to… They might not know, because that’s not what they’ve been trained to do. They’ve been trained to go out and press as many buttons as possible, as quickly as possible. They’ve been trained to go and walk around this building and then let someone else process the data.
“That means there is also this huge value attributed to true surveying. There was a post on LinkedIn recently, quoting some guy in the U.S. who said: ‘someone quoted me $300 for a boundary survey’. A surveyor responded: ‘No, no, no; the minimum for a good boundary survey without any complications is about $2,000 – $3,000’. Where is this coming from? Is this just someone coming in at the bottom of the market, who thinks they can do it on the cheap, devaluing the work of serious practitioners? It’s like measured building surveys in the U.K.; going cheap is not going to turn out well. I think most people get it: you only get it done wrong once.”
Schrock: “You touched on the democratisation of reality capture; it’s like so many other technologies. I always go back to the analogy of typing classes in schools and typing pools in offices. Now everybody’s got a keyboard on their desktop. And GIS, in the early days, it was command-line and utterly complex. You just about had to have a degree in computing as well as in geography. Now GIS is another icon on the desktop that almost anyone can participate in. There are open-source options now as well.”
Burrows: “It’s gone that way with UAVs. Fairly easy to get licensed or certified, bring the technology to your firm or start your own. Now, just about anyone can pull one out of their backpack, and off they go. We have one in our house, a small one, but we have one.”
Schrock: “Same here. Our kids introduced us to a lot of new tech through their varied interests in gaming, animation, filmmaking, illustration, 3D modelling, etc. There’s a level of reverse mentoring going on with regard to tech. Not only from the kids but the very bright young folks entering the geospatial space. You mentioned you had youngsters as well.”
Burrows: “Our oldest has just turned 14 and the twins are 10. Incidentally, they’re very tech-savvy, which I think is the same for a lot of kids these days. When I see how they approach things, with all of the advances they apply so easily, I think: ‘That’s not how we did it’. I even find myself saying this to colleagues. You know, 20 – 30 years ago, that’s not how we did it.
“My kid does all of his homework and submits it via a computer rather than writing it all out; there are both massive benefits and potential negatives. You know, he’s not very good at handwriting, but he’s fantastic at being able to do everything on a computer. But on reflection, when I think about my day-to-day work, when do I use a pen and paper? For the occasional note in a meeting?”
Schrock: “I’m happy there are so many things we do not have to do anymore. I’m not a surveyor who pines for the days of lugging heavy kit over hill and dale, waiting hours or days for decent satellite geometry, and hand-reducing things, etc.
“I don’t view technology and automation of mundane tasks as devaluing any of the geospatial professions. No matter what aspect of geospatial, the fundamentals of precision and accuracy can be adhered to easily, in some cases even more easily than in the analog days. You can still stand behind your data.
“It is all too easy to dismiss youngsters as ‘button pushers’. Those buttons deliver massive productivity gains. That is reality capture in a nutshell. ‘Button pushers’ is also a way some traditional practitioners characterise the ‘track two’ users you outlined.”
Burrows: “There’ll be the consumer and the prosumer, or the professional and the consumer—whatever the split looks like. Because there is always, for us, going to be a need to sell to a market that isn’t a traditional surveying customer base. Surveying is in the DNA of Leica Geosystems and influences what we do as strongly as ever, but it is a finite customer base. So how do we expand?
“There’s loads of stuff happening in the background in terms of really exciting software-based stuff, around virtual production, and how we leverage the cloud. And how do we tie in with external companies who are all already keyed into that space? What partnerships can we forge there?
“I think there’ll always be the two ends of the spectrum. I feel that we have to address that, both in hardware and in software. With Reality Cloud Studio, for example. I did a LinkedIn post about how I can send data from my device by 5G to the cloud. It’s registered, it’s meshed. I can do animations. I can download it if I want to. I can stream it into other applications if I want to. I haven’t had to do anything, apart from upload it.
“Say if I want to scan a church for renovation. I do 20+ setups, no control, no targets. I just set it up, do the scans, and upload it to the cloud. It’s automatically registered, and automatically meshed, and I have a deliverable that’s good for some people, like an architect client that just needs a high-fidelity 3D model. However, a professional surveyor would turn around and go: ‘Okay, why is this not on control? Where’s your registration report? Where’s this, where’s that?’ Perfectly valid questions for many types of work. That is why we accommodate ‘track one’ users who need to be able to drill deep into the interfaces and data.
“There are going to be professionals that do very specialised scans. But somebody might say: ‘I can scan, it can’t be that difficult’. The specialised work will need to go beyond just the physical task of scanning. The end-user might say: ‘This is like drafting; I did a stint as a drafter with pen & ink, it can’t be that different’. That’s missing the point; plus, engineers do much of their drafting themselves, with a lot of automation and rules-based engineering in design software.
“The mindset of users and clients needs updating, to recognise the strengths and limitations of tools, and to learn about solutions they might not be aware of. And, of course, who should be driving the gear.
“I think that whoever the clients are, professionals or semi-professionals will be able to access and benefit from working with the data in the cloud. They can stream that down into their CAD solutions, design, and pass that along as a model for the construction phase. However, a process and plant customer is not going to do the workflow that I just described. And they can’t, because they’ve got strict compliances. It’s critical infrastructure, so it has to be geo-referenced, put on a control grid, etc. Call in the ‘track one’ professionals.”
Schrock: “Leica Geosystems and Hexagon have been in the cloud space for quite some time. However, Reality Cloud Studio as a product is relatively new. Please give us a ‘state of’.”
Burrows: “It is in an early stage in its life, but we are continually adding more and more services and tech there. This may not be a long-drawn-out process; we already have algorithms for scanning for specific end-uses. For example: deviation reports, progress monitoring, tolerance checking, clash-detection, and more.
“We’ve already got the SDKs [software development kits], which we can pull from other parts of the business. We just need to cloudify them and then put them within the solution with a nice user interface. This goes for the field tools, the desktop, and the cloud. It doesn’t matter whether you’re in the field; you should be able to register to the same level of quality and accuracy as if you were sitting in front of your desktop machine. The same with the cloud. It depends on the user that you’re working with. At the minute, some of the tools in the cloud are pretty basic, but full functionality is coming fast.”
Schrock: “Construction is booming, globally. As is the digitalisation of construction. It has to. Legacy methods could never keep up with demand. I see some amazing collaborative environments being used for managing huge infrastructure projects and the massive amounts of data involved. This includes the cloud and/or syncing if the teams need to work disconnected on occasion. So many amazing packages for VDC [virtual design and construction], 4D construction, BIM, etc.
“However, a weak area in these wired construction environments seems to be the timely capture, processing, integration, and analysis of spatial data. Is this an area of focus for your firm?”
Burrows: “Construction is an interesting area. With the tight timelines, many construction firms gave up on having surveyors do the scanning. They still call them in for crucial things like project control, but not the mundane parts of scanning. Typically, they find a savvy construction worker who can take on the scanning. There’s sort of a reality capture class within construction firms. Some of them are setting it up, others have had this in place for a while, putting scanning under their VDC group. They can get scans done in-house and rapidly.
“It’s already a very big market for us, probably one of the biggest sectors that we sell to on the reality capture side of the business. This is why, when we’re starting to build out the tools, whether it’s field, desktop, or cloud, we will have that as a focus. Construction is not only a huge portion of our customer base now, but also where we want to grow, because we know there are still loads of customers who are far behind the curve and aren’t yet doing this.”
Schrock: “I see this a lot in what you’ve put out in recent years. The Leica BLK2GO PULSE can compute stockpile volumes onboard. I tested the Leica AP20 tilt prism pole, and it is the cat’s meow for staking, etc. How about other sensor combinations?”
Burrows: “Another thing we need to be mindful of is that not every customer is going to want, or need to rely on ‘scan, scan, scan’. You know, keep scanning throughout the construction process and doing the progress monitoring reporting with the scanner – it’s not always the most efficient or effective approach.
“Sometimes what they’re doing is a progression of different surveying and capture methods and tools. They’re surveying the control with total station, GNSS, levels, etc. They’re putting the project on a grid, scanning to get a complete site model, but then the daily progress stuff might be done with a 360 camera on a hard hat of a worker or workers. The site gets captured just from their day-to-day walking around.”
Schrock: “I hear this is called ‘crew sourcing’. I had heard the term decades ago but now that hardware, software, and AI can make this idea of continuous representation of reality possible, I hear the term more often. One developer even trademarked the term this year. The challenge though, is how to ingest and make sense of the data.”
Burrows: “True, how do you then take the images and other live sensor data and convert that into 3D, and how do you turn that into something representative of progress in a 3D system, like in Reality Cloud Studio? An area of our focus for the future is how to look at different types of data sources and democratise them within that platform. One advantage of the cloud is scalability; we can develop what will be needed to contextualise these constant streams of sensor data.
“I don’t want to belabour a point, but we’re going to be doing more stuff in Leica Cyclone FIELD 360 for example. More of the verification, and construction-type workflows so a customer can stand there, scan a room, localise it, align this with the design model, and identify work that needs to be done on the spot. They can add annotations for the areas where there are problems and push a report directly as a PDF off the back end within the field software, without ever touching a desktop or the cloud.”
Schrock: “‘Everything, everywhere, for everyone’ was a mantra I heard when I interviewed folks in the reality capture division during a recent visit to the Heerbrugg, Switzerland, campus of Leica Geosystems. The division president, Juergen Mayer, gave a great explanation of this idea of ubiquity across platforms.”
Burrows: “Exactly. The idea is that in the future when a customer invests in registration tools, they are not bound to one single environment. They can choose which one fits their preferred workflow for any given project or task.”
Schrock: “One of the big changes we’ve seen in reality capture data processing in just the past few years has been a leap forward in point cloud classification [PCC]. Yes, that has been available, in one form or another, for nearly two decades. But the latest iteration is so much more powerful. Forgive me if I’m oversimplifying the distinction, but it is my understanding that the old approach was relatively simple machine learning type AI. Now it is neural network type AI. There is no way we could keep up with mass data capture with the older tools. Your engineers in Heerbrugg explained this to me.
“Of course, I have to ask, how might functions like PCC find their way into the tiered platforms we’ve been talking about? Is there enough horsepower in the field to do classification on the spot, and is it slated for the cloud as well? And of course, everyone wants to know the future of the Leica Cyclone family.”
Burrows: “The biggest changes, I can’t say. I’m going to be very careful, as some stuff may be far in the future, and other stuff not so far away. In terms of in-field classification, we have a module called PCC. This module is one that just drops in everywhere. So, if you have classification, whether it’s in Cyclone REGISTER 360 PLUS, Cyclone 3DR, or Pegasus OFFICE, it’s the same module.
“There are differences between what PCC is being asked to do in each, but the core module is the same. For instance, it might be that Pegasus OFFICE only exposes the airborne or the street-based classification models, and then in Cyclone, maybe we show the indoor classification, outdoor classification, etc. But that’s the point: we can then take and apply it in the cloud. That is one of the many things we are actively pursuing.
“Anonymisation is another area of focus. We have this on-the-fly in the Leica Pegasus TRK mobile mapping system, so that at no point is there personal data, like faces or licence plates, in captured images. This has been developed to meet very strict privacy rules, especially in Europe. We’ll work on applying that across multiple platforms.
“And as for Cyclone, I can understand why people are interested in hearing about any changes ahead. Let’s just say that there is a continuum of development, regardless of what things are called, they evolve. There are elements of algorithms that may be decades old. They’ve not really had to change much, but change, positive change, always comes, it is our way. Rest assured that our R&D teams are never sitting still.
“Regarding your point about limited processing power, say on field devices, one could just keep packing in the processors.
“As live connectivity like 5G is becoming so fast and widespread, pushing processing up to the cloud is a very attractive and practical option too.”
Schrock: “Something that is discussed a lot, in the geospatial sector, or any sector for that matter is AI. We see the huge and positive promise of current and emerging AI for geospatial applications. A prime example is the latest generation of point cloud classification we discussed earlier.
“However, some folks in our sector have anxieties about AI; founded or unfounded. Same as they did when lasers replaced surveyors’ chains, and when high-precision GPS/GNSS came along. I’ve just published a feature titled ‘Balancing Human and AI Elements for Geospatial Applications’. I interviewed a dozen industry thought leaders who shared their insights. So now I get to ask you the same questions.
“Two prompts: what are some examples of analytical and/or generative AI that you’ve used, or are used in your products or find valuable for your work? And is it wise for firms to promote the application of AI as simply a way to reduce staff?”
Burrows: “As a business, we have to embrace it, and we have to leverage it, but we need to be in control of that process. So, if I look at what I’ve described in terms of all the guided workflows, automating processing and classification, Reality Cloud Studio, etc., there are elements of AI tools that we have used to speed up the process.
“However, categorically, the content, what we’re talking about, and how we’re presenting it, all comes from the human brain; it wouldn’t come from anywhere else.
“AI has been valuable in helping us strategise and understand the key concepts in user training and reference content. Humans do the final product, but we’ve been able to use it to help organise and develop outlines. We need to break things down sequentially, to make sure that a user is following those steps properly. And then, we used tools that helped us to create indexes and notes.
“For example, I can do a four-video recording, and we have a tool that we use that will allow me to automatically create the steps. And it might be 10 steps, it might be 50 steps, but we use that tool, and then you can look at what it’s created, and go: ‘Okay, this isn’t quite right’. We tweak it. We may remove 50% of the steps, and then you can add text to each step.”
Schrock: “It’s like the AI-based services that can extract CAD linework from aerial images and LiDAR point clouds. Even if it only gets 70% of the linework, and a drafter has to deal with the balance, the automated part saves a lot of time and costs. Of course, humans need to do QA/QC, and it is highly beneficial to experiment and leverage such tools.”
Burrows: “Per my example, there are so many tools we can and do use. Okay, here’s a little AI button: ‘Make this sound more professional, make it sound more casual, or break these steps down into multiple steps’. And then from a not-necessarily-explicit-content perspective, but from a training and e-learning perspective, we know that we can make our life easier.
“I was starting to churn out guided workflows that ordinarily would have taken me weeks and weeks of digitising, scripting, etc. But by leveraging some AI tools I managed to do it in half, or a third of the time. That then leaves me productive for those other two-thirds to do other useful stuff. For us, it’s just another tool to speed up certain processes, but it still has to have the human element of control.”
Schrock: “For decades, I have used any process automation and data mining tools I could find for surveying and writing research: LexisNexis, then web search engines, desktop assistants, and various apps and widgets to help correlate, correct, and analyse things. It’s a continuum of things we incrementally get used to, like the spell and grammar checks in Word.
“There are still many examples of failures in AI completing finished products without human intervention, and AI does not know if it is telling the truth. The human is also aware of purpose, but the AI is not. These combined factors result in something seeming a bit off, or a lot off. A couple of tech publications I do some work for have had to turn off their submission portals as they were getting as many as a hundred AI-written pieces per day. And they note how easy it is to recognise those and what garbage they are. Despite the shortcomings, overall, I’m a huge proponent of appropriate use, like in your example.”
Burrows: “We have been able to recognise where we can effectively leverage the AI tools. For instance, I set out to create a series of webinars, and they’re going to take place every three months or every four months, and the topics are going to be these. I’ve used ChatGPT to help me understand how I can create outlines and an app. What should my abstract be? What should the key learnings be? What is the best way to get the maximum number of attendees? And it would come up with a kind of plan.
“No, I would never take that plan on verbatim. I would look at it and go: ‘Okay, I can take some cool stuff from here’. But, you know, we very quickly generated maybe five or six outlines for webinars to be offered over a period of a year, something that would ordinarily have taken a team of people sitting down to generate ideas and plan.
“For me, it’s also about creating a starting point. Certainly, I’m all about using the tools intelligently to make our lives better. I would never go: ‘Please write me a 5,000-word article on the importance of, say, reality capture in construction’. I’ve tried even writing LinkedIn posts, maybe three or four paragraphs, and you can see straight through it, as can everyone else. And I can see when others have used it. Recently, I sent a message to my colleague: ‘That LinkedIn post, you didn’t write that’. One, because I know he’s not a native English speaker, and two, because it had all these emojis sprinkled throughout. I went: ‘That’s not yours, that’s AI’.
Schrock: “Exactly. At present, it’s not quite there yet. Can the AI conduct an interview and pick up on the nuances of the responses and add follow-up questions? Can it do a product test drive? Could an AI-powered field mapping bot understand that it has to talk to the locals to get a key to that gate over there? Can AI add appropriate inside humour into content for the esoteric audience of the geospatial sector? Not yet, but it sure does help reduce the mundane elements of research, transcribing, outlining, etc.
“The other AI question has to do with a trend that, I hate to say, we are seeing even in the geospatial sector: sales and marketing people who try to sell AI as a way to reduce the workforce. Do you think it’s unwise to publicise it as a way to cut jobs?”
Burrows: “It is not a good strategy to sell AI primarily as a way to reduce headcount. Even if that might be an end result over time, those who sell it that way to current customers are biting the hands that feed them.
“I think the big thing at the minute, when it comes to AI, is that our industry is in relatively unknown territory. However, we are far enough along to recognise that if you’re going to succeed, you’re going to need to leverage AI; it’s not going to go away. The genie is out of the bottle. You’re going to have to learn how to live with it. And the companies that use AI competitively will do infinitely better than those that do not.
“I don’t feel that it will help get rid of people. I think what it will do is make everyone more productive and allow them to focus on the things they need to do. It can and is helping small firms become more productive and competitive, so they can take on more work. An agile firm of 10 that effectively leverages AI can potentially outperform a firm of 100 that does not.”
Schrock: “Well, I’ve heard it said in many different ways that no, AI is not going to take your job, but somebody that leverages AI could. About a decade ago I interviewed the then-CEO of Hexagon, Ola Rollén, and he said that AI will need to be adopted, otherwise, firms will fail in the long run.”
Burrows: “I don’t think that’s just a mantra. When you talk to the guys in the AI group, they view it not just as something we have to live with, but as something we want to live with, because it allows us to do some very cool stuff. For instance, on the processing side and on the delivery side, we’re doing a lot of work around computational photography. We’re using AI to get the most out of sensor data, but also to transcend it in some cases.
“Say you are at a point where you have a limit in the sensor, in the hardware, on what you can achieve. Until a new sensor comes along with a new camera element, how can you use AI to give the best possible results? I saw a post on LinkedIn about an AI up-scaler for imagery. How can I enhance low-pixel-count imagery? There are now algorithms that will up-scale and produce beautiful imagery – in a single click. I’ve tested them – they are amazing!”
Schrock: “On that note, there has been so much buzz about image enhancement and model-generating approaches like Neural Radiance Fields (NeRF) and Gaussian Splatting. I’ve tried some of these out, and while I can see how wonderfully beautiful they can make things visually, and how they can enable modelling from fewer images, I am hard-pressed to find an application within the geospatial sector aside from visualisation. Or are we just not there yet?”
Burrows: “There’s a lot of R&D going on with these, and with any new approaches that come along. There may very well end up being practical applications. Gaussian splatting in particular holds some potential in that it allows you to take the point cloud and get a nice final result without having to mesh it. There’s still some computational cost to it, but to me, and I’ve said this to multiple people, I feel like it just gives us another way of displaying the data.”
Schrock: “There’s a lot of value in realistic visualisations, most certainly for things like stakeholder buy-in for infrastructure projects. But sometimes I worry that the folks with the chequebooks get blown away by the visualisation aspect and overlook the true elements that increase productivity and save costs.
“One of my favourite examples was a group of folks I talked to after a demo of a product that displays underground utilities; fabulous, 3D colour-coded pipes. The folks were under the impression that you simply point the tablet at the ground, and it magically sees everything underneath. At no point in the presentation did the vendor explain that the pipes first had to be physically located and mapped, using ground-penetrating radar (GPR), potholing, etc. It’s almost deceptive marketing not to mention that.”
Burrows: “Yes, visualisations might get attention, but measurable results can impress them even more. Like cost and time savings, avoiding rework, frictionless workflows, and more.”
Schrock: “Agreed. I’d like to wrap this up with a bit of free-form future-casting. Many years ago, I wrote a series of sci-fi articles that were more like survey-fi: half a century into the future, a wheelchair-bound surveyor runs a successful business, doing work all over the globe by remotely operating a team of very clever bots. I wonder if that is where we might end up.
“And for the not-so-distant future: could a multi-sensor kit be set up on a site to do a quick image series, evaluate the site, and decide where to focus scans, working from lists of features you have specified in advance? Maybe it launches a small drone where needed. It does the bulk of the mapping while the surveyor goes and gets the data the bot cannot reach. Are we nearly there?”
Burrows: “What’s interesting now is that we’re seeing a lot of people looking deeper into teleoperation. There was a big project in Japan on enabling truck drivers and mine operators to work remotely, with autonomy assistance where it can be safely implemented.
“I think there’ll always have to be a human element on site. If you even look at one of our autonomous solutions, which is Leica BLK ARC combined with the Boston Dynamics Spot®, that’s great. Yes, you can go and set it off to do an automated surveying path, and it will go and collect the data. You can even set it to repeat the path periodically, every hour, every two hours, whatever. You can set it to upload data in the background, using a script.
“Imagine you have the bot do that first pass of the whole building, and it says: ‘You need to focus on this, this, this, and this’. Or it might conceivably send a work order to an on-call survey firm to do the fill-in work. It could be specified, for instance, that task ‘A’ needs to be done with a Leica RTC360, not a Leica BLK2GO, to meet tolerance requirements.
“When a company jumps into something new, they can’t weigh the pluses and minuses of this or that tool or approach until they’ve done it for a while.
“I would love for this bot solution to be able to guide them, and also to evaluate various strategies. ‘Hi bot: can you tell me, if I set up 10 scans in this hall, will I have all the information I need to perform this study? If not, where are the setups that I need to do in that space, and can you guide me to them when I get there?’ That would be a very helpful bot.”
Schrock: “That might sound a bit ominous to some folks; that a bot is directing what they do. But after all, we rely on things like car navigation. But at some stage in the process, the human has made the critical decisions. And the bot is not in charge. Humans give it parameters, and it evaluates the site and suggests actions. It’s like we’ll be coaching a team of bots, and the best and most successful coaches will be those who can work symbiotically with them.”
Burrows: “It could help transform the way careers and work are done. Back to our note about ‘button pushers’. Say there’s a young person fresh out of college or school and they sign up for a service where they might get an email in the morning that says: ‘Go to this location and do these scanner setups’. The data gets uploaded to the cloud, automatically registered, and automatically added to the project where we’re doing all the analysis. And I don’t see that as a bad thing. That kind of gig work is appealing to some folks, as it can provide the flexibility to balance other pursuits: art, studies, hobbies, etc.”
Schrock: “We’re already seeing small UAS-based aerial mapping firms offering on-demand imaging through networks of gig drone pilots. Gig reality capture could be a great option for part-time and retired geospatial folks.
“Thank you for the amazing insights; you see so much of the geospatial sector from your virtual chair and do so well in presenting your findings to the rest of us.
“I’ll end on another decidedly geeky note. The reason why so many of us find the world of geospatial so fascinating is that it is a bit like Doctor Who’s TARDIS: ‘Much bigger on the inside’.”