How ADR and Intel went underground with edge AI

The Explora robot can autonomously conduct mining inspections and monitoring activities. | Source: ADR

As mining operations go deeper underground, the environment becomes increasingly dangerous for humans. However, deep underground, it’s also difficult to establish Wi-Fi or cloud connections, creating barriers for robotics, according to Australian Droid + Robot, or ADR.

The developer of rugged robotics recently announced a strategic collaboration with Intel Corp. The companies plan to deploy autonomous inspection robots using edge AI to help keep workers safe while capturing critical data in hostile environments.

The system integrates Intel Xeon processors and Intel Core Ultra processors directly into ADR’s Explora robots. This onboard computing power allows the robots to process massive amounts of data from 3D lidar, thermal cameras, and gas sensors in real time.

Mat Allan, co-founder and chief technology officer of Taringa, Australia-based ADR, gave The Robot Report more insights into how this collaboration came together.

When did ADR start working with Intel, and why was it the right partner for this project?

Allan: We’ve been working on the architecture for some time, but the realization that we needed a partner like Intel came from looking beyond the robot itself.

Initially, you think the challenge is just “solving robotics” – interacting with the physical world, moving through mud, avoiding obstacles. But we realized that is only a small aspect of the job.

To deliver true reliability and integrity to the customer, you need the capacity to generically solve compute and workload scaling. We aren’t just moving a robot; we are running a mobile data center.

Intel was the right partner because it provides that server-grade elasticity. Intel allows us to scale our workloads to meet customer demands — whether that’s processing 3D information or running complex analytics — in a way that standard embedded robotics chips simply cannot.

What dangers do humans face in these underground environments?

Allan: The risks are diverse and often invisible. You have the obvious dangers like unstable ground and rockfalls, particularly in “exclusion zones” or areas that have just been blasted. But you also have atmospheric hazards — toxic blast fumes, heat, and lack of oxygen.

Traditionally, humans have to physically enter these spaces to test them, which is a paradox: You are risking a person to see if it’s safe for a person.

Our goal is to break that cycle. By sending a robot in first to check gas levels or scan for structural convergence or movement, we ensure that if a human enters, it’s because it’s already been verified as safe.

How long can the robot operate, and how do you manage power consumption when computing at the edge?

Allan: Runtime depends on the mission profile, but we typically see four to 12 hours depending on drive intensity. The real challenge, however, is balancing that runtime against the massive compute requirements.

When you are at the edge, efficiency is everything. This is where the difference between generic processing and hardware acceleration becomes critical. If you try to run heavy media transcoding or AI workloads on generic hardware, you burn through power rapidly and the quality suffers.

We utilize the specific hardware-offloading capabilities within the Intel architecture to handle these tasks efficiently. This allows us to maintain high performance without draining the battery, giving us the power-per-watt efficiency needed for long-range missions.

How did your company develop the AI that the robot uses? What specific things is it typically looking for during these inspections?

Allan: The “AI” in our context is really about interpretation of the physical world and insight. We developed the system to handle unstructured, chaotic environments — mud, acidic or alkaline water, abrasive dust, and uneven terrain — that would stop standard UGV platforms.

In terms of what it looks for, it’s highly configurable. In a re-entry scenario after a blast, it uses multi-gas sensors to “sniff” for toxic fumes and to investigate rock fragmentation. In a geotechnical inspection, it uses 3D lidar to scan and map the walls for convergence, or to quantify risk for mine safety assessments.

We also use thermal cameras to inspect conveyor belts for overheating rollers. It’s looking for the anomalies that signal danger.

The system is also a tool for emergency response. When things do not go to plan, having an asset deployed in the area, already configured to ascertain forward information, is incredibly useful and can save lives.

The robot is taking in many different kinds of data. How does Intel’s technology help it manage these streams while in the field?

Allan: This is really about the difference between standard computing and performance silicon. The robot is ingesting massive data streams. [They include] millions of data points per second from a multitude of sensors, plus high-quality thermal and visual video.

Generic software solutions often degrade in quality when trying to handle this volume — you get laggy video or slow processing. To get high reliability, you need the performance of ASIC-level hardware acceleration, which Intel provides for things like media transcoding and AI workloads. This allows us to compress, analyze, and store high-fidelity data in real time.

We can transcode many 4K video streams and run inference models simultaneously without the system choking. That level of workload scaling is essential when you can’t offload immediately to the cloud.

Has ADR started testing the system in the field, and how did those tests go?

Allan: We are well beyond the testing phase. The system has been used by Rio Tinto for over five years, but it has come a long way since those early days. We are very grateful for its continued support as a customer.

We have moved from simple remote control to true autonomy and advanced edge analytics. Today, these units are in active daily operation with major miners like BHP and Rio Tinto.

For example, at Rio Tinto, the robots are inspecting conveyor belts and confined spaces, removing the need for shutdowns and human entry. The feedback has been that the platform is now robust enough to be a “business as usual” tool, saving hours of lost production time while keeping their teams out of harm’s way.

ADR has focused on robots for the mining industry. Do you have an interest in applying your technology to other industries? Or, what are the benefits of focusing on mining?

Allan: Our history is in mining. We focus on it because it is the ultimate edge case. If you can build a robot that survives a deep underground mine — with the heat, dust, mud, and water — you can deploy it anywhere.

While the technology certainly has applications in other sectors like search and rescue or heavy infrastructure, mining presents the most immediate and valuable problem to solve. We are saving lives and recovering millions of dollars in lost production time. We believe in doing one thing exceptionally well before broadening our scope. We want to do this exceptionally well for mining.



The post How ADR and Intel went underground with edge AI appeared first on The Robot Report.
