
By James Blackman April 20, 2026
Collected at: https://www.rcrwireless.com/20260420/private-5g/physical-ai-on-private-5g-ntt-data
Physical AI for robot automation in factories and plants requires lightweight maths models, not token-hungry language models, says NTT Data. It does not have to wait on expensive GPUs when CPUs work fine. But it does need private 5G for mobility – and, contrary to reports, private 5G is finally doing fine.
In sum – what to know:
AI misconception – despite all the talk about physical AI at the edge as a driver for AI chip sales, industrial models for robots do not require on-site GPUs, says NTT Data.
5G miscalculation – after a tough couple of years, and a market narrative that says otherwise, the private 5G industry has lately spluttered into life, reckons NTT Data.
IoT re-designation – both edge 5G and AI, as private 5G and physical AI, are part of the broad IoT discipline, suggests NTT Data – and are starting to scale together.
As with the Ericsson interview last week, when similarly pressed for time, here is an excerpt from another interview session, this time with Shahid Ahmed, global head of edge services at NTT Data – which, like the Ericsson piece, will hopefully be extended, based on the full interview. The discussion skipped through a whole catalogue of edge-based intrigue about industrial IoT, private 5G, and physical AI, and placed them, collectively, amid all the other trendy AI subplots the industry is peddling. The takeaway, as written in an op-ed piece at the top of the newsletter last week, is that physical AI does not need GPUs. Physical AI on private 5G is essentially an IoT discipline that works well enough with a central processing unit (CPU), he says.

So don’t be confused, he says: save your expensive GPUs for your big IT workloads, whether managed locally or not. Well, who knew?
We should qualify that statement: OT-geared physical AI – based on maths models, for industrial automation workloads – does not need graphics processing units (GPUs) on site, at the enterprise edge. This seems plain; but it also sounds like a revelation, given the excitement about adjacent RAN- and ‘grid’-based GPU placements for physical AI encroaching on the enterprise edge, and the desperation among the private 5G community for some kind of lift as it seeks to deploy the same kinds of use cases in the same kinds of places as it has for years. Ahmed’s comments are telling because they suggest both that the physical AI narrative (GPUs for enterprise robots, at least) is overegged, and that the private 5G narrative (connectivity on cellular in private spectrum) does not need saving.
Ahmed’s point is that physical AI – in Industry 4.0, at least – is based on computationally lightweight maths/physics models, which are deterministic and bounded, and which run locally on-prem; versus probabilistic, token-hungry LLM-style inference models. It is a different game entirely – about model types, latency constraints, control loops. He explains: “The way we define physical AI and edge AI – which are, by the way, within the same spectrum – is everything that has to happen locally, at the [enterprise] edge. Where you can’t, by definition, have a large language model. Robotics and kinematics don’t use language models anyway. They use maths and physics-based models to solve specific [scientific] idioms – like in quantum electrodynamics at A-Level. Which means they’re small.”
He goes on: “You don’t need a high-powered GPU; some of them run on a Raspberry Pi. The robot doesn’t need to know the capital of Tajikistan; it needs to know millimeters on the X, Y, Z axes. You can get away with whatever CPU you have, and local memory, and be done with it. But you need that maths-based model, locally run – for latency and control reasons. You have to have the right type of edge compute power, but that’s it. I mean, honestly, it is IoT with AI in it. It is not that much more complicated. Yes, robotics is coming – you just have to look at what’s happening in China – and it’s happening on the consumer side first, interestingly enough. It is going to move very quickly to the business and factory environments. We are going to see that wave come here [to the edge].
“And you need physical AI and local inference, 100 percent. But it won’t be based on LLMs. There’s no token performance here. We’re talking maths models, compressed by their nature; less than a billion parameters – versus Claude or ChatGPT, where we’re talking hundreds of billions, right? Because those are language-based models. I mean, ask them what a-squared plus b-squared equals, and they might scan the internet and tell you it is c-squared, but not because they inherently understand Pythagoras’s Theorem; so you’ve just wasted a bunch of tokens.” Perhaps RCR has been too quick to join the AI and 5G dots in the enterprise space. Or perhaps the dots were already joined, as RCR has reported in its broad IoT coverage.
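To make Ahmed’s point concrete, here is an illustrative sketch (ours, not NTT Data’s) of the kind of maths model he means: closed-form inverse kinematics for a hypothetical two-link planar robot arm. It is deterministic and bounded – a handful of trigonometric operations, no learned parameters at all – and it evaluates in microseconds on any CPU, a Raspberry Pi included.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a two-link planar arm.

    Given a target (x, y) in millimeters and link lengths l1, l2,
    return joint angles (theta1, theta2) in radians (elbow-up).
    Pure trigonometry: no tokens, no GPU, no probability.
    """
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2)
    )
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics: end-effector position for given joint angles."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Round trip: solve for a target, then verify the arm actually reaches it
t1, t2 = two_link_ik(120.0, 50.0, 100.0, 100.0)
px, py = forward(t1, t2, 100.0, 100.0)
```

The whole control loop is a dozen floating-point operations – exactly the class of bounded, verifiable computation that fits local memory and “whatever CPU you have.”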
But more than just GPUs being unnecessary for robots (AMRs, AGVs, UAVs, HMIs, humanoids) at industrial sites, the tech industry is misclassifying (overegging) workloads to force-fit AI gadgetry onto problems that are already solved with classical compute. The compute requirements for physical AI are rather modest, and edge-based CPUs on private 5G architectures are already sufficient. More to follow; but Ahmed addresses this last point, too, about the private 5G market, where the hype has outrun the reality, again. He is philosophical about the state of the market, but also happy that it is finally picking up. “If you asked me that question 12-18 months ago, I would’ve said it is a struggle. We didn’t have the devices; spectrum availability was patchy, at best,” he says.
“We had CBRS in the US, and it was a pain everywhere else… But fast forward, and we are now beginning to see this rectify itself. There are more devices; spectrum access has been mostly fixed. In the UK, say, you just have to ask the regulator, and, boom, it’s done; it is a one-day process and you get the spectrum to deploy. At NTT, at least, we are beginning to see this scale. You wrote about Cargill, where we’ve just deployed at around 60 sites; well, we’re looking at 100-plus sites just this year. And these aren’t tiny little sites. These are medium-sized warehouses and factories – for Cargill, right? Again, meat-and-potatoes stuff – mostly just for coverage. But we are talking 10-plus access points (APs); they’re not large-large automotive factories, but anything above five APs is significant.”
He adds: “And it is not just Cargill. I mean, Brownsville and Las Vegas – they are all expanding. It is more like what I thought we would see two or three years ago, when we started. It didn’t have that kind of momentum back then, and we almost hit a wall right off the bat. We had to build the market in many ways.” Ahmed says more besides – about the hard yards of consultancy and integration to make private 5G work; about demand for eSIM-based inter-public/private 5G roaming; about interest in the sounding reference signal (SRS) mechanism in 5G NR, used by RAN units for link adaptation, resource scheduling, beamforming management and – most importantly, for enterprises with digital twins and physical AI cases – for asset positioning.
As stated, RCR will try to write more from the discussion with Ahmed.
