This week has been a big one for NTT announcements. The Japanese company has been keen to highlight its innovation work in the run-up to Mobile World Congress. As one of relatively few operators with major R&D capabilities, NTT offers a striking view of where such an operator chooses to direct its attention.
To begin with, NTT has been active in developing new ways for its network to operate. This week it announced a “successful demonstration of computing and mobile networks convergence to provide diverse services in the 6G era”. While the title is somewhat opaque, the actual content could be very significant.
Until now, telecoms networks have done very well at carrying data between devices and the cloud, but with virtualisation every network node is becoming its own resource for compute and storage. Historically, the network’s computing power has been spent transporting data between devices and the cloud, which then uses its own computing power to perform services such as 3D rendering before sending the results back to the device.
When viewed like this, it’s a bit like taking a car to get to a car hire shop, using its services and then driving home again. While there may be occasions where that makes sense, there are plenty more where it just doesn’t.
Especially when it comes to low-latency services, using network nodes’ compute power could deliver some clear benefits. It allows data to be processed closer to the end-user, delivering the minimal latency required, while removing the need for that processing to be done on the device itself. The devices can be simpler, cheaper and lighter, requiring less energy to run and therefore extending battery life. While this might be a marginal benefit in a handset, it could make all the difference in a headset or glasses.
The press release does not expressly compare this service (called “In-network Service Acceleration Platform” or ISAP) to other models of edge computing. However, it appears as though the key difference is that in this case there is no requirement for an edge datacentre. The network itself is the processing environment.
Moreover, the demonstration – which took place using Nokia’s 5G Core SaaS – was designed to test whether the network resources could adapt to the demands of a “metaverse service”. Exactly what that service entails will be interesting to discover, but some kind of XR looks likely. NTT commented that “we have confirmed that computing services with GPUs for high-performance rendering, encoding, and decoding can be controlled based on the state of the service. We have also confirmed that computing resources can be allocated within the metaverse state change time.”
The potential benefits here are considerable for application and hardware designers. How this gets monetised will be interesting to see, however. Charging application or hardware developers for using network capabilities in a SaaS model, perhaps? As things stand, it’s unlikely that end-users will foot a regular bill.
Edging Closer
On a related topic but with more immediate impact, NTT also announced a tie-up with Schneider Electric this week. The pair have put together NTT’s Edge as a Service with Schneider Electric’s ‘EcoStruxure’ to “meet the demands of compute-intensive tasks such as machine vision, predictive maintenance, and other AI inferencing applications at the edge.”
“Processing vast amounts of data generated by edge devices is where the future of digital transformation lies,” commented Shahid Ahmed, NTT’s EVP New Ventures and Innovation.
According to a report commissioned by NTT, roughly two-thirds of enterprises are adopting edge processing to support their own evolution. While there must be questions about exactly which enterprises were included in this survey, the collaboration with Schneider Electric suggests NTT is taking the result seriously.
Camille Mendler, Chief Analyst, Enterprise Services, Omdia noted that “AI-enriched data already accounts for a third of enterprise network traffic, but it will dominate digital interactions by 2030. To profit from AI insights, enterprises must invest in digital resources at the edge, and the technology infrastructure that powers it.”
So there are potentially several avenues for NTT to pursue in evolving new business models and capabilities. While providing edge computing and private networks works well for now, over time these can develop into a more integrated ISAP model. This also gives NTT closer involvement not just with the movement of data but with the data itself. While projects like Open Gateway stand to expose network functions to application developers as a new role and revenue source for operators, ISAP may offer NTT a richer platform for delivering insights back to the client or – at best, if it gets the model right – operating a two-sided platform that provides intelligence to others.
This is an interesting play, as it starts to tie in with NTT’s long-term R&D efforts towards a future network under the IOWN programme. Indeed, last October NTT issued a press release outlining the wider concept, in which ISAP plays a crucial part alongside concepts such as decentralised identifiers and improved resilience. NTT aims to bring this into the standards process, so we can expect further demonstrations and tests as standardisation gets under way.
(Note – you can meet Takehiro Nakamura, NTT DOCOMO’s Chief Architect, at 6GSymposium this April to get news and opinions direct).