In its September 2023 position statement on 6G, the NGMN suggested that there was “no intrinsic need for a hardware refresh” to migrate to the next generation. While resistance to another generational investment cycle in telecoms is entirely understandable, there is an issue that is much less talked about and yet highly relevant to questions of network investment and upgrade: decommissioning existing network equipment when it reaches end-of-life or the end of effective OEM support.
As pointed out in a recent report by circular economy company TXO, many pieces of network equipment have only a limited period of support or a finite anticipated operating lifetime. In its survey of 50 service providers across the globe, the majority expected that 10-30% of their network hardware would reach end-of-life or end of OEM support within the next two years, with almost 15% expecting that half their hardware would do so. That is a considerable quantity of material.
While decommissioning and replacing equipment can be an opportunity to reduce OPEX and energy consumption, the single overriding reason given for decommissioning hardware was simply that the OEMs no longer support its maintenance after two to five years of operation.
6GWorld has previously reported on the NGMN’s initiative in this regard, encouraging the development of hardware which is more modular and capable of being maintained or of having parts replaced as necessary, rather than replacing the entire network element. As Orange’s Ana Galindo Serra commented at the time:
“For example, the majority of equipment breakages are the plastic cases, and screens also break very easily, so these should be very accessible and should be very easy to replace straight away rather than replace the whole equipment.”
Some Infrastructure Challenges
While this is clearly a much less wasteful (and less expensive) way to manage network hardware, it is also a double-edged sword. Programmes to replace end-of-life equipment have traditionally offered the opportunity to upgrade systems and deliver better performance at lower cost. Deferring the replacement of equipment means reducing the opportunities for performance improvements… provided, that is, that hardware is the sole measure of performance.
As telecom providers separate the physical infrastructure from the service software as far as possible, there is an argument that service delivery becomes decoupled from hardware upgrades. By upgrading software to create more efficient systems and roll out new services, operators have an opportunity to reduce their dependency on physical infrastructure.
That is a compelling argument but will never be entirely the case. For example, increased demands for computation at the edge may well require additional servers or upgraded processors, while backhaul constraints have often been a limiting factor in keeping up with data growth. Meanwhile, the sheer physics of radio propagation at different frequencies will create its own demands.
But Also Bright Possibilities
Nevertheless, we can imagine an environment where a macro layer of software-driven cellular networks starts to be largely divorced from generational upgrade cycles. We may no longer have to deploy new physical hardware to make a site Next-G-capable; instead, hardware upgrade cycles would become more closely linked to material lifecycle management, OEM contracts, and local demand for capacity. This would at least make investment cycles more regular.
At the same time, other forms of connectivity, such as Wi-Fi, in-building, private, and mesh networks, have their own logic and business models. Densification in these areas – which account for a significant proportion of connections – does need to be considered, but deployment is likely to be paid for largely by the people who need it, as we have seen in the case of Wi-Fi. Again, that is likely to be shaped by very local concerns and priorities rather than by the strategies of national-scale telcos.
Open and Shut
There are two other elements that need to tie into the discussion about different types of network deployment and upgrade strategies.
The first is the much-talked-about open networking. While open networks, and especially Open RAN, are currently going through a “trough of disillusionment,” there are still compelling reasons to believe this may be the future. In part, major OEMs have bought into it, which helps; meanwhile, organisations such as the US government are providing funding to drive it as a matter of national security. Open networks reduce lock-in to any one vendor and, with it, vendor-based risk.
While some parts of the journey to open networks – conformance testing, certification, and interoperability – are works in progress, one other element has remained a major stumbling block: telecoms procurement processes.
The processes which exist today have, by and large, been a product of necessity, ensuring that supplier x’s solutions really work in the context of operator y’s existing climate, network, and subscribers, to the levels of performance desired.
Unsurprisingly, this has been a complicated process, more so where customised hardware has been created to suit the environment. Telecoms procurement processes today are notoriously lengthy and complex, which means they do not scale. Little wonder, then, that we see announcements of companies such as Nokia and Ericsson taking the lead in many open RAN deployments.
These procurement processes are, of course, designed to reduce risk for the operator. However, for service providers aiming to improve their agility, supply chain resilience, and more – in both hardware and software – procurement processes will need to adapt to become faster and more scalable.
Alongside this, the risk mitigation process will need to be performed differently, and understanding how to answer the questions “Who do we call when things go wrong? And who carries the responsibility?” will be central. At present, this seems to be based very much on the traditional model of having ‘one neck to wring,’ whether that is a traditional OEM or a systems integrator.
The question of risk management and mitigation is a pinch-point. If it can be addressed successfully, then this can indeed tie back into a model where telecom providers really can avoid major generational upgrade cycles. Instead, we could see a transition towards more modular, more open processes to deliver new capabilities incrementally where needed and respond more flexibly to a changing market.