Translating Mobile Networks From Smart To Genius – 5G And Machine Learning

5G is ushering in a new breed of “genius” networks to deal with the increased levels of complexity, prediction and real-time decision making that are required to deliver the performance gains promised not just in enhanced mobile broadband applications but also in IoT and mission critical use cases. At the core of this evolutionary step is the use of machine learning algorithms.

Real-time network optimization capabilities such as resource loading, power budget balancing and interference detection are what made networks “smart” in the 4G era. 5G adds support for new antenna capabilities, high-density and heterogeneous network topologies, and uplink and downlink channel allocation and configuration based on payload type and application. While machine learning has uses across all layers of a 5G network, from the physical layer through to the application layer, the base station is emerging as a key application area for machine learning.

More Resources Only Means Better Performance If Coordinated
One of the hallmarks of a next generation 5G base station is the use of advanced antenna capabilities. These include, but are not limited to, massive multiple-input multiple-output (MIMO) antenna arrays, beamforming and beam steering.

Massive MIMO is the use of antenna arrays with a large number of active elements. Depending on the frequency band in which it is deployed, a massive MIMO design can employ from 24 active antenna elements to as many as several hundred. One use of MIMO in general is to transmit and receive parallel and redundant streams of information to address errors introduced by interference. Another use, specific to massive MIMO, is beamforming and, in more advanced systems, beam steering. Beamforming uses a phased array to concentrate a beam of energy that focuses and extends signal transmission and reception between the base station and a particular mobile device. Beam steering then controls that beam so it follows the device in a fully mobile environment within the coverage footprint of that antenna array. When massive MIMO is fully brought to bear and beamforming and beam steering are optimally employed, network operators and consumers alike benefit from increased network capacity and expanded coverage through additional data streams, decreased interference, extended range and improved power efficiency.
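To make the beamforming idea concrete, the short sketch below is a simplified illustration rather than any vendor’s implementation: it computes conventional beamforming weights for a hypothetical uniform linear array and shows how gain is concentrated toward one assumed direction while other directions see only a small sidelobe. The element count, spacing and angles are chosen purely for illustration.

import numpy as np

def steering_weights(num_elements: int, spacing_wavelengths: float, angle_deg: float) -> np.ndarray:
    # Conventional (conjugate-phase) weights that point the beam toward angle_deg.
    n = np.arange(num_elements)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.deg2rad(angle_deg))
    return np.exp(1j * phase) / np.sqrt(num_elements)

def array_gain(weights: np.ndarray, spacing_wavelengths: float, angle_deg: float) -> float:
    # Magnitude of the array response in a given direction for the chosen weights.
    n = np.arange(weights.size)
    response = np.exp(1j * 2 * np.pi * spacing_wavelengths * n * np.sin(np.deg2rad(angle_deg)))
    return float(abs(np.vdot(weights, response)))

w = steering_weights(num_elements=64, spacing_wavelengths=0.5, angle_deg=25.0)
print(array_gain(w, 0.5, 25.0))   # toward the user: the full array gain, sqrt(64) = 8
print(array_gain(w, 0.5, -40.0))  # away from the user: a small sidelobe, well under 1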

But how does machine learning help with this? Imagine a race between a boat with 10 oars and a boat with 20 oars. The boat with 10 oars has a coxswain who not only sets the rhythm but also makes real-time corrections to heading and cadence based not just on what is currently happening but also on what is predicted to happen farther down the course. In contrast, the boat with 20 oars has a coxswain who cannot coordinate the rhythm and makes corrections based only on general information about what has already occurred. Clearly the former will win the race, while the latter’s oars not only make minimal progress but in some cases actually interfere with one another. The same is true of massive MIMO. To fully realize the benefits of massive MIMO, beamforming and beam steering, machine learning is being utilized at the base station to provide real-time and predictive analysis and modeling to better schedule, coordinate, configure and select which arrays to use and when.
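As a rough sketch of the predictive piece only, the toy model below treats beam selection as a supervised learning problem: given assumed features for a user (estimated bearing, speed and last serving beam), it predicts which of a handful of fixed beams to schedule next. The features, the synthetic training data and the simple rule used to label it are all invented for illustration and are not drawn from any 5G specification.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_beams = 5000, 8
azimuth = rng.uniform(-60, 60, n_samples)        # assumed estimated UE bearing (degrees)
speed = rng.uniform(0, 30, n_samples)            # assumed UE speed (m/s)
last_beam = rng.integers(0, n_beams, n_samples)  # previously serving beam index

# Invented ground truth: the best beam roughly tracks the bearing, drifting with speed.
best_beam = np.clip(((azimuth + 60) / 120 * n_beams + 0.02 * speed).astype(int), 0, n_beams - 1)

X = np.column_stack([azimuth, speed, last_beam])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, best_beam)

# Predict which beam to pre-configure for a user at 18 degrees moving at 12 m/s.
print(model.predict([[18.0, 12.0, 4]]))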

Location, Location, Location
The new 5G network standard requires higher-density deployments of smaller cells working with larger macro cells and multiple air interface protocols. The vision is for smaller cells to be designed for indoor locations or dense urban environments where GPS positioning is not always reliable and the radio frequency (RF) environment is far from predictable. Understanding the location of the devices interacting with the network is essential not only to application layer use cases but also to real-time network operation and optimization. It is therefore critical to find ways not only to accurately locate user equipment but also to track it as it moves within the coverage footprint.

To this end, machine learning is being applied to estimate user equipment location using RF data and triangulation techniques. While this is not a new concept, machine learning algorithms are yielding material improvements over previous methods in accuracy, precision and viability for widespread use. These gains are all the more significant because they are being achieved in an environment that is orders of magnitude more complex and dynamically variable than ever before.
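One way this is commonly framed is as RF fingerprinting. The minimal sketch below assumes such a setup: received signal strengths from several cells are mapped to known positions during a survey, and a k-nearest-neighbours regressor then estimates where a new device is from its measured fingerprint. The cell layout, path-loss model and data are synthetic stand-ins rather than measurements from any real network.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
n_points, n_cells = 2000, 6
cell_xy = rng.uniform(0, 100, size=(n_cells, 2))   # assumed small-cell positions (meters)
ue_xy = rng.uniform(0, 100, size=(n_points, 2))    # surveyed training positions (meters)

# Toy path-loss model: signal strength falls off with log-distance, plus shadowing noise (dB).
dist = np.linalg.norm(ue_xy[:, None, :] - cell_xy[None, :, :], axis=2)
rss = -30 - 30 * np.log10(dist + 1) + rng.normal(0, 2, size=dist.shape)

# Learn the fingerprint-to-position mapping from the survey data.
model = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(rss, ue_xy)

# Locate a new device from its measured fingerprint (true position here is (40, 70)).
test_dist = np.linalg.norm(np.array([40.0, 70.0]) - cell_xy, axis=1)
test_rss = -30 - 30 * np.log10(test_dist + 1)
print(model.predict([test_rss]))   # estimate lands near (40, 70)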

One Network To Rule Them All – Not As Easy As It Sounds
One of the driving considerations for the development of 5G is to have one framework to address the varied and often conflicting requirements of three use cases: Enhanced Mobile Broadband (eMBB), massive IoT and mission critical applications.

Previously served by purpose-built, disparate networks, these use cases will now be supported by the 5G network architecture while continuing to require capabilities that are at odds with one another. Networks designed to support eMBB use cases must be optimized for high speed, low to medium latency and capacity that can be delivered profitably. Massive IoT networks, on the other hand, need to be low cost and narrow bandwidth, with low control plane overhead and high reliability. Mission critical networks, meanwhile, require high speed, low latency and high reliability.

In order to make this vision a reality, 5G has been designed for high variability and flexibility both in the control plane and in channel configuration. As such, it is essential that 5G networks be able to predict payload type and use case from changing conditions, such as historical loading data, RF conditions, location and a wide range of other factors, in order to efficiently and dynamically configure and utilize 5G channel resources.
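A hedged sketch of what such prediction might look like follows: a simple classifier guesses the likely traffic type (eMBB, massive IoT or mission critical) from a pair of coarse flow features, so that channel resources could, in principle, be pre-configured accordingly. The features, distributions and labels are invented for illustration and do not correspond to any 3GPP-defined mechanism.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

def synth(label, n, pkt_mu, gap_mu):
    # Generate invented flows: mean packet size (bytes) and mean inter-arrival time (ms).
    pkt = rng.normal(pkt_mu, pkt_mu * 0.2, n)
    gap = rng.exponential(gap_mu, n)
    return np.column_stack([pkt, gap]), np.full(n, label)

X_embb, y_embb = synth("eMBB", 1000, pkt_mu=1200, gap_mu=2)           # large packets, near-constant flow
X_miot, y_miot = synth("massive IoT", 1000, pkt_mu=60, gap_mu=5000)   # tiny, infrequent packets
X_mc, y_mc = synth("mission critical", 1000, pkt_mu=200, gap_mu=10)   # small packets, very frequent

X = np.vstack([X_embb, X_miot, X_mc])
y = np.concatenate([y_embb, y_miot, y_mc])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=500)).fit(X, y)

# A flow of small, infrequent packets is most plausibly massive IoT traffic.
print(model.predict([[48.0, 3600.0]]))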

Consequently, machine learning is being used to predict not only user equipment characteristics and capabilities, probable use case requirements and RF conditions, but also, potentially, the type of content most likely to be requested, with edge caching techniques then bringing that content closer to the end user. For example, based on historical trend data, a network might learn that, given a base station’s proximity to a university and the titles currently trending on Netflix or Disney+, specific movies should be made available closer to that base station at certain times of the day to reduce network congestion, buffering and latency. Similarly, a base station located close to an intersection that becomes congested at certain times of the day might need additional traffic and V2X sensor data to aid ADAS or autonomous driving applications.
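The toy example below illustrates only the caching intuition: count how often each title has historically been requested near a given base station at each hour of the day, then pre-stage the most popular few at the edge before that hour arrives. The titles and request counts are made up, and a production system would weigh far more signals.

from collections import Counter, defaultdict

# Invented (hour_of_day, title) request log for one hypothetical campus-adjacent cell.
request_log = [
    (20, "series_a"), (20, "series_a"), (20, "movie_b"), (20, "series_a"),
    (21, "movie_b"), (21, "movie_b"), (21, "series_a"), (21, "doc_c"),
    (8, "news_d"), (8, "news_d"), (8, "doc_c"),
]

by_hour = defaultdict(Counter)
for hour, title in request_log:
    by_hour[hour][title] += 1

def titles_to_cache(hour, k=2):
    # Return the k historically most-requested titles for this hour of the day.
    return [title for title, _ in by_hour[hour].most_common(k)]

print(titles_to_cache(20))  # ['series_a', 'movie_b'] would be pre-staged at the edge
print(titles_to_cache(8))   # ['news_d', 'doc_c']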

The Next Step In The Evolution
As an industry, we are at a critical evolutionary point, as 5G and machine learning combine to put us on a path toward generational leaps in network capability and efficiency brought about by increasingly complex functionality and adaptability. But this is an evolution, not a revolution, and these are the very early days. These 5G machine learning applications are just the beginning of the potential that can be unleashed, not just at the physical layer enabled by the base station but through to the application layer, as these two foundational technologies are brought together and we enter the era of genius networks.

originally posted on forbes.com by Francis Sideco