The post below is an oversimplification of the subject, written for the benefit of non-technical readers and the general public.
The number of devices with mobile access is constantly increasing, driven by the development of the Internet of Things (IoT). Initially, the mobile phone was the only type of device that could access and utilise telecommunication networks. Nowadays, other devices, such as tablets and smart watches, also occupy a part of the network. In addition, innovative companies today, as well as ideas for the future, involve small devices that will also have internet access through the same networks. These include smart street lamps that turn on only when a mobile user is detected nearby, traffic lights that adjust to the flow of cars, and other ideas that aim to improve energy efficiency and our everyday lives. The automotive industry is also interested in accessing wireless networks, with vehicles acting as “devices” exchanging data. This will enable applications such as self-driving cars, calculating the fastest route based on live traffic, avoiding car accidents and more. It also means that 5G and beyond wireless networks must be able to support a massive number of devices; this is one of the main requirements set for 5G networks, called massive connectivity.
Furthermore, some of the applications mentioned above, as well as many new applications to come, such as communication between cars to avoid accidents and virtual reality (VR), are only useful if the delay is practically imperceptible. That is why 5G networks must achieve a latency of less than 1 ms.
In order to achieve the requirements of 5G networks, operators must increase the number of base stations (BSs), bringing them closer to the mobile user and therefore reducing latency. This layout is called an ultra-dense network (UDN) and is considered a key emerging technology for future-generation wireless network architectures. However, UDNs still face many challenges to overcome, as the dense deployment of multiple small base stations (SBSs) causes severe interference.
The user is also affected by these limitations. More specifically, high interference can cause the user’s data rate to decrease significantly. As the mobile network user population grows, denser deployment will be needed. The total available network resources, however, remain the same, so at some point the demand exceeds the available resources. This results in poor network performance and a poor user experience.
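The effect of interference on the data rate can be illustrated with the classic Shannon capacity formula, rate = B · log2(1 + S/(I + N)). The following is a minimal sketch with entirely illustrative power and bandwidth values (none of them come from a real deployment), comparing a sparse deployment with little interference against a dense one:

```python
import math

def shannon_rate(bandwidth_hz, signal_w, interference_w, noise_w):
    """Shannon capacity: rate = B * log2(1 + S / (I + N))."""
    sinr = signal_w / (interference_w + noise_w)
    return bandwidth_hz * math.log2(1 + sinr)

B = 10e6          # 10 MHz of bandwidth (illustrative)
signal = 1e-9     # received signal power in watts (illustrative)
noise = 1e-12     # thermal noise power (illustrative)

# Sparse deployment: little interference from neighbouring base stations.
rate_low = shannon_rate(B, signal, interference_w=1e-12, noise_w=noise)

# Dense deployment: strong interference from many nearby small cells.
rate_high = shannon_rate(B, signal, interference_w=1e-9, noise_w=noise)

print(f"low interference:  {rate_low / 1e6:.1f} Mbit/s")
print(f"high interference: {rate_high / 1e6:.1f} Mbit/s")
```

With these toy numbers the interference-limited user gets roughly a ninth of the data rate, even though the signal power and bandwidth are unchanged, which is exactly why dense deployments need interference coordination.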
In order to solve the interference problem in the traditional network architecture, the operator would need to constantly monitor and coordinate the cooperation among all the network elements (i.e. the SBSs) in order to minimise interference and thereby increase the average data rate.
This is not only time-consuming but practically impossible, as many of the factors change constantly in an unknown environment and are only known at the moment the information is to be sent to the user.
The telecommunication industry is utilising current advancements in cloud computing infrastructure to design a promising novel architecture called Cloud Radio Access Network (C-RAN), which can provide efficient resource allocation and thus deal with the limitations of UDNs. In C-RAN, the SBSs are connected to a centralised baseband unit (BBU) pool, where each SBS’s information regarding its available resources, nearby active users, channel conditions etc. is collected and processed, enabling dynamic and flexible resource allocation.
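The idea of the BBU pool can be sketched as a toy model: each SBS reports its users’ channel quality per resource block, and a central allocator assigns blocks so that neighbouring SBSs never reuse the same one. The station names, users and quality numbers below are all hypothetical, and a real C-RAN scheduler is far more sophisticated, but the sketch captures the centralised, information-driven nature of the allocation:

```python
# Toy BBU pool: reports[sbs] = {user: {resource_block: channel_quality}}
reports = {
    "SBS1": {"alice": {"rb0": 0.9, "rb1": 0.4}},
    "SBS2": {"bob":   {"rb0": 0.8, "rb1": 0.7}},
}
# Pairs of SBSs that interfere with each other (stored sorted).
neighbours = {("SBS1", "SBS2")}

def allocate(reports, neighbours):
    used = {}            # resource block -> set of SBSs already using it
    allocation = {}      # (sbs, user) -> assigned resource block
    # Consider the strongest (quality, sbs, user, block) options first.
    options = sorted(
        ((q, sbs, user, rb)
         for sbs, users in reports.items()
         for user, blocks in users.items()
         for rb, q in blocks.items()),
        reverse=True)
    for q, sbs, user, rb in options:
        if (sbs, user) in allocation:
            continue  # this user is already served
        # Skip the block if an interfering neighbour already uses it.
        clash = any((min(sbs, o), max(sbs, o)) in neighbours
                    for o in used.get(rb, ()))
        if not clash:
            allocation[(sbs, user)] = rb
            used.setdefault(rb, set()).add(sbs)
    return allocation

print(allocate(reports, neighbours))
```

Note that bob’s best block is rb0, but because the pool knows SBS1 has already given rb0 to alice, it steers bob to rb1 instead; an isolated SBS, seeing only its own reports, could not have made that decision.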
To support the increased number of devices, a novel technique called Non-Orthogonal Multiple Access (NOMA) is also used. Unlike current networks, where resources are assigned exclusively to individual users, with NOMA the same resources (i.e. frequency band, time frame, code) can be utilised simultaneously by multiple users. This feature is often investigated together with C-RAN due to its benefits towards solving the massive connectivity problem.
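A common textbook form of this idea is power-domain NOMA with successive interference cancellation (SIC): two users share the whole band, the far user gets most of the transmit power, and the near user subtracts the far user’s signal before decoding its own. The sketch below uses entirely illustrative channel gains and a hypothetical power split to compare this against an orthogonal (OMA) baseline where each user gets half the band:

```python
import math

def rate(bandwidth, sinr):
    """Shannon rate for a given bandwidth and SINR."""
    return bandwidth * math.log2(1 + sinr)

B, P, N = 10e6, 1.0, 1e-9        # bandwidth, total TX power, noise (illustrative)
g_near, g_far = 1e-6, 1e-7       # channel gains: the near user is stronger
a_near, a_far = 0.2, 0.8         # NOMA power split: more power to the far user

# Far user decodes directly, treating the near user's signal as interference.
r_far = rate(B, (a_far * P * g_far) / (a_near * P * g_far + N))
# Near user first decodes and subtracts the far user's signal (SIC),
# then decodes its own signal interference-free.
r_near = rate(B, (a_near * P * g_near) / N)

# OMA baseline: each user gets half the band with full power.
r_far_oma = rate(B / 2, P * g_far / N)
r_near_oma = rate(B / 2, P * g_near / N)

print(f"NOMA sum rate: {(r_near + r_far) / 1e6:.1f} Mbit/s")
print(f"OMA  sum rate: {(r_near_oma + r_far_oma) / 1e6:.1f} Mbit/s")
```

In this toy setting the combined NOMA throughput exceeds the OMA one because both users exploit the full bandwidth at once; how the power split affects fairness between the two users is a separate design question.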
Despite the high computational resources of the centralised BBU pool, which can theoretically improve the average throughput and minimise the delay, there are still limitations on how to do so in real time. As mentioned earlier, each user moves randomly, with random activity, in an unknown environment, so the distribution of resources must adapt rapidly to achieve the theoretical gains of NOMA in a C-RAN architecture. This topic is still open and under investigation by the research community.
Problems of this type can be addressed with game theory, which is relatively new to the area of wireless networks.
Game theory has been successfully used for decades in several areas including economics, politics and social sciences, but has appeared in wireless networks only recently in the form of algorithms showing promising results.
The approach of game theory is to provide optimal (or sub-optimal) solutions by considering all (or as many as possible) of the relevant factors. Nash stability ensures that the system converges to a final state in which no player can improve its own value by unilaterally changing its decision.
A very simple example involves three kids, Kiki, Agatha and Michael, deciding whether to pool their money to buy ice cream: the teams they form determine how much each of them ends up with.
In a telecommunication network, the variables are constantly changing, and each combination affects the condition (i.e. the money) of the rest of the players. Imagine that Kiki and Agatha decide to cooperate: Michael’s money might increase or decrease. This is caused by the interference, which varies with each combination; a game with this property is called a Non-Transferable Utility (NTU) game. Moreover, a BS could serve thousands of users, so the complexity increases significantly and the time needed to find the best-case scenario grows remarkably. However, game-theoretic approaches have shown that optimal and sub-optimal solutions can be reached when applied in the form of algorithms.
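The ice-cream game can be sketched as a tiny coalition formation game. The payoff rule below is entirely made up for illustration: teaming up pays more, and a player left alone is slightly penalised when the others cooperate, mimicking the interference a lone base station suffers from a coordinated pair. The code enumerates all partitions of the three players and keeps those that are Nash-stable, i.e. where no single player gains by leaving its coalition:

```python
players = ["Kiki", "Agatha", "Michael"]

def payoff(partition, player):
    """Hypothetical NTU-style payoff: depends on the whole partition."""
    mine = next(c for c in partition if player in c)
    value = 2.0 * len(mine)                 # bigger team, bigger budget
    if len(mine) == 1 and any(len(c) > 1 for c in partition):
        value -= 1.0                        # hurt by the others' cooperation
    return value

def partitions(items):
    """Yield every way of splitting `items` into coalitions."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for smaller in partitions(rest):
        yield [[head]] + smaller            # head goes alone
        for i, coalition in enumerate(smaller):
            yield smaller[:i] + [[head] + coalition] + smaller[i + 1:]

def move(partition, player, target):
    """Partition after `player` leaves its coalition and joins `target`."""
    new = [[p for p in c if p != player] for c in partition]
    new = [c for c in new if c]
    if target is None:                      # go alone
        new.append([player])
    else:
        for c in new:
            if set(c) == set(target):
                c.append(player)
    return new

def nash_stable(partition):
    for player in players:
        here = payoff(partition, player)
        targets = [None] + [c for c in partition if player not in c]
        if any(payoff(move(partition, player, t), player) > here
               for t in targets):
            return False                    # a profitable deviation exists
    return True

stable = [p for p in partitions(players) if nash_stable(p)]
print(stable)
```

Under this made-up payoff rule, the only stable outcome is the grand coalition of all three kids; with thousands of users instead of three players, the number of partitions explodes, which is why fast algorithms are needed to reach a stable state.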
The high-speed algorithms derived from our research show that when game theory is applied in Cloud RAN architectures, where data are gathered and processed concurrently, the best state can be reached much faster than with conventional methods. Continuing our research, we aim to reduce the algorithms’ running time even further and to design a strong candidate that will meet and surpass the latency requirements of 5G mobile communications.