SOLID-B5G
A Massive MIMO Enabled IoT Platform with Network Slicing for Beyond 5G IoV/V2X and Maritime Services
The main goal of the SOLID-B5G project is to develop beyond state-of-the-art solutions for the orchestration, management and control (OMC) of resources, in the context of network slicing and edge computing, based on an mIoT-enabled radio access network (RAN) and core network (CN) for B5G IoV/V2X and maritime applications. The project outputs will be grounded in our cutting-edge research.
Description of the Norway Grants program
The project scope is in full conformity with the Norway Grants CRP Call, aiming at enhancing research-based knowledge development in Romania through long-term collaborations with two Norwegian partners as well as two other eminent third-country partners.
Papers
2021
Abstract: Fifth generation (5G) networks offer tremendous opportunities for Internet of things applications by facilitating massive machine-type communications (MTC). As many MTC devices are battery powered and often stay in sleep or power-saving mode, cell (re)association needs to be performed when a device wakes up. On the other hand, 5G networks are usually deployed as heterogeneous networks consisting of both macro and small cells, under which a single device may be covered by multiple radio access technologies (RATs) simultaneously. Therefore, it is imperative to design effective cell association schemes for efficient connectivity and resource utilization, especially when a device has multiple connectivity options. In this paper, three cell association schemes are proposed by considering a network scenario where multiple cells and different RATs are available for MTC devices. To perform cell association, received signal strength, channel occupancy status of neighboring cells, and directed handoff capability are considered as the criteria for our scheme design. Through analysis and extensive simulations, we demonstrate that superior system performance can be achieved for a given network by employing a suitable scheme that integrates multiple criteria and considers performance tradeoffs.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9482587&isnumber=9482415
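The multi-criteria association rule described above can be sketched as follows. This is a minimal illustration, not the paper's actual schemes: the weights and the RSS normalization range are hypothetical, and the real design additionally considers directed handoff capability.

```python
# Sketch: multi-criteria cell association for an MTC device waking up.
# Criteria (received signal strength, channel occupancy) follow the
# abstract; the scoring weights and normalization range are hypothetical.

def associate(cells, w_rss=0.6, w_occ=0.4):
    """Pick the cell with the best weighted score.

    cells: list of dicts with 'id', 'rss_dbm' (received signal strength)
    and 'occupancy' (fraction of occupied channels, 0..1).
    """
    def score(c):
        # Normalize RSS from a typical [-120, -60] dBm range to [0, 1].
        rss_norm = (c["rss_dbm"] + 120) / 60.0
        return w_rss * rss_norm + w_occ * (1.0 - c["occupancy"])
    return max(cells, key=score)["id"]

cells = [
    {"id": "macro-1", "rss_dbm": -95, "occupancy": 0.8},  # strong but busy
    {"id": "small-3", "rss_dbm": -105, "occupancy": 0.1},  # weaker, mostly free
]
```

With the weights above, the lightly loaded small cell wins despite its weaker signal; a pure-RSS scheme (w_occ = 0) would pick the macro cell instead, which is the kind of tradeoff the abstract refers to.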
Abstract: The paper uses machine learning models for SDN network traffic classification and compares their performance. A simple SDN architecture was created in which traffic is generated using the "D-ITG" utility. Traffic data is saved by a processing script and then used for training and testing the machine learning models. To label the traffic based on the groups generated by the unsupervised model, we used the best-performing supervised model. To achieve the highest possible classification performance with the unsupervised model, we pre-selected the group centroids and compared the results with those obtained from randomly generated centroids; this new approach yielded an accuracy of 83.5 percent, which is better than results reported in the literature for such a small number of groups.
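The chosen-versus-random centroid comparison can be illustrated with a toy one-dimensional k-means. The data and seed values here are synthetic stand-ins for the SDN traffic features, not the paper's dataset.

```python
import random

def kmeans_1d(data, centroids, iters=20):
    """Tiny 1-D k-means; 'centroids' are the initial centers.

    Deliberately choosing the initial centroids (vs. sampling them at
    random) is the comparison the abstract describes.
    """
    c = list(centroids)
    for _ in range(iters):
        # Assign each point to its nearest current center.
        groups = [[] for _ in c]
        for x in data:
            groups[min(range(len(c)), key=lambda i: abs(x - c[i]))].append(x)
        # Recompute each center as the mean of its group.
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return sorted(c)

# Two well-separated synthetic "traffic feature" clusters.
data = [1.0, 1.2, 0.8, 9.0, 9.3, 8.7]
chosen = kmeans_1d(data, centroids=[1.0, 9.0])      # hand-picked seeds
random.seed(0)
rand = kmeans_1d(data, centroids=random.sample(data, 2))  # random seeds
```

If both random seeds land in the same cluster, the random run converges to a worse grouping than the hand-picked run, which is the effect the pre-selected centroids are meant to avoid.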
Abstract: Network slicing enabled by fifth generation (5G) systems has the potential to satisfy diversified service requirements from different vertical industries. As a typical vertical industry, smart distribution grid poses new challenges to communication networks. This paper investigates the behavior of network slicing for smart grid applications in 5G radio access networks with heterogeneous traffic. To facilitate network slicing in such a network, we employ different 5G radio access numerologies for two traffic classes which have distinct radio resource and quality of service requirements. Three multi-dimensional Markov models are developed to assess the transient performance of network slicing for resource allocation with and without traffic priority. Through analysis and simulations, we investigate the effects of smart grid protection and control traffic on other types of parallel traffic sessions as well as on radio resource utilization.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9617808&isnumber=9617763
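As a hedged illustration of the kind of resource-sharing analysis involved, the classical Erlang-B recursion gives the steady-state blocking probability for a single shared pool of radio resources. The paper's own models are multi-dimensional and transient, so this is only a textbook baseline, with hypothetical numbers.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability via the numerically stable recursion.

    servers: number of resource units (e.g. schedulable resource blocks).
    offered_load: traffic intensity in Erlangs (arrival rate / service rate).
    """
    b = 1.0  # blocking with zero servers
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Example: 10 resource units offered 7 Erlangs of traffic.
blocking = erlang_b(10, 7.0)
```

Adding resources (more servers) monotonically lowers blocking, which is the basic dimensioning tradeoff the sliced-resource models quantify per traffic class.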
Abstract: Vehicles and transportation systems are essential parts of today's society, being driving factors for the development of vehicular networks and associated services. The vehicle-to-everything (V2X) concept comprises several communication modes, such as vehicle-to-vehicle (V2V), vehicle-to-pedestrian (V2P), vehicle-to-road infrastructure (V2R/V2I), and vehicle-to-network (V2N), as defined in 3GPP documents. An extension of V2X is the Internet of Vehicles (IoV), seen as a global network which includes V2X. The IoV objectives include driving services for vehicles but also others, such as vehicle traffic management in urban or rural areas, automobile production, entertainment services, road infrastructure construction and repair, etc.
The fifth generation (5G) technology is a strong candidate to support IoV/V2X communications and services. 5G offers highly flexible and programmable end-to-end communication, networking, and computing infrastructures achieving high performance (throughput, latency, reliability, capacity, and mobility). 5G network slicing (with logical, mutually isolated networks sharing the same physical infrastructure) can serve different classes of applications. Dedicated 5G slices can be constructed for V2X use cases, to meet their special requirements (e.g., low latency, high reliability, security, throughput, mobility). However, the high dynamicity of the environment and the complexity of V2X services call for edge-oriented computing infrastructures.
This keynote presentation provides an overview of edge computing support for V2X/IoV communications and especially the integration of Multi-access Edge Computing (MEC) in 5G systems, including slice-based architectures.
Abstract: Identification of ecosystems and Business Models (BM) is an important starting point for new complex system development. The definition of actor (or stakeholder) roles and their interactions (at both business and technical levels), together with target scenarios and use cases, provides essential input information for further system requirement collection and architecture specification. The powerful and flexible Fifth Generation (5G) network slicing technology, which is capable of creating virtually isolated and logically parallel networks, enables a large range of complex services and vertical applications. Although various terminologies and models have been proposed in recent years for BMs in the 5G domain by many studies, projects, standards, and technical specifications from dedicated organizations, they are not always consistent with each other. This study presents an overview and comparison of different BMs for 5G sliced systems, followed by an example of BM definition for a 5G system in a novel ongoing European research project. While a general ecosystem and business model could involve a large range of organizations (including, e.g., regulation and standardization bodies), the scope of this article is limited primarily to 5G operational BMs, with a focus on those actors or stakeholders who are active and interacting during real-life system operation. Within the project, we perform a selection among some tailored BM configurations, adapted for dedicated slices with different service requirements, aiming to serve Vehicle to Everything (V2X) and Internet of Vehicles (IoV) applications as well as Internet of Things (IoT) for maritime vertical applications. The final part of this article presents our proposed IoT connectivity solutions for various maritime scenarios with and without involving satellite links. Furthermore, we shed light on future challenges and directions for network slicing in beyond 5G systems.
Abstract: Heterogeneous cellular networks (HetNets) are one of the key enabling technologies for fifth generation (5G) networks. In HetNets, the use of small base stations (SBSs) inside the coverage area of a macro base station (MBS) offers higher throughput and improved coverage. However, such multi-tier base station deployment introduces new challenges, e.g., (i) all users experience significant inter-cell interference (ICI) due to frequency reuse, (ii) SBS-associated users experience severe MBS interference due to higher MBS transmit power, and (iii) MBS coverage-edge users receive a lower signal-to-interference ratio (SIR) due to longer distances. To address the aforementioned challenges, this work proposes a framework, including a novel cell deployment strategy, which combines Stienen's model with soft frequency reuse (SFR), and the corresponding performance analysis. According to Stienen's cell based SBS deployment, SBSs are deployed in the MBS coverage-edge area to enhance downlink SIR. To further mitigate ICI and MBS interference, SFR is employed along with Stienen's cell based SBS deployment. Based on stochastic geometry, we derive expressions for coverage probability with four different combinations of cell deployment and SFR employment. Numerical results indicate that SFR-enabled Stienen's cell based SBS deployment leads to enhanced edge user coverage and, hence, improves network performance gain.
Abstract: Although grant-based mechanisms have been a predominant approach for wireless access for years, the additional latency required for initial handshake message exchange and the extra control overhead for packet transmissions have stimulated the emergence of grant-free (GF) transmission. GF access provides a promising mechanism for carrying low and moderate traffic with small data and fits especially well for massive machine type communications (mMTC) applications. Despite a surge of interest in GF access, how to handle heterogeneous mMTC traffic based on GF mechanisms has not been investigated in depth. In this paper, we propose a priority enabled GF access scheme which performs dynamic slot allocation in each 5G new radio subframe to devices with different priority levels on a subframe-by-subframe basis. While high priority traffic has access privilege for slot occupancy, the remaining slots in the same subframe will be allocated to low priority traffic. To evaluate the performance of the proposed scheme, we develop a two-dimensional Markov chain model which integrates these two types of traffic via a pseudo-aggregated process. Furthermore, the model is validated through simulations and the performance of the scheme is evaluated both analytically and by simulations and compared with two other GF access schemes.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9335255&isnumber=9433469
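The priority rule (high-priority traffic occupies slots first, with the remainder of the subframe going to low-priority traffic) can be sketched minimally. The slot counts below are hypothetical; the actual scheme operates per 5G NR subframe with the Markov chain model behind it.

```python
def allocate_slots(num_slots, high_requests, low_requests):
    """Allocate slots in one subframe: high priority first, remainder to low.

    Returns (high_granted, low_granted). A minimal sketch of the priority
    rule described in the abstract, not the full GF access scheme.
    """
    high = min(high_requests, num_slots)          # privileged occupancy
    low = min(low_requests, num_slots - high)     # leftover slots only
    return high, low
```

For example, with 14 slots in a subframe, 5 high-priority and 20 low-priority requests leave 9 slots for low-priority devices, while 20 high-priority requests starve the low-priority class entirely for that subframe.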
Abstract: Currently, the use of unmanned vehicles, such as drones, boats and ships, in monitoring tasks where human presence is difficult or even impossible raises several issues. Continuous efforts to improve the autonomy of such vehicles have not solved all aspects of this issue. In an Internet of Unmanned Vehicles (IoUV) environment, the idea of replacing the static wireless infrastructure and reusing the mobile monitoring nodes in different conditions would converge to a dynamic solution to assure data collection in areas where there is no infrastructure that ensures Internet access. The current paper fills a significant gap, proposing an algorithm that optimizes the positions of unmanned vehicles such that an ad hoc network is deployed to serve specific wireless sensor networks that have no other Internet connectivity (hilly/mountainous areas, Danube Delta) and must be connected to an Internet of Things (IoT) ecosystem. The algorithm determines the optimum positions of UV nodes that decrease the path losses below the link budget threshold with minimum UV node displacement compared to their initial coordinates. The algorithm was tested in a rural scenario with the 3rd Generation Partnership Project (3GPP), free-space, and two-ray propagation models. The paper also proposes another type of network, a Flying and Surface Ad Hoc Network (FSANET), a concept which implies collaboration and coexistence between unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs), together with several use cases that motivate the need for such a network.
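The link-budget test at the core of such position optimization can be sketched with the free-space model (one of the three propagation models the abstract mentions). The loss threshold below is hypothetical; the paper's algorithm additionally minimizes node displacement.

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB (Friis formula):
    FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c) ~= ... - 147.55
    """
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def link_ok(distance_m, freq_hz, max_loss_db):
    """True if the link budget tolerates the path loss at this distance."""
    return fspl_db(distance_m, freq_hz) <= max_loss_db
```

At 2.4 GHz, a 1 km link suffers roughly 100 dB of free-space loss, so a hypothetical 110 dB budget closes at 1 km but not at 10 km; the optimizer's job is to move UV nodes until every sensor link passes this kind of check.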
Abstract: Recent advancements in information and communication technologies (ICT) have improved the power grid, leading to what is known as the smart grid. As part of a critical economic and social infrastructure, the smart grid is vulnerable to security threats arising from the use of ICT, as well as to newly emerging vulnerabilities and privacy issues. Access control is a fundamental element of a security infrastructure, and security is based on the principles of least privilege, zero trust, and segregation of duties. This work addresses how access control can be applied without disrupting the power grid's functioning, while properly maintaining the security, scalability, and interoperability of the smart grid. Authentication in the platform relies on digital certificates using a web of trust. This paper presents the findings of the SealedGRID project and the steps taken for implementing attribute-based access control policies specifically customized to the smart grid. The outcome is a novel, hierarchical architecture composed of different licensing entities that manage access to resources within the network infrastructure. These entities are governed by well-defined policy rules, and the security of these resources is enforced through a context-awareness module. Together with this technology, IoT is used with Big Data (facilitating easy handling of large databases). Another goal of this paper is to present the implementation and evaluation details of a secure and scalable security platform for the smart grid.
Abstract: Farming livestock (cattle, sheep, goats, pigs, and chickens) contributes to air pollution of the atmosphere. Agricultural air pollution comes mainly in the form of ammonia, which enters the air as a gas from heavily fertilized fields and livestock waste. A reduction in air pollutants from the livestock sector can be achieved by reducing production and consumption, lowering the emission intensity of production, or combining the two. This work proposes an approach for assessing the air pollutant emissions derived from intensive cattle farming. To do this, data on animal feed, animal behavior and characteristics, and the stable environment are monitored and collected by a cloud platform. Specifically, Internet of Things (IoT) devices are installed on the farm and key air pollutant parameters of the stable environment (such as CO, NH3, PM1, PM2.5, PM10) are monitored. In this scope, a study on monitoring air pollutants is conducted, presenting the most relevant platforms used in this domain. Additionally, the paper presents a comparison between the estimated and monitored air pollutants (AP), showing the fluctuation of the measured parameters. The key takeaway of the study is that the ammonia concentration is higher during the night, being influenced by the ventilation system of the farm.
Abstract: Smart cities have frontline responsibility for ensuring a secure and safe physical and digital ecosystem, promoting cohesive and sustainable urban development for the wellbeing of human beings. In this paper, we propose to integrate advanced technological solutions in a market-oriented unified Cyber-Physical Security Management framework, aiming at raising the resilience of cities' infrastructures, services, ICT, and IoT, and fostering intelligence and information sharing among city security stakeholders. The project we implement, "Smart Spaces Safety and Security for All Cities" (S4ALLCITIES), deals with a Systems-of-Systems architecture to deploy and validate its intelligent components and functionalities in an actual environment, ensuring the delivery of solutions and services in line with emerging smart city requirements, focused on: risk-based open smart spaces security management; cyber security shielding and behavior tracking; real-time estimation of cyber-physical risks in multiple locations; and measures activation for effective crisis management.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9637717&isnumber=9637710
Abstract: This research work aims to design a decentralized decision-making mechanism bringing data processing capacity and intelligence to the edge, to develop 5G network slicing methods, algorithms, and protocols, and to implement a proof-of-concept standalone B5G network. V2X and satellite-based maritime low-latency services are targeted. The paper presents how multiple wireless access technologies are designed to provide connectivity in vehicular networks and cellular communications using IoT. Recently, cellular V2X was standardized and developed for automotive services. V2X supports two communication modes through a single platform, providing both Wi-Fi and cellular communication. 5G new radio (NR) is expected to address automotive capabilities, improvements, and services for the 21st century. 5G-NR is becoming a competitive technology compared to other wireless technologies due to its high reliability, high capacity, extensive coverage, and low delay. We optimize 5G with V2X and analyze the current V2X standards, introducing the development of 5G and considering its challenges, features, requirements, design, and technologies. We use Libelium smart city solutions for traffic measurements to analyze the congestion of vehicles or pedestrians by anonymously detecting and tracking the MAC addresses of users' devices on the road.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9663700&isnumber=9663417
Vehicle-to-Everything (V2X) communications and the Internet of Vehicles involve complex multi-actor systems that cooperate in order to support the capability of vehicles to exchange data with other entities (vehicles, infrastructure, grid, pedestrians, etc.). V2X services mainly aim to improve transport, safety and comfort on the roads, and also to support autonomous driving. The 5G technology can provide powerful support for V2X in multi-tenant, multi-domain, multi-operator and end-to-end contexts. In particular, using the 5G slicing capabilities, one can construct virtual parallel networks (slices) dedicated to different applications and services (verticals). Consequently, dedicated 5G slices can also be built to meet the special requirements of V2X. The complexity of V2X systems (involving many actors) and the multitude of visions have led to the proposal of many variants of V2X ecosystems and business models comprising several cooperating actors. Defining the business models (BMs, seen here mainly from a technical point of view) is important; they essentially provide input information for system requirements identification and architecture definition, and for V2X systems this is still an open research topic. This paper is an extension of a previous work presented at the IARIA ICNS 2020 Conference. It focuses on the so-called operational business model, containing only the actors that are active during system exploitation and maintenance, rather than on general business models including economic and financial aspects. The paper analyzes and compares several relevant operational BMs for 5G slicing and discusses how they can be adapted for the rich V2X environment. The stakeholder roles and interactions are discussed, and heterogeneity among different BMs is analyzed. Finally, a BM for a V2X system proposed by the authors in a novel 5G-oriented research project is introduced.
Open-Source MANO (OSM) is an ETSI-hosted project supporting the development of an open-source Network Function Virtualization (NFV) Management and Orchestration (MANO) software stack aligned with ETSI NFV. Control System Hub (CSH) is our own Software as a Service (SaaS) platform for service assurance and analytics of Quality of Service (QoS) and other parameters. To meet the growing demand for IT applications, enterprises and service providers must build and expand the use of edge and multi-access edge computing (MEC). This raises difficulties concerning how to optimally place network service automation workloads and how to evaluate the costs. This paper aims to overcome these difficulties and presents the deployment and monitoring of edge services in three different use cases that can share the same hardware setup and logical topology.
Abstract: Vehicle-to-Everything (V2X) communications, the Internet of Vehicles (IoV) based on V2X, and their services have been intensively studied and developed in the last decade. V2X supports a large range of applications, such as safety-oriented services, vehicular traffic optimization, autonomous driving, infotainment, and auxiliary operations in the vehicular area. Various stakeholders/actors play roles in such a complex system, e.g., regulators, authorities, service or network providers, operators, manufacturers, tenants and end users. Therefore, to specify and design a specific V2X/IoV system, one should first identify the ecosystem actors and then derive the system requirements in a structured way, while harmonizing needs coming from different entities. The 5G slicing technology is seen as a strong candidate to support V2X communications in multi-tenant, multi-domain, multi-operator and end-to-end contexts. 5G slicing allows the construction of dedicated slices to meet particular V2X requirements. Given the large variety of environments and actors involved in a planned V2X system, the identification of the system requirements is a complex process that could benefit from a structured approach. This paper is an extension of the work presented at the IARIA Mobility 2020 Conference. It contributes to developing a methodology for a top-down, systematic identification of requirements for a V2X system supported by 5G dedicated slices. Examples from a recent research project in the 5G area are given.
This paper presents a framework to implement an Internet of Things (IoT) system using fuzzy logic. The framework provides general fuzzy membership functions that can be particularized for various applications by implementing specific fuzzy rules to determine the system output. The motivation of this work is to create and test a flexible framework for the rapid implementation of IoT applications. Generic functions are implemented and integrated into a commercial hardware platform. The framework is evaluated in terms of computation time and power consumption using a wireless sensor network (WSN) evaluation kit from Analog Devices. Also, a system for determining a person's health condition is illustrated as an application implementation example using the proposed fuzzy IoT framework.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9606349&isnumber=9606243
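A generic membership function of the kind such a framework provides can be sketched as a triangular function. The temperature fuzzy set below is a hypothetical particularization for the health-condition example, not the framework's actual rule base.

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b.

    Returns the degree of membership in [0, 1].
    """
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    # Rising edge on [a, b], falling edge on [b, c].
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical "fever" fuzzy set for a health-condition rule:
# membership rises from 37 C, peaks at 39 C, falls to zero by 41 C.
def fever_degree(temp_c):
    return trimf(temp_c, 37.0, 39.0, 41.0)
```

An application would combine several such memberships with fuzzy rules (e.g. min for AND, max for OR) to produce the system output, which is the particularization step the abstract describes.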
2022
A Pseudo-Bayesian Framework for Estimating the Number of Active Devices for Subframe based Grant-Free Access in 5G NR Networks: Although the number of massive machine type communications (mMTC) devices covered by a fifth generation (5G) cell could be huge, not all devices are always active. For radio resource allocation, it is vital that a base station has knowledge of the number of active mMTC devices in each frame. In this paper, we develop a mathematical framework to perform pseudo-Bayesian estimation of the number of active devices within a 5G new radio (NR) subframe. The estimation is performed by the base station on a subframe-by-subframe basis, based upon which a transmission permission probability is imposed on active devices for their transmissions in the next subframe. The active devices include both backlogged devices and new arrivals in the current subframe that will attempt medium access in the next subframe, relying on a combination of NR radio resources in both the time and frequency domains. Such an estimation constitutes an important component for the design of grant-free access schemes for mMTC uplink traffic.
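The estimate-then-permit loop can be illustrated with the classical Rivest-style pseudo-Bayesian estimator for slotted access. This is a simplified textbook stand-in for the subframe-based estimator the abstract describes, not the paper's actual framework.

```python
import math

E_MINUS_2 = math.e - 2  # collision correction constant, ~0.718

def update_estimate(n_hat, outcome, arrival_rate):
    """One observation step of Rivest-style pseudo-Bayesian backlog estimation.

    outcome: 'idle', 'success', or 'collision' observed by the base station.
    arrival_rate: expected new arrivals per step (they join the backlog).
    """
    if outcome == "collision":
        n_hat += 1.0 / E_MINUS_2          # collision implies more devices
    else:
        n_hat = max(arrival_rate, n_hat - 1.0)  # idle/success: one fewer
    return n_hat + arrival_rate

def permission_probability(n_hat):
    """Transmission permission probability imposed on active devices."""
    return min(1.0, 1.0 / max(n_hat, 1e-9))
```

With an estimate of about 4 active devices, each device is permitted to transmit with probability 0.25, which keeps the expected number of simultaneous transmissions near one per resource; the paper extends this idea to combinations of NR time/frequency resources per subframe.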
Network slicing is an emerging concept that can provide parallel virtual networks (slices) as dedicated solutions for various tenants. Slices can be built on an infrastructure with the same hardware, but they are logically isolated from the viewpoint of service provisioning and security. Network Function Virtualization (NFV) plays an essential role in a sliced system by providing the softwarization of network functions. Powerful orchestration and management functions are important in order to construct, run, and adapt the complex NFV framework to satisfy the needs of tenants. One popular orchestrator is the Open Source MANO project, which provides a software stack for orchestration. In this paper, an automated solution for the process of slice creation with this orchestrator is proposed. The automation platform is developed using React, a widely used JavaScript framework.
Abstract: In wake-up radio (WuR) enabled Internet of things (IoT) networks, a data communication occurs in a synchronous or asynchronous manner initiated either by a transmitter or receiver. A synchronous transmission is triggered when multiple devices report an event simultaneously, or by a common wake-up call. In this paper, we focus on synchronous transmissions and propose a multicast triggered synchronous transmission protocol, abbreviated as MURIST, which enables contention based and coordinated data transmissions among distributed devices in order to reduce transmissions latency and energy consumption. Furthermore, we develop a novel analytical model based on an absorbing Markov chain to evaluate the performance of MURIST in a network cluster. Unlike existing models that are merely targeted at the behavior of a single device, the novelty of our model resides in a generic framework to assess the behavior of a cluster of devices for synchronous data transmissions. Based on the analytical model, we obtain closed-form expressions for the distributions of successful and discarded transmissions, number of collisions, and delay, as well as for energy consumption. Extensive simulations are performed to validate the accuracy of the analytical model and evaluate the performance of our scheme versus that of two other schemes.URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9760727&isnumber=4656680
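The fundamental-matrix route behind such closed-form results can be sketched on a toy absorbing chain: the expected number of steps to absorption from each transient state solves (I - Q)t = 1, where Q collects transitions among transient states. The chain below is a hypothetical two-state example, not the MURIST model itself.

```python
def expected_steps_to_absorption(Q):
    """Expected steps to absorption from each transient state.

    Solves (I - Q) t = 1 by Gaussian elimination with partial pivoting,
    where Q[i][j] is the transition probability between transient states.
    Toy-sized inputs only.
    """
    n = len(Q)
    # Augmented system [(I - Q) | 1].
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# State 0 retries (stays) w.p. 0.5 or moves to state 1; state 1 always
# absorbs (its Q row is all zeros), e.g. a successful transmission.
steps = expected_steps_to_absorption([[0.5, 0.5], [0.0, 0.0]])
```

Here the expected delay from state 0 is 3 steps (one step plus the geometric retry cost), the kind of quantity the paper's closed-form delay and energy expressions generalize to a whole cluster of devices.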
To address the challenges faced by non-orthogonal multiple access (NOMA) for multiple access in beyond fifth generation (B5G) networks, rate-splitting multiple access (RSMA) has newly emerged as a promising approach. In addition to achieving high data rates, RSMA may be used for massive machine-type communication (mMTC)/massive Internet of things (mIoT) applications as well. Although RSMA addresses the user coupling and scheduling overhead problems met by NOMA, it is not clear whether and how RSMA can be applied to uplink mMTC/mIoT traffic. In this study, we explore the applicability of RSMA for uplink traffic and investigate whether there is any benefit to employing RSMA and, if yes, how this benefit can be achieved. To this end, we propose a novel uplink RSMA scheme with two constituent components, known as inter-device decoding and intra-device decoding, respectively. Through user pairing for inter-device decoding and power allocation for intra-device decoding, we reveal under which circumstances RSMA can bring benefits for uplink traffic.
The Software Defined Network (SDN) concept has changed the traditional network architecture to achieve higher flexibility, network programmability, abstraction based on a logically centralized control plane, and decoupling of the data (forwarding) plane. However, SDN control centralization raises scalability issues. This work focuses on the scalability concerns of the Ryu SDN controller in various scenarios with distinct network topologies, such as tree and datacenter topologies. Throughout this paper, we assess the consequences of distributing subscriber data requests on controller performance across different network topologies and with different numbers of subscribers. The performance of Ryu is tested by observing the throughput of the controller. The assessment was accomplished by employing an Iperf traffic generator and Mininet. The paper studies two matters that impact Ryu throughput: the exponential increase of the number of nodes in a tree topology and the adoption of the same conditions in a datacenter topology. The experiments involve the two types of topologies mentioned, with topology sizes ranging from 200 to 2000 nodes, and assess controller performance using throughput as the performance criterion. The results of this research will be used in a research project oriented to 5G slicing, where the control plane is based on SDN.
IoT devices represent an ever-growing domain with multiple applications. This technology has favored and still favors many areas by creating critical infrastructures that are as profitable as possible. This paper presents a hierarchical architecture composed of different licensing entities that manage access to different resources within a network infrastructure. They operate on the basis of well-defined policy rules. At the same time, security for these resources is enforced through a context-awareness module. Together with this technology, IoT is used and Blockchain is enabled (for network consolidation, as well as for the transparency with which the platform can be monitored). The ultimate goal is to implement a secure and scalable security platform for the Smart Grid. The paper presents the work undertaken in the SealedGRID project and the steps taken for implementing security policies specifically tailored to the Smart Grid, based on advanced concepts such as Opinion Dynamics and Smart Grid-related Attribute-based Access Control.
Distributed multi-controller deployments are explored in large SDN-controlled networks to achieve control plane scalability and reliability. There are many studies on the Controller Selection or Placement Problem (CSP/CPP). The majority of them consider a static approach to optimize the controller placement and a static mapping of switches to controllers. However, in a dynamic topology (susceptible to controller overload and node or link failures), the initial mapping of forwarders to controllers, and even the controller placement itself, could become non-optimal with respect to QoS service requirements. The flow of data through multiple controllers could vary, leading to an unequal distribution of load between controllers. To overcome this challenge, a solution could be a limited dynamic switch migration when controller overload is detected. The contribution of this work-in-progress is a proposal of a powerful condition-aware mechanism for switch migration and the validation of its functionality. The system considered in this work includes the Redis database, the Ryu controller, Mininet, and Iperf, and introduces the concept of a Supervisor controller. Our prototype resolves multiple overloads, simultaneously fixing the load balancing problems within the SDN controller plane.
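The overload-triggered migration condition can be sketched as follows. This is a minimal sketch of the trigger only, with a hypothetical threshold; the actual Supervisor-controller logic, Redis state sharing, and switch selection are not modeled here.

```python
def plan_migrations(loads, threshold):
    """Plan switch migrations away from overloaded controllers.

    loads: dict mapping controller name -> current load (e.g. flow
    requests per second). Returns a list of (src, dst) decisions,
    moving load from each overloaded controller to the lightest one.
    """
    moves = []
    for src, load in sorted(loads.items()):
        if load > threshold:                     # condition-aware trigger
            dst = min(loads, key=loads.get)      # least-loaded controller
            if dst != src:
                moves.append((src, dst))
    return moves
```

For example, with loads {c1: 90, c2: 20, c3: 40} and a threshold of 60, only c1 is overloaded and sheds load toward c2; a full mechanism would also decide which switches to migrate and re-check loads after each move.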
Automating software orchestration and service development represents the newest trend in the development of the fifth generation (5G) core network (CN), as it enables flexible and scalable service deployment. The building blocks for such a trend include Containers, Docker, Kubernetes, and other orchestration methods that facilitate easy scaling, management and control, load balancing, and personalized quality of service. In this paper, we develop a containerized 5G standalone (SA) network, building two types of network topologies for 5G SA deployment based on the concepts of 5G cloud network functions, Docker containers, and Linux virtualization. Based on our implementation of both Minimalist Deployment and Basic Deployment, an assessment of the attach procedure is performed through next generation application protocol (NGAP) filtering along with subscriber information. Moreover, emulated transmission control protocol (TCP)/user datagram protocol (UDP) traffic is injected into the network and its performance is evaluated based on metrics such as traffic volume and data rate for both uplink and downlink.
To enable emerging applications of fifth generation (5G) systems for vertical scenarios like mobile broadband and Internet of things (IoT) for vehicles and maritime sectors, a new trend towards softwarization and establishment of private networks is gaining momentum. Different from earlier generations of mobile systems where network functions are traditionally operated based on dedicated hardware, 5G infrastructures can be virtualized and services can be provided based on software, common hardware and servers. In this paper, we present an open-source based prototype private network which has been implemented in the framework of an EEA research project SOLID-B5G. The prototype network is a 5G non-standalone (NSA) network with a fourth generation (4G) evolved packet core (EPC) connecting both evolved NodeB (eNB) and next generation NodeB (gNB) to commercial off-the-shelf (COTS) user equipment (UE). The prototype is implemented based on two popular open-source software platforms, namely OAI RAN and srsRAN. Based on our implementations, extensive experiments have been conducted to assess the performance of the prototype network in terms of downlink/uplink throughput and latency between different parts of the system.
Network slicing is a novel fifth generation (5G) concept that enables operators to create logical networks (slices), each dedicated to certain specific applications, on top of a common network infrastructure, computation and storage resources. These slices are logically isolated from each other with respect to network performance and security, providing services to vertical industries and various use cases. Accordingly, an orchestrator is needed in order to manage network resources for various slices. Among different orchestrators that have been developed, open source MANO (OSM) appears as a popular platform, providing a software stack for slice developers. In this paper, we present an implementation of deploying network slices based on the OSM automation platform and report preliminary experiments that have been performed. To validate our implementation, a specific media streaming use case has been selected and a slice with three pre-configured network services that contain a media server has been created.
Compared to earlier generations, the Fifth Generation of Communication Networks (5G) proposes a broader range of services, supporting more use cases and applications. In the first section of this paper, we’ll analyze the development of 5G technology, give a brief explanation of the idea, and provide some examples of application fields, namely Artificial Intelligence (AI), Smart Cities, Internet of Things (IoT), Industry 4.0, Smart Grid 1.0, and Vehicle-to-Everything (V2X) Communication. To address the current network limitations, the development of 6G vision, applications, technologies, and standards has already emerged as a well-liked research topic in academia and the industry. Therefore, in the next section, we will explain the concept of 6G technology and also give examples of application domains, including Internet of Everything (IoE), Smart Grid 2.0, Industry 5.0, Extended Reality (XR), Holographic Telepresence, and Vehicle-to-Everything (V2X) Communication.
This poster focuses on 5G applications in the Transport and Logistics (T&L) sector. One of the advantages of these applications is the speed and bandwidth of 5G, which allows access to data from both sensors and various smart devices in factories and warehouses without increasing data traffic. Thus, end users can obtain more data in real time, data that comes from different systems and smart assets in a unit (for example: telematics-enabled lift trucks, drones, conveyor and sortation systems, camera systems, or motors and pumps with IoT connectivity). All this is highlighted in the VITAL-5G project, which aims to improve the effectiveness of the way T&L verticals interact with the 5G network. This is done by providing innovative platforms and services, created to promote these capabilities in the T&L industry and to offer services previously unknown to its customers. In order to achieve everything that the project has proposed, it is first of all necessary to devise a marketing strategy: a market analysis will be performed to assess the state of the market, obtain information, capture market value chain trends, and identify enabling factors and/or barriers in the T&L 5G ecosystem.
Abstract—LoRaWAN-enabled networks represent a paradigm shift from short-range transmissions to long-range connections for various intelligent Internet of Things (IoT) applications. To ensure maximum coverage of LoRaWAN gateways, exhaustive experiments have to be performed beforehand for gateway deployment. Although simulation-based solutions are viable, the obtained results may not reflect on-field propagation and terrain conditions. On the other hand, commercial LoRaWAN network field testers are extremely limited in terms of analysis and statistics capabilities. In this paper, we present an end-to-end IoT platform which encompasses a built-in network coverage testing facility. The term built-in is used to convey that coverage testing is an in-built functionality; that is, it is not necessary to use an external solution to test the coverage of the LoRaWAN network, as the platform provides it by design. The platform is developed based on open-source software and commercial low-cost, low-power hardware, and it can be applied to various vertical sectors including smart agriculture, maritime applications, environment monitoring, and device tracking.
Fifth generation (5G) mobile communication systems are currently being intensively deployed worldwide. Different from earlier generation mobile systems, 5G offers a variety of novel features to satisfy distinct requirements of diverse applications. One of the most innovative aspects of 5G is its potential openness in radio access network (RAN) protocols, allowing RAN elements to be interchangeable and to be implemented using open-source software. This flexibility enables developing software solutions based on commercial off-the-shelf (COTS) hardware for facilitating specific applications. Meanwhile, open-source core networks (CNs) are available to be integrated with RAN software, empowering the softwarization of 5G or beyond 5G (B5G) systems to be completely open-source based. In this paper, we present two prototype implementations including a fourth generation (4G) multiple-input multiple-output (MIMO) system and a 5G non-standalone (NSA) system which are connected to the global Internet. The implementations are performed using two open-source software suites for RAN and CN respectively. Based on the developed prototype systems, extensive experiments are carried out to evaluate the performance of such networks with respect to connectivity, performance, and service satisfaction.
The Perception layer in Internet of Things (IoT) architectures is responsible for connecting sensor nodes and data acquisition units such that sensing devices capture relevant data from the corresponding environment. Most IoT platforms are designed to transmit data at fixed time intervals, which is a disadvantage in the modern world dominated by rapid changes in the evolution of events. This paper addresses improvements performed on an IoT platform dedicated to critical applications (e.g., fire, air pollution). This novel approach assumes the use of two equations empirically determined to compute the time interval between successive transmissions depending on the detected event. A new method for communication technology selection (LoRaWAN, Wi-Fi or cellular) is implemented and the time interval between two successive transmissions is adjusted according to the occurring event. Comparisons were highlighted for each analyzed case. The proposed method proved to be suitable for critical scenarios or scenarios that can generate false-positive alarms, due to abnormal variations of parameters.
URL: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10014973&isnumber=10014702
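The event-dependent transmission interval described in the abstract above can be illustrated with a small sketch. The linear interpolation and the thresholds below are assumptions for demonstration only; the paper derives its own two empirical equations.

```python
# Illustrative sketch of event-dependent transmission scheduling: the interval
# between successive transmissions shrinks as a sensor reading approaches a
# critical level. Formula and default bounds are assumed, not the paper's.

def next_interval(value, normal, critical, t_min=10, t_max=600):
    """Return seconds until the next transmission.

    value    -- current sensor reading (e.g., temperature, PM2.5)
    normal   -- reading considered normal
    critical -- reading at which the platform should report as fast as possible
    """
    if value <= normal:
        return t_max                      # nothing unusual: save energy
    if value >= critical:
        return t_min                      # critical event: report immediately
    # linear interpolation between the relaxed and the critical interval
    severity = (value - normal) / (critical - normal)
    return round(t_max - severity * (t_max - t_min))
```

A reading halfway between normal and critical would thus be reported roughly twice as often as a normal one, which matches the goal of reacting quickly to fires or pollution spikes while avoiding energy waste in calm periods.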
The estimation of an image geo-site solely based on its contents is a promising task. Compelling image labelling relies heavily on contextual information, which is not as simple as recognizing a single object in an image. An Auto-Encoder-based support vector machine approach is proposed in this work to estimate the image geo-site and to address the issue of misclassified estimations. The proposed method for geo-site estimation is evaluated using a dataset consisting of 125 classes of various images captured within 125 countries. The proposed work uses a convolutional Auto-Encoder for training and dimensionality reduction. After that, the acquired preprocessed input dataset is further processed by a multi-label support vector machine. The performance assessment of the proposed approach has been accomplished using accuracy, sensitivity, specificity, and F1-score as evaluation parameters. Eventually, the proposed approach for image geo-site estimation presented in this article outperforms Auto-Encoder-based K-Nearest Neighbor and Auto-Encoder-based Random Forest methods.
URL: https://www.mdpi.com/2078-2489/14/1/29
2023
To obtain benefits from non-orthogonal multiple access (NOMA) based concurrent transmissions for uplink Internet of things (IoT) traffic in multi-antenna enabled networks, device clustering has been a challenging task. Most previous studies focused on grouping devices into clusters that are served by different beams in multiple input multiple output (MIMO)-NOMA networks. In this paper, we perform an exploratory study on the impact of both intra- and inter-cluster interference on the performance of uplink concurrent transmissions, considering that multiple clusters are served by a single beam. For performance assessment, we define two metrics, cluster throughput and transmission latency, and evaluate network performance with various network configurations. The study provides insight on how to configure device clusters and perform access control in order to maximize performance and improve fairness. As devices closer to the base station experience less path attenuation, we introduce distinct access control mechanisms to improve transmission fairness for concurrent transmissions from different clusters. Index Terms—Multi-antenna, non-orthogonal multiple access, uplink traffic, concurrent transmissions, performance assessment. URL: http://personales.upv.es/~jmartine/Publications/ICC23.pdf
One particular example of a useful software application for Software Defined Networks (SDN) is a traffic analysis mechanism, which provides a network administrator with a control panel from which they can collect traffic data. The data can then be used to fit Artificial Intelligence (AI) models, which will further classify the traffic of the network in real time, enabling a network admin to monitor the network with ease. This paper presents an SDN classifier, aiming to achieve real-time multi-class traffic classification in a software-defined network. To enhance the classification accuracy, six artificial intelligence algorithms, including Logistic Regression, K-Nearest Neighbors (KNN), Naïve Bayes, Support Vector Machines (SVM), Decision Tree, and Artificial Neural Networks (ANN), are tested. Because several of these algorithms perform poorly on unnormalized data, the data is preprocessed by rescaling values between 0 and 1. Additionally, the paper explores the supervised learning potential of the last three algorithms in traffic classification. The findings show that ANN is among the top-performing algorithms, along with SVM and KNN. URL: https://doi.org/10.1145/3600160.3605078
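The 0-1 rescaling step mentioned above is plain min-max normalization; a minimal dependency-free sketch is shown below. A real pipeline would more likely use scikit-learn's MinMaxScaler.

```python
# A minimal sketch of min-max normalization: every feature column is mapped
# into [0, 1]. Constant columns are mapped to 0.0 to avoid division by zero.

def min_max_scale(rows):
    """rows: list of equal-length feature lists. Returns scaled copies."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [0.0 if hi[j] == lo[j] else (x - lo[j]) / (hi[j] - lo[j])
         for j, x in enumerate(row)]
        for row in rows
    ]
```

For example, packet sizes in bytes and inter-arrival times in microseconds end up on the same scale, so distance-based classifiers such as KNN and SVM are not dominated by the larger-magnitude feature.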
The ratification of the IEEE 802.11be WiFi 7 standard amendment has been driven by the need to achieve ultrahigh throughput and stringent low latency for emerging applications. To accomplish this goal, 802.11be relies on various novel techniques. However, as in many earlier WiFi standards, two significant challenges known as the hidden terminal problem and the exposed terminal problem still remain, leading to degraded network performance. In order to mitigate the effects of these terminal problems, a promising technique called dynamic sensitivity control (DSC) has been introduced since 802.11ax WiFi 6 networks. This paper proposes an effective algorithm for applying DSC in WiFi 7 networks, aiming to enhance the performance of IEEE 802.11be and 802.11ax stations by reducing collision probabilities resulting from hidden terminals. Through extensive evaluation and analysis, our algorithm showcases its potential in optimizing network performance for future 802.11be deployment. The findings contribute to the seamless integration of WiFi standards in the evolution of the wireless communication landscape towards next generation systems.
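The core DSC idea can be sketched in a few lines: a station with a strong signal from its own AP raises its carrier-sense threshold, so it no longer defers to distant transmissions it would not actually collide with. The margin and bounds below are illustrative values, not the parameters proposed in the paper.

```python
# A hedged sketch of dynamic sensitivity control (DSC): derive a station's
# carrier-sense threshold from the RSSI it receives from its own AP, clamped
# to a legal range. Margin and bounds are illustrative, not from the paper.

def dsc_threshold(rssi_from_ap_dbm, margin_db=20,
                  floor_dbm=-82, ceiling_dbm=-62):
    """Return the carrier-sense threshold (dBm) for a station."""
    threshold = rssi_from_ap_dbm - margin_db
    return max(floor_dbm, min(ceiling_dbm, threshold))
```

A station close to its AP (e.g., RSSI of -40 dBm) ends up with a much less sensitive threshold than a cell-edge station, reducing unnecessary deferrals while the floor protects ongoing nearby transmissions.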
IoT is expanding into all aspects of our lives. One of the domains that was not explored to its full potential in the past is traffic management, but this is changing fast due to new technologies being rolled out. Improvements in traffic flow with IoT and the new 5G networks are a relatively new domain and the focus of this article. 5G has opened a new range of possibilities in traffic management, with multiple interconnected nodes that can exchange data continuously at any movement speed. Besides speed, 5G provides better link stability and security, vital for traffic management. An evolution of IoT is the Internet of Vehicles (IoV) [1], which is considered an integral part of smart cities and is composed of several components that communicate through a wireless communications infrastructure so as to always have signal coverage. A new alternative for enabling such wireless connections is the 5G cellular network. Moreover, our article highlights the innovative SOLID-B5G project along with the latest developments in the IoT-5G field. The SOLID-B5G project focuses on solving network security issues and adopting flexible, high-bandwidth, and low-cost networks, facilitating the interconnection of as many IoT sensors as possible and simultaneous data collection, storage, and processing.
Empowered by the capabilities provided by fifth generation (5G) mobile communication systems, vehicle-to-everything (V2X) communication is heading from concept to reality. Given the high mobility and high density of vehicle transportation, how to satisfy the stringent and divergent requirements of V2X communications, such as ultra-low latency and ultra-high reliable connectivity, appears as an unprecedentedly challenging task for network operators. As an enabler to tackle this problem, network slicing provides a powerful tool for supporting V2X communications over 5G networks. In this paper, we propose a network resource allocation framework which deals with slice allocation considering the coexistence of V2X communications with multiple other types of services. The framework is implemented in Python and we evaluate the performance of our framework based on real-life network deployment datasets from a 5G operator. Through extensive simulations, we explore the benefits brought by network slicing in terms of achieved data rates for V2X, blocking probability, and handover ratio through different combinations of traffic types. We also reveal the importance of proper resource splitting for slicing among V2X and other types of services when network traffic load in an area of interest and quality of service of end users are taken into account.
To enhance dynamic resource adaptation in fifth generation (5G) networks, network slicing management empowered by artificial intelligence (AI) through decision-making algorithms may improve resource utilization, quality of service (QoS), as well as network scalability and flexibility. In this paper, we propose an AI-driven network slice management (AI-NSM) framework that enables enhanced adaptive resource allocation for 5G networks by ensuring additional management and orchestration for network slices. The integration of AI-NSM into 5G networks exhibits superior adaptability, supporting dynamic organization of network slices based on predicted traffic patterns through reinforcement learning (RL), leading to reduced latency, optimized resource allocation, and improved QoS. Based on a virtualization platform through Oracle virtual machines, we implement an AI model including a multi-agent deep deterministic policy gradient RL algorithm that provides complementary support for other network slice management functions. Through implementation and experiments, we demonstrate that AI-NSM can enhance resource allocation and improve network responsiveness for slicing in 5G networks.
Index Terms—5G, AI/ML, network slicing management, reinforcement learning, resource allocation, efficiency and flexibility
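The RL-driven adaptation loop behind such a framework can be illustrated with a much simpler discrete analogue. The paper uses a multi-agent deep deterministic policy gradient agent; the tabular Q-learning sketch below only shows the shape of the learning loop, and its states, actions, and reward are assumptions.

```python
# A minimal tabular Q-learning sketch of RL-driven slice resource adaptation.
# State = resource units currently held by a slice (0..10); actions remove,
# keep, or add one unit. All modelling choices here are illustrative.

import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)  # remove, keep, or add one resource unit

def train(reward_fn, episodes=2000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Learn how many units a slice should hold under reward_fn(units)."""
    rng = random.Random(seed)
    q = defaultdict(float)             # (state, action) -> estimated value
    state = 5
    for _ in range(episodes):
        if rng.random() < eps:
            action = rng.choice(ACTIONS)          # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        nxt = min(10, max(0, state + action))
        r = reward_fn(nxt)             # e.g., QoS gain minus resource cost
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        state = nxt
    return q
```

With a reward that peaks at the traffic-predicted demand, the learned greedy policy steers the slice toward that allocation from either direction, which is the essence of the dynamic slice reorganization described above.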
This paper addresses the need for enhanced Network Slice (NS) Management within the Cloud Network Functions of the 5G Core (5GC). Given that the Network Repository Function (NRF) acts as a central hub for registration and discovery services, some level of slicing information is processed, but the slicing management is rather traditional, not being entirely able to ensure dynamic resource adaptation. Therefore, the proposed solution leverages the power of artificial intelligence (AI) and machine learning techniques to support better resource allocation, to improve the quality of service (QoS), and to reduce the CAPEX/OPEX for Mobile Network Operators (MNOs). The solution involves adding an additional component which acts as an integrated slicing manager, taking decisions based on AI predictions, more specifically, based on an AI model which processes and filters the training data. Results demonstrated majorly improved network responsiveness, decreased latency, and a reduced packet loss percentage. Emerging technologies, such as AI, machine learning, neural networks, edge and cloud computing, DevOps instruments, Continuous Integration/Continuous Development (CI/CD) pipelines, and other solutions could be the start of unlocking new opportunities for mobile network communications, especially for 5G/6G development, which relies on a self-healing and self-optimising environment.
Along with the evolution of communication methods, there has also been a normal and necessary consequent evolution of the methods underlying the security of the transmitted information. No matter how complex an encryption algorithm is, it does not guarantee a 100% degree of security. The paper aims to compare the secure communication method Quantum Key Distribution (QKD) with the methods by which a Virtual Private Network (VPN) server ensures the security of information and how a server that hosts websites achieves this. The experimental part regarding QKD is carried out through the OpenQKD project, tracking the steps required to establish the connection between two systems and even allowing file transfer. The VPN server is built using OpenVPN and the web server using Apache Web Server.
SD-WAN: In recent history, computer networks and the Internet have developed at a very high speed; their number has increased, and with them the need to transport data in an easy way and at a reasonable speed. This necessity led to the development of new solutions such as SD-WAN technology, a modern solution for managing and optimizing communication networks between different geographical locations. The paper presents a performance evaluation of one of the commercially available SD-WAN solutions. The most feature-rich solution is chosen, and a simple network topology is implemented such that the main network metrics can be obtained and compared, thus providing relevant information for users seeking to implement such a solution. The obtained latency and jitter values demonstrate that the implementation of the network does not introduce significant overhead.
The evolution in communications technology and embedded systems led to a new paradigm called the Internet of Things (IoT). Currently, IoT is one of the most important trends of industrial transformation. In today's IoT devices, blockchain techniques are essential features to consider in applications. As a result, there has been growing interest in using blockchain for security and privacy reasons. This paper proposes a Proof of Stake (PoS) algorithm to reach an agreement between nodes to discover anomalous data. The blockchain in the IoT model consists of four elements: Hashprev, Hashnext, Transactions, and Randomness. For the election in the blockchain, the sensor nodes consider two parameters to identify each node: first, the IP address, and second, the Uniform Resource Name (URN). Our results demonstrate that the proposed PoS method outperforms the existing related schemes regarding Quality of Experience (QoE) metrics, such as Processing Time Reduction Ratio (PTRR), Resource Gain (RG), and low latency. Finally, we validate and evaluate our work through extensive simulations using Network Simulator 3 (NS3) software.
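The stake-weighted election at the heart of Proof of Stake can be sketched generically. The node identifier (IP address plus URN) follows the paper's description, but the selection logic below is a textbook illustration under an assumed shared per-round seed, not the authors' exact algorithm.

```python
# Simplified sketch of stake-weighted validator election for PoS consensus:
# every node derives the same winner from a shared round seed, with selection
# probability proportional to stake. Generic illustration, not the paper's.

import hashlib

def elect_validator(nodes, round_seed):
    """nodes: dict (ip, urn) -> stake. Deterministically picks one node."""
    ordered = sorted(nodes.items())          # all nodes must agree on an order
    total = sum(stake for _, stake in ordered)
    # derive a reproducible pseudo-random draw from the round seed
    digest = hashlib.sha256(round_seed.encode()).hexdigest()
    draw = int(digest, 16) % total
    for node_id, stake in ordered:
        if draw < stake:
            return node_id
        draw -= stake
```

Because every node hashes the same seed, the election needs no extra message exchange, which is one reason PoS schemes can reduce processing time relative to Proof of Work.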
The Intent Based Networking (IBN) approach and platforms offer automation of orchestration and management of the 5G/B5G networks and services in single or multi-domain environment. The user has to define only its higher-level service requirements (intent) using different tools (including natural language), without dealing with the realization of them. IBN system translates the intent into policy configurations through domain policy configurators and forwards them to underlying orchestrators to automatically activate resources over the infrastructure.
Intent controllers can be responsible for handling the core, transport, and RAN parts of the network. An IBN system can perform 5G/B5G network and service orchestration and management in a single- or multi-domain environment and end-to-end (E2E) in a flexible way.
URL: https://www.iaria.org/conferences2023/filesICN23/EugenBorcoci_Keynote_OrchestrationAndManagement.pdf
Task offloading in vehicular networks is a major method to relieve the edge devices (subsystems of vehicles) from intensive computing and contributes to increasing the overall system efficiency. Many methods have been developed and many are still under study. Edge infrastructure servers, and in particular MEC servers, constitute strong support nodes for task offloading. Cooperation between edge and central cloud processing is possible. A balance between local processing and task offloading is necessary. Decisions between partial or full offloading need a complex evaluation of the tasks and also of the current availability of edge computing resources. The dynamicity of vehicular networks has an impact on task offloading decisions and creates real-time problems.
URL: https://www.iaria.org/conferences2023/filesDataSys23/EugenBorcoci_Keynote_TaskOffloading.pdf
Commercial and accelerated 5G deployment in most markets worldwide is ongoing or will start soon. The architectural evolution of 5G is still running, and it will likely continue for eight more years or so. 3GPP Release 18, the start of 5G-Advanced, includes a diverse set of evolutions to boost 5G performance and address a wide variety of new use cases. The innovative technology components in 5G-Advanced are essential precursors to key 6G architecture and design.
The unpredictable noise in received signal strength indicator (RSSI) measurements in indoor environments practically causes very high estimation errors in target localization. Dealing with high noise in RSSI measurements and ensuring high target-localization accuracy with RSSI-based localization systems is a very popular research trend nowadays. This paper proposes two range-free target-localization schemes in wireless sensor networks (WSN) for an indoor setup: first, a plain support vector regression (SVR)-based model, and second, a fusion of SVR and a Kalman filter (KF). The fusion-based model is named the SVR+KF algorithm. The proposed localization solutions do not require computing distances using field measurements; rather, they need only three RSSI measurements to locate the mobile target. This paper also discusses the energy consumption associated with traditional Trilateration and the proposed SVR-based target-localization approaches. The impact of four kernel functions, namely linear, sigmoid, RBF, and polynomial, on the target-localization accuracy was evaluated with the proposed SVR-based schemes. The simulation results showed that the proposed schemes with linear and polynomial kernel functions are highly superior to trilateration-based schemes.
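The Kalman filter component of an SVR+KF-style scheme can be illustrated in one dimension: smoothing a noisy RSSI stream under a constant-signal model. The noise variances below are illustrative, and this sketch shows only the KF half of the fusion, not the paper's full algorithm.

```python
# A minimal one-dimensional Kalman filter that smooths noisy RSSI readings
# under a constant-signal model. Noise variances are illustrative assumptions.

def kalman_smooth(rssi_series, process_var=1e-3, meas_var=4.0):
    """Return the filtered estimate for each RSSI sample (dBm)."""
    estimate, error_var = rssi_series[0], 1.0
    out = []
    for z in rssi_series:
        error_var += process_var           # predict: uncertainty grows a bit
        gain = error_var / (error_var + meas_var)
        estimate += gain * (z - estimate)  # update with the new measurement
        error_var *= (1 - gain)            # uncertainty shrinks after update
        out.append(estimate)
    return out
```

Feeding such smoothed RSSI values (or position estimates) into the regressor is what reduces the impact of the unpredictable indoor noise that motivates the paper.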
This paper proposes a traffic forecasting approach that uses graph-based neural networks. The method minimizes the load on the communication network (between vehicles and the database servers) and represents a reasonable trade-off between communication network load and forecasting accuracy. Traffic is forecast using an LSTM neural network with a regression layer. The inputs of the neural network are sequences, obtained from the graph that represents the road network at specific moments of time, that are either read from traffic sensors or taken from the outputs of the neural network (forecast sequences). The input sequences can be filtered to improve the forecasting accuracy. The paper also illustrates a general framework to implement a graph neural network. Two cases are studied: one in which the traffic sensors are read periodically and another in which the traffic sensors are read when changes in their values are detected. A comparison between the cases is made and the influence of filtering is evaluated.
Among numerous published research works allowing comprehensive comparisons between Standalone (SA) and Non-Standalone (NSA) architectures, or even innovative models for service-differentiated networks within the 5G SA mobile communication system, most papers are primarily centered around presenting theoretical findings. Therefore, in this paper the performance of a service-differentiated 5G SA network was evaluated and discussed, using a laboratory setup which includes a gNodeB and several User Equipments (UEs) that can be placed either in indoor conditions or in a fully isolated Faraday cage. Both the transmission rate and the latency were used as metrics. The analysis of the separation and coexistence of the enhanced Mobile Broadband (eMBB) and Ultra Reliable Low Latency Communications (URLLC) network slices represents the main topic of this study, demonstrated through measurement results. Index Terms—5G mobile communication, standalone, network slicing, eMBB, URLLC
Internet of Things has improved the decision-making processes related to ships and the maritime environment and brought software and hardware data acquisition, transmission, and processing platforms and knowledge closer to the users through the so-called Maritime IoT (MIoT), enabling maritime services such as cargo, container, and ship monitoring. Recently, commercial solutions employing LoRa/LoRaWAN as a communication technology for container monitoring were announced, but no experimental testbed and associated experiments were identified in the literature to assess the performance of such solutions, even though the maritime environment is very challenging and different communication technologies and topologies are proposed to meet its requirements. The contribution of the current paper is three-fold: i) proposing a practical implementation of a LoRaWAN-based container monitoring system; ii) analysing the collected dataset and solving a machine learning problem, i.e., classifying the origin location of LoRaWAN packets based on a k-best selected features algorithm and algorithms such as k-Neighbours, Gradient Boosting, etc.; iii) assessing the performance of the communication in multiple-container scenarios. The results show that LoRaWAN has a huge potential to ensure communication between containers and outdoor gateways. Index Terms—LPWAN, environment sensors, Gradient Boosting, Decision Tree, Random Forest, k-Neighbours, Stochastic Gradient Descent
Network function virtualization (NFV) is a novel concept that enables an architectural transition from dedicated hardware to orchestrated resource and function management. As an integral part of the core network, NFV offers a fine-grained network capability to cellular operators by scaling out or scaling in network resources in an on-demand manner to meet the performance requirements. However, designing an autoscaling algorithm with low operation cost and low latency in non-standalone networks, where legacy network equipment coexists with a virtual evolved packet core (EPC), is a challenging task. In this paper, we propose a dynamic NFV instance autoscaling algorithm that considers the tradeoff between performance and operation cost. Furthermore, we develop an analytical framework to assess the performance of the scheme by modeling the hybrid network as a queueing system that includes both legacy network equipment and NFV instances. The virtualized network function (VNF) instances are powered on or off according to the number of job requests. Numerical results based on extensive simulations validate the correctness of the model and the effectiveness of the algorithm.
URL: https://ieeexplore.ieee.org/document/10018535
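The cost/latency trade-off behind the autoscaling abstract above can be conveyed with a threshold-based sketch: power instances on when the job queue grows and off when it shrinks. The thresholds are illustrative assumptions, not the paper's optimized algorithm, which is driven by the queueing model.

```python
# A hedged sketch of threshold-based VNF instance autoscaling: compare the job
# queue depth against per-instance high/low watermarks. Values illustrative.

def autoscale(active, queue_len, min_inst=1, max_inst=8,
              jobs_per_inst_high=10, jobs_per_inst_low=3):
    """Return the new number of active VNF instances."""
    if queue_len > active * jobs_per_inst_high and active < max_inst:
        return active + 1          # scale out: backlog too deep
    if queue_len < active * jobs_per_inst_low and active > min_inst:
        return active - 1          # scale in: save operation cost
    return active
```

The gap between the high and low watermarks provides hysteresis, avoiding the on/off oscillation that would otherwise dominate the operation cost the paper seeks to minimize.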
To enable concurrent transmissions for Internet of Things (IoT) traffic in multiantenna beyond fifth generation networks, nonorthogonal multiple access (NOMA) mechanisms appear as a promising approach. For NOMA-enabled transmissions, IoT devices are grouped into clusters in order to exploit the benefit of concurrent transmissions. However, how to facilitate transmissions from both intra- and intercluster is not an easy task and the performance of such concurrent transmissions is so far not well understood from a mathematical point of view, especially when error-prone channel conditions are considered. In this article, we propose two random access schemes which enable intra- and intercluster concurrent transmissions for uplink IoT traffic with and without access control. To assess the performance of such systems, we develop two analytical models based on discrete-time Markov chains (DTMCs) that mimic the behavior of such transmissions. Our models deal with cluster-level performance considering dynamic packet arrivals and the transmissions from devices belonging to the same or different clusters. Through extensive simulations, we validate the accuracy of the analytical models and evaluate the system- and cluster-level performance in terms of throughput and delay under various traffic load conditions and network configurations.
URL: https://ieeexplore.ieee.org/document/10356610
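A DTMC analysis like the one above ultimately requires a steady-state distribution. The sketch below computes one by power iteration for any row-stochastic transition matrix; the two-state example in the test (e.g., a cluster idle vs. transmitting) is hypothetical, not the paper's chain.

```python
# Generic sketch: steady-state distribution of a discrete-time Markov chain
# by power iteration, i.e., repeatedly applying pi <- pi P until convergence.

def steady_state(P, iters=10_000):
    """P: row-stochastic matrix as a list of lists. Returns pi with pi = pi P."""
    n = len(P)
    pi = [1.0 / n] * n                 # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

Once pi is known, cluster-level throughput and delay follow by weighting per-state transmission outcomes with their steady-state probabilities, which is the usual way such DTMC models yield the metrics validated against simulation.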
2024
Electric vehicles represent a global endeavor towards environmentally friendly transportation, and such a green transition is promoted worldwide, being deployed in many regions nowadays. As the number of electric vehicles has been increasing rapidly for more than a decade, how to meet the need for charging their batteries appears as an important research topic, having received remarkable attention in both industry and the research community. Uncoordinated charging of many electric vehicles may lead to congestion at charging stations and an unbalanced load on the power supply grid. To address this problem, optimized charging schemes which consider available energy resources and user requirements are required. This paper offers an overview of state-of-the-art charging solutions covering two main categories of approaches, namely centralized and decentralized charging. In addition to addressing the potential challenges that arise in charging schedule optimization, we cover various optimization techniques that have been proposed for optimizing charging schedules. Furthermore, this paper analyzes the current solutions and identifies their limitations and gaps. Open research issues are identified and several potential research topics are suggested.
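One recurring idea in the coordinated charging schemes surveyed above is "valley filling": schedule each vehicle's charging into the hours where aggregate load is lowest. The greedy sketch below illustrates that idea only; a real scheme would add user deadlines, charger limits, and grid constraints.

```python
# Illustrative valley-filling sketch: each vehicle in turn places its required
# one-hour charging slots into the currently least-loaded hours. Hypothetical
# simplification of the coordinated schemes surveyed in the paper.

def schedule_charging(base_load, vehicles):
    """base_load: list of per-hour grid load; vehicles: list of ints, the
    number of 1-hour charging slots each vehicle needs. Returns total load."""
    load = list(base_load)
    for slots_needed in vehicles:
        # pick the currently least-loaded hours for this vehicle
        hours = sorted(range(len(load)), key=lambda h: load[h])[:slots_needed]
        for h in hours:
            load[h] += 1   # one charging unit per occupied hour
    return load
```

Processing vehicles sequentially against the updated load profile is what keeps the peak from growing, in contrast to the uncoordinated case where every vehicle would plug in at the same convenient hour.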
In an era of cloud computing and data centers, compact and versatile data handling is important and needed. This paper proposes and develops an experiment demonstrating a web routing application inside the Azure cloud, more specifically the web application routing add-on for Azure Kubernetes Service. The goal of this add-on is to make it easier for any user to expose their application to the world in a secure manner while reducing some of the associated operational overhead. This paper's contribution is useful for developers working with Azure Kubernetes Service.
The rapid advance of the Internet of Things (IoT) and its immersion in every domain has brought to attention, besides many of its technical, social, and economic advantages, a panoply of security vulnerabilities and attack vectors that threaten IoT interconnected devices. IoT devices face many security issues such as weak authentication, insufficient encryption, deficient device management, insecure interfaces, inadequate physical security, lack of standardization, privacy concerns, insecure networks, resource constraints, and non-compliance with security standards. This highlights the pressing need for comprehensive security measures, which currently seem insufficient to address the evolving landscape of threats in the dynamic IoT ecosystem. This paper conducts a security evaluation of a physical IoT device through the penetration testing methodology. It then focuses on a known vulnerability from the Common Vulnerabilities and Exposures (CVE) database. It presents the execution of a brute force attack to uncover credentials and the exploitation of the device's buffer overflow vulnerability to cause a denial of service (DoS) on the device's server. Employing a hands-on approach, the research emphasizes the practical execution of these exploitation scenarios, providing a step-by-step guide on how they were performed. Lastly, it delves into the development of a proof of concept (PoC) application created to automate the process of firmware analysis and running the buffer overflow exploit for this particular use case.
In recent years, together with advances in the software defined radio (SDR) field, the concept of mobile private networks (MPNs) has gained momentum. Several solutions based on open-source software modules and using SDR platforms as hardware have been developed. In this paper, the radio coverage of a private 4G mobile network is evaluated using the HTZ Communication software and field-test measurements. The network was implemented using a USRP B210 SDR platform and the srsRAN-4G software suite. A comparison between the coverage estimated using different propagation models and measurements is performed, together with an analysis of the obtained results.
This paper illustrates a general framework in which a neural network application can be easily integrated and proposes a traffic forecasting approach that uses neural networks based on graphs. Neural networks based on graphs have the advantage of capturing spatial-temporal characteristics that cannot be captured by other types of neural networks. This is because the inputs are graphs that, by their nature, include, besides a certain topology (the spatial characteristic), connections between nodes that model the costs (traffic load, speed, and road length) of the roads between nodes, costs that can vary over time (the temporal characteristic). As a result, a prediction in a node influences the predictions of adjacent nodes, and, globally, the prediction gains precision. On the other hand, an adequate neural network leads to a good prediction, but its complexity can be high. A recurrent neural network like LSTM is suitable for making predictions, and a reduction in complexity can be achieved by choosing a relatively small number (usually determined by experiments) of hidden levels. The use of graphs as inputs to the neural network, combined with the choice of a recurrent neural network, leads to good accuracy in traffic prediction with a low enough implementation effort that it can be accomplished on microcontrollers with relatively limited resources. The proposed method minimizes the load on the communication network (between vehicles and database servers) and represents a reasonable trade-off between communication network load and forecasting accuracy. Traffic prediction leads to less-congested routes and, therefore, to a reduction in energy consumption. The traffic is forecasted using an LSTM neural network with a regression layer. The inputs of the neural network are sequences—obtained from a graph that represents the road network—at specific moments in time that are read from traffic sensors or from the outputs of the neural network (forecasting sequences).
The input sequences can be filtered to improve the forecasting accuracy. This general framework is based on the Contiki IoT operating system, which ensures support for wireless communication and the efficient implementation of processes in a resource-constrained system, and it is particularized to implement a graph neural network. Two cases are studied: one in which the traffic sensors are read periodically and another in which the traffic sensors are read when changes in their values are detected. A comparison between the cases is made, and the influence of filtering is evaluated. The obtained accuracy is very good and is very close to the accuracy obtained in an infinite-precision simulation, the computation time is low enough, and the system can work in real time.
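How graph-structured inputs feed the forecaster can be sketched briefly. The snippet below is only an illustration of the input construction: a node's own recent loads plus a neighbor aggregate form the sequence, and a simple exponential-smoothing predictor stands in for the paper's trained LSTM regression network (the graph, loads, and smoothing factor are all assumed toy values).

```python
import math

# Toy road graph: node -> list of neighboring nodes
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def build_sequence(history, node, t, window=3):
    """Input at time t for `node`: its own recent loads (temporal
    feature) plus the current mean load of its neighbors (spatial
    feature from the graph topology)."""
    own = history[node][max(0, t - window):t]
    nbr = sum(history[n][t - 1] for n in graph[node]) / len(graph[node])
    return own + [nbr]

def forecast(seq, alpha=0.5):
    """Stand-in for the LSTM regression layer: exponential smoothing
    over the input sequence (an assumption, not the trained network)."""
    est = seq[0]
    for x in seq[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

# Synthetic sinusoidal traffic loads per node (vehicles/minute)
T = 50
history = {n: [50 + 20 * math.sin(0.3 * t + n) for t in range(T)] for n in graph}
pred = forecast(build_sequence(history, node=0, t=T))
```

In the actual system, the forecast for node 0 would be written back into `history`, so predictions in one node propagate to its neighbors' future inputs.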
Along with the ubiquity of various Internet of Things (IoT) applications, data collection in harsh environments with the help of unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) appears as a popular approach. However, from a networking and transmission perspective, how unmanned vehicles collaborate with each other and with end IoT devices, as well as the propagation characteristics of such data transmissions, are less explored. In this article, we develop a long-range wide-area network (LoRaWAN)-based data collection framework which enables UAV- and/or USV-assisted data acquisition in harsh environments. Within this framework, end devices are placed in hard-to-reach locations and data acquisition is performed in an on-demand manner by unmanned vehicles acting as a gateway or peer for end devices (EDs). The framework also includes a step-by-step procedure for data collection and analytics, from parameter configuration to model validation. To reveal the transmission characteristics, we identify three scenarios in a mountainous region in Romania and perform extensive real-life measurements for data collection. Based on the collected datasets, we develop four propagation models and demonstrate that our empirical models outperform the theoretical reference models. Based on the developed empirical models, the coverage of LoRaWAN for any spreading factor (SF) can be estimated with high precision.
URL: https://ieeexplore.ieee.org/document/10385137
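The coverage estimation described above can be sketched with a generic log-distance path-loss model. The coefficients below are placeholders, not the paper's fitted empirical values; the sensitivity figures are typical LoRa receiver sensitivities per spreading factor, used here only to show how a per-SF range estimate follows from a link budget.

```python
# Illustrative log-distance path-loss model:
#   PL(d) = PL0 + 10 * n * log10(d / d0)
# Coefficients are placeholders, not fitted empirical values.
PL0_DB = 120.0   # path loss at reference distance d0 (dB)
D0_KM = 1.0      # reference distance (km)
N_EXP = 3.0      # path-loss exponent

# Typical LoRa receiver sensitivity per spreading factor (dBm)
SENSITIVITY = {7: -123, 8: -126, 9: -129, 10: -132, 11: -134.5, 12: -137}
TX_POWER_DBM = 14.0

def coverage_km(sf):
    """Solve PL(d) = link budget for d: estimated range at SF `sf`."""
    max_pl = TX_POWER_DBM - SENSITIVITY[sf]
    return D0_KM * 10 ** ((max_pl - PL0_DB) / (10 * N_EXP))

ranges = {sf: round(coverage_km(sf), 2) for sf in SENSITIVITY}
```

Higher spreading factors trade data rate for sensitivity, so the estimated coverage radius grows monotonically from SF7 to SF12; the paper's empirical models refine `PL0` and `n` from the measured datasets.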
Electric vehicles represent a global endeavor towards environmentally friendly transportation, and such a green transition is promoted worldwide, being deployed in many regions nowadays. As the number of electric vehicles has been increasing rapidly for more than a decade, how to meet the need for charging their batteries appears as an important research topic, having received remarkable attention in both industry and research community. Uncoordinated charging of many electric vehicles may lead to congestion at charging stations and unbalanced load of the power supply grid. To address this problem, optimized charging schemes which consider available energy resources and user requirements are required. This paper offers an overview of state-of-the-art charging solutions covering two main categories of approaches, namely, centralized and decentralized charging. In addition to addressing the potential challenges that arise in charging schedule optimization, we cover various optimization techniques that have been proposed for optimizing charging schedules. Furthermore, this paper analyzes the current solutions and identifies their limitations and gaps. Open research issues are identified and several potential research topics are suggested.
URL: https://ieeexplore.ieee.org/document/10453517
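The centralized category surveyed above can be illustrated with a toy scheduler. This is not a scheme from the surveyed literature: it is a minimal greedy sketch (illustrative demands, slot count, and capacity) showing how a central coordinator spreads charging load to avoid the congestion and grid imbalance described in the abstract.

```python
def schedule_charging(demands, n_slots, slot_capacity):
    """Toy centralized scheduler: place each charging request, largest
    first, into the currently least-loaded time slot, respecting a
    per-slot grid capacity (illustrative sketch only)."""
    load = [0.0] * n_slots
    assignment = {}
    for ev, energy in sorted(demands.items(), key=lambda kv: -kv[1]):
        slot = min(range(n_slots), key=lambda s: load[s])
        if load[slot] + energy > slot_capacity:
            raise ValueError(f"no capacity left for {ev}")
        load[slot] += energy
        assignment[ev] = slot
    return assignment, load

# Hypothetical energy demands in kWh per vehicle
demands = {"ev1": 7.0, "ev2": 11.0, "ev3": 3.5, "ev4": 7.0}
assignment, load = schedule_charging(demands, n_slots=4, slot_capacity=12.0)
```

Decentralized schemes instead let each vehicle choose a slot from local information (e.g., a broadcast price signal), trading optimality for scalability and privacy.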
The usage of electric vehicles (EVs) is a growing trend, but limited charging stations (CSs) and the fear of running out of charge hinder confidence in relying on EVs. Optimal scheduling algorithms are needed to optimise EV charging objectives. This paper proposes using a bi-objective non-dominated sorting genetic algorithm (NSGA-II) to jointly optimise charging cost and service time. NSGA-II outperforms traditional genetic algorithms (GAs) regarding diversity and domination, resolving extreme-solution issues. The proposed optimization algorithm based on NSGA-II could, in principle, be applied to any charging system, no matter what electrical technologies (e.g., AC-DC, DC-DC, or both) are used.
URL: https://www.scientificbulletin.pub.ro/rev_docs_arhiva/rez3c3_377272.pdf
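The core step that distinguishes NSGA-II from a traditional GA is non-dominated sorting, which can be sketched compactly. The snippet below implements only that sorting over hypothetical (charging cost, service time) pairs; it omits crowding distance, selection, crossover, and mutation.

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition candidate solutions into successive Pareto fronts,
    the ranking NSGA-II uses before crowding-distance selection."""
    fronts = []
    remaining = list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical (charging cost, service time) pairs for candidate schedules
candidates = [(3.0, 40), (2.0, 60), (4.0, 30), (3.5, 50), (2.5, 45)]
fronts = non_dominated_sort(candidates)
```

Solutions in the first front are the Pareto-optimal cost/time trade-offs; a single-objective GA would instead collapse them into one scalar fitness and risk the extreme-solution issues the abstract mentions.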
Spectrum sensing (SS) based on energy detection (ED) is a simple yet effective approach to detect the presence of unknown signals that are active in a specific band of frequencies. The classical ED (CED) algorithm uses the value of the energy detected in the current sensing slot as test statistic and has a fixed threshold, but improved signal detection performance is possible by modifying the test statistic and/or the detection threshold. In this paper we propose a novel ED algorithm for SS that considers a binary activity model for the signal to be detected and combines the use of an average energy test statistic with an adaptive decision threshold for improved detection performance. We present the analytical characterization of the proposed test statistic in terms of its mean and variance, and derive the expressions corresponding to the correct decision probability (CDP) and false alarm probability (FAP). Using the derived CDP and FAP expressions, we also determine the detection thresholds that yield desired values for the false alarm and missed detection probabilities. The proposed algorithm is illustrated with numerical results obtained from simulations, which confirm our theoretical findings and also show that the algorithm outperforms alternative adaptive ED algorithms.
URL: https://ieeexplore.ieee.org/document/10598350
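The average-energy test statistic at the heart of the scheme can be illustrated with a Monte Carlo sketch. This is not the paper's adaptive algorithm: the threshold below is fixed and illustrative (the paper derives adaptive thresholds from the CDP/FAP expressions), and the noise and signal parameters are assumed toy values.

```python
import random

def avg_energy(samples_per_slot, n_slots, signal_amp, rng):
    """Average-energy test statistic over n_slots sensing slots,
    with unit-variance Gaussian noise added to the signal."""
    n = n_slots * samples_per_slot
    total = 0.0
    for _ in range(n):
        x = signal_amp + rng.gauss(0.0, 1.0)
        total += x * x
    return total / n

rng = random.Random(7)
N, K = 64, 4          # samples per slot, slots averaged
threshold = 1.3        # illustrative fixed threshold (the paper adapts it)
trials = 2000

# Empirical false alarm probability (noise only) and
# correct decision probability (signal present, amplitude 1.0)
fap = sum(avg_energy(N, K, 0.0, rng) > threshold for _ in range(trials)) / trials
cdp = sum(avg_energy(N, K, 1.0, rng) > threshold for _ in range(trials)) / trials
```

Averaging over `K` slots shrinks the variance of the statistic, which is what lets an adaptive threshold hit target FAP and missed-detection probabilities simultaneously.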
Abstract—The growing advancement in battery technology has revolutionized various domains and services, notably transportation and energy storage. This research focuses on improving battery management systems (BMS) for electric vehicles (EVs) and hybrid vehicles (HEVs) by leveraging advanced machine learning models and 5G connectivity. This study proposes an integrated software architecture designed for both industrial and road applications, aiming to optimize battery performance and safety by analyzing collected data and predicting anomaly events through trained Random Forest and Gradient Boosting machine learning (ML) models. Throughout the research, key battery performance parameters and their normal operating ranges, such as voltage, current, temperature, charge/discharge rates, internal resistance, and state of charge (SoC), are analyzed according to the official regulations and utilized in Python scripting. The implemented Python script simulates battery operations by updating sensor values and logging incidents with factors such as latency, jitter, and encryption time. The script trains and compares machine learning models to determine the most accurate one for anomaly detection. The final part of the script simulates battery performance under different connectivity modes (5G Non-Standalone (NSA) and 5G Standalone (SA)) to assess the impact on data transmission and real-time monitoring. Ultimately, the obtained results demonstrate not only advanced, sustainable mobility solutions by improving the safety and performance of battery management systems in electric and hybrid vehicles, but also the importance of leveraging cutting-edge technologies (such as ML) and connectivity (5G).
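The operating-range monitoring described above can be sketched with a simple rule-based check. The ranges and the fault-injection pattern below are placeholders, not the regulation-derived values or the trained ML models from the paper; the sketch only shows the flagging logic that such models would refine.

```python
import random

# Illustrative normal operating ranges (placeholders, not the
# paper's regulation-derived values)
RANGES = {"voltage_v": (3.0, 4.2), "temp_c": (0.0, 45.0), "soc_pct": (10.0, 95.0)}

def is_anomaly(sample):
    """Flag a reading when any monitored parameter leaves its normal
    range; a rule-based stand-in for the trained ML models."""
    return any(not (lo <= sample[k] <= hi) for k, (lo, hi) in RANGES.items())

rng = random.Random(0)

def reading(faulty=False):
    """Generate one synthetic BMS reading, optionally injecting an
    over-temperature fault."""
    s = {"voltage_v": rng.uniform(3.2, 4.1),
         "temp_c": rng.uniform(10, 40),
         "soc_pct": rng.uniform(20, 90)}
    if faulty:
        s["temp_c"] = rng.uniform(50, 80)
    return s

# Every 10th reading is faulty; count how many the rule flags
alerts = sum(is_anomaly(reading(faulty=(i % 10 == 0))) for i in range(100))
```

A trained classifier replaces the fixed ranges with learned decision boundaries, which is what lets it catch anomalies that sit inside every individual parameter's range but are jointly abnormal.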
Abstract—This paper presents a practical application for a growing technology: the Internet of Things (IoT). It details the design and implementation of a custom, battery-powered, embedded prototype system for fast and early fire detection in vehicles on board maritime transport. The system primarily relies on gas detection and thermal imaging cameras. The system utilizes a network of portable sensors mounted underneath the vehicles for monitoring. These sensors communicate with a central base station using the LoRa protocol for long-range, low-power communication. Prior to installation, each sensor is programmed using an integrated keyboard to specify its exact mounting location. The base station then interrogates each sensor individually, receiving real-time data only from the queried sensor. This data includes the location, CO and VOC concentrations, ambient temperature and humidity, and the temperature beneath the vehicle surface. The base station acts as a server with integrated Wi-Fi and Bluetooth for wireless communication. This allows easy future access to the collected data from a mobile application, website, or online database. The data is displayed on a web interface for convenient monitoring and analysis. Additionally, both the base station and the web interface display visual alerts. To ensure quicker detection and response, an audio signal also sounds from the sensor that detects the danger.