Browsing by Author "Wu, Huaming"
Now showing 1 - 5 of 5
Article | Citation - WoS: 16 | Citation - Scopus: 17
ATOM: AI-Powered Sustainable Resource Management for Serverless Edge Computing Environments (IEEE-Inst Electrical Electronics Engineers Inc, 2024)
Golec, Muhammed; Gill, Sukhpal Singh; Cuadrado, Felix; Parlikad, Ajith Kumar; Xu, Minxian; Wu, Huaming; Uhlig, Steve
Serverless edge computing reduces unnecessary resource usage on end devices with limited processing power and storage capacity. Despite these benefits, the scale-to-zero behaviour of serverless edge computing is the major source of cold start delay, a problem that remains unsolved. This latency is unacceptable for time-sensitive Internet of Things (IoT) applications such as autonomous cars. Most existing approaches keep containers idle and consume extra computing resources, yet edge devices have fewer resources than cloud-based systems, so new sustainable solutions are needed. We therefore propose ATOM, an AI-powered, sustainable resource management framework for serverless edge computing. ATOM uses a deep reinforcement learning model to predict exactly when cold start latency will occur. We create a cold start dataset from a heart disease risk scenario and deploy it using Google Cloud Functions. To demonstrate ATOM's advantages, its performance is compared with two baselines: warm-start containers and a two-layer adaptive approach. The experimental results show that although ATOM requires a longer computation time of 118.76 seconds, it predicts cold starts better than the baseline models, with an RMSE of 148.76.
Additionally, the energy consumption and CO2 emissions of these models are evaluated and compared for the training and prediction phases.

Article | Citation - Scopus: 5
Captain: A Testbed for Co-Simulation of Scalable Serverless Computing Environments for AIoT Enabled Predictive Maintenance in Industry 4.0 (Institute of Electrical and Electronics Engineers Inc., 2025)
Golec, Muhammed; Wu, Huaming; Ozturac, Ridvan; Kumar Parlikad, Ajith; Cuadrado Latasa, Felix; Gill, Sukhpal Singh; Uhlig, Steve
The massive amounts of data generated by the Industrial Internet of Things (IIoT) require considerable processing power, which increases carbon emissions and energy usage, so sustainable solutions are needed to enable flexible manufacturing. Serverless computing shows potential to meet this requirement by scaling idle containers to zero for energy efficiency and cost savings, but doing so leads to a cold start delay. Most solutions rely on idle containers, which necessitates dynamic request-time forecasting and container execution monitoring. Furthermore, Artificial Intelligence of Things (AIoT), which combines IIoT with artificial intelligence (AI), can provide autonomous and sustainable solutions to this problem. We therefore develop a new testbed, CAPTAIN, to facilitate AI-based co-simulation of scalable and flexible serverless computing in IIoT environments. The AI module in the CAPTAIN framework employs random forest (RF) and light gradient-boosting machine (LightGBM) models to optimize cold start frequency and prevent cold starts based on their prediction results. The proxy module additionally monitors the client-server network and constantly updates the AI module's training dataset via a message queue. Finally, we evaluated the proxy module's performance using a real-world predictive-maintenance IIoT application and the AI module's performance in a realistic serverless environment using a Microsoft Azure dataset.
The AI module of CAPTAIN outperforms the baselines in terms of cold start frequency, computation time (0.5 ms), energy consumption (1161.0 joules), and CO2 emissions (32.25e-05 gCO2). The CAPTAIN testbed provides co-simulation of sustainable and scalable serverless computing environments for AIoT-enabled predictive maintenance in Industry 4.0. © 2025 Elsevier B.V., All rights reserved.

Article | Citation - WoS: 64 | Citation - Scopus: 103
Edge AI: A Taxonomy, Systematic Review and Future Directions (Springer, 2025)
Gill, Sukhpal Singh; Golec, Muhammed; Hu, Jianmin; Xu, Minxian; Du, Junhui; Wu, Huaming; Uhlig, Steve
Edge Artificial Intelligence (AI) incorporates a network of interconnected systems and devices that receive, cache, process, and analyse data close to where the data is captured, using AI technology. Recent advancements in AI efficiency, the widespread use of Internet of Things (IoT) devices, and the emergence of edge computing have unlocked the enormous scope of Edge AI. The goal of Edge AI is to optimize data-processing efficiency and velocity while ensuring data confidentiality and integrity. Despite being a relatively new field of research, spanning from 2014 to the present, it has shown significant and rapid development over the last five years. In this article, we present a systematic literature review of Edge AI to discuss the existing research, recent advancements, and future research directions. We created a collaborative Edge AI learning system for cloud and edge computing analysis, including an in-depth study of the architectures that facilitate this mechanism. The taxonomy for Edge AI facilitates the classification and configuration of Edge AI systems while also examining its potential influence across many fields, encompassing infrastructure, cloud computing, fog computing, services, use cases, ML and deep learning, and resource management.
This study highlights the significance of Edge AI in processing real-time data at the edge of the network. Additionally, it emphasizes the research challenges faced by Edge AI systems, including resource constraints, vulnerability to security threats, and scalability problems. Finally, it highlights potential future research directions that aim to address the current limitations of Edge AI through innovative solutions.

Article | Citation - WoS: 7 | Citation - Scopus: 7
Edgebus: Co-Simulation Based Resource Management for Heterogeneous Mobile Edge Computing Environments (Elsevier, 2024)
Ali, Babar; Golec, Muhammed; Gill, Sukhpal Singh; Wu, Huaming; Cuadrado, Felix; Uhlig, Steve
Kubernetes has transformed traditional monolithic Internet of Things (IoT) applications into lightweight, decentralized, and independent microservices, becoming the de facto standard for container orchestration. Intelligent and efficient container placement in Mobile Edge Computing (MEC) is challenging because of user mobility and surplus yet heterogeneous computing resources. One response to constantly changing user locations is to relocate containers closer to the user; however, this produces additional underutilized active nodes and increases the computational overhead of migration. Conversely, performing few or no migrations leads to higher latency, degrading the Quality of Service (QoS). To tackle these challenges, we created EdgeBus, a framework that enables co-simulation of container resource management in heterogeneous MEC environments based on Kubernetes and allows assessment of the impact of container migrations on resource management, energy, and latency. Further, we propose MANGO, a mobility- and migration-cost-aware lightweight scheduler for efficient container management that incorporates migration cost, CPU cores, and memory usage for container scheduling.
For user mobility, the Cabspotting dataset is employed, which contains real-world traces of taxi mobility in San Francisco. In the EdgeBus framework, we created a simulated environment backed by a real-world testbed on Google Kubernetes Engine (GKE) to measure the performance of the MANGO scheduler against baseline schedulers such as IMPALA-based MobileKube, Latency Greedy, and Binpacking. Finally, extensive experiments demonstrate the effectiveness of MANGO in terms of latency and number of migrations.

Article | Citation - WoS: 8 | Citation - Scopus: 8
Priceless: Privacy Enhanced AI-Driven Scalable Framework for IoT Applications in Serverless Edge Computing Environments (John Wiley & Sons Ltd, 2025)
Golec, Muhammed; Golec, Mustafa; Xu, Minxian; Wu, Huaming; Gill, Sukhpal Singh; Uhlig, Steve
Serverless edge computing has emerged as a new paradigm that integrates serverless and edge computing. By bringing processing power closer to the edge of the network, it offers advantages such as low latency, quickly processing data for time-sensitive Internet of Things (IoT) applications. However, serverless edge computing also inherits problems of edge and serverless computing, such as cold start, security, and privacy, that remain unsolved. In this paper, we propose PRICELESS, a new blockchain-based, AI-driven scalable framework that offers security and privacy in serverless edge computing environments while performing cold start prediction. In the PRICELESS framework, we use deep reinforcement learning for cold start latency prediction. For the experiments, a cold start dataset is created using a heart disease risk-based IoT application and deployed using Google Cloud Functions. Experimental results show the additional delay that the blockchain module adds to cold start latency and its impact on cold start prediction performance.
Additionally, the performance of PRICELESS is compared with the current state-of-the-art method in terms of energy cost, computation time, and cold start prediction. Specifically, PRICELESS incurs 19 ms of additional latency and draws 358.2 watts during training and 3.6 watts during prediction, trading this extra energy consumption for security and privacy.
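
Three of the entries above (ATOM, CAPTAIN, and PRICELESS) revolve around predicting serverless cold starts from invocation history. As a minimal sketch of that problem framing only: the snippet below averages recent inter-arrival gaps to guess whether the next request will hit a cold container. The class name, the keep-alive window, and the moving-average heuristic are all illustrative assumptions, standing in for the deep reinforcement learning and LightGBM models the papers actually use.

```python
# Simplified cold start prediction sketch (illustrative, not from the papers).
# Idea: if requests arrive more slowly than the platform's keep-alive window,
# the container is evicted and the next request pays a cold start.
from collections import deque

KEEP_ALIVE_S = 600.0  # assumed idle window before the platform evicts a container

class ColdStartPredictor:
    def __init__(self, window: int = 5):
        self.gaps = deque(maxlen=window)  # recent inter-arrival gaps (seconds)
        self.last_ts = None

    def observe(self, ts: float) -> None:
        """Record one invocation timestamp (seconds)."""
        if self.last_ts is not None:
            self.gaps.append(ts - self.last_ts)
        self.last_ts = ts

    def predict_cold(self) -> bool:
        """Predict whether the next request will hit a cold container."""
        if not self.gaps:
            return True  # no history: pessimistically assume cold
        expected_gap = sum(self.gaps) / len(self.gaps)
        return expected_gap > KEEP_ALIVE_S

p = ColdStartPredictor()
for ts in [0.0, 120.0, 260.0, 395.0]:  # steady traffic, gaps of ~130 s
    p.observe(ts)
print(p.predict_cold())  # frequent requests keep the container warm -> False
```

A real predictor, as in these papers, would learn from a labelled trace (e.g., the Microsoft Azure dataset mentioned in the CAPTAIN entry) rather than a fixed average, and could then pre-warm a container just before a predicted cold start.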
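
The EdgeBus entry describes MANGO as a scheduler that weighs migration cost, CPU cores, and memory usage when placing containers. The sketch below shows the general shape of such cost-aware placement: score each candidate node by a weighted sum and pick the cheapest. The `Node` fields, the weights, and the linear scoring formula are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical cost-aware container placement sketch (illustrative only).
# Each candidate node is scored by a weighted sum of migration cost,
# CPU load, and memory usage; the container goes to the lowest score.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_load: float        # fraction of CPU in use, 0..1
    mem_usage: float       # fraction of memory in use, 0..1
    migration_cost: float  # normalized cost of moving the container here, 0..1

W_MIG, W_CPU, W_MEM = 0.5, 0.3, 0.2  # assumed weights

def score(node: Node) -> float:
    """Lower is better: balances migration overhead against node load."""
    return (W_MIG * node.migration_cost
            + W_CPU * node.cpu_load
            + W_MEM * node.mem_usage)

def place(nodes: list) -> Node:
    """Pick the cheapest candidate node for the container."""
    return min(nodes, key=score)

nodes = [
    Node("edge-a", cpu_load=0.8, mem_usage=0.7, migration_cost=0.0),  # current node, busy
    Node("edge-b", cpu_load=0.2, mem_usage=0.3, migration_cost=0.4),  # nearby, lightly loaded
    Node("cloud",  cpu_load=0.1, mem_usage=0.1, migration_cost=0.9),  # far away, high cost
]
print(place(nodes).name)  # edge-b: migration is worth it, but not all the way to cloud
```

Tuning the weights reproduces the trade-off the abstract describes: a high migration weight keeps containers in place at the cost of latency, while a low one chases the user and multiplies migrations.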

