
Browsing by Author "Wu, Huaming"

Now showing 1 - 4 of 4
  • Article
    ATOM: AI-Powered Sustainable Resource Management for Serverless Edge Computing Environments
    (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2024) Golec, Muhammed; Gill, Sukhpal Singh; Cuadrado, Felix; Parlikad, Ajith Kumar; Xu, Minxian; Wu, Huaming; Uhlig, Steve; AGÜ, Fen Bilimleri Enstitüsü, Elektrik ve Bilgisayar Mühendisliği Ana Bilim Dalı; Golec, Muhammed
    Serverless edge computing decreases unnecessary resource usage on end devices with limited processing power and storage capacity. Despite its benefits, the scale-to-zero property of serverless edge computing is the major source of cold start delay, a problem that remains unsolved. This latency is unacceptable for time-sensitive Internet of Things (IoT) applications such as autonomous cars. Most existing approaches keep containers idle and consume extra computing resources. Edge devices have fewer resources than cloud-based systems, requiring new sustainable solutions. Therefore, we propose an AI-powered, sustainable resource management framework called ATOM for serverless edge computing. ATOM utilizes a deep reinforcement learning model to predict exactly when cold start latency will occur. We create a cold start dataset using a heart disease risk scenario and deploy it using Google Cloud Functions. To demonstrate the superiority of ATOM, its performance is compared with two baselines, which use warm-start containers and a two-layer adaptive approach, respectively. The experimental results show that although ATOM required a longer computation time of 118.76 seconds, it predicted cold starts better than the baseline models, with an RMSE of 148.76. Additionally, the energy consumption and CO2 emissions of these models are evaluated and compared for the training and prediction phases.
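The cold-start prediction problem described above can be illustrated with a toy sketch. This is not the ATOM model: the keep-alive window, latencies, and the naive mean-gap predictor below are all illustrative assumptions, standing in for the paper's deep reinforcement learning approach.

```python
import math
import random

KEEP_ALIVE = 10.0                # assumed keep-alive window (seconds)
COLD_MS, WARM_MS = 900.0, 40.0   # illustrative cold/warm invocation latencies

def invoke_latency(idle_gap):
    """A cold start occurs if the container sat idle past the keep-alive window."""
    return COLD_MS if idle_gap > KEEP_ALIVE else WARM_MS

def predict_latency(recent_gaps):
    """Naive baseline: assume the next idle gap equals the mean of recent gaps."""
    mean_gap = sum(recent_gaps) / len(recent_gaps)
    return COLD_MS if mean_gap > KEEP_ALIVE else WARM_MS

random.seed(0)
# Synthetic inter-invocation gaps (exponential arrivals, mean 8 s).
gaps = [random.expovariate(1 / 8.0) for _ in range(200)]

errors = []
for i in range(5, len(gaps)):
    pred = predict_latency(gaps[i - 5:i])   # predict from the last 5 gaps
    obs = invoke_latency(gaps[i])           # observe the actual latency
    errors.append((pred - obs) ** 2)

rmse = math.sqrt(sum(errors) / len(errors))
print(f"baseline RMSE: {rmse:.1f} ms")
```

A learned predictor is evaluated the same way: lower RMSE against the observed cold/warm latencies means better anticipation of cold starts.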
  • Article
    Edge AI: A Taxonomy, Systematic Review and Future Directions
    (SPRINGER, 2024) Gill, Sukhpal Singh; Golec, Muhammed; Hu, Jianmin; Xu, Minxian; Du, Junhui; Wu, Huaming; Walia, Guneet Kaur; Murugesan, Subramaniam Subramanian; Ali, Babar; Kumar, Mohit; Ye, Kejiang; Verma, Prabal; Kumar, Surendra; Cuadrado, Felix; Uhlig, Steve; AGÜ, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü; Golec, Muhammed
    Edge Artificial Intelligence (AI) incorporates a network of interconnected systems and devices that receive, cache, process, and analyse data in close communication with the location where the data is captured with AI technology. Recent advancements in AI efficiency, the widespread use of Internet of Things (IoT) devices, and the emergence of edge computing have unlocked the enormous scope of Edge AI. The goal of Edge AI is to optimize data processing efficiency and velocity while ensuring data confidentiality and integrity. Despite being a relatively new field of research, spanning from 2014 to the present, it has shown significant and rapid development over the last five years. In this article, we present a systematic literature review for Edge AI to discuss the existing research, recent advancements, and future research directions. We created a collaborative edge AI learning system for cloud and edge computing analysis, including an in-depth study of the architectures that facilitate this mechanism. The taxonomy for Edge AI facilitates the classification and configuration of Edge AI systems while also examining its potential influence across many fields, encompassing infrastructure, cloud computing, fog computing, services, use cases, ML and deep learning, and resource management. This study highlights the significance of Edge AI in processing real-time data at the edge of the network. Additionally, it emphasizes the research challenges encountered by Edge AI systems, including constraints on resources, vulnerabilities to security threats, and problems with scalability. Finally, this study highlights potential future research directions that aim to address the current limitations of Edge AI by providing innovative solutions.
  • Article
    EdgeBus: Co-Simulation based resource management for heterogeneous mobile edge computing environments
    (ELSEVIER, 2024) Ali, Babar; Golec, Muhammed; Gill, Sukhpal Singh; Wu, Huaming; Cuadrado, Felix; Uhlig, Steve; AGÜ, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü; Golec, Muhammed
    Kubernetes has transformed traditional monolithic Internet of Things (IoT) applications into lightweight, decentralized, and independent microservices, thus becoming the de facto standard for container orchestration. Intelligent and efficient container placement in Mobile Edge Computing (MEC) is challenging, as it is subject to user mobility and surplus but heterogeneous computing resources. One response to constantly changing user location is to relocate containers closer to the user; however, this leaves additional active nodes underutilized and increases the computational overhead of migration. Conversely, performing few or no migrations leads to higher latency, thus degrading the Quality of Service (QoS). To tackle these challenges, we created a framework named EdgeBus, which enables co-simulation of container resource management in heterogeneous Kubernetes-based MEC environments. It enables assessment of the impact of container migrations on resource management, energy, and latency. Further, we propose a mobility- and migration-cost-aware (MANGO) lightweight scheduler for efficient container management, which incorporates migration cost, CPU cores, and memory usage in container scheduling. For user mobility, the Cabspotting dataset is employed, which contains real-world traces of taxi mobility in San Francisco. In the EdgeBus framework, we created a simulated environment aided by a real-world testbed using Google Kubernetes Engine (GKE) to measure the performance of the MANGO scheduler against baseline schedulers such as IMPALA-based MobileKube, Latency Greedy, and Binpacking. Finally, extensive experiments demonstrate the effectiveness of MANGO in terms of latency and number of migrations.
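The kind of cost-aware placement the abstract describes can be sketched as a weighted scoring rule over candidate nodes. The weights, node fields, and values below are hypothetical, not MANGO's actual parameters; the sketch only shows the shape of the trade-off between migration cost and node load.

```python
# Hypothetical weights: migration cost vs. projected CPU and memory load.
W_MIG, W_CPU, W_MEM = 0.5, 0.3, 0.2

def score(node, current_node):
    """Lower is better; staying on the current node incurs no migration cost."""
    migration = 0.0 if node["name"] == current_node else node["migration_cost"]
    return W_MIG * migration + W_CPU * node["cpu_load"] + W_MEM * node["mem_load"]

def place(container, nodes):
    """Pick the feasible node (enough free CPU/memory) with the lowest score."""
    feasible = [n for n in nodes
                if n["free_cpu"] >= container["cpu"] and n["free_mem"] >= container["mem"]]
    return min(feasible, key=lambda n: score(n, container["node"]))

nodes = [
    {"name": "edge-a", "migration_cost": 0.8, "cpu_load": 0.7, "mem_load": 0.6,
     "free_cpu": 2, "free_mem": 4},
    {"name": "edge-b", "migration_cost": 0.2, "cpu_load": 0.4, "mem_load": 0.3,
     "free_cpu": 1, "free_mem": 2},
]
container = {"cpu": 1, "mem": 2, "node": "edge-a"}
best = place(container, nodes)
print(best["name"])  # edge-b: its lower load outweighs the migration penalty
```

Tuning the weights shifts the balance the abstract mentions: a high migration weight keeps containers in place (risking latency), a low one chases the user (risking churn).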
  • Article
    PRICELESS: Privacy enhanced AI-driven scalable framework for IoT applications in serverless edge computing environments
    (JOHN WILEY & SONS LTD, 2024) Golec, Muhammed; Golec, Mustafa; Xu, Minxian; Wu, Huaming; Gill, Sukhpal Singh; Uhlig, Steve; ORCID: 0000-0003-0146-9735; AGÜ, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü; Golec, Muhammed
    Serverless edge computing has emerged as a new paradigm that integrates serverless and edge computing. By bringing processing power closer to the edge of the network, it offers advantages such as low latency by quickly processing data for time-sensitive Internet of Things (IoT) applications. However, serverless edge computing also inherits open problems of edge and serverless computing, such as cold start, security, and privacy, which remain unsolved. In this paper, we propose a new Blockchain-based, AI-driven, scalable framework called PRICELESS, which offers security and privacy in serverless edge computing environments while performing cold start prediction. In the PRICELESS framework, we use deep reinforcement learning for cold start latency prediction. For the experiments, a cold start dataset is created using a heart disease risk-based IoT application and deployed using Google Cloud Functions. Experimental results show the additional delay that the blockchain module adds to cold start latency and its impact on cold start prediction performance. Additionally, the performance of PRICELESS is compared with the current state-of-the-art method in terms of energy cost, computation time, and cold start prediction. Specifically, PRICELESS incurs 19 ms of additional latency and consumes 358.2 watts during training and 3.6 watts during prediction, trading this extra energy consumption for security and privacy.
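The tamper-evidence that a blockchain module contributes can be illustrated with a minimal hash-chain sketch. This is not the PRICELESS implementation; the record fields and values are made up, and only the chaining idea (each record stores the SHA-256 hash of its predecessor) is shown.

```python
import hashlib
import json

def block_hash(block):
    # Canonical JSON (sorted keys) so the hash is deterministic.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, payload):
    """Link each new record to the previous one via its hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})
    return chain

def verify(chain):
    """Valid only if every block stores the hash of its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"cold_start_ms": 912, "predicted_ms": 880})  # illustrative records
append_block(chain, {"cold_start_ms": 45, "predicted_ms": 60})
print(verify(chain))                       # True: untampered chain
chain[0]["payload"]["cold_start_ms"] = 1   # tamper with the first record
print(verify(chain))                       # False: the hash link is broken
```

Hashing each record is where the extra latency comes from: every logged prediction pays the cost of serialization plus a SHA-256 digest in exchange for integrity.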