Our paper at the AI4Sys '24 workshop, held with HPDC 2024, presents a novel technique that uses Generative AI (GenAI) to automate on-the-fly customization of AI/ML solutions. The ECO-LLM system dynamically adjusts task placement between edge and cloud computing, achieving performance nearly identical to manually tuned deployments while significantly reducing the manual effort and time needed to solve systems problems.
CLAP: Cost and Latency-Aware Placement of Microservices on the Computing Continuum
Our paper presents CLAP, a dynamic solution for optimizing microservice placement across edge and cloud computing in real-time applications. Using reinforcement learning, it addresses workload-induced latency spikes while improving cost efficiency. Experiments on video analytics workloads demonstrate cost reductions of 47% and 58% while keeping latency within acceptable bounds.
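The cost- and latency-aware placement idea behind CLAP can be illustrated with a toy reinforcement-learning sketch. All numbers below (costs, latencies, the workload states, and the reward weights) are hypothetical illustrations, not values from the paper; the learner is a simple one-step tabular update, far simpler than CLAP's actual method.

```python
import random

# Hypothetical cost/latency figures per (workload, placement) pair;
# placement 0 = edge, 1 = cloud. Not values from the paper.
COST = {("low", 0): 1.0, ("low", 1): 4.0, ("high", 0): 1.0, ("high", 1): 4.0}
LATENCY = {("low", 0): 50, ("low", 1): 80, ("high", 0): 400, ("high", 1): 90}

def reward(workload, action, alpha=0.5, beta=0.01):
    # Negative weighted sum of cost and latency: the cost/latency-aware objective.
    return -(alpha * COST[(workload, action)] + beta * LATENCY[(workload, action)])

def train(episodes=2000, eps=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    q = {(w, a): 0.0 for w in ("low", "high") for a in (0, 1)}
    for _ in range(episodes):
        w = rng.choice(("low", "high"))                      # observed workload state
        if rng.random() < eps:                               # epsilon-greedy exploration
            a = rng.choice((0, 1))
        else:
            a = max((0, 1), key=lambda x: q[(w, x)])
        q[(w, a)] += lr * (reward(w, a) - q[(w, a)])         # one-step value update
    return q

q = train()
policy = {w: max((0, 1), key=lambda a: q[(w, a)]) for w in ("low", "high")}
```

With these illustrative numbers, the learned policy keeps light workloads on the cheap edge and moves heavy workloads to the cloud, where the latency penalty outweighs the extra cost.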
LARA: Latency-Aware Resource Allocator for Stream Processing Applications
At the 32nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP 2024), we presented LARA, a tool that reduces latency in stream processing applications, a crucial requirement for real-time video analytics. Using a regression-based resource allocation technique, LARA reduces latency by up to 2.8X and delivers more than 2X the throughput of fixed allocation, outperforming the Vertical Pod Autoscaler (VPA).
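A regression-based allocator in the spirit of LARA can be sketched as: fit a latency model to profiling samples, then pick the smallest allocation whose predicted latency meets the target. The samples, the inverse-latency model form, the candidate allocations, and the 80 ms target below are all illustrative assumptions, not LARA's actual model or data.

```python
# Hypothetical profiling samples: (CPU cores, observed latency in ms).
samples = [(0.5, 210.0), (1.0, 110.0), (2.0, 60.0), (4.0, 35.0)]

def fit_inverse(samples):
    # Least-squares fit of the assumed model: latency ~ a * (1/cpu) + b.
    xs = [1.0 / c for c, _ in samples]
    ys = [lat for _, lat in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def min_cpu_for(target_ms, a, b, candidates=(0.5, 1.0, 1.5, 2.0, 3.0, 4.0)):
    # Smallest candidate allocation whose predicted latency meets the target.
    for c in candidates:
        if a / c + b <= target_ms:
            return c
    return candidates[-1]

a, b = fit_inverse(samples)
alloc = min_cpu_for(80.0, a, b)   # CPU cores needed to stay under 80 ms
```

The design point this illustrates: instead of reacting to utilization like VPA, a fitted latency model lets the allocator size resources directly for a latency objective.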
Improving Real-time Data Streams Performance on Autonomous Surface Vehicles using DataX
Our paper, presented at PDP 2024, describes a containerized distributed processing platform for Autonomous Surface Vehicles that enhances real-time data processing in marine environments. By managing microservices with DataX on Kubernetes, it addresses challenges such as limited connectivity and energy constraints. Experiments demonstrate its effectiveness in marine litter detection.
Scale Up while Scaling Out Microservices in Video Analytics Pipelines
Our paper, presented at POAT 2023 in Singapore, examines joint microservice scaling in Kubernetes, focusing on video analytics pipelines. It introduces DataX AutoScaleUp, which adjusts per-pod CPU resources (vertical scaling) while the Horizontal Pod Autoscaler (HPA) independently scales replicas out. This joint approach improves processing rates by up to 1.45X over traditional scaling.
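The core idea of vertical scaling alongside a horizontal autoscaler can be sketched as a control step that only nudges the per-pod CPU limit toward a utilization band, leaving replica counts entirely to HPA. The band thresholds, step size, and millicore bounds below are illustrative assumptions, not parameters from DataX AutoScaleUp.

```python
def next_cpu_limit(current_millicores, utilization,
                   low=0.5, high=0.8, step=250, floor=250, ceiling=4000):
    """Return the next per-pod CPU limit (millicores) for one control step.

    Replica count is deliberately untouched: HPA owns scale-out, and this
    routine only handles the vertical (scale-up/down) dimension.
    """
    if utilization > high:                      # pods saturated: grow the limit
        return min(current_millicores + step, ceiling)
    if utilization < low:                       # pods idle: shrink to free capacity
        return max(current_millicores - step, floor)
    return current_millicores                   # in band: let HPA absorb load changes
```

Keeping the two controllers on separate knobs (per-pod size vs. replica count) is what lets them run concurrently without fighting each other.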
Citizen Science for the Sea with Information Technologies: An Open Platform for Gathering Marine Data and Marine Litter Detection from Leisure Boat Instruments
Our IEEE eScience 2023 paper introduces the C4Sea-IT framework, an open platform for collecting marine data from leisure vessel instruments. It leverages the Internet of Things and Cloud Computing to enhance coastal data sharing, with a use case demonstrating marine litter tracking. The ultimate objective is to improve weather and ocean forecasts using AI.


