Arani Aravinthan and Madushi Pathmaperuma, University of Central Lancashire (UCLan)
Beauty is a power that allows people to express themselves, gain self-confidence, and open up to others. Beauty products can support this by creating a new look that uplifts one's character, yet choosing the right makeup product is not an easy task given today's diverse range of products. Intelligent systems for beauty and makeup selection have gained significant research interest in recent years. Most existing models focus on detecting prominent facial features such as skin tone, lip colour, and overall facial structure. However, minor yet impactful areas, such as the delicate regions around the eyes, are often overlooked. These areas play a crucial role in defining facial aesthetics, influencing expressions, and enhancing overall appearance. To address this gap, this system is designed to provide targeted recommendations for eye-focused beauty enhancements, ensuring a more comprehensive and personalized approach to makeup selection. The proposed system recommends makeup products based on personal traits of the user, such as the length and volume of the eyelashes. A new approach to calculating eyelash length has been devised, aided by computer vision techniques such as edge detection, and a regression-based Convolutional Neural Network (CNN) model is trained for prediction. A Support Vector Machine (SVM) handles the classification task of recommending products for eyelash care.
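The exact pipeline is left to the full paper; the following minimal sketch, assuming a pre-cropped eye-region image, illustrates the two stages named in the abstract: Canny edge detection (OpenCV) and a small regression CNN trained with a mean-squared-error objective (PyTorch). Layer sizes, thresholds, and the 64x64 input resolution are illustrative assumptions.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def eyelash_edge_map(eye_crop_bgr):
    """Canny edge detection on an eye-region crop (thresholds illustrative)."""
    gray = cv2.cvtColor(eye_crop_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)   # suppress skin texture
    return cv2.Canny(gray, 50, 150)            # binary edge map

class LashLengthRegressor(nn.Module):
    """Small CNN mapping a 64x64 edge map to a single length estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 1)
        )
    def forward(self, x):
        return self.head(self.features(x))

# Toy end-to-end pass: edge map -> tensor -> length prediction (MSE training).
crop = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in eye crop
edges = torch.from_numpy(eyelash_edge_map(crop)).float().div(255).view(1, 1, 64, 64)
model, loss_fn = LashLengthRegressor(), nn.MSELoss()
loss = loss_fn(model(edges), torch.tensor([[0.7]]))          # normalised target length
loss.backward()
```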
Edge Detection, Contour Detection, Support Vector Machine.
Massimiliano Barone, STMicroelectronics, Agrate Brianza, Milano, Italy
The goal of this paper is to present a new, fast, and sub-optimal technique for splitting overly large regions of interest (ROIs) or the original full image. ROIs are typically box areas containing significant image information. The aim is to generate ROIs based on a specific criterion for subsequent focused analysis. The proposed technique converts a generic ROI size or the original full image into a few smaller, fixed-size ROIs, fully covering all significant details, with maximum intersection over union and a high number of details per new ROI, where the details typically come from a previous feature extraction or image segmentation step. This method is particularly helpful for convolutional neural networks (CNNs) with fixed-size input images and for datasets requiring image portions with as much pertinent information as possible. Its effectiveness is compared against dividing the original ROI into a regular grid of ROIs, using the number of ROIs and the average detail density per ROI as metrics.
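The paper's exact splitting criterion is not reproduced here; the sketch below is a simple greedy stand-in for the same task, covering a set of detail points (for example, segment centroids) with fixed-size ROIs that each enclose as many remaining details as possible.

```python
import numpy as np

def split_into_fixed_rois(points, roi_w, roi_h):
    """Greedily cover (x, y) detail locations with fixed-size boxes."""
    remaining = list(map(tuple, points))
    rois = []
    while remaining:
        # Anchor a candidate ROI at each uncovered point and keep the one
        # that covers the most remaining details.
        best_box, best_cover = None, []
        for (x, y) in remaining:
            box = (x, y, x + roi_w, y + roi_h)
            cover = [(px, py) for (px, py) in remaining
                     if box[0] <= px < box[2] and box[1] <= py < box[3]]
            if len(cover) > len(best_cover):
                best_box, best_cover = box, cover
        rois.append(best_box)
        remaining = [p for p in remaining if p not in best_cover]
    return rois

pts = np.random.randint(0, 500, size=(40, 2))    # stand-in detail centroids
print(split_into_fixed_rois(pts, 128, 128))
```

Each returned box then yields a crop of the fixed size a CNN input expects; a grid baseline, by contrast, would tile the image regardless of where the details fall.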
Regions of interest, ROI, clustering, segments, computer vision, region proposal.
Atoshe Islam Sumaya, M. Nessa, and T. Rahman, School of Engineering, BRAC University, Bangladesh
FIR filters are widely used in image processing, wireless communication, radar systems, control systems, and biomedical signal processing for their intrinsic stability and linear-phase characteristic. This design uses a prime number of taps, 41, to decrease periodic faults and symmetric artifacts in the frequency response. The selected tap count achieves an optimal balance between processing speed and signal quality, which makes it appropriate for real-time and high-performance systems. The design uses SkyWater 130 nm CMOS technology and an ensemble of open-source tools covering the full flow from RTL to GDSII, including OpenLane, OpenROAD, Magic, KLayout, Netgen, and Yosys, along with industry-standard DRC and LVS checks for verification. The implemented layout occupies 642,151 square micrometers, operates at 72.61 MHz, and consumes 0.266 microwatts. The proposed 41-tap FIR filter design introduces significant improvements in performance and power consumption compared to existing work.
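As a rough illustration of the upstream coefficient design for such a filter, the sketch below generates 41 symmetric (hence linear-phase) low-pass coefficients with SciPy and quantizes them to fixed point, as an RTL flow typically requires. The cutoff frequency and the Q1.15 bit width are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import firwin

TAPS = 41                                             # prime tap count, as in the paper
coeffs = firwin(TAPS, cutoff=0.25, window="hamming")  # symmetric taps -> linear phase

# Quantise to signed 16-bit fixed point (Q1.15) for a hardware datapath:
q = np.round(coeffs * (1 << 15)).astype(np.int16)

def fir_step(delay_line, x):
    """Direct-form FIR: shift the 41-sample delay line, output the dot product."""
    delay_line = np.roll(delay_line, 1)
    delay_line[0] = x
    y = int(np.dot(delay_line.astype(np.int64), q)) >> 15
    return delay_line, y

dl = np.zeros(TAPS, dtype=np.int16)
dl, y = fir_step(dl, 1000)                            # feed one input sample
print(y)
```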
Sky130 PDK, Heatmap, OpenLane, Magic, KLayout.
Alexander Dylan Bodner, Jack Natan Spolski, Antonio Santiago Tepsich, Santiago Pourteau, Universidad de San Andres, Argentina
In this paper, we present Convolutional Kolmogorov-Arnold Networks, a novel architecture that integrates the learnable spline-based activation functions of Kolmogorov-Arnold Networks (KANs) into convolutional layers. By replacing traditional fixed-weight kernels with learnable non-linear functions, Convolutional KANs offer a significant improvement in parameter efficiency and expressive power over standard Convolutional Neural Networks (CNNs). We empirically evaluate Convolutional KANs on the Fashion-MNIST dataset, demonstrating competitive accuracy with up to 50% fewer parameters compared to baseline classic convolutions. This suggests that the KAN Convolution can effectively capture complex spatial relationships with fewer resources, offering a promising alternative for parameter-efficient deep learning models.
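The authors' spline parameterization is not reproduced here; the following PyTorch sketch conveys the core idea with a simplification, replacing each scalar kernel weight with a learnable univariate function expressed as a sum of fixed Gaussian basis functions with learnable coefficients.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANConv2d(nn.Module):
    """KAN-style convolution: a learnable nonlinearity per kernel position,
    parameterised here with a Gaussian basis instead of B-splines."""
    def __init__(self, in_ch, out_ch, k=3, n_basis=8):
        super().__init__()
        self.k, self.out_ch = k, out_ch
        self.register_buffer("centers", torch.linspace(-2, 2, n_basis))
        # One coefficient vector per (output channel, kernel position).
        self.coeffs = nn.Parameter(torch.randn(out_ch, in_ch * k * k, n_basis) * 0.1)

    def forward(self, x):
        B, C, H, W = x.shape
        patches = F.unfold(x, self.k, padding=self.k // 2)   # (B, C*k*k, L)
        # Expand every patch value in the Gaussian basis:
        basis = torch.exp(-(patches.unsqueeze(-1) - self.centers) ** 2)  # (..., n_basis)
        out = torch.einsum("bjln,ojn->bol", basis, self.coeffs)          # (B, out_ch, L)
        return out.view(B, self.out_ch, H, W)

x = torch.randn(2, 1, 28, 28)
print(KANConv2d(1, 4)(x).shape)   # torch.Size([2, 4, 28, 28])
```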
Machine Learning, Kolmogorov-Arnold Networks, Convolutional Kolmogorov-Arnold Networks.
Sun Xiaoping and Rodolfo C. Raga Jr, National University, Philippines
This paper proposes ST-HyDM, an air quality prediction model based on spatiotemporal data mining and hybrid deep learning. The model constructs a dynamic graph neural network to capture spatiotemporal dependencies, fuses multimodal features, and uses a dual-branch Transformer and TCN architecture for feature extraction and prediction. In addition, a missing-value processing mechanism based on generative adversarial networks (GANs) is introduced to improve data integrity and model performance. Experimental results show that, compared with traditional models such as linear regression and random forest, ST-HyDM achieves higher accuracy and robustness in air quality prediction, providing a more effective solution for the task.
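As an illustration of the dual-branch idea, the sketch below pairs a Transformer encoder branch with a dilated causal convolution (TCN-style) branch and fuses them for one-step prediction. Dimensions and the fusion rule are illustrative; the full ST-HyDM model's dynamic GNN and GAN-based imputation are omitted.

```python
import torch
import torch.nn as nn

class DualBranchForecaster(nn.Module):
    def __init__(self, n_feat=8, d_model=32):
        super().__init__()
        self.proj = nn.Linear(n_feat, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.tcn = nn.Sequential(   # causal via left-only padding
            nn.ConstantPad1d((2, 0), 0.0), nn.Conv1d(d_model, d_model, 3, dilation=1), nn.ReLU(),
            nn.ConstantPad1d((4, 0), 0.0), nn.Conv1d(d_model, d_model, 3, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(2 * d_model, 1)   # fuse branches by concatenation

    def forward(self, x):                        # x: (B, T, n_feat)
        h = self.proj(x)
        t_branch = self.transformer(h)[:, -1]            # global temporal summary
        c_branch = self.tcn(h.transpose(1, 2))[:, :, -1] # local causal summary
        return self.head(torch.cat([t_branch, c_branch], dim=-1))

x = torch.randn(4, 24, 8)                        # 24 time steps of 8 features
print(DualBranchForecaster()(x).shape)           # torch.Size([4, 1])
```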
Air quality prediction, spatiotemporal data mining, hybrid deep learning, dynamic graph neural network, missing value processing.
Ekta Gujral and Apurva Sinha, UC Riverside, UT Dallas
Community detection in social network graphs plays a vital role in uncovering group dynamics, influence pathways, and the spread of information. Traditional methods focus primarily on graph structural properties, but recent advancements in Large Language Models (LLMs) open up new avenues for integrating semantic and contextual information into this task. In this paper, we present a detailed investigation into how various LLM-based approaches perform in identifying communities within social graphs. We introduce a two-step framework called CommLLM, which leverages the GPT-4o model along with prompt-based reasoning to fuse language model outputs with graph structure. Evaluations are conducted on six real-world social network datasets, measuring performance using key metrics such as Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), Variation of Information (VOI), and cluster purity. Our findings reveal that LLMs, particularly when guided by graph-aware strategies, can be successfully applied to community detection tasks in small to medium-sized graphs. We observe that the integration of instruction-tuned models and carefully engineered prompts significantly improves the accuracy and coherence of detected communities. These insights not only highlight the potential of LLMs in graph-based research but also underscore the importance of tailoring model interactions to the specific structure of graph data.
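The framework's prompts are not reproduced here; the sketch below shows only the evaluation step, scoring a set of LLM-detected community labels against ground truth with NMI, ARI, and cluster purity using scikit-learn (VOI is omitted, as scikit-learn has no implementation). The label arrays are toy data standing in for assignments parsed from model output.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def purity(true, pred):
    """Fraction of nodes whose cluster's majority true label matches theirs."""
    true, pred = np.asarray(true), np.asarray(pred)
    total = 0
    for c in np.unique(pred):
        members = true[pred == c]
        total += np.bincount(members).max()
    return total / len(true)

true_labels = [0, 0, 0, 1, 1, 1, 2, 2]   # ground-truth communities
llm_labels  = [0, 0, 1, 1, 1, 1, 2, 2]   # parsed from the LLM's output

print("NMI:   ", normalized_mutual_info_score(true_labels, llm_labels))
print("ARI:   ", adjusted_rand_score(true_labels, llm_labels))
print("Purity:", purity(true_labels, llm_labels))
```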
Large Language Model (LLM), Social Network Graphs, Community Detection, Data mining.
Nuwan Kaluarachchi¹, Arathi Arakala¹, Sevvandi Kandanaarachchi² and Kristen Moore², ¹School of Mathematical and Geospatial Sciences, RMIT University, Melbourne, Australia, ²CSIRO's Data61, Melbourne, Australia
Keystroke dynamics is a behavioural biometric modality that utilises individual typing patterns for user authentication. While it has been popular for single-device authentication, its application in cross-device scenarios remains under-explored. This paper proposes a transfer learning framework for cross-device user authentication with keystroke dynamics. Specifically, we authenticate users on tablets by training the authentication unit mostly on smartphone keystroke dynamics. We call our framework TEDxBC, as it includes a Transfer Encoder, a Data-fusion module, and a Binary Classifier for cross-device scenarios. Leveraging 24 keystroke dynamics features incorporating spatial and traditional features, TEDxBC employs an inductive transfer encoder to map users from the smartphone to the tablet. We evaluate TEDxBC on participants from the publicly available BBMAS dataset. Our method achieves an average Equal Error Rate (EER) of 14%, surpassing state-of-the-art methods on the same database. Furthermore, we apply a biometric menagerie analysis to gain insights into the performance of different user groups. Our analysis reveals that users in the "doves" group, who authenticate with high accuracy on a single device, retain their high performance in the cross-device scenario, achieving an EER of 5% and surpassing the overall TEDxBC performance.
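The TEDxBC architecture itself is not available in code form here; the sketch below shows only the Equal Error Rate computation used to report such results, given genuine/impostor similarity scores from any binary classifier. The score values are toy data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(y_true, scores):
    """EER: the operating point where false accept rate equals false reject rate."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fpr - fnr))    # threshold where the two rates cross
    return (fpr[idx] + fnr[idx]) / 2

y = np.array([1, 1, 1, 0, 0, 0, 1, 0])      # 1 = genuine user attempt
s = np.array([0.9, 0.8, 0.4, 0.3, 0.5, 0.1, 0.7, 0.2])
print(f"EER = {equal_error_rate(y, s):.2%}")
```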
Authentication, Transfer learning, Cross-device biometric authentication, Keystroke dynamics, Biometric menagerie.
Doliana Celaj¹ and Greta Jani², ¹Bayswater College, England, ²Department of Albanian Languages, Faculty of Education, "Aleksander Moisiu" University, Durres, Albania
This study explores the impact of Natural Language Processing (NLP) tools on speaking and vocabulary development among ESL learners. Over a six-week intervention, two groups of students were assessed: one using traditional learning methods and the other supported by NLP-enhanced tools. The results revealed significant improvements in vocabulary retention, speaking performance, and learner confidence in the NLP-supported group, underscoring the value of technology-driven approaches in ESL instruction.
Natural Language Processing (NLP), ESL Education, Speaking Skills, Vocabulary Acquisition, Speech Recognition, Pronunciation Training, Student Motivation.
LalithSriram Datla, Cloud Engineer
This paper explores how DevOps practices are transforming the landscape of software development and operations in healthcare and insurance sectors, with a strong focus on enhancing monitoring capabilities and optimizing Continuous Integration/Continuous Deployment (CI/CD) pipelines. In these industries, where system reliability, regulatory compliance, and scalability are critical, DevOps introduces a culture of collaboration, automation, and continuous feedback that directly addresses these demands. Traditional development models often suffer from siloed teams, manual deployment processes, and delayed feedback loops, all of which are detrimental in environments where patient data security and financial transaction integrity are non-negotiable. By integrating DevOps principles, organizations can establish real-time monitoring solutions that provide deep visibility into system performance, proactively detect anomalies, and ensure uptime—essential for meeting stringent healthcare and insurance service-level agreements. Moreover, DevOps-driven CI/CD pipelines help automate testing, validation, and deployment processes, reducing the likelihood of human error while accelerating delivery cycles. This is particularly valuable for maintaining compliance with standards such as HIPAA in healthcare and SOC 2 in insurance, as automation enhances audit readiness and ensures consistent policy enforcement. Scalability, too, is significantly improved through infrastructure-as-code and container orchestration tools that enable systems to adapt rapidly to changing user loads without compromising stability. In essence, the paper outlines a DevOps-centric roadmap that guides healthcare and insurance software teams to not only build and deploy faster but also operate more securely and reliably. Through real-world scenarios and best practices, it showcases how integrating continuous monitoring with CI/CD pipelines creates a feedback-rich, automated environment that empowers organizations to remain competitive while staying compliant and resilient in the face of evolving regulatory and operational challenges.
DevOps, Continuous Monitoring, CI/CD, Healthcare IT, Insurance Systems, Compliance Automation, Infrastructure as Code, Release Orchestration, Containerization, Microservices, Security Auditing, ITIL Processes, Regulatory Reporting, Health Data Integration, Policy Management Systems, SLA Enforcement.
Swetha Talakola, Quality Engineer III at Walmart, Inc, USA
Emphasizing the shift from conventional manual testing to thorough CI/CD automation, this article describes the evolution of a QA engineer's role in improving the dependability and efficiency of the software development lifecycle. The move to continuous integration and continuous delivery (CI/CD) has progressed from a novelty to a necessity, as modern development cycles depend on improved scalability and accelerated releases. The story follows the engineer's evolution from initially struggling with repetitive manual test cases to progressively embracing scripting, learning test automation tools, and integrating quality checks into CI/CD pipelines. It examines issues such as tool choice, framework design, shifts in team culture, and building trust in automation, and shows how a change in perspective, an emphasis on continuous learning, and collaborative growth can turn testing from an obstacle into a stimulus for innovation. Among the main outcomes are improved overall product quality, faster problem solving, faster feedback loops, and more frequent deployments. By the end of this journey, the QA engineer had embraced automation and contributed significantly to the success of DevOps, demonstrating that quality assurance can thrive in an agile, dynamic setting with suitable tools and approaches. The article aims to motivate other QA professionals facing comparable difficulties by providing concrete advice for improving CI/CD maturity.
Quality Assurance, Manual Testing, CI/CD, Automation, DevOps, Test Automation, QA Transformation, Pipeline Integration, Agile, Continuous Testing, Shift Left Testing, Test Strategy, Deployment Automation, Software Quality, QA Engineering, Code Integration, Regression Testing, Build Verification, Agile Testing, QA Best Practices, DevOps Culture, Continuous Delivery, Testing Frameworks, Scripting, Release Cycle Optimization.
Krishna Chaitanya Chaganti, Associate Director at S&P Global
As modern software development increasingly adopts cloud-native designs, efficient and automated deployment methods become ever more critical. Using cloud-native paradigms, companies are rapidly building scalable, robust, and adaptable systems; nevertheless, reaching these goals requires a fundamental transformation in how software is designed, tested, and deployed. This study investigates the value Continuous Integration and Continuous Deployment (CI/CD) pipelines offer in enabling the effective and consistent delivery of cloud-ready applications. As systems grow more complex and development cycles shorten, automation becomes ever more vital to keeping pace without compromising stability or quality. CI/CD pipelines provide the foundation of agile methods and DevOps practices, enabling businesses to deliver consistent, predictable outcomes for new features, upgrades, and patches. Automating builds, testing, and deployment reduces human error, frees resources for innovation rather than manual operational activities, and lowers deployment-related stress. The article examines the typical difficulties businesses experience when adopting CI/CD pipelines for cloud-hosted systems. Among the primary challenges are guaranteeing consistent and dependable testing practices, combining multiple technologies across development and operations, and applying rigorous security measures throughout the deployment process. Resolving these problems requires both strategic insight and technical ability. Through a comprehensive analysis of accepted methods, including pipeline-as-code approaches, shift-left testing techniques, and effective use of containerization, the paper offers pragmatic guidance for constructing robust and efficient pipelines. These approaches keep systems flexible and adaptive to future needs and improve deployment dependability. Implementation data compiled in this report demonstrates significant increases in system uptime, overall developer productivity, deployment speed, and failure recovery times. CI/CD pipelines involve some initial complexity, but the long-term advantages in agility, scalability, and operational efficiency significantly outweigh the early difficulties. Particularly for businesses seeking to remain competitive in fast-paced environments, CI/CD has become a basic pillar of modern software delivery. The work then addresses emerging trends expected to shape the future development of CI/CD: policy-as-code for automated security compliance, self-healing systems, and AI-driven pipeline optimization. These advances provide a clear basis for companies aiming to future-proof their deployment strategies, since they guarantee flexibility and resilience in a constantly changing technological environment.
CI/CD, DevOps, Cloud-Native, Deployment Automation, Continuous Integration, Continuous Delivery, Kubernetes, DevSecOps, GitOps, Agile Development.
Abdul Jabbar Mohammad, UKG Lead Technical Consultant at Metanoia Solutions Inc, USA
Kronos and UKG PRO are major platforms in the human capital management (HCM) sector, consistently supporting businesses with labour planning, payroll, timekeeping, and talent management. As UKG (Ultimate Kronos Group) advances its cloud-first strategy, companies are progressively replacing outdated Kronos solutions with the more integrated and scalable UKG PRO platform. This significant transformation underscores strategic workforce analytics, enhanced data integration, and the growing demand for a better user experience in HCM solutions. This paper presents significant lessons, obstacles, and success factors based on a comprehensive examination of more than 100 migration projects addressing Kronos-to-UKG PRO conversions. To reduce disruption and maximize system performance, it emphasizes the importance of appropriate data mapping, agile transformation approaches, and tight change management. Key performance measures showing clear differences between installations were payroll accuracy, time-to-deploy, employee self-service adoption, and reporting efficiency. Combining automated tooling, phased migration plans, and stakeholder engagement helps reduce go-live issues and maximize long-term value realization. From reducing downtime to adjusting settings for multi-location companies, the findings yield consistent recommendations for improving migration performance. The report underlines how urgently system migration is needed to align technology with the evolving needs of employees and to increase HCM maturity. These insights provide HR managers, IT architects, and implementation partners handling the UKG transformation with helpful guidance, not only on technology but also on organizational flexibility through workforce empowerment.
Kronos migration, UKG PRO, HCM systems, HRIS conversions, data integrity, deployment benchmarks, workforce management, system integration, enterprise software transitions, migration strategy, payroll accuracy, timekeeping systems, employee self-service, cloud HCM, performance metrics, implementation best practices, legacy systems, change management, digital transformation, scalable solutions, automation accelerators.
Yasodhara Varma Rangineeni, Vice President at JPMorgan Chase, USA
This work provides a comprehensive framework for building compliance into ML systems in regulated industries, including the legal, financial, and healthcare sectors. Guaranteeing compliance with strict regulatory criteria, including GDPR and HIPAA, becomes increasingly important as ML technologies transform these industries. The paper outlines the main challenges companies face in balancing innovation with compliance, including data security, transparency, and model explainability. It also examines the nuances of safeguarding sensitive information, managing permissions, and maintaining audit trails. Using practical techniques such as automated compliance checks, thorough monitoring, and transparent reporting, the article offers a scalable governance framework that can be readily incorporated into existing ML procedures. These practices can help companies lower the risk of regulatory infringement and increase the dependability of their ML systems. The paper ends by stressing the importance of including compliance in the fundamental architecture of ML systems from the start, ensuring that these systems can evolve while remaining in line with regulatory criteria. The recommended measures ensure ethical and legal compliance while encouraging accountability and transparency, helping companies thrive in a digital environment under increased regulatory scrutiny.
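As one concrete illustration of the automated compliance checks and audit trails the framework calls for, the hypothetical sketch below blocks a training run when a dataset contains columns that look like direct identifiers and appends a record to an audit log. The column patterns and log format are illustrative assumptions, not a regulatory standard.

```python
import json, re, datetime

# Hypothetical patterns for direct identifiers; real deployments would use a
# vetted, regulation-specific catalogue.
DIRECT_IDENTIFIER_PATTERNS = [r"ssn", r"social.?security", r"passport",
                              r"email", r"phone", r"date.?of.?birth"]

def compliance_gate(columns, audit_log="ml_audit_trail.jsonl"):
    flagged = [c for c in columns
               if any(re.search(p, c.lower()) for p in DIRECT_IDENTIFIER_PATTERNS)]
    record = {"timestamp": datetime.datetime.utcnow().isoformat(),
              "check": "direct_identifier_scan",
              "flagged_columns": flagged,
              "result": "fail" if flagged else "pass"}
    with open(audit_log, "a") as f:          # append-only audit trail
        f.write(json.dumps(record) + "\n")
    if flagged:
        raise ValueError(f"Compliance gate failed: {flagged}")

compliance_gate(["age", "zip_code", "purchase_total"])   # passes, logs a record
```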
Machine Learning (ML), Compliance, Governance, Regulated Industries, Data Privacy, Ethical AI, Regulatory Frameworks, GDPR, HIPAA, ML Auditing, Risk Management.
Ali Asghar Mehdi Syed, IT Engineer at World Bank Group, USA
DevOps has become a vital approach in the constantly changing field of software development, as it links operations and development to maximize system performance and speed delivery. Modern software systems rely on fast iterations, constant integration, and clear deployment responsibilities; they cannot grow without this paradigm shift. Infrastructure as Code (IaC) is a basic element of this evolution because it helps teams automate and scale infrastructure management using code, allowing faster and more dependable deployments. The growing complexity and diversity of the environments in which Infrastructure as Code is applied make security an ever more critical problem. To safeguard sensitive data, track access control, and reduce vulnerabilities across many platforms and cloud services, IaC pipelines need a strong and proactive security strategy. With an eye towards the growth of scalable and secure pipelines used in a range of deployment environments, this paper investigates the intersection of DevOps and secure Infrastructure as Code (IaC). Our study focuses on the most efficient ways to integrate security throughout the pipeline, particularly the need for a layered security architecture covering both the infrastructure and application levels. This work provides basic methods for applying secure IaC solutions, thereby lowering risks and ensuring compliance, illustrated through case studies and industry-specific examples. It also discusses solutions for the challenges teams face across many different environments. The findings show that although security automation in Infrastructure as Code is widely appreciated, it requires constant monitoring, adaptation, and improvement. Emphasizing the importance of building infrastructure that is both scalable and secure and able to keep up with fast-changing technological conditions, the final part of the paper gives a vision for the future of DevOps and Infrastructure as Code.
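As a minimal stand-in for one static IaC security check of the kind discussed above, the sketch below scans Terraform-style files for hard-coded secrets before a pipeline applies them. Production pipelines would typically rely on dedicated scanners such as tfsec, Checkov, or gitleaks; the patterns here are illustrative only.

```python
import re, sys, pathlib

SECRET_PATTERNS = [
    # quoted literal assigned to a sensitive-looking key (not a "${var...}" reference)
    re.compile(r'(password|secret|token|api_key)\s*=\s*"[^"$]{8,}"', re.I),
    re.compile(r"AKIA[0-9A-Z]{16}"),   # shape of an AWS access key id
]

def scan_iac_files(root="."):
    findings = []
    for path in pathlib.Path(root).rglob("*.tf"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible hard-coded secret")
    return findings

if __name__ == "__main__":
    hits = scan_iac_files()
    print("\n".join(hits) or "no findings")
    sys.exit(1 if hits else 0)          # non-zero exit fails the CI job on a hit
```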
DevOps, Infrastructure as Code (IaC), Security, Scalable Pipelines, Heterogeneous Environments, Automation, Multi-Cloud Management, Hybrid Cloud Infrastructure, CI/CD, Policy as Code (PaC), Secrets Management, Static Code Analysis, Compliance Automation, Kubernetes Orchestration, Cloud Security Posture Management (CSPM), Immutable Infrastructure, Configuration Management, Monitoring and Observability.
Vamsi Alla and Raghuram Katakam
Telecom billing systems are the financial backbone of communication service providers, processing millions of transactions daily. These systems are susceptible to a range of anomalies such as overcharges, duplicated entries, missed charges, and unauthorized usage, which can result in substantial revenue loss and erode consumer trust. Traditional supervised learning methods require extensive labeled datasets, which are often unavailable or expensive to produce in the telecom domain due to the rarity and class imbalance of real anomalies. In this paper, we propose a novel anomaly detection framework based on Self-Supervised Learning (SSL), which eliminates the need for labeled anomalies. Our approach combines contrastive learning for latent representation and autoencoder-based reconstruction error to detect outliers. We apply our model to both synthetic and real-world telecom billing datasets, achieving superior performance compared to baseline models. Furthermore, we integrate SHAP-based explanations to ensure interpretability, which is crucial for operational deployment in billing systems. This method reduces false positives by 28% and demonstrates strong generalizability and operational readiness, offering a practical solution to anomaly detection in large-scale billing systems.
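The sketch below illustrates only the reconstruction-error half of the approach: an autoencoder trained on assumed-normal billing records flags records whose error exceeds a percentile threshold. The contrastive pretraining and SHAP explanation stages are omitted, and all dimensions and data are illustrative.

```python
import torch
import torch.nn as nn

class BillingAutoencoder(nn.Module):
    """Bottleneck autoencoder over normalised billing-record features."""
    def __init__(self, n_feat=12):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_feat, 8), nn.ReLU(), nn.Linear(8, 4))
        self.dec = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, n_feat))
    def forward(self, x):
        return self.dec(self.enc(x))

model = BillingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 12)                        # stand-in normalised records
for _ in range(200):                            # train to reconstruct normal data
    opt.zero_grad()
    loss = ((model(x) - x) ** 2).mean()
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(x) - x) ** 2).mean(dim=1)     # per-record reconstruction error
    threshold = torch.quantile(err, 0.99)       # e.g. flag the top 1%
    print("anomalies:", torch.nonzero(err > threshold).flatten().tolist())
```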
Telecom Billing, Anomaly Detection, Self-Supervised Learning, Contrastive Learning, Autoencoders, SHAP, Explainability.
Copyright © AMLA 2025