Arani Aravinthan and Madushi Pathmaperuma, University of Central Lancashire (UCLan), UK
Beauty is a power that allows people to express themselves, gain self-confidence and open up to others. Beauty products can support this by creating a new look that uplifts one's character. Choosing the right makeup product is not an easy task given the diverse range of products available today. Intelligent systems for beauty and makeup selection have gained significant research interest in recent years. Most existing models focus on detecting prominent facial features such as skin tone, lip colour, and overall facial structure. However, minor yet impactful areas, such as the delicate regions around the eyes, are often overlooked. These areas play a crucial role in defining facial aesthetics, influencing expressions, and enhancing overall appearance. To address this gap, the proposed system provides targeted recommendations for eye-focused beauty enhancements, ensuring a more comprehensive and personalized approach to makeup selection. The system recommends makeup products based on personal traits of the user, such as the length and volume of the eyelashes. A new approach to calculating eyelash length has been devised using computer vision techniques such as edge detection, and a regression-based Convolutional Neural Network (CNN) model is trained for prediction. A Support Vector Machine (SVM) is used for the classification task of recommending products for eyelash care.
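The abstract does not detail the eyelash-length computation; the following is a minimal illustrative sketch (not the authors' implementation) of how edge and contour detection in OpenCV could approximate lash length in pixels from a cropped eye image. The file name and pixel-to-millimetre scale are assumed placeholders.

    # Illustrative sketch only: approximate eyelash length from a cropped eye image
    # using Canny edge detection and contour arc length (OpenCV). The crop path and
    # pixel-to-millimetre scale are hypothetical placeholders.
    import cv2

    def estimate_lash_length(eye_crop_path, mm_per_pixel=0.05):
        gray = cv2.imread(eye_crop_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError(eye_crop_path)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise before edge detection
        edges = cv2.Canny(blurred, 50, 150)                  # binary edge map
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        if not contours:
            return 0.0
        # treat the longest thin contour as a candidate lash and report its arc length
        longest = max(contours, key=lambda c: cv2.arcLength(c, False))
        return cv2.arcLength(longest, False) * mm_per_pixel

    print(estimate_lash_length("eye_crop.png"))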
Edge Detection, Contour Detection, Support Vector Machine.
Massimiliano Barone, STMicroelectronics, Agrate Brianza, Milano, Italy
The goal of this paper is to present a new, fast, sub-optimal technique for splitting overly large regions of interest (ROIs) or the original full image. ROIs are typically box areas containing significant image information. The aim is to generate ROIs based on a specific criterion for subsequent focused analysis. The proposed technique converts a ROI of generic size, or the original full image, into a few smaller, fixed-size ROIs that fully cover all significant details, with maximum intersection over union and a high number of details per new ROI, where the details typically come from prior feature extraction or image segmentation. This method is particularly helpful for convolutional neural networks (CNNs) with fixed-size input images and for datasets requiring image portions with as much pertinent information as possible. The effectiveness is compared against dividing the original ROI into a regular grid of ROIs, using the number of ROIs and the average detail density per ROI.
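As a point of reference for the comparison described above, a minimal sketch of the regular-grid baseline is shown below: it tiles a large ROI into fixed-size tiles and reports the number of tiles and the average detail density per tile. The names and the detail representation (a list of point coordinates) are assumptions, not the paper's code.

    # Baseline only: split a large ROI into a regular grid of fixed-size ROIs and
    # compute the average detail density per tile. Details are modelled as (x, y)
    # points from a prior feature-extraction step; all names are hypothetical.
    def grid_split(roi_w, roi_h, tile, details):
        tiles = []
        for y in range(0, roi_h, tile):
            for x in range(0, roi_w, tile):
                tiles.append((x, y, min(tile, roi_w - x), min(tile, roi_h - y)))
        densities = []
        for (x, y, w, h) in tiles:
            inside = sum(1 for (px, py) in details
                         if x <= px < x + w and y <= py < y + h)
            densities.append(inside / (w * h))
        avg_density = sum(densities) / len(densities) if densities else 0.0
        return len(tiles), avg_density

    points = [(30, 40), (35, 42), (300, 310), (305, 315)]
    print(grid_split(640, 480, 128, points))   # (number of ROIs, average detail density)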
Regions of interest, ROI, clustering, segments, computer vision, region proposal.
Atoshe Islam Sumaya, M. Nessa, and T. Rahman, School of Engineering, BRAC University, Bangladesh
FIR filters are widely used in image processing, wireless communication, radar systems, control systems, and biomedical signal processing for their intrinsic stability and linear-phase characteristic. This design uses a prime number of coefficients, 41, to decrease periodic faults together with symmetric artifacts that appear in the frequency response. The selected tap count achieves a good balance between processing speed and signal quality, which makes it appropriate for real-time and high-performance systems. The design uses the SkyWater 130 nm CMOS technology and an ensemble of open-source tools covering the flow from RTL to GDSII, including OpenLane, OpenROAD, Magic, KLayout, Netgen, and Yosys, together with industry-standard DRC and LVS checks for verification. The implemented layout occupies 642,151 square micrometers with a 72.61 MHz operating speed and 0.266 microwatt power consumption. The proposed 41-tap FIR filter design introduces significant improvements in performance and power consumption compared to existing work.
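The RTL itself is not reproduced here; as a behavioural reference, the sketch below generates 41 symmetric (linear-phase) low-pass FIR coefficients and quantizes them to fixed point, the kind of golden model often used to check such a design against simulation. The cutoff frequency, sample rate, and word length are assumed values.

    # Behavioural reference only (not the Verilog RTL): 41-tap linear-phase low-pass
    # FIR coefficients, quantized to 16-bit fixed point for hardware comparison.
    # Cutoff frequency, sample rate, and word length are assumed values.
    import numpy as np
    from scipy.signal import firwin

    NUM_TAPS = 41          # prime tap count, as in the paper
    FS = 1_000_000         # assumed sample rate in Hz
    CUTOFF = 100_000       # assumed cutoff frequency in Hz

    taps = firwin(NUM_TAPS, CUTOFF, fs=FS, window="hamming")
    q_taps = np.round(taps * (2 ** 15)).astype(np.int16)   # Q1.15 quantization

    assert np.array_equal(q_taps, q_taps[::-1])            # symmetry implies linear phase
    print(q_taps)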
Sky130 PDK, Heatmap, OpenLane, Magic, KLayout.
Alexander Dylan Bodner, Jack Natan Spolski, Antonio Santiago Tepsich, Santiago Pourteau, Universidad de San Andres, Argentina
In this paper, we present Convolutional Kolmogorov-Arnold Networks, a novel architecture that integrates the learnable spline-based activation functions of Kolmogorov-Arnold Networks (KANs) into convolutional layers. By replacing traditional fixed-weight kernels with learnable non-linear functions, Convolutional KANs offer a significant improvement in parameter efficiency and expressive power over standard Convolutional Neural Networks (CNNs). We empirically evaluate Convolutional KANs on the Fashion-MNIST dataset, demonstrating competitive accuracy with up to 50% fewer parameters compared to baseline classic convolutions. This suggests that the KAN Convolution can effectively capture complex spatial relationships with fewer resources, offering a promising alternative for parameter-efficient deep learning models.
Machine Learning, Kolmogorov-Arnold Networks, Convolutional Kolmogorov-Arnold Networks.
Alexander Dylan Bodner, Jack Natan Spolski, Antonio Santiago Tepsich, Santiago Pourteau, Universidad de San Andres, Argentina
PulsePath addresses critical safety challenges faced by hearing-impaired individuals, who often lack access to the auditory cues essential for safe navigation in public settings [1]. PulsePath, a wearable device featuring a navigation system and blind spot detection, delivers real-time vibrational feedback to inform users of their surroundings and alert them to potential dangers and obstacles. PulsePath pairs with a mobile app developed in Flutter and connects via Bluetooth Low Energy (BLE), allowing users to customize settings and monitor device status [2]. Key components include a time-of-flight sensor for object detection, an ESP32 microcontroller for data processing, and haptic motors for tactile feedback. Major challenges such as sensor placement, excessive wiring, BLE latency, and user experience were addressed through better design and the use of more efficient microcontrollers such as the ESP32-S3. Experimental results indicate enhanced battery efficiency and a 95% accuracy rate with reduced latency from the ESP32-S3 in comparison with the RP2040. Unlike visual and auditory aids, PulsePath provides discreet and intuitive navigation support, offering an inclusive solution to enhance spatial awareness among individuals with hearing impairments.
Wearable Assistive Tech, Haptic Feedback, Hearing Impairment, Real-Time Navigation Networks.
Xiangxuan Zeng1, Ang Li2, 1Walnut High School, 400 Pierre Rd, Walnut, CA 91789, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840
ChemSynth is an interactive chemistry education platform designed to enhance student engagement and conceptual understanding through hands-on digital exploration. Built using Unity and C#, the game allows users to collect subatomic particles—protons, neutrons, and electrons—and synthesize atoms and molecules in a 3D environment. Unlike traditional learning tools or static periodic tables, ChemSynth enables players to actively construct matter from the nucleus outward, reinforcing foundational chemical principles. The platform also features an integrated AI system for validating atomic combinations and providing real-time feedback. Through experiments, we evaluate the system’s accuracy in detecting valid atomic structures and the effectiveness of its periodic table interface in improving student recall. Compared to prior approaches such as quiz-based learning or augmented reality visualization, ChemSynth offers a more immersive and constructive learning experience. The results demonstrate the platform’s potential to bridge the gap between abstract theory and practical understanding in early science education.
Machine Learning, Computer Vision, Chemistry, Compound.
Sun Xiaoping and Rodolfo C. Raga Jr, National University, Philippines
This paper proposes ST-HyDM, an air quality prediction model based on spatiotemporal data mining and hybrid deep learning. The model constructs a dynamic graph neural network to capture spatiotemporal dependencies, fuses multimodal features, and uses a dual-branch Transformer and TCN architecture for feature extraction and prediction. In addition, a missing-value processing mechanism based on generative adversarial networks (GANs) is introduced to improve data integrity and model performance. Experimental results show that, compared with traditional models such as linear regression and random forest, ST-HyDM achieves higher accuracy and robustness in air quality prediction, providing a more effective solution for this task.
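To make the dual-branch idea concrete, the following is a highly simplified sketch of one way a Transformer branch and a dilated-convolution (TCN-style) branch could be fused for a single-step forecast. It is not the ST-HyDM architecture, omits the dynamic graph and GAN components, and all dimensions are assumed.

    # Highly simplified sketch (not ST-HyDM): a dual-branch predictor with a
    # Transformer encoder branch and a dilated 1-D convolution (TCN-style) branch,
    # fused for a single-step air-quality forecast. All dimensions are assumed.
    import torch
    import torch.nn as nn

    class DualBranchForecaster(nn.Module):
        def __init__(self, n_features=8, d_model=32, horizon=1):
            super().__init__()
            self.proj = nn.Linear(n_features, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=2)
            self.tcn = nn.Sequential(
                nn.Conv1d(n_features, d_model, kernel_size=3, padding=2, dilation=2),
                nn.ReLU(),
                nn.Conv1d(d_model, d_model, kernel_size=3, padding=4, dilation=4),
                nn.ReLU(),
            )
            self.head = nn.Linear(2 * d_model, horizon)

        def forward(self, x):                        # x: (batch, time, features)
            t = self.transformer(self.proj(x))[:, -1]        # last time step
            c = self.tcn(x.transpose(1, 2)).mean(dim=-1)     # global average pool
            return self.head(torch.cat([t, c], dim=-1))

    model = DualBranchForecaster()
    print(model(torch.randn(4, 24, 8)).shape)        # torch.Size([4, 1])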
Air quality prediction, spatiotemporal data mining, hybrid deep learning, dynamic graph neural network, missing value processing.
Ekta Gujral and Apurva Sinha, UC Riverside, UT Dallas
Community detection in social network graphs plays a vital role in uncovering group dynamics, influence pathways, and the spread of information. Traditional methods focus primarily on graph structural properties, but recent advancements in Large Language Models (LLMs) open up new avenues for integrating semantic and contextual information into this task. In this paper, we present a detailed investigation into how various LLM-based approaches perform in identifying communities within social graphs. We introduce a two-step framework called CommLLM, which leverages the GPT-4o model along with prompt-based reasoning to fuse language model outputs with graph structure. Evaluations are conducted on six real-world social network datasets, measuring performance using key metrics such as Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), Variation of Information (VOI), and cluster purity. Our findings reveal that LLMs, particularly when guided by graph-aware strategies, can be successfully applied to community detection tasks in small to medium-sized graphs. We observe that the integration of instruction-tuned models and carefully engineered prompts significantly improves the accuracy and coherence of detected communities. These insights not only highlight the potential of LLMs in graph-based research but also underscore the importance of tailoring model interactions to the specific structure of graph data.
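The evaluation metrics named above can be computed directly from predicted and ground-truth community labels; the short sketch below shows NMI, ARI, and cluster purity on toy label vectors (VOI is omitted). The labels are made up for illustration and are not from the paper's datasets.

    # Illustration of the evaluation metrics mentioned above, on made-up labels:
    # Normalized Mutual Information, Adjusted Rand Index, and cluster purity.
    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score
    from sklearn.metrics import confusion_matrix

    true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])       # ground-truth communities
    pred = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0])       # communities found by a model

    def purity(y_true, y_pred):
        # each predicted cluster is credited with its most common true label
        cm = confusion_matrix(y_true, y_pred)
        return cm.max(axis=0).sum() / cm.sum()

    print("NMI:   ", normalized_mutual_info_score(true, pred))
    print("ARI:   ", adjusted_rand_score(true, pred))
    print("Purity:", purity(true, pred))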
Large Language Model (LLM), Social Network Graphs, Community Detection, Data mining.
Peter Zha1, Soroush Mirzaee2, 1Rancho San Joaquin Middle School, 4861 Michelson Dr, Irvine, CA 92612, 2California State Polytechnic University, Pomona, CA, 91768
Tennis players often face barriers to consistent practice due to weather, court availability, and scheduling conflicts [1]. This paper presents an intelligent video game developed with Unity and pose estimation to enable indoor tennis training [2]. Utilizing BlazePose and Unity Sentis, the system tracks player movements via webcam, translating them into a 3D avatar for interactive drills against a virtual ball dispenser [3]. Key challenges included optimizing pose estimation latency, sourcing 3D models, and rendering realistic graphics while keeping performance high; these were addressed through Sentis integration, Blender tools, and iterative lighting adjustments. Experiments comparing BlazePose, OpenPose, and MoveNet revealed BlazePose’s superior latency (28ms) and accuracy over the other two, validating its efficiency with Sentis. This accessible, desktop-based solution outperforms traditional methods by eliminating environmental dependencies and reducing costs. It empowers players to maintain skill development, offering a practical tool for tennis enthusiasts globally.
Sentis, Pose estimation, Unity, Video game, Tennis Training.
Rena Cao1, Tyler Boulom2, 1University High School, 4771 Campus Drive, Irvine, CA 92612, 2California State Polytechnic University, Pomona, CA, 91768
Houseplants have long held cultural and personal significance, first gaining popularity among the Victorian middle class as symbols of morality and status. Today, over half of American households own at least one houseplant, yet many struggle to keep them alive—on average, households have killed seven plants, with nearly half of plant owners expressing concern about their plant’s survival. Despite this, houseplants offer numerous benefits, from improving air quality and acoustics to enhancing mental health, productivity, and social interaction. EcoMonitor addresses the common challenges of plant care, particularly the issue of overwatering, by using AI and soil moisture sensors to assess plant health and alert users through a color-coded LED system. This system provides timely, intuitive reminders that adapt to environmental conditions, unlike generic care guides. By keeping the user actively engaged in care tasks, EcoMonitor promotes both plant health and personal satisfaction. Experimental testing validated the accuracy of the AI classification for different watering conditions and evaluated the effectiveness of LED feedback, showing strong potential for reducing plant mortality while preserving the rewarding experience of plant care.
Plant Care, AI Monitoring, Overwatering Prevention, Smart Gardening.
Nuwan Kaluarachchi1, Arathi Arakala1, Sevvandi Kandanaarachchi2 and Kristen Moore2, 1School of Mathematical and Geospatial Sciences, RMIT University, Melbourne, Australia, 2CSIRO’s Data61, Melbourne, Australia
Keystroke dynamics is a behavioural biometric modality that utilises individual typing patterns for user authentication. While it has been popular for single-device authentication, its application in cross-device scenarios remains under-explored. This paper proposes a keystroke dynamics solution for cross-device user authentication using a transfer learning framework. Specifically, we authenticate users on tablets by training the authentication unit mostly on smartphone keystroke dynamics. We call our framework TEDxBC, as it includes a Transfer Encoder, a Data-fusion module, and a Binary Classifier for cross-device scenarios. Leveraging 24 keystroke dynamics features incorporating spatial and traditional features, TEDxBC employs an inductive transfer encoder to map users from the smartphone to the tablet. We evaluate TEDxBC on participants from the publicly available BBMAS dataset. Our method achieves an average Equal Error Rate (EER) of 14%, surpassing state-of-the-art methods on the same database. Furthermore, we apply a biometric menagerie analysis to gain insights into the performance of different user groups. Our analysis reveals that users in the “doves” group who authenticate with high accuracy on a single device retain their high performance in a cross-device scenario. They achieve an EER of 5%, surpassing the overall TEDxBC performance.
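For readers unfamiliar with the EER figures quoted above, the sketch below shows one standard way to estimate the Equal Error Rate from genuine and impostor match scores. The scores are synthetic and the code is not part of TEDxBC.

    # Standard EER estimation from match scores (synthetic data, not TEDxBC code):
    # the EER is the operating point where false accept and false reject rates meet.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    genuine  = rng.normal(0.7, 0.1, 500)     # higher scores for the true user
    impostor = rng.normal(0.4, 0.1, 500)     # lower scores for other users

    scores = np.concatenate([genuine, impostor])
    labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    eer_index = np.nanargmin(np.abs(fnr - fpr))
    print("EER ~=", (fpr[eer_index] + fnr[eer_index]) / 2)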
Authentication, Transfer learning, Cross-device biometric authentication, Keystroke dynamics, biometric menagerie.
Doliana Celaj1 and Greta Jani2, 1Bayswater College, England, 2Department of Albanian Languages, Faculty of Education, “Aleksander Moisiu” University, Durres, Albania
This study explores the impact of Natural Language Processing (NLP) tools on speaking and vocabulary development among ESL learners. Over a six-week intervention, two groups of students were assessed: one using traditional learning methods and the other supported by NLP-enhanced tools. The results revealed significant improvements in vocabulary retention, speaking performance, and learner confidence in the NLP-supported group, underscoring the value of technology-driven approaches in ESL instruction.
Natural Language Processing (NLP), ESL Education, Speaking Skills, Vocabulary Acquisition, Speech Recognition, Pronunciation Training, Student Motivation.
Sumit Mamtani, New York University, USA
Accurate text classification requires both deep contextual understanding and structural representation of language. This study explores a hybrid approach that integrates transformer-based embeddings with graph-based neural architectures to enhance text classification performance. By leveraging pre-trained language models for feature extraction and applying graph convolution techniques for relational modeling, the proposed method captures both semantic meaning and structural dependencies in text. Experimental results demonstrate improved classification accuracy over traditional approaches, highlighting the effectiveness of combining deep contextual learning with graph-based representations in NLP tasks.
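A single graph-convolution step over pre-computed transformer embeddings captures the gist of the combination described above; the sketch below applies the standard normalized-adjacency propagation rule to a toy document graph. The embeddings are stubbed with random vectors, and this is not the paper's architecture.

    # Gist of the combination: one GCN propagation step, H' = D^{-1/2} A_hat D^{-1/2} H W,
    # applied over a toy document graph. Transformer embeddings are stubbed with
    # random vectors; this is not the paper's architecture.
    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 768))          # stand-in for per-document BERT embeddings
    A = np.array([[0, 1, 1, 0],            # toy document graph (e.g. shared terms)
                  [1, 0, 0, 1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)

    A_hat = A + np.eye(4)                  # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    W = rng.normal(size=(768, 64))         # learnable projection (random here)

    H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)   # ReLU
    print(H_next.shape)                    # (4, 64): structure-aware document features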
Natural Language Processing (NLP), Text classification, Graph Neural Networks (GNNs), Graph Convolutional Networks (GCNs).
Andreas Ottem, Independent Researcher, Norway
Retrieval-Augmented Generation (RAG) systems typically face constraints because of their inherent mechanism: a simple top-k semantic search. The approach often leads to the incorporation of irrelevant or redundant information in the context, degrading performance and efficiency [10][11]. This paper presents MeVe, a novel modular architecture intended for Memory Verification and smart context composition. MeVe rethinks memory management as a tunable pipeline of five distinct phases: initial retrieval, relevance verification, fallback retrieval, context prioritization, and token budgeting. This architecture enables fine-grained control of what knowledge is made available to an LLM, enabling task-dependent filtering and adaptation. We release a reference implementation of MeVe as a proof of concept and evaluate its performance on knowledge-heavy QA tasks over a subset of English Wikipedia. Our results demonstrate that by actively verifying information before composition, MeVe improves context efficiency by 57% compared to standard RAG implementations. This work provides a framework for more scalable and dependable LLM applications. By refining and distilling contextual information, MeVe offers a path towards better grounding and more accurate factual support [16].
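To make the five phases concrete, the following is a minimal skeleton of how such a pipeline could be composed. It mirrors the phase names in the abstract, but the toy corpus, the word-overlap "verifier", and the token budget are placeholders, not the released MeVe implementation.

    # Skeleton of a five-phase memory pipeline mirroring the phases in the abstract.
    # The toy corpus, overlap-based "verifier", and budget are placeholders, not MeVe.
    CORPUS = [
        "Paris is the capital of France.",
        "The Eiffel Tower is in Paris.",
        "Python is a programming language.",
    ]

    def overlap(query, text):                       # crude relevance proxy
        q, t = set(query.lower().split()), set(text.lower().split())
        return len(q & t) / max(len(q), 1)

    def initial_retrieval(query, k=10):             # phase 1: top-k candidates
        return sorted(CORPUS, key=lambda t: overlap(query, t), reverse=True)[:k]

    def verify(query, candidates, threshold=0.2):   # phase 2: relevance verification
        return [t for t in candidates if overlap(query, t) >= threshold]

    def fallback(query, verified, k=1):             # phase 3: fallback retrieval
        return verified if verified else initial_retrieval(query, k)

    def prioritize(query, candidates):              # phase 4: context prioritization
        return sorted(candidates, key=lambda t: overlap(query, t), reverse=True)

    def budget(candidates, max_tokens=20):          # phase 5: token budgeting
        context, used = [], 0
        for text in candidates:
            cost = len(text.split())
            if used + cost > max_tokens:
                break
            context.append(text)
            used += cost
        return "\n".join(context)

    query = "What is the capital of France?"
    print(budget(prioritize(query, fallback(query, verify(query, initial_retrieval(query))))))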
Retrieval-Augmented Generation, Language Models, Context Management, Memory Verification, Modular Architecture.
Ariba Khan1 and Athar Yawar Hashmi2, 1Department of Computer Science, Aligarh Muslim University, 2Aligarh Muslim University, India
The medical field is driven by a knowledge enterprise that makes use of enormous and growing quantities of narrative material: case histories composed by medical professionals, pathologists and radiologists in particular, as well as discharge summaries and reports. Electronic medical systems typically store this knowledge in unorganized and inconsistent documents, which makes it difficult for machines to comprehend the facts and elements of the narrative data. Therefore, it can be difficult to obtain useful and relevant healthcare data to support decisions. However, narrative healthcare data has been structured using Natural Language Processing (NLP) methods. NLP approaches can take unprocessed healthcare text, analyse its grammatical structure, determine its meaning, and transform the information so that it is readily understood by electronic medical systems. As a result, NLP approaches improve healthcare standards while decreasing expenses. Over the past few decades, significant advances have also been made in computer vision, image processing, and pattern recognition. Medical imaging has likewise received greater attention as a consequence of its essential role in medical technology, and researchers have made available a large body of scientific knowledge and statistics demonstrating its evolution and applications in healthcare. Clinicians have moved from laboratory work to the bedside as an outcome of the growth of these scientific domains. This paper reviews NLP and computer vision techniques in healthcare, as well as the obstacles they face.
Electronic Healthcare system, Healthcare, Computer Vision, Natural Language Processing Techniques, Medical Image, Machine Learning algorithms.
Dare Erie and Ehigie, University of Birmingham, Edgbaston, England
This paper examines the intersection of historical educational exclusion with modern AI tools that support learners with dyslexia, utilizing Michel Peyramaure’s L’Orange de Noël as a literary lens. The novel’s portrayal of Malvina Delpeuch, likely dyslexic, reveals systemic failures in early 20th-century French education, highlighting cultural stigmas and institutional neglect faced by neurodivergent students in a strict, religious setting. It contrasts these issues with contemporary AI applications, such as annotation systems, diagnostic models, and personalized learning apps, which enhance accessibility and shift the culture from stigma to support. An interdisciplinary approach links literature, technology, and disability studies, envisioning learners like Malvina being recognized and empowered. Recommendations include incorporating AI into classrooms, training teachers, and ensuring the ethical use of AI. The paper concludes that AI can address historical inequalities and foster inclusive education when used thoughtfully, advancing justice, neurodiversity, and digital learning dialogues.
Dyslexia, Artificial Intelligence, Educational Inclusion, Literary Analysis, Neurodiversity, Historical Exclusion.
Ariba Khan, Meilin Shen and Bobby Nguyen, California State Polytechnic University, USA
This paper presents a mobile application designed to improve the peer review and feedback process for student writers using a combination of community discussion tools and AI-guided support. The platform addresses challenges in receiving constructive feedback by offering structured peer engagement, instructional resources, and AI-generated suggestions through a guided prompt system. The app was developed using Flutter and Firebase, with components for user-authored drafts, a public feedback forum, and an AI Assistant [11]. Key challenges included ensuring feedback quality and controlling AI behavior, both of which were addressed through surveys, structured inputs, and testing. Experiments evaluated AI adherence to its role and system response times, confirming that prompt structure improves behavior and that system performance is impacted by network conditions. Compared to other scholarly methods, this solution emphasizes guided autonomy, trust calibration, and user-centered design [12]. Ultimately, the application enhances student learning, feedback exchange, and writing improvement while preventing overreliance on automated editing.
Writing, Editing, Narrative Editing, Discussion, Feedback.
Patrice Yemmene1, Prosper Djiaffeua2, Diffouo2, 1Department of Computer Science, University of Wisconsin Milwaukee, USA, 2University of Yaounde I, Cameroon
In this paper, we present a compelling argument for enhancing NLP capabilities for African under-resourced languages, particularly those spoken in Cameroon, using the Ngiemboon language as a focal point for developing innovative tagging solutions. We lay the groundwork for creating a part-of-speech (POS) tag set for Ngiemboon, focusing on a descriptive study of its parts of speech. We establish NLP as an interdisciplinary field that automates language understanding and generation, highlighting applications such as machine translation and chatbots, and we emphasize the role of POS tagging as a fundamental step in NLP. We highlight linguistic description of the language as a prerequisite for developing the POS tag set. One aspect of linguistic description is the morphosyntactic analysis of the language, which is essential for understanding linguistic structures and enabling more complex language processing tasks. We underline the necessity of a well-structured tag set, which must be informed by detailed linguistic analysis.
Part of speech, Natural Language Processing (NLP), Under-resourced language.
LalithSriram Datla, Cloud Engineer
This paper explores how DevOps practices are transforming the landscape of software development and operations in healthcare and insurance sectors, with a strong focus on enhancing monitoring capabilities and optimizing Continuous Integration/Continuous Deployment (CI/CD) pipelines. In these industries, where system reliability, regulatory compliance, and scalability are critical, DevOps introduces a culture of collaboration, automation, and continuous feedback that directly addresses these demands. Traditional development models often suffer from siloed teams, manual deployment processes, and delayed feedback loops, all of which are detrimental in environments where patient data security and financial transaction integrity are non-negotiable. By integrating DevOps principles, organizations can establish real-time monitoring solutions that provide deep visibility into system performance, proactively detect anomalies, and ensure uptime—essential for meeting stringent healthcare and insurance service-level agreements. Moreover, DevOps-driven CI/CD pipelines help automate testing, validation, and deployment processes, reducing the likelihood of human error while accelerating delivery cycles. This is particularly valuable for maintaining compliance with standards such as HIPAA in healthcare and SOC 2 in insurance, as automation enhances audit readiness and ensures consistent policy enforcement. Scalability, too, is significantly improved through infrastructure-as-code and container orchestration tools that enable systems to adapt rapidly to changing user loads without compromising stability. In essence, the paper outlines a DevOps-centric roadmap that guides healthcare and insurance software teams to not only build and deploy faster but also operate more securely and reliably. Through real-world scenarios and best practices, it showcases how integrating continuous monitoring with CI/CD pipelines creates a feedback-rich, automated environment that empowers organizations to remain competitive while staying compliant and resilient in the face of evolving regulatory and operational challenges.
DevOps, Continuous Monitoring, CI/CD, Healthcare IT, Insurance Systems, Compliance Automation, Infrastructure as Code, Release Orchestration, Containerization, Microservices, Security Auditing, ITIL Processes, Regulatory Reporting, Health Data Integration, Policy Management Systems, SLA Enforcement.
Swetha Talakola, Quality Engineer III at Walmart, Inc, USA
Emphasizing the shift from conventional manual testing to thorough CI/CD automation, this article describes the evolution of a QA engineer's role in improving the dependability and efficiency of the software development lifecycle. The move to continuous integration and continuous delivery (CI/CD) has progressed from a simple novelty to a necessity, as modern development cycles depend on improved scalability and accelerated releases. The story follows the engineer's evolution from initially struggling with repeated manual test cases to progressively embracing scripting, learning test automation tools, and integrating quality checks into CI/CD pipelines. The account carefully examines issues such as tool choice, framework design, team culture changes, and building trust in automation. It shows how a change in perspective, an emphasis on continuous learning, and collaborative growth help turn testing from an obstacle into a catalyst for innovation. Among the main outcomes are improved overall product quality, faster problem resolution, faster feedback loops, and more frequent deployments. By the end of this journey, the QA engineer had embraced automation and contributed substantially to DevOps success, demonstrating that quality assurance can thrive in an agile, dynamic setting with suitable tools and approaches. This article aims to motivate other QA professionals facing similar challenges by providing concrete advice for improving CI/CD maturity.
Quality Assurance, Manual Testing, CI/CD, Automation, DevOps, Test Automation, QA Transformation, Pipeline Integration, Agile, Continuous Testing, Shift Left Testing, Test Strategy, Deployment Automation, Software Quality, QA Engineering, Code Integration, Regression Testing, Build Verification, Agile Testing, QA Best Practices, DevOps Culture, Continuous Delivery, Testing Frameworks, Scripting, Release Cycle Optimization.
Krishna Chaitanya Chaganti, Associate Director at S&P Global
As modern software development increasingly adopts cloud-native designs, efficient and automated deployment methods become ever more critical. Using cloud-native paradigms, companies are rapidly building scalable, robust, and adaptable systems; however, reaching these goals requires a fundamental transformation in how software is designed, tested, and deployed. This study investigates the value Continuous Integration and Continuous Deployment (CI/CD) pipelines offer in enabling the effective and consistent delivery of cloud-ready applications. As systems grow more complex and development cycles shorten to keep pace without compromising stability or quality, automation becomes ever more vital. CI/CD pipelines form the foundation of agile methods and DevOps practices, enabling businesses to deliver consistent, predictable outcomes, new features, upgrades, and patches rapidly. Automating code builds, testing, and deployment reduces human error, frees more resources for innovation rather than manual operational activities, and lowers deployment-related stress. The article examines the typical difficulties businesses experience when adopting CI/CD pipelines for cloud-hosted systems. Among the primary challenges are guaranteeing consistent and dependable testing, integrating multiple technologies across development and operations, and applying rigorous security measures throughout the deployment process. Resolving these problems well requires both strategic insight and technical ability. Through a comprehensive analysis of accepted methods, including pipeline-as-code approaches, shift-left testing techniques, and effective use of containerization, the paper offers pragmatic guidance for constructing strong and efficient pipelines. These approaches help systems remain flexible and adaptive to future needs and improve deployment dependability. Implementation data demonstrating significant improvements in system uptime, developer productivity, deployment speed, and failure recovery times is compiled in this report. CI/CD pipelines involve some initial complexity, but the long-term advantages in agility, scalability, and operational efficiency far outweigh the early difficulties. Particularly for businesses seeking to remain competitive in fast-paced environments, CI/CD has become a basic pillar of modern software delivery. The work then addresses developments expected to shape CI/CD's future: policy-as-code for automated security compliance, self-healing systems, and artificial-intelligence-driven pipeline optimization. These advances provide a clear basis for companies aiming to future-proof their deployment strategies, ensuring flexibility and resilience in a constantly changing technological environment.
CI/CD, DevOps, Cloud-Native, Deployment Automation, Continuous Integration, Continuous Delivery, Kubernetes, DevSecOps, GitOps, Agile Development.
Abdul Jabbar Mohammad, UKG Lead Technical Consultant at Metanoia Solutions Inc, USA
Kronos and UKG PRO are major platforms in the human capital management (HCM) sector, consistently supporting businesses with labour planning, payroll, timekeeping, and talent management. As UKG (Ultimate Kronos Group) advances its cloud-first strategy, companies are progressively replacing outdated Kronos solutions with the more integrated and scalable UKG PRO platform. This significant transformation underscores strategic workforce analytics, enhanced data integration, and the growing demand for a better user experience in HCM solutions. This paper presents key lessons, obstacles, and success factors based on a comprehensive examination of more than 100 migration sites undertaking Kronos-to-UKG PRO conversions. To reduce disruption and maximize system performance, it emphasizes the importance of appropriate data mapping, agile transformation approaches, and tight change management. Key performance measures showing clear differences between installations were payroll accuracy, time-to-deploy, employee self-service acceptance, and reporting efficiency. Combining automated tooling, phased migration plans, and stakeholder engagement helps reduce go-live issues and maximize long-term value realization. From reducing downtime to adjusting configurations for multi-location companies, the findings yield consistent recommendations for improving migration performance. The report underlines how system migration is needed to align technology with the evolving needs of employees and to increase HCM maturity. These insights provide HR managers, IT architects, and implementation partners handling the UKG transformation with practical guidance, not only on technology but also on organizational flexibility through workforce empowerment.
Kronos migration, UKG PRO, HCM systems, HRIS conversions, data integrity, deployment benchmarks, workforce management, system integration, enterprise software transitions, migration strategy, payroll accuracy, timekeeping systems, employee self-service, cloud HCM, performance metrics, implementation best practices, legacy systems, change management, digital transformation, scalable solutions, automation accelerators.
Yasodhara Varma Rangineeni, Vice President at JPMorgan Chase, USA
This work provides a comprehensive framework for embedding compliance into ML systems in regulated industries, including the legal, financial, and healthcare sectors. Ensuring compliance with strict regulatory requirements such as GDPR and HIPAA becomes increasingly important as ML technologies transform these industries. The paper outlines the main challenges companies face in balancing innovation with compliance, including data security, transparency, and model explainability. It also examines the nuances of safeguarding sensitive information, managing consent, and maintaining audit trails. Using practical techniques such as automated compliance checks, thorough monitoring, and transparent reporting, the article offers a scalable governance framework that can be readily incorporated into existing ML workflows. These practices can help companies lower the risk of regulatory infringement and increase the dependability of ML systems. The paper concludes by stressing the importance of building compliance into the fundamental architecture of ML systems from the start, ensuring that these systems can evolve while remaining aligned with regulatory requirements. The recommended measures ensure ethical and legal compliance while encouraging accountability and transparency, helping companies thrive in an increasingly regulated digital environment.
Machine Learning (ML), Compliance, Governance, Regulated Industries, Data Privacy, Ethical AI, Regulatory Frameworks, GDPR, HIPAA, ML Auditing, Risk Management.
Ali Asghar Mehdi Syed, IT Engineer at World Bank Group, USA
DevOps has become a vital practice in the ever-changing field of software development, linking operations and development to maximize system performance and speed up delivery. Modern software systems rely on fast iterations, continuous integration, and clear deployment responsibilities, and they cannot grow without this paradigm shift. Infrastructure as Code (IaC) is a basic element of this evolution because it helps teams automate and scale infrastructure management using code, allowing faster and more dependable deployments. The growing complexity and diversity of the environments in which IaC is applied make security an ever more critical concern. To safeguard sensitive data, track access control, and reduce vulnerabilities across many platforms and cloud services, IaC pipelines need a strong and proactive security strategy. With an eye towards scalable and secure pipelines used in a range of deployment environments, this paper investigates the intersection of DevOps and secure Infrastructure as Code. Our study focuses on the most effective ways to integrate security throughout the pipeline, particularly the need for a layered security architecture covering both the infrastructure and application levels. The work provides practical methods for applying secure IaC solutions, lowering risk and ensuring compliance, supported by industry-specific case studies and examples. It also discusses solutions for the challenges teams face across diverse environments. The findings show that although security automation in IaC is widely valued, it requires constant monitoring, adaptation, and improvement. Emphasizing the importance of building infrastructure that is both scalable and secure and able to accommodate fast-changing technological conditions, the final part of the paper gives a vision for the future of DevOps and Infrastructure as Code.
DevOps, Infrastructure as Code (IaC), Security, Scalable Pipelines, Heterogeneous Environments, Automation, Multi-Cloud Management, Hybrid Cloud Infrastructure, CI/CD, Policy as Code (PaC), Secrets Management, Static Code Analysis, Compliance Automation, Kubernetes Orchestration, Cloud Security Posture Management (CSPM), Immutable Infrastructure, Configuration Management, Monitoring and Observability.
Vamsi Alla and Raghuram Katakam
Telecom billing systems are the financial backbone of communication service providers, processing millions of transactions daily. These systems are susceptible to a range of anomalies such as overcharges, duplicated entries, missed charges, and unauthorized usage, which can result in substantial revenue loss and erode consumer trust. Traditional supervised learning methods require extensive labeled datasets, which are often unavailable or expensive to produce in the telecom domain due to the rarity and class imbalance of real anomalies. In this paper, we propose a novel anomaly detection framework based on Self-Supervised Learning (SSL), which eliminates the need for labeled anomalies. Our approach combines contrastive learning for latent representation and autoencoder-based reconstruction error to detect outliers. We apply our model to both synthetic and real-world telecom billing datasets, achieving superior performance compared to baseline models. Furthermore, we integrate SHAP-based explanations to ensure interpretability, which is crucial for operational deployment in billing systems. This method reduces false positives by 28% and demonstrates strong generalizability and operational readiness, offering a practical solution to anomaly detection in large-scale billing systems.
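The reconstruction-error half of such an approach can be illustrated with a small sketch: a dense autoencoder is fit on (assumed-normal) billing feature vectors, and records whose reconstruction error falls in the upper tail are flagged. This is a generic illustration on synthetic data, not the authors' SSL model, and it omits the contrastive branch and the SHAP step.

    # Generic illustration (synthetic data, not the paper's model): score billing
    # records by the reconstruction error of a small dense autoencoder and flag
    # the upper tail as anomalies. The contrastive branch and SHAP step are omitted.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, (2000, 6)).astype(np.float32)      # normal records
    odd = rng.normal(4, 1, (20, 6)).astype(np.float32)           # injected anomalies
    x_train = torch.from_numpy(normal)
    x_test = torch.from_numpy(np.vstack([normal[:200], odd]))

    model = nn.Sequential(nn.Linear(6, 3), nn.ReLU(), nn.Linear(3, 6))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_train), x_train)
        loss.backward()
        opt.step()

    with torch.no_grad():
        err = ((model(x_test) - x_test) ** 2).mean(dim=1).numpy()
    threshold = np.quantile(err[:200], 0.99)       # tail of (mostly) normal errors
    print("flagged anomalies:", int((err > threshold).sum()))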
Telecom Billing, Anomaly Detection, Self-Supervised Learning, Contrastive Learning, Autoencoders, SHAP, Explainability
Kyle He1, Garret Washburn2, 1Mount Si High School, 8651 Meadowbrook Way SE, Snoqualmie, WA 98065, 2California State Polytechnic University, Pomona, CA, 91768
The method proposed in this paper, the FitFlip mobile application, was devised to address a problem with online clothing marketplaces such as Grailed, which, while providing a useful place to buy and sell, heavily tax users both monetarily and in terms of user experience [2]. This issue is significant because such platforms can take up to 30% of the income users earn from selling clothes, despite acting only as middlemen. FitFlip addresses this problem as a community-based, zero-fee platform in which transaction terms are set strictly by the users and are not imposed by the FitFlip service in any way, resolving the major issue found in services such as Grailed or Depop. Additionally, FitFlip seeks to resolve other issues prevalent within these popular services, such as inefficient search and filtering tools and the inability to create custom transactions in which users can trade items instead of only selling. Within this paper, multiple experiments are performed to ensure the FitFlip mobile application provides a reliable and consistent user experience [1]. These included an experiment measuring response time and another assessing how consistent response times remain as the quantity of data to be loaded increases. Ultimately, the FitFlip application is a more than sufficient solution to the stated problem: it provides users with an efficient platform to connect and trade clothing items, fostering a sense of community and encouraging the reuse of clothes.
FitFlip, Trading, Community-Driven, Online Clothing Marketplaces.
Elijah E.G. Osimene and Gp Capt. (Dr) HTD Ihiabe rtd, Air Force Institute of Technology, Kaduna, Nigeria
Winglets are aerodynamic devices designed to mitigate induced drag, thereby enhancing aircraft efficiency and fuel economy [7]. This study employs Computational Fluid Dynamics (CFD) to assess the impact of varying winglet cant angles on aerodynamic performance, using the Boeing 737-800 wing as a baseline. Three winglet configurations—blended winglet, canted winglet 1, and canted winglet 2—were analysed to determine their influence on lift-to-drag (Cl/Cd) ratio, pressure distribution, and overall range performance. CFD simulations conducted in ANSYS Fluent revealed that canted winglet 1, with a 45-degree cant angle, achieved the highest aerodynamic efficiency, with a Cl/Cd ratio of 11.456, leading to a 4.5% increase in range. These findings emphasize the importance of optimizing winglet cant angles to maximize aerodynamic efficiency, thereby enhancing performance and reducing operational costs.
Winglet optimization, Cant angle, Computational Fluid Dynamics, Lift-to-drag ratio, Fuel efficiency
Bala Subramanyan, Verifoxx, London, UK
WebAssembly (WASM) [28] is a lightweight, portable binary format increasingly used for secure application execution across cloud, edge, and embedded platforms. While runtimes like WAMR [13] rely on software-based sandboxing, they remain susceptible to memory safety vulnerabilities such as buffer overflows, use-after-free errors, and speculative execution attacks. This paper presents cWAMR [24], a CHERI-based [2] WebAssembly runtime that replaces software isolation with hardware-enforced memory safety and fine-grained compartmentalization. Built atop the Capability Hardware Enhanced RISC Instructions (CHERI) architecture [1], cWAMR [24] enforces pointer provenance, bounded access, and per-module compartment isolation at the hardware level—eliminating entire classes of memory bugs. Unlike Trusted Execution Environments (TEEs) such as Intel SGX [11] or AWS Nitro [16], cWAMR [24] operates without cryptographic boundary enforcement or enclave overhead. cWAMR [24] supports both hybrid and purecap CHERI [2] modes and includes a capability-aware system interface (cWASI) [24] and secure externref [29] management for host interaction. Validation on CHERI Morello [23] confirms successful runtime integration, strict memory safety guarantees, and support for efficient Ahead-of-Time (AoT) execution. Developed as part of a competitively funded project under the UK’s Digital Security by Design (DSbD) [18] CHERI Morello programme, cWAMR [24] is the first WebAssembly runtime to natively integrate CHERI’s hardware-backed trust model—offering a scalable, secure foundation for privacy-preserving computation across untrusted environments.
WebAssembly Runtime [27] [28], Capability-Based Security, CHERI [1][2], Memory Safety, Compartmentalization [2].
Olivier Gatete, Senior Lecturer, Department of Mathematics & ICT, Texila American University, Zambia
The increasing use of machine learning in education offers new opportunities for personalized learning and predictive analytics. However, traditional centralized models pose serious privacy and data governance concerns, particularly when data is shared across institutions. This paper explores the use of Federated Learning (FL) to enable privacy-preserving student modeling across multiple educational institutions. FL allows collaborative model training without exposing raw student data, maintaining data sovereignty and compliance with privacy regulations. We present a federated framework tailored for education, evaluate its performance on real-world datasets, and address challenges including data heterogeneity, limited resources, and secure model aggregation. Results show that FL achieves performance comparable to centralized approaches while enhancing privacy and scalability. We also discuss practical considerations for implementation and propose future research directions to support ethical and effective use of FL in diverse educational contexts.
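The core aggregation step in such a federated setup is typically federated averaging (FedAvg); the sketch below shows a weighted average of per-institution model parameters in which only parameters, never raw student records, leave each institution. It is a generic illustration on synthetic data, not the paper's framework.

    # Generic FedAvg illustration: each institution trains locally and only model
    # parameters (here, NumPy arrays) are shared; raw student records never leave.
    import numpy as np

    def local_update(global_w, X, y, lr=0.1, epochs=50):
        w = global_w.copy()                       # linear model: y ~ X @ w
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)     # gradient of the squared error
            w -= lr * grad
        return w

    def fedavg(weights, sizes):
        total = sum(sizes)
        return sum(w * (n / total) for w, n in zip(weights, sizes))

    rng = np.random.default_rng(0)
    true_w = np.array([0.5, -1.0, 2.0])
    institutions = []
    for n in (120, 80, 200):                      # three institutions of different sizes
        X = rng.normal(size=(n, 3))
        institutions.append((X, X @ true_w + rng.normal(0, 0.1, n)))

    global_w = np.zeros(3)
    for _ in range(10):                           # a few communication rounds
        local_ws = [local_update(global_w, X, y) for X, y in institutions]
        global_w = fedavg(local_ws, [len(y) for _, y in institutions])
    print(global_w)                               # approaches true_w without pooling data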
Federated Learning, Student Modeling, Privacy Preservation, Educational Data Mining, Distributed Machine Learning.
Bahareh Golchin, Banafsheh Rekabdar, and Kunpeng Liu, School of Computer Science, Portland State University, Portland, Oregon, United States
Anomaly detection in time series data is important for applications in finance, healthcare, sensor networks, and industrial monitoring. Traditional methods usually struggle with limited labeled data, high false-positive rates, and difficulty generalizing to novel anomaly types. To overcome these challenges, we propose a reinforcement learning-based framework that integrates dynamic reward shaping, Variational Autoencoder (VAE), and active learning. Our method uses an adaptive reward mechanism that balances exploration and exploitation by dynamically scaling the effect of VAE-based reconstruction error and classification rewards. This approach enables the agent to detect anomalies effectively in low-label systems while maintaining high precision and recall. Our experimental results on the Yahoo A1 and Yahoo A2 benchmark datasets demonstrate that the proposed method consistently outperforms state-of-the-art unsupervised and semi-supervised approaches. These findings show that our framework is a scalable and efficient solution for real-world anomaly detection tasks.
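The abstract does not give the reward formula; as a loose illustration of the idea of dynamically scaling a reconstruction-error term against a classification reward, one might write something like the following, where the schedule, clipping, and constants are entirely assumed and are not the authors' rule.

    # Loose illustration only (constants and schedule assumed, not the paper's rule):
    # blend a VAE reconstruction-error signal with a labelled classification reward,
    # shifting weight from reconstruction to classification as training progresses.
    def shaped_reward(recon_error, predicted, label, step, total_steps):
        alpha = max(0.1, 1.0 - step / total_steps)        # decaying reconstruction weight
        recon_term = min(recon_error, 5.0) / 5.0          # clipped, scaled to [0, 1]
        if label is None:                                 # unlabelled point (not yet
            class_term = 0.0                              # queried by active learning)
        else:
            class_term = 1.0 if predicted == label else -1.0
        return alpha * recon_term + (1.0 - alpha) * class_term

    print(shaped_reward(recon_error=3.2, predicted=1, label=1, step=200, total_steps=1000))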
Time Series Anomaly Detection, Deep Reinforcement Learning, Variational Autoencoders, Active Learning, Dynamic Reward Scaling, Adaptive Rewards, Generative AI.
Paul Chukwurah1, Daniel Chukwurah2, Uthman Oyebanji3, Ala Al-Kafri4, Mohammad Alkasasbeh5, 1The Owl Therapy Centre, Cardiff, United Kingdom, 2Afrinvest West Africa, Lagos, Nigeria, 3SSE Renewables, Glasgow, United Kingdom, 4Teesside University, Middlesbrough, United Kingdom, 5East Lancashire Hospitals NHS Trust, Burnley, United Kingdom
Disc bulge occurs when the inner component of the intervertebral disc protrudes from its outer wall and progresses over time, which can lead to additional disc degeneration problems such as spinal stenosis and sciatica. Serious bulges on the disc can put pressure on the surrounding nerve roots, causing pain to travel down the back and other parts of the body. In this paper, a convolutional neural network (CNN) model has been built to diagnose composite axial MRI scans. The dataset comprises 515 patients who reported lower back pain. It includes the last 3 lumbar spine discs, D3 (L3-L4), D4 (L4-L5), and D5 (L5-S1), for each of the patients. The model achieved an accuracy, recall, precision and F1-score of 89%. Local Interpretable Model-Agnostic Explanations (LIME) was applied to explain the model’s decisions, mitigating the black-box problem of deep models. This ensures the model provides interpretable insights, making it both accurate and reliable.
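The LIME step mentioned above typically looks like the following when the lime package is used on a single axial slice; the gradient image and the brightness-based probability function below stand in for a real MRI slice and the trained CNN, and this is not the authors' code.

    # Sketch of the LIME step on a single axial slice. The synthetic gradient image
    # and the dummy probability function stand in for a real slice and the trained CNN.
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    gradient = np.tile(np.linspace(0, 255, 128, dtype=np.uint8), (128, 1))
    mri_slice = np.stack([gradient] * 3, axis=-1)            # stand-in HxWx3 slice

    def classifier_fn(images):
        # stand-in for trained_cnn.predict: per-image [p(normal), p(bulge)]
        scores = images.mean(axis=(1, 2, 3)) / 255.0
        return np.stack([1 - scores, scores], axis=1)

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(mri_slice, classifier_fn,
                                             top_labels=1, num_samples=200)
    img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                               positive_only=True, num_features=5)
    overlay = mark_boundaries(img / 255.0, mask)   # highlights regions behind the prediction
    print(overlay.shape)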
Artificial Intelligence, Convolutional Neural Networks, Disc Bulge, Interpretable Diagnosis.
Shengying Zhang1, Jonathan Sahagun2, 1Rancho Cucamonga High School, 11801 Lark Dr, Rancho Cucamonga, CA 91701, 2California State Polytechnic University, Pomona, CA, 91768
Drowning remains a serious public safety issue, especially among young and inexperienced swimmers [1]. This paper presents SaveSplash, a comprehensive real-time drowning detection system composed of three main components: a wearable device, a Raspberry Pi-based hydrophone alarm system, and a FlutterFlow mobile interface [2]. The wearable detects irregular movement and emits a 100 Hz distress signal. A hydrophone connected to a Raspberry Pi listens for this signal and triggers an alarm while logging events to Firebase [3]. The mobile interface displays alerts and educational resources for users and lifeguards. Two experiments evaluated the system’s accuracy in detecting distress signals and motion events, showing high precision and reliability under controlled conditions. Compared to existing methodologies, SaveSplash offers a more scalable, responsive, and comfortable solution without relying on vision systems or invasive biometric tracking. It presents a promising approach to reducing water-related accidents through accessible, adaptive, and real-time monitoring technology.
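Detecting the 100 Hz distress tone in the hydrophone stream can be illustrated with a short FFT-based check on each audio frame; the sample rate, frame length, and threshold below are assumed values, and this is not the deployed Raspberry Pi code.

    # Illustration of 100 Hz tone detection on a hydrophone frame via the FFT
    # (assumed sample rate, frame length, and threshold; not the deployed code).
    import numpy as np

    FS = 8000            # assumed sample rate (Hz)
    FRAME = 4096         # samples per analysis frame
    TARGET_HZ = 100.0

    def tone_present(frame, target=TARGET_HZ, threshold=10.0):
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
        target_bin = np.argmin(np.abs(freqs - target))
        background = np.median(spectrum) + 1e-12
        return spectrum[target_bin] / background > threshold   # strong, narrow peak

    t = np.arange(FRAME) / FS
    distress = 0.5 * np.sin(2 * np.pi * TARGET_HZ * t) + 0.1 * np.random.randn(FRAME)
    quiet = 0.1 * np.random.randn(FRAME)
    print(tone_present(distress), tone_present(quiet))   # expected: True False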
Drowning Detection, Wearable Technology, Machine Learning, Real-Time Monitoring.
Copyright © DaMI 2025