Evolutionary Computation (EC) techniques, including Genetic Algorithms, Evolution Strategies, and Genetic Programming, have long demonstrated strong performance on complex, non-convex optimization problems. Yet despite their inherent parallelism, their deployment at exascale supercomputing levels remains relatively underexplored. In this paper, we present a comprehensive study of EC applications on modern supercomputing architectures, emphasizing massively parallel and hybrid implementations, and propose a scalable framework that leverages heterogeneous computing resources, integrating multi-core CPUs and GPUs to accelerate evolutionary processes. We evaluate the framework on a suite of large-scale, real-world optimization problems, including the Traveling Salesman Problem, hyperparameter optimization for deep neural networks, and neural architecture search. Experimental results demonstrate significant improvements in scalability, convergence speed, and solution quality compared to traditional implementations.
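For readers less familiar with EC, the basic evolutionary loop the abstract refers to (selection, crossover, mutation) can be sketched in a few lines. The OneMax toy problem and all parameter values below are illustrative only, not drawn from the paper's framework:

```python
import random

def one_max_ga(bits=32, pop_size=40, generations=60, seed=0):
    """Minimal generational GA maximizing the number of 1-bits (OneMax).

    A toy sketch of the evolutionary loop; every parameter here
    (population size, mutation rate, etc.) is an illustrative choice.
    """
    rng = random.Random(seed)
    fitness = sum  # fitness of a 0/1 list = count of 1-bits

    # Random initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]

    for _ in range(generations):
        def select():
            # Binary tournament selection: keep the fitter of two.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(bits):               # bit-flip mutation
                if rng.random() < 1.0 / bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt

    return max(fitness(ind) for ind in pop)

print(one_max_ga())  # best fitness found; the optimum is `bits`
```

Each generation is embarrassingly parallel across individuals, which is exactly the property the paper exploits on CPU/GPU hardware.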
Workflow management systems like Nextflow are increasingly popular among researchers building computational pipelines, but their default configurations rarely account for the realities of shared HPC clusters. Left untuned, these tools can flood schedulers with thousands of short-lived jobs, request resources they never use, or create bursty submission patterns that degrade cluster performance for all users. This presentation examines Nextflow resource management on SLURM clusters with a focus on the concerns that matter most to HPC operators: scheduler interaction, fair-share impact, resource efficiency, and cluster-wide utilization. Using a computationally demanding genome alignment pipeline as an example, we'll explore how executor configuration, process-level resource directives, and monitoring strategies affect not just individual pipeline performance but overall cluster health. We'll cover common anti-patterns we've encountered (over-provisioned memory requests, runaway task submissions, poor locality awareness) and the configuration and design patterns that prevent them. Whether you're supporting researchers who use workflow managers or evaluating how to integrate them into your site's policies and documentation, the goal is to give you practical knowledge for keeping these tools running well on shared infrastructure.
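As a taste of the configuration patterns the talk covers, a `nextflow.config` fragment along these lines caps scheduler pressure and right-sizes per-process requests. The partition name, label, and all numeric values are illustrative assumptions, not site recommendations:

```groovy
// Hypothetical nextflow.config for a SLURM cluster; tune values per site.
process {
    executor = 'slurm'
    queue    = 'batch'            // assumed partition name

    // Per-process resource directives keyed by label, so requests
    // match what each pipeline step actually uses.
    withLabel: 'align' {
        cpus   = 8
        memory = '16 GB'
        time   = '4h'
    }
}

executor {
    queueSize       = 100         // cap concurrent jobs per pipeline run
    submitRateLimit = '10/1min'   // smooth out bursty sbatch submissions
}
```

`queueSize` and `submitRateLimit` address the runaway-submission anti-pattern directly, while labeled resource directives keep memory requests close to observed usage.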
In this session we'll survey the landscape of LLMs and Generative AI and introduce the Hugging Face Transformers library for working with LLMs. The session also includes a Jupyter Notebook lab that walks attendees through using Falcon-7B for inference, memory-efficient fine-tuning, and retrieval-augmented generation (RAG).
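The inference portion of the lab follows the standard Transformers pattern sketched below. This assumes a GPU with enough memory and network access to download the model; the model ID and generation parameters are illustrative:

```python
# Sketch of Falcon-7B inference with Hugging Face Transformers.
# Requires a GPU and model download; parameters are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "tiiuae/falcon-7b-instruct"  # instruction-tuned Falcon variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a 7B model in memory
    device_map="auto",           # let accelerate place layers on available devices
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
out = generator(
    "Explain retrieval-augmented generation in one paragraph.",
    max_new_tokens=100,
    do_sample=True,
    top_k=10,
)
print(out[0]["generated_text"])
```

The lab builds on this baseline with parameter-efficient fine-tuning and a RAG setup that prepends retrieved context to the prompt.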