As "vibe coding" and natural language interfaces become standard in advanced computing, the technical "how" is increasingly decoupled from the conceptual "what" or "why." This shift creates a critical need for academic leaders who can bridge the gap between high-performance computing (HPC) capabilities and ethical, literate application. In Idaho, we are approaching this challenge by cultivating a culture of "ambassadorship" in which faculty—particularly those from the humanities—model the critical inquiry necessary to navigate this new landscape. This presentation introduces the AI Catalyst model, co-developed with BSU Nursing professor Jason Blomquist, as a framework for decentralized AI leadership. AI Catalysts at each institution serve as bridges between technical infrastructure and pedagogical practice. I will discuss how faculty with humanities backgrounds are uniquely positioned to be these ambassadors. By applying the rigor of rhetoric, analysis, and critical thinking to "vibe coding" and AI-driven research, these catalysts model for students and peers how to be the "human in the loop." The AI Catalyst model offers a scalable blueprint for workforce development. It demonstrates how to move beyond top-down mandates toward a bottom-up, faculty-led movement that demystifies advanced computing. By empowering humanities-trained faculty as AI ambassadors, institutions can ensure that the next generation of researchers—regardless of their discipline—possesses the sophisticated problem-formulation skills required in a world where English has become the hottest new programming language.
In this presentation, we provide insights into Intel and GCC compiler optimizations and tuning for Xeon 6, illustrated with workload code examples. A case study on performance tuning of Torch.Inductor OpenMP code on Xeon 6 will be presented to show the benefits of Intel's new processor.
Most research environments treat storage as a procurement decision. Agentic AI flips that: workflow characteristics decide whether object, file, and parallel file systems succeed or fail, and the classic “one big, shared filesystem” often collapses under metadata-heavy orchestration. This session presents a workflow-first approach to infrastructure design for agentic AI and workflow-based pipelines. We characterize the I/O signatures that break classic HPC defaults, including small-file fan-out, high namespace churn, checkpoint bursts, and multi-tenant contention. We then outline a tiered architecture playbook: durable object storage for curated corpora, high-metadata file storage for orchestration surfaces, high-throughput scratch for transient staging, and policy-driven movement that preserves provenance. Throughout, we use explicit decision axes, including throughput, metadata ops, latency, and durability, so teams can justify choices to leadership and align investments to measurable bottlenecks.
The HPC community thrives on collaboration, curiosity, and a forward-thinking talent pool. Student cluster competitions give the next generation of HPC professionals hands‑on supercomputing experience, real‑world problem‑solving skills, and direct exposure to the global HPC community. They blend technical skills, teamwork, and professional development in a way that few academic experiences can match. This Birds of a Feather session brings together students, faculty, and HPC professionals interested in establishing an RMACC Student Cluster Competition team. We’ll discuss what it takes to build a competitive team from the ground up, including technical skill development, mentorship opportunities, hardware and resource needs, securing vendor sponsorship, and strategies for preparing students for national HPC competitions. Participants will help shape the team’s structure, recruitment approach, and training roadmap. Whether you’re a student eager to get hands‑on experience or a professional excited to support emerging talent, this session can be the starting point for a vibrant, sustainable RMACC Student Cluster Competition team.
Running HPC workloads across on-premises clusters, cloud, and hybrid environments requires more than a scheduler. It requires a platform that treats infrastructure as flexible and workloads as portable. Fuzzball, developed by CIQ, is that platform. Built for multi-job workflow management in high-performance computing environments, Fuzzball gives researchers and engineers a single control plane to define, run, and move workloads across any compute infrastructure without rewriting pipelines. In this session, we'll cover what Fuzzball is and how it works, then move into recent developments including the Workflow Catalog and Service Endpoints. We'll close with a look at what's coming next and open the floor for attendees to share what they need from a next-generation computing platform.
Idaho National Laboratory’s High Performance Computing (HPC) resources, provided through the Nuclear Science User Facilities (NSUF), deliver over 630,000 CPU cores to a user base of over 1,700 researchers spanning national laboratories, industry, and academia. These systems support multiscale, multiphysics simulations for nuclear energy research alongside a range of other mission areas. The HPC software team underpins this capability by curating and maintaining a centralized software environment and enabling user-driven software deployment. This presentation describes a practical approach to building, organizing, and sustaining software environments for HPC systems using Lmod, Spack, and Apptainer. We outline how our HPC software team leverages Spack to manage complex dependency graphs and enable optimized builds, while integrating with Lmod to provide a flexible and discoverable module system for users. We discuss our strategy for centrally installed versus user-space software, native versus containerized installations, and user empowerment strategies such as Spack upstreams and Python virtual environments. Finally, we share some of the challenges we have faced and best practices we have cultivated to efficiently support an extensive set of software and users on our HPC systems.
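As a hedged illustration of the Spack-plus-Lmod pattern described above, a Spack environment can pin a consistent toolchain and generate an Lmod module hierarchy. The packages, versions, and settings below are hypothetical, not INL's production configuration:

```yaml
# Hypothetical spack.yaml sketch; packages and versions are
# illustrative assumptions, not INL's actual configuration.
spack:
  specs:
    - gcc@13.2.0
    - openmpi@5.0 %gcc@13.2.0
    - petsc@3.21 +mpi %gcc@13.2.0
  concretizer:
    unify: true            # one consistent dependency graph
  modules:
    default:
      enable: [lmod]       # generate Lmod modulefiles
      lmod:
        core_compilers: [gcc@13.2.0]
        hierarchy: [mpi]   # compiler/MPI hierarchy users navigate
```

For the user-space side, a researcher's own Spack instance can list the central installation under `upstreams:` so that user builds reuse centrally installed dependencies instead of recompiling them.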