RMACC 2026 has ended
Type: Technical Sessions
Wednesday, May 13
 

10:30am MDT

Building boisestate.ai: Lessons Learned from Developing a Cost-Effective Internal AI Platform for Higher Education
Wednesday May 13, 2026 10:30am - 11:00am MDT
As universities race to provide generative AI access to students and faculty, the cost of commercial subscriptions at institutional scale quickly becomes unsustainable. At Boise State University, we built boisestate.ai—an open source AI platform powered by AWS Bedrock. This presentation shares the practical lessons learned from developing and operating the platform, including the consumption-based paradigm shift associated with pay-per-token pricing, seven specific cost optimization strategies (from prompt caching to semantic tool filtering), and approaches for making AI institutionally aware through MCP servers and agent skills. We'll also discuss why 2026 is shaping up to be the year of the AI agent, and how progressive disclosure and codified institutional knowledge are key to building AI that doesn't just chat — but actually gets work done for your campus.
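The consumption-based shift is easiest to see with arithmetic. A minimal sketch of why prompt caching is worth the effort, assuming illustrative prices (the rates and names below are hypothetical, not actual AWS Bedrock pricing):

```python
# Hypothetical per-1K-token prices for illustration only; real Bedrock
# rates vary by model and region.
PRICE_PER_1K_INPUT = 0.003
PRICE_PER_1K_CACHED = 0.0003  # cached prompt tokens are often ~10x cheaper

def request_cost(prompt_tokens: int, cached_tokens: int) -> float:
    """Estimate the cost of one request when part of the prompt hits the cache."""
    fresh = prompt_tokens - cached_tokens
    return (fresh / 1000) * PRICE_PER_1K_INPUT + (cached_tokens / 1000) * PRICE_PER_1K_CACHED

# A 5,000-token system prompt reused across 100 requests: the first request
# pays full price, the remaining 99 hit the cache.
no_cache = 100 * request_cost(5000, 0)
with_cache = request_cost(5000, 0) + 99 * request_cost(5000, 5000)
print(f"without caching: ${no_cache:.2f}  with caching: ${with_cache:.2f}")
```

Under these assumed rates, caching the reused system prompt cuts the bill by roughly 9x, which is why per-request accounting, not per-seat licensing, drives the design of such a platform.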
Speakers
Simplot A

10:30am MDT

English is the Hottest New Programming Language: The AI Catalyst Model for Advanced Computing Ambassadorship
Wednesday May 13, 2026 10:30am - 11:00am MDT
As "vibe coding" and natural language interfaces become standard in advanced computing, the technical "how" is increasingly decoupled from the conceptual "what" or "why." This shift creates a critical need for academic leaders who can bridge the gap between high-performance computing (HPC) capabilities and ethical, literate application. In Idaho, we are approaching this challenge by cultivating a culture of "ambassadorship" where faculty—particularly those from the humanities—model the critical inquiry necessary to navigate this new landscape. This presentation introduces the AI Catalyst model, co-developed with BSU Nursing professor Jason Blomquist, as a framework for decentralized AI leadership. AI Catalysts at each institution serve as bridges between technical infrastructure and pedagogical practice. I will discuss how faculty with humanities backgrounds are uniquely positioned to be these ambassadors. By applying the rigor of rhetoric, analysis, and critical thinking to "vibe coding" and AI-driven research, these catalysts model for students and peers how to be the "human in the loop." The AI Catalyst model offers a scalable blueprint for workforce development. It demonstrates how to move beyond top-down mandates toward a bottom-up, faculty-led movement that demystifies advanced computing. By empowering humanities-trained faculty as AI ambassadors, institutions can ensure that the next generation of researchers—regardless of their discipline—possesses the sophisticated problem-formulation skills required in a world where English has become the hottest new programming language.
Speakers
Simplot B

10:30am MDT

Google Cloud's impact on HPC
Wednesday May 13, 2026 10:30am - 11:00am MDT
Discover how Google Cloud works with High Performance Computing by integrating specialized infrastructure and managed storage with powerful, cloud-native orchestration tools. This session highlights how these advancements accelerate scientific discovery and engineering design, providing the fastest time-to-insight for mission-critical research.
Speakers
Simplot C

10:30am MDT

NSF Office of Advanced Cyberinfrastructure: Resources, Programs, and Funding Opportunities for Research Computing
Wednesday May 13, 2026 10:30am - 11:00am MDT
The NSF Office of Advanced Cyberinfrastructure (OAC) supports over 25,000 researchers and students through an integrated ecosystem of computing, data, networking, and software infrastructure. This presentation provides an overview of OAC's major initiatives and resources relevant to the research computing community, including the NSF Leadership-Class Computing Facility (LCCF) — a distributed, national-scale system entering production in FY2027 with 2,000 GPU nodes, 390PB of flash storage, and an 800PB archive — and the National Research Platform (NRP), which aggregates GPU, CPU, and storage resources across 84 organizations for research and education. The presentation also covers the ACCESS program for allocating advanced computing and data resources, and the National AI Research Resource (NAIRR) Pilot, which connects researchers and educators to AI computing infrastructure, datasets, and training. Highlighted NAIRR projects span battlefield medicine, agricultural resilience, Alzheimer's disease prediction, and deepfake detection. Finally, the presentation surveys upcoming funding solicitations including IDSS, CICI, CSSI, FAIROS, Future CoRe, and TechAccess: AI-Ready America, offering pathways for institutions to engage with and contribute to the national cyberinfrastructure ecosystem.
Speakers
Jordan Ballroom Room C

10:30am MDT

Unifying Access to Distributed Data for AI and High-Performance Computing
Wednesday May 13, 2026 10:30am - 11:00am MDT
Modern HPC and AI workloads increasingly depend on data that is distributed across multiple storage systems, tiers, and locations, including on-premises clusters, institutional storage, and cloud resources. While compute performance continues to scale rapidly, data access and data movement have become primary bottlenecks, limiting utilization and complicating workflow design.
This talk examines an open, standards-based approach to unifying access to distributed data for AI and HPC workloads—without requiring proprietary clients, forklift upgrades, or disruptive data migrations. Using Hammerspace as a concrete example, the session explores how modern parallel file system standards and automated data orchestration can be used to present a single, high-performance data namespace across otherwise siloed storage systems and sites.
Attendees will learn how global namespace architectures, combined with pNFS 4.2 and policy-driven data orchestration, enable linear scaling of IOPS and throughput using existing infrastructure. The result is simplified workflow design, improved data locality, and higher sustained utilization of expensive CPU and GPU resources—particularly for AI training, inference, and data-intensive simulation workloads.
Key topics include:
  • Parallel Global File Systems with pNFS 4.2 – Leveraging open standards to provide scalable, high-performance access to distributed datasets without proprietary file systems.
  • Automated Data Orchestration – Using policy-driven data placement and movement to align data dynamically with compute, while maintaining continuous access.
  • AI and HPC Workflow Optimization – Simplifying data access across clusters and sites to reduce staging, eliminate redundant copies, and maximize compute efficiency.
     

Simplot D

11:15am MDT

Build AI, Don't Just Use It: The Vision for Creating AI Models and Autonomous Agents in Teaching and Research
Wednesday May 13, 2026 11:15am - 11:45am MDT
Most faculty are still early in their AI journey. Yet the future is already arriving: small, self-learning AI models that power autonomous agents capable of continuous improvement, collaboration, and generating entirely new capabilities for teaching and research.
This talk paints the compelling vision of where we are headed: a shift from consuming AI to building with it, from fragile trillion-line codebases to intelligent, living systems that replace much of traditional software while enabling personalized tutoring, interactive simulations, adaptive research assistants, and new forms of discovery.
We explore why small, efficient models (10 to 30 times cheaper, fast on-device, privacy-first) represent the practical foundation for campus-scale agents, how self-learning agents will transform classrooms and labs, and what petri-dish environments for safe experimentation could look like.
A live demonstration of projectEureka shows the foundation where this future can be built, seamlessly running custom AI models and autonomous agents across on-prem and cloud environments.
Join us to be inspired and motivated to prepare your institution for the new era of building AI models and autonomous agents in teaching and research.

Speakers
Jordan Ballroom Room C

11:15am MDT

Campus generative AI services running on HPC
Wednesday May 13, 2026 11:15am - 11:45am MDT
Follow Montana State University’s journey to implement a flexible and open-source generative AI suite, backed by our Tempest HPC system. Learn how we were able to leverage existing infrastructure and open-source platforms to provide powerful tools to all our faculty, staff, and students with no direct cost or token restrictions. Takeaways and how the local service coheres with a broader AI portfolio will be discussed.
Simplot A

11:15am MDT

HPC Technology in a Turbulent Market
Wednesday May 13, 2026 11:15am - 11:45am MDT
Purchasing equipment has become very challenging, from prices that change almost daily to rapidly growing power consumption. We take a quick walk through the current state of the market with input from our peers and vendors, and look at market survey results from two leading companies in this field.
Speakers
Simplot D

11:15am MDT

Intel and GCC Compilers: Unleash the Power of Xeon 6
Wednesday May 13, 2026 11:15am - 11:45am MDT
In this presentation, we provide insights into Intel and GCC compiler optimizations and tunings for Xeon 6, with workload code examples. A case study on performance tuning of TorchInductor OpenMP code on Xeon 6 will be presented to show the benefits of Intel's new processor.
Speakers
Simplot B

11:15am MDT

UV Package Manager--Get Your Sunscreen!
Wednesday May 13, 2026 11:15am - 11:45am MDT
Python dependency management getting you burned? Tired of slow pip installs and juggling virtual environments on HPC systems? This session introduces UV, a lightning-fast Python package manager designed for speed, simplicity, and reproducibility. 
We’ll explore UV’s core features, including uv pip, uv venv, and uv tool, and demonstrate how they streamline package installation, environment creation, and tool management. We’ll also compare UV with traditional solutions such as pip and conda, highlighting where it excels in performance and usability, particularly in high-performance computing (HPC) environments.

Speakers
Simplot C

2:40pm MDT

Building the Next Generation of HPC Talent: Forming an RMACC Student Cluster Competition Team
Wednesday May 13, 2026 2:40pm - 3:40pm MDT
The HPC community thrives on collaboration, curiosity, and fostering a forward-thinking talent pool. Student cluster competitions give the next generation of HPC professionals hands‑on supercomputing experience, real‑world problem‑solving skills, and direct exposure to the global HPC community. They blend technical skills, teamwork, and professional development in a way that few academic experiences can match. This Birds of a Feather session brings together students, faculty, and HPC professionals interested in establishing an RMACC Student Cluster Competition team. We’ll discuss what it takes to build a competitive team from the ground up, including technical skill development, mentorship opportunities, hardware and resource needs, securing vendor sponsorship, and strategies for preparing students for national HPC competitions. Participants will help shape the team’s structure, recruitment approach, and training roadmap. Whether you’re a student eager to get hands‑on experience or a professional excited to support emerging talent, this session can be the starting point for a vibrant, sustainable RMACC Student Cluster Competition team.
Speakers
Simplot B

2:40pm MDT

LoRA, RAG, RL, Agentic AI - Making sense of the different acronyms to improve LLMs and fix hallucinations
Wednesday May 13, 2026 2:40pm - 3:40pm MDT
New models from leading AI organizations come out seemingly every week, but despite the constant evolution, there are still hallucinations and knowledge gaps. LoRA, RAG, RL, and Agentic AI are all popular methods to improve LLM performance; this presentation will give an overview of each method and discuss how "performance" can be measured in an LLM context. These methods will be compared and contrasted across several criteria, such as ease of use, resources required, and the theory behind them. Open-source models from the Hugging Face hub will primarily be used as examples, since they are common in RMACC institutions that may have restricted-access environments.
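As a toy illustration of the retrieval step that gives RAG its grounding effect (the documents and bag-of-words "embedding" below are stand-ins; production RAG uses a neural embedding model and a vector store):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG uses a neural embedding model
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Documents the model would otherwise answer about from (possibly wrong) memory
docs = [
    "Tempest is Montana State University's HPC system.",
    "LoRA fine-tunes a model by training low-rank adapter matrices.",
    "pNFS is a parallel extension of the NFS protocol.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Retrieval step of RAG: the top-k documents are prepended to the
    LLM prompt as grounding context before generation."""
    return sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)[:k]

print(retrieve("How does LoRA adapt a model"))
```

Because the answer is looked up rather than recalled from model weights, the generation step has verifiable context to cite, which is the core mechanism RAG uses against hallucination.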
Speakers
Simplot D

2:40pm MDT

Student Poster Open House
Wednesday May 13, 2026 2:40pm - 3:40pm MDT

Do you have questions for our student poster presenters? Come to this open session to view the posters and ask your questions of the students.
Jordan Ballroom

2:40pm MDT

System Administrator Meetup
Wednesday May 13, 2026 2:40pm - 3:40pm MDT

Meetup with members of the RMACC SysAdmin group for an informal discussion.
Speakers
Simplot C

3:50pm MDT

Accelerating Scale-Out HPC and Data-Intensive Research with a Modern Parallel File System
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
This session examines the role of modern parallel file systems in supporting scalable, data-intensive HPC environments commonly found in academic and regional research computing centers. It discusses how BeeGFS enables high-throughput, low-latency storage architectures that effectively support a wide range of workloads, including traditional simulation, modeling, data analytics, and emerging AI-driven research.
Attendees will gain insight into BeeGFS architecture and design principles, including scalable metadata services, flexible storage tiering, and integration with high-speed networks and NVMe-based storage. The session will also present real-world user case studies from research and HPC environments, highlighting practical deployment considerations, performance characteristics, and operational lessons learned.
By focusing on production deployments and real research workflows, this talk demonstrates how BeeGFS is used today to build reliable, high-performance storage platforms that scale with growing compute and data demands in academic and research-focused HPC environments.
Speakers
Simplot D

3:50pm MDT

Deploying and Operationalizing Intel Gaudi Systems with Kubernetes for AI Workloads
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
This session will cover ASU’s experience with the recent Intel Gaudi system donation, including the planning and buildout of new data center space to support the hardware. I will discuss the technical and operational challenges involved in bringing the systems online, lessons learned during deployment, the current status of the environment, and how we plan to integrate Gaudi into our broader research computing ecosystem to support AI and other high performance workloads.
Speakers
Simplot C

3:50pm MDT

Fuzzball: New Features and What’s Next for Portable Hybrid HPC Orchestration
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
Running HPC workloads across on-premises clusters, cloud, and hybrid environments requires more than a scheduler.  It requires a platform that treats infrastructure as flexible and workloads as portable. Fuzzball, developed by CIQ, is that platform. Built for multi-job workflow management in high-performance computing environments, Fuzzball gives researchers and engineers a single control plane to define, run, and move workloads across any compute infrastructure without rewriting pipelines.
In this session, we'll cover what Fuzzball is and how it works, then move into recent developments including the Workflow Catalog and Service Endpoints. We'll close with a look at what's coming next and open the floor for attendees to share what they need from a next-generation computing platform.

Simplot B

3:50pm MDT

Google Cloud's impact on HPC
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
Discover how Google Cloud works with High Performance Computing by integrating specialized infrastructure and managed storage with powerful, cloud-native orchestration tools. This session highlights how these advancements accelerate scientific discovery and engineering design, providing the fastest time-to-insight for mission-critical research.
Speakers
Simplot A

3:50pm MDT

Practical Guide to Performance-Conscious Python
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
Python is one of the most widely used languages in scientific computing, and its adoption on HPC systems continues to grow, particularly among users with limited training in HPC and performance-oriented software development. At the same time, the ecosystem of tools for high-performance Python has expanded rapidly, making it increasingly difficult for users—and the research computing support teams advising them—to identify effective strategies to improve performance. I will discuss the landscape of high-performance Python with an emphasis on decision-making: how to choose appropriate tools and approaches based on workload characteristics and performance goals, focusing on common performance pitfalls, practical tradeoffs, and guidance for selecting technologies. The content will be most useful for Python users and research computing and data (RCD) facilitators who support Python workflows on HPC systems.
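One instance of the kind of decision the session targets: this sketch (array size and repeat counts are arbitrary) times a pure-Python accumulation loop against the C-implemented sum() builtin, the same push-work-out-of-the-interpreter reasoning that motivates NumPy vectorization on HPC systems:

```python
import timeit

data = list(range(100_000))

def explicit_loop() -> int:
    # Each iteration pays Python-level bytecode dispatch and integer boxing
    total = 0
    for x in data:
        total += x
    return total

def builtin_sum() -> int:
    # sum() iterates in C, avoiding per-element interpreter overhead
    return sum(data)

assert explicit_loop() == builtin_sum()
slow = timeit.timeit(explicit_loop, number=50)
fast = timeit.timeit(builtin_sum, number=50)
print(f"loop: {slow:.3f}s  sum(): {fast:.3f}s  speedup: {slow / fast:.1f}x")
```

The workload here is trivially small; the point is the measurement habit. Profiling before choosing a tool (builtins, NumPy, Numba, multiprocessing) is the decision-making discipline the talk describes.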
Speakers
Jordan Ballroom Room C

4:30pm MDT

Building and Launching Powerful AI Agents Quickly (Powered by NVIDIA NIMs and Blueprints)
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
Please join Mark III and NVIDIA at RMACC for an overview, walk-through, and live demo of building an AI agent app quickly and easily using NVIDIA NIMs and Blueprints. The first segment of this session will focus on a build of a simple AI agent bot built with a Llama 3 NIM. The second segment will focus on a live build and demo of a more advanced AI agent built with an NVIDIA Blueprint (powered by multiple NIMs). Lastly, the session will show how to quickly and easily fine-tune a model before it is served up for API consumption by apps via NIMs. This session will be of interest to participants of all levels and skillsets looking to deploy AI services into their existing and new cloud-native and modern apps and research as quickly and effectively as possible.
Speakers
Jordan Ballroom Room C

4:30pm MDT

INL’s software stack buildout
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
Idaho National Laboratory’s High Performance Computing (HPC) resources, provided through the Nuclear Science User Facilities (NSUF), deliver over 630,000 CPU cores to a user base of over 1,700 researchers spanning national laboratories, industry, and academia. These systems support multiscale, multiphysics simulations for nuclear energy research alongside a range of other mission areas. The HPC software team underpins this capability by curating and maintaining a centralized software environment and enabling user-driven software deployment. This presentation describes a practical approach to building, organizing, and sustaining software environments for HPC systems using Lmod, Spack, and Apptainer. We outline how our HPC software team leverages Spack to manage complex dependency graphs and enable optimized builds, while integrating with Lmod to provide a flexible and discoverable module system for users. We discuss our strategy for centrally installed versus user-space software, native versus containerized installations, and user empowerment strategies such as Spack upstreams and Python virtual environments. Finally, we share some of the challenges we have faced and best practices we have cultivated to efficiently support an extensive set of software and users on our HPC systems.
Speakers
Simplot B

4:30pm MDT

Linpack and other benchmarks on heterogeneous HPC clusters
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
Benchmarking is a core part of HPC operations — whether you're validating a new cluster, justifying a procurement, or hunting down a performance regression. But running benchmarks well on heterogeneous hardware introduces challenges that the documentation doesn't always prepare you for. This presentation shares our experience running the NVIDIA HPC benchmark container on A100 GPUs and submitting results to the Top500 and Green500 lists, covering the practical details of tuning HPL parameters, navigating the submission process, and the surprises we encountered along the way. We also attempted runs across mixed A100 and H100 nodes, which raised questions about how GPU generational differences affect Linpack scaling, interconnect saturation, and whether heterogeneous submissions are even meaningful. Beyond Linpack, we'll discuss our use of the OSU Micro-Benchmarks (OMB) suite as a cluster diagnostics tool, using pairwise node communication tests to identify fabric bottlenecks, misconfigured adapters, and inconsistent latency across the cluster, and more.
Speakers
Simplot C

4:30pm MDT

Scalable Patent Search and Analysis Using Large Language Models with Function Calling
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
This work describes a scalable system for automated patent search and analysis that integrates large language models with function calling to support data retrieval and classification. The approach combines conventional data extraction from the US Patent and Trademark Office with semantic similarity search and structured function execution to enable accurate and reproducible patent management applicable to real-world institutional data analysis challenges.
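A minimal sketch of the function-calling pattern the abstract describes (the tool schema, patent IDs, and titles below are invented for illustration; the presented system works against real USPTO data): the model emits a structured call, the application executes it, and the JSON result is fed back so the final answer is grounded and reproducible.

```python
import json

# Tool definition in the JSON-schema style most LLM function-calling APIs
# accept; the name and fields are illustrative, not the presented system's API.
SEARCH_TOOL = {
    "name": "search_patents",
    "description": "Keyword search over patent titles.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

# Stand-in for a USPTO-derived index
PATENTS = {
    "US1234567": "Method for semantic similarity search of patent text",
    "US7654321": "Apparatus for automated document classification",
}

def search_patents(query: str) -> list[str]:
    return [pid for pid, title in PATENTS.items() if query.lower() in title.lower()]

def dispatch(tool_call_json: str) -> str:
    """Execute a model-emitted tool call and return a JSON result the
    model can cite in its final answer."""
    call = json.loads(tool_call_json)
    handler = {"search_patents": search_patents}[call["name"]]
    return json.dumps(handler(**call["arguments"]))

# What the LLM would emit after deciding the tool is needed:
print(dispatch('{"name": "search_patents", "arguments": {"query": "classification"}}'))
```

Keeping retrieval behind a typed function boundary, rather than letting the model free-associate over patent text, is what makes the classification results reproducible.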
 

Simplot A

4:30pm MDT

The Next Evolution of Artificial Intelligence
Wednesday May 13, 2026 4:30pm - 5:00pm MDT

Artificial intelligence (AI) is rapidly transforming research, teaching, and operations. With AI-native platforms like Oracle Database 23ai—and innovations shaping Oracle Database 26ai—data infrastructure is evolving from passive storage to active participation in intelligent workflows. These platforms embed AI directly in the database, enabling more efficient, secure, and scalable development and governance.
At the same time, Oracle’s multi-cloud capabilities allow AI workloads to run across cloud environments while maintaining consistent performance, security, and data governance—supporting flexible, distributed innovation.
A new wave of agentic AI is emerging, with systems that act autonomously, reason, and orchestrate workflows across complex environments. As these agents increasingly access sensitive systems and data, establishing secure, accountable identities becomes critical.
This session explores agentic AI and emphasizes the need for strong identity frameworks grounded in least-privilege access, auditability, and integration with federated identity and role-based access control models to ensure responsible, scalable AI adoption.
Simplot D
 