RMACC 2026 has ended
Wednesday, May 13
 

8:00am MDT

Breakfast - Sponsored by Arctiq
Wednesday May 13, 2026 8:00am - 9:00am MDT

Jordan Ballroom

8:45am MDT

Student Poster Presentations
Wednesday May 13, 2026 8:45am - 9:00am MDT
Join us to hear 1-minute lightning talks from our student poster presenters.
Jordan Ballroom

9:00am MDT

Accelerating America’s Nuclear Future: Building Advanced Nuclear Infrastructure at Speed and Scale
Wednesday May 13, 2026 9:00am - 10:15am MDT
The United States has entered a transformative period in nuclear energy development, driven by unprecedented load growth from data centers and AI infrastructure, coupled with the most ambitious federal nuclear directives in decades. This presentation examines the convergence of technology demonstration, industrial partnership, and policy acceleration that is reshaping America’s nuclear landscape. Drawing from ongoing work at Idaho National Laboratory, we’ll explore the reactor and fuel cycle pilot projects progressing from concept to concrete deployment, including INL’s role as the nation’s premier testbed for advanced reactor technologies. The presentation will detail emerging frameworks for industry collaboration that are enabling diverse off-takers, from hyperscale data centers to military installations, to partner in deploying next-generation nuclear systems. Finally, we’ll assess progress against the aggressive timelines established by last year’s landmark nuclear Executive Orders, which call for demonstrating multiple advanced reactor designs and achieving significant new nuclear capacity by decade’s end on the path to quadrupling American nuclear capacity by 2050.
Speakers

Brian Smith

Idaho National Lab
Jordan Ballroom

10:15am MDT

DLI Workshop - Data Parallelism: How to Train Deep Learning Models on Multiple GPUs
Wednesday May 13, 2026 10:15am - 5:00pm MDT
With support from the NVIDIA Deep Learning Institute, this training workshop is offered to all RMACC attendees. Attendees will also be provided information on how to become certified instructors, with community support from the Cyberinfrastructure Community-wide Mentorship Network (CCMNet), so that they can offer this course and other DLI materials to their own communities. Course content and learning objectives follow:
 
Modern deep learning challenges leverage increasingly larger datasets and more complex models. As a result, significant computational power is required to train models effectively and efficiently. Learning to distribute data across multiple GPUs during deep learning model training makes possible an incredible wealth of new applications utilizing deep learning.
Learning Objectives
  • Understand how data parallel deep learning training is performed using multiple GPUs
  • Achieve maximum throughput when training, for the best use of multiple GPUs
  • Distribute training to multiple GPUs using PyTorch Distributed Data Parallel
  • Understand and utilize algorithmic considerations specific to multi-GPU training performance and accuracy
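The core idea in the objectives above can be sketched without any GPUs at all. The following toy, pure-Python illustration (not the DLI course material) shows the gradient-averaging "all-reduce" step that PyTorch's DistributedDataParallel performs across replicas:

```python
# Toy, pure-Python sketch of data-parallel training (no GPUs involved):
# each simulated worker computes the gradient on its own shard of the
# batch, the gradients are averaged (the all-reduce step DDP performs),
# and every replica applies the same update. Model: fit y = w * x.

def local_gradient(w, shard):
    # d/dw of mean squared error over one shard: mean(2 * (w*x - y) * x)
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    grads = [local_gradient(w, s) for s in shards]  # one gradient per worker
    avg_grad = sum(grads) / len(grads)              # all-reduce: average
    return w - lr * avg_grad                        # identical update everywhere

# Data for y = 3x, split across two simulated workers.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(500):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges to 3.0
```

Because every replica sees the same averaged gradient, all copies of the model stay in sync, which is exactly what makes throughput scale with the number of GPUs.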

Speakers
Bishop Barnwell Room

10:30am MDT

Building boisestate.ai: Lessons Learned from Developing a Cost-Effective Internal AI Platform for Higher Education
Wednesday May 13, 2026 10:30am - 11:00am MDT
As universities race to provide generative AI access to students and faculty, the cost of commercial subscriptions at institutional scale quickly becomes unsustainable. At Boise State University, we built boisestate.ai—an open source AI platform powered by AWS Bedrock. This presentation shares the practical lessons learned from developing and operating the platform, including the consumption-based paradigm shift associated with pay-per-token pricing, seven specific cost optimization strategies (from prompt caching to semantic tool filtering), and approaches for making AI institutionally aware through MCP servers and agent skills. We'll also discuss why 2026 is shaping up to be the year of the AI agent, and how progressive disclosure and codified institutional knowledge are key to building AI that doesn't just chat — but actually gets work done for your campus.
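As a toy illustration of the consumption-based, pay-per-token paradigm the talk describes, the sketch below computes per-request cost with and without prompt caching. The rates and cache-discount figure are hypothetical, not actual AWS Bedrock pricing:

```python
# Toy pay-per-token cost model (rates are hypothetical, not AWS Bedrock's).
# Input and output tokens are typically billed at different per-1K rates,
# and prompt caching discounts the cached portion of the input.

def request_cost(input_tokens, output_tokens, cached_tokens=0,
                 in_rate=0.003, out_rate=0.015, cache_discount=0.9):
    """Cost in dollars for one request, with per-1K-token rates."""
    uncached = input_tokens - cached_tokens
    cost_in = (uncached * in_rate
               + cached_tokens * in_rate * (1 - cache_discount)) / 1000
    cost_out = output_tokens * out_rate / 1000
    return cost_in + cost_out

full = request_cost(8000, 1000)                        # no caching
cached = request_cost(8000, 1000, cached_tokens=6000)  # system prompt cached
print(round(full, 4), round(cached, 4))  # caching cuts the input-side cost sharply
```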
Speakers
Simplot A

10:30am MDT

English is the Hottest New Programming Language: The AI Catalyst Model for Advanced Computing Ambassadorship
Wednesday May 13, 2026 10:30am - 11:00am MDT
As "vibe coding" and natural language interfaces become standard in advanced computing, the technical "how" is increasingly decoupled from the conceptual "what" or "why." This shift creates a critical need for academic leaders who can bridge the gap between high-performance computing (HPC) capabilities and ethical, literate application. In Idaho, we are approaching this challenge by cultivating a culture of "ambassadorship" where faculty—particularly those from the humanities—model the critical inquiry necessary to navigate this new landscape. This presentation introduces the AI Catalyst model, co-developed with BSU Nursing professor Jason Blomquist, as a framework for decentralized AI leadership. AI Catalysts at each institution serve as bridges between technical infrastructure and pedagogical practice. I will discuss how faculty with humanities backgrounds are uniquely positioned to be these ambassadors. By applying the rigor of rhetoric, analysis, and critical thinking to "vibe coding" and AI-driven research, these catalysts model for students and peers how to be the "human in the loop." The AI Catalyst model offers a scalable blueprint for workforce development. It demonstrates how to move beyond top-down mandates toward a bottom-up, faculty-led movement that demystifies advanced computing. By empowering humanities-trained faculty as AI ambassadors, institutions can ensure that the next generation of researchers—regardless of their discipline—possesses the sophisticated problem-formulation skills required in a world where English has become the hottest new programming language.
Speakers
Simplot B

10:30am MDT

Google Cloud's impact on HPC
Wednesday May 13, 2026 10:30am - 11:00am MDT
Discover how Google Cloud works with High Performance Computing by integrating specialized infrastructure and managed storage with powerful, cloud-native orchestration tools. This session highlights how these advancements accelerate scientific discovery and engineering design, providing the fastest time-to-insight for mission-critical research.
Speakers
Simplot C

10:30am MDT

NSF Office of Advanced Cyberinfrastructure: Resources, Programs, and Funding Opportunities for Research Computing
Wednesday May 13, 2026 10:30am - 11:00am MDT
The NSF Office of Advanced Cyberinfrastructure (OAC) supports over 25,000 researchers and students through an integrated ecosystem of computing, data, networking, and software infrastructure. This presentation provides an overview of OAC's major initiatives and resources relevant to the research computing community, including the NSF Leadership-Class Computing Facility (LCCF) — a distributed, national-scale system entering production in FY2027 with 2,000 GPU nodes, 390PB of flash storage, and an 800PB archive — and the National Research Platform (NRP), which aggregates GPU, CPU, and storage resources across 84 organizations for research and education. The presentation also covers the ACCESS program for allocating advanced computing and data resources, and the National AI Research Resource (NAIRR) Pilot, which connects researchers and educators to AI computing infrastructure, datasets, and training. Highlighted NAIRR projects span battlefield medicine, agricultural resilience, Alzheimer's disease prediction, and deepfake detection. Finally, the presentation surveys upcoming funding solicitations including IDSS, CICI, CSSI, FAIROS, Future CoRe, and TechAccess: AI-Ready America, offering pathways for institutions to engage with and contribute to the national cyberinfrastructure ecosystem.
Speakers
Jordan Ballroom Room C

10:30am MDT

Unifying Access to Distributed Data for AI and High-Performance Computing
Wednesday May 13, 2026 10:30am - 11:00am MDT
Modern HPC and AI workloads increasingly depend on data that is distributed across multiple storage systems, tiers, and locations, including on-premises clusters, institutional storage, and cloud resources. While compute performance continues to scale rapidly, data access and data movement have become primary bottlenecks, limiting utilization and complicating workflow design.
This talk examines an open, standards-based approach to unifying access to distributed data for AI and HPC workloads—without requiring proprietary clients, forklift upgrades, or disruptive data migrations. Using Hammerspace as a concrete example, the session explores how modern parallel file system standards and automated data orchestration can be used to present a single, high-performance data namespace across otherwise siloed storage systems and sites.
Attendees will learn how global namespace architectures, combined with pNFS 4.2 and policy-driven data orchestration, enable linear scaling of IOPS and throughput using existing infrastructure. The result is simplified workflow design, improved data locality, and higher sustained utilization of expensive CPU and GPU resources—particularly for AI training, inference, and data-intensive simulation workloads.
Key topics include:
  • Parallel Global File Systems with pNFS 4.2 – Leveraging open standards to provide scalable, high-performance access to distributed datasets without proprietary file systems.
  • Automated Data Orchestration – Using policy-driven data placement and movement to align data dynamically with compute, while maintaining continuous access.
  • AI and HPC Workflow Optimization – Simplifying data access across clusters and sites to reduce staging, eliminate redundant copies, and maximize compute efficiency.
Simplot D

11:15am MDT

Build AI, Don't Just Use It: The Vision for Creating AI Models and Autonomous Agents in Teaching and Research
Wednesday May 13, 2026 11:15am - 11:45am MDT
Most faculty are still early in their AI journey. Yet the future is already arriving: small, self-learning AI models that power autonomous agents capable of continuous improvement, collaboration, and generating entirely new capabilities for teaching and research.
This talk paints the compelling vision of where we are headed: a shift from consuming AI to building with it, from fragile trillion-line codebases to intelligent, living systems that replace much of traditional software while enabling personalized tutoring, interactive simulations, adaptive research assistants, and new forms of discovery.
We explore why small, efficient models (10 to 30 times cheaper, on-device fast, privacy-first) represent the practical foundation for campus-scale agents, how self-learning agents will transform classrooms and labs, and what petri-dish environments for safe experimentation could look like.
A live demonstration of projectEureka shows the foundation where this future can be built, seamlessly running custom AI models and autonomous agents across on-prem and cloud environments.
Join us to be inspired and motivated to prepare your institution for the new era of building AI models and autonomous agents in teaching and research.

Speakers
Jordan Ballroom Room C

11:15am MDT

Campus generative AI services running on HPC
Wednesday May 13, 2026 11:15am - 11:45am MDT
Follow Montana State University’s journey to implement a flexible and open-source generative AI suite, backed by our Tempest HPC system. Learn how we were able to leverage existing infrastructure and open-source platforms to provide powerful tools to all our faculty, staff, and students with no direct cost or token restrictions. Takeaways and how the local service coheres with a broader AI portfolio will be discussed.
Simplot A

11:15am MDT

HPC Technology in a Turbulent Market
Wednesday May 13, 2026 11:15am - 11:45am MDT
Purchasing equipment has become very challenging, from prices that change almost daily to rapidly growing power consumption. We take a quick walk through the current state of the market with input from our peers and vendors, and we look at market survey results from two leading companies in this field.
Speakers
Simplot D

11:15am MDT

Intel and GCC Compilers: Unleash the Power of Xeon 6
Wednesday May 13, 2026 11:15am - 11:45am MDT
In this presentation, we provide insights into Intel and GCC compiler optimizations and tunings for Xeon 6, with workload code examples. A case study on performance tuning of Torch.Inductor OpenMP code on Xeon 6 will be presented to show the benefits of Intel's new processor.
Speakers
Simplot B

11:15am MDT

UV Package Manager--Get Your Sunscreen!
Wednesday May 13, 2026 11:15am - 11:45am MDT
Python dependency management getting you burned? Tired of slow pip installs and juggling virtual environments on HPC systems? This session introduces UV, a lightning-fast Python package manager designed for speed, simplicity, and reproducibility. 
We’ll explore UV’s core features, including uv pip, uv venv, and uv tool, and demonstrate how they streamline package installation, environment creation, and tool management. We’ll also compare UV with traditional solutions such as pip and conda, highlighting where it excels in performance and usability, particularly in high-performance computing (HPC) environments.

Speakers
Simplot C

11:45am MDT

Lunch - Sponsored by AMD
Wednesday May 13, 2026 11:45am - 1:00pm MDT
Please join us for lunch in the Jordan Ballroom including a short presentation by our Diamond Sponsor AMD.

HPC Meets AI at Scale: AMD’s Open Ecosystem for Converged Workloads
Presenter: Kenneth Chiu
Jordan Ballroom

1:00pm MDT

Visit the Stein Luminary
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
The Keith and Catherine Stein Luminary is an all-digital museum space, producing a range of immersive, interactive and sensory experiences. Combining touch-activated screens and immersive projection, we deliver cutting-edge content focused on visual and performing arts and cultural exhibitions for the Boise State community.
Atrium of the Visual Arts Building

1:00pm MDT

Accelerating Research and Learning with AWS Cloud in Higher Education
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
Cloud computing has become a foundational enabler for academic institutions seeking to scale research workloads, modernize curricula, and reduce infrastructure overhead. This session explores how colleges and universities are leveraging Amazon Web Services (AWS) to address key challenges in higher education — from burst-capable HPC clusters for computational research, to cost-effective storage for growing datasets, to AI/ML platforms that bring cutting-edge tools into the classroom.
We will examine practical patterns for deploying research computing environments on AWS, including integration with schedulers like Slurm via AWS Parallel Computing Service, and strategies for managing multi-account environments across departments and research groups. We will also highlight the AWS Open Data program, which provides free access to large-scale public datasets — enabling researchers and students to focus on analysis rather than data acquisition and hosting costs.

Speakers
Jordan Ballroom Room C

1:00pm MDT

Agentic AI for Advanced Research; Data Storage; Data Management
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
Most research environments treat storage as a procurement decision. Agentic AI flips that: workflow and storage characteristics decide whether object, file, and parallel file systems succeed or fail, and “one big, shared filesystem” often collapses under metadata-heavy orchestration.
This session presents a workflow-first approach to infrastructure design for agentic AI and workflow-based pipelines. We characterize the I/O signatures that break classic HPC defaults, including small-file fan-out, high namespace churn, checkpoint bursts, and multi-tenant contention. We then outline a tiered architecture playbook: durable object for curated corpora, high-metadata file for orchestration surfaces, high-throughput scratch for transient staging, and policy-driven movement that preserves provenance. Throughout, we use explicit decision axes, including throughput, metadata ops, latency, and durability, so teams can justify choices to leadership and align investments to measurable bottlenecks.
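As a toy illustration of the decision-axes idea, the sketch below maps a workload's I/O signature onto the tiers named above. The thresholds are invented for this sketch, not taken from the session:

```python
# Toy decision helper mapping a workload's I/O signature to a storage
# tier along the axes named above (metadata ops, throughput, durability).
# All thresholds are illustrative, not prescriptive.

def choose_tier(metadata_ops_per_s, throughput_gbs, durable):
    if durable and metadata_ops_per_s < 1_000:
        return "object store (curated corpora)"
    if metadata_ops_per_s >= 100_000:
        return "high-metadata file system (orchestration surfaces)"
    if throughput_gbs >= 10:
        return "high-throughput scratch (transient staging)"
    return "general-purpose file system"

# A curated corpus vs. a chatty agent-orchestration surface:
print(choose_tier(metadata_ops_per_s=500, throughput_gbs=1, durable=True))
print(choose_tier(metadata_ops_per_s=250_000, throughput_gbs=2, durable=False))
```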

Speakers
Simplot B

1:00pm MDT

Building Sovereign AI Factories: A Blueprint for State-Level Economic Growth
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
AI is creating clear winners and losers across economies and regions. Sovereign AI Factories offer a powerful economic fulcrum for states seeking to attract talent, investment, and sustainable growth. By pooling resources at a state level, Sovereign AI Factories unite universities, K–12 systems, corporations, research institutes, and economic development agencies around a shared AI infrastructure that no single organization could afford on its own.
This presentation will outline the three pillars of a successful Sovereign AI Factory:
Defining a Sovereign AI Factory
  • Setting goals
  • Building the coalition
  • Promoting the benefits
Defining the Hardware Architecture
  • Scaling compute, storage, and network resources
  • Addressing the need for direct liquid cooling
  • Key data center considerations
Defining the User Experience
  • Delivering a self-service cloud experience
  • Ensuring user and resource security
  • Supporting AI and HPC workloads

Speakers
Simplot C

1:00pm MDT

Compute Anywhere with Function-as-a-Service with Globus Compute
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
Growing data volumes, new computing paradigms, and increasing hardware heterogeneity are driving the need to execute code on diverse distributed computing resources, many of which are outside the bounds of the researcher's institution. This need may be driven by (a) computing closer to data acquisition sources, (b) exploiting specialized computing resources such as hardware accelerators, (c) providing real-time processing of data, (d) reducing energy consumption (e.g., by matching workload with hardware), and (e) scaling simulations beyond the limits of a single computer.

Globus Compute addresses these needs by delivering a hybrid cloud platform implementing the Function-as-a-Service (FaaS) paradigm. Researchers first register their desired function with a cloud-hosted service; they can then request invocation of that function with arbitrary input arguments to be executed on remote cyberinfrastructure. Globus Compute manages the reliable and secure execution of the function, provisioning resources, staging function code and inputs, managing safe and secure execution (optionally using containers), monitoring execution, and asynchronously returning results to users via the cloud platform.

This tutorial will describe use cases for FaaS in science and demonstrate how Globus Compute can provide a common interface and approach for portable execution across different systems. Attendees will experiment with Globus Compute on virtual machines and learn how to deploy Globus Compute on their HPC cluster or other advanced computing system.
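The FaaS pattern described above can be sketched as follows. The endpoint UUID is a placeholder, and the remote portion is commented out because it requires the globus-compute-sdk package and a configured endpoint:

```python
# Sketch of the Globus Compute FaaS pattern: write an ordinary Python
# function, check it locally, then submit it for execution on a remote
# endpoint. The endpoint UUID below is a placeholder.

def count_lines(text):
    """A small, portable function suitable for remote execution."""
    return len(text.splitlines())

sample = "line one\nline two\nline three"
print(count_lines(sample))  # 3 (local sanity check before going remote)

# Remote execution (requires globus-compute-sdk and a configured endpoint):
# from globus_compute_sdk import Executor
# with Executor(endpoint_id="<your-endpoint-uuid>") as ex:
#     future = ex.submit(count_lines, sample)   # runs on the remote system
#     print(future.result())                    # results return asynchronously
```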
Speakers
Simplot A

1:00pm MDT

Workshop on using Generative AI in an HPC environment
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
In this session you will participate in an immersive, experiential learning environment designed to expand how you use generative AI in HPC systems. Large language models and coding agents are changing how people write, debug, and maintain code. In high-performance computing environments, however, that efficiency comes with added complexity: shared clusters, schedulers, quotas, modules, file systems, and policies where mistakes can create operational risk.
You will work through a practical HPC-style simulation involving job submission, environment setup, shell scripting, automation, and troubleshooting. The goal is not only to see where generative AI can help with scripting via skills but also to understand where it can make mistakes.
This workshop will emphasize a pair-programming style of collaboration with AI: you will use AI to generate inputs but will need to review and verify. Through controlled adversarial and defensive scenarios, you will build intuition for when to trust AI assistance, when to slow down, and how to check AI-generated shell commands, scripts, and code before running them in shared computing environments. You will be encouraged to use modern skills of context engineering and skill writing to improve your outputs.
This workshop is designed for researchers, educators, students, research computing staff, and HPC administrators. You do not need to identify as an expert programmer, but should bring curiosity and a willingness to write or modify small pieces of code and bash scripts, read output critically, and iterate with AI tools.
Participants should bring a laptop and an open mind toward using generative AI as a fast but fallible collaborator. By the end of the session, you should be able to describe key tradeoffs of using generative AI in HPC-adjacent work, apply simple verification habits to AI-generated commands and scripts, and reuse practical patterns from this simulation in your own work.
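As a toy example of the verification habit the workshop teaches, the sketch below scans an AI-generated shell command for obviously dangerous patterns before it is run. The pattern list is illustrative, not a complete safety check:

```python
# Toy pre-flight check for AI-generated shell commands: flag crude,
# obviously risky patterns. This does not replace a human review of
# what the command will actually do in a shared environment.
import re

RISKY_PATTERNS = [
    (r"\brm\s+-rf\s+/(\s|$)", "recursive delete at the filesystem root"),
    (r">\s*/dev/sd[a-z]", "raw write to a block device"),
    (r"\bchmod\s+-R\s+777\b", "world-writable permissions"),
]

def flag_risks(command):
    """Return the reasons a command looks risky (empty list if none match)."""
    return [reason for pattern, reason in RISKY_PATTERNS
            if re.search(pattern, command)]

print(flag_risks("chmod -R 777 /scratch/shared"))  # ['world-writable permissions']
print(flag_risks("sbatch train.sh"))               # []
```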

Speakers
Simplot D

2:40pm MDT

Harnessing AI: Transforming High-Performance Computing for Next-Generation Innovation
Wednesday May 13, 2026 2:40pm - 3:40pm MDT
This panel will explore the role of artificial intelligence in high-performance computing (HPC) environments, highlighting innovative applications that enhance computational efficiency and data analysis. Attendees will gain insights into future trends and collaborative strategies for leveraging AI.
Simplot A

2:40pm MDT

Building the Next Generation of HPC Talent: Forming an RMACC Student Cluster Competition Team
Wednesday May 13, 2026 2:40pm - 3:40pm MDT
The HPC community thrives on collaboration, curiosity, and fostering a forward-thinking talent pool. Student cluster competitions give the next generation of HPC professionals hands‑on supercomputing experience, real‑world problem‑solving skills, and direct exposure to the global HPC community. They blend technical skills, teamwork, and professional development in a way that few academic experiences can match. This Birds of a Feather session brings together students, faculty, and HPC professionals interested in establishing an RMACC Student Cluster Competition team. We’ll discuss what it takes to build a competitive team from the ground up, including technical skill development, mentorship opportunities, hardware and resource needs, securing vendor sponsorship, and strategies for preparing students for national HPC competitions. Participants will help shape the team’s structure, recruitment approach, and training roadmap. Whether you’re a student eager to get hands‑on experience or a professional excited to support emerging talent, this session can be the starting point for a vibrant, sustainable RMACC Student Cluster Competition team.
Speakers
Simplot B

2:40pm MDT

LoRA, RAG, RL, Agentic AI - Making sense of the different acronyms to improve LLMs and fix hallucinations
Wednesday May 13, 2026 2:40pm - 3:40pm MDT
New models from leading AI organizations come out seemingly every week, yet despite the constant evolution there are always hallucinations and knowledge gaps. LoRA, RAG, RL, and agentic AI are all popular methods to improve LLM performance. This presentation will give an overview of each method and discuss how "performance" can be measured in an LLM context. The methods will be compared and contrasted across several criteria, such as ease of use, resources required, and the theory behind them. Open-source models from the Hugging Face repository will primarily be used as examples, since they are common in RMACC institutions that may have restricted-access environments.
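The retrieval step at the heart of RAG can be sketched with a toy bag-of-words "embedding" standing in for a real embedding model (in practice the vectors would come from a model, e.g. one hosted on Hugging Face):

```python
# Schematic of RAG's retrieval step: embed the query and the documents,
# rank documents by cosine similarity, and hand the top hits to the LLM
# as grounding context. Bag-of-words counts stand in for real embeddings.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "slurm is the batch scheduler on the cluster",
    "lora adapts a frozen model with low rank updates",
    "the cafeteria opens at seven",
]

def retrieve(query, k=1):
    ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return ranked[:k]

print(retrieve("how does the batch scheduler work"))
```

Grounding the model's answer in the retrieved text is what lets RAG reduce hallucinations without retraining the model.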
Speakers
Simplot D

2:40pm MDT

Student Poster Open House
Wednesday May 13, 2026 2:40pm - 3:40pm MDT

Do you have questions for our student poster presenters? Come to this open session to view the posters and ask your questions of the students.
Jordan Ballroom

2:40pm MDT

System Administrator Meetup
Wednesday May 13, 2026 2:40pm - 3:40pm MDT

Meetup with members of the RMACC SysAdmin group for an informal discussion.
Speakers
Simplot C

3:50pm MDT

Accelerating Scale-Out HPC and Data-Intensive Research with a Modern Parallel File System
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
This session examines the role of modern parallel file systems in supporting scalable, data-intensive HPC environments commonly found in academic and regional research computing centers. It discusses how BeeGFS enables high-throughput, low-latency storage architectures that effectively support a wide range of workloads, including traditional simulation, modeling, data analytics, and emerging AI-driven research.
Attendees will gain insight into BeeGFS architecture and design principles, including scalable metadata services, flexible storage tiering, and integration with high-speed networks and NVMe-based storage. The session will also present real-world user case studies from research and HPC environments, highlighting practical deployment considerations, performance characteristics, and operational lessons learned.
By focusing on production deployments and real research workflows, this talk demonstrates how BeeGFS is used today to build reliable, high-performance storage platforms that scale with growing compute and data demands in academic and research-focused HPC environments.
Speakers
Simplot D

3:50pm MDT

Deploying and Operationalizing Intel Gaudi Systems with Kubernetes for AI Workloads
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
This session will cover ASU’s experience with the recent Intel Gaudi system donation, including the planning and buildout of new data center space to support the hardware. I will discuss the technical and operational challenges involved in bringing the systems online, lessons learned during deployment, the current status of the environment, and how we plan to integrate Gaudi into our broader research computing ecosystem to support AI and other high performance workloads.
Speakers
Simplot C

3:50pm MDT

Fuzzball: New Features and What’s Next for Portable Hybrid HPC Orchestration
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
Running HPC workloads across on-premises clusters, cloud, and hybrid environments requires more than a scheduler: it requires a platform that treats infrastructure as flexible and workloads as portable. Fuzzball, developed by CIQ, is that platform. Built for multi-job workflow management in high-performance computing environments, Fuzzball gives researchers and engineers a single control plane to define, run, and move workloads across any compute infrastructure without rewriting pipelines.
In this session, we'll cover what Fuzzball is and how it works, then move into recent developments including the Workflow Catalog and Service Endpoints. We'll close with a look at what's coming next and open the floor for attendees to share what they need from a next-generation computing platform.

Simplot B

3:50pm MDT

Google Cloud's impact on HPC
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
Discover how Google Cloud works with High Performance Computing by integrating specialized infrastructure and managed storage with powerful, cloud-native orchestration tools. This session highlights how these advancements accelerate scientific discovery and engineering design, providing the fastest time-to-insight for mission-critical research.
Speakers
Simplot A

3:50pm MDT

Practical Guide to Performance-Conscious Python
Wednesday May 13, 2026 3:50pm - 4:20pm MDT
Python is one of the most widely used languages in scientific computing, and its adoption on HPC systems continues to grow, particularly among users with limited training in HPC and performance-oriented software development. At the same time, the ecosystem of tools for high-performance Python has expanded rapidly, making it increasingly difficult for users—and the research computing support teams advising them—to identify effective strategies to improve performance. I will discuss the landscape of high-performance Python with an emphasis on decision-making: how to choose appropriate tools and approaches based on workload characteristics and performance goals, focusing on common performance pitfalls, practical tradeoffs, and guidance for selecting technologies. The content will be most useful for Python users and research computing and data (RCD) facilitators who support Python workflows on HPC systems.
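A classic example of the kind of pitfall this talk addresses: repeated membership tests against a list instead of a set. The sketch below is a generic illustration, not material from the session:

```python
# One of the most common Python performance pitfalls: an O(n) membership
# test against a list inside a loop versus an O(1) test against a set.
# Both produce identical answers; only the scaling differs.
import time

allowed_list = list(range(20_000))
allowed_set = set(allowed_list)
queries = range(19_000, 20_000)   # worst case: values near the end of the list

t0 = time.perf_counter()
hits_list = sum(1 for q in queries if q in allowed_list)  # O(n) per lookup
t1 = time.perf_counter()
hits_set = sum(1 for q in queries if q in allowed_set)    # O(1) per lookup
t2 = time.perf_counter()

print(hits_list == hits_set)   # identical results
print((t1 - t0) > (t2 - t1))   # the list version is dramatically slower
```

The decision-making lesson is that the right fix depends on the workload: here a data-structure change suffices, with no compiled extension or parallelism needed.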
Speakers
Jordan Ballroom Room C

4:30pm MDT

Building and Launching Powerful AI Agents Quickly (Powered by NVIDIA NIMs and Blueprints)
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
Please join Mark III and NVIDIA at RMACC for an overview, walk-through, and live demo of building an AI agent app quickly and easily using NVIDIA NIMs and Blueprints. The first segment of this session will focus on building a simple AI agent bot with a Llama 3 NIM. The second segment will focus on a live build and demo of a more advanced AI agent built with an NVIDIA Blueprint (powered by multiple NIMs). Lastly, the session will show how to quickly and easily fine-tune a model before it is served up for API consumption by apps via NIMs. This session will be of interest to participants of all levels and skillsets looking to deploy AI services into their existing and new cloud-native and modern apps and research as quickly and effectively as possible.
Speakers
Jordan Ballroom Room C

4:30pm MDT

INL’s software stack buildout
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
Idaho National Laboratory’s High Performance Computing (HPC) resources, provided through the Nuclear Science User Facilities (NSUF), deliver over 630,000 CPU cores to a user base of over 1,700 researchers spanning national laboratories, industry, and academia. These systems support multiscale, multiphysics simulations for nuclear energy research alongside a range of other mission areas. The HPC software team underpins this capability by curating and maintaining a centralized software environment and enabling user-driven software deployment. This presentation describes a practical approach to building, organizing, and sustaining software environments for HPC systems using Lmod, Spack, and Apptainer. We outline how our HPC software team leverages Spack to manage complex dependency graphs and enable optimized builds, while integrating with Lmod to provide a flexible and discoverable module system for users. We discuss our strategy for centrally installed versus user-space software, native versus containerized installations, and user empowerment strategies such as Spack upstreams and Python virtual environments. We discuss some of the challenges we have faced and best practices we have cultivated to efficiently support an extensive set of software and users on our HPC systems.
Simplot B

4:30pm MDT

Linpack and other benchmarks on heterogeneous HPC clusters
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
Benchmarking is a core part of HPC operations — whether you're validating a new cluster, justifying a procurement, or hunting down a performance regression. But running benchmarks well on heterogeneous hardware introduces challenges that the documentation doesn't always prepare you for. This presentation shares our experience running the NVIDIA HPC benchmark container on A100 GPUs and submitting results to the Top500 and Green500 lists, covering the practical details of tuning HPL parameters, navigating the submission process, and the surprises we encountered along the way. We also attempted runs across mixed A100 and H100 nodes, which raised questions about how GPU generational differences affect Linpack scaling, interconnect saturation, and whether heterogeneous submissions are even meaningful. Beyond Linpack, we'll discuss our use of the OSU Micro-Benchmarks (OMB) suite as a cluster diagnostics tool, using pairwise node communication tests to identify fabric bottlenecks, misconfigured adapters, and inconsistent latency across the cluster.
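The pairwise-diagnostics idea can be sketched in a few lines: collect per-pair latencies (e.g. from `osu_latency` runs between node pairs) and flag pairs that deviate sharply from the cluster median. The node names, latency values, and threshold below are invented for illustration; real data would be parsed from OMB output.

```python
from statistics import median

# Illustrative pairwise latencies (microseconds) between node pairs;
# in practice these would come from osu_latency output.
latencies = {
    ("node01", "node02"): 1.8,
    ("node01", "node03"): 1.9,
    ("node02", "node03"): 1.7,
    ("node01", "node04"): 9.5,  # e.g. a misconfigured adapter
}

def flag_outliers(pairs: dict, factor: float = 3.0) -> list:
    """Flag node pairs whose latency exceeds `factor` times the median."""
    med = median(pairs.values())
    return [pair for pair, lat in pairs.items() if lat > factor * med]

print(flag_outliers(latencies))  # flags ("node01", "node04")
```

A median-based threshold is deliberately robust: one bad link inflates the mean but barely moves the median, so the healthy baseline stays intact.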
Simplot C

4:30pm MDT

Scalable Patent Search and Analysis Using Large Language Models with Function Calling
Wednesday May 13, 2026 4:30pm - 5:00pm MDT
This work describes a scalable system for automated patent search and analysis that integrates large language models with function calling to support data retrieval and classification. The approach combines conventional data extraction from the US Patent and Trademark Office with semantic similarity search and structured function execution to enable accurate and reproducible patent management applicable to real-world institutional data analysis challenges.
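To illustrate the function-calling mechanism the abstract refers to, here is a minimal sketch in the OpenAI tool-schema style: a declared tool, a Python handler, and a dispatcher that routes a model-emitted call to that handler. The tool name, schema, and canned patent records are all hypothetical; the actual system's functions and USPTO integration are not specified in the abstract.

```python
import json

# Hypothetical tool schema in the OpenAI function-calling style.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "search_patents",
        "description": "Keyword search over USPTO patent records",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_results": {"type": "integer"},
            },
            "required": ["query"],
        },
    },
}]

def search_patents(query: str, max_results: int = 10) -> list:
    """Stand-in for a real USPTO lookup; returns canned records."""
    corpus = [
        {"id": "US1234567", "title": "Neural network accelerator"},
        {"id": "US7654321", "title": "Centrifugal pump seal"},
    ]
    return [r for r in corpus if query.lower() in r["title"].lower()][:max_results]

def dispatch(tool_call: dict) -> list:
    """Route a model-emitted tool call to the matching Python function."""
    handlers = {"search_patents": search_patents}
    fn = handlers[tool_call["name"]]
    return fn(**json.loads(tool_call["arguments"]))

# A tool call as an LLM would emit it:
call = {"name": "search_patents", "arguments": json.dumps({"query": "neural"})}
print(dispatch(call))
```

Keeping retrieval behind a typed function boundary is what makes the results reproducible: the model chooses *which* query to run, but the structured execution path is deterministic and auditable.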
 

Simplot A

4:30pm MDT

The Next Evolution of Artificial Intelligence
Wednesday May 13, 2026 4:30pm - 5:00pm MDT

Artificial intelligence (AI) is rapidly transforming research, teaching, and operations. With AI-native platforms like Oracle Database 23ai—and innovations shaping Oracle Database 26ai—data infrastructure is evolving from passive storage to active participation in intelligent workflows. These platforms embed AI directly in the database, enabling more efficient, secure, and scalable development and governance.
At the same time, Oracle’s multi-cloud capabilities allow AI workloads to run across cloud environments while maintaining consistent performance, security, and data governance—supporting flexible, distributed innovation.
A new wave of agentic AI is emerging, with systems that act autonomously, reason, and orchestrate workflows across complex environments. As these agents increasingly access sensitive systems and data, establishing secure, accountable identities becomes critical.
This session explores agentic AI and emphasizes the need for strong identity frameworks grounded in least-privilege access, auditability, and integration with federated identity and role-based access control models to ensure responsible, scalable AI adoption.
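The least-privilege, auditable identity model the session calls for can be sketched generically: an agent identity maps to a role, the role maps to an explicit permission set, and every authorization decision is logged. The role and permission names below are invented for illustration and do not reflect any Oracle product API.

```python
# Minimal sketch of role-based, least-privilege authorization for an
# AI agent identity, with an audit trail (all names are hypothetical).
ROLE_PERMISSIONS = {
    "report-agent": {"read:grades_summary"},
    "admin-agent": {"read:grades_summary", "write:enrollment"},
}

audit_log = []

def authorize(agent_role: str, permission: str) -> bool:
    """Grant only permissions explicitly assigned to the role; log every decision."""
    allowed = permission in ROLE_PERMISSIONS.get(agent_role, set())
    audit_log.append(
        {"role": agent_role, "permission": permission, "allowed": allowed}
    )
    return allowed

print(authorize("report-agent", "read:grades_summary"))  # True
print(authorize("report-agent", "write:enrollment"))     # False
```

The key properties are that unknown roles get the empty permission set by default (deny-by-default) and that denials are recorded just like grants, which is what makes the system accountable after the fact.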
Simplot D

5:30pm MDT

Reception Sponsored by Intel
Wednesday May 13, 2026 5:30pm - 7:30pm MDT

5:30 pm - Take a picture on the iconic blue field of Albertsons Stadium and explore the Boise State football stadium
6:00-7:30 pm - Join us at the Stueckle Sky Center for appetizers and drinks, sponsored by Intel
The Stueckle Sky Center 1200 W University Dr, Boise, ID
 