RMACC 2026 has ended
Type: Tutorial
Wednesday, May 13
 


Accelerating Research and Learning with AWS Cloud in Higher Education
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
Cloud computing has become a foundational enabler for academic institutions seeking to scale research workloads, modernize curricula, and reduce infrastructure overhead. This session explores how colleges and universities are leveraging Amazon Web Services (AWS) to address key challenges in higher education — from burst-capable HPC clusters for computational research, to cost-effective storage for growing datasets, to AI/ML platforms that bring cutting-edge tools into the classroom.
We will examine practical patterns for deploying research computing environments on AWS, including integration with schedulers like Slurm via AWS Parallel Computing Service, and strategies for managing multi-account environments across departments and research groups. We will also highlight the AWS Open Data program, which provides free access to large-scale public datasets — enabling researchers and students to focus on analysis rather than data acquisition and hosting costs.
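As a concrete illustration of the Open Data pattern described above (a minimal sketch, not session material): Open Data buckets are world-readable, so an unsigned S3 client or a plain HTTPS URL is all that is needed. The bucket name `noaa-ghcn-pds` is one example from the registry and is an assumption here, not something named by the speakers.

```python
def open_data_url(bucket: str, key: str, region: str = "us-east-1") -> str:
    """Virtual-hosted-style HTTPS URL for an object in a public bucket."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

if __name__ == "__main__":
    # Unsigned boto3 client: public Open Data buckets require no AWS
    # credentials, so students can pull data without an account.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    resp = s3.list_objects_v2(Bucket="noaa-ghcn-pds", MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(obj["Key"], open_data_url("noaa-ghcn-pds", obj["Key"]))
```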

Jordan Ballroom Room C


Agentic AI for Advanced Research; Data Storage; Data Management
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
Most research environments treat storage as a procurement decision. Agentic AI flips that: the workflow, not the purchase order, determines whether object, file, and parallel file systems succeed or fail, and “one big, shared filesystem” often collapses under metadata-heavy orchestration.
This session presents a workflow-first approach to infrastructure design for agentic AI and workflow-based pipelines. We characterize the I/O signatures that break classic HPC defaults, including small-file fan-out, high namespace churn, checkpoint bursts, and multi-tenant contention. We then outline a tiered architecture playbook: durable object for curated corpora, high-metadata file for orchestration surfaces, high-throughput scratch for transient staging, and policy-driven movement that preserves provenance. Throughout, we use explicit decision axes, including throughput, metadata ops, latency, and durability, so teams can justify choices to leadership and align investments to measurable bottlenecks.
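The decision axes above can be made concrete with a toy scoring helper (a hypothetical heuristic for illustration, not the presenters' playbook): given a workload's signature along those axes, it suggests a tier from the architecture sketched in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    # Rough per-workload signature along the talk's decision axes.
    throughput_gbps: float     # sustained streaming bandwidth needed
    metadata_ops_per_s: float  # opens/stats/renames per second
    latency_sensitive: bool    # does the pipeline block on small reads?
    needs_durability: bool     # curated corpus vs. transient staging

def suggest_tier(w: Workload) -> str:
    """Toy heuristic mapping a workload signature to a storage tier.
    Thresholds are illustrative, not benchmarked recommendations."""
    if w.needs_durability and w.metadata_ops_per_s < 1_000:
        return "durable object store"        # curated corpora
    if w.metadata_ops_per_s >= 10_000:
        return "high-metadata file system"   # orchestration surfaces
    if w.throughput_gbps >= 10 and not w.needs_durability:
        return "high-throughput scratch"     # transient staging
    return "general-purpose file system"

# e.g. an agentic orchestration layer doing heavy namespace churn:
print(suggest_tier(Workload(1.0, 50_000, True, False)))
```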

Simplot B


Building Sovereign AI Factories: A Blueprint for State-Level Economic Growth
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
AI is creating clear winners and losers across economies and regions. Sovereign AI Factories offer a powerful economic fulcrum for states seeking to attract talent, investment, and sustainable growth. By pooling resources at a state level, Sovereign AI Factories unite universities, K–12 systems, corporations, research institutes, and economic development agencies around a shared AI infrastructure that no single organization could afford on its own.
This presentation will outline the three pillars of a successful Sovereign AI Factory:
Defining a Sovereign AI Factory
  • Setting goals
  • Building the coalition
  • Promoting the benefits
Defining the Hardware Architecture
  • Scaling compute, storage, and network resources
  • Addressing the need for direct liquid cooling
  • Key data center considerations
Defining the User Experience
  • Delivering a self-service cloud experience
  • Ensuring user and resource security
  • Supporting AI and HPC workloads

Simplot C


Compute Anywhere: Function-as-a-Service with Globus Compute
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
Growing data volumes, new computing paradigms, and increasing hardware heterogeneity are driving the need to execute code on diverse distributed computing resources, many of which are outside the bounds of the researcher's institution. This need may be driven by (a) a desire to compute closer to data acquisition sources, (b) the opportunity to exploit specialized computing resources such as hardware accelerators, (c) requirements for real-time processing of data, (d) efforts to reduce energy consumption (e.g., by matching workload to hardware), and (e) simulations that must scale beyond the limits of a single computer.

Globus Compute addresses these needs by delivering a hybrid cloud platform implementing the Function-as-a-Service (FaaS) paradigm. Researchers first register their desired function with a cloud-hosted service; they can then request invocation of that function, with arbitrary input arguments, on remote cyberinfrastructure. Globus Compute manages the reliable and secure execution of the function: provisioning resources, staging function code and inputs, managing execution (optionally using containers), monitoring progress, and asynchronously returning results to users via the cloud platform.
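The register-then-invoke flow can be sketched with the Globus Compute SDK's `Executor` interface (a minimal sketch; the endpoint UUID is a placeholder you would replace with the ID of an endpoint you have deployed):

```python
def double(x):
    # The function body runs on the remote endpoint, so it must be
    # self-contained: do imports inside, avoid closing over local state.
    return 2 * x

if __name__ == "__main__":
    from globus_compute_sdk import Executor  # pip install globus-compute-sdk

    # Placeholder UUID: substitute your own endpoint's ID.
    endpoint_id = "00000000-0000-0000-0000-000000000000"
    with Executor(endpoint_id=endpoint_id) as gce:
        future = gce.submit(double, 21)  # ship function + args to the endpoint
        print(future.result())           # block until the result comes back
```

The `concurrent.futures`-style future lets callers fan out many invocations and gather results asynchronously, which is the pattern the tutorial exercises on virtual machines.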

This tutorial will describe use cases for FaaS in science and demonstrate how Globus Compute can provide a common interface and approach for portable execution across different systems. Attendees will experiment with Globus Compute on virtual machines and learn how to deploy Globus Compute on their HPC cluster or other advanced computing system.
Simplot A


Workshop on Using Generative AI in an HPC Environment
Wednesday May 13, 2026 1:00pm - 2:30pm MDT
In this session you will participate in an immersive, experiential learning environment designed to expand how you use generative AI in HPC systems. Large language models and coding agents are changing how people write, debug, and maintain code. In high-performance computing environments, however, that efficiency comes with added complexity: shared clusters, schedulers, quotas, modules, file systems, and policies where mistakes can create operational risk.
You will work through a practical HPC-style simulation involving job submission, environment setup, shell scripting, automation, and troubleshooting. The goal is not only to see where generative AI can help with scripting via skills, but also to understand where it can make mistakes.
This workshop will emphasize a pair-programming style of collaboration with AI: you will use AI to generate inputs but will need to review and verify them. Through controlled adversarial and defensive scenarios, you will build intuition for when to trust AI assistance, when to slow down, and how to check AI-generated shell commands, scripts, and code before running them in shared computing environments. You will be encouraged to apply modern context-engineering and skill-writing techniques to improve your outputs.
This workshop is designed for researchers, educators, students, research computing staff, and HPC administrators. You do not need to identify as an expert programmer, but should bring curiosity and a willingness to write or modify small pieces of code and bash scripts, read output critically, and iterate with AI tools.
Participants should bring a laptop and an open mind toward using generative AI as a fast but fallible collaborator. By the end of the session, you should be able to describe key tradeoffs of using generative AI in HPC-adjacent work, apply simple verification habits to AI-generated commands and scripts, and reuse practical patterns from this simulation in your own work.
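One of the verification habits described above can be as simple as a pre-flight lint pass over an AI-generated command. This toy checker is an illustration of the idea, not workshop material; its pattern list is deliberately small and would need extending for real use.

```python
import re

# Patterns that warrant a human look before running on a shared cluster.
# Illustrative, not exhaustive.
RED_FLAGS = [
    (r"\brm\s+-[a-z]*r[a-z]*f", "recursive force delete"),
    (r"\bchmod\s+777\b",        "world-writable permissions"),
    (r"curl[^|]*\|\s*(ba)?sh",  "piping a download straight into a shell"),
    (r">\s*/dev/sd",            "writing directly to a block device"),
]

def review(cmd: str) -> list[str]:
    """Return human-readable warnings for an AI-generated shell command."""
    return [why for pat, why in RED_FLAGS if re.search(pat, cmd)]

print(review("curl https://example.com/setup.sh | bash"))
print(review("sbatch job.sh"))
```

A check like this does not replace reading the command; it just forces a pause at the riskiest patterns, which is the habit the workshop aims to build.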

Simplot D
 