The NSF Office of Advanced Cyberinfrastructure (OAC) supports over 25,000 researchers and students through an integrated ecosystem of computing, data, networking, and software infrastructure. This presentation provides an overview of OAC's major initiatives and resources relevant to the research computing community, including the NSF Leadership-Class Computing Facility (LCCF) — a distributed, national-scale system entering production in FY2027 with 2,000 GPU nodes, 390 PB of flash storage, and an 800 PB archive — and the National Research Platform (NRP), which aggregates GPU, CPU, and storage resources across 84 organizations for research and education. The presentation also covers the ACCESS program for allocating advanced computing and data resources, and the National AI Research Resource (NAIRR) Pilot, which connects researchers and educators to AI computing infrastructure, datasets, and training. Highlighted NAIRR projects span battlefield medicine, agricultural resilience, Alzheimer's disease prediction, and deepfake detection. Finally, the presentation surveys upcoming funding solicitations including IDSS, CICI, CSSI, FAIROS, Future CoRe, and TechAccess: AI-Ready America, offering pathways for institutions to engage with and contribute to the national cyberinfrastructure ecosystem.
Most faculty are still early in their AI journey. Yet the future is already arriving: small, self-learning AI models that power autonomous agents capable of continuous improvement, collaboration, and generating entirely new capabilities for teaching and research. This talk paints a compelling vision of where we are headed: a shift from consuming AI to building with it, from fragile trillion-line codebases to intelligent, living systems that replace much of traditional software while enabling personalized tutoring, interactive simulations, adaptive research assistants, and new forms of discovery. We explore why small, efficient models (10 to 30 times cheaper, fast enough to run on-device, and privacy-first) represent the practical foundation for campus-scale agents, how self-learning agents will transform classrooms and labs, and what petri-dish environments for safe experimentation could look like. A live demonstration of projectEureka shows the foundation on which this future can be built, seamlessly running custom AI models and autonomous agents across on-prem and cloud environments. Join us to be inspired and motivated to prepare your institution for the new era of building AI models and autonomous agents in teaching and research.
Cloud computing has become a foundational enabler for academic institutions seeking to scale research workloads, modernize curricula, and reduce infrastructure overhead. This session explores how colleges and universities are leveraging Amazon Web Services (AWS) to address key challenges in higher education — from burst-capable HPC clusters for computational research, to cost-effective storage for growing datasets, to AI/ML platforms that bring cutting-edge tools into the classroom. We will examine practical patterns for deploying research computing environments on AWS, including integration with schedulers like Slurm via AWS Parallel Computing Service, and strategies for managing multi-account environments across departments and research groups. We will also highlight the AWS Open Data program, which provides free access to large-scale public datasets — enabling researchers and students to focus on analysis rather than data acquisition and hosting costs.
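As a point of reference for the Open Data discussion, the sketch below shows one common access pattern: reading a public AWS Open Data bucket anonymously with boto3, so no AWS credentials or hosting costs are involved. The bucket name and object keys are illustrative placeholders, not a specific dataset from the session.

```python
# Minimal sketch: anonymous (unsigned) access to an AWS Open Data bucket with boto3.
# The bucket name and keys below are illustrative placeholders only.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Open Data buckets are public, so requests can be made without credentials.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a handful of objects under a prefix, then download one for local analysis.
resp = s3.list_objects_v2(Bucket="example-open-data-bucket", Prefix="daily/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

s3.download_file("example-open-data-bucket", "daily/2024.csv", "2024.csv")
```

The same pattern works from a laptop, a classroom notebook environment, or a compute node in a cloud HPC cluster, which is what lets researchers and students skip the data acquisition and hosting step entirely.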
Python is one of the most widely used languages in scientific computing, and its adoption on HPC systems continues to grow, particularly among users with limited training in HPC and performance-oriented software development. At the same time, the ecosystem of tools for high-performance Python has expanded rapidly, making it increasingly difficult for users—and the research computing support teams advising them—to identify effective strategies to improve performance. I will discuss the landscape of high-performance Python with an emphasis on decision-making: how to choose appropriate tools and approaches based on workload characteristics and performance goals. The talk focuses on common performance pitfalls, practical tradeoffs, and guidance for selecting technologies. The content will be most useful for Python users and for research computing and data (RCD) facilitators who support Python workflows on HPC systems.
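To make "common performance pitfalls" concrete, the generic sketch below contrasts an explicit Python loop over array elements with the equivalent vectorized NumPy expression, the kind of tradeoff the talk is concerned with. It is an illustration under my own assumptions, not material taken from the presentation itself.

```python
# Minimal illustration of a common pitfall: iterating over a NumPy array
# element by element instead of using vectorized operations.
import time
import numpy as np

x = np.random.rand(10_000_000)

# Pitfall: a pure-Python loop pays interpreter overhead on every element.
start = time.perf_counter()
total_loop = 0.0
for value in x:
    total_loop += value * value
loop_time = time.perf_counter() - start

# Vectorized version: the same reduction expressed as whole-array operations.
start = time.perf_counter()
total_vec = float(np.sum(x * x))
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.2f} s, vectorized: {vec_time:.3f} s")
print(f"results agree: {np.isclose(total_loop, total_vec)}")
```

On typical hardware the vectorized form is dramatically faster, but the right fix is workload-dependent: for code that cannot be expressed as array operations, tools such as Numba, Cython, or process-level parallelism may be better fits, which is exactly the decision-making the session addresses.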
Please join Mark III and NVIDIA at RMACC for an overview, walk-through, and live demo of building an AI agent app quickly and easily using NVIDIA NIMs and Blueprints. The first segment of this session will focus on building a simple AI agent bot with a Llama 3 NIM. The second segment will feature a live build and demo of a more advanced AI agent using an NVIDIA Blueprint (powered by multiple NIMs). Lastly, the session will show how to quickly and easily fine-tune a model before it is served up for API consumption by apps via NIMs. This session will be of interest to participants of all levels and skill sets looking to deploy AI services into their existing and new cloud-native and modern apps and research as quickly and effectively as possible.
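For a feel of what "served up for API consumption via NIMs" looks like in practice, the sketch below queries a deployed NIM through its OpenAI-compatible endpoint. The host, port, and model identifier are assumptions for illustration; actual values depend on how the NIM is deployed in your environment.

```python
# Minimal sketch: querying a locally running NIM through its OpenAI-compatible API.
# The base_url and model name are assumptions; adjust to match your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used-for-local-nim",     # a local NIM typically does not require a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # assumed Llama 3 NIM model identifier
    messages=[
        {"role": "system", "content": "You are a concise research assistant."},
        {"role": "user", "content": "Summarize what an AI agent can do for HPC users."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```

Because the interface is OpenAI-compatible, existing cloud-native apps and agent frameworks can point at the NIM endpoint with little or no code change, which is what makes the deployment pattern shown in the session practical for both new and existing applications.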