RMACC 2026 has ended
Wednesday May 13, 2026 10:30am - 11:00am MDT
Modern HPC and AI workloads increasingly depend on data that is distributed across multiple storage systems, tiers, and locations, including on-premises clusters, institutional storage, and cloud resources. While compute performance continues to scale rapidly, data access and data movement have become primary bottlenecks, limiting utilization and complicating workflow design.
This talk examines an open, standards-based approach to unifying access to distributed data for AI and HPC workloads—without requiring proprietary clients, forklift upgrades, or disruptive data migrations. Using Hammerspace as a concrete example, the session explores how modern parallel file system standards and automated data orchestration can be used to present a single, high-performance data namespace across otherwise siloed storage systems and sites.
Attendees will learn how global namespace architectures, combined with pNFS 4.2 and policy-driven data orchestration, enable linear scaling of IOPS and throughput using existing infrastructure. The result is simplified workflow design, improved data locality, and higher sustained utilization of expensive CPU and GPU resources—particularly for AI training, inference, and data-intensive simulation workloads.
Key topics include:
  • Parallel Global File Systems with pNFS 4.2 – Leveraging open standards to provide scalable, high-performance access to distributed datasets without proprietary file systems.
  • Automated Data Orchestration – Using policy-driven data placement and movement to align data dynamically with compute, while maintaining continuous access.
  • AI and HPC Workflow Optimization – Simplifying data access across clusters and sites to reduce staging, eliminate redundant copies, and maximize compute efficiency.
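The open-standards point above can be made concrete: a pNFS-capable NFS v4.2 export mounts with the stock in-kernel Linux NFS client and ordinary mount options, with no proprietary client software. The server name and paths below are illustrative, not from the talk:

```
# /etc/fstab sketch (server name and paths are hypothetical)
# vers=4.2 negotiates NFS v4.2; a pNFS-capable server then grants the
# client layouts, so reads and writes go directly to the data servers
# in parallel rather than funneling through the metadata server.
# nconnect opens multiple TCP connections per mount for added throughput.
metadata.example.com:/projects  /mnt/projects  nfs  vers=4.2,nconnect=8  0  0
```

After `mount -a`, `nfsstat -m` shows the NFS version actually negotiated, and the kernel log indicates which pNFS layout driver (for example, the flexible file layout of RFC 8435) is in use.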
Simplot D
