Most research environments treat storage as a procurement decision. Agentic AI flips that: workload shape, not raw capacity, decides whether object, file, and parallel file systems succeed or fail, and "one big, shared filesystem" often collapses under metadata-heavy orchestration. This session presents a workflow-first approach to infrastructure design for agentic AI and workflow-based pipelines. We characterize the I/O signatures that break classic HPC defaults: small-file fan-out, high namespace churn, checkpoint bursts, and multi-tenant contention. We then outline a tiered architecture playbook: durable object storage for curated corpora, high-metadata file storage for orchestration surfaces, high-throughput scratch for transient staging, and policy-driven data movement that preserves provenance. Throughout, we apply explicit decision axes (throughput, metadata ops, latency, and durability) so teams can justify choices to leadership and align investments with measurable bottlenecks.
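To make the decision axes concrete, here is a minimal sketch of how a placement policy might map a workload profile onto the three tiers described above. The tier names, thresholds, and `WorkloadProfile` fields are illustrative assumptions, not artifacts from the session itself.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    # The four decision axes, expressed as a hypothetical profile.
    throughput_gbps: float      # sustained bandwidth the workload needs
    metadata_ops_per_s: int     # creates/stats/renames per second
    latency_sensitive: bool     # small, synchronous accesses?
    durability_required: bool   # must the data survive tier loss?

def place_tier(w: WorkloadProfile) -> str:
    """Map a workload profile to a storage tier along the four axes.

    Thresholds here are placeholders; real ones come from measured
    bottlenecks, as the session argues.
    """
    if w.durability_required and w.metadata_ops_per_s < 1_000:
        return "durable-object"        # curated corpora
    if w.metadata_ops_per_s >= 1_000 or w.latency_sensitive:
        return "high-metadata-file"    # orchestration surfaces
    if w.throughput_gbps >= 10:
        return "scratch"               # transient staging
    return "durable-object"            # safe default

# A checkpoint burst: high bandwidth, transient, few metadata ops.
print(place_tier(WorkloadProfile(40.0, 200, False, False)))  # scratch
```

The point of a rule like this is not the specific cutoffs but that each branch is justified by one measurable axis, which is what lets teams defend the resulting architecture to leadership.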