In 2018, Idaho National Laboratory built DeepLynx, a data warehouse designed to organize large volumes of engineering and scientific data. As INL's projects grew more complex and AI became more central to its work, the original platform couldn't scale to meet the new requirements.

DeepLynx Nexus was built to address these limitations. Rather than replicating and storing large datasets, Nexus catalogs metadata and relationships about data that lives in existing systems. Think of it as a smart catalog that doesn't just tell you what data exists and where to find it, but also explains how different pieces relate to each other, where they came from, and what they mean. This rich context is exactly what AI agents need to do useful work.

This presentation provides a hands-on walkthrough of getting Nexus running locally and cataloging your first datasets. We'll cover installation, configuration, and creating a data schema, along with a brief overview of Apache Airflow, the workflow orchestrator behind a common ETL adapter pattern we use to bring metadata into Nexus. By the end, you'll have a practical understanding of how Nexus works and how it's being used to support lab initiatives.
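To give a flavor of the ETL adapter pattern mentioned above, here is a minimal, framework-free sketch of the extract-then-register flow that an Airflow DAG would orchestrate (in Airflow, each function would become a task, e.g. a `PythonOperator`, wired together with `extract >> load`). All names here, including the record fields and `register_in_catalog`, are illustrative assumptions, not the actual Nexus API.

```python
def extract_metadata():
    """Pretend to pull dataset descriptions from a source system.

    A real adapter would query a database, file share, or instrument
    system and describe what it finds rather than copying the data.
    """
    return [
        {"name": "sensor_readings", "source": "plant_historian", "format": "parquet"},
        {"name": "test_results", "source": "lims", "format": "csv"},
    ]


def register_in_catalog(records):
    """Pretend to register each record in the catalog.

    A real adapter would POST these descriptions to a Nexus ingest
    endpoint; here we just build an in-memory index keyed by name.
    """
    catalog = {}
    for rec in records:
        catalog[rec["name"]] = {"source": rec["source"], "format": rec["format"]}
    return catalog


if __name__ == "__main__":
    catalog = register_in_catalog(extract_metadata())
    for name, info in sorted(catalog.items()):
        print(f"{name}: from {info['source']} ({info['format']})")
```

The key design point is that only metadata (names, sources, formats, relationships) moves through the pipeline; the underlying datasets stay where they already live.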