In this session you will participate in an immersive, experiential learning environment designed to expand how you use generative AI in HPC systems. Large language models and coding agents are changing how people write, debug, and maintain code. In high-performance computing environments, however, those gains come with added complexity: shared clusters, schedulers, quotas, modules, file systems, and policies create settings where mistakes carry operational risk.

You will work through a practical HPC-style simulation involving job submission, environment setup, shell scripting, automation, and troubleshooting. The goal is not only to see where generative AI can help with scripting but also to understand where it can make mistakes. The workshop emphasizes a pair-programming style of collaboration with AI: you will use AI to generate inputs, but you will need to review and verify them. Through controlled adversarial and defensive scenarios, you will build intuition for when to trust AI assistance, when to slow down, and how to check AI-generated shell commands, scripts, and code before running them in shared computing environments. You will be encouraged to apply modern techniques such as context engineering and skill writing to improve your outputs.

This workshop is designed for researchers, educators, students, research computing staff, and HPC administrators. You do not need to identify as an expert programmer, but you should bring curiosity and a willingness to write or modify small pieces of code and bash scripts, read output critically, and iterate with AI tools. Participants should bring a laptop and an open mind toward using generative AI as a fast but fallible collaborator.

By the end of the session, you should be able to describe the key tradeoffs of using generative AI in HPC-adjacent work, apply simple verification habits to AI-generated commands and scripts, and reuse practical patterns from this simulation in your own work.
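As one illustration of the verification habits described above, the sketch below shows a minimal pre-flight check for an AI-generated script before running it on a shared system. The filename (ai_job.sh) and the specific checks are hypothetical examples, not workshop materials: a syntax-only pass that executes nothing, followed by a scan for commands that touch shared state.

```shell
#!/usr/bin/env bash
# Hypothetical AI-generated script, stood in here so the example is
# self-contained; in practice this is the script your AI tool produced.
cat > ai_job.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
echo "hello from the job script"
EOF

# 1. Syntax-check only: bash -n parses the script without executing it.
bash -n ai_job.sh && echo "syntax OK"

# 2. Flag commands that could affect shared resources, so a human
#    reviews each hit before anything runs (pattern list is illustrative).
grep -nE 'rm |mv |chmod|scancel|sbatch' ai_job.sh || echo "no flagged commands"

# 3. Only after reading the script and the checks above, run it.
bash ai_job.sh
```

This pattern is deliberately cheap: neither step guarantees safety, but both catch the most common failure modes (broken quoting, an unnoticed destructive command) before the script touches a shared cluster.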