New models from leading AI organizations are released seemingly every week, yet despite this constant evolution, hallucinations and knowledge gaps persist. LoRA (low-rank adaptation), RAG (retrieval-augmented generation), RL (reinforcement learning), and agentic AI are all popular methods for improving LLM performance. This presentation gives an overview of each method and discusses how "performance" can be measured in an LLM context. The methods are compared and contrasted across several criteria, including ease of use, resources required, and the theory behind them. Examples primarily use open-source models from the Hugging Face Hub, since these are common at RMACC institutions, which may operate restricted-access environments.