With a solid understanding of Generative AI, it’s time to explore how to effectively interact with and optimize AI assistants—without writing a single line of code. In this hands-on session, participants will engage with a custom-built Generative AI assistant powered by Llama and DeepSeek-R1. Through guided, no-code exercises, attendees will experience firsthand how to refine AI outputs and maximize performance.
We’ll begin by experimenting with prompt engineering, where participants will explore different ways to structure inputs to guide AI responses. By adjusting tone, specificity, and constraints, they will see how small modifications can significantly improve relevance, coherence, and accuracy.
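Although the exercises themselves are no-code, the effect of prompt structure is easy to illustrate programmatically. The sketch below sends a vague prompt and a refined prompt (explicit audience, tone, and constraints) to the same model, assuming the assistant is served through an OpenAI-compatible endpoint such as a local Ollama or vLLM server; the base_url, api_key, and model name are placeholders, not the session's actual setup.

```python
# A minimal sketch of the prompt-engineering exercise, assuming an
# OpenAI-compatible endpoint (e.g. a local Ollama or vLLM server).
# The base_url, api_key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

def ask(prompt: str) -> str:
    """Send a single user prompt and return the assistant's reply."""
    response = client.chat.completions.create(
        model="llama3",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Vague prompt: the model must guess the audience, tone, and scope.
vague = "Tell me about solar panels."

# Refined prompt: explicit audience, tone, specificity, and constraints.
refined = (
    "You are writing for homeowners with no engineering background. "
    "In exactly three bullet points and a friendly tone, explain how "
    "rooftop solar panels reduce electricity bills. Avoid jargon."
)

print(ask(vague))
print(ask(refined))
```

Comparing the two replies side by side makes the lesson concrete: the refined prompt typically yields a response that is shorter, better scoped, and closer to the intended audience.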
Next, we will dive into response optimization techniques, demonstrating how settings like temperature, response length, and system instructions impact AI-generated content. Attendees will practice adjusting these parameters through interactive exercises, gaining intuition on how to tailor AI behavior for different use cases.
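For readers who want to see what those settings correspond to under the hood, here is a hedged sketch of the same parameters exposed by a typical chat-completions API: the system instruction sets overall behavior, temperature controls randomness, and max_tokens caps response length. The endpoint and model name are again placeholders, and the values that work best will vary by model.

```python
# A sketch of the response-optimization exercise: the same question asked
# under two different configurations. Assumes an OpenAI-compatible endpoint;
# the base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

def ask(question: str, *, system: str, temperature: float, max_tokens: int) -> str:
    response = client.chat.completions.create(
        model="deepseek-r1",  # placeholder model name
        messages=[
            {"role": "system", "content": system},  # system instruction shapes behavior
            {"role": "user", "content": question},
        ],
        temperature=temperature,  # lower = more deterministic, higher = more varied
        max_tokens=max_tokens,    # caps the length of the generated response
    )
    return response.choices[0].message.content

question = "Summarize the main benefits of electric vehicles."

# Factual, terse configuration: low temperature, short answer, strict persona.
print(ask(question,
          system="You are a concise analyst. Answer in two sentences.",
          temperature=0.2, max_tokens=120))

# Creative, expansive configuration: higher temperature, longer answer.
print(ask(question,
          system="You are an enthusiastic storyteller.",
          temperature=0.9, max_tokens=400))
```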
Finally, we’ll discuss efficiency and best practices, showing how to make interactions smoother, reduce hallucinations, and achieve more reliable outputs. By the end of this session, participants will have a hands-on understanding of how to effectively work with GenAI models, empowering them to leverage AI assistants like pros.
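One common reliability pattern of this kind is grounding: give the assistant the relevant context, instruct it to answer only from that context (and to decline otherwise), and keep the temperature low for factual queries. The sketch below illustrates the idea under the same placeholder endpoint and model assumptions as above; it is one possible pattern, not the session's prescribed method.

```python
# A sketch of a grounding pattern for reducing hallucinations: the assistant
# is told to answer only from supplied context and to decline otherwise.
# Endpoint and model name are placeholders, as in the earlier examples.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

CONTEXT = (
    "Our return policy: items may be returned within 30 days with a receipt. "
    "Refunds are issued to the original payment method."
)

response = client.chat.completions.create(
    model="llama3",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only using the context below. If the answer is not "
                "in the context, reply 'I don't know.'\n\nContext:\n" + CONTEXT
            ),
        },
        {"role": "user", "content": "Can I return an item after 45 days?"},
    ],
    temperature=0.0,  # deterministic output for factual Q&A
)
print(response.choices[0].message.content)
```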