Gen AI Crash Course

Building with Generative AI:
From Basics to Advanced Patterns

March 12, 2026 | University of Ioannina, Arta Branch 

At a Glance

2

Modules

2

Hours

100+

Participants

An Intensive Crash Course on Gen AI Applications

This intensive 2-hour lecture equips university students with both foundational knowledge and advanced expertise in Generative AI development using TypeScript. Blending essential theory with hands-on coding demonstrations, the session explores core principles and state-of-the-art techniques powering today’s production AI systems. Students will gain practical skills and deep insights to confidently build and innovate with Generative AI in real-world scenarios.

 

TRAINING COHORT

CRASH COURSE

DURATION

2 HOURS

TRAINING TYPE

IN-PERSON, INSTRUCTOR-LED TRAINING

WHAT TO EXPECT

HANDS-ON FRAMEWORKS AND USE CASES

What You Need to Know

11:30–12:30 | Practical Generative AI: TypeScript Essentials, LLMs, and Cost-Efficient Coding

This session establishes the technical groundwork for building real-world AI applications with modern development practices.

We begin with TypeScript essentials for AI development, focusing on the patterns and language features that matter most when working with APIs, structured outputs, async workflows, and strongly typed AI responses. Participants will understand how to design clean, scalable AI integrations using type safety as a competitive advantage—not an afterthought.
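As a small taste of what "type safety as a competitive advantage" looks like in practice, here is a minimal sketch of validating an LLM's JSON reply before it reaches application logic. The `SentimentResult` shape and the mock reply are illustrative assumptions, not part of any specific provider's API:

```typescript
// Illustrative schema for a structured model output.
interface SentimentResult {
  label: "positive" | "negative" | "neutral";
  confidence: number;
}

// Narrow an unknown value into the typed shape instead of casting blindly.
function parseSentiment(raw: string): SentimentResult {
  const data: unknown = JSON.parse(raw);
  if (
    typeof data === "object" && data !== null &&
    "label" in data && "confidence" in data &&
    ["positive", "negative", "neutral"].includes((data as any).label) &&
    typeof (data as any).confidence === "number"
  ) {
    return data as SentimentResult;
  }
  throw new Error("Model response did not match the expected schema");
}

// Usage with a mock model reply:
const sentiment = parseSentiment('{"label":"positive","confidence":0.93}');
```

The point of the pattern: model outputs arrive as untrusted strings, so the boundary between "LLM text" and "typed application data" is made explicit and enforced in one place.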

Next, we explore how Generative AI and Large Language Models (LLMs) actually work. Instead of abstract theory, this section explains the mechanics developers need to understand: tokens, context windows, temperature, embeddings, inference, and the probabilistic nature of text generation. By the end, attendees will clearly grasp what happens between prompt and response—and why outputs behave the way they do.
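One of those mechanics, temperature, can be shown in a few lines: dividing logits by the temperature before the softmax flattens the next-token distribution when T > 1 and sharpens it when T < 1. The logit values below are made up for illustration:

```typescript
// Turn raw logits into next-token probabilities at a given temperature.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const maxL = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - maxL));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Same logits, two temperatures: lower T concentrates mass on the top token.
const atT1 = softmaxWithTemperature([2, 1, 0], 1.0);
const atT05 = softmaxWithTemperature([2, 1, 0], 0.5);
```

This is why low temperatures make outputs more deterministic and high temperatures more varied: sampling happens from a sharper or flatter distribution, not from different "knowledge".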

The session then moves into a live code demonstration, showing how to integrate an LLM into an application with real-time streaming responses. Participants will see how to:

  • Send structured prompts
  • Handle streaming tokens
  • Build responsive UI experiences
  • Manage asynchronous flows cleanly in TypeScript
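The streaming flow above can be sketched with async iteration. Here `fakeStream` stands in for a real provider's streaming API, so the whole example is self-contained and illustrative:

```typescript
// Stand-in for a provider's token stream; a real one would await network chunks.
async function* fakeStream(tokens: string[]): AsyncGenerator<string> {
  for (const tok of tokens) {
    yield tok;
  }
}

// Accumulate tokens as they arrive, e.g. to update a UI incrementally.
async function collectStream(stream: AsyncGenerator<string>): Promise<string> {
  let text = "";
  for await (const tok of stream) {
    text += tok;
    // A UI callback could render the partial `text` here.
  }
  return text;
}

// Usage: assembles the reply token by token.
collectStream(fakeStream(["Hello", ", ", "world!"])).then(console.log);
```

The same `for await` loop works unchanged whether the generator yields mock tokens or chunks from a real streaming endpoint, which is what makes the pattern easy to demo first and wire up later.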

 

Finally, we cover token management and cost estimation, a critical but often overlooked aspect of production AI systems. This includes:

  • How tokenization works
  • Estimating prompt + completion costs
  • Monitoring usage
  • Designing prompts for efficiency without sacrificing quality
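A back-of-the-envelope version of this estimation can fit in a few lines. The ~4 characters/token heuristic is a common rough rule for English text, and the prices below are illustrative placeholders, not any provider's real rates:

```typescript
// Coarse heuristic: roughly 4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

interface CostEstimate {
  promptTokens: number;
  completionTokens: number;
  usd: number;
}

// Prices are hypothetical, expressed per million tokens.
function estimateCost(
  prompt: string,
  expectedCompletionTokens: number,
  inputPricePerMTok = 1.0,
  outputPricePerMTok = 3.0,
): CostEstimate {
  const promptTokens = estimateTokens(prompt);
  const usd =
    (promptTokens / 1e6) * inputPricePerMTok +
    (expectedCompletionTokens / 1e6) * outputPricePerMTok;
  return { promptTokens, completionTokens: expectedCompletionTokens, usd };
}
```

In production you would use the provider's own tokenizer and published pricing, but even this sketch makes the key asymmetry visible: output tokens usually cost several times more than input tokens, so verbose completions dominate the bill.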

 

By the end of this one-hour foundation block, participants will understand not just how to call an LLM—but how to build production-ready AI features responsibly, efficiently, and at scale.

12:30–13:30 | Architecting AI Agents: From Advanced Prompting to Production-Ready Systems

This session moves beyond simple prompt-response patterns into structured reasoning systems and autonomous AI architectures designed for real-world deployment.

We begin with advanced prompt engineering techniques, introducing practical reasoning frameworks that significantly improve output quality and reliability:

  • Chain-of-Thought (CoT) – Encouraging step-by-step reasoning for complex problem solving
  • ReAct (Reason + Act) – Combining reasoning with tool usage for dynamic decision-making
  • Self-Ask – Breaking complex questions into structured sub-questions
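To make "prompting as system design" concrete, each framework can be expressed as a reusable template function. The exact wording below is an assumption; production prompts are iterated and tuned:

```typescript
// Chain-of-Thought: append an explicit step-by-step reasoning cue.
function chainOfThought(question: string): string {
  return `${question}\nLet's think step by step before giving the final answer.`;
}

// Self-Ask: instruct the model to decompose into sub-questions first.
function selfAsk(question: string): string {
  return [
    `Question: ${question}`,
    "Are follow-up questions needed here? If yes, list them,",
    "answer each one, then give the final answer.",
  ].join("\n");
}

// ReAct: frame the interaction as alternating Thought / Action / Observation.
function reactSystemPrompt(tools: string[]): string {
  return [
    "Answer by alternating Thought, Action, and Observation steps.",
    `Available actions: ${tools.join(", ")}.`,
    "Finish with: Final Answer: <answer>.",
  ].join("\n");
}
```

Encoding the strategy as code rather than a pasted string is what turns a one-off trick into a testable, versionable component.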

 

Rather than treating prompting as trial-and-error, this section frames it as system design, where reasoning strategies are deliberately engineered to produce predictable outcomes.

Next, we explore AI agent architecture and design patterns. Participants will learn how modern agents are structured, including:

  • Planner–executor models
  • Tool-augmented agents
  • Memory layers (short-term vs long-term)
  • Retrieval-augmented workflows
  • Multi-step reasoning loops

This provides a clear mental model for building agents that move beyond chat interfaces into goal-driven systems.
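One of these layers, short-term memory, is simple enough to sketch: a sliding window of recent messages re-sent to the model each turn. Long-term memory (e.g. a vector store) is out of scope here, and the `Message` shape is an illustrative assumption:

```typescript
interface Message {
  role: "user" | "assistant" | "system";
  content: string;
}

// Sliding-window short-term memory: keep only the most recent turns.
class ShortTermMemory {
  private messages: Message[] = [];
  constructor(private maxMessages = 10) {}

  add(msg: Message): void {
    this.messages.push(msg);
    // Drop the oldest turns once the window is full.
    if (this.messages.length > this.maxMessages) {
      this.messages = this.messages.slice(-this.maxMessages);
    }
  }

  // Snapshot of the window, ready to prepend to the next model call.
  context(): Message[] {
    return [...this.messages];
  }
}
```

Bounding the window is also a cost control: it caps how many prompt tokens each turn re-sends, tying this layer back to the token-management discussion from the first session.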

 

The session then shifts to a live code demonstration, implementing a ReAct-style agent with tool integration. Attendees will see:

  • How the agent decides when to call a tool
  • How intermediate reasoning is handled
  • How tool responses are fed back into the reasoning loop
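The loop above can be sketched end-to-end with a scripted "model" and a single calculator tool; everything here is a toy stand-in for the live demo, where a real LLM makes the next-step decision:

```typescript
// A step is either a tool call or a final answer.
type Step =
  | { kind: "action"; tool: string; input: string }
  | { kind: "final"; answer: string };

const tools: Record<string, (input: string) => string> = {
  // Demo only; never eval untrusted input in real code.
  calculator: (input) => String(eval(input)),
};

// Scripted policy standing in for the LLM's decision:
// act first, then answer once an observation exists.
function fakeModel(history: string[]): Step {
  const obs = history.find((h) => h.startsWith("Observation:"));
  if (!obs) {
    return { kind: "action", tool: "calculator", input: "12 * 7" };
  }
  return { kind: "final", answer: obs.replace("Observation: ", "") };
}

function runAgent(question: string): string {
  const history: string[] = [`Question: ${question}`];
  for (let i = 0; i < 5; i++) {          // hard cap on reasoning steps
    const step = fakeModel(history);
    if (step.kind === "final") return step.answer;
    const tool = tools[step.tool];
    if (!tool) throw new Error(`Unknown tool: ${step.tool}`);
    const result = tool(step.input);     // execute the chosen tool
    history.push(`Observation: ${result}`); // feed the result back into the loop
  }
  throw new Error("Agent exceeded step limit");
}
```

Note the two guardrails even this toy version needs: a step cap so the loop cannot run forever, and an explicit check that the requested tool actually exists.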

 

Finally, we address production considerations, focusing on what separates demos from deployable systems:

  • Ethical constraints and responsible AI usage
  • Observability and logging
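A minimal sketch of the observability point: wrap every model call with timing and usage logging so cost and latency are visible per call. The `LlmResult` shape is an assumption standing in for whatever the chosen provider returns:

```typescript
interface LlmUsage {
  promptTokens: number;
  completionTokens: number;
}

interface LlmResult {
  text: string;
  usage: LlmUsage;
}

type LogRecord = { model: string; ms: number; usage: LlmUsage };
const auditLog: LogRecord[] = [];

// Wrap any LLM call so that latency and token usage are recorded.
async function withLogging(
  model: string,
  call: () => Promise<LlmResult>,
): Promise<LlmResult> {
  const start = Date.now();
  const result = await call();
  auditLog.push({ model, ms: Date.now() - start, usage: result.usage });
  return result;
}
```

In a deployed system the in-memory array would be replaced by a metrics or tracing backend, but the wrapper pattern stays the same: one choke point through which every model call flows.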

 

By the end of this session, participants will understand how to design, implement, and deploy intelligent AI agents that are structured, controllable, and production-ready—not just impressive prototypes.