An intensive 8-hour programme teaching engineers to move beyond basic RAG and build production-grade GraphRAG systems that combine semantic search with knowledge graph reasoning — in TypeScript.
8-Hour Intensive
Intermediate–Advanced
TypeScript / Node.js
3 Working Codebases
| RAG Limitation | Real-World Impact |
|---|---|
| NO LOGICAL REASONING | Multi-hop queries like “Who is the CEO of the company that acquired John’s startup?” fail completely. |
| WEAK NUMERICAL CONTEXT | Financial queries such as “companies with revenue growth above 20% in Q3” produce unreliable, untrustworthy results. |
| POOR TEMPORAL LOGIC | Deadline and trend queries lack the precision needed for accurate business decisions. |
| FRAGMENTED KNOWLEDGE | Answers requiring four or more documents are incomplete and cannot be synthesised reliably. |
| NO RELATIONSHIP TRAVERSAL | “Which products depend on this failing service?” returns nothing useful from a flat vector store. |
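The multi-hop failure above is exactly what edge traversal handles naturally: follow typed relationships instead of hoping similarity search surfaces the right chunks. A minimal in-memory sketch (entities and edge types are illustrative, not the course codebase):

```typescript
// Minimal in-memory knowledge graph: typed, directed edges between entities.
type Edge = { from: string; rel: string; to: string };

const edges: Edge[] = [
  { from: "John", rel: "FOUNDED", to: "AcmeAI" },          // John's startup
  { from: "MegaCorp", rel: "ACQUIRED", to: "AcmeAI" },     // the acquirer
  { from: "Dana", rel: "CEO_OF", to: "MegaCorp" },         // the acquirer's CEO
];

// One relationship hop: who sits on the `from` side of `rel` pointing at `node`?
const inbound = (node: string, rel: string): string[] =>
  edges.filter(e => e.to === node && e.rel === rel).map(e => e.from);

// "Who is the CEO of the company that acquired John's startup?"
const startups = edges
  .filter(e => e.from === "John" && e.rel === "FOUNDED")
  .map(e => e.to);
const acquirers = startups.flatMap(s => inbound(s, "ACQUIRED"));
const ceos = acquirers.flatMap(a => inbound(a, "CEO_OF"));
// ceos → ["Dana"]
```

Each `flatMap` is one hop; a flat vector store has no equivalent operation, which is why the query fails there.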
| Capability | Pure RAG | GraphRAG (Hybrid) |
|---|---|---|
| General Q&A | ✓ | ✓ |
| Multi-hop relationship queries | ✗ | ✓ |
| Numerical & temporal reasoning | ✗ | ✓ |
| Cross-document synthesis | Partial | ✓ |
| Entity relationship traversal | ✗ | ✓ |
| Production scalability | ✓ | ✓ |
Understand why RAG fails on complex queries and how knowledge graphs solve logical, numerical, and relational gaps.
Implement the full 6-step ingestion pipeline — from raw document to a queryable, mutually indexed knowledge graph in TypeScript.
Build a smart query planner that routes to the right retrieval strategy and fuses vector and graph results optimally.
Build three real-world systems — academic research, legal case law, and customer support — each powered by hybrid GraphRAG.
Optimise, harden, and monitor your GraphRAG system with resilience patterns, tuning, and production-grade observability.
Leave with a clear decision framework for when to use each approach and curated next projects to consolidate skills.
01 · Split into 512-token sentence-boundary chunks with 64-token overlap for context continuity.
02 · Generate OpenAI text-embedding-3-large vectors → store in Milvus with HNSW indexing.
03 · Run BERT NER to identify PERSON, ORG, LOCATION, PRODUCT, DATE, and MONEY entities.
04 · An LLM identifies typed relationship edges: WORKS_AT, FOUNDED, ACQUIRED, CITES, SOLVED_BY.
05 · Cypher MERGE upserts entities and relationships into Neo4j as typed nodes and edges with properties.
06 · Bidirectional MENTIONED_IN edges link graph entities ↔ text chunks for hybrid retrieval.
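Step 01 above can be sketched as a sentence-boundary chunker. This is a simplified illustration that approximates token counts by whitespace-separated words rather than a real tokenizer; the 512/64 defaults come from the pipeline description:

```typescript
// Split text at sentence boundaries, then pack sentences into chunks of at most
// `maxTokens`, carrying roughly `overlap` tokens of trailing context forward.
// Tokens are approximated as whitespace-separated words in this sketch.
function chunk(text: string, maxTokens = 512, overlap = 64): string[] {
  const sentences = text.match(/[^.!?]+[.!?]+\s*/g) ?? [text];
  const chunks: string[] = [];
  let current: string[] = [];
  let count = 0;
  for (const s of sentences) {
    const n = s.trim().split(/\s+/).length;
    if (count + n > maxTokens && current.length > 0) {
      chunks.push(current.join(" "));
      // Keep trailing sentences as overlap for context continuity.
      const kept: string[] = [];
      let keptTokens = 0;
      for (let i = current.length - 1; i >= 0 && keptTokens < overlap; i--) {
        kept.unshift(current[i]);
        keptTokens += current[i].split(/\s+/).length;
      }
      current = kept;
      count = keptTokens;
    }
    current.push(s.trim());
    count += n;
  }
  if (current.length > 0) chunks.push(current.join(" "));
  return chunks;
}
```

Packing whole sentences (rather than cutting mid-sentence at exactly 512 tokens) is what "sentence-boundary" buys: no chunk starts with a dangling clause, at the cost of slightly uneven chunk sizes.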
01 · ACADEMIC RESEARCH · ~40 MIN
Paper ingestion → citation network → hybrid search across author, topic, and citation graph. Influence scoring via PageRank.
02 · LEGAL / CASE LAW · ~40 MIN
Case law processor → precedent chain traversal → overruled-case detection across 20 ingested cases.
03 · CUSTOMER SUPPORT · ~40 MIN
500-ticket knowledge graph → solution recommender → auto-linking at a 0.85 confidence threshold.
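The 0.85 auto-linking threshold in the support lab boils down to cosine similarity over ticket embeddings. A minimal sketch with made-up 3-dimensional vectors standing in for real embedding output:

```typescript
// Cosine similarity between two embedding vectors.
const cosine = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
};

// Link a new ticket only to existing tickets above the confidence threshold.
function autoLink(
  ticket: number[],
  existing: { id: string; vec: number[] }[],
  threshold = 0.85,
): string[] {
  return existing
    .filter(t => cosine(ticket, t.vec) >= threshold)
    .map(t => t.id);
}

// Toy vectors: T-2 points almost the same way as the new ticket, T-1 is orthogonal.
const links = autoLink([1, 0, 0], [
  { id: "T-1", vec: [0, 1, 0] },      // similarity 0 — not linked
  { id: "T-2", vec: [0.9, 0.1, 0] },  // similarity ≈ 0.994 — linked
]);
// links → ["T-2"]
```

In the graph, each link would then become a typed edge (e.g. a SOLVED_BY or similar relationship) rather than just a score, so later queries can traverse it.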
Curriculum starts from RAG's real limitations — every KAG concept feels necessary, not theoretical.
All implementation in real, deployable TypeScript — not Python notebooks or pseudo-code.
Every choice — HNSW params, fusion weights, chunk strategy — explained with the reasoning behind it.
Working codebases across three industries — not exercises. Templates adaptable to real enterprise problems.
Circuit breakers, 4-level fallback chains, Redis caching, and p95 monitoring built in from day one.
Milvus, Neo4j, OpenAI embeddings, BERT NER, Redis, Docker — one cohesive stack in a single day.
Core Principles: Schema First · Chunk Semantically · Index Mutually · Route Intelligently · Monitor Everything
30-second timeout — vector + graph in parallel via Promise.all()
10-second timeout — Milvus HNSW search only
Subgraph cache lookup — near-instant if warm
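The tiers above follow a timeout-guarded fallback pattern: try the expensive hybrid search first, and degrade to cheaper levels on timeout or error. A minimal sketch with stand-in search functions (names and timeouts here are illustrative):

```typescript
// Run a promise with a timeout; reject if it doesn't settle in time.
const withTimeout = <T>(p: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`timeout after ${ms}ms`)), ms),
    ),
  ]);

type SearchFn = () => Promise<string[]>;

// Try each level in order, falling through on timeout or error.
async function fallbackChain(
  levels: { fn: SearchFn; timeoutMs: number }[],
): Promise<string[]> {
  for (const { fn, timeoutMs } of levels) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch {
      // Fall through to the next, cheaper level.
    }
  }
  throw new Error("all fallback levels exhausted");
}

// Stand-in search functions (the real ones would hit Milvus / Neo4j / Redis).
const hybridSearch: SearchFn = () => new Promise<string[]>(() => {}); // never settles: simulates overload
const vectorOnly: SearchFn = async () => ["vector-hit-1"];

const result = fallbackChain([
  { fn: hybridSearch, timeoutMs: 50 }, // level 1: vector + graph in parallel
  { fn: vectorOnly, timeoutMs: 20 },   // level 2: Milvus only
]);
// result resolves to ["vector-hit-1"] after level 1 times out
```

A circuit breaker would sit one layer above this: after repeated level-1 timeouts it skips straight to level 2 instead of paying the 30-second penalty on every request.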
| Time | Session |
|---|---|
| 08:30 | Registration & Environment Setup (30 min) — Docker Compose spin-up for Milvus and Neo4j, .env config, API key validation |
| 09:00 | Block 1 — Foundations: From RAG to GraphRAG (1 hr 45 min) — RAG limitations, KAG components, knowledge graph fundamentals |
| 10:45 | Break |
| 11:00 | Block 2 — GraphRAG Architecture & Hybrid Pipeline (1 hr 30 min) — Full 6-step ingestion pipeline, Milvus HNSW, Neo4j, BERT NER, mutual indexing |
| 12:30 | Block 3 — Intelligent Query Processing (1 hr) — Semantic/structural/hybrid routing, parallel search, 60/40 result fusion |
| 13:15 | Lunch Break |
| 14:00 | Block 4 — Three Production Use Cases + Labs (2 hrs) — Research assistant, legal document search, intelligent support system |
| 16:00 | Block 5 — Production, Performance & Best Practices (1 hr) — HNSW tuning, 4-level fallback chain, circuit breakers, p50/p95/p99 monitoring |
| 17:00 | Wrap-Up, Architecture Decisions & Next Steps (15 min) — Decision matrix, project ideas, community resources |
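The 60/40 result fusion covered in Block 3 can be sketched as a weighted merge of min-max-normalized scores from both retrievers; apart from the stated 0.6/0.4 split, the shapes and normalization choice here are assumptions:

```typescript
type Hit = { id: string; score: number };

// Min-max normalize scores to [0, 1] so vector and graph scales are comparable.
const normalize = (hits: Hit[]): Map<string, number> => {
  const scores = hits.map(h => h.score);
  const lo = Math.min(...scores);
  const hi = Math.max(...scores);
  const range = hi - lo || 1; // avoid divide-by-zero on uniform scores
  return new Map(hits.map(h => [h.id, (h.score - lo) / range]));
};

// Fuse: 0.6 × vector score + 0.4 × graph score; a missing side counts as 0.
function fuse(vector: Hit[], graph: Hit[], wVec = 0.6, wGraph = 0.4): Hit[] {
  const v = normalize(vector);
  const g = normalize(graph);
  const ids = new Set([...v.keys(), ...g.keys()]);
  return [...ids]
    .map(id => ({
      id,
      score: wVec * (v.get(id) ?? 0) + wGraph * (g.get(id) ?? 0),
    }))
    .sort((a, b) => b.score - a.score);
}

const fused = fuse(
  [{ id: "doc-1", score: 0.92 }, { id: "doc-2", score: 0.85 }, { id: "doc-4", score: 0.70 }],
  [{ id: "doc-2", score: 5 }, { id: "doc-3", score: 2 }],
);
// doc-2 ranks first: it scores well in both result sets
```

Normalization matters because Milvus similarity scores and graph-derived scores (hop counts, PageRank, etc.) live on different scales; without it the fixed 60/40 weights would be meaningless.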
TypeScript / Node.js
Milvus (HNSW)
Neo4j + Cypher
OpenAI gpt-4o
text-embedding-3-large
@xenova/transformers
Docker Compose
Zod
Redis