Course Content
Overview
This course provides a deep dive into Large Language Models (LLMs) and Generative AI, covering essential concepts, frameworks, and advanced applications. Participants will learn how to build AI-powered applications, optimize workflows, and implement security best practices in AI-driven systems.
Since this is a practical, project-focused course, we will not focus on the underlying math; instead, our goal is to use the modern AI tech stack to build applications and products.
What You Will Learn
- Introduction to LLMs and Generative AI – Understanding the fundamentals of LLMs and their capabilities.
- AI Agents and Agentic Workflows – Implementing intelligent, autonomous AI agents.
- Building Basic Chat Applications – Using LangChain to develop AI-driven chatbots.
- Chat Over Large Documents – Leveraging vector stores such as Qdrant, pgvector, and Pinecone for efficient document retrieval.
- Retrieval-Augmented Generation (RAG) – Enhancing AI responses with dynamic information retrieval.
- Context-Aware AI Applications – Developing AI solutions that adapt to different contexts.
- Memory-Aware AI Agents – Utilizing Qdrant and the Neo4j graph database for persistent AI memory.
- Document-to-Graph DB and Embeddings – Transforming structured and unstructured data into graph-based representations.
- Multi-Modal LLM Applications – Integrating text, images, and other data modalities.
- Security and Guardrails – Applying guardrails, including with self-hosted models like Llama 3 or Gemma, to ensure AI safety and compliance.
- AI Agent Orchestration with LangGraph – Managing multiple AI agents and workflows.
- Checkpointing in LangGraph – Ensuring fault tolerance and reproducibility in AI pipelines.
- Human-in-the-Loop Interruptions – Allowing human oversight in AI-driven decisions.
- Tool Binding and API Calling – Enabling AI agents to interact with external tools and services.
- Autonomous vs. Controlled Workflows – Understanding different agent workflow strategies.
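To give a flavor of the Retrieval-Augmented Generation topic above, here is a toy sketch of the core retrieval step: embed documents and a query, rank documents by similarity, and assemble an augmented prompt. The bag-of-words "embedding" and the document snippets are stand-ins for illustration only; the course uses real embedding models and vector stores such as Qdrant.

```python
# Toy RAG retrieval: a bag-of-words vector stands in for a real embedding
# model, and cosine similarity stands in for a vector-store lookup.
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: term-frequency vector of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical document snippets playing the role of an indexed corpus.
documents = [
    "Qdrant is a vector database used for similarity search",
    "LangGraph orchestrates multiple AI agents as a graph of steps",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def build_prompt(query):
    """Augment the user question with retrieved context for the LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(retrieve("what is a vector database"))
```

A production pipeline swaps `embed` for a real embedding model and `index` for a vector store, but the retrieve-then-augment shape stays the same.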
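The tool-binding topic can likewise be sketched in a few lines: register plain Python functions as named tools, then dispatch a model's structured tool call to the matching function. The tool names and the hand-written "model output" below are hypothetical; frameworks like LangChain and LangGraph provide this machinery with schema generation on top.

```python
# Minimal tool binding: a registry of callables plus a dispatcher for
# tool calls of the shape {"name": ..., "args": {...}}, which is roughly
# the structure most LLM function-calling APIs return.
TOOLS = {}

def tool(fn):
    """Register a function so an agent can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # In a real agent this would call an external weather API.
    return f"Sunny in {city}"

@tool
def add(a: int, b: int) -> int:
    return a + b

def dispatch(call: dict):
    """Execute one structured tool call emitted by the model."""
    return TOOLS[call["name"]](**call["args"])

# Pretend the LLM decided to call a tool:
print(dispatch({"name": "add", "args": {"a": 2, "b": 3}}))  # → 5
```

The dictionary-based registry keeps the example self-contained; real frameworks additionally derive a JSON schema from each function's signature so the model knows which tools exist and what arguments they take.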