This comprehensive course explores the key concepts needed to build production-grade AI applications. Combining theoretical foundations with hands-on practice, you'll build the skills necessary to understand and create AI-powered solutions.
AI is not a single technology; it spans a spectrum of capabilities and purposes. Understanding these flavors helps clarify what today's systems can and cannot do, and where the field is heading.
| Flavor | Question it answers | Examples |
|---|---|---|
| Predictive | What will happen? | Fraud detection, demand forecasting, recommendations |
| Prescriptive | What should we do? | Dynamic pricing, supply chain optimization, treatment plans |
| Causal | Why did it happen? | A/B test analysis, root cause analysis, scientific discovery |
| Generative | What can be created? | Claude, ChatGPT, DALL·E, Amazon Nova, Sora |
| Agentic | What actions should I take? | AI coworkers, autonomous research agents, software agents |
This course focuses on Generative and Agentic AI, the two flavors driving today's wave of AI application development.
AI has evolved through the convergence of algorithmic breakthroughs and advances in computing infrastructure. From John McCarthy's 1956 Dartmouth workshop to today's agentic systems, here's how we got here.
The Birth of AI
Machine Learning Era
AlexNet & the Deep Learning Moment
"Attention Is All You Need": The Transformer
The ChatGPT Moment: GenAI Goes Mainstream
Multimodal AI & Cloud Platforms
Agentic AI & The Coding Inflection Point
This course is designed specifically for scientists, software engineers, and data engineers. Through a carefully structured learning path, you'll gain both the theoretical knowledge and the practical skills needed to build production-grade AI applications.
Master the core concepts of modern AI development. Starting from fundamentals, you'll progressively build knowledge of language models, prompt engineering, AI agents, embeddings & RAG, and MCP: the essential building blocks for creating AI applications on AWS.
Learn about the architecture, capabilities, and limitations of Large Language Models. Understand the fundamental concepts behind these powerful AI systems that are driving innovation across industries.
Master the art of effectively communicating with AI models through carefully crafted prompts. Learn strategies and techniques to get the most accurate and useful responses from language models.
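As a small taste of what this module covers, here is a minimal few-shot prompt template in Python. The classification task, example messages, and labels are purely illustrative, not part of the course material.

```python
# Illustrative few-shot prompt: the labeled examples teach the model the
# task and output format before it sees the real input. Task is made up.
FEW_SHOT_EXAMPLES = [
    ("The checkout page crashes when I click Pay.", "bug"),
    ("Please add dark mode to the settings screen.", "feature-request"),
]

def build_prompt(user_message: str) -> str:
    """Assemble a classification prompt from labeled examples."""
    lines = ["Classify each support message as 'bug' or 'feature-request'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Message: {user_message}")
    lines.append("Label:")  # the model completes this line
    return "\n".join(lines)

print(build_prompt("Export to CSV would be really useful."))
```

Ending the prompt with a bare `Label:` nudges the model to answer in the same one-word format as the examples, one of the simplest and most reliable prompt-engineering techniques.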
Discover how agentic LLM applications extend language models with memory, tool use, and iterative decision cycles to autonomously solve complex, multi-step tasks. This module covers how such applications go beyond single-step or workflow-based designs to deliver adaptable, goal-driven AI solutions.
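The decision cycle described above can be sketched as a toy loop. The hard-coded `decide` policy and the single calculator tool below are stand-ins for a real LLM and real tools; everything here is illustrative.

```python
# Toy agentic loop: observe memory -> decide -> act, repeated until done.

def add_tool(a: float, b: float) -> float:
    """A 'tool' the agent can invoke."""
    return a + b

TOOLS = {"add": add_tool}

def decide(goal, memory):
    """Stub policy: a real agent would ask an LLM which tool to call next."""
    if "result" not in memory:
        return ("add", goal)            # call the tool on the goal's operands
    return ("finish", memory["result"]) # goal satisfied, stop

def run_agent(goal):
    memory = {}                         # the agent's working memory
    for _ in range(10):                 # step budget to avoid infinite loops
        action, payload = decide(goal, memory)
        if action == "finish":
            return payload
        memory["result"] = TOOLS[action](*payload)
    raise RuntimeError("step budget exhausted")

print(run_agent((2, 3)))  # prints 5
```

The essential shape carries over to real frameworks: a loop that consults a model, dispatches tool calls, records results in memory, and stops when the model signals completion.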
Understand how embeddings encode semantic meaning, how vector stores enable similarity search, and how to build RAG pipelines that give LLMs access to your private data without fine-tuning. Covers Amazon Titan Embeddings, OpenSearch, and Amazon Bedrock Knowledge Bases.
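To make "similarity search" concrete, here is a minimal sketch using hand-made toy vectors in place of real embeddings. In practice the vectors would come from an embedding model such as Titan and live in a vector store such as OpenSearch; the documents and numbers below are invented for illustration.

```python
import math

# Toy 3-d "embeddings"; a real embedding model produces vectors with
# hundreds or thousands of dimensions.
DOCS = {
    "cat care tips":    [0.9, 0.1, 0.0],
    "dog training":     [0.4, 0.9, 0.0],
    "tax law overview": [0.0, 0.1, 0.95],
}

def cosine(a, b):
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def nearest(query_vec):
    """Brute-force similarity search; a vector store does this at scale."""
    return max(DOCS, key=lambda name: cosine(query_vec, DOCS[name]))

print(nearest([0.85, 0.2, 0.05]))  # prints cat care tips
```

A RAG pipeline wraps this same lookup: embed the user's question, retrieve the nearest documents, and stuff them into the LLM's prompt as grounding context.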
Learn how to use MCP to build robust, maintainable AI applications. Understand the core principles of MCP and how it enables standardized communication between AI models and tools.
Part 1 gave you the concepts. Part 2 is where you build. Each module introduces a real tool or practice, and the part culminates in a capstone project: a working chat agent for this course, built on AWS using everything you've learned.
Get hands-on with the AWS services that power production AI applications. Covers Amazon Bedrock (foundation model API, model selection), Bedrock Agents (action groups, knowledge base integration), AgentCore Runtime and Gateway, and the Strands SDK for building agentic applications on AWS.
Learn to build AI applications the right way: starting from a spec. Kiro is Amazon's AI-powered developer tool that supports both vibe coding and structured spec-driven development. This module covers the Kiro CLI and IDE, how to write effective specs, and how spec-driven workflows lead to better, more maintainable AI-powered code.
Understand the operational side of AI applications. Learn how to trace and monitor agent behavior with CloudWatch, evaluate output quality, manage prompt versions with Bedrock Prompt Management, and enforce safety boundaries with Bedrock Guardrails. What you learn here applies directly to the capstone project.
Put it all together. Crawl and index this course's content into a Bedrock Knowledge Base, build a Strands SDK agent grounded in that knowledge, expose it through AgentCore Gateway as an MCP endpoint, instrument it with LLMOps tooling, and write the whole thing spec-first using Kiro. The result: a working AI assistant that knows this course, deployed on AWS.