Course Outline

Overview of LLM Architecture and Attack Surface

  • How LLMs are built, deployed, and accessed via APIs
  • Key components in LLM app stacks (e.g., prompts, agents, memory, APIs)
  • Where and how security issues arise in real-world use

Prompt Injection and Jailbreak Attacks

  • What is prompt injection and why it’s dangerous
  • Direct and indirect prompt injection scenarios
  • Jailbreaking techniques to bypass safety filters
  • Detection and mitigation strategies

Data Leakage and Privacy Risks

  • Accidental data exposure through responses
  • PII leaks and model memory misuse
  • Designing privacy-conscious prompts and retrieval-augmented generation (RAG)

LLM Output Filtering and Guarding

  • Using Guardrails AI for content filtering and validation
  • Defining output schemas and constraints
  • Monitoring and logging unsafe outputs

Human-in-the-Loop and Workflow Approaches

  • Where and when to introduce human oversight
  • Approval queues, scoring thresholds, fallback handling
  • Trust calibration and role of explainability

Secure LLM App Design Patterns

  • Least privilege and sandboxing for API calls and agents
  • Rate limiting, throttling, and abuse detection
  • Robust chaining with LangChain and prompt isolation

Compliance, Logging, and Governance

  • Ensuring auditability of LLM outputs
  • Maintaining traceability and prompt/version control
  • Aligning with internal security policies and regulatory needs

Summary and Next Steps

Requirements

  • An understanding of large language models and prompt-based interfaces
  • Experience building LLM applications using Python
  • Familiarity with API integrations and cloud-based deployments

Audience

  • AI developers
  • Application and solution architects
  • Technical product managers working with LLM tools
 14 Hours

Number of participants


Price per participant

Upcoming Courses

Related Categories


Fatal error: Uncaught TypeError: _isl_get_excluded_site(): Return value must be of type ?array, none returned in /apps/hitra7/backdrop/modules/_custom/frontend/islc7/isl_common.inc:38 Stack trace: #0 /apps/hitra7/backdrop/modules/_custom/frontend/islc7/isl_common.inc(30): _isl_get_excluded_site() #1 /apps/hitra7/backdrop/modules/_custom/frontend/islc7/isl_common.inc(17): isl_get_excluded_site() #2 /apps/hitra7/backdrop/modules/_custom/frontend/islc7/islc7.module(51): get_outline_isls() #3 /apps/hitra7/backdrop/modules/_custom/frontend/islc7/islc7.module(7): islc_prepare_links() #4 /apps/hitra7/npfrontend/nptemplates/default.php(272): islc7_sites_links_array_v3() #5 /apps/hitra7/npfrontend/modules/course/course.php(143): require_once('...') #6 /apps/hitra7/npfrontend/core/routes.php(15): course_menu_callback() #7 /apps/hitra7/npfrontend/__index.php(78): require_once('...') #8 /apps/hitra7/npfrontend/index.php(15): include_once('...') #9 /apps/hitra7/index.php(66): include_once('...') #10 {main} thrown in /apps/hitra7/backdrop/modules/_custom/frontend/islc7/isl_common.inc on line 38