![]() |
Hands On AI (LLM) Red Teaming - Printable Version +- Softwarez.Info - Software's World! (https://softwarez.info) +-- Forum: Library Zone (https://softwarez.info/Forum-Library-Zone) +--- Forum: Video Tutorials (https://softwarez.info/Forum-Video-Tutorials) +--- Thread: Hands On AI (LLM) Red Teaming (/Thread-Hands-On-AI-LLM-Red-Teaming) |
Hands On AI (LLM) Red Teaming - BaDshaH - 02-14-2025 ![]() Published 2/2025 MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz Language: English | Size: 7.08 GB | Duration: 8h 24m Learn AI Red Teaming from Basics of LLMs, LLM Architecture, AI/GenAI Apps and all the way to AI Agents What you'll learn Fundamentals of LLMs Jailbreaking LLMs OWASP Top 10 LLM & GenAI Hands On - LLM Red Teaming with tools Writing Malicious Prompts (Adversarial Prompt Engineering) Requirements Basics of Python Programming Cybersecurity Fundamentals Description ObjectiveThis course provides hands-on training in AI security, focusing on red teaming for large language models (LLMs). It is designed for offensive cybersecurity researchers, AI practitioners, and managers of cybersecurity teams. The training aims to equip participants with skills to:Identify and exploit vulnerabilities in AI systems for ethical purposes.Defend AI systems from attacks.Implement AI governance and safety measures within organizations.Learning GoalsUnderstand generative AI risks and vulnerabilities.Explore regulatory frameworks like the EU AI Act and emerging AI safety standards.Gain practical skills in testing and securing LLM systems.Course StructureIntroduction to AI Red Teaming:Architecture of LLMs.Taxonomy of LLM risks.Overview of red teaming strategies and tools.Breaking LLMs:Techniques for jailbreaking LLMs.Hands-on exercises for vulnerability testing.Prompt Injections:Basics of prompt injections and their differences from jailbreaking.Techniques for conducting and preventing prompt injections.Practical exercises with RAG (Retrieval-Augmented Generation) and agent architectures.OWASP Top 10 Risks for LLMs:Understanding common risks.Demos to reinforce concepts.Guided red teaming exercises for testing and mitigating these risks.Implementation Tools and Resources:Jupyter notebooks, templates, and tools for red teaming.Taxonomy of security tools to implement guardrails and monitoring solutions.Key OutcomesEnhanced Knowledge: Develop expertise in AI security terminology, frameworks, and tactics.Practical Skills: Hands-on experience in red teaming LLMs and mitigating risks.Framework Development: Build AI governance and security maturity models for your organization.Who Should Attend?This course is ideal for:Offensive cybersecurity researchers.AI practitioners focused on defense and safety.Managers seeking to build and guide AI security teams.Good luck and see you in the sessions! Cybersecurity Professionals who wants to secure LLMs and AI Agents Homepage |