Master LLMs with LangChain - Printable Version




Master LLMs with LangChain - AD-TEAM - 12-06-2024

[Image: d9fb6d326a32fa78b4f98589f01215b2.jpg]
Master LLMs with LangChain
Published 11/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 3.90 GB | Duration: 8h 9m

Modern Generative AI and NLP Solutions! Build real-world projects using advanced LLMs like ChatGPT, Llama and Phi

[b]What you'll learn[/b]

Understand the theory behind LLMs and key concepts from LangChain and Hugging Face

Integrate proprietary LLMs (like OpenAI's ChatGPT) and open-source models such as Meta's Llama and Microsoft's Phi

Learn about LangChain components, including chains, templates, RAG modules, agents, and tools (a minimal chain sketch follows this list)

Explore RAG step-by-step for storage and retrieval using vector stores, with access to documents and web pages

Implement agents and tools to add features like conducting internet searches and retrieving up-to-date information

Deploy solutions in a local environment, enabling the use of open-source models without an internet connection

Build an application that automatically summarizes videos and responds to questions about them

Develop a complete custom chatbot with memory and create a user-friendly interface using Streamlit

Create an advanced RAG application to interact with documents and extract relevant information using a chat interface
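
To make the chains-and-templates idea concrete, here is a minimal sketch of a LangChain prompt template piped into a chat model. This is an illustration only, not course material: it assumes the langchain-openai and langchain-core packages are installed, that OPENAI_API_KEY is set in the environment, and the model name is a placeholder you can swap for any chat model.

[code]
# Minimal sketch: prompt template -> chat model -> string output (assumed setup).
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt template with a single input variable.
prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)

# Proprietary chat model wrapper; an open-source model could be swapped in.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # illustrative model name

# Compose prompt -> model -> parser with the LCEL pipe operator.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain lets you compose prompts, models, and parsers."}))
[/code]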

[b]Requirements[/b]

Programming logic

Basic Python programming

[b]Description[/b]

In this course, you will dive deep into the world of Generative AI with LLMs (Large Language Models), exploring the potential of combining LangChain with Python. You will implement proprietary solutions (like ChatGPT) and modern open-source models like Llama and Phi. Through practical, real-world projects, you'll develop innovative applications, including a custom virtual assistant and a chatbot that interacts with documents and videos. We'll explore advanced techniques such as RAG and agents, and use tools like Streamlit to create intuitive interfaces. You'll learn how to use these technologies for free in Google Colab and also how to run projects locally.

The introduction covers the theory of Large Language Models (LLMs) and their fundamental concepts. Additionally, we'll explore the Hugging Face ecosystem, which offers modern solutions for Natural Language Processing (NLP). You'll learn to implement LLMs using both the Hugging Face pipeline and the LangChain library, understanding the advantages of each approach.

The second part focuses on mastering LangChain. You'll learn to access open-source models, like Meta's Llama and Microsoft's Phi, as well as proprietary LLMs, like OpenAI's ChatGPT. We'll explain model quantization to enhance performance and scalability. Key LangChain components, such as chains, templates, and tools, will be presented, along with how to use them to develop robust NLP solutions. Prompt engineering techniques will be covered to help you achieve more accurate results. The concept of RAG (Retrieval-Augmented Generation) will be explored, including the information storage and retrieval process: you'll learn to implement vector stores, understand the importance of embeddings, and use them effectively. We'll also demonstrate how to use RAG to interact with PDF documents and web pages. Additionally, you'll have the opportunity to integrate agents and tools, such as using LLMs to perform web searches and retrieve recent information. Solutions will also be implemented locally, enabling access to open-source models even without an internet connection.

In the project development phase, you'll learn to create a custom chatbot with an interface and memory for Q&A. You'll also learn to develop interactive applications using Streamlit, making it easy to build intuitive interfaces. One project involves developing an advanced application that uses RAG to interact with multiple documents and extract relevant information through a chat interface. Another project focuses on building an application that automatically summarizes videos and answers related questions, resulting in a powerful tool for instant, automated video comprehension.
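
To illustrate the RAG workflow described above (indexing document chunks as embeddings in a vector store, then retrieving them to ground the model's answer), here is a rough sketch. It is not taken from the course: the package choices (langchain-community, langchain-text-splitters, faiss-cpu, pypdf), the example.pdf path, and the model names are assumptions you can replace with your own.

[code]
# Rough RAG sketch: load -> split -> embed/index -> retrieve -> answer (assumed packages).
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# 1. Load and chunk the source document ("example.pdf" is a placeholder path).
docs = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Index the chunks as embeddings in a vector store and expose a retriever.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 3. Retrieve context for a question and generate a grounded answer.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
)
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()  # illustrative model

question = "What is the main topic of the document?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
print(chain.invoke({"context": context, "question": question}))
[/code]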

Overview

Section 1: Introduction

Lecture 1 Course content

Lecture 2 Course materials

Lecture 3 What are LLMs?

Lecture 4 How LLMs work 1

Lecture 5 How LLMs work 2

Lecture 6 Embeddings and tokens

Lecture 7 Evolution and historical context

Lecture 8 Examples of applications

Lecture 9 Challenges, limitations and ethics

Lecture 10 LLM models

Section 2: LLM using Hugging Face

Lecture 11 Hugging Face account and token

Lecture 12 Types of models

Lecture 13 Installation and configuration

Lecture 14 Parameters for text generation

Lecture 15 Prompt templates

Lecture 16 Exploring prompt engineering

Lecture 17 Message format

Lecture 18 Optimizing with quantization

Section 3: LLM using LangChain

Lecture 19 LangChain - intuition

Lecture 20 Installing LangChain

Lecture 21 LangChain models

Lecture 22 Other open-source models

Lecture 23 Chat models

Lecture 24 Prompt templates

Lecture 25 Chains and custom functions

Lecture 26 Streaming

Lecture 27 Other model services

Lecture 28 Running on a local machine

Lecture 29 Ollama on a local machine

Section 4: LangChain - RAG

Lecture 30 RAG - intuition

Lecture 31 Preparing the environment

Lecture 32 Tests with RAG

Lecture 33 Debugging

Lecture 34 Indexing - intuition

Lecture 35 Indexing - implementation

Lecture 36 Text retrieval and generation - intuition

Lecture 37 Text retrieval and generation - implementation

Section 5: LangChain - Agents and Tools

Lecture 38 Agents and Tools - intuition

Lecture 39 Wikipedia tool

Lecture 40 Custom tool

Lecture 41 ReAct

Lecture 42 Creating and running the agent

Lecture 43 Tests with ChatGPT

Lecture 44 Tests with Tavily

Lecture 45 Chat templates

Lecture 46 LangSmith

Section 6: Project 1: Video transcription

Lecture 47 Preparing the environment

Lecture 48 Video transcription

Lecture 49 Loading the model

Lecture 50 Prompt template

Lecture 51 Chain, response, and translation

Lecture 52 Complete pipeline

Lecture 53 Markdown for visualization

Section 7: Project 2: Chatbot with memory and interface

Lecture 54 Preparing the environment

Lecture 55 Prompt, chain, and response

Lecture 56 Session state

Lecture 57 User input and conversation

Lecture 58 Google Colab code

Section 8: Project 3: Talk to your documents

Lecture 59 Preparing the environment

Lecture 60 Panel to select documents

Lecture 61 Indexing and retrieval

Lecture 62 Advanced chain for conversation

Lecture 63 Session variables

Lecture 64 Conversation

Lecture 65 Google Colab code

Section 9: Final remarks

Lecture 66 Final remarks

Lecture 67 BONUS
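
As a rough idea of what Project 2 (Section 7) involves, the sketch below shows a Streamlit chat interface that keeps the conversation history in st.session_state and passes it back to the model on every turn. It is a simplified illustration rather than the course's code; the streamlit and langchain-openai dependencies and the model name are assumptions.

[code]
# Simplified sketch of a chatbot with memory in Streamlit (run with: streamlit run app.py).
import streamlit as st
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, AIMessage

st.title("Chatbot with memory")

# Keep the full message history across Streamlit reruns.
if "history" not in st.session_state:
    st.session_state.history = []

# Replay previous turns so the conversation stays visible.
for msg in st.session_state.history:
    role = "user" if isinstance(msg, HumanMessage) else "assistant"
    st.chat_message(role).write(msg.content)

# Read the next user message and generate a reply conditioned on the history.
if user_input := st.chat_input("Ask something..."):
    st.chat_message("user").write(user_input)
    st.session_state.history.append(HumanMessage(content=user_input))

    llm = ChatOpenAI(model="gpt-4o-mini")         # illustrative model choice
    reply = llm.invoke(st.session_state.history)  # pass the whole conversation

    st.chat_message("assistant").write(reply.content)
    st.session_state.history.append(AIMessage(content=reply.content))
[/code]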

[b]Who this course is for[/b]

Professionals and enthusiasts in the field of artificial intelligence interested in exploring the use of LLMs

Professionals looking to implement LLMs in their own applications

Students aiming to gain deeper knowledge in NLP and learn to implement modern solutions

Professionals from other fields who want to learn how to use language models in real-world applications

Developers seeking to expand their skills with generative AI

Researchers interested in exploring advances in LLMs and their practical applications

[Image: HfiBuCqz_o.jpg]

Fikper

[To see links please register or login]

RapidGator

[To see links please register or login]

NitroFlare

[To see links please register or login]