
Trying Out LangChain

by 앗사비 2024. 8. 21.

This post assumes Ollama is already installed and running on an external server.
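
Before running anything, it's worth checking that the server is actually reachable. A minimal sanity check, assuming the server address used throughout this post:

import requests

# List the models available on the remote Ollama server (GET /api/tags)
resp = requests.get("http://192.168.10.12:11434/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])

If the model you want isn't listed, pull it on the server first (e.g. ollama pull gemma2:9b-instruct-q4_K_M).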

 

The simplest approach

from langchain_community.llms import Ollama

llm = Ollama(base_url="http://192.168.10.12:11434", model="gemma2:9b-instruct-q4_K_M")
response = llm.invoke("Nice to meet you")  # avoid shadowing the built-in str
print(response)
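
Note: in newer LangChain releases this class has moved to the separate langchain-ollama package. A sketch of the equivalent, assuming that package is installed:

from langchain_ollama import OllamaLLM

llm = OllamaLLM(base_url="http://192.168.10.12:11434", model="gemma2:9b-instruct-q4_K_M")
print(llm.invoke("Nice to meet you"))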

 

 

Streaming the answer

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.ollama import Ollama

llm = Ollama(
    base_url="http://192.168.10.12:11434",
    model="gemma2:9b-instruct-q4_K_M",
    callback_manager=CallbackManager(
        [StreamingStdOutCallbackHandler()],  # print tokens to stdout in real time
    ),
)

prompt = "Nice to meet you"
response = llm.invoke(prompt)  # tokens are printed as they stream in
print(response)  # the complete answer is printed once more afterward
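
Alternatively, the LLM object itself exposes a .stream() method, so streaming works without a callback manager. A minimal sketch, reusing the llm configured above:

# Iterate over the chunks as the model generates them
for chunk in llm.stream("Nice to meet you"):
    print(chunk, end="", flush=True)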

 

 

Adding a system prompt

For multi-turn conversations, use ChatPromptTemplate.from_messages (see the sketch after the code below).

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.ollama import Ollama
from langchain_core.prompts import ChatPromptTemplate

llm = Ollama(
    base_url="http://192.168.10.12:11434",
    model="gemma2:9b-instruct-q4_K_M",
    callback_manager=CallbackManager(
        [StreamingStdOutCallbackHandler()],  # print tokens to stdout in real time
    ),
)

prompt = ChatPromptTemplate.from_template("You are a SQL expert. <Question>: {input}")
chain = prompt | llm
chain.invoke({"input": "What query lists information about all tables in PostgreSQL?"})
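
For the multi-turn case mentioned above, a minimal sketch using ChatPromptTemplate.from_messages, reusing the llm configured above (the conversation history here is made up for illustration):

from langchain_core.prompts import ChatPromptTemplate

multi_turn_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a SQL expert."),  # system prompt
        ("human", "What query lists all tables in PostgreSQL?"),  # earlier turn
        ("ai", "You can query the pg_catalog.pg_tables view."),  # earlier answer
        ("human", "{input}"),  # current question
    ]
)
chain = multi_turn_prompt | llm
chain.invoke({"input": "Now limit it to the public schema."})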

 

 

Analyzing a txt file

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms.ollama import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.document_loaders import TextLoader

llm = Ollama(
    base_url="http://192.168.10.12:11434",
    model="gemma2:9b-instruct-q4_K_M",
    callback_manager=CallbackManager(
        [StreamingStdOutCallbackHandler()],  # print tokens to stdout in real time
    ),
)

# Load the text file
loader = TextLoader("test.txt")  # Update with the path to your text file
documents = loader.load()  # Load the content
input_text = "\n".join(doc.page_content for doc in documents)
prompt = ChatPromptTemplate.from_template(
    "You are a customer-support agent. Group the following content by topic. <Question>: {input}"
)
chain = prompt | llm
chain.invoke({"input": input_text})

 

Summarizing a long txt file (using RAG)

from langchain_community.document_loaders import TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_chroma import Chroma
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Models
local_embeddings = OllamaEmbeddings(
    base_url="http://192.168.10.12:11434",
    model="bge-m3:latest",
)
model = ChatOllama(
    base_url="http://192.168.10.12:11434",
    model="mistral-small:latest",
)

# Source file
loader = TextLoader("test.txt")
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
vectorstore = Chroma.from_documents(documents=all_splits, embedding=local_embeddings)

retriever = vectorstore.as_retriever()


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


RAG_TEMPLATE = """
You are an assistant for question-answering tasks.
Use the following retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Answer in Korean.
Keep the answer concise.

<context>
{context}
</context>

Answer the following question:

{question}"""

rag_prompt = ChatPromptTemplate.from_template(RAG_TEMPLATE)

qa_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | model
    | StrOutputParser()
)

question = "How do I apply for external training?"

output = qa_chain.invoke(question)
print(output)
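
By default the retriever returns the 4 most similar chunks. If answers seem to be missing context, the number of retrieved chunks can be raised; the k value below is just an example:

# Retrieve more chunks per question (the default is 4)
retriever = vectorstore.as_retriever(search_kwargs={"k": 8})

Rebuild qa_chain after changing the retriever so the new setting takes effect.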

 

RAG references

https://python.langchain.com/docs/tutorials/local_rag/

https://rudaks.tistory.com/entry/langchain-RunnablePassthrough%EC%9D%80-%EB%AC%B4%EC%97%87%EC%9D%B8%EA%B0%80

 
