
Have you figured out LangChain's LCEL and Runnable yet?

Views: 773 | Published: 2024-08-01 20:36:13

Most developers in this field have probably heard of LangChain's LCEL. But what exactly do RunnablePassthrough, RunnableParallel, RunnableBranch, and RunnableLambda mean, and in which scenarios should you use them?

1. Definition and principle of LCEL

At the heart of LangChain is the Chain: a sequence of calls to multiple components.

LCEL (LangChain Expression Language) is an expression language defined by LangChain that offers a more efficient and concise way to invoke a series of components.

The way LCEL is used is simple: chain together a series of components, all of which implement the Runnable interface, with the pipe character ("|").

Like this:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import CommaSeparatedListOutputParser
from langchain_openai import ChatOpenAI

prompt_tpl = ChatPromptTemplate.from_messages(
    [
        ("system", "{parser_instructions}"),
        ("human", "List {viewPointNum} famous landmarks of {cityName}."),
    ]
)

output_parser = CommaSeparatedListOutputParser()
parser_instructions = output_parser.get_format_instructions()

model = ChatOpenAI(model="gpt-3.5-turbo")

chain = prompt_tpl | model | output_parser

response = chain.invoke(
    {"cityName": "Nanjing", "viewPointNum": 3, "parser_instructions": parser_instructions}
)
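The CommaSeparatedListOutputParser at the end of the chain turns the model's comma-separated text into a Python list. Roughly, its behavior amounts to the following (a simplified sketch, not the library's actual code):

```python
def parse_comma_list(text: str) -> list[str]:
    # Roughly what CommaSeparatedListOutputParser.parse does:
    # split on commas and strip surrounding whitespace.
    return [part.strip() for part in text.split(",")]

result = parse_comma_list("Sun Yat-sen Mausoleum, Confucius Temple, Ming Xiaoling Mausoleum")
print(result)  # ['Sun Yat-sen Mausoleum', 'Confucius Temple', 'Ming Xiaoling Mausoleum']
```

The get_format_instructions() call, in turn, produces the text injected into {parser_instructions} that tells the model to answer in this comma-separated format.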

This is why LangChain implements the Runnable interface on all of its components: so they can be invoked in this fast, concise LCEL style. Examples include the commonly used PromptTemplate, LLMChain, StructuredOutputParser, and so on.

In Python, the pipe character ("|") maps to the __or__ operator: A | B is evaluated as A.__or__(B).

How, then, does the Runnable interface in LangChain implement this __or__ operation?
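The original post walks through LangChain's source code here. As a stand-in, here is a minimal toy sketch of the idea (my own illustration, not LangChain's actual implementation, which builds a RunnableSequence object): __or__ returns a new runnable whose invoke feeds the left side's output into the right side.

```python
class ToyRunnable:
    """Toy sketch of the Runnable `|` mechanics (not LangChain's real code)."""

    def __init__(self, func):
        self.func = func

    def invoke(self, x):
        return self.func(x)

    def __or__(self, other):
        # A | B: run A first, then feed A's output into B
        return ToyRunnable(lambda x: other.invoke(self.invoke(x)))

add_one = ToyRunnable(lambda x: x + 1)
double = ToyRunnable(lambda x: x * 2)
chain = add_one | double
print(chain.invoke(3))  # (3 + 1) * 2 = 8
```

Because __or__ itself returns a runnable, arbitrarily long chains like A | B | C compose naturally.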

LangChain strings all the Runnables together through __or__; calling invoke then executes them one by one, with the output of the previous component used as the input of the next.

I have to say, this style is a bit reminiscent of a neural network's layer-by-layer pipeline; similar patterns really do show up everywhere.

To summarize: every LangChain component implements Runnable; LCEL links multiple components together, and invoking the chain executes each component's invoke method in turn, with each component's output becoming the next component's input.

2. The meaning and application scenarios of the common Runnables

2.1 RunnablePassthrough

Definition

RunnablePassthrough is mainly used to pass data along a chain unchanged. It generally sits in the first position of a chain to receive the user's input; in a middle position, it receives the output of the previous step.

Application scenario

For example, continuing the example above: accept the user's input city, and if the input city is Nanjing, replace it with Beijing, keeping everything else unchanged. The code is as follows. Note that here a plain dict {...} and RunnableParallel(...) have the same semantics: LCEL automatically coerces a dict into a RunnableParallel.

chain = (
    {
        "cityName": lambda x: "Beijing" if x["cityName"] == "Nanjing" else x["cityName"],
        "viewPointNum": lambda x: x["viewPointNum"],
        "parser_instructions": lambda x: x["parser_instructions"],
    }
    | prompt_tpl
    | model
    | output_parser
)
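To make RunnablePassthrough's behavior concrete, here is a toy stand-in (my own sketch, not the real class): its invoke simply returns the input unchanged, which is why it can hold a position in a chain, or a key in a dict step, without modifying the data.

```python
class ToyPassthrough:
    """Toy stand-in for RunnablePassthrough: returns its input unchanged."""

    def invoke(self, x):
        return x

# In a dict step, a passthrough can forward the whole input under one key
# while other keys derive new values from it:
steps = {
    "original": ToyPassthrough().invoke,
    "doubled": lambda x: x["num"] * 2,
}
inp = {"num": 3}
result = {name: fn(inp) for name, fn in steps.items()}
print(result)  # {'original': {'num': 3}, 'doubled': 6}
```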

2.2 RunnableParallel

Definition

As the name suggests, RunnableParallel is for executing multiple components in parallel. With RunnableParallel, some or all of the components in a chain can be executed concurrently.

Application scenario

For example, suppose two tasks should run at the same time: one lists the city's famous attractions, and the other lists notable books about the city's history.

from langchain_core.runnables import RunnableParallel

prompt_tpl_1 = ChatPromptTemplate.from_messages(
    [
        ("system", "{parser_instructions}"),
        ("human", "List {viewPointNum} famous landmarks of {cityName}."),
    ]
)
prompt_tpl_2 = ChatPromptTemplate.from_messages(
    [
        ("system", "{parser_instructions}"),
        ("human", "List {viewPointNum} notable books about the history of {cityName}."),
    ]
)

output_parser = CommaSeparatedListOutputParser()
parser_instructions = output_parser.get_format_instructions()

model = ChatOpenAI(model="gpt-3.5-turbo")

chain_1 = prompt_tpl_1 | model | output_parser
chain_2 = prompt_tpl_2 | model | output_parser
chain_parallel = RunnableParallel(view_point=chain_1, book=chain_2)

response = chain_parallel.invoke(
    {"cityName": "Nanjing", "viewPointNum": 3, "parser_instructions": parser_instructions}
)
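Conceptually, RunnableParallel runs each named branch on the same input concurrently and collects the results under the branch names. A toy sketch with a thread pool (my own illustration of the idea, not LangChain's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(branches: dict, inp):
    # Run every branch on the same input concurrently,
    # then collect the results under the branch names.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, inp) for name, fn in branches.items()}
        return {name: f.result() for name, f in futures.items()}

out = run_parallel(
    {
        "view_point": lambda city: f"landmarks of {city}",
        "book": lambda city: f"books about {city}",
    },
    "Nanjing",
)
print(out)  # {'view_point': 'landmarks of Nanjing', 'book': 'books about Nanjing'}
```

This mirrors the shape of the result above: response is a dict with one entry per branch (view_point and book).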

2.3 RunnableBranch

Definition

RunnableBranch is mainly used in scenarios with multiple sub-chains: it provides routing for chain calls, somewhat like LangChain's router chains. We can create several sub-chains and then choose which one to execute based on a condition.

Application scenario

For example, suppose there are multiple chains for answering questions: first classify the question, then answer it with the chain for that category.

from operator import itemgetter
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableBranch, RunnableLambda
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

# Prepare the destination chains: a physics chain, a math chain, and a fallback chain
# 1. Physics chain
physics_template = """
You are a physicist who specializes in answering physics-related questions, and when you don't know the answer to a question, you answer no.
The specific questions are as follows:
{input}
"""
physics_chain = PromptTemplate.from_template(physics_template) | model | output_parser

# 2. math_chain
math_template = """
You are a mathematician who is good at answering math-related questions, and when you don't know the answer to a question, you answer no.
The specific question is as follows:
{input}
"""
math_chain = PromptTemplate.from_template(math_template) | model | output_parser

# 3. Fallback chain for everything else
other_template = """
You are an AI assistant and you will answer the questions.
The specific questions are as follows:
{input}
"""
other_chain = PromptTemplate.from_template(other_template) | model | output_parser


classify_prompt_template = """
You are asked to classify the following problem as "Math", "Physics", or "Other". You do not need to return more than one classification, just one.
The questions are as follows:
{input}

Categorize the result:
"""
classify_chain = PromptTemplate.from_template(classify_prompt_template) | model | output_parser

answer_chain = RunnableBranch(
    (lambda x: "Math" in x["topic"], math_chain),
    (lambda x: "Physics" in x["topic"], physics_chain),
    other_chain
)

# print_info (defined in section 2.4 below) logs the intermediate routing input
final_chain = {"topic": classify_chain, "input": itemgetter("input")} | RunnableLambda(print_info) | answer_chain
# final_chain.invoke({"input": "What is the radius of the Earth?"})
final_chain.invoke({"input": "What is the derivative of y=x?"})
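The routing logic of RunnableBranch boils down to: evaluate the (condition, runnable) pairs in order, run the first runnable whose condition is true, and fall back to the default otherwise. A toy sketch of that logic (not the real class):

```python
def run_branch(pairs, default, x):
    # Try each (condition, runnable) pair in order;
    # run the first match, otherwise fall back to the default.
    for condition, runnable in pairs:
        if condition(x):
            return runnable(x)
    return default(x)

answer = run_branch(
    [
        (lambda x: "Math" in x["topic"], lambda x: "math chain handles: " + x["input"]),
        (lambda x: "Physics" in x["topic"], lambda x: "physics chain handles: " + x["input"]),
    ],
    lambda x: "other chain handles: " + x["input"],
    {"topic": "Math", "input": "derivative of y=x"},
)
print(answer)  # math chain handles: derivative of y=x
```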

2.4 RunnableLambda

Definition

Simply put, RunnableLambda converts a Python function into a Runnable object. This lets any function be treated as part of an LCEL chain: wrap the functionality you need in a custom function, package it with RunnableLambda, and integrate it into the chain. In this way a chain can interface with virtually any external system.

Application scenario

For example, if you want to insert a custom step in the middle of a chain's execution (e.g., to print logs), you can do so with a custom function wrapped in RunnableLambda.

from langchain_core.runnables import RunnableLambda

def print_info(info):
    # Print whatever flows through this step, then pass it along unchanged
    print(f"info: {info}")
    return info

prompt_tpl_1 = ChatPromptTemplate.from_messages(
    [
        ("system", "{parser_instructions}"),
        ("human", "List {viewPointNum} famous landmarks of {cityName}."),
    ]
)

output_parser = CommaSeparatedListOutputParser()
parser_instructions = output_parser.get_format_instructions()

model = ChatOpenAI(model="gpt-3.5-turbo")

chain_1 = prompt_tpl_1 | model | RunnableLambda(print_info) | output_parser


response = chain_1.invoke(
    {"cityName": "Nanjing", "viewPointNum": 3, "parser_instructions": parser_instructions}
)

3. Summary

This post covered LangChain's LCEL expression language and the principle behind LangChain chains, along with the definitions and application scenarios of several commonly used Runnables. I hope you found it helpful.

In the near future I plan to launch a column on helping developers add AI capabilities to their products; interested readers can add me on WeChat to discuss.

That's all for this post! Feel free to follow me, or add me on WeChat (yclxiao) to chat; the QR code is below.

Link to original article:/s/l-EPH0hsmzQousPz8-MXcQ