
Semantic Kernel/C#: A Universal Function Calling Approach (with a List of Tested Models at the End)

Published: 2024-08-29 07:55:05

Introduction to Function Calling

Function calling lets you connect models such as gpt-4o to external tools and systems. This is useful for many scenarios, such as empowering AI assistants or building deep integrations between your applications and models.

If you have used Semantic Kernel, you may have noticed that apart from the OpenAI models that natively support Function Calling, automatic function calling does not work well: most domestic (Chinese) large models essentially cannot use it. Wanting to solve this problem, I found a solution from another developer on GitHub.

GitHub address: https://github.com/Jenscaasen/UniversalLLMFunctionCaller

His approach uses prompt engineering to achieve the effect of calling local functions in Semantic Kernel. I read his code and translated the prompts into Chinese, which may make them work better with domestic models.

I wrote an earlier post introducing the methodology: "How to make it possible for other models to call local functions in Semantic Kernel as well!"

At that time, however, I did not have an open-source project, so interested readers had no quick way to try it out short of rebuilding everything themselves. This part has now been integrated into my open-source project SimpleRAG: just fill in your own API key for a quick hands-on experience, and you can easily browse the code as well.

GitHub address: https://github.com/Ming-jiayou/SimpleRAG

A Generic Approach to Function Calling

Before walking through the implementation, let's look at the effect:

For comparison, here is the effect when Function Calling is not used:

One more example:

Again for comparison, the effect when Function Calling is not used:

The full code can be viewed on GitHub; the focus here is the implementation approach.

The walkthrough below uses Qwen2-7B-Instruct as an example.

First create a Kernel:

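As a sketch, creating a Kernel against an OpenAI-compatible provider looks roughly like this. The model id, endpoint, and API key below are placeholders, and the endpoint-accepting overload assumes a recent Semantic Kernel version:

```csharp
using Microsoft.SemanticKernel;

// Sketch: connect Semantic Kernel to an OpenAI-compatible endpoint.
// Model id, endpoint, and API key are placeholders, not from the article.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "Qwen/Qwen2-7B-Instruct",           // assumed provider model id
    endpoint: new Uri("https://example.com/v1"), // your provider's base URL
    apiKey: "sk-...");
Kernel kernel = builder.Build();
```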

Import plugins in Kernel:

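A plugin is just a class whose methods carry the `[KernelFunction]` attribute. Here is a hypothetical example with simulated functions (the names are illustrative, not the article's actual plugin):

```csharp
using System;
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Hypothetical plugin with simulated functions for testing.
public class TodoPlugin
{
    [KernelFunction, Description("Adds an item to the todo list")]
    public string AddTodo([Description("The item to add")] string item)
        => $"Added: {item}"; // simulated result, no real storage

    [KernelFunction, Description("Gets today's date")]
    public string GetCurrentDate()
        => DateTime.Today.ToString("yyyy-MM-dd");
}
```

It is registered on the kernel with `kernel.ImportPluginFromType<TodoPlugin>()`.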

The above are just simulated functions used for testing.

Just write it like this:

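Usage boils down to a couple of lines. The shape below is an assumption from memory; verify the exact class and method names against the UniversalLLMFunctionCaller repository:

```csharp
// Assumed shape of the entry point; check names against the
// UniversalLLMFunctionCaller repository before use.
var functionCaller = new UniversalLLMFunctionCaller(kernel);
string answer = await functionCaller.Invoke("Add 'buy milk' to my todo list");
Console.WriteLine(answer);
```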

Now let's explore what happens internally.

First convert the plugin to text:

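The idea of this step can be sketched without Semantic Kernel at all: flatten each function's name, parameters, and description into one line of text. `FunctionInfo` here is a stand-in for the kernel's real function metadata:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for Semantic Kernel's function metadata.
public record FunctionInfo(string Name, string Description, string[] Parameters);

public static class PluginText
{
    // Flattens every plugin function into one line of prompt text.
    public static string Describe(IEnumerable<FunctionInfo> functions) =>
        string.Join("\n", functions.Select(f =>
            $"{f.Name}({string.Join(", ", f.Parameters)}) - {f.Description}"));
}
```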

Add few-shot examples to the chat history:

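With Semantic Kernel's `ChatHistory`, this step looks roughly as follows. The example dialogue is illustrative, not the project's actual few-shot text:

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Few-shot examples teach the model the expected reply format:
// a single textual function call per turn, ending with Finished(...).
var history = new ChatHistory();
history.AddUserMessage(
    "Task: what is today's date? Functions: GetCurrentDate(), Finished(answer)");
history.AddAssistantMessage("GetCurrentDate()");
history.AddUserMessage("GetCurrentDate => 2024-08-28");
history.AddAssistantMessage("Finished(Today is 2024-08-28)");
```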

Add an instruction to the chat history:


All the available functions are then embedded into this prompt.
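As a sketch (the wording is mine; the project's actual prompt is in Chinese and differs), embedding the function list into the instruction might look like:

```csharp
// Builds the instruction prompt with the available functions embedded.
// The wording is illustrative only.
public static class PromptBuilder
{
    public static string Build(string functionsText, string task) =>
        "You are a planner. Reply with exactly one function call from the " +
        "list below. Call Finished(answer) when the task is complete.\n\n" +
        "Available functions:\n" + functionsText + "\n\n" +
        "Task: " + task;
}
```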

The instruction is then added to the chat history.

Depending on the task, the LLM is asked to choose which function, if any, should be called first:

The LLM returns the function that needs to be called to accomplish the task:
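The model's reply is a textual call such as `AddTodo(buy milk)`. A minimal parser for that shape might look like this; the project's actual reply format and parsing may differ:

```csharp
using System;
using System.Linq;

// Parsed form of a textual function call like "AddTodo(buy milk)".
public record ParsedCall(string Name, string[] Arguments);

public static class CallParser
{
    public static ParsedCall Parse(string reply)
    {
        int open = reply.IndexOf('(');
        int close = reply.LastIndexOf(')');
        if (open < 0 || close < open)
            return new ParsedCall(reply.Trim(), Array.Empty<string>());
        string name = reply[..open].Trim();
        string argList = reply[(open + 1)..close].Trim();
        string[] args = argList.Length == 0
            ? Array.Empty<string>()
            : argList.Split(',').Select(a => a.Trim()).ToArray();
        return new ParsedCall(name, args);
    }
}
```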

Validate this function call:
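One plausible validation, assuming calls are checked against a registry of known function names and arities (the names below are illustrative):

```csharp
using System.Collections.Generic;

// Before executing, check that the model picked a function that
// actually exists and supplied the right number of arguments.
public static class CallValidator
{
    // registry maps function name -> expected parameter count
    public static bool IsValid(string name, int argCount,
                               IReadOnlyDictionary<string, int> registry) =>
        registry.TryGetValue(name, out int expected) && expected == argCount;
}
```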

Invoke the chosen function in the plugin:

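Executing the chosen function goes through the kernel. The plugin and function names below refer to a hypothetical `TodoPlugin` and are illustrative:

```csharp
using Microsoft.SemanticKernel;

// Invoke a plugin function by name with the parsed arguments.
var arguments = new KernelArguments { ["item"] = "buy milk" };
FunctionResult result = await kernel.InvokeAsync("TodoPlugin", "AddTodo", arguments);
// The function's return value is fed back to the LLM in the next turn.
Console.WriteLine(result.ToString());
```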

The result returned by the first function:


Another request is then sent to the LLM to decide which function should be called next; the LLM returns:

Similarly execute the second function in the plugin:


The result returned by the second function:


The request is then sent to the LLM:


When the function the LLM selects is named Finished, the task is complete and the loop can exit.
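The whole control flow can be simulated without any LLM by scripting the model's replies: keep asking "which function next?", execute it, feed the result back, and stop when the model answers with the sentinel `Finished(...)`. A sketch with deliberately simplified parsing:

```csharp
using System;
using System.Collections.Generic;

public static class FunctionCallLoop
{
    // askLlm: feedback text -> next textual function call
    // execute: (function name, argument) -> result text
    public static string Run(Func<string, string> askLlm,
                             Func<string, string, string> execute,
                             string task, int maxSteps = 10)
    {
        string feedback = task;
        for (int i = 0; i < maxSteps; i++)
        {
            string reply = askLlm(feedback).Trim();
            if (reply.StartsWith("Finished(")) // sentinel function ends the loop
                return reply["Finished(".Length..^1];
            int open = reply.IndexOf('(');
            string name = reply[..open];
            string arg = reply[(open + 1)..^1];
            // Feed the function's result back to the model for the next turn.
            feedback = $"{name} => {execute(name, arg)}";
        }
        throw new InvalidOperationException("No Finished() within step limit");
    }
}
```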

The final answer is then obtained.

The result is then returned to the user.

That is the rough flow of this method; the concrete implementation can be found in the open-source code on GitHub.

LLMs tested and working with this method

Platform | Usable models
SiliconFlow | Llama-3.1-405/70/8B, Llama-3-70/8B-Instruct, DeepSeek-V2-Chat, deepseek-llm-67b-chat, Qwen2-72/57/7/1.5B-Instruct, Qwen2-57B-A14B-Instruct, Qwen1.5-110/32/14B-Chat, Qwen2-Math-72B-Instruct, Yi-1.5-34/9/6B-Chat-16K, internlm2_5-20/7b-chat
iFlytek Spark | Spark Lite, Spark Pro-128K, Spark Max, Spark4.0 Ultra
01.AI | yi-large, yi-medium, yi-spark, yi-large-rag, yi-large-fc, yi-large-turbo
Moonshot AI | moonshot-v1-8k, moonshot-v1-32k, moonshot-v1-128k
Zhipu AI | glm-4-0520, glm-4, glm-4-air, glm-4-airx, glm-4-flash, glm-4v, glm-3-turbo
DeepSeek | deepseek-chat, deepseek-coder
StepFun | step-1-8k, step-1-32k, step-1-128k, step-2-16k-nightly, step-1-flash
MiniMax | abab6.5s-chat, abab5.5-chat
Alibaba Bailian | qwen-max, qwen2-math-72b-instruct, qwen-max-0428, qwen2-72b-instruct, qwen2-57b-a14b-instruct, qwen2-7b-instruct

The list above is not necessarily complete; some models remain untested, so additions are welcome.