
Implementing Big Model Function Calling with Semantic Kernel Framework and C#.NET

1. Implement Function Calling with the OpenAI API in Python

Define the function: implement locally the function the model can call, for example a weather query.

def get_current_weather(location, unit='Celsius'):
    #Implement the logic of obtaining weather information here
    return {"location": location, "temperature": "22", "unit": unit, "description": "sunny"}

Describe the function: provide GPT-4 with a description of the function, including its name, purpose, and parameter information. This helps the model understand what the function does and how to call it.

function_descriptions = [
    {
        "name": "get_current_weather",
        "description": "Get current weather information for a specified location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "Location name"
                },
                "unit": {
                    "type": "string",
                    "enum": ["Celsius", "Fahrenheit"],
                    "description": "Temperature unit, default to degrees Celsius"
                }
            },
            "required": ["location"]
        }
    }
]

Interact with GPT-4: Pass user input, function description, and model name to GPT-4. The model will decide whether to call the function based on user input and function description, and return the corresponding response.

import openai
import json

openai.api_key = 'YOUR_OPENAI_API_KEY'

def chat_with_gpt(messages, functions):
    response = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=messages,
        functions=functions,
        function_call="auto"  #The model will decide whether to call the function as needed
    )
    return response

#User input
user_message = {"role": "user", "content": "Please tell me about the current weather in Beijing."}

#Interact with the model
response = chat_with_gpt([user_message], function_descriptions)

Processing model responses: Check the response of the model to determine whether the function needs to be called. If the model returns function call information, the function name and parameters are extracted and the corresponding function is called.

response_message = response["choices"][0]["message"]

if "function_call" in response_message:
    #Extract function names and parameters
    function_name = response_message["function_call"]["name"]
    function_args = json.loads(response_message["function_call"]["arguments"])  #arguments is a JSON string

    #Call the corresponding function
    if function_name == "get_current_weather":
        function_response = get_current_weather(
            location=function_args.get("location"),
            unit=function_args.get("unit", "Celsius")
        )

        #Pass the function response back to the model to get the final answer
        messages = [
            user_message,
            response_message,  #Contains function call information
            {
                "role": "function",
                "name": function_name,
                "content": json.dumps(function_response)
            }
        ]
        final_response = chat_with_gpt(messages, function_descriptions)
        answer = final_response["choices"][0]["message"]["content"]
        print(answer)
else:
    #The model provides the answer directly
    answer = response_message["content"]
    print(answer)

GPT-4 does not execute function calls directly. Instead, based on the provided function descriptions, it generates a JSON object containing the function name and arguments. The application must then parse that object and actually invoke the corresponding function.
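The parse-and-dispatch step described above can be sketched in isolation. The payload below is a fabricated example of the "function_call" object the model returns; note that "arguments" arrives as a JSON string, not a dict:

```python
import json

# Fabricated example of the model's "function_call" object.
function_call = {
    "name": "get_current_weather",
    "arguments": '{"location": "Beijing", "unit": "Celsius"}',
}

# Map the function names declared to the model onto local implementations.
available_functions = {
    "get_current_weather": lambda location, unit="Celsius": {
        "location": location, "temperature": "22",
        "unit": unit, "description": "sunny",
    },
}

func = available_functions[function_call["name"]]   # look up by name
args = json.loads(function_call["arguments"])        # parse the JSON string
result = func(**args)                                # actually invoke it
print(result["location"], result["temperature"])     # Beijing 22
```

A dispatch table like available_functions scales better than an if/elif chain once more than one function is registered.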

Finally, put the result returned by the function into the prompt, call the model API again, and return the newly generated content to the user.

2. Use the Semantic Kernel framework and C#.NET to implement Function Calling

In the Semantic Kernel framework, the big model can execute functions in plugins through Function Calling. The following example shows how to have a big model call a plugin function in Semantic Kernel.

Design a calculator plugin containing an add_numbers method that the big model can call to perform addition.

First, install the Semantic Kernel NuGet package:

dotnet add package Microsoft.SemanticKernel

In Semantic Kernel, a plugin is a C# class whose methods are marked with the [KernelFunction] attribute.

using Microsoft.SemanticKernel;

public class CalculatorPlugin
{
    [KernelFunction("add_numbers")]
    public int AddNumbers(int a, int b)
    {
        return a + b;
    }
}

In the Main method, initialize the Semantic Kernel and register this plugin.

using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

class Program
{
    static async Task Main(string[] args)
    {
        //1. Create a Kernel instance
        var kernel = Kernel.CreateBuilder()
            .AddOpenAIChatCompletion(
                "gpt-4-turbo",  //OpenAI model name
                "your-openai-api-key") //Replace with your API Key
            .Build();

        //2. Load the plugin (CalculatorPlugin)
        var plugin = kernel.ImportPluginFromObject(new CalculatorPlugin(), "Calculator");

        //3. Invoke `add_numbers` through the kernel
        var result = await kernel.InvokeAsync("Calculator", "add_numbers", new KernelArguments
        {
            { "a", 5 },
            { "b", 10 }
        });

        Console.WriteLine($"Function Call Result: {result}");
    }
}

Run dotnet run, and the output is: Function Call Result: 15

Description of the code execution principle

  • Semantic Kernel provides a Plugins mechanism that allows the big model to call methods in .NET code.
  • [KernelFunction("add_numbers")] lets the big model know that this function can be called.
  • kernel.ImportPluginFromObject(new CalculatorPlugin(), "Calculator") loads CalculatorPlugin into Semantic Kernel as a plugin.
  • kernel.InvokeAsync("Calculator", "add_numbers", new KernelArguments { { "a", 5 }, { "b", 10 } }) invokes add_numbers and passes in its parameters.
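The mechanism in the bullets above (mark a method as callable, register it under a plugin name, invoke it by name with named arguments) can be illustrated with a conceptual Python sketch. This is not the Semantic Kernel API itself, just the same idea in miniature:

```python
# Conceptual sketch of a decorator-based plugin registry
# (illustration only; not the actual Semantic Kernel API).
plugins = {}

def kernel_function(name):
    """Mark a method as callable under the given name, like [KernelFunction]."""
    def decorator(func):
        func._kernel_name = name
        return func
    return decorator

class CalculatorPlugin:
    @kernel_function("add_numbers")
    def add_numbers(self, a: int, b: int) -> int:
        return a + b

def import_plugin(obj, plugin_name):
    """Register every marked method of obj under plugin_name."""
    for attr in dir(obj):
        method = getattr(obj, attr)
        if callable(method) and hasattr(method, "_kernel_name"):
            plugins[(plugin_name, method._kernel_name)] = method

def invoke(plugin_name, function_name, arguments):
    """Look up a registered function by name and call it with named arguments."""
    return plugins[(plugin_name, function_name)](**arguments)

import_plugin(CalculatorPlugin(), "Calculator")
print(invoke("Calculator", "add_numbers", {"a": 5, "b": 10}))  # 15
```

The registry keyed by (plugin name, function name) is what lets a caller, whether application code or a model-generated function call, reach the method without knowing the class it lives in.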

 

Zhou Guoqing

2025/3/18