
SimpleAIAgent: Start building simple AI Agent apps with the free glm-4-flash!

Published 2024-09-25 12:43:53

SimpleAIAgent is an AI Agent exploration app built with C# Semantic Kernel and WPF. Its main purpose is to explore building AI Agent applications with domestic (Chinese) or open-source large language models. I hope it helps anyone who is interested.

Next, I would like to share my AI Agent application practice.

Translate text and save it to a file

The first example translates text and saves it to a specified file.

Enter the following:

(screenshot: image-20240925113714519)

Implementation process

In the first step, the LLM determines which function should be called and with which parameters:

(screenshot: image-20240925113837225)

In the second step, the LLM calls this function for us and returns the result:

(screenshot: image-20240925113939862)

In the third step, the LLM again determines which function should be called and with which parameters:

(screenshot: image-20240925114202861)

In the fourth step, the LLM calls this function and returns its return value:

(screenshot: image-20240925114250823)

In the fifth step, the LLM determines that the task has been completed and calls the end function:

(screenshot: image-20240925114350284)

In the sixth step, the final response is returned:

(screenshot: image-20240925114503461)

View Results

(screenshot: image-20240925114554332)

You will find a new file on your desktop; opening it shows the following:

(screenshot: image-20240925114623548)

The AI Agent application above can be built with glm-4-flash, but you can of course try other models; the stronger the model, the higher the probability of success.
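The six steps above are one round of an automatic function-calling loop. With Semantic Kernel that loop can be sketched roughly as follows; this is a minimal illustration, not code from the project, and the API key, the prompt, and the reuse of the project's `OpenAIHttpClientHandler` (which redirects requests to the GLM endpoint) are assumptions:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "glm-4-flash",                                      // the free model used in this post
    apiKey: "your-api-key",                                      // placeholder: fill in your own key
    httpClient: new HttpClient(new OpenAIHttpClientHandler()));  // project's handler, redirects to the GLM endpoint

var kernel = builder.Build();
kernel.ImportPluginFromType<TranslationFunctions>();

// AutoInvokeKernelFunctions makes the kernel run the
// pick-function -> call -> feed-result-back loop for us until the model is done.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("Translate 'Hello' into Chinese and save the result to a file on the desktop.");

var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(reply.Content); // the final response, after the tool calls complete
```

Everything between sending the user message and printing the reply corresponds to steps one through five in the screenshots; the printed reply is step six.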

Implementation of file-to-file translation

Input:

(screenshot: image-20240925114853823)

The contents of the document are as follows:

(screenshot: image-20240925115006964)

It is a description of WPF in Chinese; now I want the LLM to translate it into English and save it to another file.

Again, we use the free glm-4-flash.

Implementation process

In the first step, the LLM determines which function should be called and with which parameters:

(screenshot: image-20240925115631597)

In the second step, the LLM calls this function for us and returns the result:

(screenshot: image-20240925120033177)

In the third step, the LLM determines that the task has been completed and calls the end function:

(screenshot: image-20240925115856804)

In the fourth step, the final response is returned:

(screenshot: image-20240925115922792)

View Results

(screenshot: image-20240925120115600)

(screenshot: image-20240925120135716)

Key points of the implementation

You may have noticed that the main point of the implementation is letting the LLM call functions automatically, that is, implementing automatic function calling.

After that, all you have to do is write a plugin for whatever you want the LLM to do automatically, and then import that plugin.

It is better not to put too many functions in one plugin; with too many, weaker models will mess up the calls. A better approach is to give different roles different plugins according to your needs.
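One way to follow this advice is to build a separate kernel per role, each importing only the plugin that role needs, so a weaker model sees a small, unambiguous set of functions. A sketch under that assumption (the factory method and plugin name are illustrative, not from the project):

```csharp
using Microsoft.SemanticKernel;

// Hypothetical factory: a "translator" role gets only the translation plugin.
Kernel BuildTranslatorKernel(string apiKey)
{
    var builder = Kernel.CreateBuilder();
    builder.AddOpenAIChatCompletion(
        modelId: "glm-4-flash",
        apiKey: apiKey,                                              // placeholder
        httpClient: new HttpClient(new OpenAIHttpClientHandler()));  // project's GLM redirect handler

    var kernel = builder.Build();
    // Only translation functions are visible to this role's model.
    kernel.ImportPluginFromType<TranslationFunctions>("Translator");
    return kernel;
}
```

A "summarizer" or "file manager" role would get its own kernel with its own small plugin in the same way.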

Plugins can be written like this, using the translation plugin above as an example:

#pragma warning disable SKEXP0050
using System.ComponentModel;
using System.IO;
using System.Net.Http;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Text;

    internal class TranslationFunctions
    {
        private readonly Kernel _kernel;

        public TranslationFunctions()
        {
            var handler = new OpenAIHttpClientHandler();
            var builder = Kernel.CreateBuilder()
                .AddOpenAIChatCompletion(
                    modelId: "glm-4-flash",   // fill in the model you want to use
                    apiKey: "your-api-key",   // fill in your own API key
                    httpClient: new HttpClient(handler));
            _kernel = builder.Build();
        }

        [KernelFunction, Description("Translate text into the language chosen by the user")]
        public async Task<string> TranslateText(
            [Description("Text to be translated")] string text,
            [Description("Target language; choose one of 'Chinese' or 'English'")] string language)
        {
            string skPrompt = """
                            {{$input}}

                            Translate the above text into {{$language}}. Output nothing else.
                            """;
            var result = await _kernel.InvokePromptAsync(skPrompt, new() { ["input"] = text, ["language"] = language });
            return result.GetValue<string>() ?? string.Empty;
        }

        [KernelFunction, Description("Translate the contents of one file and save the translation into another file")]
        public async Task<string> TranslateTextFileToFile(
           [Description("Path to the file to be translated")] string path1,
           [Description("Path to the file where the translation result is saved")] string path2,
           [Description("Target language; choose one of 'Chinese' or 'English'")] string language)
        {
            string fileContent = File.ReadAllText(path1);
            // TextChunker (experimental, hence SKEXP0050) splits long text into model-sized chunks
            var lines = TextChunker.SplitPlainTextLines(fileContent, 100);
            var paragraphs = TextChunker.SplitPlainTextParagraphs(lines, 1000);
            string result = "";
            string skPrompt = """
                            {{$input}}

                            Translate the above text into {{$language}}. Output nothing else.
                            """;
            foreach (var paragraph in paragraphs)
            {
                var result1 = await _kernel.InvokePromptAsync(skPrompt, new() { ["input"] = paragraph, ["language"] = language });
                result += result1.GetValue<string>() + "\r\n";
            }

            // Use StreamWriter to append the translated text to the target file
            using (StreamWriter writer = new StreamWriter(path2, true))
            {
                writer.Write(result);
            }

            string message = $"Successfully translated file {path1} into file {path2}";
            return message;
        }

        [KernelFunction, Description("Save text to a file")]
        public string SaveTextToFile(
           [Description("Text to be saved")] string text,
           [Description("Path of the file to save to")] string filePath)
        {
            // Use StreamWriter to append the text to the file
            using (StreamWriter writer = new StreamWriter(filePath, true))
            {
                writer.Write(text);
            }
            return "Successfully written to file";
        }

        [KernelFunction, Description("Read text from a file")]
        public string GetTextFromFile(
           [Description("Path of the file to be read")] string filePath)
        {
            string fileContent = File.ReadAllText(filePath);
            return fileContent;
        }
    }

It is just a matter of adding descriptions to help the LLM understand the purpose of each function. I believe building your own AI Agent application is now well within reach for my programmer friends.

I hope this post is helpful to those interested in building AI Agent applications with LLMs.

For those interested in this app, pull the code, fill in your API Key and model name (or, with Ollama, fill in the address and model name) for a quick experience.

GitHub address: https://github.com/Ming-jiayou/SimpleAIAgent