Prerequisites: make sure you have a local installation of Ollama and the offline model you want to use, or an existing remote model environment. If not, please deploy one yourself; if you need help, contact information is provided at the end of the article. Since deploying an offline large model is straightforward and well documented online, I will skip that step here.
Create a project and add the OllamaSharp NuGet package, which is released under the MIT license.
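For reference, assuming you use the .NET CLI, the package can be added with a single command:

dotnet add package OllamaSharp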
Make sure the Ollama app is launched.
The local Ollama service listens on port 11434 by default; when deploying, you can also set environment variables to change the service address, the default port, whether remote access is allowed, and so on. Create a connection and verify it: if the check returns true, the connection is working.
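A minimal connectivity check might look like the sketch below. It assumes the default local address and OllamaSharp's IsRunningAsync() helper; adjust the URI if you changed the port or host.

using OllamaSharp;

var uri = new Uri("http://localhost:11434"); // default local Ollama address
var ollama = new OllamaApiClient(uri);

// returns true when the Ollama service answers on that address
var connected = await ollama.IsRunningAsync();
Console.WriteLine(connected ? "Connection OK" : "Connection failed");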
You can enumerate the models that are already installed, so let's write a small selection step that lets the user pick a model by typing its index. Because the result is an IEnumerable collection, for convenience I convert it to an array and use the index to look up the model name. Remember to tidy up this code when you use it yourself.
Run it, and you can see the list of models I have locally, along with their details.
After selecting the model, bind the system prompt and create a chat conversation. Once the conversation is created, inference runs on the user's input and the response content is returned, as sketched below.
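Here is a small interactive loop as a sketch, assuming the ollama client, the selected model, and a prompt system-prompt string from the previous steps:

var chat = new Chat(ollama, prompt);
while (true)
{
    Console.Write("You: ");
    var message = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(message)) break; // empty input ends the dialog

    // stream the answer token by token as the model generates it
    await foreach (var answerToken in chat.SendAsync(message))
        Console.Write(answerToken);
    Console.WriteLine();
}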
The results of the run are as follows:
If you want the source code for this demo, follow my personal WeChat official account, Dotnet Dancer, and reply 【code demo】 to get the address of the open-source repository.
Core code snippet:
var uri = new Uri("http://localhost:11434");
var ollama = new OllamaApiClient(uri);
var models = await ollama.ListLocalModelsAsync(); // list the models installed locally
int index = 0;
foreach (var model in models)
{
    Console.WriteLine($"{index++}:{model.Name} {model.Size / 1024 / 1024} MB"); // output model name and size
}
int selectIndex = Convert.ToInt32(Console.ReadLine()); // read the index typed by the user
ollama.SelectedModel = models.ToArray()[selectIndex].Name; // select model name by index
var chat = new Chat(ollama, prompt); // prompt: the system prompt string bound to the conversation
await foreach (var answerToken in chat.SendAsync(message)) // message: the user's input text
    Console.Write(answerToken);