What is Avalonia?
Avalonia is a cross-platform UI framework for .NET with a flexible styling system, supporting Windows, macOS, Linux, iOS, Android and WebAssembly. It is mature enough for production use and is used by companies such as Schneider Electric, Unity, JetBrains and GitHub.
Considered by many to be the successor to WPF, Avalonia provides XAML developers with a familiar and modern cross-platform application development experience. Although similar to WPF, Avalonia is not an exact copy and contains many improvements.
What is Semantic Kernel?
Semantic Kernel is an SDK that integrates large language models (such as OpenAI, Azure OpenAI and Hugging Face) with conventional programming languages (such as C#, Python and Java). Its distinguishing feature is that it lets you define plugins and chain calls to them, so that the AI models can be orchestrated and combined automatically: the user states a goal, Semantic Kernel's planner generates a plan to achieve it, and the system then executes that plan automatically.
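As a rough illustration, here is a minimal sketch of defining and invoking a prompt-based function with Semantic Kernel; the model name, API key and prompt below are placeholders for the example, not part of the tool built later:

using Microsoft.SemanticKernel;

// Build a kernel backed by an OpenAI chat model
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "your apikey")
    .Build();

// Wrap a prompt template as a reusable function that can be chained with others
var summarize = kernel.CreateFunctionFromPrompt(
    "Summarize the following text in one sentence: {{$input}}");

var result = await kernel.InvokeAsync(summarize, new() { ["input"] = "Some long text..." });
Console.WriteLine(result);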
Introduction to SiliconFlow
SiliconFlow is committed to building AI infrastructure for the era of large models, accelerating AGI for the benefit of humanity through collaborative innovation across algorithms, systems and hardware, and reducing the cost of large-model applications and the barrier to building them by orders of magnitude.
SiliconCloud is a one-stop cloud service platform that brings together mainstream open-source large models, providing developers with faster, cheaper, more comprehensive and smoother model APIs.
At present, SiliconCloud hosts a variety of open-source large language models and image generation models, including DeepSeek-Coder-V2, Stable Diffusion 3 Medium, Qwen2, GLM-4-9B-Chat, DeepSeek V2, SDXL and InstantID, allowing users to switch freely between models suited to different application scenarios. At the same time, SiliconCloud provides out-of-the-box inference acceleration for large models, bringing a more efficient user experience to generative AI applications.
We know that using OpenAI in China is not easy and the cost is high. There are now many open-source models, but for individual developers a major obstacle to deploying them is hardware: without a graphics card you can still run some open-source models with fewer parameters, but inference will be very slow. I chose SiliconCloud for three reasons: first, signing up gave me a $42 credit that does not expire and can be used indefinitely; second, I tried the inference speed and it really is fast; and third (most importantly), SiliconCloud announced that top open-source models on the platform, such as Qwen2 (7B), GLM4 (9B) and Yi1.5 (9B), are free to use.
What kind of tool to build
I've been learning Avalonia lately, and building a small tool to meet my own needs is a good place to start. I'm also interested in Semantic Kernel, so I decided to start from the basics and build a chat application backed by a large model. Personally, one of my main uses for large models is translation: when I browse English websites and come across something I don't quite understand, I ask a large model to translate it into Chinese. So I chose to build a small Avalonia tool to meet this need. The effect of the tool is shown below:
Chat
English to Chinese
Chinese to English
Getting Started
Using the API service provided by SiliconCloud in Semantic Kernel
The first problem to solve is how to consume the service provided by SiliconCloud from Semantic Kernel.
Nothing in Semantic Kernel tells us how to connect to other large models, but since the API provided by SiliconCloud is compatible with OpenAI's, we can redirect each request to SiliconCloud's endpoint at the moment it is sent.
Add the OpenAIHttpClientHandler class:
public class OpenAIHttpClientHandler : HttpClientHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        UriBuilder uriBuilder;
        switch (request.RequestUri?.LocalPath)
        {
            case "/v1/chat/completions":
                // Rewrite the request so it goes to SiliconCloud's OpenAI-compatible endpoint
                uriBuilder = new UriBuilder
                {
                    Scheme = "https",
                    Host = "api.siliconflow.cn",
                    Path = "v1/chat/completions",
                };
                request.RequestUri = uriBuilder.Uri;
                break;
        }

        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        return response;
    }
}
The kernel is built in this way:
var handler = new OpenAIHttpClientHandler();

var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "Qwen/Qwen1.5-7B-Chat",
        apiKey: "Your apikey",
        httpClient: new HttpClient(handler));

_kernel = builder.Build();
_kernel is a private field of the ViewModel:
private Kernel _kernel;
Building the page
The axaml is shown below. The namespaces, class name and icon path assume a project named AvaloniaChat laid out like the default Avalonia MVVM template; adjust them to match your own project:
<Window xmlns="https://github.com/avaloniaui"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:vm="using:AvaloniaChat.ViewModels"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:views="clr-namespace:AvaloniaChat.Views"
        mc:Ignorable="d" d:DesignWidth="800" d:DesignHeight="450"
        x:Class="AvaloniaChat.Views.MainWindow"
        Icon="/Assets/avalonia-logo.ico"
        Title="AvaloniaChat">

    <Design.DataContext>
        <!-- This only sets the DataContext for the previewer in an IDE,
             to set the actual DataContext for runtime, set the DataContext property in code (look at App.axaml.cs) -->
        <vm:MainViewModel />
    </Design.DataContext>

    <StackPanel>
        <Grid>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="*" />
            </Grid.ColumnDefinitions>
            <Grid Grid.Column="0">
                <StackPanel>
                    <StackPanel Orientation="Horizontal">
                        <Button Content="Ask AI" Margin="10"
                                Command="{Binding AskCommand}"></Button>
                        <!--<Button Content="Translate into:"></Button>-->
                        <Label Content="Translate into:"
                               HorizontalAlignment="Center"
                               VerticalAlignment="Center"></Label>
                        <ComboBox ItemsSource="{Binding Languages}"
                                  SelectedItem="{Binding SelectedLanguage}"
                                  HorizontalAlignment="Center"
                                  VerticalAlignment="Center"></ComboBox>
                        <Button Content="Translate" Margin="10"
                                Command="{Binding TranslateCommand}"></Button>
                    </StackPanel>
                    <TextBox Height="300" Margin="10"
                             Text="{Binding AskText}"
                             TextWrapping="Wrap"
                             AcceptsReturn="True"></TextBox>
                </StackPanel>
            </Grid>
            <Grid Grid.Column="1">
                <StackPanel>
                    <Button Content="AI response" Margin="10"></Button>
                    <TextBox Height="300"
                             Margin="10"
                             Text="{Binding ResponseText}"
                             TextWrapping="Wrap"></TextBox>
                </StackPanel>
            </Grid>
        </Grid>
    </StackPanel>
</Window>
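The Design.DataContext above only affects the IDE previewer. At runtime the DataContext is normally assigned in App.axaml.cs; a minimal sketch following the default Avalonia MVVM template (class names assumed) looks like this:

public override void OnFrameworkInitializationCompleted()
{
    if (ApplicationLifetime is IClassicDesktopStyleApplicationLifetime desktop)
    {
        // Assign the runtime DataContext when creating the main window
        desktop.MainWindow = new MainWindow
        {
            DataContext = new MainViewModel()
        };
    }

    base.OnFrameworkInitializationCompleted();
}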
The interface effect is shown below:
Building the ViewModel
The ViewModel is shown below:
using System.Net.Http;
using System.Threading.Tasks;
using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;
using Microsoft.SemanticKernel;

public partial class MainViewModel : ViewModelBase
{
    private Kernel _kernel;

    [ObservableProperty]
    private string askText;

    [ObservableProperty]
    private string responseText;

    [ObservableProperty]
    private string selectedLanguage;

    public string[] Languages { get; set; }

    public MainViewModel()
    {
        // Route OpenAI-style requests to SiliconCloud through the custom handler
        var handler = new OpenAIHttpClientHandler();

        var builder = Kernel.CreateBuilder()
            .AddOpenAIChatCompletion(
                modelId: "Qwen/Qwen1.5-7B-Chat",
                apiKey: "your apikey",
                httpClient: new HttpClient(handler));

        _kernel = builder.Build();

        AskText = "";
        ResponseText = "";
        Languages = new string[] { "Chinese", "English" };
        SelectedLanguage = Languages[0];
    }

    [RelayCommand]
    private async Task Ask()
    {
        if (ResponseText != "")
        {
            ResponseText = "";
        }

        await foreach (var update in _kernel.InvokePromptStreamingAsync(AskText))
        {
            ResponseText += update.ToString();
        }
    }

    [RelayCommand]
    private async Task Translate()
    {
        string skPrompt = """
            {{$input}}
            Translate the above input into {{$language}} without anything else
            """;

        if (ResponseText != "")
        {
            ResponseText = "";
        }

        await foreach (var update in _kernel.InvokePromptStreamingAsync(skPrompt, new() { ["input"] = AskText, ["language"] = SelectedLanguage }))
        {
            ResponseText += update.ToString();
        }
    }
}
Using streaming responses
[RelayCommand]
private async Task Ask()
{
    if (ResponseText != "")
    {
        ResponseText = "";
    }

    await foreach (var update in _kernel.InvokePromptStreamingAsync(AskText))
    {
        ResponseText += update.ToString();
    }
}
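Streaming appends each chunk to ResponseText as it arrives, so the answer appears progressively. For comparison, a non-streaming call waits for the whole answer before displaying it; a sketch reusing the same _kernel (the command name AskOnce is made up for the example):

[RelayCommand]
private async Task AskOnce()
{
    // Wait for the complete response instead of appending streamed chunks
    var result = await _kernel.InvokePromptAsync(AskText);
    ResponseText = result.ToString();
}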
The result is shown below:
Writing the prompt
For the translation function, we want the model to translate the text and output nothing else. A simple prompt template looks like this:
string skPrompt = """
{{$input}}
Translate the above input into {{$language}} without anything else
""";
{{$input}} and {{$language}} are parameters in the template that are replaced with the actual values when the prompt is invoked, as shown below:
await foreach (var update in _kernel.InvokePromptStreamingAsync(skPrompt, new() { ["input"] = AskText, ["language"] = SelectedLanguage }))
{
    ResponseText += update.ToString();
}
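If the translation prompt is used in several places, it can also be wrapped once as a kernel function and streamed with explicit arguments; a sketch equivalent to the inline prompt above:

// Create the function once, for example in the constructor
var translateFunction = _kernel.CreateFunctionFromPrompt(skPrompt);

// Stream the result with the same arguments as before
await foreach (var update in _kernel.InvokeStreamingAsync(translateFunction,
                   new() { ["input"] = AskText, ["language"] = SelectedLanguage }))
{
    ResponseText += update.ToString();
}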
With these steps, we have built a simple translation gadget using Avalonia and Semantic Kernel.