NextChat Introduction
Deploy a well-designed, cross-platform, private ChatGPT web UI for free with one click, with support for GPT-3, GPT-4, and Gemini Pro.
Key Features
- Free one-click deployment on Vercel in under 1 minute
- Compact (~5 MB) cross-platform client for Linux, Windows, and macOS
- Full Markdown support: LaTeX formulas, Mermaid flowcharts, code highlighting, and more
- Well-designed, responsive UI with dark mode and PWA support
- Fast first-screen load (~100 kB), with streaming response support
- Privacy and security: all data is stored locally in the user's browser
- Prompt templates ("masks") to easily create, share, and debug personalized conversations
- A large collection of built-in prompts, in both Chinese and English
- Automatic compression of chat history to support very long conversations while saving tokens
- Multi-language support: English, Simplified Chinese, Traditional Chinese, Japanese, Español, Italiano, Türkçe, Deutsch, Tiếng Việt, Русский, Čeština, 한국어, Indonesia
- Have your own domain name? Bind it for quick, unblocked access from anywhere
GitHub Address:/ChatGPTNextWeb/ChatGPT-Next-Web
Introduction to SiliconCloud
SiliconCloud provides cost-effective GenAI services built on excellent open-source base models.
Unlike most large-model cloud platforms, which offer only their own models' APIs, SiliconCloud hosts a variety of open-source large language models and image generation models, including Qwen, DeepSeek, GLM, Yi, Mistral, LLaMA 3, SDXL, and InstantID, so users are free to switch between models suited to different application scenarios.
More importantly, SiliconCloud offers an out-of-the-box inference acceleration service for large models, bringing a more responsive experience to your GenAI applications.
For developers, SiliconCloud provides one-click access to top open-source models, speeding up application development while significantly reducing the cost of trial and error.
Official website address:/zh-cn/siliconcloud
Accessing the SiliconCloud API in NextChat
Click Releases:
Select the installation package for the corresponding operating system:
After installation, the launch screen looks like this:
Click Settings and configure the following:
Since the API service provided by SiliconCloud is compatible with the OpenAI format, the model service provider can be left at its default; just change the endpoint address to SiliconCloud's, fill in your API key, and enter the model name.
The exact model names to use are listed in SiliconCloud's documentation:
Address:/reference/chat-completions-3
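Because the API is OpenAI-compatible, the three settings above (endpoint address, API key, model name) map directly onto a standard chat-completions request. Below is a minimal sketch using only the Python standard library; the base URL here is a placeholder you must replace with the real address from SiliconCloud's documentation, and the API key is your own:

```python
import json
import urllib.request

# Placeholder values: take the real base URL from SiliconCloud's docs
# and use your own API key.
BASE_URL = "https://<siliconcloud-api-host>/v1"
API_KEY = "sk-..."
MODEL = "meta-llama/Meta-Llama-3.1-405B-Instruct"

def build_chat_request(model, messages):
    """Build an OpenAI-format chat-completions payload."""
    return {"model": model, "messages": messages}

def chat(prompt):
    """Send a single user prompt and return the model's reply text."""
    payload = build_chat_request(MODEL, [{"role": "user", "content": prompt}])
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

NextChat fills in these same fields for you; the sketch only shows what its Settings panel configures under the hood.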
Start a conversation to test whether the configuration succeeded:
The configuration was successful.
A key advantage of SiliconCloud is the range of advanced open-source large language models it offers. If, for a given task, a cheaper model performs as well as a more expensive one, you can switch to the cheaper model and save on costs. Accessing SiliconCloud's API makes it easy to try out these different models.
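The cost-saving idea above can be made concrete with a tiny routing helper: keep one cheap and one strong model name configured, and pick per task. A sketch under assumptions: the cheap model name is an example (check SiliconCloud's catalog for current names and pricing), and the difficulty label is a stand-in for whatever criterion you actually use:

```python
# Example model names: the strong model is from this article; the cheap
# one is an assumed example. Verify both against SiliconCloud's catalog.
CHEAP_MODEL = "Qwen/Qwen2-7B-Instruct"
STRONG_MODEL = "meta-llama/Meta-Llama-3.1-405B-Instruct"

def pick_model(task_difficulty: str) -> str:
    """Route hard tasks to the strong (pricier) model, everything else
    to the cheaper one."""
    return STRONG_MODEL if task_difficulty == "hard" else CHEAP_MODEL
```

In NextChat itself the equivalent action is simply editing the model name field in Settings before starting a conversation.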
For example, to use the advanced open-source model meta-llama/Meta-Llama-3.1-405B-Instruct, I just need to change the model name:
Here is how meta-llama/Meta-Llama-3.1-405B-Instruct answers:
With the simple steps above, you can access the SiliconCloud API in NextChat and start exploring how different advanced open-source large language models answer.