
It's here: a reactive programming paradigm tailored for large models (1) - Getting started with DeepSeek access

Published: 2025-02-27 23:14:46

Ta-da, it's here! 👋 Today we introduce FEL, a new Java reactive programming paradigm for large models. You may have heard of LangChain; for now, you can think of FEL as a Java version of LangChain. 😎 Without further ado, today we will connect to the currently popular DeepSeek to get to know FEL. With FEL, you can easily orchestrate and run large-model applications, opening a new chapter in intelligent programming! 🎉

🛠️ Quick start: easy access to DeepSeek

1. Prepare the environment

First comes the usual prerequisite: preparing the environment. Go to the FIT-Framework project address and download the project code. Following the getting-started guide, you can quickly deploy a FIT environment; then refer to the FEL instructions to master the power of the FEL module. The FEL module supports not only DeepSeek but also any large model that complies with the OpenAI API standard. In addition, it provides a wealth of tools and large-model operation primitives to help you quickly build smart applications. 🛠️

The example below is FEL Example: 01-model; you can quickly get started with this code. For details, see the FEL Instruction Manual: Chat Model Use.

2. Start and run

Key Code

Here we use FEL's most basic large-model access capability to connect to DeepSeek. Through the default implementation of ChatModel, you can specify the large model to call. Below are the key code snippets for a normal call and a streaming call in the example.

Normal call:

public ChatMessage chat(@RequestParam("query") String query) {
    // "this.chatModel" and "this.modelName" are injected elsewhere in the example.
    ChatOption option = ChatOption.custom().model(this.modelName).stream(false).build();
    return this.chatModel.generate(ChatMessages.from(new HumanMessage(query)), option).first().block().get();
}

Streaming call:

public Choir<ChatMessage> chatStream(@RequestParam("query") String query) {
    ChatOption option = ChatOption.custom().model(this.modelName).stream(true).build();
    return this.chatModel.generate(ChatMessages.from(new HumanMessage(query)), option);
}
  1. ChatOption specifies the name of the large model to call and whether to stream (setting stream to true means the result is returned as a stream).
  2. Then call the generate method to start the call. The returned result is a reactive stream object, through which you can obtain the execution result.
  3. With that, a simple access example is complete! 🎉
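To build intuition for the two calling styles, here is a toy, self-contained sketch of the pattern. ToyChoir is a stand-in invented for illustration, not FEL's real Choir API; it only shows why a normal call ends with first().block().get() while a streaming call hands the stream itself to the caller:

```java
import java.util.List;
import java.util.Optional;

// Stand-in for a reactive stream type; the names here are illustrative only.
class ToyChoir<T> {
    private final List<T> items;

    ToyChoir(List<T> items) {
        this.items = items;
    }

    // first() narrows the stream to its first element.
    ToyChoir<T> first() {
        return new ToyChoir<>(items.isEmpty() ? List.of() : List.of(items.get(0)));
    }

    // block() waits for the stream to complete and yields the element, if any.
    Optional<T> block() {
        return items.isEmpty() ? Optional.empty() : Optional.of(items.get(0));
    }

    List<T> all() {
        return items;
    }
}

public class ReactivePatternSketch {
    public static void main(String[] args) {
        // A fake model "response" arriving as chunks.
        ToyChoir<String> stream = new ToyChoir<>(List.of("Hello", "!", " I am DeepSeek"));

        // Normal call: collapse the stream to a single blocking result.
        System.out.println(stream.first().block().get()); // Hello

        // Streaming call: return the stream itself and let the caller consume chunks.
        System.out.println(stream.all());
    }
}
```

The key design point is that the reactive stream is the primary result; a blocking call is just one way of consuming it.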

Configure DeepSeek

In the example's configuration file under resources/, configure your DeepSeek API key, API address, and model name. The example configuration is as follows:

Here you can use the SiliconFlow platform, which gives everyone a certain amount of free credit. After registering an account, create an API key. The configuration example here uses that platform's information; just replace api-key with the API key you created.

fel:
  openai:
    api-base: '/v1'
    api-key: 'your-api-key'
example:
  model: 'deepseek-ai/DeepSeek-R1'

Start the program

  1. After the configuration is complete, you can start your application. Here you can start DemoApplication directly from IDEA.
  2. When you see the following information in the console, the application has started successfully.
Start netty http server successfully.

Experience your results

Normal call

Enter the sample request address in the browser, for example: http://localhost:8080/ai/example/chat?query=Hello, DeepSeek. You may see a response similar to the following:

{
    "content": "<think>\n\n</think>\n\nHello! I am DeepSeek-R1, an intelligent assistant developed by DeepSeek, and I will do my best to help you. Is there anything I can help you with?",
    "toolCalls": []
}
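A browser encodes the query string for you, but if you call this endpoint from code, note that the query "Hello, DeepSeek" contains a comma and a space and should be URL-encoded first. A small self-contained sketch using the JDK's standard URLEncoder:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ChatRequestUrl {
    public static void main(String[] args) {
        String query = "Hello, DeepSeek";
        // Browsers encode query parameters automatically; most HTTP clients do not.
        String encoded = URLEncoder.encode(query, StandardCharsets.UTF_8);
        String url = "http://localhost:8080/ai/example/chat?query=" + encoded;
        System.out.println(url);
        // http://localhost:8080/ai/example/chat?query=Hello%2C+DeepSeek
    }
}
```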
Streaming call

Enter the sample request address in the browser, for example: http://localhost:8080/ai/example/chat-stream?query=Hello, DeepSeek. You may see a response similar to the following:

data:{"content":"<think>","toolCalls":[]}

data:{"content":"\n\n","toolCalls":[]}

data:{"content":"</think>","toolCalls":[]}

data:{"content":"\n\n","toolCalls":[]}

data:{"content":"Hello","toolCalls":[]}

data:{"content":"!","toolCalls":[]}

data:{"content":"I am","toolCalls":[]}

data:{"content":"Deep","toolCalls":[]}

data:{"content":"Se","toolCalls":[]}

data:{"content":"ek","toolCalls":[]}

...
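Each data: line carries one chunk of the answer, and a client reassembles the full text by concatenating the content fields. Here is a minimal sketch using plain string handling; it assumes the simple payload shape shown above (a real client would use an SSE client and a JSON parser, and would also unescape sequences like \n):

```java
import java.util.List;

public class SseReassembly {
    // Extracts the value of the "content" field from one data line.
    // Not a general JSON parser; it only handles the flat shape shown above.
    static String contentOf(String line) {
        String marker = "\"content\":\"";
        int start = line.indexOf(marker) + marker.length();
        int end = line.indexOf('"', start);
        return line.substring(start, end);
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
                "data:{\"content\":\"Hello\",\"toolCalls\":[]}",
                "data:{\"content\":\"!\",\"toolCalls\":[]}",
                "data:{\"content\":\"I am\",\"toolCalls\":[]}");
        StringBuilder answer = new StringBuilder();
        for (String line : lines) {
            answer.append(contentOf(line));
        }
        System.out.println(answer); // Hello!I am
    }
}
```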

🌟 What are the advantages of the FEL framework?

The example above is only our most basic capability! Cooler ways to use FEL are waiting for you to explore. In the coming period, we will publish a series of articles to help you get to know us better. 😉

1. Intuitive orchestration

The FEL framework provides intuitive orchestration methods that help developers easily build complex application logic. Whether it is a simple dialogue system or a complex multi-task workflow, FEL enables efficient application orchestration through concise configuration and code.

For example, the orchestration code for a large-model application that combines a knowledge base with a large model might look like this:

AiProcessFlow<String, String> smartAssistantFlow = AiFlows.<String>create()
     .map(query -> Tip.fromArgs("query", query)) // Convert user input to the internal format
     .retrieve(new DefaultVectorRetriever(vectorStore)) // Retrieve related information
     .generate(new ChatFlowModel(chatModel, chatOption)) // Call the large model to generate answers
     .format(new JsonOutputParser(serializer, ...)) // Format output; the target type argument is elided in the original
     .close();

2. Rich large model operation primitives

The FEL framework has a rich set of built-in large-model operation primitives, including:

  • RAG retrieval (retrieve): quickly extract relevant information from massive data.
  • Prompt template (prompt): quickly build structured prompts through predefined templates.
  • Generation (generate): seamlessly connect to large models such as DeepSeek for intelligent dialogue and generation.
  • Memory (memory): supports multi-turn conversation memory to improve user experience.
  • Agent (delegate): decompose and execute complex tasks through agents.

These operation primitives, along with a set of general flow operation primitives, give developers powerful integration and chaining capabilities. With these, you can easily handle all kinds of smart application scenarios! 🚀
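To make the division of labor concrete, the chain retrieve → prompt → generate can be pictured as plain function composition. This is a toy sketch with stand-in lambdas, not FEL's actual types (real FEL primitives operate on flow nodes and model objects, not bare strings):

```java
import java.util.function.Function;

public class PrimitiveChainSketch {
    // Chains stand-ins for the three primitives described above.
    static String run(String query) {
        Function<String, String> retrieve = q -> q + " | context: FEL orchestrates large-model apps";
        Function<String, String> prompt = enriched -> "Answer using the context: " + enriched;
        Function<String, String> generate = p -> "[model answer for] " + p;
        return retrieve.andThen(prompt).andThen(generate).apply(query);
    }

    public static void main(String[] args) {
        System.out.println(run("What is FEL?"));
    }
}
```

Each stage consumes the previous stage's output, which is exactly the shape the AiFlows orchestration above expresses declaratively.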

3. Flexible scalability

The FEL framework is flexible by design: all of our operation primitives are built against interfaces (this is also the core idea of our FIT programming framework). If you have custom functionality, you can easily integrate it and create your own smart applications. 🛠️
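Because each primitive is defined against an interface, plugging in your own behavior amounts to implementing that interface. Here is a toy sketch; the Retriever interface below is invented for illustration, and FEL's real retriever abstraction has a different shape:

```java
import java.util.List;

public class CustomRetrieverSketch {
    // Hypothetical minimal interface, for illustration only.
    interface Retriever {
        List<String> retrieve(String query);
    }

    // A custom implementation, e.g. backed by an in-house keyword search.
    static class KeywordRetriever implements Retriever {
        private final List<String> documents;

        KeywordRetriever(List<String> documents) {
            this.documents = documents;
        }

        @Override
        public List<String> retrieve(String query) {
            // Case-insensitive substring match over the document store.
            return documents.stream()
                    .filter(doc -> doc.toLowerCase().contains(query.toLowerCase()))
                    .toList();
        }
    }

    public static void main(String[] args) {
        Retriever retriever = new KeywordRetriever(
                List.of("FEL is a reactive framework", "DeepSeek is a large model"));
        System.out.println(retriever.retrieve("deepseek")); // [DeepSeek is a large model]
    }
}
```

The orchestration layer only depends on the interface, so swapping in a custom implementation requires no changes to the flow itself.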

🌈 Future Outlook: Unlimited Possibilities of Smart Applications

We believe the FEL framework will become your capable assistant for exploring the intelligent world. Through continuous technical innovation and optimization, we will keep expanding the framework's functionality and providing easier-to-use, richer operations to help every developer stand out in the intelligent era.

In the future, the FEL framework will support more large-model integrations and provide stronger orchestration capabilities, helping you build more intelligent and efficient applications. Whether for enterprise-level solutions or personal projects, FEL will provide you with all-round support. 🌟

Project address: Github: FIT-Framework GitCode: FIT-Framework

A question to ponder: if your team adopted FIT, which historical pain points would you use it to solve? Feel free to discuss in the comments!

Technicians, speak with code and think with architecture

Follow us to explore more "elegant decoupling" engineering practices! 🛠️