AI search is hot these days with Perplexity, Secret Tower AI, MindSearch, Perplexica, memfree, khoj, and more.
If you have used a large language model, you have probably run into this limitation: it cannot access the latest information on the Internet, so its answers may be out of date. This problem can be solved by combining an LLM with a search engine.
Take a simple project I open-sourced earlier as an example. If you ask a typical large language model about it directly, it doesn't know:
Compare that with the answers from tools that can access the web:
Perplexity
khoj
Kimi
So how do we achieve a similar effect ourselves?
First, let's take a look at what my own implementation produces:
Source code on GitHub: https://github.com/Ming-jiayou/SimpleAISearch
If this is of interest, read on.
Ideas for implementation
Essentially it's LLM + search engine.
First of all, you need function calling, which I explained in a previous article. This post mainly introduces the idea behind the implementation; the source code is open source, so if you are interested you can read the specific code yourself.
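To make this concrete, here is a minimal sketch of automatic function calling, assuming Semantic Kernel and an OpenAI-compatible endpoint. The model ID, the way the API key is read, and the plugin name are placeholders; the actual project may be wired up differently:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Build a kernel against an OpenAI-compatible chat model.
// Model ID and API key source are placeholders.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("API_KEY")!);
var kernel = builder.Build();

// Register the search plugin (sketched further below).
kernel.Plugins.AddFromType<SearchPlugin>("SearchPlugin");

// Let the model decide on its own when to call plugin functions.
var settings = new OpenAIPromptExecutionSettings
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var result = await kernel.InvokePromptAsync(
    "What is SimpleAISearch?", new KernelArguments(settings));
Console.WriteLine(result);
```

With `AutoInvokeKernelFunctions`, the model emits a tool call, Semantic Kernel invokes the matching plugin function, and the function's return value is fed back to the model automatically.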
Next, add code to the plugin that calls a search engine; I chose DuckDuckGo as the search engine here.
When execution starts, the LLM determines that this function needs to be called, with the user's question as the argument:
The function looks roughly like the sketch below (see the repository for the exact code):
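In this sketch, the class name, the use of DuckDuckGo's Instant Answer endpoint, and the JSON parsing are my assumptions; the actual project may query DuckDuckGo differently:

```csharp
using System;
using System.ComponentModel;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public class SearchPlugin
{
    private static readonly HttpClient Http = new();

    // Sketch only: queries DuckDuckGo's Instant Answer API.
    // The real project may hit a different endpoint or parse HTML results.
    [KernelFunction, Description("Searches the web and returns text relevant to the query.")]
    public async Task<string> SearchAsync(
        [Description("The user's question")] string query)
    {
        var url = "https://api.duckduckgo.com/?q="
                  + Uri.EscapeDataString(query)
                  + "&format=json&no_html=1";
        var json = await Http.GetStringAsync(url);

        using var doc = JsonDocument.Parse(json);
        var text = doc.RootElement.GetProperty("AbstractText").GetString();
        return string.IsNullOrWhiteSpace(text)
            ? "No instant answer found for this query."
            : text;
    }
}
```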
The search engine finds relevant content:
Then the LLM is asked to produce an answer based on the retrieved information:
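Continuing the sketch above, this grounding step can be as simple as a prompt like the following; the wording and the variables `query` and `searchResults` are placeholders:

```csharp
// Continues the earlier sketch: 'kernel' is the Kernel built above,
// 'query' is the user's question, and 'searchResults' holds the text
// returned by the search plugin. The prompt wording is a placeholder.
var prompt = $"""
    Answer the question using only the search results below.
    If the results do not contain the answer, say you don't know.

    Question: {query}

    Search results:
    {searchResults}
    """;
var grounded = await kernel.InvokePromptAsync(prompt);
Console.WriteLine(grounded);
```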
Currently the result is summarized before being displayed in the UI; you could also change it to show the results without summarizing.
That's the basic implementation idea.
Quick Experience
Build from source
As with my previous LLM projects, just select the platform you are using and fill in your API key.
Use a prebuilt release
I've published two builds on GitHub: one framework-dependent and one self-contained:
After downloading and unzipping, fill in your API key in appsettings.json and it's ready to use.
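For reference, the configuration file is probably shaped something like this; the section and key names below are hypothetical, so check the actual appsettings.json in the release:

```json
{
  // Hypothetical keys; check the real appsettings.json in the release.
  "LLM": {
    "Platform": "OpenAI",
    "Endpoint": "https://api.openai.com/v1",
    "ApiKey": "your-api-key-here",
    "ModelId": "gpt-4o-mini"
  }
}
```

(.NET's configuration loader skips `//` comments in appsettings.json, so a comment like the one above is legal there.)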