langchain_chatchat + ollama: deploying a local knowledge base, web search queries, and queries against Oracle database data
There's actually quite a lot involved, so I'll try to keep this short.
Preparation:
- Deploy ollama and pull the models
- Deploy langchain_chatchat
- Deploy the Oracle database
- Initial configuration-file tweaks for langchain-chatchat
- Running langchain-chatchat
- Basic langchain-chatchat operations
- langchain-chatchat web search queries
- Connecting langchain-chatchat to the Oracle database and querying its contents
- Important!!! The most important!!!
Preparation:
Deploy ollama and pull qwen2.5:14b and quentinz/bge-large-zh-v1.5:latest
Deploy langchain_chatchat
Deploy the Oracle database
Deploying ollama and pulling models
You can refer to the article below:
/jokingremarks/p/18151827
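The models themselves are pulled with `ollama pull qwen2.5:14b` and `ollama pull quentinz/bge-large-zh-v1.5:latest`. As a sanity check afterwards, ollama's local REST API lists the pulled models at /api/tags; here is a small sketch (the endpoint and JSON shape follow ollama's documented API, but the helper name and the live-check line at the bottom are my own):

```python
import json
from urllib.request import urlopen  # only needed for the live check at the bottom

REQUIRED_MODELS = {"qwen2.5:14b", "quentinz/bge-large-zh-v1.5:latest"}

def missing_models(tags_json: bytes, required=frozenset(REQUIRED_MODELS)):
    """Return the required model names absent from an ollama /api/tags response."""
    present = {m["name"] for m in json.loads(tags_json).get("models", [])}
    return sorted(required - present)

# With ollama running on its default port, check after pulling:
# print(missing_models(urlopen("http://localhost:11434/api/tags").read()))
```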
Deploy langchain_chatchat
Langchain_chatchat's GitHub path: /chatchat-space/Langchain-Chatchat
Quickly create a venv virtual environment using VS Code.
Then install the Langchain-Chatchat Python library directly into that environment.
Note: this only works on Python 3.8-3.11; anything else will throw an error!
Langchain-Chatchat is distributed as a Python library; install it with: pip install langchain-chatchat -U
If you want to use Xinference as the backend for Langchain-Chatchat, the recommended install is: pip install "langchain-chatchat[xinference]" -U
In this article we use ollama for local model calls, so Xinference is not needed.
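Since the 3.8-3.11 requirement is easy to trip over, here is a trivial guard you can run before installing (a sketch; the function name is mine):

```python
import sys

def python_version_ok(version=sys.version_info):
    """Langchain-Chatchat 0.3.x installs cleanly only on Python 3.8-3.11."""
    return (3, 8) <= (version[0], version[1]) <= (3, 11)

if __name__ == "__main__":
    print("OK to install" if python_version_ok() else "Unsupported Python, use 3.8-3.11")
```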
Deploy the Oracle database
I downloaded and installed Oracle locally; the version is Oracle 19c, and there are plenty of installation tutorials online. Remember to create a database; mine here is named orcl.
Initial configuration-file tweaks for langchain-chatchat:
First things first, adjust model_settings.yaml
Replace DEFAULT_LLM_MODEL and DEFAULT_EMBEDDING_MODEL with the model names downloaded from ollama; here we use qwen2.5:14b as the LLM and quentinz/bge-large-zh-v1.5:latest as the embedding model.
# LLM name chosen by default
DEFAULT_LLM_MODEL: qwen2.5:14b
# Embedding name chosen by default
DEFAULT_EMBEDDING_MODEL: quentinz/bge-large-zh-v1.5:latest
In the MODEL_PLATFORMS section, keep only the ollama entry and update its model lists:
llm_models:
- qwen2.5:14b
embed_models:
- quentinz/bge-large-zh-v1.5:latest
Running langchain-chatchat:
Details can be found in the documentation:/chatchat-space/Langchain-Chatchat
It's really just three steps.
Perform initialization
chatchat init
Initializing the Knowledge Base
chatchat kb -r
Start the project
chatchat start -a
The browser usually opens automatically at http://127.0.0.1:8501/
Basic langchain-chatchat operations:
Model conversation: the most basic chat mode; when the agent is enabled you can choose different tools for the conversation.
RAG conversation: lets you select different scenarios, including knowledge base Q&A, file conversation, and search engine Q&A.
Knowledge base Q&A answers using the files under the project path; some sample files are included, and you can upload your own!
File conversation answers questions based on the content of an uploaded file.
The search engine conversation is covered later; it requires another tweak to the configuration file.
Knowledge base management: add and delete knowledge bases, and rebuild the vector store for a project's knowledge base.
langchain-chatchat web search queries:
If you use duckduckgo as the search engine you may need a proxy to reach it; sort that out yourself!
Install duckduckgo-search first.
pip install -U duckduckgo-search
Set search_engine_name of search_internet in tool_settings.yaml to duckduckgo
If you want to query weather or map-related information, you can also configure the Gaode (Amap) tool; you can apply for an API key directly from Gaode, which is easy.
Change DEFAULT_SEARCH_ENGINE in kb_settings.yaml to duckduckgo as well
After restarting the project, you can use the search engine conversation.
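Putting the two changes together, the relevant fragments look roughly like this. Only the two key names and the duckduckgo value come from this article; the surrounding structure is assumed from the default config files and may differ in your version:

```yaml
# tool_settings.yaml
search_internet:
  search_engine_name: duckduckgo

# kb_settings.yaml
DEFAULT_SEARCH_ENGINE: duckduckgo
```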
Connecting langchain-chatchat to the Oracle database and querying its contents:
Official Documentation:/chatchat-space/Langchain-Chatchat/blob/master/docs/install/README_text2sql.md
First, find the text2sql section in tool_settings.yaml and modify it.
There are a few things to keep in mind:
For the Oracle connection I am using oracledb, so you need to install it:
python -m pip install oracledb
table_comments are hints for the model: if you find the generated SQL keeps picking the wrong table or column, add descriptions here and the accuracy improves greatly!
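For reference, a text2sql fragment might look like the sketch below. Only the oracledb driver and the table_comments idea come from this article; the key names follow the official text2sql doc linked above, and the connection string, table name, and comment text are made-up examples for an orcl service (using SQLAlchemy's oracle+oracledb dialect):

```yaml
# tool_settings.yaml -- text2sql section (sketch; adapt to your setup)
text2sql:
  # SQLAlchemy URL using the python-oracledb driver; user/password/host are placeholders
  sqlalchemy_connect_str: "oracle+oracledb://scott:tiger@127.0.0.1:1521/?service_name=orcl"
  read_only: true                  # block model-generated INSERT/UPDATE/DELETE
  top_k: 50
  table_names: ["EMP"]             # hypothetical table
  table_comments:
    EMP: "Employee roster; SAL is the monthly salary column"   # hints that steer the model
```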
Important!!! The most important!!!
Because of Oracle's special syntax, the langchain source code has to be modified.
In your environment, find /envs/chat_0.3.1/lib/python3.11/site-packages/langchain_experimental/sql/
It does some processing of the generated SQL there; these are the cases I've hit so far, all of which need the extra splitting below before things work:
if "sql" in sql_cmd:
    sql_cmd = sql_cmd.split("sql")[-1].strip()  # split on "sql" and keep the last part, to strip a leading ```sql fence
if "`" in sql_cmd:
    sql_cmd = sql_cmd.split("`")[0].strip()  # split on the backtick and keep the first part, to strip a trailing ``` fence
if "LIMIT" in sql_cmd:
    sql_cmd = sql_cmd.split("LIMIT")[0].strip()  # drop the LIMIT clause, which Oracle does not support
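Collected into one helper, the three filters behave like this (a sketch of the same logic, not the upstream code; note the naive split also fires if "sql" or a backtick appears inside the statement itself):

```python
def clean_sql_cmd(sql_cmd: str) -> str:
    """Strip markdown fences and LIMIT clauses from model-generated SQL."""
    if "sql" in sql_cmd:
        # keep everything after the last "sql", dropping a leading ```sql fence
        sql_cmd = sql_cmd.split("sql")[-1].strip()
    if "`" in sql_cmd:
        # keep everything before the first backtick, dropping a trailing ``` fence
        sql_cmd = sql_cmd.split("`")[0].strip()
    if "LIMIT" in sql_cmd:
        # Oracle has no LIMIT clause (it uses FETCH FIRST n ROWS ONLY)
        sql_cmd = sql_cmd.split("LIMIT")[0].strip()
    return sql_cmd
```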
Then re-run the project, enable the agent, and select the database conversation; enter what you want to look up, and the generated SQL and query results appear in the terminal.
You can see the answer matches a query run directly against the database.
It still doesn't seem very friendly to Oracle databases, though; sometimes strange errors are reported.
That's all.