This article describes how to use the model service DashScope to generate multimodal vectors and write them into the vector retrieval service DashVector for vector search.
The model service DashScope makes the capabilities of models across many modalities easily accessible to AI developers through a flexible, easy-to-use model API service. Through the DashScope API, developers can not only directly integrate the powerful capabilities of large models, but also fine-tune and train models to achieve model customization.
Prerequisites
- DashVector:
  - A Cluster has been created
  - An API-KEY has been obtained
  - The latest version of the SDK has been installed
- DashScope:
  - The service has been enabled and an API-KEY has been obtained
  - The latest version of the SDK has been installed
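Note: both SDKs are published on PyPI, so installing or upgrading them with pip install dashscope and pip install dashvector (optionally with the -U flag) is typically all that is needed before running the example below.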
ONE-PEACE multimodal vector representation
Overview
ONE-PEACE is a general-purpose representation model spanning the vision, language, and audio modalities. It achieves new SOTA performance on several tasks, including semantic segmentation, audio-text retrieval, audio classification, and visual grounding, and also delivers leading results on video classification, image classification, image-text retrieval, and classic multimodal benchmarks.
Note
For more information on the DashScope ONE-PEACE multimodal vector representation model, see: ONE-PEACE multimodal vector representation.
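As a minimal sketch of what a single ONE-PEACE embedding call looks like (assuming the DashScope Python SDK and a valid API-KEY; the example text is arbitrary, and the full combined example follows in the usage example below):
Python
import dashscope
from dashscope import MultiModalEmbedding

dashscope.api_key = '{your-dashscope-api-key}'

# Embed a single piece of text; image and audio inputs use the same call,
# with {'image': ...} or {'audio': ...} entries in the input list instead.
resp = MultiModalEmbedding.call(
    model=MultiModalEmbedding.Models.multimodal_embedding_one_peace_v1,
    input=[{'text': 'an example sentence'}],
    auto_truncation=True
)
if resp.status_code == 200:
    vector = resp.output['embedding']  # a 1536-dimensional float vector
    print(len(vector))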
Usage example
Note
To run the example code below, replace the following placeholders (an optional environment-variable sketch follows this list):
- Replace '{your-dashvector-api-key}' in the example with your DashVector API-KEY
- Replace '{your-dashvector-cluster-endpoint}' in the example with your DashVector Cluster Endpoint
- Replace '{your-dashscope-api-key}' in the example with your DashScope API-KEY
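As an optional sketch, the same values can be read from environment variables instead of being hard-coded. The variable names below (DASHSCOPE_API_KEY, DASHVECTOR_API_KEY, DASHVECTOR_ENDPOINT) are illustrative choices for this sketch, not names required by either SDK:
Python
import os
import dashscope
from dashvector import Client

# Read credentials from the environment rather than embedding them in source code.
dashscope.api_key = os.environ['DASHSCOPE_API_KEY']
client = Client(
    api_key=os.environ['DASHVECTOR_API_KEY'],
    endpoint=os.environ['DASHVECTOR_ENDPOINT']
)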
Python
import dashscope
from dashscope import MultiModalEmbedding
from dashvector import Client
dashscope.api_key = '{your-dashscope-api-key}'
# Call the DashScope ONE-PEACE model to embed content from each modality into a vector
def generate_embeddings(text: str = None, image: str = None, audio: str = None):
    input = []
    if text:
        input.append({'text': text})
    if image:
        input.append({'image': image})
    if audio:
        input.append({'audio': audio})
    result = MultiModalEmbedding.call(
        model=MultiModalEmbedding.Models.multimodal_embedding_one_peace_v1,
        input=input,
        auto_truncation=True
    )
    if result.status_code != 200:
        raise Exception(f"ONE-PEACE failed to generate embedding of {input}, result: {result}")
    return result.output["embedding"]
# Create a DashVector Client
client = Client(
api_key='{your-dashvector-api-key}',
endpoint='{your-dashvector-cluster-endpoint}'
)
# Create a DashVector Collection (1536 dimensions, matching ONE-PEACE embeddings)
rsp = client.create('one-peace-embedding', 1536)
assert rsp
collection = client.get('one-peace-embedding')
assert collection
# Insert vectors into DashVector
collection.insert(
    [
        ('ID1', generate_embeddings(text='Alibaba Cloud vector search service DashVector is one of the best vector databases in terms of performance and cost-effectiveness')),
        ('ID2', generate_embeddings(image='/images/256_1.png')),
        ('ID3', generate_embeddings(audio='/audios/')),
        ('ID4', generate_embeddings(
            text='Alibaba Cloud vector search service DashVector is one of the best vector databases in terms of performance and cost-effectiveness',
            image='/images/256_1.png',
            audio='/audios/'
        ))
    ]
)
# Vector search
docs = collection.query(
    generate_embeddings(text='The best vector database')
)
print(docs)
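Because text, image, and audio are all embedded into the same representation space, the same collection can also be queried with a non-text embedding. A minimal sketch of an image query follows; the image path is a placeholder to replace with your own file or URL, and topk limits the number of returned results:
Python
# Cross-modal retrieval: embed an image and search the same collection with it
image_docs = collection.query(
    generate_embeddings(image='{your-image-url}'),
    topk=5
)
print(image_docs)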
Related Best Practices
- DashVector + DashScope: Upgrading Multimodal Retrieval