Prepare the following models for vector encoding:

- sentence-transformers/all-MiniLM-L6-v2
- BAAI/bge-large-en-v1.5
- openai/clip-vit-base-patch32

For embedding model ...
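A minimal sketch of how these three models could be loaded and used for vector encoding, assuming the `sentence-transformers` and `transformers` libraries and that the weights are pulled from the Hugging Face Hub on first use; the example text is hypothetical:

```python
from sentence_transformers import SentenceTransformer
from transformers import CLIPModel, CLIPProcessor

# Text embedding models
minilm = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
bge = SentenceTransformer("BAAI/bge-large-en-v1.5")

# CLIP for image (and text) embeddings
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Encode some example text into vectors
minilm_vecs = minilm.encode(["an example sentence"])  # shape: (1, 384)
bge_vecs = bge.encode(["an example sentence"])        # shape: (1, 1024)
```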
Support PIL image input (a file path) instead of Base64 encoding. For example, when using models with the transformers library, I provide images this way: `img = Image.open(path).convert("RGB")`, which ...
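A minimal sketch of the requested path/PIL flow next to a Base64 round trip, using the transformers CLIP API; `path` is a hypothetical local image file:

```python
import base64
import io

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

path = "example.jpg"  # hypothetical local image path

# Requested: load the image directly from a path as a PIL image
img = Image.open(path).convert("RGB")
inputs = processor(images=img, return_tensors="pt")
image_vec = model.get_image_features(**inputs)  # shape: (1, 512)

# Base64 alternative: encode the file, then decode it back into a PIL image before use
with open(path, "rb") as f:
    b64 = base64.b64encode(f.read()).decode("ascii")
decoded = Image.open(io.BytesIO(base64.b64decode(b64))).convert("RGB")
```

The PIL/path route avoids the extra encode/decode step and the memory overhead of carrying the image as a Base64 string.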