How to use zeroentropy/zembed-1-embedding with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("zeroentropy/zembed-1-embedding")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]

embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [3, 3]
```
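In sentence-transformers, `model.similarity` defaults to cosine similarity between the encoded vectors. As a minimal sketch of what that pairwise computation does (using toy vectors in place of real model embeddings, so no model download is needed):

```python
import numpy as np

def cosine_similarity_matrix(embeddings: np.ndarray) -> np.ndarray:
    # Normalize each row to unit length, then take pairwise dot products;
    # this mirrors the default cosine similarity used by model.similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / norms
    return normalized @ normalized.T

# Toy 2-D vectors standing in for real sentence embeddings.
emb = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
sims = cosine_similarity_matrix(emb)
print(sims.shape)  # (3, 3)
```

Each diagonal entry is 1 (a vector compared with itself), and off-diagonal entries range over [-1, 1], with higher values for more similar sentences.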