Sentence Similarity
sentence-transformers
Safetensors
English
Chinese
multilingual
qwen3
feature-extraction
embedding
text-embedding
retrieval
text-embeddings-inference
Instructions to use Octen/Octen-Embedding-8B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- sentence-transformers
How to use Octen/Octen-Embedding-8B with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Octen/Octen-Embedding-8B")

sentences = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
embeddings = model.encode(sentences)

similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
- Notebooks
- Google Colab
- Kaggle
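By default, `model.similarity` computes cosine similarity, so the `[4, 4]` result above holds pairwise cosine scores for the four sentences. A minimal sketch of that computation in plain Python, using small mock vectors in place of the model's real embeddings (no model download needed), just to show what each cell of the matrix is:

```python
import math

# Mock embeddings standing in for model.encode(sentences); the real model
# returns much higher-dimensional vectors, but the similarity math is the same.
embeddings = [
    [1.0, 0.0, 1.0],
    [1.0, 0.2, 0.9],
    [0.9, 0.1, 1.0],
    [0.0, 1.0, 0.1],
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pairwise matrix, analogous to model.similarity(embeddings, embeddings):
# cell [i][j] is the cosine similarity between sentence i and sentence j.
similarities = [[cosine(a, b) for b in embeddings] for a in embeddings]
print(len(similarities), len(similarities[0]))  # 4 4
```

The matrix is symmetric with ones on the diagonal, since every vector has cosine similarity 1 with itself.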
Octen Reranker models
#3
by vvekthkr - opened
I am curious about the missing reranker model. Should we use the correspondingly sized Qwen3 rerankers if we use Octen-generated embeddings for hybrid search?
Yep, that works. And the reranker model doesn’t have to be the same size as the embedding model — they can be chosen independently.
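The retrieve-then-rerank flow this answer describes can be sketched independently of any particular model: the embedding model scores every document cheaply, and a separate reranker rescores only the top candidates. The `embed` and `rerank_score` functions below are mock stand-ins; in practice `embed` would be `SentenceTransformer("Octen/Octen-Embedding-8B").encode(...)` and `rerank_score` would come from a reranker model such as a Qwen3 reranker (whose actual API may differ).

```python
import math
import random

random.seed(0)

# Mock bi-encoder: stand-in for model.encode(texts) from sentence-transformers.
def embed(texts):
    return [[random.gauss(0, 1) for _ in range(8)] for _ in texts]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Mock cross-encoder: stand-in for a reranker model, which scores each
# (query, doc) pair jointly instead of comparing precomputed vectors.
def rerank_score(query, doc):
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = [
    "That is a happy person",
    "That is a happy dog",
    "That is a very happy person",
    "Today is a sunny day",
]
query = "a very happy person"

# Stage 1: cheap dense retrieval with the embedding model, keep top 3.
doc_emb = embed(docs)
q_emb = embed([query])[0]
candidates = sorted(range(len(docs)),
                    key=lambda i: cosine(doc_emb[i], q_emb),
                    reverse=True)[:3]

# Stage 2: rescore only those candidates with the (more expensive) reranker.
reranked = sorted(candidates,
                  key=lambda i: rerank_score(query, docs[i]),
                  reverse=True)
print([docs[i] for i in reranked])
```

Because the two stages only exchange a candidate list, the embedding model and the reranker can indeed be sized and swapped independently, as noted above.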
Thanks for the input
vvekthkr changed discussion status to closed