Comment by keeeba 5 days ago

What use cases do you see for the 270M's embeddings? Should we stick to token embeddings, or can we meaningfully pool them for sentence/document embeddings?

Do we need to fine-tune for the embeddings to be meaningful at the sentence/document level?
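
For context, "pooling" here usually means averaging the per-token hidden states into a single vector, masking out padding. A minimal sketch of that, assuming the model loads through Hugging Face transformers' AutoModel (the model id and the `embed` helper are illustrative, not an official API):

```python
# Minimal sketch: mean-pool token embeddings into a sentence embedding.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "google/gemma-3-270m"  # illustrative model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

def embed(texts):
    # Tokenize with padding so we get an attention mask to pool with.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (batch, seq, dim)
    # Zero out padding positions, then average the remaining token embeddings.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

vecs = embed(["What is mean pooling?", "How do sentence embeddings work?"])
print(vecs.shape)  # (2, hidden_dim)
```

Whether the resulting vectors are useful without contrastive fine-tuning is exactly the open question here; decoder LMs pooled this way often need a fine-tuning step to give good sentence-level similarity.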