Comment by nostrebored 5 days ago

Depends immensely on use case: what are your compute limitations? Are you fine with remote code? Are you doing symmetric or asymmetric retrieval? Do you need support for one language or many? Do you need to work on just text, or on audio, video, and images too? Are you working in a specific domain?

A lot of people pick models based purely on one or two benchmarks and wind up viewing their embedding-based projects as failures.

If you do answer some of those I’d be happy to give my anecdotal feedback :)

ryeguy_24 4 days ago

Sorry, I wasn’t clear. I was asking about utility models/libraries that compute things like meaning similarity using not just token embeddings but attention as well. I’m really interested in finding a good utility that leverages the transformer itself to compute “meaning similarity” between two texts.
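For reference, the common baseline here is to pool a transformer's token embeddings (respecting the attention mask) and compare the pooled vectors with cosine similarity; cross-encoder models go further and run attention over both texts jointly. A minimal sketch of the pooling-and-comparison step, with toy arrays standing in for a model's last hidden state (the shapes and values are illustrative, not from any specific model):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, ignoring padding positions.

    In practice token_embeddings would be a transformer's last hidden
    state (shape [seq_len, dim]) and attention_mask marks real tokens.
    """
    mask = attention_mask[:, None].astype(float)  # [seq_len, 1]
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for two sentences' token embeddings (normally model output).
emb_a = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # last row = padding
emb_b = np.array([[1.0, 1.0], [1.0, 1.0]])
mask_a = np.array([1, 1, 0])
mask_b = np.array([1, 1])

sim = cosine_similarity(mean_pool(emb_a, mask_a), mean_pool(emb_b, mask_b))
print(round(sim, 3))
```

If you want attention across the pair rather than independent embeddings, libraries like sentence-transformers ship cross-encoder models for exactly this (scoring a text pair jointly); that is closer to what the comment is asking for, at the cost of not being cacheable per-text.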