Comment by authorfly
You are missing the forest for the trees in my point. LLM-based (especially RLHF) embeddings let you do much more and encode far greater context than either "this thing is being used as a potent adjective" or "this thing is a noun similar to that other [abstraction] noun" <-- Word2Vec, or "this thing is similar in terms of the whole sentence when doing retrieval tasks" <-- SBERT.
If you can't see why it is useful that LLM- and RLHF-based embeddings can place "positive charge" and "negative charge" in very different, even opposite, regions of embedding space, while neither Word2Vec nor SBERT can, you don't understand the full utilization possible with embeddings.
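To make that concrete, here's a minimal sketch (my own illustration, not anyone's production setup) probing where this pair lands for a static word-vector baseline versus a sentence model. The model names are just common defaults; checking the LLM/RLHF side of the claim would mean swapping in an API-backed embedding model.

```python
import numpy as np
import gensim.downloader
from sentence_transformers import SentenceTransformer

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Word2Vec-style baseline: a phrase has no vector of its own, so
# average its word vectors (a common crude approximation).
glove = gensim.downloader.load("glove-wiki-gigaword-100")
def avg_vec(phrase):
    return np.mean([glove[w] for w in phrase.split()], axis=0)

print("GloVe:", round(cos(avg_vec("positive charge"),
                          avg_vec("negative charge")), 3))

# Sentence model: one vector for the whole phrase.
sbert = SentenceTransformer("all-MiniLM-L6-v2")
pos, neg = sbert.encode(["positive charge", "negative charge"])
print("SBERT:", round(cos(pos, neg), 3))
```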
Firstly, you can choose what you prefix the text with, such as "Article Topic:" or "Temperature:", to adjust the output of the embedding and the resulting cosine similarities to suit your use case (if you use a word-based embedding, you capture much less than a sentence does for search, retrieval, and many other tasks like categorising).
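A minimal sketch of this prefixing idea, assuming an OpenAI API key in the environment and the openai>=1.0 Python client; the snippet text and prefixes are made up for illustration:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small",
                                    input=texts)
    return [np.array(d.embedding) for d in resp.data]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

snippet = "The kiln was glowing red after hours of firing."  # made-up text

# The same text, framed for two different retrieval intents.
topic_vec, temp_vec = embed([f"Article Topic: {snippet}",
                             f"Temperature: {snippet}"])
query_vec, = embed(["Temperature: very hot"])

print("topic-framed vs query:", round(cos(topic_vec, query_vec), 3))
print("temp-framed  vs query:", round(cos(temp_vec, query_vec), 3))
```

Whether the prefixes actually steer similarity in a useful way is, of course, an empirical question per model.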
Secondly, by default these models are not in as "dumb" a state as the original slew of Word2Vec and GloVe models, which, yes, would score words like "loved" and "hated" as very similar because of their similar use as adjectives, something that caused issues for tasks like semantic classification of reviews. These models encode so much more that they see the difference between "loved" and "hated" as much bigger than that between "loved" and "walk", for example. *This is already a useful step up by default, but almost anyone using RLHF embeddings is embedding sentences to get the best use out of them.*
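If you want to check that "loved"/"hated"/"walk" claim against a concrete model, a short probe like this works (all-MiniLM-L6-v2 is just an illustrative SBERT-family stand-in, not an RLHF model, so treat its numbers as indicative only):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
vecs = model.encode(["loved", "hated", "walk"], convert_to_tensor=True)

sims = util.cos_sim(vecs, vecs)  # 3x3 pairwise cosine similarities
print("loved vs hated:", round(float(sims[0, 1]), 3))
print("loved vs walk :", round(float(sims[0, 2]), 3))
# The claim above predicts the first number is lower than the second.
```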
Your understanding of embeddings is rather flawed if your focus is "they're both English words, they're both words that can be a verb, a noun and an adjective (not that many such words)". Why do embeddings of text in different languages with the same semantic meaning land closer in space than two unrelated English texts? The model has no particular focus on part-of-speech type, and it is ideally suited to embedding sentences, where with every additional token it can produce a more useful embedding.

Your point about correct spelling betrays a misapprehension that these systems are a "look-up". Yes, they are for one word: if you spell that one word wrong (or the token which represents it, for a one-token word), you get a different, and very wrong, place in embedding space. However, when you have multiple tokens, a misspelling moves the embedding very little, because the model becomes adept early on at comprehending misspellings, slang, and other "translation"-like tasks, and at making their effects irrelevant for downstream tasks unless they are useful to keep around. Effective resolution of spelling mistakes is in any case possible with models as small as 2-5GB, as T5 showed back in 2019, and I'd posit that even some sentence-similarity-trained models (e.g. those based on BERT, whose training set contained some spelling errors) treat spelling mistakes in essentially the same way.
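On the misspelling point, a small sketch along these lines (again with an illustrative SBERT-family model and made-up example text) lets you measure how far a typo moves a bare word versus the same word inside a sentence:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Compare how far the embedding moves when one word is misspelled:
# first the bare word, then the same word with a sentence of context.
pairs = [
    ("accommodate", "acommodate"),
    ("It was hard to accommodate everyone in the small room.",
     "It was hard to acommodate everyone in the small room."),
]
for clean, typo in pairs:
    a, b = model.encode([clean, typo], convert_to_tensor=True)
    sim = float(util.cos_sim(a, b))
    print(f"{len(clean.split())} word(s) of context -> cosine {sim:.3f}")
```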
I am aware of the embedding options from OpenAI, as I have used them for a long, long time. The original options were each based on the early released models, especially ada and babbage, and though the naming convention is no longer clear, the more recent models are based on RLHF models like ChatGPT. Hence I mention ChatGPT, to make clear to cursory readers that I am not referring to OpenAI's older tier of embedding models based on non-RLHF models.
The tone of your post is really strange and condescending, and I'm not sure why. You made a statement that, in my work, I very often see people make when they first start learning about embeddings (expecting words that we humans see as "opposite" to actually have opposite embeddings), and I corrected it, as that might help other people reading this thread.
> Firstly, you can choose what you prefix the text with, such as "Article Topic:" or "Temperature:", to adjust the output of the embedding and the resulting cosine similarities to suit your use case
As far as LLM-based embeddings go, unless you train the model for this type of format, this is not true at all. In fact, the opposite is true: adding such qualifiers before your text only increases the similarity, since the two texts are, in fact, more similar after such additions. I am aware that instruct-embedding models exist and work, but their performance and flexibility are, in my experience, very limited.
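For what it's worth, this is easy to measure for any given model. A minimal check, using an illustrative sentence-transformers model and made-up texts (the prediction here being that the shared prefix pushes the similarity up):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

a = "The reactor core reached 900 degrees."
b = "The senate passed the budget bill."
prefix = "Article Topic: "

va, vb, pa, pb = model.encode([a, b, prefix + a, prefix + b],
                              convert_to_tensor=True)
print("bare     :", round(float(util.cos_sim(va, vb)), 3))
print("prefixed :", round(float(util.cos_sim(pa, pb)), 3))
```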
As for the rest of your post, I really don't see why you are trying to convince me that LLM-based embeddings have so much more to them than previous models. I am very well aware of this; my work revolves around such new models. I simply corrected a common misconception that you stated, and I don't really care whether you "really think that" or whether you know the truth but just wrote it as an off-hand remark.